idx (int64, 1-56k) | question (string, 15-155 chars) | answer (string, 2-29.2k chars ⌀) | question_cut (string, 15-100 chars) | answer_cut (string, 2-200 chars ⌀) | conversation (string, 47-29.3k chars) | conversation_cut (string, 47-301 chars) |
|---|---|---|---|---|---|---|
8,301 | What does interaction depth mean in GBM? | Previous answer is not correct.
Stumps will have an interaction.depth of 1 (and have two leaves). But interaction.depth=2 gives three leaves.
So:
NumberOfLeaves = interaction.depth + 1 | What does interaction depth mean in GBM? | Previous answer is not correct.
Stumps will have an interaction.depth of 1 (and have two leaves). But interaction.depth=2 gives three leaves.
So:
NumberOfLeaves = interaction.depth + 1 | What does interaction depth mean in GBM?
Previous answer is not correct.
Stumps will have an interaction.depth of 1 (and have two leaves). But interaction.depth=2 gives three leaves.
So:
NumberOfLeaves = interaction.depth + 1 | What does interaction depth mean in GBM?
Previous answer is not correct.
Stumps will have an interaction.depth of 1 (and have two leaves). But interaction.depth=2 gives three leaves.
So:
NumberOfLeaves = interaction.depth + 1 |
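A minimal sketch of why the relation NumberOfLeaves = interaction.depth + 1 holds, under the reading that interaction.depth counts splits in a tree grown one split at a time (the function name is hypothetical, not part of gbm):

```python
def n_leaves(interaction_depth):
    """Leaf count of a tree grown by a sequence of single splits.

    Each split removes one leaf and adds two, a net gain of one leaf,
    so K splits applied to a single-leaf root give K + 1 leaves.
    """
    leaves = 1  # the root starts out as one leaf
    for _ in range(interaction_depth):
        leaves += 1  # each split: -1 leaf, +2 leaves
    return leaves

print(n_leaves(1), n_leaves(2))  # 2 3 -- a stump has two leaves
```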
8,302 | What does interaction depth mean in GBM? | Actually, the previous answers are incorrect.
Let K be the interaction.depth; then the number of nodes N and leaves L (i.e., terminal nodes) are given, respectively, by the following:
$$\begin{align*}
N &= 2^{(K+1)} - 1\\
L &= 2^K
\end{align*}
$$
The previous 2 formulas can easily be demonstrated: a tree of depth K can b... | What does interaction depth mean in GBM? | Actually, the previous answers are incorrect.
Let K be the interaction.depth; then the number of nodes N and leaves L (i.e., terminal nodes) are given, respectively, by the following:
$$\begin{align*}
N | What does interaction depth mean in GBM?
Actually, the previous answers are incorrect.
Let K be the interaction.depth; then the number of nodes N and leaves L (i.e., terminal nodes) are given, respectively, by the following:
$$\begin{align*}
N &= 2^{(K+1)} - 1\\
L &= 2^K
\end{align*}
$$
The previous 2 formulas can easily... | What does interaction depth mean in GBM?
Actually, the previous answers are incorrect.
Let K be the interaction.depth; then the number of nodes N and leaves L (i.e., terminal nodes) are given, respectively, by the following:
$$\begin{align*}
N |
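The formulas above, N = 2^(K+1) − 1 and L = 2^K, describe a complete binary tree of depth K; a quick numeric check (the helper name is hypothetical):

```python
def complete_tree_counts(depth):
    """Node and leaf counts of a complete binary tree of the given depth."""
    leaves = 2 ** depth           # L = 2^K
    nodes = 2 ** (depth + 1) - 1  # N = 2^(K+1) - 1
    return nodes, leaves

for k in range(5):
    n, leaves = complete_tree_counts(k)
    # Consistency check: in any full binary tree, internal nodes = leaves - 1.
    assert n == 2 * leaves - 1

print(complete_tree_counts(2))  # (7, 4)
```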
8,303 | What does interaction depth mean in GBM? | You can try
table(predict(gbm(y ~ ., data = TrainingData, distribution = "gaussian", verbose = FALSE,
                  n.trees = 1, shrinkage = 0.01, bag.fraction = 1, interaction.depth = 1),
              n.trees = 1))
and see that there are only 2 unique predicted values. interaction.depth = 2 will get you 3 distinct predicted values. And convince yours... | What does interaction depth mean in GBM? | You can try
table(predict(gbm(y ~ ., data = TrainingData, distribution = "gaussian", verbose = FALSE,
                  n.trees = 1, shrinkage = 0.01, bag.fraction = 1, interaction.depth = 1),
              n.trees = 1))
and see that there | What does interaction depth mean in GBM?
You can try
table(predict(gbm(y ~ ., data = TrainingData, distribution = "gaussian", verbose = FALSE,
                  n.trees = 1, shrinkage = 0.01, bag.fraction = 1, interaction.depth = 1),
              n.trees = 1))
and see that there are only 2 unique predicted values. interaction.depth = 2 will get you 3 disti... | What does interaction depth mean in GBM?
You can try
table(predict(gbm(y ~ ., data = TrainingData, distribution = "gaussian", verbose = FALSE,
                  n.trees = 1, shrinkage = 0.01, bag.fraction = 1, interaction.depth = 1),
              n.trees = 1))
and see that there |
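The reason table() shows only two values when interaction.depth = 1 is that a stump maps every input to one of two constants. A language-agnostic sketch (threshold and leaf values are hypothetical):

```python
from collections import Counter

def stump_predict(x, threshold=0.5, left=1.0, right=2.0):
    """A depth-1 tree (stump) can only return one of exactly two constants."""
    return left if x <= threshold else right

xs = [0.1, 0.4, 0.6, 0.9, 0.2, 0.8]
preds = [stump_predict(x) for x in xs]
print(Counter(preds))  # two distinct predicted values, like R's table() on the predictions
```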
8,304 | Is there a Project Euler-alike for machine learning? | Though the stakes are higher than for Project Euler, as you've pointed out, Kaggle is an excellent source of data for use in your own experiments. Many of their contests require you to be signed in to access the datasets (for legal agreements and so forth), but if you don't actually finish an entry, there's no penalty... | Is there a Project Euler-alike for machine learning? | Though the stakes are higher than for Project Euler, as you've pointed out, Kaggle is an excellent source of data for use in your own experiments. Many of their contests require you to be signed in t | Is there a Project Euler-alike for machine learning?
Though the stakes are higher than for Project Euler, as you've pointed out, Kaggle is an excellent source of data for use in your own experiments. Many of their contests require you to be signed in to access the datasets (for legal agreements and so forth), but if y... | Is there a Project Euler-alike for machine learning?
Though the stakes are higher than for Project Euler, as you've pointed out, Kaggle is an excellent source of data for use in your own experiments. Many of their contests require you to be signed in t |
8,305 | Is there a Project Euler-alike for machine learning? | UCI is well-known in the machine learning community for their repository of datasets. Many journal articles include results of their techniques on some UCI datasets, so you can try them yourself and see how you do. | Is there a Project Euler-alike for machine learning? | UCI is well-known in the machine learning community for their repository of datasets. Many journal articles include results of their techniques on some UCI datasets, so you can try them yourself and see ho | Is there a Project Euler-alike for machine learning?
UCI is well-known in the machine learning community for their repository of datasets. Many journal articles include results of their techniques on some UCI datasets, so you can try them yourself and see how you do. | Is there a Project Euler-alike for machine learning?
UCI is well-known in the machine learning community for their repository of datasets. Many journal articles include results of their techniques on some UCI datasets, so you can try them yourself and see ho |
8,306 | Is there a Project Euler-alike for machine learning? | How about http://www.ml-class.org/? It has a good introduction and some programming exercises. AFAIK Euler has much more sophisticated examples, but ml-class is still a good beginning.
As it was pointed out in the comments, this course has a new edition: http://jan2012.ml-class.org/# | Is there a Project Euler-alike for machine learning? | How about http://www.ml-class.org/? It has a good introduction and some programming exercises. AFAIK Euler has much more sophisticated examples, but ml-class is still a good beginning.
As it was point | Is there a Project Euler-alike for machine learning?
How about http://www.ml-class.org/? It has a good introduction and some programming exercises. AFAIK Euler has much more sophisticated examples, but ml-class is still a good beginning.
As it was pointed out in the comments, this course has a new edition: http://jan2012.ml-... | Is there a Project Euler-alike for machine learning?
How about http://www.ml-class.org/? It has a good introduction and some programming exercises. AFAIK Euler has much more sophisticated examples, but ml-class is still a good beginning.
As it was point |
8,307 | Can ANOVA be significant when none of the pairwise t-tests is? | Note: There was something wrong with my original example. I stupidly got caught by R's silent argument recycling. My new example is quite similar to my old one. Hopefully everything is right now.
Here's an example I made that has the ANOVA significant at the 5% level but none of the 6 pairwise comparisons are signific... | Can ANOVA be significant when none of the pairwise t-tests is? | Note: There was something wrong with my original example. I stupidly got caught by R's silent argument recycling. My new example is quite similar to my old one. Hopefully everything is right now.
Her | Can ANOVA be significant when none of the pairwise t-tests is?
Note: There was something wrong with my original example. I stupidly got caught by R's silent argument recycling. My new example is quite similar to my old one. Hopefully everything is right now.
Here's an example I made that has the ANOVA significant at t... | Can ANOVA be significant when none of the pairwise t-tests is?
Note: There was something wrong with my original example. I stupidly got caught by R's silent argument recycling. My new example is quite similar to my old one. Hopefully everything is right now.
Her |
8,308 | Can ANOVA be significant when none of the pairwise t-tests is? | Summary: I believe that this is possible, but very, very unlikely. The difference will be small, and if it happens, it's because an assumption has been violated (such as homoscedasticity of variance).
Here's some code that seeks out such a possibility. Note that it increments the seed by 1 each time it runs, so that th... | Can ANOVA be significant when none of the pairwise t-tests is? | Summary: I believe that this is possible, but very, very unlikely. The difference will be small, and if it happens, it's because an assumption has been violated (such as homoscedasticity of variance). | Can ANOVA be significant when none of the pairwise t-tests is?
Summary: I believe that this is possible, but very, very unlikely. The difference will be small, and if it happens, it's because an assumption has been violated (such as homoscedasticity of variance).
Here's some code that seeks out such a possibility. Note... | Can ANOVA be significant when none of the pairwise t-tests is?
Summary: I believe that this is possible, but very, very unlikely. The difference will be small, and if it happens, it's because an assumption has been violated (such as homoscedasticity of variance). |
8,309 | Can ANOVA be significant when none of the pairwise t-tests is? | It's entirely possible:
One or more pairwise t-tests are significant but the overall F-test isn't
The overall F-test is significant but none of the pairwise t-tests is
The overall F-test tests all contrasts simultaneously. As such, it must be less sensitive (less statistical power) to individual contrasts (e.g., a pairwis... | Can ANOVA be significant when none of the pairwise t-tests is? | It's entirely possible:
One or more pairwise t-tests are significant but the overall F-test isn't
The overall F-test is significant but none of the pairwise t-tests is
The overall F-test tests all cont | Can ANOVA be significant when none of the pairwise t-tests is?
It's entirely possible:
One or more pairwise t-tests are significant but the overall F-test isn't
The overall F-test is significant but none of the pairwise t-tests is
The overall F-test tests all contrasts simultaneously. As such, it must be less sensitive ... | Can ANOVA be significant when none of the pairwise t-tests is?
It's entirely possible:
One or more pairwise t-tests are significant but the overall F-test isn't
The overall F-test is significant but none of the pairwise t-tests is
The overall F-test tests all cont |
8,310 | Can ANOVA be significant when none of the pairwise t-tests is? | The smallest p-value of the t-tests depends on the maximum spread of the different group means (so only two means are important).
The p-value of the ANOVA test depends on the variance of all the group means (so all the means are important).
For example, the following two situations have the same maximal difference b... | Can ANOVA be significant when none of the pairwise t-tests is? | The smallest p-value of the t-tests depends on the maximum spread of the different group means (so only two means are important).
The p-value of the ANOVA test depends on the variance of all the grou | Can ANOVA be significant when none of the pairwise t-tests is?
The smallest p-value of the t-tests depends on the maximum spread of the different group means (so only two means are important).
The p-value of the ANOVA test depends on the variance of all the group means (so all the means are important).
For example, ... | Can ANOVA be significant when none of the pairwise t-tests is?
The smallest p-value of the t-tests depends on the maximum spread of the different group means (so only two means are important).
The p-value of the ANOVA test depends on the variance of all the grou |
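This contrast is easy to verify numerically: two sets of group means can share the same maximal spread yet have a different variance (the group means below are hypothetical):

```python
from statistics import pvariance

means_a = [0.0, 0.5, 1.0]  # middle group halfway between the extremes
means_b = [0.0, 0.0, 1.0]  # two of the groups coincide

# Same largest pairwise gap, which drives the smallest t-test p-value:
gap_a = max(means_a) - min(means_a)
gap_b = max(means_b) - min(means_b)
print(gap_a, gap_b)  # 1.0 1.0

# Different variance of the group means, which drives the ANOVA F statistic:
print(pvariance(means_a), pvariance(means_b))
```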
8,311 | Strategy to deal with rare events logistic regression | (1) If you've "full knowledge of a population" why do you need a model to make predictions? I suspect you're implicitly considering them as a sample from a hypothetical super-population—see here & here. So should you throw away observations from your sample? No. King & Zeng don't advocate this:
[...] in fields like i... | Strategy to deal with rare events logistic regression | (1) If you've "full knowledge of a population" why do you need a model to make predictions? I suspect you're implicitly considering them as a sample from a hypothetical super-population—see here & her | Strategy to deal with rare events logistic regression
(1) If you've "full knowledge of a population" why do you need a model to make predictions? I suspect you're implicitly considering them as a sample from a hypothetical super-population—see here & here. So should you throw away observations from your sample? No. Kin... | Strategy to deal with rare events logistic regression
(1) If you've "full knowledge of a population" why do you need a model to make predictions? I suspect you're implicitly considering them as a sample from a hypothetical super-population—see here & her |
8,312 | Strategy to deal with rare events logistic regression | On one level, I wonder how much of your model's inaccuracy is simply that your process is hard to predict, and your variables aren't sufficient to do so. Are there other variables that might explain more?
On the other hand, if you can cast your dependent variable as a count/ordinal problem (like casualties from conflic... | Strategy to deal with rare events logistic regression | On one level, I wonder how much of your model's inaccuracy is simply that your process is hard to predict, and your variables aren't sufficient to do so. Are there other variables that might explain m | Strategy to deal with rare events logistic regression
On one level, I wonder how much of your model's inaccuracy is simply that your process is hard to predict, and your variables aren't sufficient to do so. Are there other variables that might explain more?
On the other hand, if you can cast your dependent variable as... | Strategy to deal with rare events logistic regression
On one level, I wonder how much of your model's inaccuracy is simply that your process is hard to predict, and your variables aren't sufficient to do so. Are there other variables that might explain m |
8,313 | Strategy to deal with rare events logistic regression | In addition to downsampling the majority population you can oversample the rare events as well, but be aware that oversampling of the minority class may lead to overfitting, so check things carefully.
This paper can give more information about it: Yap, Bee Wah, et al. "An Application of Oversampling, Undersampling, B... | Strategy to deal with rare events logistic regression | In addition to downsampling the majority population you can oversample the rare events as well, but be aware that oversampling of the minority class may lead to overfitting, so check things carefully. | Strategy to deal with rare events logistic regression
In addition to downsampling the majority population you can oversample the rare events as well, but be aware that oversampling of the minority class may lead to overfitting, so check things carefully.
This paper can give more information about it: Yap, Bee Wah, et... | Strategy to deal with rare events logistic regression
In addition to downsampling the majority population you can oversample the rare events as well, but be aware that oversampling of the minority class may lead to overfitting, so check things carefully. |
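Random oversampling of the rare class is just sampling its rows with replacement until the classes balance; a minimal sketch (data and seed are hypothetical). The duplicated minority rows are exactly what creates the overfitting risk warned about above:

```python
import random

random.seed(0)
majority = [(i, 0) for i in range(100)]  # label 0: common class
minority = [(i, 1) for i in range(5)]    # label 1: rare class

# Draw minority rows with replacement until both classes have equal counts.
extra = random.choices(minority, k=len(majority) - len(minority))
balanced = majority + minority + extra

n_pos = sum(1 for _, y in balanced if y == 1)
n_neg = sum(1 for _, y in balanced if y == 0)
print(n_pos, n_neg)  # 100 100
```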
8,314 | Strategy to deal with rare events logistic regression | Your question boils down to how can I coax logit regression to find a better solution. But are you even sure that a better solution exists? With only ten parameters, were you able to find a better solution?
I would try a more complicated model by, for example, adding product terms at the input, or adding a max-out laye... | Strategy to deal with rare events logistic regression | Your question boils down to how can I coax logit regression to find a better solution. But are you even sure that a better solution exists? With only ten parameters, were you able to find a better s | Strategy to deal with rare events logistic regression
Your question boils down to how can I coax logit regression to find a better solution. But are you even sure that a better solution exists? With only ten parameters, were you able to find a better solution?
I would try a more complicated model by, for example, addin... | Strategy to deal with rare events logistic regression
Your question boils down to how can I coax logit regression to find a better solution. But are you even sure that a better solution exists? With only ten parameters, were you able to find a better s |
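"Adding product terms at the input" can be sketched as a feature-expansion step applied before the logistic fit (the function name is hypothetical):

```python
from itertools import combinations

def add_product_terms(features):
    """Append all pairwise products to a feature vector."""
    return list(features) + [a * b for a, b in combinations(features, 2)]

print(add_product_terms([2.0, 3.0, 5.0]))  # [2.0, 3.0, 5.0, 6.0, 10.0, 15.0]
```

The expanded rows can then be fed to any logistic regression routine, letting a linear model capture pairwise interactions.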
8,315 | Strategy to deal with rare events logistic regression | Great question.
To my mind, the issue is whether you're trying to do inference (are you interested in what your coefficients are telling you?) or prediction. If the latter, then you could borrow models from Machine Learning (BART, randomForest, boosted trees, etc.) that will almost certainly do a better job at predict... | Strategy to deal with rare events logistic regression | Great question.
To my mind, the issue is whether you're trying to do inference (are you interested in what your coefficients are telling you?) or prediction. If the latter, then you could borrow mode | Strategy to deal with rare events logistic regression
Great question.
To my mind, the issue is whether you're trying to do inference (are you interested in what your coefficients are telling you?) or prediction. If the latter, then you could borrow models from Machine Learning (BART, randomForest, boosted trees, etc.)... | Strategy to deal with rare events logistic regression
Great question.
To my mind, the issue is whether you're trying to do inference (are you interested in what your coefficients are telling you?) or prediction. If the latter, then you could borrow mode |
8,316 | Who first used/invented p-values? | Jacob Bernoulli (~1700) - John Arbuthnot (1710) - Nicolaus Bernoulli (1710s) - Abraham de Moivre (1718)
The case of Arbuthnot (see explanation in note 1 below) can also be read about in de Moivre's Doctrine of Chances (1718), pages 251-254, where he extends this line of thinking further.
De Moivre makes two steps/advancement... | Who first used/invented p-values? | Jacob Bernoulli (~1700) - John Arbuthnot (1710) - Nicolaus Bernoulli (1710s) - Abraham de Moivre (1718)
The case of Arbuthnot1 see explanation in note below, can also be read about in de Moivre's Doct | Who first used/invented p-values?
Jacob Bernoulli (~1700) - John Arbuthnot (1710) - Nicolaus Bernoulli (1710s) - Abraham de Moivre (1718)
The case of Arbuthnot (see explanation in note 1 below) can also be read about in de Moivre's Doctrine of Chances (1718), pages 251-254, where he extends this line of thinking further.
De ... | Who first used/invented p-values?
Jacob Bernoulli (~1700) - John Arbuthnot (1710) - Nicolaus Bernoulli (1710s) - Abraham de Moivre (1718)
The case of Arbuthnot1 see explanation in note below, can also be read about in de Moivre's Doct |
8,317 | Who first used/invented p-values? | I have three supporting links/arguments that support the date ~1600-1650 for formally developed statistics and much earlier for simply the usage of probabilities.
If you accept hypothesis testing as the basis, predating probability, then the Online Etymology Dictionary offers this:
"hypothesis (n.)
1590s, "a particula... | Who first used/invented p-values? | I have three supporting links/arguments that support the date ~1600-1650 for formally developed statistics and much earlier for simply the usage of probabilities.
If you accept hypothesis testing as t | Who first used/invented p-values?
I have three supporting links/arguments that support the date ~1600-1650 for formally developed statistics and much earlier for simply the usage of probabilities.
If you accept hypothesis testing as the basis, predating probability, then the Online Etymology Dictionary offers this:
"h... | Who first used/invented p-values?
I have three supporting links/arguments that support the date ~1600-1650 for formally developed statistics and much earlier for simply the usage of probabilities.
If you accept hypothesis testing as t |
8,318 | Choosing optimal alpha in elastic net logistic regression | Clarifying what is meant by $\alpha$ and Elastic Net parameters
Different terminology and parameters are used by different packages, but the meaning is generally the same:
The R package Glmnet uses the following definition
$\min_{\beta_0,\beta} \frac{1}{N} \sum_{i=1}^{N} w_i l(y_i,\beta_0+\beta^T x_i) +
\lambda\left[... | Choosing optimal alpha in elastic net logistic regression | Clarifying what is meant by $\alpha$ and Elastic Net parameters
Different terminology and parameters are used by different packages, but the meaning is generally the same:
The R package Glmnet uses t | Choosing optimal alpha in elastic net logistic regression
Clarifying what is meant by $\alpha$ and Elastic Net parameters
Different terminology and parameters are used by different packages, but the meaning is generally the same:
The R package Glmnet uses the following definition
$\min_{\beta_0,\beta} \frac{1}{N} \sum... | Choosing optimal alpha in elastic net logistic regression
Clarifying what is meant by $\alpha$ and Elastic Net parameters
Different terminology and parameters are used by different packages, but the meaning is generally the same:
The R package Glmnet uses t |
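For reference, the penalty term in the glmnet objective quoted above is $\lambda\left[(1-\alpha)\|\beta\|_2^2/2 + \alpha\|\beta\|_1\right]$, so $\alpha=1$ gives the lasso and $\alpha=0$ gives ridge. A direct transcription of that penalty:

```python
def elastic_net_penalty(beta, lam, alpha):
    """glmnet-style penalty: lam * ((1 - alpha) * ||beta||_2^2 / 2 + alpha * ||beta||_1)."""
    l1 = sum(abs(b) for b in beta)
    sq = sum(b * b for b in beta)
    return lam * ((1 - alpha) * sq / 2 + alpha * l1)

beta = [1.0, -2.0]
print(elastic_net_penalty(beta, lam=1.0, alpha=1.0))  # lasso: |1| + |-2| = 3.0
print(elastic_net_penalty(beta, lam=1.0, alpha=0.0))  # ridge: (1 + 4) / 2 = 2.5
```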
8,319 | Choosing optimal alpha in elastic net logistic regression | Let me add some very practical remarks despite the age of the question. As I am not an R user, I cannot let code talk, but it should be understandable nevertheless.
Normally you should just pick the hyperparameters (here: $\alpha$) with the best CV score. Alternatively, you could select the best $k$ models $f_1, ..., f... | Choosing optimal alpha in elastic net logistic regression | Let me add some very practical remarks despite the age of the question. As I am not an R user, I cannot let code talk, but it should be understandable nevertheless.
Normally you should just pick the h | Choosing optimal alpha in elastic net logistic regression
Let me add some very practical remarks despite the age of the question. As I am not an R user, I cannot let code talk, but it should be understandable nevertheless.
Normally you should just pick the hyperparameters (here: $\alpha$) with the best CV score. Altern... | Choosing optimal alpha in elastic net logistic regression
Let me add some very practical remarks despite the age of the question. As I am not an R user, I cannot let code talk, but it should be understandable nevertheless.
Normally you should just pick the h |
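The two selection strategies described, taking the single hyperparameter with the best CV score or keeping the top $k$ models for an ensemble, are each a one-liner (the CV scores below are hypothetical):

```python
cv_scores = {0.0: 0.71, 0.25: 0.74, 0.5: 0.78, 0.75: 0.76, 1.0: 0.72}  # alpha -> CV score

# Single best hyperparameter:
best_alpha = max(cv_scores, key=cv_scores.get)
print(best_alpha)  # 0.5

# Alternative from the answer: keep the k best models and ensemble them.
top_3 = sorted(cv_scores, key=cv_scores.get, reverse=True)[:3]
print(top_3)  # [0.5, 0.75, 0.25]
```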
8,320 | Optimising for Precision-Recall curves under class imbalance | The ROC curve is insensitive to changes in class imbalance; see Fawcett (2004) "ROC Graphs: Notes and Practical Considerations for Researchers".
Up-sampling the low-frequency class is a reasonable approach.
There are many other ways of dealing with class imbalance. Boosting and bagging are two techniques that come to m... | Optimising for Precision-Recall curves under class imbalance | The ROC curve is insensitive to changes in class imbalance; see Fawcett (2004) "ROC Graphs: Notes and Practical Considerations for Researchers".
Up-sampling the low-frequency class is a reasonable app | Optimising for Precision-Recall curves under class imbalance
The ROC curve is insensitive to changes in class imbalance; see Fawcett (2004) "ROC Graphs: Notes and Practical Considerations for Researchers".
Up-sampling the low-frequency class is a reasonable approach.
There are many other ways of dealing with class imba... | Optimising for Precision-Recall curves under class imbalance
The ROC curve is insensitive to changes in class imbalance; see Fawcett (2004) "ROC Graphs: Notes and Practical Considerations for Researchers".
Up-sampling the low-frequency class is a reasonable app |
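Fawcett's insensitivity point can be checked directly: TPR and FPR are each computed within a single class, so replicating the negatives moves neither, while precision drops (the labelled predictions below are hypothetical):

```python
def rates(pairs):
    """Return (TPR, FPR, precision) for (label, prediction) pairs."""
    tp = sum(1 for y, p in pairs if y == 1 and p == 1)
    fp = sum(1 for y, p in pairs if y == 0 and p == 1)
    fn = sum(1 for y, p in pairs if y == 1 and p == 0)
    tn = sum(1 for y, p in pairs if y == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn), tp / (tp + fp)

pairs = [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (0, 0), (0, 0), (0, 0)]
negatives = [p for p in pairs if p[0] == 0]
imbalanced = pairs + negatives * 9  # every negative example now appears 10 times

tpr1, fpr1, prec1 = rates(pairs)
tpr2, fpr2, prec2 = rates(imbalanced)
print(tpr1 == tpr2, fpr1 == fpr2)  # True True -- the ROC coordinates do not move
print(prec1, prec2)                # precision falls as the negatives multiply
```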
8,321 | Optimising for Precision-Recall curves under class imbalance | A recent study "An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics" compares three methods of improved classification on unbalanced data:
Data Sampling (as suggested in the question)
Algorithm modification
Cost sensitive learning | Optimising for Precision-Recall curves under class imbalance | A recent study "An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics" compares three methods of improved classification on | Optimising for Precision-Recall curves under class imbalance
A recent study "An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics" compares three methods of improved classification on unbalanced data:
Data Sampling (as suggested in the questi... | Optimising for Precision-Recall curves under class imbalance
A recent study "An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics" compares three methods of improved classification on |
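Cost-sensitive learning, the third option in the list, typically means reweighting the training loss rather than resampling the data; a sketch using a class-weighted log loss (the weights are hypothetical):

```python
import math

def weighted_log_loss(y_true, p_pred, w_pos=10.0, w_neg=1.0):
    """Log loss where errors on the rare positive class cost w_pos/w_neg times more."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        w = w_pos if y == 1 else w_neg
        total -= w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# A missed positive is penalised by the ratio of the class weights (~10x here)
# relative to an equally wrong prediction on a negative:
print(weighted_log_loss([1], [0.1]) / weighted_log_loss([0], [0.9]))
```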
8,322 | Optimising for Precision-Recall curves under class imbalance | I wanted to draw attention to the fact that the last two experiments are in fact using the SAME model on ALMOST THE SAME dataset. The difference in performance is not a model difference; it is explained by the different distributions of the validation dataset and the properties of the particular METRICS used - precision and recall, t... | Optimising for Precision-Recall curves under class imbalance | I wanted to draw attention to the fact that the last two experiments are in fact using the SAME model on ALMOST THE SAME dataset. The difference in performance is not a model difference; it is explained | Optimising for Precision-Recall curves under class imbalance
I wanted to draw attention to the fact that the last two experiments are in fact using the SAME model on ALMOST THE SAME dataset. The difference in performance is not a model difference; it is explained by the different distributions of the validation dataset and the pr... | Optimising for Precision-Recall curves under class imbalance
I wanted to draw attention to the fact that the last two experiments are in fact using the SAME model on ALMOST THE SAME dataset. The difference in performance is not a model difference; it is explained |
8,323 | Optimising for Precision-Recall curves under class imbalance | Assume the upsampled positive samples have the "same distribution" as the "original set". As the number of positive samples increases, a few changes happen:
1) the number of TruePositives (TP) increases for "all thresholds" and, as a result, ratios TP/(TP+FP) and TP/(TP+FN) increase for all thresholds. So that the ar... | Optimising for Precision-Recall curves under class imbalance | Assume the upsampled positive samples have the "same distribution" as the "original set". As the number of positive samples increases, a few changes happen:
1) the number of TruePositives (TP) incre | Optimising for Precision-Recall curves under class imbalance
Assume the upsampled positive samples have the "same distribution" as the "original set". As the number of positive samples increases, a few changes happen:
1) the number of TruePositives (TP) increases for "all thresholds" and, as a result, ratios TP/(TP+F... | Optimising for Precision-Recall curves under class imbalance
Assume the upsampled positive samples have the "same distribution" as the "original set". As the number of positive samples increases, a few changes happen:
1) the number of TruePositives (TP) incre |
8,324 | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | Bayesian inference in a T noise model with an appropriate prior will give a robust estimate of location and scale. The precise conditions that the likelihood and prior need to satisfy are given in the paper Bayesian robustness modelling of location and scale parameters by Andrade and O'Hagan (2011).
The estimates are... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | Bayesian inference in a T noise model with an appropriate prior will give a robust estimate of location and scale. The precise conditions that the likelihood and prior need to satisfy are given in th | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
Bayesian inference in a T noise model with an appropriate prior will give a robust estimate of location and scale. The precise conditions that the likelihood and prior need to satisfy are given in the paper Bayesian robust... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
Bayesian inference in a T noise model with an appropriate prior will give a robust estimate of location and scale. The precise conditions that the likelihood and prior need to satisfy are given in th |
8,325 | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | As you are asking a question about a very precise problem (robust estimation), I will offer you an equally precise answer. First, however, I will begin by trying to dispel an unwarranted assumption. It is not true that there is a robust Bayesian
estimate of location (there are Bayesian estimators of locations but as ... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | As you are asking a question about a very precise problem (robust estimation), I will offer you an equally precise answer. First, however, I will begin by trying to dispel an unwarranted assumption. | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
As you are asking a question about a very precise problem (robust estimation), I will offer you an equally precise answer. First, however, I will begin by trying to dispel an unwarranted assumption. It is not true that the... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
As you are asking a question about a very precise problem (robust estimation), I will offer you an equally precise answer. First, however, I will begin by trying to dispel an unwarranted assumption. |
8,326 | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | In Bayesian analysis, using the inverse-Gamma distribution as a prior for the precision (the inverse of the variance) is a common choice, or the inverse-Wishart distribution for multivariate models. Adding a prior on the variance improves robustness against outliers.
There is a nice paper by Andrew Gelman: "Prior distri... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | In Bayesian analysis, using the inverse-Gamma distribution as a prior for the precision (the inverse of the variance) is a common choice, or the inverse-Wishart distribution for multivariate models. Ad | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
In Bayesian analysis, using the inverse-Gamma distribution as a prior for the precision (the inverse of the variance) is a common choice, or the inverse-Wishart distribution for multivariate models. Adding a prior on the var... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
In Bayesian analysis, using the inverse-Gamma distribution as a prior for the precision (the inverse of the variance) is a common choice, or the inverse-Wishart distribution for multivariate models. Ad |
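The inverse-Gamma prior is conjugate in the simplest case: for normal data with known mean $\mu$, an $IG(a, b)$ prior on $\sigma^2$ updates to $IG(a + n/2,\, b + \sum_i (x_i - \mu)^2/2)$. A sketch of that update (known-mean case; the numbers are hypothetical):

```python
def inverse_gamma_posterior(a, b, data, mu):
    """Conjugate update for sigma^2 ~ IG(a, b) under N(mu, sigma^2) data with mu known."""
    n = len(data)
    ss = sum((x - mu) ** 2 for x in data)  # sum of squared deviations from mu
    return a + n / 2, b + ss / 2

post = inverse_gamma_posterior(2.0, 1.0, [0.5, -0.5, 1.0, -1.0], mu=0.0)
print(post)  # (4.0, 2.25)
```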
8,327 | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | A robust estimator for the location parameter $\mu$ of some dataset of size $N$ is obtained when one assigns a Jeffreys prior to the variance $\sigma^2$ of the normal distribution, and computes the marginal for $\mu$, yielding a $t$ distribution with $N$ degrees of freedom.
Similarly, if you want a robust estimator for... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | A robust estimator for the location parameter $\mu$ of some dataset of size $N$ is obtained when one assigns a Jeffreys prior to the variance $\sigma^2$ of the normal distribution, and computes the ma | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
A robust estimator for the location parameter $\mu$ of some dataset of size $N$ is obtained when one assigns a Jeffreys prior to the variance $\sigma^2$ of the normal distribution, and computes the marginal for $\mu$, yield... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
A robust estimator for the location parameter $\mu$ of some dataset of size $N$ is obtained when one assigns a Jeffreys prior to the variance $\sigma^2$ of the normal distribution, and computes the ma |
8,328 | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | I have followed the discussion from the original question. Rasmus, when you say robustness I am sure you mean in the data (outliers, not mis-specification of distributions). I will take the distribution of the data to be a Laplace distribution instead of a t-distribution, then as in normal regression where we model the ... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | I have followed the discussion from the original question. Rasmus, when you say robustness I am sure you mean in the data (outliers, not mis-specification of distributions). I will take the distribut | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
I have followed the discussion from the original question. Rasmus, when you say robustness I am sure you mean in the data (outliers, not mis-specification of distributions). I will take the distribution of the data to be a L... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
I have followed the discussion from the original question. Rasmus, when you say robustness I am sure you mean in the data (outliers, not mis-specification of distributions). I will take the distribut |
8,329 | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | Suppose that you have $K$ groups and you want to model the distribution of their sample variances, perhaps in relation to some covariates $\bf{x}$. That is, suppose that your data point for group $k \in {1 \ldots K}$ is $\textrm{Var}(y_k) \in [0, \infty)$. The question here is, "What is a robust model for the likelihoo... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? | Suppose that you have $K$ groups and you want to model the distribution of their sample variances, perhaps in relation to some covariates $\bf{x}$. That is, suppose that your data point for group $k \ | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
Suppose that you have $K$ groups and you want to model the distribution of their sample variances, perhaps in relation to some covariates $\bf{x}$. That is, suppose that your data point for group $k \in {1 \ldots K}$ is $\t... | What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
Suppose that you have $K$ groups and you want to model the distribution of their sample variances, perhaps in relation to some covariates $\bf{x}$. That is, suppose that your data point for group $k \ |
8,330 | How to understand SARIMAX intuitively? | As you noted, (1) an AR model relates the value of an observation $x$ at time $t$ to the previous values, with some error:
$$
x_t = \phi x_{t-1} + \varepsilon_t
$$
Let's substitute in $ x_{t-1} $, and then $ x_{t-2} $:
$$\begin{aligned}
x_t &= \phi (\phi x_{t-2} + \varepsilon_{t-1}) + \varepsilon_t \\
&= \phi^2x_{t-... | How to understand SARIMAX intuitively? | As you noted, (1) an AR model relates the value of an observation $x$ at time $t$ to the previous values, with some error:
$$
x_t = \phi x_{t-1} + \varepsilon_t
$$
Let's substitute in $ x_{t-1} $, a | How to understand SARIMAX intuitively?
As you noted, (1) an AR model relates the value of an observation $x$ at time $t$ to the previous values, with some error:
$$
x_t = \phi x_{t-1} + \varepsilon_t
$$
Let's substitute in $ x_{t-1} $, and then $ x_{t-2} $:
$$\begin{aligned}
x_t &= \phi (\phi x_{t-2} + \varepsilon_{t... | How to understand SARIMAX intuitively?
As you noted, (1) an AR model relates the value of an observation $x$ at time $t$ to the previous values, with some error:
$$
x_t = \phi x_{t-1} + \varepsilon_t
$$
Let's substitute in $ x_{t-1} $, a |
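The repeated substitution described in this answer can be checked numerically. Below is a minimal Python sketch (the value of $\phi$, the horizon, and all variable names are illustrative, not from the answer): unrolling $x_t = \phi x_{t-1} + \varepsilon_t$ all the way back expresses $x_t$ as $\phi^t x_0$ plus a $\phi$-weighted sum of all past shocks.

```python
import numpy as np

# Unrolling x_t = phi*x_{t-1} + eps_t by repeated substitution gives
# x_t = phi^t * x_0 + sum_{k=0}^{t-1} phi^k * eps_{t-k}.
rng = np.random.default_rng(0)
phi, T = 0.7, 50                    # illustrative coefficient and horizon
eps = rng.normal(size=T + 1)

x = np.empty(T + 1)
x[0] = eps[0]
for t in range(1, T + 1):           # recursive AR(1) definition
    x[t] = phi * x[t - 1] + eps[t]

# fully substituted (unrolled) form: weighted sum of past shocks
unrolled = phi**T * x[0] + sum(phi**k * eps[T - k] for k in range(T))
print(np.isclose(x[T], unrolled))   # True: the two forms agree
```

The equality holds exactly (up to floating-point rounding), which is the point of the substitution: an AR process is an infinitely long, geometrically decaying moving average of its shocks.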
8,331 | Convolutional neural networks: Aren't the central neurons over-represented in the output? | Sparse representations are expected in hierarchical models. Possibly, what you are discovering is a problem intrinsic to the hierarchical structure of deep learning models. You will find quite a few scientific papers on "sparse representations", especially in memory research.
I think you would benefit from reading abou... | Convolutional neural networks: Aren't the central neurons over-represented in the output? | Sparse representations are expected in hierarchical models. Possibly, what you are discovering is a problem intrinsic to the hierarchical structure of deep learning models. You will find quite a few s | Convolutional neural networks: Aren't the central neurons over-represented in the output?
Sparse representations are expected in hierarchical models. Possibly, what you are discovering is a problem intrinsic to the hierarchical structure of deep learning models. You will find quite a few scientific papers on "sparse re... | Convolutional neural networks: Aren't the central neurons over-represented in the output?
Sparse representations are expected in hierarchical models. Possibly, what you are discovering is a problem intrinsic to the hierarchical structure of deep learning models. You will find quite a few s |
8,332 | Convolutional neural networks: Aren't the central neurons over-represented in the output? | You're right that this is an issue if the convolution operates only on the image pixels, but the problem disappears if you zero-pad the images (as is generally recommended). This ensures that the convolution will apply the filter the same number of times to each pixel. | Convolutional neural networks: Aren't the central neurons over-represented in the output? | You're right that this is an issue if the convolution operates only on the image pixels, but the problem disappears if you zero-pad the images (as is generally recommended). This ensures that the convolu | Convolutional neural networks: Aren't the central neurons over-represented in the output?
You're right that this is an issue if the convolution operates only on the image pixels, but the problem disappears if you zero-pad the images (as is generally recommended). This ensures that the convolution will apply the filter the... | Convolutional neural networks: Aren't the central neurons over-represented in the output?
You're right that this is an issue if the convolution operates only on the image pixels, but the problem disappears if you zero-pad the images (as is generally recommended). This ensures that the convolu |
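The coverage argument in this answer can be made concrete by counting, for each pixel, how many filter windows include it. The sketch below (hypothetical 7×7 image and 3×3 filter, not from the answer) shows that with no padding the corners are covered far less than the centre, and that "full" zero-padding with pad = k − 1 equalizes the counts exactly.

```python
import numpy as np

def coverage(height, width, k, pad):
    """Count, for every pixel, how many k-by-k filter windows include it."""
    img = np.zeros((height + 2 * pad, width + 2 * pad))
    for i in range(img.shape[0] - k + 1):
        for j in range(img.shape[1] - k + 1):
            img[i:i + k, j:j + k] += 1             # this window touches these pixels
    return img[pad:pad + height, pad:pad + width]  # crop back to the original image

no_pad = coverage(7, 7, 3, pad=0)  # corner pixel in 1 window, centre pixel in 9
full = coverage(7, 7, 3, pad=2)    # "full" padding (pad = k - 1) equalizes coverage
print(no_pad[0, 0], no_pad[3, 3])  # 1.0 9.0
print(full.min(), full.max())      # 9.0 9.0
```

Note that "same" padding (pad = 1 here) only reduces, but does not eliminate, the imbalance; it is full padding that applies the filter the same number of times to every pixel.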
8,333 | Supervised learning, unsupervised learning and reinforcement learning: Workflow basics | This is a very nice compact introduction to the basic ideas!
Reinforcement Learning
I think your use case description of reinforcement learning is not exactly right. The term classify is not appropriate. A better description would be:
I don't know how to act in this environment, can you find a good behavior and meanw... | Supervised learning, unsupervised learning and reinforcement learning: Workflow basics | This is a very nice compact introduction to the basic ideas!
Reinforcement Learning
I think your use case description of reinforcement learning is not exactly right. The term classify is not appropria | Supervised learning, unsupervised learning and reinforcement learning: Workflow basics
This is a very nice compact introduction to the basic ideas!
Reinforcement Learning
I think your use case description of reinforcement learning is not exactly right. The term classify is not appropriate. A better description would b... | Supervised learning, unsupervised learning and reinforcement learning: Workflow basics
This is a very nice compact introduction to the basic ideas!
Reinforcement Learning
I think your use case description of reinforcement learning is not exactly right. The term classify is not appropria |
8,334 | Supervised learning, unsupervised learning and reinforcement learning: Workflow basics | Disclaimer: I am no expert and have never even done anything with reinforcement learning (yet), so any feedback would be welcome...
Here is an answer that adds some tiny mathematical notes to your list and some different thoughts on when to use what. I hope the enumeration is self-explanatory enough:
Supervised
We ... | Supervised learning, unsupervised learning and reinforcement learning: Workflow basics | Disclaimer: I am no expert and have never even done anything with reinforcement learning (yet), so any feedback would be welcome...
Here is an answer that adds some tiny mathematical notes to your | Supervised learning, unsupervised learning and reinforcement learning: Workflow basics
Disclaimer: I am no expert and have never even done anything with reinforcement learning (yet), so any feedback would be welcome...
Here is an answer that adds some tiny mathematical notes to your list and some different thoughts ... | Supervised learning, unsupervised learning and reinforcement learning: Workflow basics
Disclaimer: I am no expert and have never even done anything with reinforcement learning (yet), so any feedback would be welcome...
Here is an answer that adds some tiny mathematical notes to your
8,335 | Bound for Arithmetic Harmonic mean inequality for matrices? | Yes, indeed there is. Please see the work by Mond and Pec̆arić here. They established the AM-GM inequality for positive semi-definite matrices. Here is a link to the paper that contains the proof:
https://www.sciencedirect.com/science/article/pii/0024379595002693
After downloading the paper, the proof is on pages 450-4... | Bound for Arithmetic Harmonic mean inequality for matrices? | Yes, indeed there is. Please see the work by Mond and Pec̆arić here. They established the AM-GM inequality for positive semi-definite matrices. Here is a link to the paper that contains the proof:
htt | Bound for Arithmetic Harmonic mean inequality for matrices?
Yes, indeed there is. Please see the work by Mond and Pec̆arić here. They established the AM-GM inequality for positive semi-definite matrices. Here is a link to the paper that contains the proof:
https://www.sciencedirect.com/science/article/pii/0024379595002... | Bound for Arithmetic Harmonic mean inequality for matrices?
Yes, indeed there is. Please see the work by Mond and Pec̆arić here. They established the AM-GM inequality for positive semi-definite matrices. Here is a link to the paper that contains the proof:
htt |
8,336 | Variance on the sum of predicted values from a mixed effect model on a timeseries | In matrix notation a mixed model can be represented as
y = X*beta + Z*u + epsilon
where X and Z are known design matrices relating to the fixed effects and random effects observations, respectively.
I would apply a simple and adequate (but not the best) transformation for correcting for auto-correlation that involves t... | Variance on the sum of predicted values from a mixed effect model on a timeseries | In matrix notation a mixed model can be represented as
y = X*beta + Z*u + epsilon
where X and Z are known design matrices relating to the fixed effects and random effects observations, respectively.
I | Variance on the sum of predicted values from a mixed effect model on a timeseries
In matrix notation a mixed model can be represented as
y = X*beta + Z*u + epsilon
where X and Z are known design matrices relating to the fixed effects and random effects observations, respectively.
I would apply a simple and adequate (bu... | Variance on the sum of predicted values from a mixed effect model on a timeseries
In matrix notation a mixed model can be represented as
y = X*beta + Z*u + epsilon
where X and Z are known design matrices relating to the fixed effects and random effects observations, respectively.
I |
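The matrix form quoted in this answer can be sketched directly. The construction below is a toy example with illustrative dimensions, effect sizes, and noise scales (none of it is from the answer); it only shows how X, Z, and the two effect vectors combine into y.

```python
import numpy as np

# Toy construction of the mixed-model form y = X*beta + Z*u + epsilon.
rng = np.random.default_rng(1)
n, q = 12, 3                                   # observations, random-effect groups

X = np.column_stack([np.ones(n), np.arange(n)])               # fixed effects: intercept + trend
groups = np.repeat(np.arange(q), n // q)                      # group membership of each obs
Z = (groups[:, None] == np.arange(q)[None, :]).astype(float)  # random-effects design (indicators)

beta = np.array([1.0, 0.5])                    # fixed-effect coefficients
u = rng.normal(scale=0.8, size=q)              # random group intercepts
eps = rng.normal(scale=0.3, size=n)            # residual error

y = X @ beta + Z @ u + eps
print(y.shape, Z.sum(axis=1))                  # (12,) and each row of Z has exactly one 1
```

Each row of Z picks out the random effect of the group that observation belongs to, which is what "known design matrix relating to the random effects" means here.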
8,337 | What is the hardest statistical concept to grasp? | For some reason, people have difficulty grasping what a p-value really is. | What is the hardest statistical concept to grasp? | For some reason, people have difficulty grasping what a p-value really is. | What is the hardest statistical concept to grasp?
For some reason, people have difficulty grasping what a p-value really is. | What is the hardest statistical concept to grasp?
For some reason, people have difficulty grasping what a p-value really is. |
8,338 | What is the hardest statistical concept to grasp? | Similar to shabbychef's answer, it is difficult to understand the meaning of a confidence interval in frequentist statistics. I think the biggest obstacle is that a confidence interval doesn't answer the question that we would like to answer. We'd like to know, "what's the chance that the true value is inside this part... | What is the hardest statistical concept to grasp? | Similar to shabbychef's answer, it is difficult to understand the meaning of a confidence interval in frequentist statistics. I think the biggest obstacle is that a confidence interval doesn't answer | What is the hardest statistical concept to grasp?
Similar to shabbychef's answer, it is difficult to understand the meaning of a confidence interval in frequentist statistics. I think the biggest obstacle is that a confidence interval doesn't answer the question that we would like to answer. We'd like to know, "what's ... | What is the hardest statistical concept to grasp?
Similar to shabbychef's answer, it is difficult to understand the meaning of a confidence interval in frequentist statistics. I think the biggest obstacle is that a confidence interval doesn't answer |
8,339 | What is the hardest statistical concept to grasp? | What is the meaning of "degrees of freedom"? How about df that are not whole numbers? | What is the hardest statistical concept to grasp? | What is the meaning of "degrees of freedom"? How about df that are not whole numbers? | What is the hardest statistical concept to grasp?
What is the meaning of "degrees of freedom"? How about df that are not whole numbers? | What is the hardest statistical concept to grasp?
What is the meaning of "degrees of freedom"? How about df that are not whole numbers? |
8,340 | What is the hardest statistical concept to grasp? | Conditional probability probably leads to most mistakes in everyday experience. There are many harder concepts to grasp, of course, but people usually don't have to worry about them--this one they can't get away from & is a source of rampant misadventure. | What is the hardest statistical concept to grasp? | Conditional probability probably leads to most mistakes in everyday experience. There are many harder concepts to grasp, of course, but people usually don't have to worry about them--this one they can | What is the hardest statistical concept to grasp?
Conditional probability probably leads to most mistakes in everyday experience. There are many harder concepts to grasp, of course, but people usually don't have to worry about them--this one they can't get away from & is a source of rampant misadventure. | What is the hardest statistical concept to grasp?
Conditional probability probably leads to most mistakes in everyday experience. There are many harder concepts to grasp, of course, but people usually don't have to worry about them--this one they can |
8,341 | What is the hardest statistical concept to grasp? | I think that very few scientists understand this basic point: It is only possible to interpret results of statistical analyses at face value, if every step was planned in advance. Specifically:
Sample size has to be picked in advance. It is not ok to keep analyzing the data as more subjects are added, stopping when th... | What is the hardest statistical concept to grasp? | I think that very few scientists understand this basic point: It is only possible to interpret results of statistical analyses at face value, if every step was planned in advance. Specifically:
Sampl | What is the hardest statistical concept to grasp?
I think that very few scientists understand this basic point: It is only possible to interpret results of statistical analyses at face value, if every step was planned in advance. Specifically:
Sample size has to be picked in advance. It is not ok to keep analyzing the... | What is the hardest statistical concept to grasp?
I think that very few scientists understand this basic point: It is only possible to interpret results of statistical analyses at face value, if every step was planned in advance. Specifically:
Sampl |
8,342 | What is the hardest statistical concept to grasp? | Tongue firmly in cheek: For frequentists, the Bayesian concept of probability; for Bayesians, the frequentist concept of probability. ;o)
Both have merit of course, but it can be very difficult to understand why one framework is interesting/useful/valid if your grasp of the other is too firm. Cross-validated is a goo... | What is the hardest statistical concept to grasp? | Tongue firmly in cheek: For frequentists, the Bayesian concept of probability; for Bayesians, the frequentist concept of probability. ;o)
Both have merit of course, but it can be very difficult to un | What is the hardest statistical concept to grasp?
Tongue firmly in cheek: For frequentists, the Bayesian concept of probability; for Bayesians, the frequentist concept of probability. ;o)
Both have merit of course, but it can be very difficult to understand why one framework is interesting/useful/valid if your grasp o... | What is the hardest statistical concept to grasp?
Tongue firmly in cheek: For frequentists, the Bayesian concept of probability; for Bayesians, the frequentist concept of probability. ;o)
Both have merit of course, but it can be very difficult to un |
8,343 | What is the hardest statistical concept to grasp? | From my personal experience the concept of likelihood can also cause quite a lot of stir, especially for non-statisticians. As wikipedia says, it is very often mixed up with the concept of probability, which is not exactly correct. | What is the hardest statistical concept to grasp? | From my personal experience the concept of likelihood can also cause quite a lot of stir, especially for non-statisticians. As wikipedia says, it is very often mixed up with the concept of probability | What is the hardest statistical concept to grasp?
From my personal experience the concept of likelihood can also cause quite a lot of stir, especially for non-statisticians. As wikipedia says, it is very often mixed up with the concept of probability, which is not exactly correct. | What is the hardest statistical concept to grasp?
From my personal experience the concept of likelihood can also cause quite a lot of stir, especially for non-statisticians. As wikipedia says, it is very often mixed up with the concept of probability |
8,344 | What is the hardest statistical concept to grasp? | Fiducial inference. Even Fisher admitted he didn't understand what it does, and he invented it. | What is the hardest statistical concept to grasp? | Fiducial inference. Even Fisher admitted he didn't understand what it does, and he invented it. | What is the hardest statistical concept to grasp?
Fiducial inference. Even Fisher admitted he didn't understand what it does, and he invented it. | What is the hardest statistical concept to grasp?
Fiducial inference. Even Fisher admitted he didn't understand what it does, and he invented it. |
8,345 | What is the hardest statistical concept to grasp? | What do the different distributions really represent, besides how they are used. | What is the hardest statistical concept to grasp? | What do the different distributions really represent, besides how they are used. | What is the hardest statistical concept to grasp?
What do the different distributions really represent, besides how they are used. | What is the hardest statistical concept to grasp?
What do the different distributions really represent, besides how they are used. |
8,346 | What is the hardest statistical concept to grasp? | I think the question is interpretable in two ways, which will give very different answers:
1) For people studying statistics, particularly at a relatively advanced level, what is the hardest concept to grasp?
2) Which statistical concept is misunderstood by the most people?
For 1) I don't know the answer at all. S... | What is the hardest statistical concept to grasp? | I think the question is interpretable in two ways, which will give very different answers:
1) For people studying statistics, particularly at a relatively advanced level, what is the hardest concept t | What is the hardest statistical concept to grasp?
I think the question is interpretable in two ways, which will give very different answers:
1) For people studying statistics, particularly at a relatively advanced level, what is the hardest concept to grasp?
2) Which statistical concept is misunderstood by the most p... | What is the hardest statistical concept to grasp?
I think the question is interpretable in two ways, which will give very different answers:
1) For people studying statistics, particularly at a relatively advanced level, what is the hardest concept t |
8,347 | What is the hardest statistical concept to grasp? | Confidence interval in non-Bayesian tradition is a difficult one. | What is the hardest statistical concept to grasp? | Confidence interval in non-Bayesian tradition is a difficult one. | What is the hardest statistical concept to grasp?
Confidence interval in non-Bayesian tradition is a difficult one. | What is the hardest statistical concept to grasp?
Confidence interval in non-Bayesian tradition is a difficult one. |
8,348 | What is the hardest statistical concept to grasp? | I think people miss the boat on pretty much everything the first time around. I think what most students don't understand is that they're usually estimating parameters based on samples. They don't know the difference between a sample statistic and a population parameter. If you beat these ideas into their head, the ... | What is the hardest statistical concept to grasp? | I think people miss the boat on pretty much everything the first time around. I think what most students don't understand is that they're usually estimating parameters based on samples. They don't k | What is the hardest statistical concept to grasp?
I think people miss the boat on pretty much everything the first time around. I think what most students don't understand is that they're usually estimating parameters based on samples. They don't know the difference between a sample statistic and a population paramet... | What is the hardest statistical concept to grasp?
I think people miss the boat on pretty much everything the first time around. I think what most students don't understand is that they're usually estimating parameters based on samples. They don't k |
8,349 | Generating random numbers manually | If "manually" includes "mechanical" then you have many options available to you. To simulate a Bernoulli variable with probability half, we can toss a coin: $0$ for tails, $1$ for heads. To simulate a geometric distribution we can count how many coin tosses are needed before we obtain heads. To simulate a binomial dist... | Generating random numbers manually | If "manually" includes "mechanical" then you have many options available to you. To simulate a Bernoulli variable with probability half, we can toss a coin: $0$ for tails, $1$ for heads. To simulate a | Generating random numbers manually
If "manually" includes "mechanical" then you have many options available to you. To simulate a Bernoulli variable with probability half, we can toss a coin: $0$ for tails, $1$ for heads. To simulate a geometric distribution we can count how many coin tosses are needed before we obtain... | Generating random numbers manually
If "manually" includes "mechanical" then you have many options available to you. To simulate a Bernoulli variable with probability half, we can toss a coin: $0$ for tails, $1$ for heads. To simulate a |
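The mechanical recipes in this answer translate directly into code. A small sketch (sample sizes and the seed are arbitrary choices, not from the answer): a fair coin gives a Bernoulli(1/2) draw, counting flips until the first head gives a Geometric(1/2) draw, and summing n flips gives a Binomial(n, 1/2) draw.

```python
import random

random.seed(0)
flip = lambda: random.random() < 0.5       # fair coin: True = heads

def geometric():
    """Count tosses until the first head: a Geometric(1/2) draw."""
    tosses = 1
    while not flip():
        tosses += 1
    return tosses

binomial_draw = sum(flip() for _ in range(10))   # Binomial(10, 1/2): heads in 10 tosses
mean_geo = sum(geometric() for _ in range(10_000)) / 10_000
print(binomial_draw, mean_geo)                   # the mean is near E[G] = 1/p = 2
```

With 10,000 simulated geometric draws the sample mean lands very close to the theoretical value 1/p = 2, which is a quick sanity check on the coin-toss construction.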
8,350 | Generating random numbers manually | If you can get access to a very precise clock, you can extract the decimal part of the current time and turn it into a uniform, from which you can derive a normal simulation by the Box-Müller transform:$$X=\sqrt{-2\log U_1}\,\cos(2\pi U_2)$$(and even two since $Y=\sqrt{-2\log U_1}\,\sin(2\pi U_2)$ is another normal var... | Generating random numbers manually | If you can get access to a very precise clock, you can extract the decimal part of the current time and turn it into a uniform, from which you can derive a normal simulation by the Box-Müller transfor | Generating random numbers manually
If you can get access to a very precise clock, you can extract the decimal part of the current time and turn it into a uniform, from which you can derive a normal simulation by the Box-Müller transform:$$X=\sqrt{-2\log U_1}\,\cos(2\pi U_2)$$(and even two since $Y=\sqrt{-2\log U_1}\,\s... | Generating random numbers manually
If you can get access to a very precise clock, you can extract the decimal part of the current time and turn it into a uniform, from which you can derive a normal simulation by the Box-Müller transfor |
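The Box-Müller formulas quoted in this answer are easy to verify numerically. The sketch below uses NumPy's uniform generator as a stand-in for the clock-digit uniforms (the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
u1, u2 = rng.random(100_000), rng.random(100_000)   # stand-ins for the clock digits

# Box-Müller: two independent N(0,1) draws from each pair of uniforms
x = np.sqrt(-2 * np.log(u1)) * np.cos(2 * np.pi * u2)
y = np.sqrt(-2 * np.log(u1)) * np.sin(2 * np.pi * u2)

print(x.mean(), x.std())   # both close to the N(0,1) values 0 and 1
```

The sine variant y gives a second normal draw that is independent of x, exactly as the answer notes.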
8,351 | Generating random numbers manually | This is not exactly random, but it should be close enough, as you seem to want a rough experiment.
Use your phone to set up a chronometer. After a good 10 seconds, stop it (the more you wait, the more you approach a truly "random" result, but 10 seconds are fine). Take the last digits (for instance, 10.67 sec will give ... | Generating random numbers manually | This is not exactly random, but it should be close enough, as you seem to want a rough experiment.
Use your phone to set up a chronometer. After a good 10 seconds, stop it (the more you wait, the more | Generating random numbers manually
This is not exactly random, but it should be close enough, as you seem to want a rough experiment.
Use your phone to set up a chronometer. After a good 10 seconds, stop it (the more you wait, the more you approach a truly "random" result, but 10 seconds are fine). Take the last digits ... | Generating random numbers manually
This is not exactly random, but it should be close enough, as you seem to want a rough experiment.
Use your phone to set up a chronometer. After a good 10 seconds, stop it (the more you wait, the more
8,352 | Generating random numbers manually | Let us flip an unbiased coin $n$ times. Starting at zero, we count $+1$ if heads, $-1$ if tails. After $n$ coin flips, we divide the counter by $\sqrt n$. Using the central limit theorem, if $n$ is sufficiently large, then we should have an "approximate realization" of the normalized Gaussian $N (0,1)$.
Why? Let
$$X_k... | Generating random numbers manually | Let us flip an unbiased coin $n$ times. Starting at zero, we count $+1$ if heads, $-1$ if tails. After $n$ coin flips, we divide the counter by $\sqrt n$. Using the central limit theorem, if $n$ is su | Generating random numbers manually
Let us flip an unbiased coin $n$ times. Starting at zero, we count $+1$ if heads, $-1$ if tails. After $n$ coin flips, we divide the counter by $\sqrt n$. Using the central limit theorem, if $n$ is sufficiently large, then we should have an "approximate realization" of the normalized ... | Generating random numbers manually
Let us flip an unbiased coin $n$ times. Starting at zero, we count $+1$ if heads, $-1$ if tails. After $n$ coin flips, we divide the counter by $\sqrt n$. Using the central limit theorem, if $n$ is su |
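The coin-flip construction in this answer, repeated many times, does look approximately standard normal. A quick vectorized sketch (the repetition counts are arbitrary choices): each row sums n fair ±1 steps and divides by √n.

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 500, 5_000                        # illustrative sizes

steps = rng.choice([-1, 1], size=(reps, n))  # fair +/-1 coin flips
z = steps.sum(axis=1) / np.sqrt(n)           # one approximate N(0,1) draw per row

print(z.mean(), z.var())                     # close to 0 and 1, as the CLT predicts
```

The variance works out exactly: each step has mean 0 and variance 1, so the sum has variance n and the scaled sum has variance 1; only the shape of the distribution relies on the CLT.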
8,353 | Generating random numbers manually | It's worth noting that once you can generate a uniform(0,1), you can generate any random variable for which the inverse CDF is calculable by simply plugging the uniform random variable into the inverse CDF.
So how might one calculate a uniform(0,1) manually? Well, as mentioned by @Silverfish, there are a variety of ... | Generating random numbers manually | It's worth noting that once you can generate a uniform(0,1), you can generate any random variable for which the inverse cdf is calculatable by simply plugging the uniform random variable into the inve | Generating random numbers manually
It's worth noting that once you can generate a uniform(0,1), you can generate any random variable for which the inverse CDF is calculable by simply plugging the uniform random variable into the inverse CDF.
So how might one calculate a uniform(0,1) manually? Well, as mentioned by @... | Generating random numbers manually
It's worth noting that once you can generate a uniform(0,1), you can generate any random variable for which the inverse CDF is calculable by simply plugging the uniform random variable into the inve |
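As a concrete instance of the inverse-CDF idea in this answer (the exponential distribution is my choice of example, not the answer's): for X ~ Exponential(rate), F(x) = 1 − exp(−rate·x), so F⁻¹(u) = −log(1 − u)/rate, and plugging uniform draws into F⁻¹ yields exponential draws.

```python
import numpy as np

rng = np.random.default_rng(3)
rate = 2.0                                # illustrative rate parameter
u = rng.random(100_000)                   # Uniform(0,1) draws
x = -np.log(1 - u) / rate                 # plugged into the inverse CDF F^{-1}

print(x.mean())   # close to the exponential mean 1/rate = 0.5
```

The same recipe works for any distribution whose inverse CDF you can write down, which is exactly the answer's point.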
8,354 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | More consistency in parameter names. For instance:
matrix() has a parameter dimnames.
write.table() has parameters row.names and col.names (with dots, and no dimnames parameter).
There are functions rownames() and colnames(), without dots.
Yes, this is a tiny detail. But I have been using R on a daily basis for almos... | If R were reprogrammed from scratch today, what changes would be most useful to the statistics commu | More consistency in parameter names. For instance:
matrix() has a parameter dimnames.
write.table() has parameters row.names and col.names (with dots, and no dimnames parameter).
There are functions | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
More consistency in parameter names. For instance:
matrix() has a parameter dimnames.
write.table() has parameters row.names and col.names (with dots, and no dimnames parameter).
There are functions rowna... | If R were reprogrammed from scratch today, what changes would be most useful to the statistics commu
More consistency in parameter names. For instance:
matrix() has a parameter dimnames.
write.table() has parameters row.names and col.names (with dots, and no dimnames parameter).
There are functions |
8,355 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Useful error messages
Compared to other languages (e.g. Python) it is very difficult to track down bugs based on error messages. Error messages are often not even informative about what part of the code causes the bug.
Optional static typing
Easy way to make sure that i is a number (as it is supposed to be) and not a d...
8,356 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Standalone executable
To execute the code you need to have R installed. This is similar to Python, which does, however, have some programs that can turn Python code into executables.
This makes it more difficult to share programs with users that do not have R installed.
8,357 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Built-in reproducible environments
If R were designed from scratch, it would be great to have a built-in way to reproducibly use packages and have multiple versions of the same package installed, and bundle information about which packages the code was run with in a single file that could be used to rerun this code wit...
8,358 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Preserving/translating existing R packages
Probably the greatest present advantage of R over other statistical computing programs is that it has a huge repository of well-developed packages that perform a broader class of statistical tasks than is available in other programs. In the event that there were any attempt t...
8,359 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Bring data.table like syntax to data.frame
data.table's syntax (DT[i, j, by]) is so useful and such a faithful extension of data.frame that it should just be built in at this point. (If we are willing to entertain breaking changes).
8,360 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Object oriented programming
OOP tools were not initially included in the language. Currently there are S3 and S4 objects, which means there is a lack of consistency across different code (a problem that is more general than just OOP).
8,361 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Standard object classes/structures for common statistical outputs
There are some special object types that have been developed in R to represent particular kinds of statistical outputs. For example, there are objects of class htest that are used to represent the outputs of a hypothesis test, and objects of class lm, g...
8,362 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Multithreading by default
R was built as a single-threaded application, but we can do better these days. Sadly, Microsoft R is pretty much discontinued now... it had many benefits over the original. https://mran.microsoft.com/documents/rro/multithread
8,363 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Less reliance on C/C++/Fortran, aka solve the Two-Language Problem
One of the major drawbacks of R is that the actual performant code is mostly written in other languages (C/C++ and even Fortran).
This makes development and tinkering way harder (since now new users need to learn at least two, not one, language).
Julia,...
8,364 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Build wrangling functions and labelled data into the base program
As a general rule, it would be nice to move some of the important functionality in key packages into the base program (as was done for the stats package at one stage). In particular, the base objects in the program should be programmed to use some of th...
8,365 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Add more protected names
pi <- 3 should probably not be allowed.
8,366 | If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed] | Replace packages by standardized functions
There are so many packages and the definitions of functions differ between packages. For the same problem there are different functions from different packages, with similar names but different details. Actually you do not know what happens if you apply a function and you loos...
8,367 | Command-line tool to calculate basic statistics for stream of values [closed] | You can do this with R, which may be a bit of overkill...
EDIT 2: [OOPS, looks like someone else hit with Rscript while I was retyping this.] I found an easier way. Installed with R should be Rscript, which is meant to do what you're trying to do. For example, if I have a file bar which has a list of numbers, one per l...
8,368 | Command-line tool to calculate basic statistics for stream of values [closed] | Try "st":
$ seq 1 10 | st
N min max sum mean stddev
10 1 10 55 5.5 3.02765
$ seq 1 10 | st --transpose
N 10
min 1
max 10
sum 55
mean 5.5
stddev 3.02765
You can also see the five number summary:
$ seq 1 10 | st --summary
min q1 median q3 max
1 3.5 5.5 7.5 ...
8,369 | Command-line tool to calculate basic statistics for stream of values [closed] | R provides a command called Rscript. If you have only a few numbers that you can paste on the command line, use this one liner:
Rscript -e 'summary(as.numeric(commandArgs(TRUE)))' 3 4 5 9 7
which results in
Min. 1st Qu. Median Mean 3rd Qu. Max.
3.0 4.0 5.0 5.6 7.0 9.0
If you want to read ...
8,370 | Command-line tool to calculate basic statistics for stream of values [closed] | datamash is another great option. It's from the GNU Project.
If you have homebrew / linuxbrew you can do:
brew install datamash
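If datamash is not installed, the same kind of per-column summary can be sketched with plain awk (assuming one numeric value per line; this fallback is not part of datamash itself):

```shell
# Count, min, max, sum, and mean for one number per line -- a portable
# awk sketch of the kind of summary datamash produces.
seq 1 10 | awk '
    NR == 1 { min = $1; max = $1 }
    { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
    END { printf "N=%d min=%g max=%g sum=%g mean=%g\n", NR, min, max, sum, sum / NR }
'
# N=10 min=1 max=10 sum=55 mean=5.5
```

With datamash itself, the equivalent call would be something like seq 1 10 | datamash min 1 max 1 sum 1 mean 1 (see the GNU datamash manual for the exact operation names).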
8,371 | Command-line tool to calculate basic statistics for stream of values [closed] | Yet another tool which can be used for calculating statistics and viewing the distribution in ASCII mode is ministat. It's a tool from FreeBSD, but it is also packaged for popular Linux distributions like Debian/Ubuntu.
Usage example:
$ cat test.log
Handled 1000000 packets.Time elapsed: 7.575278
Handled 1000000 packets.Time elapsed:...
8,372 | Command-line tool to calculate basic statistics for stream of values [closed] | There is also simple-r, which can do almost everything that R can, but with fewer keystrokes:
https://code.google.com/p/simple-r/
To calculate basic descriptive statistics, one would have to type one of:
r summary file.txt
r summary - < file.txt
cat file.txt | r summary -
Doesn't get any simple-R!
8,373 | Command-line tool to calculate basic statistics for stream of values [closed] | There is sta, which is a C++ variant of st, also referenced in these comments.
Being written in c++, it's fast and can handle large datasets. It's simple to use, includes the choice of unbiased or biased estimators, and can output more detailed information such as standard error.
You can download sta at github.
Discla...
8,374 | Command-line tool to calculate basic statistics for stream of values [closed] | Just in case, there's datastat
https://sourceforge.net/p/datastat/code/
a simple program for Linux computing simple statistics from the command-line. For example,
cat file.dat | datastat
will output the average value across all rows for each column of file.dat. If you need to know the standard deviation, min, max, you ...
8,375 | Command-line tool to calculate basic statistics for stream of values [closed] | You might also consider using clistats. It is a highly configurable command line interface tool to compute statistics for a stream of delimited input numbers.
I/O options
Input data can be from a file, standard input, or a pipe
Output can be written to a file, standard output, or a pipe
Output uses headers that start ...
8,376 | Command-line tool to calculate basic statistics for stream of values [closed] | Stumbled across this old thread looking for something else.
Wanted the same thing, couldn't find anything simple, so did it in perl, fairly trivial, but use it multiple times a day:
http://moo.nac.uci.edu/~hjm/stats
example:
$ ls -l | scut -f=4 | stats
Sum 9702066453
Number 501
Mean 19365...
8,377 | Command-line tool to calculate basic statistics for stream of values [closed] | Another tool: tsv-summarize from eBay's TSV Utilities. Supports many of the basic summary statistics, like min, max, mean, median, quantiles, standard deviation, MAD, and a few more. It is intended for large datasets and supports multiple fields and grouping by key. Output is tab separated. An example for the sequence ... | Command-line tool to calculate basic statistics for stream of values [closed] | Another tool: tsv-summarize from eBay's TSV Utilities. Supports many of the basic summary statistics, like min, max, mean, median, quantiles, standard deviation, MAD, and a few more. It is intended fo | Command-line tool to calculate basic statistics for stream of values [closed]
Another tool: tsv-summarize from eBay's TSV Utilities. Supports many of the basic summary statistics, like min, max, mean, median, quantiles, standard deviation, MAD, and a few more. It is intended for large datasets and supports multiple fie... | Command-line tool to calculate basic statistics for stream of values [closed]
Another tool: tsv-summarize from eBay's TSV Utilities. Supports many of the basic summary statistics, like min, max, mean, median, quantiles, standard deviation, MAD, and a few more. It is intended fo |
8,378 | Command-line tool to calculate basic statistics for stream of values [closed] | Too much memory and processor power, folks. Using R for something like this is roughly like getting a sledgehammer to kill a mosquito. Use your favorite language and implement a provisional means algorithm. For the mean: $$\bar{x}_n = \frac{(n-1)\,\bar{x}_{n-1} + x_n}{n}$$;
and for the variance:$$s^2_n = \frac{S_n}{...
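The running-mean update above can be implemented in one pass with O(1) memory; since the variance formula is truncated here, the sketch below assumes the usual Welford-style update for the running sum of squared deviations $S_n$:

```shell
# One-pass (provisional/Welford) mean and sample standard deviation.
seq 1 10 | awk '
    {
        n += 1
        delta = $1 - mean
        mean += delta / n          # same as ((n-1)*mean + x_n) / n
        S += delta * ($1 - mean)   # running sum of squared deviations S_n
    }
    END { printf "n=%d mean=%g sd=%g\n", n, mean, sqrt(S / (n - 1)) }
'
# n=10 mean=5.5 sd=3.02765
```

For seq 1 10 this gives sd=3.02765, matching the stddev that st reported for the same input earlier in the thread.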
8,379 | Choosing the best model from among different "best" models | A parsimonious model is a model that accomplishes a desired level of explanation or prediction with as few predictor variables as possible.
For model evaluation there are different methods depending on what you want to know. There are generally two ways of evaluating a model: Based on predictions and based on goodness ...
8,380 | Choosing the best model from among different "best" models | Parsimony is your enemy. Nature does not act parsimoniously, and datasets do not have enough information to allow one to choose the "right" variables. It doesn't matter very much which method you use or which index you use as a stopping rule. Variable selection without shrinkage is almost doomed. However limited ba...
8,381 | Choosing the best model from among different "best" models | Using backwards or forwards selection is a common strategy, but not one I can recommend. The results from such model building are all wrong. The p-values are too low, the coefficients are biased away from 0, and there are other related problems. If you must do automatic variable selection, I would recommend using a mo...
8,382 | Choosing the best model from among different "best" models | The answer to this will greatly depend upon your goal. You may be looking for statistically significant coefficients, or you may be out to avoid as many misclassifications as possible when predicting the outcome for new observations, or you may simply be interested in the model with the least false positives; perhaps ...
8,383 | The "Amazing Hidden Power" of Random Search? | One limitation of random search is that searching over a large space is extremely challenging; even a small difference can spoil the result. Émile Borel's 1913 article "Mécanique Statistique et Irréversibilité" stated that if a million monkeys spent ten hours a day at a typewriter, it's extremely unlikely that the quality ...
8,384 | The "Amazing Hidden Power" of Random Search? | Consider a neural network model with 100 weights, and think only about getting the sign of the weights right, without worrying for the moment about their magnitude. There are 2^100 combinations of the signs of these weights, which is a very large number. If we sample 60 random weight vectors, we will have seen only ...
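The scale of that sign space is easy to sanity-check; here is a quick sketch of my own (not part of the original answer) showing how little of the 2^100 patterns 60 samples can ever touch:

```python
# 100 weights, signs only: 2**100 distinct sign patterns.
patterns = 2 ** 100          # ≈ 1.27e30
sampled = 60
fraction = sampled / patterns
print(fraction)              # ≈ 4.7e-29 — a vanishing share of the space
```

So even before worrying about magnitudes, 60 random draws explore essentially none of the sign configurations.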
8,385 | The "Amazing Hidden Power" of Random Search? | Suppose we want to answer your question with a 1000 character answer. One approach could be to sample 60 1000-tuples of characters, punctuation marks, and whitespace. With 95% probability, one of these will be within the most useful 5% of all possible Stack Exchange answers within this character limit. Basically the pr...
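The recurring "60 samples, top 5%, 95% confidence" figure in these answers follows from the simple identity 1 − 0.95^n. A quick check of my own (not from the original posts):

```python
import math

# P(at least one of n uniform random draws lands in the top q fraction).
def prob_top_fraction(n: int, q: float) -> float:
    return 1 - (1 - q) ** n

print(prob_top_fraction(60, 0.05))   # ≈ 0.954 — just over 95%

# Smallest n achieving 95% confidence of hitting the top 5%:
n_min = math.ceil(math.log(0.05) / math.log(0.95))
print(n_min)                         # 59
```

Note the guarantee is only about the *rank* of the sampled point, which is exactly the weakness the later answers attack.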
8,386 | The "Amazing Hidden Power" of Random Search? | There's a mathematical result in optimisation, less interesting than it first sounds, called the "No Free Lunch Theorem". It says that for a discrete problem (like @JonnyLomond's answer), no algorithm can beat random search when its performance is averaged over all possible functions to be optimised. That is, you hav...
8,387 | The "Amazing Hidden Power" of Random Search? | As soon as one moves from discrete to continuous search spaces, it becomes necessary to specify a distribution on the parameter space in order to perform the random search. Then it is evident that the performance of the random search will very strongly depend on the features of this distribution. In fact, one of the k...
8,388 | The "Amazing Hidden Power" of Random Search? | Why do we use Gradient Descent instead of Random Search for optimizing the loss functions in Neural Networks? We do use both at the same time currently. Meaning that there is already a degree of random search even if we use stochastic gradient descent in training neural networks, i.e., random initialisation and in rei...
8,389 | The "Amazing Hidden Power" of Random Search? | regardless of how many dimensions your function has, there is a 95% probability that only 60 iterations are needed to obtain an answer in the top 5% of all possible solutions! Finding a 95th-percentile solution is no guarantee of finding a good solution. The nature of the curse of dimensionality is that your "optimiza...
8,390 | The "Amazing Hidden Power" of Random Search? | The only reason that I can think of, is that if the ranked distribution of the optimization values are "heavily negative skewed" Sort of. There is a compounding that occurs when you add dimensions that is similar to what you get when you add more randomly sampled models, except that it works against you rather than fo...
8,391 | The "Amazing Hidden Power" of Random Search? | This thought has appeared in some of the answers, but I would like to say that being in the 5% of the best solutions may still produce a solution of very poor quality. Consider a classification problem on ImageNet and some large networks with millions of parameters. Doing a random search in the space of parameters, you ...
8,392 | The "Amazing Hidden Power" of Random Search? | The key to the answer to OP's question is in the ... loss function. Here's why. OP's question has a clue to its answer: yes, by random search you can get the top $\alpha$ quantile of best solutions with very few attempts. Why then isn't this good enough, if you believe everyone who answered the question before me? Sever...
8,393 | The "Amazing Hidden Power" of Random Search? | A different perspective. The chemistry that led to the first life forms, and from there to life forms with a simple nervous system, and onward to organisms with a brain, involved only processes analogous to random search. Any more sophisticated algorithms will have had to evolve from random search. This means that it sho...
8,394 | Expectation of 500 coin flips after 500 realizations | If you "know" that the coin is fair, then we still expect the long run proportion of heads to tend to $0.5$. This is not to say that we should expect more (than 50%) of the next flips to be tails, but rather that the initial $500$ flips become irrelevant as $n\rightarrow\infty$. A streak of $500$ heads may seem like a l...
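The dilution argument can be made concrete with a small sketch of my own (assuming a fair coin): after an initial streak of 500 heads, the expected running proportion is (500 + n/2)/(500 + n), which drifts back to 0.5 without any extra tails being "owed".

```python
# Expected proportion of heads after 500 initial heads plus n fair flips.
def expected_proportion(n: int) -> float:
    return (500 + 0.5 * n) / (500 + n)

print(expected_proportion(0))      # 1.0  — all heads so far
print(expected_proportion(500))    # 0.75
print(expected_proportion(10**6))  # ≈ 0.50025 — the streak is simply diluted
```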
8,395 | Expectation of 500 coin flips after 500 realizations | The law of large numbers doesn't state that some force will bring the results back to the mean. It states that as the number of trials increases the fluctuations will become less and less significant. For example, if I toss the coin 10 times and get 7 heads, those two extra heads seem pretty significant. If I toss ...
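The shrinking significance of fluctuations has a standard quantitative form (an addition of my own, not from the original answer): the standard deviation of the proportion of heads in n fair tosses is sqrt(0.25/n), so the absolute surplus of heads can grow while the proportion still converges.

```python
import math

# Standard deviation of the proportion of heads over n fair tosses.
def sd_proportion(n: int) -> float:
    return math.sqrt(0.25 / n)

print(sd_proportion(10))         # ≈ 0.158 — so 7/10 heads is ~1.3 sd, unremarkable
print(sd_proportion(1_000_000))  # ≈ 0.0005 — a 0.2% surplus here would be 4 sd
```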
8,396 | Expectation of 500 coin flips after 500 realizations | The notion of the one side being "due" is the "gambler's fallacy" in a nutshell. Boiled down, the gambler's fallacy is the false belief that the short run must mirror the long run. The coin does not know or care that you plan to stop flipping. For the coin, an infinity of flips remain, and against that infinity, a mere...
8,397 | Expectation of 500 coin flips after 500 realizations | The straight answer, I suppose, is that you don't. The chance that a fair coin will get $500$ heads on $500$ flips is $1$ in $2^{500}\approx3\times10^{150}$. For reference, this is one in ten billion asaṃkhyeyas, a value used in Buddhist and Hindu theology to denote a number so large as to be incalculable; it is about ...
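The quoted magnitude is easy to verify with a quick check of my own (not part of the original answer), since Python integers are arbitrary-precision:

```python
import math

p = 2 ** 500
print(math.log10(p))   # ≈ 150.5, i.e. 2**500 ≈ 3.3e150
assert 3e150 < p < 3.5e150
```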
8,398 | Expectation of 500 coin flips after 500 realizations | There are some great answers here already, but I wanted to add another way of thinking about the problem that may be more intuitive than reviewing the math (to address the feelings described in the question). This reasoning holds for any particular arbitrary number of trials, but does not address the situation of arbit...
8,399 | Expectation of 500 coin flips after 500 realizations | The key thing to remember is that throws are IID. Realization could be included when it is considered in the design of your model. One example is if your model is a Markov model; in fact, many models that use a Bayesian framework include realization in updating the probability. This is a great example of what I me...
8,400 | Expectation of 500 coin flips after 500 realizations | Intuition can often lead us astray in the realm of infinity because infinity is not experienced in the real world. A good rule of thumb to help you think about it is that every finite number looks like zero to infinity. A million heads in a row still looks like zero to infinity. If you were to "flip the coin an infin...