Dataset columns (type, value/length range):
idx: int64 (1 to 56k)
question: string (lengths 15 to 155)
answer: string (lengths 2 to 29.2k)
question_cut: string (lengths 15 to 100)
answer_cut: string (lengths 2 to 200)
conversation: string (lengths 47 to 29.3k)
conversation_cut: string (lengths 47 to 301)
15,901
Why does a sufficient statistic contain all the information needed to compute any estimate of the parameter?
Let me give another perspective that may help. This is also qualitative, but there is a rigorous version of it, particularly important in information theory, known as the Markov property. In the beginning, we have two objects: data (coming from a random variable, call it $X$) and a parameter, $\theta$ (another rv, implicitly...
15,902
Why does a sufficient statistic contain all the information needed to compute any estimate of the parameter?
The second sentence in the quote is proven by the factorization theorem--which shows that a sample conditioned on its sufficient statistic is independent of the parameter. The first sentence is equivalent to the second sentence (to the extent that an informal statement can be equivalent to a mathematical statement) if ...
15,903
Do optimization techniques map to sampling techniques?
One connection has been brought up by Max Welling and friends in these two papers: Bayesian Learning via Stochastic Gradient Langevin Dynamics and Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring. The gist is that the "learning", i.e. optimisation, of a model smoothly transitions into sampling from the po...
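The transition the answer describes can be sketched on a toy problem. The snippet below is a minimal illustration, not the papers' algorithm: it uses full-data gradients (plain Langevin dynamics), a constant step size, and a conjugate Gaussian model so the target posterior is known in closed form. All names and values are illustrative.

```python
import math
import random

random.seed(0)

# Toy problem: x_i ~ N(theta, 1) with a flat prior, so the posterior of
# theta is N(mean(x), 1/n). Langevin dynamics adds Gaussian noise to each
# gradient step so the iterates sample the posterior instead of
# collapsing to its mode.
n = 100
data = [random.gauss(2.0, 1.0) for _ in range(n)]
xbar = sum(data) / n

def grad_log_post(theta):
    # d/dtheta log p(theta | data) = n * (xbar - theta) for unit variance
    return n * (xbar - theta)

eps = 1e-3          # step size (kept constant here for simplicity)
theta = 0.0
samples = []
for t in range(25000):
    theta += 0.5 * eps * grad_log_post(theta) + math.sqrt(eps) * random.gauss(0.0, 1.0)
    if t >= 5000:   # discard burn-in
        samples.append(theta)

post_mean = sum(samples) / len(samples)
post_var = sum((s - post_mean) ** 2 for s in samples) / len(samples)
# With small eps the chain targets N(xbar, 1/n); with large eps it behaves
# like noisy gradient ascent on the mode: the transition the answer describes.
```

With a small step size the sample mean and variance approach the analytic posterior mean $\bar x$ and variance $1/n$; shrinking the step size over iterations, as in the SGLD paper, removes the discretisation bias.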
15,904
Do optimization techniques map to sampling techniques?
There is a link: the Gumbel-Max trick! http://www.cs.toronto.edu/~cmaddis/pubs/astar.pdf
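A minimal sketch of the trick: adding independent Gumbel(0, 1) noise to the log-probabilities and taking the argmax is exactly a draw from the corresponding categorical distribution, so sampling becomes an optimisation (argmax) problem. The logits below are illustrative.

```python
import math
import random

random.seed(1)

logits = [1.0, 0.0, -1.0]   # unnormalised log-probabilities

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    z = sum(e)
    return [x / z for x in e]

def gumbel_max_sample(logits):
    # argmax_i (logit_i + G_i), with G_i ~ Gumbel(0, 1), is a draw
    # from the categorical distribution softmax(logits).
    noisy = [l - math.log(-math.log(random.random())) for l in logits]
    return max(range(len(noisy)), key=lambda i: noisy[i])

N = 100000
counts = [0] * len(logits)
for _ in range(N):
    counts[gumbel_max_sample(logits)] += 1
freqs = [c / N for c in counts]
probs = softmax(logits)   # roughly [0.665, 0.245, 0.090]
```

The empirical frequencies match the softmax probabilities, which is the sampling-as-optimisation correspondence the linked paper builds on.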
15,905
Do optimization techniques map to sampling techniques?
One possibility is to find the CDF of the heuristic. Then from Monte Carlo theory we know that for $U \sim \mathrm{Unif}[0,1]$, $F^{-1}(U) \sim F$, where $F$ is the CDF of the distribution you are after. If you cannot find the CDF exactly, you could use a simple acceptance-rejection based heuristic.
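The inverse-CDF idea above can be sketched where $F^{-1}$ is available in closed form, e.g. for an exponential distribution (an illustrative choice):

```python
import math
import random

random.seed(2)

# Inverse-transform sampling: if U ~ Unif[0,1], then F^{-1}(U) ~ F.
# Here F is the Exponential(rate) CDF, F(x) = 1 - exp(-rate*x),
# so F^{-1}(u) = -log(1-u)/rate.
rate = 2.0

def sample_exponential():
    u = random.random()
    return -math.log(1.0 - u) / rate

draws = [sample_exponential() for _ in range(100000)]
mean = sum(draws) / len(draws)   # should be close to 1/rate = 0.5
```

When $F^{-1}$ has no closed form, one can numerically invert $F$, or fall back on acceptance-rejection as the answer suggests.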
15,906
Repeated measures ANOVA: what is the normality assumption?
This is the simplest repeated measures ANOVA model if we treat it as a univariate model: $$y_{it} = a_{i} + b_{t} + \epsilon_{it}$$ where $i$ represents each case and $t$ the times we measured them (so the data are in long form). $y_{it}$ represents the outcomes stacked one on top of the other, $a_{i}$ represents the m...
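The role of the error term can be illustrated with a small simulation of the model above (all values illustrative): because the model is additive, double-centering the outcomes removes the $a_i$ and $b_t$ terms exactly, leaving only the centered $\epsilon_{it}$, which is what the normality assumption concerns.

```python
import random

random.seed(3)

I, T = 30, 4                                  # cases and time points
a = [random.gauss(0, 2) for _ in range(I)]    # case effects a_i
b = [random.gauss(0, 1) for _ in range(T)]    # time effects b_t
eps = [[random.gauss(0, 1) for _ in range(T)] for _ in range(I)]
y = [[a[i] + b[t] + eps[i][t] for t in range(T)] for i in range(I)]

def double_center(m):
    # subtract row means and column means, then add back the grand mean
    rows, cols = len(m), len(m[0])
    row_means = [sum(r) / cols for r in m]
    col_means = [sum(m[i][t] for i in range(rows)) / rows for t in range(cols)]
    grand = sum(row_means) / rows
    return [[m[i][t] - row_means[i] - col_means[t] + grand
             for t in range(cols)] for i in range(rows)]

resid = double_center(y)
# For the additive model the a_i and b_t terms cancel, so the residuals
# equal the double-centered noise exactly; it is these epsilons whose
# normality the model assumes.
check = double_center(eps)
max_diff = max(abs(resid[i][t] - check[i][t])
               for i in range(I) for t in range(T))
```

The identity holds exactly (up to floating point), so examining these residuals for normality is examining the $\epsilon_{it}$.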
15,907
Repeated measures ANOVA: what is the normality assumption?
The explanation of normality of repeated-measure ANOVA can be found here: Understanding repeated measure ANOVA assumptions for correct interpretation of SPSS output You need normality of the dependent variables in residuals (this implies a normal distribution in all groups, with common variance and group-dependent aver...
15,908
Is the theory of minimum variance unbiased estimation overemphasized in graduate school?
We know that if $X_1, X_2, \dots, X_n$ is a random sample from $\mathrm{Poisson}(\lambda)$, then for any $\alpha \in (0,1),~T_\alpha = \alpha \bar X + (1-\alpha)S^2$ is an unbiased estimator (UE) of $\lambda$. Hence there exist infinitely many UEs of $\lambda$. Now a question occurs: which of these should we choose? This is why we consider the UMVUE. Along unbiasednes...
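A quick simulation of the claim above (a sketch; the Poisson sampler uses Knuth's algorithm to stay dependency-free, and all values are illustrative): both $\bar X$ and $S^2$, and hence any $T_\alpha$, are unbiased for $\lambda$, but their variances differ sharply, which is exactly why unbiasedness alone cannot choose an estimator.

```python
import math
import random

random.seed(4)

lam = 4.0

def poisson(lam):
    # Knuth's algorithm; fine for small lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

n, reps = 20, 10000
xbar_draws, s2_draws = [], []
for _ in range(reps):
    x = [poisson(lam) for _ in range(n)]
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / (n - 1)   # unbiased sample variance
    xbar_draws.append(m)
    s2_draws.append(s2)

def mean(v):
    return sum(v) / len(v)

def var(v):
    mu = mean(v)
    return sum((x - mu) ** 2 for x in v) / len(v)

alpha = 0.5
t_alpha = [alpha * a + (1 - alpha) * b for a, b in zip(xbar_draws, s2_draws)]
# mean(xbar_draws), mean(s2_draws), mean(t_alpha) are all close to lam = 4,
# but var(xbar_draws) is far smaller than var(s2_draws): every T_alpha is
# unbiased, yet only alpha = 1 (the UMVUE, xbar) minimises the variance.
```

Here $\bar X$ is the UMVUE, and the simulation shows why mixing in $S^2$ only inflates the variance.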
15,909
Is the theory of minimum variance unbiased estimation overemphasized in graduate school?
Perhaps the paper by Brad Efron, "Maximum Likelihood and Decision Theory", can help clarify this. Brad mentioned that one main difficulty with the UMVUE is that it is in general hard to compute, and in many cases does not exist.
15,910
How should standard errors for mixed effects model estimates be calculated?
My initial thought was that, for ordinary linear regression, we just plug in our estimate of the residual variance, $\sigma^2$, as if it were the truth. However, take a look at McCulloch and Searle (2001) Generalized, linear and mixed models, 1st edition, Section 6.4b, "Sampling variance". They indicate that you can't...
15,911
In Random Forest, why is a random subset of features chosen at the node level rather than at the tree level? [duplicate]
Suppose we have 10 features f1, f2, ..., f9, f10. When we take a subset, say f1, f3, f4, f8, at the tree level itself, we construct the whole tree taking only these 4 features into consideration. We calculate the entropy, compare only these 4 features at every node, and take that feature that y...
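The contrast can be sketched directly (a toy illustration, not a full random forest): with a tree-level subset, one draw of features serves every split, while node-level subsetting redraws at each split, so over many splits the tree can eventually use far more of the feature set.

```python
import random

random.seed(5)

features = [f"f{i}" for i in range(1, 11)]   # f1 .. f10
m = 4                                        # subset size per draw
n_splits = 15                                # splits made while growing one tree

# Tree-level subsetting: a single draw, reused at every split.
tree_level = set(random.sample(features, m))

# Node-level subsetting: a fresh draw at every split.
node_level = set()
for _ in range(n_splits):
    node_level.update(random.sample(features, m))

# len(tree_level) is always 4, while len(node_level) is almost surely
# larger: node-level sampling decorrelates the trees yet still lets every
# strong feature participate somewhere in the tree.
```

This is why implementations apply the feature subsampling (e.g. `mtry` / `max_features`) at each node rather than once per tree.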
15,912
How to interpret GARCH parameters?
Campbell et al. (1996) give the following interpretation on p. 483: $\gamma_1$ measures the extent to which a volatility shock today feeds through into next period's volatility, and $\gamma_1 + \delta_1$ measures the rate at which this effect dies out over time. According to Chan (2010), persistence of volatility occurs when $\...
15,913
How to interpret GARCH parameters?
A large value of the third coefficient ($\delta_{1}$) means that large changes in volatility will affect future volatilities for a long period of time, since the decay is slower.
15,914
How to interpret GARCH parameters?
Alpha captures the ARCH effect. Beta captures the GARCH effect. A sum of the two close to 1 implies that volatility shocks persist for a long time.
15,915
How to interpret GARCH parameters?
Alpha (the ARCH term) represents how volatility reacts to new information. Beta (the GARCH term) represents the persistence of volatility. Alpha + Beta gives an overall measure of the persistence of volatility.
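The persistence interpretation can be made concrete with the GARCH(1,1) variance recursion: in expectation, $\sigma^2_{t+1} = \omega + (\alpha + \beta)\,\sigma^2_t$, so a shock decays geometrically at rate $\alpha + \beta$ toward the unconditional variance $\omega/(1 - \alpha - \beta)$. A small sketch with illustrative parameter values:

```python
import math

omega, alpha, beta = 0.1, 0.1, 0.85   # alpha + beta = 0.95: high persistence
persistence = alpha + beta
uncond_var = omega / (1 - persistence)   # long-run variance, here 2.0

# Expected conditional variance after a shock pushes it above the long-run level:
sigma2 = uncond_var + 1.0   # one unit above the unconditional variance
deviations = []
for k in range(100):
    deviations.append(sigma2 - uncond_var)
    sigma2 = omega + persistence * sigma2   # E[sigma^2_{t+1} | sigma^2_t]

# deviations[k] equals persistence**k: with alpha + beta near 1 the shock
# takes a long time to die out.
half_life = math.log(0.5) / math.log(persistence)   # about 13.5 periods here
```

The closer $\alpha + \beta$ gets to 1, the longer the half-life of a variance shock, which is the sense in which "volatility remains persistent".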
15,916
How to analyze longitudinal count data: accounting for temporal autocorrelation in GLMM?
Log transforming your response is an option, although not ideal. A GLM framework is generally preferred. If you are not familiar with GLMs then start by reviewing them prior to looking at mixed model extensions. For count data, Poisson or Negative Binomial distributional assumptions will likely be suitable. Negative Bino...
15,917
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance industry?
At least for the US, it is due to regulatory reasons. The customer-facing risk models must be explainable and actionable. Some FIs, including mine, are already in favor of using spline-based models. Also, you have to look into how it makes sense for the Business itself. For example, the inputs of the model. If you have...
15,918
Why is Binning, Weight of Evidence and Information Value so ubiquitous in the Credit Risk/Finance industry?
As a person working in this industry, I believe that logistic regression is the easiest way to explain your scorecard. There are so many advanced machine learning techniques for classification problems, but the main problem of ML is that it is unexplainable. When you devise a scorecard system, the salesperson in the futu...
15,919
In Gelman's 8 school example, why is the standard error of the individual estimate assumed known?
On p114 of the same book you cite: "The problem of estimating a set of means with unknown variances will require some additional computational methods, presented in sections 11.6 and 13.6". So it is for simplicity; the equations in your chapter work out in a closed-form way, whereas if you model the variances, they do ...
15,920
What does function "effects" in R do?
Given the response vector $y$, explanatory variable matrix $X$ and its QR decomposition $X=QR$, the effects returned by R is the vector $Q^Ty$. Here is the numeric example which confirms the above: > set.seed(1001) > x<-rnorm(100) > y<-1+2*x+rnorm(100) > mod<-lm(y~x) > xqr<-qr(cbind(1,x)) > sum(abs(qr.qty(xqr,y)-effec...
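The same computation can be reproduced without R for the two-column case $X = [\mathbf{1}, x]$ (a dependency-free sketch; variable names illustrative): Gram-Schmidt gives $Q$ and $R$, the "effects" vector is $Q^\top y$, and back-solving $R\beta = Q^\top y$ recovers the OLS coefficients.

```python
import math
import random

random.seed(6)

n = 100
x = [random.gauss(0, 1) for _ in range(n)]
y = [1 + 2 * xi + random.gauss(0, 1) for xi in x]

# Gram-Schmidt QR of X = [1, x] (two columns).
xbar = sum(x) / n
q1 = [1 / math.sqrt(n)] * n                    # normalised intercept column
r11 = math.sqrt(n)
r12 = sum(xi / math.sqrt(n) for xi in x)       # = sqrt(n) * xbar
cx = [xi - xbar for xi in x]                   # x orthogonalised against 1
r22 = math.sqrt(sum(c * c for c in cx))
q2 = [c / r22 for c in cx]

# The "effects" vector is Q^T y.
e1 = sum(qi * yi for qi, yi in zip(q1, y))
e2 = sum(qi * yi for qi, yi in zip(q2, y))

# Back-solve R beta = Q^T y for the OLS coefficients.
slope = e2 / r22
intercept = (e1 - r12 * slope) / r11

# Compare with the closed-form OLS estimates.
slope_ols = sum(c * yi for c, yi in zip(cx, y)) / sum(c * c for c in cx)
intercept_ols = sum(y) / n - slope_ols * xbar
```

This mirrors the R snippet: the first $p$ entries of $Q^\top y$ carry the fitted model, and the remaining $n-p$ entries are the residual effects.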
15,921
Have I computed these likelihood ratios correctly?
Although the reasoning about calculating the LR from the SS values is quite fair, a least-squares method is equivalent to, but not the same as, a likelihood estimate. (The difference can be illustrated, e.g., in the calculation of the se, which is divided by (n-1) in a least-squares approach and divided by n in a maximum-likeli...
15,922
Covariance for three variables
To expand on Zachary's comment, the covariance matrix does not capture the "relation" between two random variables, as "relation" is too broad of a concept. For example, we'd probably want the dependence of two variables on each other to be included in any measure of their "relation". However, we know that $c...
15,923
Is there a clear set of conditions under which lasso, ridge, or elastic net solution paths are monotone?
I can give you a sufficient condition for the path to be monotonic: an orthonormal design of $X$. Suppose an orthonormal design matrix, that is, with $p$ variables in $X$, we have that $\frac{X'X}{n} = I_p$. With an orthonormal design the OLS regression coefficients are simply $\hat{\beta}^{ols} = \frac{X'y}{n}$. The ...
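Under the orthonormal design described above, the lasso solution is coordinate-wise soft-thresholding of the OLS coefficients, $\hat\beta_j(\lambda) = \operatorname{sign}(\hat\beta_j^{ols})\max(|\hat\beta_j^{ols}| - \lambda, 0)$, so each coordinate's magnitude can only shrink as $\lambda$ grows. A quick check with illustrative values:

```python
def soft_threshold(b, lam):
    # Per-coordinate lasso solution when X'X/n = I
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

beta_ols = [3.0, -1.5, 0.4, -0.1]            # pretend OLS coefficients
lambdas = [0.0, 0.2, 0.5, 1.0, 2.0, 4.0]     # increasing penalty

path = [[soft_threshold(b, lam) for b in beta_ols] for lam in lambdas]

# Monotonicity of the path: each coordinate's magnitude never increases
# as lambda grows, and a coordinate that hits zero stays at zero.
monotone = all(
    abs(path[k + 1][j]) <= abs(path[k][j]) + 1e-12
    for k in range(len(lambdas) - 1)
    for j in range(len(beta_ols))
)
```

With correlated designs this coordinate-wise decoupling fails, which is precisely where non-monotone paths can arise.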
15,924
MLE vs least squares in fitting probability distributions
One useful way of thinking about this is to note that there are cases when least squares and the MLE are the same, e.g. estimating the parameters where the random element has a normal distribution. So in fact, rather than (as you speculate) that the MLE does not assume a noise model, what is going on is that it does assu...
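The coincidence of the two criteria under Gaussian noise can be checked numerically: the negative log-likelihood is an increasing affine function of the sum of squared errors, so both objectives have the same minimizer (here, the sample mean; data values illustrative).

```python
import math

data = [1.2, 0.8, 1.9, 1.4, 0.7]

def sse(mu):
    # Least-squares objective
    return sum((x - mu) ** 2 for x in data)

def neg_log_lik(mu, sigma=1.0):
    # N(mu, sigma^2) model: -log L = n/2 * log(2*pi*sigma^2) + SSE/(2*sigma^2),
    # an increasing affine function of the SSE.
    n = len(data)
    return 0.5 * n * math.log(2 * math.pi * sigma ** 2) + sse(mu) / (2 * sigma ** 2)

grid = [i / 1000 for i in range(0, 3001)]   # candidate values of mu
mu_ls = min(grid, key=sse)
mu_mle = min(grid, key=neg_log_lik)
# Both land on the sample mean (1.2) to grid precision.
```

For other noise models (e.g. Laplace errors) the MLE corresponds to a different loss (least absolute deviations), which is the sense in which the MLE does implicitly assume a noise model.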
15,925
Deep learning vs. Decision trees and boosting methods
Can you be more specific about the types of data you are looking at? This will in part determine what type of algorithm will converge the fastest. I'm also not sure how to compare methods like boosting and DL, as boosting is really just a collection of methods. What other algorithms are you using with the boosting? In ...
15,926
Deep learning vs. Decision trees and boosting methods
Great Question! Both adaptive boosting and deep learning can be classified as probabilistic learning networks. The difference is that "deep learning" specifically involves one or more "neural networks", whereas "boosting" is a "meta-learning algorithm" that requires one or more learning networks, called weak learners, ...
15,927
Second moment method, Brownian motion?
Not the answer, but a possibly useful reformulation. I assume that the comment made above is right (that is, the sum has $2^{n+1}$ terms). Denote $$p_n(\rho)=P(K_n>\rho 2^n)=P(K_n/2^n>\rho ).$$ Observe that $p_n(\rho_1)>p_n(\rho_2)$ if $\rho_1 < \rho_2$. First point: if you ask whether such $\rho$ exists for all n, you need to show ...
15,928
pdf of the product of two independent random variables, normal and chi-square
simplify the term in the integral to $T=e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)} y^{k/2-2} $ find the polynomial $p(y)$ such that $[p(y)e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)}]'=p'(y)e^{-\frac{1}{2}((\frac{\frac{z}{y}-\mu_x}{\sigma_x} )^2 -y)} + p(y) [-\frac{1}{2}((\frac{\frac{z...
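Whatever closed form the integration-by-parts route yields, it can be sanity-checked numerically by simulating the product directly and comparing moments. A minimal Python sketch (the parameter values are illustrative, not taken from the derivation above):

```python
import numpy as np

# Monte Carlo check for Z = X * Y, with X ~ N(mu_x, sigma_x^2)
# independent of Y ~ chi-square(k). All parameter choices are illustrative.
rng = np.random.default_rng(0)
mu_x, sigma_x, k = 1.0, 0.5, 3
n = 200_000

x = rng.normal(mu_x, sigma_x, n)
y = rng.chisquare(k, n)
z = x * y

# By independence, E[Z] = E[X] E[Y] = mu_x * k.
mean_z = z.mean()
theory_mean = mu_x * k
```

Comparing `mean_z` against `theory_mean` (and likewise for higher moments, or a histogram of `z` against the candidate density) is a cheap way to catch algebra slips in the integral.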
15,929
$ARIMA(p,d,q)+X_t$, Simulation over Forecasting period
Firstly we consider a more general case. Let $Y = Y(A, X)$, where $A \sim f_A(\cdot)$ and $X \sim f_X(\cdot)$. Then, assuming the support of $g_x(\cdot)$ dominates the one of $f_X(\cdot)$ and all the integrals below exist, we have: $$ P(Y \le y) = \mathbb{E}_{f_A, f_X}\left[I(Y \le y)\right] = \mathbb{E}_{f_X}\left[\m...
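A minimal Python sketch of this importance-sampling identity on a toy case where the answer is known in closed form: take $Y = A + X$ with $A \sim N(0,1)$ independent of $X \sim N(0,1)$, so $Y \sim N(0,2)$, and estimate the tail probability $P(Y \le -4)$. The shifted proposal $g_X = N(-2, 1)$ and all numbers are illustrative choices, not part of the original setup:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y, n = -4.0, 100_000

x = rng.normal(-2.0, 1.0, n)                 # draws from the proposal g_X
w = norm.pdf(x, 0, 1) / norm.pdf(x, -2, 1)   # importance weights f_X / g_X
inner = norm.cdf(y - x)                      # E_{f_A}[ I(A + x <= y) ] in closed form
est = float(np.mean(inner * w))              # the outer E_{g_X}[ ... * f_X/g_X ]

exact = float(norm.cdf(y / np.sqrt(2)))      # since Y ~ N(0, 2)
```

Computing the inner expectation over $A$ in closed form, as the formula above allows, removes one layer of Monte Carlo noise; the shifted proposal then makes the rare tail event easy to hit.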
15,930
How the Pearson's Chi Squared Test works
A Chi-square test is designed to analyze categorical data. That means the data have been counted and divided into categories. It will not work with parametric or continuous data, so it will not work to assess fit in every instance. Source: http://www.ling.upenn.edu/~clight/chisquared.htm
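For example, a goodness-of-fit test on counted data might look like this in Python (the counts are made up for illustration; scipy is assumed available):

```python
import numpy as np
from scipy.stats import chisquare

# Counted, categorical data: 120 die rolls vs a fair-die expectation.
observed = np.array([18, 22, 16, 25, 20, 19])   # illustrative counts
expected = np.full(6, observed.sum() / 6)       # 20 per face under H0

stat, p = chisquare(observed, expected)

# The statistic is just sum over cells of (O - E)^2 / E:
manual = ((observed - expected) ** 2 / expected).sum()
```

With these counts the statistic is small relative to its 5 degrees of freedom, so the fair-die hypothesis is not rejected.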
15,931
How does cross-validation overcome the overfitting problem?
I can't think of a sufficiently clear explanation just at the moment, so I'll leave that to someone else; however cross-validation does not completely overcome the over-fitting problem in model selection, it just reduces it. The cross-validation error does not have a negligible variance, especially if the size of the ...
15,932
How does cross-validation overcome the overfitting problem?
Not at all. However, cross-validation helps you to assess by how much your method overfits. For instance, if the training-data R-squared of a regression is 0.50 and the cross-validated R-squared is 0.48, you hardly have any overfitting and you feel good. On the other hand, if the cross-validated R-squared is only 0.3 h...
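A small Python sketch of exactly this comparison, using plain numpy OLS and 5-fold CV on simulated data (all settings illustrative): many noise predictors inflate the training R-squared, and the gap to the cross-validated R-squared is the read-out of overfitting.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 30                       # 30 predictors, only 3 informative
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = 1.0
y = X @ beta + rng.normal(scale=2.0, size=n)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

def fit_ols(X, y):
    Xd = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# Training R^2: fit and score on the same data (optimistic).
train_r2 = r2(y, predict(fit_ols(X, y), X))

# 5-fold cross-validated R^2: score each fold with a model fit on the rest.
idx = rng.permutation(n)
preds = np.empty(n)
for fold in np.array_split(idx, 5):
    train = np.setdiff1d(idx, fold)
    preds[fold] = predict(fit_ols(X[train], y[train]), X[fold])
cv_r2 = r2(y, preds)
```

Here `train_r2 - cv_r2` plays the role of the 0.50-vs-0.48 comparison in the text: small gap, little overfitting; large gap, trouble.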
15,933
How does cross-validation overcome the overfitting problem?
My answer is more intuitive than rigorous, but maybe it will help... As I understand it, overfitting is the result of model selection based on training and testing using the same data, where you have a flexible fitting mechanism: you fit your sample of data so closely that you're fitting the noise, outliers, and all th...
15,934
How does cross-validation overcome the overfitting problem?
Cross-Validation is a good, but not perfect, technique to minimize over-fitting. Cross-Validation will not perform well to outside data if the data you do have is not representative of the data you'll be trying to predict! Here are two concrete situations when cross-validation has flaws: You are using the past to pr...
15,935
How does cross-validation overcome the overfitting problem?
From a Bayesian perspective, I'm not so sure that cross validation does anything that a "proper" Bayesian analysis doesn't do for comparing models. But I am not 100% certain that it does. This is because if you are comparing models in a Bayesian way, then you are essentially already doing cross validation. This is be...
15,936
How does cross-validation overcome the overfitting problem?
Also, I can recommend these videos from the Stanford course in Statistical Learning. These videos go into quite some depth on how to use cross-validation effectively. Cross-Validation and the Bootstrap (14:01) K-fold Cross-Validation (13:33) Cross-Validation: The Right and Wrong Ways (10:07)
15,937
Clustering & Time Series
Time-series clustering requires the sample size to remain the same while the features change over time; otherwise it makes little sense. In the question, though, the description suggests the sample size increases over time. In that case, to see a significant reduction in certain clusters, one should use a fixed sample size. ...
15,938
Clustering & Time Series
Update in 2023 If you are used to python, there is a scikit-learn clone for time series called sktime, which has appropriate methods for this problem: https://sktime-backup.readthedocs.io/en/v0.15.1/api_reference/clustering.html In general we should mention that TSC problems are well known and it is indeed possible. To...
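Before reaching for sktime or DTW-based distances, a fixed-length baseline can be as simple as k-means on z-normalised series. A hedged numpy sketch on synthetic data (two ground-truth shapes, noise levels and seeding all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 50)
# 20 noisy sine-shaped series stacked on 20 noisy trend-shaped series.
series = np.vstack([np.sin(t) + rng.normal(0, 0.2, (20, 50)),
                    np.linspace(-1, 1, 50) + rng.normal(0, 0.2, (20, 50))])
# z-normalise each series so clustering compares shape, not level/scale.
X = (series - series.mean(1, keepdims=True)) / series.std(1, keepdims=True)

k = 2
centers = X[[0, 20]].copy()                 # seed one centre per shape
for _ in range(20):                         # plain Lloyd iterations
    d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = d.argmin(1)
    centers = np.vstack([X[labels == j].mean(0) for j in range(k)])
```

On fixed-length, aligned series this baseline already separates the two shapes; sktime's clusterers add elastic distances (DTW and friends) for series that are warped or of unequal length.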
15,939
Could any equation have predicted the results of this simulation?
At any given point in the game, you're $3$ or fewer "perfect flips" away from winning. For example, suppose you've flipped the following sequence so far: $$ HTTHHHTTTTTTH $$ You haven't won yet, but you could win in two more flips if those two flips are $TH$. In other words, your last flip was $H$ so you have made "on...
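The "perfect flips away" reasoning above can be written down as three linear equations in the expected remaining flips and solved directly. A short Python sketch (the state labels are mine: track the longest useful suffix, "" / "H" / "HT", with HTH absorbing):

```python
import numpy as np

# First-step analysis for the expected number of flips to first see HTH:
#   E0 = 1 + 0.5*E1 + 0.5*E0        ("":  H advances, T wastes the flip)
#   E1 = 1 + 0.5*E1 + 0.5*E2        ("H": another H stays at "H", T gives "HT")
#   E2 = 1 + 0.5*0  + 0.5*E0        ("HT": H wins, T throws us back to "")
A = np.array([[ 0.5, -0.5,  0.0],
              [ 0.0,  0.5, -0.5],
              [-0.5,  0.0,  1.0]])
b = np.ones(3)
E0, E1, E2 = np.linalg.solve(A, b)   # expected flips from each state
```

Solving gives an expected waiting time of 10 flips from a cold start, matching what the simulation converges to.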
15,940
Could any equation have predicted the results of this simulation?
First, you can refactor your R code to be (IMHO) a little more legible, also using pbapply::pbreplicate() to get a nice progress bar: n_sims <- 1e5 library(pbapply) results <- pbreplicate(n_sims,{ flips <- NULL while(length(flips)<3 || !identical(tail(flips,3),c("H","T","H"))){ flips <- c(flips,sample(...
15,941
Could any equation have predicted the results of this simulation?
There is a fun way to answer this problem using martingales, and in particular using https://en.wikipedia.org/wiki/Optional_stopping_theorem. I first saw this trick in the book A First Look at Rigorous Probability Theory by Jeffrey S. Rosenthal, in the martingale chapter. (I don't have the book in front of me at the m...
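The optional-stopping argument leads to a closed-form answer for a fair coin: every overlap between a prefix and a suffix of the pattern contributes $2^k$, where $k$ is the overlap length. A sketch of that formula in Python:

```python
# Expected waiting time for a pattern over a fair coin, via the martingale /
# team-of-gamblers argument: each k with pattern[:k] == pattern[-k:]
# contributes 2^k to the expectation.
def expected_wait(pattern: str) -> int:
    L = len(pattern)
    return sum(2 ** k for k in range(1, L + 1)
               if pattern[:k] == pattern[L - k:])

hth = expected_wait("HTH")   # overlaps at k = 1 ("H") and k = 3 -> 2 + 8 = 10
```

This also explains the famous asymmetry between patterns of equal length: "HHH" overlaps with itself at every k and waits 14 flips on average, while "HTT" has only the trivial overlap and waits just 8.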
15,942
Could any equation have predicted the results of this simulation?
Here is a somewhat clumsy brute-force method to obtain the probabilities and order statistics. Getting the mean will take more work. So first just generate the possible sequences and associated probabilities where "HTH" are the last 3 flips (with that sequence not occurring previously). Then look for patterns. For i...
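The brute-force idea can be sketched in a few lines of Python with exact fractions: enumerate all sequences of length n and keep those whose first occurrence of HTH ends exactly at flip n (truncation length is an illustrative choice).

```python
from itertools import product
from fractions import Fraction

def first_hit_prob(n: int) -> Fraction:
    """Exact P(first occurrence of HTH ends at flip n), by enumeration."""
    count = 0
    for seq in product("HT", repeat=n):
        s = "".join(seq)
        # must end in HTH, with no earlier occurrence anywhere in s[:-1]
        if s.endswith("HTH") and "HTH" not in s[:-1]:
            count += 1
    return Fraction(count, 2 ** n)

p3 = first_hit_prob(3)                                   # only HTH itself
# Truncated mean; it underestimates the true mean (10) since the tail is cut off.
mean_trunc = sum(n * first_hit_prob(n) for n in range(3, 16))
```

As the answer notes, the probabilities come cheaply this way but the mean needs more work: the enumeration has to be truncated, so one either bounds the tail or switches to the Markov-chain calculation for the exact expectation.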
15,943
Could any equation have predicted the results of this simulation?
Disclosure: I wrote the samc R package used in this answer This answer is more of a supplement to Stephan Kolassa's answer In it, he showed how to construct a transition matrix representing the problem: Image credit: Stephan Kolassa's answer Now, as indicated in comments/another answer, this matrix can be simplified a...
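The core computation behind this absorbing-chain approach is the fundamental matrix. A minimal numpy sketch using the three transient states "" / "H" / "HT" (my labels), with HTH absorbing:

```python
import numpy as np

# Transition probabilities among the transient states:
Q = np.array([[0.5, 0.5, 0.0],    # "":  T -> "",  H -> "H"
              [0.0, 0.5, 0.5],    # "H": H -> "H", T -> "HT"
              [0.5, 0.0, 0.0]])   # "HT": T -> "", H -> absorbed (HTH)

# Fundamental matrix N = (I - Q)^{-1}: expected visit counts per state.
N = np.linalg.inv(np.eye(3) - Q)

# Row sums give expected steps to absorption from each starting state.
expected_steps = N @ np.ones(3)   # from "", "H", "HT" respectively
```

This is the same quantity samc's absorption-time functions compute, just written out by hand for a 3-state chain; from a cold start the expected number of flips is 10.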
15,944
Could any equation have predicted the results of this simulation?
Seems very close to Shannon - related theorems. If you posit "HTH" as your "end of message" string, you want to estimate the chance that "HTH" shows up in random data. I suspect a little digging into his work will provide the equations/ formulas of interest. And because I can't resist, "HTH"
15,945
Could any equation have predicted the results of this simulation?
Final formula appears to be sum(i = 1 to length(patter...
15,946
Can I trust a significant result of a t-test if the sample size is small?
In theory if all the assumptions of the t-test are true then there's no problem with a small sample size. In practice there are some not-quite-true assumptions which we can get away with for large sample sizes but they can cause problems for small sample sizes. Do you know if the underlying distribution is normally dis...
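For a concrete small-sample example, here is a one-sample t-test on n = 5 made-up observations, with the statistic also computed by hand to make visible exactly what the test assumes (scipy is assumed available):

```python
import numpy as np
from scipy import stats

# Illustrative small sample; H0: population mean = 4.5.
x = np.array([5.1, 4.9, 5.3, 5.2, 4.8])
mu0 = 4.5

n = len(x)
# t = (sample mean - mu0) / (sample sd / sqrt(n)); with n - 1 = 4 df,
# the reference distribution leans heavily on the normality assumption.
t_manual = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
t_scipy, p = stats.ttest_1samp(x, mu0)
```

With so few observations there is essentially no power to detect non-normality in the sample itself, which is why knowledge about the underlying distribution (rather than the data alone) has to justify the test.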
15,947
Can I trust a significant result of a t-test if the sample size is small?
You should rarely trust any single significant result. You didn't say why you were using a one-tailed instead of a two-tailed test, so hopefully you have a good reason for doing so other than struggling to be able to claim a statistically significant outcome! Setting that aside, consider the following from p. 261 of S...
15,948
Can I trust a significant result of a t-test if the sample size is small?
Imagine yourself to be in a situation where you're doing many similar tests, in a set of circumstances where some fraction of the nulls are true. Indeed, let's model it using a super-simple urn-type model; in the urn, there are numbered balls each corresponding to an experiment you might choose to do, some of which hav...
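The arithmetic behind this urn picture can be made explicit. A tiny Python sketch with illustrative values for the fraction of true nulls, the significance level, and the power of a small-n test:

```python
# Among significant results, what fraction correspond to real effects?
pi0 = 0.5      # fraction of experiments (balls) where the null is true
alpha = 0.05   # type I error rate
power = 0.2    # power of an underpowered small-sample test

p_sig = pi0 * alpha + (1 - pi0) * power
ppv = (1 - pi0) * power / p_sig        # P(effect is real | significant)

# Same urn, but with a well-powered test:
power_hi = 0.8
ppv_hi = (1 - pi0) * power_hi / (pi0 * alpha + (1 - pi0) * power_hi)
```

Even with half the nulls false, the low-power test yields a noticeably smaller fraction of trustworthy significant results than the high-power one, which is the sense in which small n erodes trust in a lone significant finding.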
15,949
Can I trust a significant result of a t-test if the sample size is small?
Some of the original work of Gosset (aka Student), for which he developed the t test, involved yeast samples of n=4 and 5. The test was specifically designed for very small samples. Otherwise, the normal approximation would be fine. That said, Gosset was doing very careful, controlled experiments on data that he understood ...
15,950
Can I trust a significant result of a t-test if the sample size is small?
The main thing you need to worry about is the power of your test. In particular, you might want to do a post-hoc power analysis to determine how likely you are, given your sample size, to identify a true significant effect of a reasonable size. If typical effects are very large, an n of 8 could be totally adequate (as ...
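As a rough sketch of such a power calculation, here is a one-sided one-sample version using the normal (z-test) approximation rather than the exact noncentral t; the effect sizes are illustrative:

```python
import numpy as np
from scipy.stats import norm

def approx_power(effect_size, n, alpha=0.05):
    """Approximate power of a one-sided one-sample test (z approximation)."""
    z_crit = norm.ppf(1 - alpha)
    return float(1 - norm.cdf(z_crit - effect_size * np.sqrt(n)))

power_big = approx_power(1.0, 8)    # large standardized effect, n = 8
power_small = approx_power(0.2, 8)  # small standardized effect, n = 8
```

With n = 8, a standardized effect of 1 already gives power near 0.9, while an effect of 0.2 leaves power well under 20 percent, which is exactly the "totally adequate vs. likely to miss" contrast described above.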
15,951
Can I trust a significant result of a t-test if the sample size is small?
One could say that the whole point of statistical significance is to answer the question "can I trust this result, given the sample size?". In other words, the whole point is to control for the fact that with small sample sizes, you can get flukes, when no real effect exists. The statistical significance, that is to sa...
15,952
Why do we use the Greek letter μ (Mu) to denote population mean or expected value in probability and statistics
The letters that derive from $\mu$ include the Roman M and the Cyrillic М. Hence, considering that the word "mean" starts with an $m$, the choice seems relatively straightforward, given an already existing tradition of using Greek letters in mathematical abbreviation. To satisfy certain individuals' craving for actual histori...
15,953
Why do we use the Greek letter μ (Mu) to denote population mean or expected value in probability and statistics
There is a general rule to use Greek letters for parameters and Latin letters for statistics. Why $\mu$? Well, the word 'mean' in English starts with M and $\mu$ sounds like M. But also, per Google translate: Latin: Media French: Moyenne Spanish: Media German: Mittel Dutch: Midden
15,954
Why do we use the Greek letter μ (Mu) to denote population mean or expected value in probability and statistics
The use of Greek letters in modern mathematics is basically a consequence of the humanist education. The normal distribution was introduced by Gauß in his celestial mechanics paper "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" in 1809. The parameters $\mu$ and $\sigma$ of the normal dist...
15,955
When to remove insignificant variables?
Let me first ask this: What is the goal of the model? If you are only interested in predicting whether a customer will buy, then statistical hypothesis tests really aren't your main concern. Instead, you should be externally validating your model via a validation/test procedure on unseen data. If, instead, you are intere...
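A minimal sketch of what "validating on unseen data" can look like, using only numpy on made-up data. The feature, the 70/30 split, and the trivial thresholding "model" are all illustrative assumptions, not something prescribed by the answer:

```python
import numpy as np

# Toy data: did the customer buy (1) or not (0)? Buyers score higher on one feature.
rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, size=n)        # 1 = bought, 0 = did not buy
x = rng.normal(size=n) + 1.5 * y      # a single predictive feature

# Hold out 30% of the rows as "unseen" test data
idx = rng.permutation(n)
train, test = idx[:700], idx[700:]

# Trivial "model": threshold halfway between the training class means
thr = (x[train][y[train] == 0].mean() + x[train][y[train] == 1].mean()) / 2
pred = (x[test] > thr).astype(int)

# The out-of-sample accuracy is the number to report, not in-sample fit
accuracy = (pred == y[test]).mean()
print(round(accuracy, 3))
```

The point is only that the model is judged on rows it never saw during fitting; any real classifier can be slotted in place of the threshold rule.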
15,956
When to remove insignificant variables?
Have a look at the help pages for step(), drop1() and add1(). These will help you to add/remove variables based on AIC. However, all such methods are somewhat flawed in their path dependence. A better way would be to use the functions in the penalized or glmnet package to perform a lasso regression.
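The AIC comparison that step() and drop1() automate can be sketched in a few lines. Here is a hedged numpy version on synthetic data (the formula n·log(RSS/n) + 2k is the Gaussian-likelihood AIC up to an additive constant; the variables and data are made up for illustration):

```python
import numpy as np

# Synthetic data: x1 truly matters for y, x2 does not
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)

def aic(X, y):
    """Gaussian-likelihood AIC of an OLS fit, up to an additive constant."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1]                      # number of estimated coefficients
    return n * np.log(rss / n) + 2 * k

ones = np.ones(n)
full    = aic(np.column_stack([ones, x1, x2]), y)  # intercept + x1 + x2
drop_x1 = aic(np.column_stack([ones, x2]), y)      # drop the relevant x1
drop_x2 = aic(np.column_stack([ones, x1]), y)      # drop the irrelevant x2

# Dropping the relevant predictor hurts AIC badly;
# dropping the noise predictor barely changes it.
print(full, drop_x1, drop_x2)
```

drop1() essentially runs this comparison for every variable in turn, which is exactly where the path dependence comes from.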
15,957
When to remove insignificant variables?
What are the correlations among the independent variables? This is less important for pure prediction, but if you want to gain some inferential information it is important that the independent variables be fairly uncorrelated. Typically, when you use logistic regression in a business setting, both inferential informat...
15,958
What is the easiest way to create publication-quality plots under Linux?
The easiest way is to use R.

- Use read.csv to enter the data into R, then use a combination of the plot and line commands.
- If you want something really special, then look at the libraries ggplot2 or lattice.

In ggplot2 the following commands should get you started:

require(ggplot2)
# You would use read.csv here
N = 10
d = ...
15,959
What is the easiest way to create publication-quality plots under Linux?
It's hard to go past R for graphics. You could do what you want in 3 lines. For example, assuming the csv file has four columns:

x <- read.csv("file.csv")
matplot(x[,1], x[,2:4], type="l", col=1:3)
legend("topleft", legend=c("A","B","C"), lty=1, col=1:3)
15,960
What is the easiest way to create publication-quality plots under Linux?
R is definitely the answer. I would just add to what Rob and Colin already said: To improve the quality of your plots, you should consider using the Cairo package for the output device. That will greatly improve the quality of the final graphics. You simply call the function before plotting and it redirects to Cairo...
15,961
What is the easiest way to create publication-quality plots under Linux?
My favorite tool is Python with matplotlib. The advantages:

- Immediate export from the environment where I do my experiments in
- Support for the scipy/numpy data structures
- Familiar syntax/options (matlab background)
- Full latex support for labels/legends etc. So same typesetting as in the rest of your document!

Specifi...
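A minimal matplotlib example along these lines; the labels here use matplotlib's built-in TeX-like mathtext rendering (full LaTeX typesetting additionally needs rc("text", usetex=True) and a local LaTeX installation). The figure size and the sine curve are just placeholders:

```python
import io
import numpy as np
import matplotlib
matplotlib.use("Agg")              # headless backend: renders to memory/files only
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots(figsize=(4, 3))
ax.plot(x, np.sin(x), label=r"$\sin(x)$")   # mathtext label, TeX-like rendering
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$\sin(x)$")
ax.legend()
fig.tight_layout()

# Save as vector PDF, which is what most journals want
buf = io.BytesIO()
fig.savefig(buf, format="pdf")
```

In practice you would pass a filename such as "figure1.pdf" to savefig instead of an in-memory buffer.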
15,962
What is the easiest way to create publication-quality plots under Linux?
Take a look at the sample galleries for three popular visualization libraries:

- matplotlib gallery (Python)
- R graph gallery (R) -- (also see ggplot2, scroll down to reference)
- prefuse visualization gallery (Java)

For the first two, you can even view the associated source code -- the simple stuff is simple, not many li...
15,963
What is the easiest way to create publication-quality plots under Linux?
Another option is Gnuplot
15,964
What is the easiest way to create publication-quality plots under Linux?
Easy is relative. No tool is easy until you know how to use it. Some tools may appear more difficult at first, but provide you with much more fine-grained control once you master them. I have recently started to make my plots in pgfplots. Being a LaTeX package (on top of tikz), it is particularly good at making things ...
15,965
What does "a.s." stand for?
It stands for "almost surely," i.e. the probability of this occurring is 1. See: https://en.wikipedia.org/wiki/Almost_surely
15,966
What does "a.s." stand for?
As noted by @Matt, a.s. stands for "almost surely", i.e. with probability 1. Why the "almost"? Because an event that happens "almost surely" does not have to happen in every outcome. For example, suppose $X \sim$ Uniform(0,1). What's $P(X = 0.5)$? Well, since $X$ is a continuous random variable, $P(X = ...
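A quick simulation makes the point concrete: under a continuous distribution a single point is (essentially) never hit, while an interval has positive probability. A rough numpy sketch (sample size and interval chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)

exact = np.mean(x == 0.5)                  # hitting one exact point: probability 0
interval = np.mean((0.4 < x) & (x < 0.6))  # an interval: probability 0.2

print(exact, interval)
```

The empirical frequency of the exact value 0.5 comes out as (essentially) zero, while the interval frequency sits near 0.2, matching $P(0.4 < X < 0.6) = 0.2$.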
15,967
What does "a.s." stand for?
As mentioned above, a.s. stands for almost surely, but in this case they are talking about almost sure convergence. From Wikipedia: To say that the sequence $X_n$ converges almost surely or almost everywhere or with probability 1 or strongly towards $X$ means that $$Pr(\lim_{n\to\infty}{X_n}=X)=1$$
15,968
What does "a.s." stand for?
As already noted by others, "a.s." stands for "almost surely". The wikipedia article quoted by @Matt is a good start for almost surely and its synonyms. There is, however, a subtle distinction between almost surely (or with probability 1) and always [resp., between with probability zero and never]. Imagine an infinite ser...
15,969
Why use odds and not probability in logistic regression?
The advantage is that the odds defined on $(0,\infty)$ map to log-odds on $(-\infty, \infty)$, while this is not the case of probabilities. As a result, you can use regression equations like $$\log \left(\frac{p_i}{1-p_i}\right) = \beta_0 + \sum_{j=1}^J \beta_j x_{ij}$$ for the log-odds without any problem (i.e. for an...
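A small numpy sketch of exactly this mapping: the logit sends probabilities in $(0,1)$ to log-odds on the whole real line, and the logistic function inverts it (the probe values in `p` are arbitrary illustrations):

```python
import numpy as np

def logit(p):
    """Map a probability in (0, 1) to a log-odds in (-inf, inf)."""
    return np.log(p / (1 - p))

def inv_logit(z):
    """The logistic function: map any real log-odds back into (0, 1)."""
    return 1 / (1 + np.exp(-z))

p = np.array([0.001, 0.25, 0.5, 0.75, 0.999])
z = logit(p)            # unbounded: roughly -6.9 up to +6.9 for these probes
back = inv_logit(z)

print(z)
print(np.allclose(back, p))
```

Because the log-odds is unbounded, the linear predictor $\beta_0 + \sum_j \beta_j x_{ij}$ can never push a fitted probability outside $(0,1)$ once it is passed through `inv_logit`.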
15,970
Why use odds and not probability in logistic regression?
The odds are the expected number of "successes" per "failure", so they can take values less than one, equal to one, or greater than one, but negative values won't make sense; you can have 3 successes per failure, but -3 successes per failure does not make sense. The logarithm of the odds can take any positive or negative value. Logisti...
15,971
Why use odds and not probability in logistic regression?
McCullagh and Nelder (1989, Generalized Linear Models) list two reasons. First, analytic results with odds are more easily interpreted: the effect of a unit change in explanatory variable $x_2$ is to increase the odds of a positive response multiplicatively by the factor $\exp(\beta_2)$. $\beta_2$ has units of odds/unit of $x_2$ whe...
15,972
Example where $X$ and $Z$ are correlated, $Y$ and $Z$ are correlated, but $X$ and $Y$ are independent
Intuitive example: $Z = X + Y$, where $X$ and $Y$ are any two independent random variables with finite nonzero variance.
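This is easy to check with a quick numpy simulation; for independent standard normals, corr(X, Z) = corr(Y, Z) = 1/√2 ≈ 0.707, while corr(X, Y) ≈ 0 (the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)     # independent of y by construction
y = rng.normal(size=100_000)
z = x + y

rxz = np.corrcoef(x, z)[0, 1]    # about 1/sqrt(2) ~ 0.707
ryz = np.corrcoef(y, z)[0, 1]    # same
rxy = np.corrcoef(x, y)[0, 1]    # near 0
print(rxz, ryz, rxy)
```

The same check works for any pair of independent variables with finite nonzero variance; only the value of the nonzero correlations changes.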
15,973
Example where $X$ and $Z$ are correlated, $Y$ and $Z$ are correlated, but $X$ and $Y$ are independent
Roll two dice. X is the number on the first die, Z is the sum of the two dice, Y is the number on the second die. X and Z are correlated, Y and Z are correlated, but X and Y are completely independent. (This is a concrete instance of the answer given by fblundun, but I came up with it before seeing their answer.)
15,974
Computation speed in R?
R works in-memory - so your data do need to fit into memory for the majority of functions. The compiler package, if I am thinking of the thing you are thinking of (Luke Tierney's compiler package supplied with R), is not the same thing as a compiled language in the traditional sense (C, Fortran). It is a byte compiler ...
15,975
Computation speed in R?
I have used SAS for 15 years, and have started using R seriously in the past 6 months, with some tinkering around in it for a couple of years before that. From a programming perspective, R does data manipulations directly; there is no equivalent to DATA or PROC SQL procedures because they're not needed (the latter being...
15,976
Computation speed in R?
R is a programming language. It works not in datasteps. It does whatever you want it to do, for it is but a programming language, a slave for your desires, expressed in a language of curly brackets and colons. Think of it like Fortran or C, but with implicit vectorisation so you don't have to loop over arrays, and dyna...
15,977
Computation speed in R?
I understand that by default SAS can work with models that are bigger than memory, but this is not the case with R, unless you specifically use packages like biglm or ff. However, if you are doing array work in R that can be vectorised it will be very quick - maybe half the speed of a C program in some cases, but if yo...
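The answer is about R, but the same vectorisation principle is easy to demonstrate in Python with numpy (a sketch on made-up data; the exact speedup depends on the machine and array size):

```python
import time
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Element-by-element loop: interpreter overhead on every iteration
t0 = time.perf_counter()
loop_result = np.empty_like(x)
for i in range(x.size):
    loop_result[i] = x[i] * 2.0 + 1.0
t_loop = time.perf_counter() - t0

# Vectorised form: one call into compiled code
t0 = time.perf_counter()
vec_result = x * 2.0 + 1.0
t_vec = time.perf_counter() - t0

print(np.allclose(loop_result, vec_result), round(t_loop / t_vec, 1))
```

Both forms compute the same array; the vectorised one typically runs orders of magnitude faster, which is the same effect the answer describes for vectorised R code.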
15,978
Computation speed in R?
"(2), ideally, we'd like to create an executable, but R is normally used as a scripted language" -- Yes, and this is a good reason to move to R. The point of writing an R package is to allow users to easily make your functions interact with other tools provided by R, e.g. feeding them bootstrapped data... or whatever they ...
15,979
Computation speed in R?
Array manipulation in memory is a big thing for SAS, apparently. I do not know the specifics concerning R, but I surmise that R operates in memory by default, since the memory expanding packages for R, ff and bigmemory, move data from memory to disk. I have pointers for you if you want to improve either speed or memo...
15,980
What machine learning algorithm can be used to predict the stock market?
As babelproofreader mentioned, those that have a successful algorithm tend to be very secretive about it. Thus it's unlikely that any widely available algorithm is going to be very useful out of the box unless you are doing something clever with it (at which point it sort of stops being widely available since you are ...
15,981
What machine learning algorithm can be used to predict the stock market?
I think for your purposes, you should pick a machine learning algorithm you find interesting and try it. Regarding Efficient Market Theory, the markets are not efficient, in any time scale. Also, some people (both in academia and real-life quants) are motivated by the intellectual challenge, not just to get-rich-quick,...
15,982
What machine learning algorithm can be used to predict the stock market?
To my mind, any run-of-the-mill strong AI that could do all of the following might easily produce a statistically significant prediction: gather and understand rumours; access and interpret all government knowledge; do so in every relevant country; make relevant predictions about: weather conditions, terrorist activity, t...
15,983
What machine learning algorithm can be used to predict the stock market?
You could try the auto.arima and ets functions in R. You might also have some success with the rugarch package, but there are no existing functions for automated parameter selection. Maybe you could get parameters for the mean model from auto.arima, then pass them to rugarch and add garch(1,1)? There's all sorts of bl...
15,984
What machine learning algorithm can be used to predict the stock market?
I know of one machine learning approach which is currently in use by at least one hedge fund. numer.ai is using an ensemble of user-provided machine learning algorithms to direct the actions of the fund. In other words: A hedge fund provides open access to an encrypted version of data on a couple of hundred investment ...
15,985
What machine learning algorithm can be used to predict the stock market?
You should try GMDH-type neural networks. I know that some successful commercial packages for stock market prediction use them, but mention it only in the depths of the documentation. In a nutshell, it is a multilayered iterative neural network, so you are on the right track.
15,986
What machine learning algorithm can be used to predict the stock market?
I think hidden Markov models are popular in the stock market. The most important thing to keep in mind is that you want an algorithm that preserves the temporal aspect of your data.
15,987
Variance of linear combinations of correlated random variables
This is just an exercise in applying basic properties of sums, the linearity of expectation, and definitions of variance and covariance \begin{align} \operatorname{var}\left(\sum_{i=1}^n a_i X_i\right) &= E\left[\left(\sum_{i=1}^n a_i X_i\right)^2\right] - \left(E\left[\sum_{i=1}^n a_i X_i\right]\right)^2 &\scriptstyle...
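The identity derived above can be sanity-checked numerically. The sketch below is my own (the coefficients and the random covariance matrix are arbitrary, not from the answer): it compares the expanded double-sum formula against the equivalent matrix form $a^\top \Sigma a$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients and a random (valid) covariance matrix
n = 4
a = rng.normal(size=n)
B = rng.normal(size=(n, n))
Sigma = B @ B.T  # positive semi-definite, hence a legitimate covariance matrix

# Matrix form: Var(a^T X) = a^T Sigma a
var_matrix = a @ Sigma @ a

# Expanded form: sum_i a_i^2 Var(X_i) + 2 sum_{i<j} a_i a_j Cov(X_i, X_j)
var_sum = sum(a[i] ** 2 * Sigma[i, i] for i in range(n))
var_sum += 2 * sum(
    a[i] * a[j] * Sigma[i, j] for i in range(n) for j in range(i + 1, n)
)

print(np.isclose(var_matrix, var_sum))  # True
```

The two expressions agree for any choice of coefficients and covariance matrix, which is exactly what the derivation claims.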
15,988
Variance of linear combinations of correlated random variables
You can actually do it by recursion without using matrices: Take the result for $\text{Var}(a_1X_1+Y_1)$ and let $Y_1=a_2X_2+Y_2$. $\text{Var}(a_1X_1+Y_1)$ $\qquad=a_1^2\text{Var}(X_1)+2a_1\text{Cov}(X_1,Y_1)+\text{Var}(Y_1)$ $\qquad=a_1^2\text{Var}(X_1)+2a_1\text{Cov}(X_1,a_2X_2+Y_2)+\text{Var}(a_2X_2+Y_2)$ $\qquad=a...
15,989
Variance of linear combinations of correlated random variables
Here is a slightly different proof based on matrix algebra. Convention: a vector of the kind $(m,y,v,e,c,t,o,r)$ is a column vector unless otherwise stated. Let $a = (a_1,\ldots,a_n)$, $\mu = (\mu_1,\ldots,\mu_n) = E(X)$ and set $Y = a_1X_1+\ldots+a_nX_n = a^\top X$. Note first that, by the linearity of the integral (o...
15,990
Variance of linear combinations of correlated random variables
Just for fun, proof by induction! Let $P(k)$ be the statement that $Var[\sum_{i=1}^k a_iX_i] = \sum_{i=1}^k a_i^2\sigma_i^2 + 2\sum_{i=1}^k \sum _{j>i}^k a_ia_jCov[X_i, X_j]$ Then $P(2)$ is (trivially) true (you said you're happy with that in the question). Let's assume P(k) is true. Thus, $Var[\sum_{i=1}^{k+1} a_iX_i]...
15,991
Variance of linear combinations of correlated random variables
Basically, the proof is the same as for the first formula. I will prove it using a very brute-force method. $Var(a_1X_1+...+a_nX_n)=E[(a_1X_1+...+a_nX_n)^2]-[E(a_1X_1+...+a_nX_n)]^2 =E[(a_1X_1)^2+...+(a_nX_n)^2+2a_1a_2X_1X_2+2a_1a_3X_1X_3+...+2a_1a_nX_1X_n+...+2a_{n-1}a_nX_{n-1}X_n]-[a_1E(X_1)+...+a_nE(X_n)]^2$ $=a_1^2E(X_1^2)+...+a_...
15,992
Continuous random variables - probability of a kid arriving on time for school
As you suggested, $X$ and $Y$ can be described as two independent uniform random variables $X \sim \mathcal{U}(375, 405)$, $Y \sim \mathcal{U}(30, 40)$. We are interested in finding $\mathbb{P}[X + Y \leq 420]$. This problem can be handled with a straightforward geometric approach. $$\mathbb{P}[X + Y \leq 420] = \fra...
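A quick Monte Carlo check of this setup (the seed and sample size are arbitrary choices of mine): by my reading of the geometry, the favorable region $\{x + y \leq 420\}$ has area 100 inside the $300$-unit rectangle $[375, 405] \times [30, 40]$, so the probability should come out near $1/3$.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000

# X: departure time in minutes after midnight, uniform on [375, 405] (6:15-6:45)
# Y: drive time in minutes, uniform on [30, 40]
X = rng.uniform(375, 405, N)
Y = rng.uniform(30, 40, N)

# Arrive by 7:00, i.e. minute 420
p_hat = np.mean(X + Y <= 420)
print(p_hat)  # ≈ 0.333, matching the analytic value 1/3
```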
15,993
Continuous random variables - probability of a kid arriving on time for school
Write $X \sim U(15,45)$ and $Y \sim U(30,40)$; then we can write what you are trying to solve for as $P(X+Y<60)$. I am using the starting time here as 6:00AM and therefore need the sum of time passed until departure and drive time to be less than 60. Define $Z=X+Y,$ so that $$F_Z(z) = \int_{30}^{40} F_X(z-y)f_Y(y)dy,$$ ...
15,994
Continuous random variables - probability of a kid arriving on time for school
A simpler approach: There's a 30-minute interval during which he can leave, thus there's an $x/30$ chance of leaving during any given $x$-minute period. There's a $5/30$ chance of leaving between 6:15 and 6:20, and there'll be a $100\%$ chance of arriving before 7 if he leaves at any point during that interval. There's...
15,995
Continuous random variables - probability of a kid arriving on time for school
We should begin by partitioning the space. If the dad leaves at 6:45, then there is a 0% chance he makes it to school on time, since the ride takes at least 30 minutes. So to have any chance of arriving on time, the dad needs to leave between 6:15 and 6:30. Let's write out some scenarios: Dad leaves 0 minutes after 6:15, he can ...
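Completing that partition argument numerically (this is my own sketch of the arithmetic, not the original answer's continuation): conditioning on leaving $x$ minutes after 6:15, the on-time probability is $1$ for $x \leq 5$, $(15-x)/10$ for $5 < x \leq 15$, and $0$ afterwards; averaging over the uniform departure time gives $1/3$.

```python
# Let x = minutes after 6:15 that dad leaves (uniform on [0, 30]) and let the
# drive take 30 + v minutes with v uniform on [0, 10]; they arrive by 7:00
# iff x + v <= 15.

def p_on_time_given_x(x):
    """P(on time | leave x minutes after 6:15)."""
    if x <= 5:
        return 1.0
    if x <= 15:
        return (15 - x) / 10
    return 0.0

# Average over the uniform departure time with a simple midpoint Riemann sum
steps = 100_000
total = sum(p_on_time_given_x(30 * (k + 0.5) / steps) for k in range(steps)) / steps
print(round(total, 4))  # 0.3333, i.e. 1/3
```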
15,996
Why am I getting information entropy greater than 1?
Entropy is not the same as probability. Entropy measures the "information" or "uncertainty" of a random variable. When you are using base 2, it is measured in bits; and there can be more than one bit of information in a variable. In this example, one sample "contains" about 1.15 bits of information. In other words, if ...
15,997
Why am I getting information entropy greater than 1?
The maximum value of entropy is $\log k$, where $k$ is the number of categories you are using. Its numeric value will naturally depend on the base of logarithms you are using. Using base 2 logarithms as an example, as in the question: $\log_2 1$ is $0$ and $\log_2 2$ is $1$, so a result greater than $1$ is definitely ...
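A short numeric illustration of both points (the helper function below is my own sketch, not the question's code): a fair coin carries exactly 1 bit, and with $k = 3$ categories the entropy can exceed 1 but never exceeds $\log_2 3 \approx 1.585$, with the maximum attained by the uniform distribution.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits; zero-probability terms contribute 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A fair coin carries exactly 1 bit
assert entropy_bits([0.5, 0.5]) == 1.0

# With k = 3 categories the maximum is log2(3) > 1, at the uniform distribution
print(entropy_bits([1/3, 1/3, 1/3]))  # ≈ 1.585

# No 3-category distribution exceeds that bound
rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.dirichlet([1, 1, 1])  # random point on the probability simplex
    assert entropy_bits(p) <= np.log2(3) + 1e-9
```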
15,998
Why am I getting information entropy greater than 1?
Earlier answers, specifically "Entropy is not the same as probability" and "the maximum value of entropy is $\log k$", are both correct. As stated earlier, "Entropy measures the 'information' or 'uncertainty' of a random variable." Information can be measured in bits, and when doing so $\log_2$ should be used. However, if a d...
15,999
Do we ever use maximum likelihood estimation?
I am wondering if maximum likelihood estimation ever used in statistics. Certainly! Actually quite a lot -- but not always. We learn the concept of it but I wonder when it is actually used. When people have a parametric distributional model, they quite often choose to use maximum likelihood estimation. When the mod...
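As a concrete sketch of the parametric case (the example and all names are mine, not from the answer): for i.i.d. Exponential data the MLE of the rate has the closed form $1/\bar{x}$, and a direct numerical maximization of the log-likelihood recovers the same value.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=1 / 2.5, size=5000)  # true rate = 2.5

def log_likelihood(rate):
    """Log-likelihood of i.i.d. Exponential(rate) data (up to no constant)."""
    return len(data) * np.log(rate) - rate * data.sum()

# Closed-form MLE: rate_hat = 1 / sample mean
closed_form = 1.0 / data.mean()

# Numerical maximization over a fine grid
grid = np.linspace(0.1, 10, 100_000)
numeric = grid[np.argmax(log_likelihood(grid))]

print(abs(closed_form - numeric) < 1e-3)  # True
```

The agreement illustrates why MLE is attractive when the model is fully parametric: the estimator is often available in closed form, and when it is not, the same likelihood can be maximized numerically.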
16,000
Do we ever use maximum likelihood estimation?
While maximum likelihood estimators can look suspicious given the assumptions on the data distribution, Quasi-Maximum Likelihood Estimators are often used. The idea is to start by assuming a distribution and solve for the MLE, then remove the explicit distributional assumption and instead look at how your estimator pe...