48,601
AR(2) model is causal
A famous theorem (Theorem 3.1.1 in Brockwell & Davis, Time Series: Theory and Methods) states that an ARMA($p$, $q$) process $$\phi(B)X_t = \theta(B) W_t$$ is causal if and only if $\phi(z) \neq 0$ for all $z \in \mathbb{C}$ such that $\left|z\right|\leq 1$. So in order for the AR($2$) process to be causal, the coeffici...
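The root condition in the theorem above is easy to check numerically. A minimal sketch (the coefficients are hypothetical), testing whether all roots of $\phi(z) = 1 - \phi_1 z - \cdots - \phi_p z^p$ lie strictly outside the unit circle:

```python
import numpy as np

def is_causal_ar(phi):
    """Causality check for an AR(p) process
    X_t = phi[0] X_{t-1} + ... + phi[p-1] X_{t-p} + W_t:
    all roots of phi(z) = 1 - phi_1 z - ... - phi_p z^p must lie
    strictly outside the unit circle."""
    # np.roots expects coefficients from the highest degree down
    coeffs = [-c for c in reversed(phi)] + [1.0]
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

print(is_causal_ar([0.5, 0.3]))  # True: a causal AR(2)
print(is_causal_ar([1.2]))       # False: explosive AR(1), root inside the unit circle
```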
48,602
AR(2) model is causal
Your final equation leads to the MA representation of an AR process: Pred[X(t)] = const + a1*W(t-1) + a2*W(t-2) + ... + an*W(t-n), reflecting how previous errors "cause" X. All ARMA models can be presented as pure AR models (a weighted average of the past) or as pure MA models (a weighted average of the past errors).
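The MA($\infty$) weights of the representation above can be computed by the standard recursion $\psi_0 = 1$, $\psi_1 = \phi_1$, $\psi_j = \phi_1\psi_{j-1} + \phi_2\psi_{j-2}$. A sketch with hypothetical coefficients; for a causal AR(2) the weights decay and their sum converges to $1/(1-\phi_1-\phi_2)$:

```python
import numpy as np

def ar2_psi_weights(phi1, phi2, n_terms=50):
    """psi-weights of the MA(inf) representation of a causal AR(2):
    psi_0 = 1, psi_1 = phi1, psi_j = phi1*psi_{j-1} + phi2*psi_{j-2}."""
    psi = [1.0, phi1]
    for _ in range(n_terms - 2):
        psi.append(phi1 * psi[-1] + phi2 * psi[-2])
    return np.array(psi)

psi = ar2_psi_weights(0.5, 0.3)
print(abs(psi[-1]) < 1e-3)                   # True: the weights decay
print(abs(psi.sum() - 1 / (1 - 0.5 - 0.3)))  # partial sum is close to 1/(1-phi1-phi2)
```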
48,603
AR(2) model is causal
You write: "I want to prove AR(2) model is causal." That is simply not possible. AR and/or ARMA models are never causal. ARMA models were conceived precisely to describe a process in terms of its own past. They have a merely statistical meaning. Causality is something that goes beyond a merely statistical relationship and invol...
48,604
AR(2) model is causal
AR(2) is causal if: $$ \phi_1+\phi_2 < 1$$ and $$ \phi_2 - \phi_1 < 1$$ and $$ -1 < \phi_2 < 1$$ Under these conditions the roots of the equation $\phi(z)=0$ lie outside the unit circle, so the process is causal.
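The equivalence between the stationarity triangle and the root condition can be verified numerically. A sketch (the grid of coefficients is arbitrary), comparing the triangle inequalities with a direct check that all roots of $\phi(z) = 1 - \phi_1 z - \phi_2 z^2$ lie outside the unit circle:

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_by_triangle(phi1, phi2):
    # Stationarity triangle for AR(2)
    return (phi1 + phi2 < 1) and (phi2 - phi1 < 1) and (abs(phi2) < 1)

def causal_by_roots(phi1, phi2):
    # All roots of phi(z) = 1 - phi1 z - phi2 z^2 outside the unit circle
    roots = np.roots([-phi2, -phi1, 1.0])
    return bool(np.all(np.abs(roots) > 1.0))

pts = rng.uniform(-2, 2, size=(1000, 2))
agree = all(causal_by_triangle(a, b) == causal_by_roots(a, b) for a, b in pts)
print(agree)  # True
```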
48,605
How do I include measurement errors in a Bernoulli experiment?
We can solve this by maximum likelihood. Let $X$ be a Bernoulli variable with success probability $p_0$. But you observe not $X$ but $Y$, which is $X$ "contaminated"; that is, we have \begin{align} \mathbb{P}(Y=1 | X=0)&= \epsilon_1 \\ \mathbb{P}(Y=0 | X=0)&= 1-\epsilon_1 \\ \mathbb{P}(Y=0 | X=1 )&= \epsi...
48,606
How do I include measurement errors in a Bernoulli experiment?
I would start by writing out the likelihood for the data you actually have. The likelihood for $Y=0$ is $$ \epsilon_0(1-p_0) + (1-\epsilon_0)p_0$$ The likelihood when $Y=1$ is $$ \epsilon_1 p_0 + (1-\epsilon_1)(1-p_0)$$ The likelihood for the sample is the product of the above terms for the relevant numbers of 0's and ...
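A sketch of the resulting estimator under one concrete (assumed) error convention: here `eps0` is the false-positive rate $P(Y=1\mid X=0)$ and `eps1` is the false-negative rate $P(Y=0\mid X=1)$, so $P(Y=1)=p(1-\epsilon_1)+(1-p)\epsilon_0$ and the MLE has the closed form $\hat p = (\bar y - \epsilon_0)/(1-\epsilon_0-\epsilon_1)$; this may not match the exact $\epsilon$ labels used in the answers above:

```python
import numpy as np

rng = np.random.default_rng(42)

def mle_contaminated_bernoulli(y, eps0, eps1):
    """MLE of the true success probability p when the observed Y flips
    a true 0 to 1 with prob eps0 and a true 1 to 0 with prob eps1.
    P(Y=1) = p(1-eps1) + (1-p)eps0, so p_hat = (ybar - eps0)/(1 - eps0 - eps1),
    clipped to [0, 1]."""
    ybar = np.mean(y)
    return float(np.clip((ybar - eps0) / (1 - eps0 - eps1), 0.0, 1.0))

# Simulate: true p = 0.3, 5% false positives, 10% false negatives
p0, eps0, eps1, n = 0.3, 0.05, 0.10, 200_000
x = rng.random(n) < p0
flip = rng.random(n)
y = np.where(x, flip >= eps1, flip < eps0)  # apply the two flip probabilities
print(mle_contaminated_bernoulli(y, eps0, eps1))  # close to the true p = 0.3
```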
48,607
Can neural network classify large images?
There have been convolution networks for videos of $224 \times 224 \times 10$ (1), so yes, it's possible. I would strongly suggest reducing the image size as much as possible and, at the same time, using non-fully connected layers in the beginning, reducing the dimensionality of your optimisation problem. Another approach ...
48,608
Can neural network classify large images?
In principle, the only limit on the input size you can handle is the amount of memory on your GPU. Then, of course, larger inputs take longer to process. EfficientNet uses an image size of 600x600 pixels in its largest setting, and Feature Pyramid Networks for Object Detection and Mask R-...
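A small arithmetic sketch of why non-fully-connected (convolutional) layers matter here: a conv layer's parameter count is independent of the input resolution, while a dense layer's grows with every pixel. The layer sizes below are hypothetical:

```python
def dense_params(in_pixels, hidden):
    # Fully connected layer: one weight per (input pixel, hidden unit) pair, plus biases
    return in_pixels * hidden + hidden

def conv_params(k, c_in, c_out):
    # Conv layer: a k x k kernel per (input channel, output channel) pair, plus biases
    return k * k * c_in * c_out + c_out

# A 3x3 conv from 3 to 64 channels has the same parameter count
# whether the image is 224x224 or 2000x2000:
print(conv_params(3, 3, 64))              # 1792
print(dense_params(224 * 224 * 3, 1024))  # 154141696, ~154 million
```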
48,609
Distribution of "sample" mahalanobis distances
if $\{\pmb x_i\}_{i=1}^n$ is your data with $\pmb x_i\underset{\text{i.i.d.}}{\sim}\mathcal{N}_p(\pmb \mu,\pmb \varSigma)$ where $\pmb \mu\in\mathbb{R}^p$ and $\pmb \varSigma\succ0$ and we denote: $$(\mbox{ave}\;\pmb x_i,\mbox{cov}\;\pmb x_i)$$ the usual Gaussian estimates of mean and covariance, then $$d^2(\pmb x_i,...
48,610
Distribution of "sample" mahalanobis distances
If your estimate of $\Sigma$ is not too far off, it is the Euclidean distance of a multivariate standard normal distribution, i.e. $\chi$ distributed. To understand this, assume your estimate is perfect: $\hat S =\Sigma$. The math then should be straightforward, because you can essentially remove all the variance, and ...
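A quick numerical sketch of the "sample" (plug-in) distances discussed above. One exact identity worth knowing: with the sample mean and the unbiased sample covariance plugged in, the squared distances always sum to $(n-1)p$, so their average is $(n-1)p/n$ regardless of the data. The dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 500, 3
x = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)

xbar = x.mean(axis=0)
S = np.cov(x, rowvar=False)          # unbiased sample covariance
diff = x - xbar
d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff)

# sum(d2) = trace(S^{-1} * (n-1) S) = (n-1) p exactly:
print(np.isclose(d2.mean(), (n - 1) * p / n))  # True
```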
48,611
Definition of p-value in caret's confusion matrix method
If you have a class imbalance, you might want to know if your model's accuracy is better than the proportion of the data with the majority class. So if you have two classes and 70% of your data are class #1, is an accuracy of 75% any better than the "non-information rate" of 70%. confusionMatrix uses the binom.test fu...
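The same one-sided binomial test against the no-information rate can be sketched in Python (the counts below are hypothetical, mirroring the 70%/75% example above); caret's R implementation uses `binom.test` the same way:

```python
from scipy.stats import binomtest

# Hypothetical setup: 100 test cases, majority class covers 70% (the no-information rate)
nir = 0.7
p_modest = binomtest(k=75, n=100, p=nir, alternative='greater').pvalue
p_strong = binomtest(k=95, n=100, p=nir, alternative='greater').pvalue

print(p_modest > 0.05)   # True: 75% correct is not convincingly better than the NIR
print(p_strong < 1e-6)   # True: 95% correct clearly is
```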
48,612
Hellinger transformation with relative data
The Hellinger transformation is defined as $$ y^{\prime}_{ij} = \sqrt{\frac{y_{ij}}{y_{i.}}} $$ Where $j$ indexes the species, $i$ the site/sample, and $i.$ is the row sum for the $i$th sample. If your data are already of the form $\frac{y_{ij}}{y_{i.}}$, but you've only taken a subset of the species, then yes, you can...
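The formula above is straightforward to apply in code. A minimal sketch with a made-up site-by-species abundance matrix; note that the squared transformed values sum to 1 within each row, which is what makes the transformation comparable across sites:

```python
import numpy as np

def hellinger(counts):
    """Hellinger transformation: sqrt of each count divided by its row (site) total."""
    counts = np.asarray(counts, dtype=float)
    row_totals = counts.sum(axis=1, keepdims=True)
    return np.sqrt(counts / row_totals)

# Hypothetical site-by-species abundance matrix
y = np.array([[10, 0, 30],
              [ 5, 5, 0]])
h = hellinger(y)
print(np.allclose((h ** 2).sum(axis=1), 1.0))  # True
```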
48,613
How does the mean function work for a Gaussian Process?
Your understanding is correct. There is apparently a mistake in the notes, and the equations should be \begin{align} m(x) &= E[ f(x) ], \\ k(x,x') &= E[(f(x)-m(x))(f(x')-m(x'))]. \end{align} For reference, see Equation (2.13) on page 13 of C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, ...
48,614
How would I create a 95% confidence interval with log-transformed data?
In the same way that you would compute any other confidence interval: transform the data to the log you want, calculate the mean of the transformed data, calculate the standard error of the transformed data, and compute the upper and lower bounds with the chosen confidence level. I might add that you don't need the residuals (...
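The steps above can be sketched as follows (the data are simulated; note that exponentiating the bounds gives a CI for the *geometric* mean of the original data, not the arithmetic mean):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=1.0, sigma=0.5, size=50)   # positive, right-skewed data

logx = np.log(x)
m = logx.mean()
se = logx.std(ddof=1) / np.sqrt(len(logx))
t = stats.t.ppf(0.975, df=len(logx) - 1)          # 95% two-sided critical value

lo, hi = m - t * se, m + t * se
# Back-transform: a CI for the mean of log(x), i.e. for the geometric mean of x
print(np.exp(lo), np.exp(m), np.exp(hi))
```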
48,615
Similarities and dissimilarities in classical multidimensional scaling
These two books are in full agreement. Classical multidimensional scaling (where by "classical MDS" I understand Torgerson's MDS, following both Hastie et al. and Borg & Groenen) finds points $z_i$ such that their scalar products $\langle z_i, z_j \rangle$ approximate a given similarity matrix as well as possible. Howe...
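Torgerson's classical MDS, as described above, can be sketched in a few lines: double-center the squared distance matrix to recover scalar products, then embed with the top eigenvectors. For points that genuinely live in $k$ dimensions the distances are recovered exactly (up to rotation/reflection):

```python
import numpy as np

rng = np.random.default_rng(2)

def classical_mds(D, k=2):
    """Torgerson's classical MDS: double-center the squared distance matrix
    and embed with the top-k eigenvectors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # matrix of centered scalar products
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]     # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

X = rng.normal(size=(10, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Z = classical_mds(D, k=2)
D_hat = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
print(np.allclose(D, D_hat))  # True: 2D points are recovered exactly
```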
48,616
Prediction based on bayesian model
Generally, in a Bayesian model you do predictions on new data the same way as you do with non-Bayesian models. As your example is complicated, I will provide a simplified one to make things easier to illustrate. Say you want to estimate the linear regression model $$ y_i = \beta_0 + \beta_1 x_i + \varepsilon_i $$ and based on...
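A minimal sketch of the posterior-predictive step for that simplified regression: given posterior draws of $(\beta_0, \beta_1, \sigma)$, you generate one predictive draw of $y_{\text{new}}$ per parameter draw. The posterior draws here are faked with normals purely for illustration (in practice they would come from your sampler):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for MCMC posterior draws, concentrated around
# beta0 = 1, beta1 = 2, sigma = 0.5 (hypothetical values):
beta0 = rng.normal(1.0, 0.05, size=4000)
beta1 = rng.normal(2.0, 0.05, size=4000)
sigma = np.abs(rng.normal(0.5, 0.02, size=4000))

def posterior_predictive(x_new):
    """One predictive draw of y_new per posterior draw of the parameters."""
    mu = beta0 + beta1 * x_new
    return rng.normal(mu, sigma)

y_rep = posterior_predictive(x_new=2.0)
print(y_rep.mean())                           # close to 1 + 2*2 = 5
interval = np.percentile(y_rep, [2.5, 97.5])  # 95% predictive interval
```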
48,617
Is Predicted R-squared a Valid Method for Rejecting Additional Explanatory Variables in a Model?
Predicted R squared would be no different from many other forms of cross-validation estimates of error (e.g., CV-MSE). That said, R^2 isn't a great measure, since R^2 will always increase with additional variables, regardless of whether that variable is meaningful. For example:

> x <- rnorm(100)
> y <- 1 * x + rnorm(10...
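The monotonicity claim above (in-sample R^2 can never decrease when a column is added, even pure noise) is a mathematical fact about least squares, since the larger model's column space contains the smaller one's. A sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(4)

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

n = 100
x = rng.normal(size=n)
y = 1.0 * x + rng.normal(size=n)
noise = rng.normal(size=n)               # a pure-noise "predictor"

X_small = np.column_stack([np.ones(n), x])
X_big = np.column_stack([np.ones(n), x, noise])

# In-sample R^2 never goes down when a column is added:
print(r_squared(X_big, y) >= r_squared(X_small, y))  # True
```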
48,618
PyMC3 Implementation of Probabilistic Matrix Factorization (PMF): MAP produces all 0s
I did two things to fix your code. One was to initialize the model away from zero; the other was to use a non-gradient-based optimizer:

import pymc3 as pm
import numpy as np
import pandas as pd
import theano
import scipy as sp

data = pd.read_csv('jester-dense-subset-100x20.csv')
n, m = data.shape
test_size = m...
48,619
Count data and heteroscedasticity
Q1 "why [do] count data tend to be heteroscedastic"? If we want to model counts as random, then the Poisson distribution, which is heteroscedastic, provides a natural characterisation of what 'random counts' might usefully mean. Hence one way to ask why count data is heteroscedastic is to ask why count data might be P...
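The Poisson point above is easy to see in simulation: the variance tracks the mean, so the spread of the counts grows with their level, i.e. heteroscedasticity by construction:

```python
import numpy as np

rng = np.random.default_rng(5)

# For Poisson counts, variance = mean, so higher-level counts are noisier:
for lam in (1, 5, 25):
    sample = rng.poisson(lam, size=200_000)
    print(lam, round(sample.mean(), 2), round(sample.var(), 2))
```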
48,620
What is the significance of a linear dependency in a polynomial regression?
Recall from linear algebra that linearly dependent vectors are a set of vectors which can be expressed as a linear combination of each other. When performing regression, this creates problems because the matrix $X^TX$ is singular, so there is not a uniquely defined solution to estimating your regression coefficients. (...
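The singularity of $X^TX$ under an exact linear dependency can be demonstrated directly. A sketch where the third column is an exact linear combination of the first two:

```python
import numpy as np

x = np.linspace(0, 1, 20)

# Third column = 2 * (second column) - 3 * (first column): exact dependency
X = np.column_stack([np.ones_like(x), x, 2 * x - 3])

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))  # 2, not 3: X'X is singular,
                                   # so the normal equations have no unique solution
```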
48,621
Computing Paired Samples (pre/post) Effect Size with Limited Information
Unfortunately, if that really is all the information you have, then there is no way to get either #1 or #2 -- one way or another you need to know (or be able to deduce) the correlation between pre-test and post-test scores.
48,622
Computing Paired Samples (pre/post) Effect Size with Limited Information
Yes, as others have mentioned, you will need to know the correlation between pre- and post-test scores to calculate an effect size. However, this correlation value can be imputed to obtain reasonable results, especially if you can draw upon previous research and/or have a strong theoretical rationale for the particula...
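A sketch of how an imputed correlation feeds into the effect size, using the standard formula $SD_{\text{diff}} = \sqrt{SD_1^2 + SD_2^2 - 2\,r\,SD_1 SD_2}$ and $d_z = (M_2 - M_1)/SD_{\text{diff}}$ (the summary numbers below are hypothetical; looping over plausible $r$ values doubles as a sensitivity analysis):

```python
import math

def paired_effect_size(m_pre, m_post, sd_pre, sd_post, r):
    """Cohen's d_z for a pre/post design when only summary stats are available
    and the pre/post correlation r is imputed (e.g. from prior studies)."""
    sd_diff = math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
    return (m_post - m_pre) / sd_diff

# Sensitivity check across plausible imputed correlations:
for r in (0.3, 0.5, 0.7):
    print(r, round(paired_effect_size(50, 55, 10, 10, r), 2))
```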
48,623
Computing Paired Samples (pre/post) Effect Size with Limited Information
I am also working with a similar meta-analysis. SDd can be imputed by several methods. 1. Take it from other studies; use the maximum of the values observed in other studies. 2. If any of the other studies have reported r, use it; base it on the maximum of the observed r values. 3. If any of the other studies in your meta-analysis has mention...
48,624
Identifiability of the linear regression model: necessary and sufficient condition
Your "assume also" clause equates two quadratic forms in $\mathbb{R}^n$ (with $\mathrm{y}=(y_1,y_2,\ldots,y_n)$ the variable). Since any quadratic form is completely determined by its values at $1+n+\binom{n+1}{2}$ distinct points, their agreement at all points of $\mathbb{R}^n$ is far more than needed to conclude the ...
48,625
Identifiability of the linear regression model: necessary and sufficient condition
OK, I think I understand what you want. I don't think I can help you all the way, but this might provide a little help. You are right in terms of the equation above, as this demonstrates the quadratic nature of the cost function, which, it turns out, is part of the proof. This is because a quadratic function will always...
48,626
Making two vectors uncorrelated in terms of Kendall Tau correlation
Covariance is linear, so a linear projection can be used to zero it out. Concordance is not linear, so a linear projection won't (in general) work to zero it out. However, one can still come up with vectors which have zero Kendall correlation. Specifically, if $\hat{\beta}^K$ is the slope estimate for the Theil-Sen r...
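The Theil-Sen idea above can be checked directly: because the Theil-Sen slope is the median of the pairwise slopes, about half the pairs are concordant and half discordant after removing it, so the residuals have (essentially) zero Kendall correlation with $x$. A sketch on simulated data:

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

rng = np.random.default_rng(6)

x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)

beta = theilslopes(y, x)[0]      # median of the pairwise slopes
resid = y - beta * x

tau, _ = kendalltau(x, resid)
print(abs(tau) < 0.02)  # True: residuals are (nearly) Kendall-uncorrelated with x
```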
48,627
A framework for multi-valued categorical attributes
The most standard way of dealing with variables having an array of values is using dummy variables, i.e. creating a column for each possibility and assigning 0 or 1 depending on whether an attribute is absent or present, respectively. See for example how to do it in Pandas (if you are using Python) and Generate a dummy-varia...
48,628
Do I get the nice asymptotic properties of MLE when I restrict the parameter space?
The nice properties stop working if the true value is on the boundary of your parameter space --- that, and certain regularity conditions on the likelihood itself. I believe that all you need is for the true value of the parameter to be within an open set of the parameter space. In your example, if the true value of $...
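A small simulation sketch of this point (the model and numbers are invented examples): when the true parameter is interior to the restricted space, the restriction is asymptotically irrelevant and the restricted MLE coincides with the unrestricted one.

```python
import random
import statistics

random.seed(0)
n = 5000
true_rate = 2.0
data = [random.expovariate(true_rate) for _ in range(n)]

# Unrestricted MLE for an Exponential(rate) model is 1 / sample mean.
mle_unrestricted = 1.0 / statistics.mean(data)

# Restrict the parameter space to [0.5, inf); the true rate 2.0 is interior.
lower_bound = 0.5
mle_restricted = max(mle_unrestricted, lower_bound)

# With the true value inside an open subset of the restricted space, the
# constraint is inactive for large n and the two estimates agree.
print(mle_restricted == mle_unrestricted)
```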
48,629
Can a forecast that reaches further into the future be less uncertain?
In ensemble forecasting, a technique common in weather prediction, some not-fully-known quantity at the present time is varied, creating different initial conditions for the forecasting models, which result in variations in the future forecasts. In that case, the band is not a statistical uncertainty band per se, it's the results o...
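A toy sketch (invented dynamics, not a real weather model) of how an ensemble band can narrow with lead time: if the dynamics are contracting, perturbed initial conditions converge, so the spread at a longer horizon is smaller than at a shorter one.

```python
import random

random.seed(1)
# 50 ensemble members with perturbed initial conditions around 10.0
members = [10.0 + random.gauss(0.0, 1.0) for _ in range(50)]

def spread(ensemble):
    return max(ensemble) - min(ensemble)

# Contracting dynamics: x_{t+1} = 0.5 * x_t, so differences between
# members halve each step and the ensemble band shrinks with lead time.
spreads = []
for t in range(5):
    members = [0.5 * x for x in members]
    spreads.append(spread(members))

print(spreads[0] > spreads[-1])   # longer-range forecast has the narrower band
```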
48,630
Advantages of counterbalancing vs. randomizing stimuli
I think the pros of counterbalancing basically come down to convenience for you. You set up two questionnaires and you're done. If you have many people using each list, you can add List as a factor and test to see if it has any effect. The cons of counterbalancing are that there may be some effect of, say, $Q1$ and $Q...
48,631
Fitted model of linear spline regression in R
The coefficients have the usual interpretation, but for the B-spline basis functions, which you can generate for new data easily enough in R:
bs(x, degree=1, knots=c(6,12,18)) -> x.bspline.bff
new.x <- c(10.2, 11.8, 13, 30)
predict(x.bspline.bff, new.x)
Most software will have functions to generate these (e.g. SAS, S...
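For intuition (not R's actual B-spline columns), a degree-1 spline with knots at 6, 12, 18 can equivalently be represented with a truncated-power basis, which spans the same function space even though bs() returns different columns. A hypothetical Python sketch of evaluating that basis at new x values:

```python
# Truncated-power basis for a degree-1 (linear) spline with knots 6, 12, 18.
# Columns: x itself, then max(x - knot, 0) for each knot.
def linear_spline_basis(x, knots=(6.0, 12.0, 18.0)):
    return [x] + [max(x - k, 0.0) for k in knots]

for new_x in (10.2, 11.8, 13.0, 30.0):
    print(new_x, linear_spline_basis(new_x))
```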
48,632
What statistical test should I use to look at change in a binary outcome over time?
Two approaches that work in your case are: Generalized Estimating Equations (GEE), as you indicated in the above comment. That definitely works. Generalized Linear Mixed Models (GLMM). Of course you would want to choose the logit link. With the above approaches, you can easily incorporate the explanatory variables you wish t...
48,633
What statistical test should I use to look at change in a binary outcome over time?
If you mean you have visits at 6 weeks and 6 months, then you may be able to determine on what exact day the patients stopped taking their medication, meaning the best way would be survival analysis, with non-adherence as the "failure" event. Besides showing the Kaplan-Meier curves, you could use a Cox regression model to evaluate the...
48,634
KL divergence between a gamma distribution and a lognormal distribution?
Given: let our $\text{Gamma}(k,\theta)$ random variable have pdf $f(x)$, and let our $\text{Lognormal}(\mu, \sigma)$ random variable have pdf $g(x)$. Then, the Kullback-Leibler divergence between the true distribution $f$ and the Lognormal approximation $g$ is given by: $$E_f\big[\log f(x)\big] - E_f\big[\log g(x)\bi...
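The expectation can be checked numerically. A Monte Carlo sketch of $E_f[\log f(x)] - E_f[\log g(x)]$ with invented parameter values (the particular k, θ, μ, σ are my assumptions, not from the question):

```python
import math
import random

random.seed(0)
k, theta = 3.0, 2.0     # Gamma shape/scale (assumed example values)
mu, sigma = 1.5, 0.6    # Lognormal parameters (assumed example values)

def log_f(x):  # Gamma(k, theta) log-density
    return (k - 1) * math.log(x) - x / theta - k * math.log(theta) - math.lgamma(k)

def log_g(x):  # Lognormal(mu, sigma) log-density
    return (-math.log(x) - math.log(sigma) - 0.5 * math.log(2 * math.pi)
            - (math.log(x) - mu) ** 2 / (2 * sigma ** 2))

# Monte Carlo estimate of KL(f || g) = E_f[log f(x) - log g(x)]
sample = [random.gammavariate(k, theta) for _ in range(200_000)]
kl = sum(log_f(x) - log_g(x) for x in sample) / len(sample)
print(kl)   # non-negative, as any KL divergence must be
```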
48,635
Is it possible to compare probabilities of 2 logistic different models?
Indeed, you cannot reliably compare across logit models with different underlying data. Without repeating what has been written before, this post has a very good answer (or see this paper). In your case, combine the data from different days, and model this: $answer=\alpha+\beta_1Tues+\beta_2Wed+\beta_3Thurs+\beta_4Fri...
48,636
Question about posterior mean calibration
Later in that section, there is an example where the posterior mean using the inferential prior is larger than the posterior mean using the true prior, and this is said to be an example of positive miscalibration. Therefore I think the intended definition of miscalibration is: $$ \text{miscalibration} = \text{(posterio...
48,637
Q-Q plot and sample size
I think there is less here than meets the eye. You need to recognize that the appearance of these plots will bounce around with different data. I modified your code with:
set.seed(2501)
par(mfrow=c(3,3), pty="s")
And then ran the rest of your code three times. Here is the resulting plot: Sometimes the distinct...
48,638
Q-Q plot and sample size
I can think of at least two approaches to better diagnostics for a small-sample-size case: to use a different scale for Q-Q plots in order to visually emphasize deviation from the normal line; to augment visual diagnostics with an analytical approach, as described, for example, here.
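One simple analytical complement (my own illustration, in the spirit of Shapiro-Francia / Ryan-Joiner correlation tests, not taken from the linked reference): compute the correlation between the sorted data and the theoretical normal quantiles, i.e. the correlation underlying the Q-Q plot itself. Values near 1 are consistent with normality.

```python
import random
from statistics import NormalDist

random.seed(3)

def qq_correlation(data):
    """Correlation between order statistics and theoretical normal quantiles."""
    n = len(data)
    xs = sorted(data)
    qs = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]
    mx, mq = sum(xs) / n, sum(qs) / n
    cov = sum((x - mx) * (q - mq) for x, q in zip(xs, qs))
    vx = sum((x - mx) ** 2 for x in xs)
    vq = sum((q - mq) ** 2 for q in qs)
    return cov / (vx * vq) ** 0.5

normal_sample = [random.gauss(0, 1) for _ in range(30)]
print(qq_correlation(normal_sample))
```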
48,639
Chance of me beating my friend in trivia
This question generalizes the famous Problem of Points whose consideration by Blaise Pascal and Pierre Fermat in the summer of 1654 is generally credited as the beginning of probability theory. The Problem of Points itself has been traced back to problems of insurance raised under 13th century (CE) Islamic contract la...
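The Pascal-Fermat recursion behind the Problem of Points is easy to state in code. A sketch (the per-question win probability p = 0.6 is an invented example value): win(a, b) is the probability the first player wins when they still need a points and the opponent needs b.

```python
from functools import lru_cache

p = 0.6  # assumed probability of winning any single point

@lru_cache(maxsize=None)
def win(a, b):
    """P(first player wins | needs a more points, opponent needs b)."""
    if a == 0:
        return 1.0
    if b == 0:
        return 0.0
    return p * win(a - 1, b) + (1 - p) * win(a, b - 1)

print(win(1, 1))   # 0.6: one point decides it
print(win(2, 2))   # 0.648 = p^2 * (3 - 2p), the best-of-three chance
```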
48,640
Confused about 0 intercept in logistic regression in R
The issue is not specific to a GLM. It's an issue of treatment contrasts. You should also look at the model with intercept:
set.seed(42)
y <- as.factor(sample(rep(1:2), 30, T))
x <- as.factor(sample(rep(1:2), 30, T))
z <- as.factor(sample(rep(1:2), 30, T))
fit0 <- glm(y ~ z + x, binomial)
predict(fit0, newdata=data.fr...
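The contrast point can be seen without a GLM at all. A Python sketch with invented numbers: with an intercept (treatment contrasts), the coefficients are a baseline mean plus offsets; without one (cell-means coding), each coefficient is a group mean.

```python
from statistics import mean

# Assumed toy data: outcome y measured under factor levels "1" and "2".
y1 = [2.0, 3.0, 4.0]   # factor level 1
y2 = [5.0, 7.0, 9.0]   # factor level 2

# With an intercept (treatment contrasts), the least-squares solution is:
intercept = mean(y1)                 # baseline = mean of reference level
effect_level2 = mean(y2) - mean(y1)  # shift of level 2 relative to baseline

# Without an intercept (cell-means coding), each coefficient is a group mean:
coef_level1, coef_level2 = mean(y1), mean(y2)

print(intercept, effect_level2)      # 3.0 4.0
print(coef_level1, coef_level2)      # 3.0 7.0
```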
48,641
What if a transformed variable yields more normal and less heteroskedastic residuals but lower $R^2$?
Simply put, you should not use a model that violates its assumptions just because it yields a higher $R^2$. So, you should use the transformed variable for your model. However, bear in mind that the square root is a non-linear transformation. In other words, if a straight line was most appropriate before the transfo...
48,642
Why is independence required for two- sample proportions z test?
All participants answered two questions. One question was answered correctly by 85% and the other question was answered correctly by 65%. I am interested in whether the proportion of correct answers is significantly larger for the first than the second question. That would be a paired test. Why is it wrong to use a two-...
48,643
Pre Window Length Selection with Difference-In-Differences
This paper by Chabé-Ferret (2010) may be interesting in this context. He provides different scenarios under which a DID estimator using pre-post-treatment pairs with equal time distance to the treatment is consistent while using just the most recent pre-treatment period is not consistent. His framework is somewhat rest...
48,644
What measure of effect size in ANOVA has mode at zero under the null (unlike $\eta^2$ that does not)?
$\eta^2$ is the same as $R^2$ in a one-way ANOVA. It is bounded by $[0,\ 1]$. When the null hypothesis holds, the true value of $\eta^2$ is $0$. So the estimator $SSB/SST$ must be biased unless either it can only return $0$ when the null hypothesis is true, or if half its distribution is $<0$. Since it cannot be $<...
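This upward bias is easy to see by simulation (group sizes and counts below are invented): under the null, the estimator $SSB/SST$ averages well above its true value of 0.

```python
import random
from statistics import mean

random.seed(7)

def eta_squared(groups):
    """SSB / SST for a one-way layout."""
    all_obs = [x for g in groups for x in g]
    grand = mean(all_obs)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    sst = sum((x - grand) ** 2 for x in all_obs)
    return ssb / sst

# Under the null, all 3 groups of 10 come from the same normal distribution,
# so the true eta^2 is 0 -- yet SSB/SST >= 0 forces a positive average.
estimates = []
for _ in range(2000):
    groups = [[random.gauss(0, 1) for _ in range(10)] for _ in range(3)]
    estimates.append(eta_squared(groups))

print(mean(estimates))   # noticeably above 0
```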
48,645
Is the converse of this statement true?
Here is my understanding of the terminology. $\mathcal{X}$ is the set of all distributions on the real line. For $F\in\mathcal{X}$, $\mu\in\mathbb{R}$, and $\sigma\in\mathbb{R}\setminus\{0\}$, define a transformation $T_{\mu,\sigma}:\mathcal{X}\to\mathcal{X}$ via $$(T_{\mu,\sigma}(F))(x) = F((x-\mu)/\sigma)$$ for all $x\in \mathbb R$. (This is the acti...
48,646
Clarifications about probit and logit models
Let me start with a couple of persnickety details: We usually refer to the link function as being applied to the LHS, and the inverse of the link function being applied to the RHS. Thus, it would be better to write: $Prob(y=1|x)=G^{-1}(β0+xβ)$. Second, if the probability that y=1 is 50%, then the probability y=0 mus...
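To make the corrected notation concrete, a small sketch of the two inverse links (not fitted models, just the link functions themselves):

```python
import math
from statistics import NormalDist

# Inverse links for Prob(y=1|x) = G^{-1}(b0 + x*b):
def inv_logit(z):
    """Logistic (inverse logit) function."""
    return 1.0 / (1.0 + math.exp(-z))

def inv_probit(z):
    """Standard normal CDF (inverse probit)."""
    return NormalDist().cdf(z)

# At b0 + x*b = 0, both links give probability 0.5,
# so y=1 and y=0 are equally likely.
print(inv_logit(0.0), inv_probit(0.0))   # 0.5 0.5
```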
48,647
How to standardize text data for training Neural Networks?
I've also been trying to use neural networks for text categorization/classification, with limited success. I tried to move away from unigram/bigram features (very sparse, very high-dimensional) to dense and much lower-dimensional representations. I tried LDA (Latent Dirichlet Allocation) and some other feature sele...
48,648
How to standardize text data for training Neural Networks?
Neural networks are not the best way for text classification, and to get good results you need to train them for a long time. If you just want to use a NN, read more about RNNs and word embeddings. RNNs have shown good results for text classification tasks, but they are hard to train for complex tasks. Basically, a word embedding is some i...
48,649
How to standardize text data for training Neural Networks?
Nevermind... I found the answer here PDF link. Using bag-of-words or word classes is also possible.
48,650
Acceptable values for the intraclass correlation coefficient (empty model)
John B. Nezlek argues that the ICC should not be a basis for justifying decisions on multilevel models, because its values can be misleading. In his article he gives a synthetic example of varying within-group relationships when intraclass correlations are 0 (attached below). So some, like Nezlek, would say that this i...
48,651
Applying linear function approximation to reinforcement learning
If you haven't yet, check out this page, which covers SARSA with LFA: http://artint.info/html/ArtInt_272.html Sutton's book is really confusing in how it describes how to set up your feature space F(s,a), but on the web page above, they describe it with a simple example. Applying the architecture of theta and F(s,a) from...
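A minimal sketch of one SARSA update under linear function approximation (the feature vectors, reward, and step sizes are all invented for illustration): with Q(s,a) = theta . F(s,a), the weight vector moves along F(s,a) in proportion to the TD error.

```python
alpha, gamma = 0.1, 0.9   # step size and discount (assumed values)

def q(theta, features):
    """Linear value estimate: Q(s, a) = theta . F(s, a)."""
    return sum(t * f for t, f in zip(theta, features))

theta = [0.0, 0.0, 0.0]
f_sa = [1.0, 0.5, 0.0]        # F(s, a)   -- invented features
f_s2a2 = [0.0, 1.0, 1.0]      # F(s', a') -- invented features
reward = 1.0

# SARSA TD error and gradient step along F(s, a)
td_error = reward + gamma * q(theta, f_s2a2) - q(theta, f_sa)
theta = [t + alpha * td_error * f for t, f in zip(theta, f_sa)]
print(theta)   # [0.1, 0.05, 0.0]
```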
48,652
How can I implement lasso in R using optim function
With standard algorithms for convex smooth optimization, like CG, gradient descent, etc, you tend to get results that are similar to lasso but the coefficients don't become exactly zero. The function being minimized isn't differentiable at zero so unless you hit zero exactly, you're likely to get all coefficients non-z...
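A sketch of why proximal/coordinate-style updates give exact zeros (data invented): for a single predictor, the lasso solution is a soft-thresholded least-squares quantity, so a large enough penalty returns exactly 0.0 rather than a small nonzero number.

```python
# Soft-thresholding: the proximal operator of lam * |b|.
def soft_threshold(c, lam):
    if c > lam:
        return c - lam
    if c < -lam:
        return c + lam
    return 0.0

# Invented one-predictor data; minimize 0.5*sum((y - b*x)^2) + lam*|b|.
x = [1.0, 2.0, 3.0, 4.0]
y = [1.1, 1.9, 3.2, 3.9]

c = sum(xi * yi for xi, yi in zip(x, y))   # x'y
xx = sum(xi * xi for xi in x)              # x'x

beta_small_lam = soft_threshold(c, 0.5) / xx
beta_large_lam = soft_threshold(c, 100.0) / xx

print(beta_small_lam, beta_large_lam)   # the second one is exactly 0.0
```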
48,653
How should I evaluate the expectation of the ratio of two random variables?
$1/\sum{S_i}$ is a convex function in $\sum{S_i}$. Then by Jensen's inequality $$E\left(\frac{1}{\sum{S_i}}\right)>\left(\frac{1}{E[\sum{S_i}]}\right) =\frac{1}{n\cdot P(S_i=1)}$$ the last equality holding if we assume that each respondent has an equal probability of responding or not. An estimator of this probability is the samp...
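A quick Monte Carlo check of the Jensen gap (the values of n and p are invented examples), conditioning on at least one respondent so the ratio is defined:

```python
import random

random.seed(11)
n, p = 20, 0.5   # assumed: n potential respondents, each responds w.p. p

sums, inv_sums = [], []
for _ in range(50_000):
    s = sum(random.random() < p for _ in range(n))
    if s > 0:                    # condition on at least one respondent
        sums.append(s)
        inv_sums.append(1.0 / s)

lhs = sum(inv_sums) / len(inv_sums)   # E[1 / sum S_i]
rhs = 1.0 / (sum(sums) / len(sums))   # 1 / E[sum S_i]
print(lhs > rhs)                      # True, by Jensen's inequality
```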
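The Jensen bound above can be checked by simulation. A small pure-Python sketch, where the sample size n = 20, the response probability q = 0.3, and the conditioning on at least one response are all illustrative assumptions:

```python
import random

# Monte Carlo check of E[1/sum(S_i)] > 1/E[sum(S_i)] for Bernoulli(q)
# response indicators S_i. To keep 1/sum finite we condition on at
# least one response -- an extra assumption, not part of the quoted
# derivation (P(all zero) is negligible here anyway).

random.seed(1)
n, q, sims = 20, 0.3, 50_000
vals = []
for _ in range(sims):
    s = sum(1 for _ in range(n) if random.random() < q)
    if s > 0:
        vals.append(1.0 / s)

lhs = sum(vals) / len(vals)   # estimate of E[1/sum(S_i) | sum > 0]
rhs = 1.0 / (n * q)           # 1 / E[sum(S_i)] = 1/(n * P(S_i = 1))
print(lhs, ">", rhs)
```

The gap between the two sides is exactly the bias one incurs by plugging the expected number of respondents into the denominator.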
48,654
Adding a variance structure when fitting a gamm with Gamma distribution
Your error message "weights must be like glm weights for generalized case" is saying that if you use gamm() in a generalized case (that is, with a non-Gaussian probability distribution such as the Gamma), then the weights argument should be specified as it would be for glmmPQL(). The explanation is that GA...
48,655
Poisson as a limiting case of negative binomial
Consider that $$({au\over 1+au})^y=({u\over a^{-1}+u})^y$$ and then take the denominator over into the ratio of Gammas. I think all you need to do then is make an argument that the resulting term with the gammas and the denominator goes to 1. I believe this is one of the relations discussed in the middle of this sectio...
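Numerically, the limit is easy to see by evaluating the pmf in the mean/dispersion parameterization used above and letting the dispersion $a$ shrink; the mean u = 4 and the grid of a values below are illustrative choices:

```python
import math

# Negative binomial pmf in the mean (u) / dispersion (a) form matching
# the quoted expression:
#   P(Y=y) = Gamma(y + 1/a) / (Gamma(1/a) y!)
#            * (1/(1+a*u))^(1/a) * (a*u/(1+a*u))^y
# As a -> 0 this should approach Poisson(u).

def nb_pmf(y, u, a):
    r = 1.0 / a
    log_p = (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
             + r * math.log(1.0 / (1.0 + a * u))
             + y * math.log(a * u / (1.0 + a * u)))
    return math.exp(log_p)

def poisson_pmf(y, u):
    return math.exp(-u + y * math.log(u) - math.lgamma(y + 1))

u = 4.0
for a in (1.0, 0.1, 0.001):
    gap = max(abs(nb_pmf(y, u, a) - poisson_pmf(y, u)) for y in range(30))
    print(f"a = {a}: max pmf gap = {gap:.6f}")
```

The maximum pointwise gap between the two pmfs shrinks steadily as the dispersion goes to zero, which is the limit the answer sketches analytically.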
48,656
Poisson as a limiting case of negative binomial
This is covered under http://en.wikipedia.org/wiki/Negative_binomial_distribution#Poisson_distribution The key is the parameterization of the dispersion parameter.
48,657
Can correlated random effects "steal" the variability (and the significance) from the regression coefficient?
Since $\gamma_j$ is assumed to follow a zero-mean normal distribution, any deviation of the predicted value of $\gamma_j$ from zero will be penalized in the likelihood function relative to the variance $\sigma^2$. Thus it will be "cheaper" in terms of likelihood to put year-consistent variability into the fixed effect ...
48,658
Can correlated random effects "steal" the variability (and the significance) from the regression coefficient?
If I understand your description correctly, I would say you are more likely to see a significant coefficient $\hat{\beta}$ by including a random effect. The reason is that with the introduction of $\gamma$, you now explicitly distinguish between-year variability and within-year variability. The overall variance in your...
48,659
Question about definition of random sample
It's a good question. A lot of introductory statistics books are a bit vague when it comes to the mathematical set-up of the topics they treat. The answer probably requires some familiarity with non-basic probability theory, but I think you'll follow just fine. A stochastic variable is a measurable function from a back...
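Writing the setup out in symbols (standard measure-theoretic definitions, supplied here because the excerpt is cut off): a stochastic variable is a measurable map from a background probability space, $$X:(\Omega,\mathcal{F},P)\to(\mathbb{R},\mathcal{B}),\qquad X^{-1}(B)\in\mathcal{F}\ \text{for every Borel set } B\in\mathcal{B},$$ and a random sample of size $n$ is then a vector $(X_1,\dots,X_n)$ of such maps on a common space that are independent with a common distribution, $$P(X_1\in B_1,\dots,X_n\in B_n)=\prod_{i=1}^{n}P(X_1\in B_i)\quad\text{for all Borel sets } B_1,\dots,B_n.$$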
48,660
mtry tuning given by caret higher than the number of predictors
Try using train with the matrix argument, i.e.

tr1 <- train(Sepal.Length ~ ., data = iris)  # gives mtry = 5, not allowed

but change to

tr2 <- train(iris[, -1], iris[, 1])  # gives mtry = 3

I think train creates the model matrix and then passes it to randomForest when using the formula argument, thus considering every...
48,661
Quantiles of a compound gamma/negative binomial distribution
As a practical answer to the real questions you're addressing, such high quantiles will generally be quite sensitive to issues of model choice (especially such things as whether you model the right-censoring and how heavy the tails are in the components). But in any case - especially when dealing with high quantiles...
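A minimal simulation sketch along those lines in pure Python; the negative binomial and gamma parameter values, and the 99.5% level, are illustrative assumptions:

```python
import math
import random

# Simulation of a high quantile of a compound sum S = X_1 + ... + X_N,
# with N negative binomial (claim count) and X_i gamma (claim sizes),
# in the spirit of the answer's advice to lean on simulation rather
# than closed forms for such quantities.

random.seed(2)

def sample_neg_binom(r, p):
    # NB(r, p) with integer r: total failures before the r-th success,
    # i.e. a sum of r geometric(p) failure counts (inverse transform)
    total = 0
    for _ in range(r):
        u = 1.0 - random.random()          # u in (0, 1]
        total += int(math.log(u) / math.log(1.0 - p))
    return total

def sample_compound(r, p, shape, scale):
    n = sample_neg_binom(r, p)
    return sum(random.gammavariate(shape, scale) for _ in range(n))

sims = 20_000
totals = sorted(sample_compound(r=3, p=0.5, shape=2.0, scale=1.0)
                for _ in range(sims))
q995 = totals[int(0.995 * sims)]
print("estimated 99.5% quantile:", q995)
```

With these parameters the mean total is E[N]E[X] = 3 * 2 = 6; the simulated mean can be used as a sanity check before trusting the tail quantile, and the Monte Carlo error in such a tail estimate is exactly the sensitivity issue the answer warns about.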
48,662
Kolmogorov distribution
The function that is shown implements the CDF of the one-sided KS statistic $$ D_n^{+} = \sup_{x}\{\hat{F}_n(x) - F(x)\}, $$ where $F(x)$ is the theoretical (continuous) CDF and $\hat{F}_n(x)$ is the empirical CDF of a sample of size $n$. So $D_n^{+}$ has the CDF shown in the question: $$ F_{D_n^{+}}(x) = 1-x\sum_{j=0}^{\lfloor...
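The closed form in question is the Birnbaum-Tingey expression; since the excerpt cuts the sum off, the formula below is supplied from the standard result rather than from the quoted text, and is cross-checked here against direct simulation of $D_n^+$:

```python
import math
import random

# One-sided KS tail probability (Birnbaum-Tingey):
#   P(D_n^+ > d) = d * sum_{j=0}^{floor(n(1-d))}
#                  C(n,j) * (d + j/n)^(j-1) * (1 - d - j/n)^(n-j)

def ks_plus_tail(n, d):
    total = 0.0
    for j in range(int(n * (1.0 - d)) + 1):
        total += (math.comb(n, j)
                  * (d + j / n) ** (j - 1)
                  * (1.0 - d - j / n) ** (n - j))
    return d * total

def simulate_d_plus(n):
    # D_n^+ = max_i (i/n - U_(i)) for uniform order statistics U_(i)
    u = sorted(random.random() for _ in range(n))
    return max((i + 1) / n - u[i] for i in range(n))

random.seed(3)
n, d, sims = 10, 0.35, 20_000
mc = sum(simulate_d_plus(n) > d for _ in range(sims)) / sims
print(ks_plus_tail(n, d), mc)
```

A quick sanity check: for n = 1 the statistic is 1 - U, so the tail probability at d is simply 1 - d, which the formula reproduces.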
48,663
Kolmogorov distribution
The expression for the Kolmogorov-Smirnov CDF is provided in the Wikipedia link: http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Kolmogorov_distribution

Kolmogorov distribution

The Kolmogorov distribution is the distribution of the random variable $K=\sup_{t\in[0,1]}|B(t)|$ where $B(t)$ is the Brownia...
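The limiting distribution of that supremum has a well-known alternating series that is easy to evaluate directly; a short sketch, where the evaluation point 1.358 is the classical 5% critical value:

```python
import math

# Limiting (two-sided) Kolmogorov CDF from the series
#   P(K <= x) = 1 - 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 * k^2 * x^2),
# i.e. the distribution of sup|B(t)| over the Brownian bridge.

def kolmogorov_cdf(x, terms=100):
    s = sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * x * x)
            for k in range(1, terms + 1))
    return 1.0 - 2.0 * s

# the classical 5% critical value ~1.358 should give CDF ~0.95
print(kolmogorov_cdf(1.358))
```

The series converges very quickly except for x near zero, where more terms are needed; 100 terms is overkill for any x of practical interest.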
48,664
Is residuals autocorrelation always a problem?
Correlated residuals in time series analysis may imply problems far worse than low efficiency: if the structure of autocorrelation implies integrated or near-integrated data, then any inferences about levels, means, variances, etc. may be spurious (with unknown direction of bias), because the population mean is undefined and the...
48,665
Is residuals autocorrelation always a problem?
1) The time series auto-correlation you refer to is the correlation between a time series and the time-shifted series; "time" is observed when the data are collected. In your example, auto-correlation by shifting car maker or model is not very meaningful. For new cars, shifting year (comparing year-over-year sales of the ...
48,666
Logrank test for trend (proportional hazards)
Try comp from the survMisc package, which extends the survival package. It computes the statistic and p-value for the log-rank test, as well as for the Gehan-Breslow, Tarone-Ware, Peto-Peto and Fleming-Harrington tests, and tests for trend (for all of the above). The example taken from the manual is the following: data(larynx, pack...
48,667
Logrank test for trend (proportional hazards)
Your question is not very clear, so not sure if this is what you are looking for. To test the proportional hazards assumption you can use the Grambsch-Therneau test on Schoenfeld residuals of the proportional hazards model. This essentially tests the slope of (scaled) residuals as a function of follow-up time.
48,668
Anderson Darling exponential distribution
The same considerations apply as to the distribution of the Kolmogorov–Smirnov test statistic discussed here. The Anderson–Darling test statistic (for a given sample size) has a distribution that (1) doesn't depend on the null-hypothesis distribution when all parameters are known, & (2) depends only on the functional ...
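The scale-family point can be demonstrated directly: with the rate estimated by 1/mean, the Anderson-Darling statistic is invariant to rescaling the data, so its null distribution cannot depend on the true rate. A pure-Python sketch, where the sample size and rates are illustrative assumptions:

```python
import math
import random

# Anderson-Darling statistic for an exponential null with the rate
# estimated from the data. Because the exponential is a scale family
# and 1/mean is scale-equivariant, multiplying the data by a constant
# leaves the statistic unchanged (up to floating point).

def ad_exponential(x):
    x = sorted(x)
    n = len(x)
    lam = n / sum(x)                               # 1 / sample mean
    z = [1.0 - math.exp(-lam * v) for v in x]      # fitted CDF values
    s = sum((2 * i + 1) * (math.log(z[i]) + math.log(1.0 - z[n - 1 - i]))
            for i in range(n))
    return -n - s / n

random.seed(4)
sample = [random.expovariate(2.5) for _ in range(50)]
a2 = ad_exponential(sample)
a2_scaled = ad_exponential([10.0 * v for v in sample])
print(a2, a2_scaled)  # identical up to floating point
```

This exact invariance is why the null distribution (for estimated rate) can be tabulated once per sample size, exactly as the answer describes, rather than per parameter value.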
48,669
Estimating the ratio of cell means in ANOVA under lognormal assumption
First off, I find it hard to understand why you preferred a 1-way ANOVA over a t-test, since you did not look for interactions. As a second remark, I would check the assumptions of ANOVA: it might be that the variances of the two samples differ significantly. Finally, in a linear regression setting with logged d...
48,670
Estimating the ratio of cell means in ANOVA under lognormal assumption
$\log Y = b_0 + b_1 X$ When you omit the error term, you lead yourself straight into difficulty that is otherwise easily avoided. Clearly the equation you wrote is false, otherwise you wouldn't need to do estimation. Two $y$ values would be sufficient to estimate two parameters exactly (two equations in two unknowns)...
48,671
Estimating the ratio of cell means in ANOVA under lognormal assumption
The exponentiated arithmetic mean of logged values is the geometric mean of the original values. So when you model $\log Y$ and exponentiate, you get back the geometric means. In other words $E[\log Y | X]$ is the arithmetic mean of $\log Y$, and exponentiating that gives you the geometric mean of $Y$. This carries ove...
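Numerically, with arbitrary illustrative values:

```python
import math

# Exponentiating the mean of the logs recovers the geometric mean of
# the original values, not the arithmetic mean -- the point made above.

y = [1.2, 3.4, 5.6, 2.2, 9.1]
n = len(y)

mean_log = sum(math.log(v) for v in y) / n
geo_mean = math.prod(y) ** (1.0 / n)
arith_mean = sum(y) / n

print(math.exp(mean_log), geo_mean, arith_mean)
```

The first two numbers agree, and both sit below the arithmetic mean (as AM-GM guarantees for non-constant data), which is why back-transformed estimates from a log-scale model are statements about geometric means and their ratios.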
48,672
From joint cdf to joint pdf
A joint distribution has domain $(-\infty, \infty) \times (-\infty, \infty)$. If we partition each component of the Cartesian product in two by selecting some value $x$ and some value $y$, then we get $4$ subsets, $$(-\infty, x] \times (-\infty, y],\;\;(-\infty, x] \times [y,\infty),\\ [x, \infty) \times (-\infty, ...
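For completeness, the standard identities this partition argument leads to: the probability of a rectangle follows by inclusion-exclusion, $$\Pr(a<X\le b,\;c<Y\le d)=F(b,d)-F(a,d)-F(b,c)+F(a,c),$$ and where $F$ is sufficiently smooth the joint pdf is obtained by differentiating once in each argument, $$f(x,y)=\frac{\partial^2 F(x,y)}{\partial x\,\partial y}.$$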
48,673
How to avoid NaN in using ReLU + Cross-Entropy? [duplicate]
The recommended thing to do when using ReLUs is to clip the gradient during the SGD update if its norm is above a certain threshold (suggested by Mikolov; see http://arxiv.org/pdf/1211.5063.pdf). This requires another hyperparameter, the threshold. The suggestion from the referenced paper is to sample some gradients t...
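The clipping rule itself is a one-liner; a pure-Python sketch, with illustrative threshold values:

```python
import math

# Gradient-norm clipping as described above: if ||g|| exceeds a
# threshold, rescale g so its norm equals the threshold before the
# SGD update. The threshold is the extra hyperparameter mentioned.

def clip_by_norm(grad, threshold):
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > threshold:
        scale = threshold / norm
        return [g * scale for g in grad]
    return list(grad)

g = [3.0, 4.0]                 # norm 5
print(clip_by_norm(g, 1.0))    # rescaled to norm 1
print(clip_by_norm(g, 10.0))   # under threshold, returned unchanged
```

Note the direction of the gradient is preserved; only its magnitude is capped, which is what keeps exploding updates from driving activations (and hence the cross-entropy) to NaN.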
48,674
The 'best' model selected with AICc has a lower $R^2$ than the full/global model
Is your goal model parsimony, or the predictive power of the model? If parsimony, then use AIC; if predictive power, then $R^2$. Usually the answer is similar, but if you are comparing models with very similar $R^2$, or a number of low-quality predictors, the answers can be different. This is why in regular regression w...
48,675
The 'best' model selected with AICc has a lower $R^2$ than the full/global model
$R^2$ tells you how much of the variance a model explains. AIC is based on the KL distance and compares models relative to one another. For instance, if you wanted to compare using $R^2$, you'd want to know if the change in $R^2$ is significant. If not, take the simpler model for parsimony's sake; if so, take the more comple...
48,676
Matrix Factorization Recommendation Systems with Only "Like" Ratings
This problem is usually called implicit feedback. The typical solution is similar to word2vec noise-contrastive estimation: predict likes with log-loss, using your set of actual likes (p=1) and a randomly generated set of potential non-likes (p=0). Usually you want to generate this non-likes set from a similar distribu...
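A toy sketch of this recipe in pure Python; the tiny likes matrix, embedding size, learning rate, iteration count and uniform negative sampling are all illustrative assumptions, not details from the answer:

```python
import math
import random

# Negative-sampling matrix factorization for implicit feedback:
# observed likes are positives (label 1), randomly drawn unobserved
# user-item pairs serve as negatives (label 0), trained with a
# logistic (log-loss) model on the dot product of embeddings.

random.seed(5)
n_users, n_items, k, lr = 4, 6, 3, 0.1
likes = {(0, 0), (0, 1), (1, 1), (1, 2), (2, 3), (2, 4), (3, 4), (3, 5)}

U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(u, i, label):
    pred = sigmoid(sum(U[u][f] * V[i][f] for f in range(k)))
    err = label - pred            # gradient of log-loss wrt the score
    for f in range(k):
        uf, vf = U[u][f], V[i][f]
        U[u][f] += lr * err * vf
        V[i][f] += lr * err * uf

def log_loss():
    # evaluated over all pairs, treating unobserved pairs as zeros
    total = 0.0
    for u in range(n_users):
        for i in range(n_items):
            p = sigmoid(sum(U[u][f] * V[i][f] for f in range(k)))
            y = 1.0 if (u, i) in likes else 0.0
            total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / (n_users * n_items)

before = log_loss()
for _ in range(2000):
    u, i = random.choice(list(likes))      # a positive example
    sgd_step(u, i, 1.0)
    # one uniformly sampled candidate "non-like" per positive
    nu, ni = random.randrange(n_users), random.randrange(n_items)
    if (nu, ni) not in likes:
        sgd_step(nu, ni, 0.0)

after = log_loss()
print(before, "->", after)
```

After training, observed likes score systematically higher than sampled non-likes, which is all a ranking-oriented recommender needs; in practice one would sample negatives from a popularity-weighted distribution, as the answer hints.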
48,677
Matrix Factorization Recommendation Systems with Only "Like" Ratings
Yes, this is known as "unary" data (or often "implicit" data if you're only using clicks or impressions). The most common matrix factorization technique used is probably alternating least squares, outlined in this paper (PDF): Hu, Koren, and Volinsky. There are implementations in many common machine learning software pac...
48,678
Which of these points in this plot has the highest leverage and why?
The leverage is $h_{ii}=\frac{1}{n}+\frac{(x_i-\bar{x})^2}{\sum (x_i-\bar{x})^2}\,$. The term $\frac{1}{n}$ and the denominator of the second term $\sum (x_i-\bar{x})^2$ are the same for every $i$, so the point with the largest $(x_i-\bar{x})^2$ has the highest leverage. This means that the point furthest from the mean...
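The formula is easy to verify numerically; the x values below are arbitrary, with the point at x = 10 furthest from the mean:

```python
# Leverages for simple linear regression, per the formula above:
#   h_ii = 1/n + (x_i - xbar)^2 / sum_j (x_j - xbar)^2
# The point furthest from the mean of x gets the largest leverage.

x = [1.0, 2.0, 3.0, 4.0, 10.0]
n = len(x)
xbar = sum(x) / n                      # 4.0
sxx = sum((v - xbar) ** 2 for v in x)  # 50.0
h = [1.0 / n + (v - xbar) ** 2 / sxx for v in x]

print(h)
# For a straight-line fit the leverages sum to the number of fitted
# parameters, here 2 (intercept + slope).
print(sum(h))
```

Note that leverage is a property of the x values alone: it says nothing about whether the point's y value is unusual, only how strongly that point can pull the fitted line.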
48,679
What transformations preserve the von Mises distribution?
Obviously $\mu$ is a location parameter, meaning that translations of the variable preserve the family. Focus now on the shape parameter $\kappa$. Consider any family $\Omega=\{F_\theta|\theta\in\Theta\}$ of continuous distributions. By virtue of this continuity, whenever $X\sim F_\theta$ and $0\le q\le 1$, $$\Pr(F_\...
48,680
How to distance and to MDS-plot objects according their complex shape
This may be only a partial answer because I don't think the plot that you expect is really what is in the data, especially the "parallelity and continuity" of the intermediate signals. I will speculate on reasons for that below. But I think I was able to get to what you are looking for in terms of the four basal signals A1, ...
48,681
Can AUC decrease with additional variables?
The effect of uninformative features depends largely on your modeling strategy. For some approaches they are irrelevant while for others they can dramatically decrease overall performance. Your intuition that using more features should necessarily yield a better model is wrong.
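As a concrete (if synthetic) illustration of how a distance-based learner degrades with uninformative features, here is a stdlib-only Python sketch with a hand-rolled 5-NN scorer and the rank (Mann-Whitney) form of AUC; all data and settings are made up:

```python
import random

def auc(scores, labels):
    # AUC as the Mann-Whitney probability that a positive outscores a negative
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def knn_score(train_X, train_y, x, k=5):
    # score = fraction of positives among the k nearest training points
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return sum(train_y[i] for i in order[:k]) / k

random.seed(0)
n, noise_dims = 200, 30
y = [i % 2 for i in range(n)]
signal = [[random.gauss(2.0 * yy, 1.0)] for yy in y]   # one informative feature
noisy = [row + [random.gauss(0, 1) for _ in range(noise_dims)] for row in signal]

half = n // 2
results = {}
for X, name in [(signal, "signal only"), (noisy, "signal + 30 noise features")]:
    scores = [knn_score(X[:half], y[:half], x) for x in X[half:]]
    results[name] = auc(scores, y[half:])
    print(name, round(results[name], 3))
```

The noise features dominate the distances, so the held-out AUC drops even though the informative feature is still present.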
48,682
Can AUC decrease with additional variables?
4 years late but I just had the same experience now. For logistic regression, the model should be smart enough to disregard useless variables. There is no constraint preventing the coefficients of these variables from being 0. It is important to remember how a logistic regression works. I believe the model optimises s...
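The claim that logistic regression drives the coefficients of useless variables toward zero can be checked with a tiny stdlib-only gradient-descent fit (synthetic data; the learning rate and iteration count are arbitrary choices):

```python
import math, random

random.seed(0)
n = 400
y = [i % 2 for i in range(n)]
# column 0 is informative (class means 0 vs 2), column 1 is pure noise
X = [[random.gauss(2.0 * yy, 1.0), random.gauss(0.0, 1.0)] for yy in y]

w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):                      # plain batch gradient descent
    gw = [0.0, 0.0]
    gb = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w[0] * xi[0] + w[1] * xi[1] + b)))
        err = p - yi
        gw[0] += err * xi[0]
        gw[1] += err * xi[1]
        gb += err
    w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
    b -= lr * gb / n
print(round(w[0], 2), round(w[1], 2))
```

The noise coefficient lands near zero (at the scale of its sampling error), while the informative one is large, so nothing in the likelihood forces the extra variable to hurt in-sample fit.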
48,683
Can AUC decrease with additional variables?
Check whether you have missing values in the new variables. Logistic regression rejects the cases with missing data, and only fits the model on the complete cases. You must make sure that you are comparing the discrimination in the same cohorts.
48,684
Longitudinal item response theory models in R
As a precursor, the IRT approach to this problem is very demanding computationally due to the higher dimensionality. It may be worthwhile to look into structural equation modeling (SEM) alternatives using the WLSMV estimator for ordinal data since I imagine fewer issues will exist. Plus, including external covariates is...
48,685
Longitudinal item response theory models in R
In the IRT literature for complicated IRT models (multiple groups, longitudinal/repeated measures, multidimensional) the recommended framework is Bayesian, because of the relative ease of estimation. I have had good experience using the R package "rstan", which implements a flavor of Hamiltonian Monte Carlo. I had a data ...
48,686
Variance as a function of parameters
This looks like a standard heteroskedastic model, where we treat heteroskedasticity the "old-fashioned way", i.e. by explicitly modelling the error variance as a function of some other variables (which may be the regressors themselves or not). In its simplest form the model is Weighted Least Squares. Various speci...
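A minimal sketch of this "old-fashioned" treatment, assuming the error variance is known up to the form $\sigma_i^2 \propto x_i^2$ (synthetic data, stdlib Python; the weights are the reciprocal variances):

```python
import random

random.seed(0)
n = 500
xs = [0.5 + 2.0 * random.random() for _ in range(n)]
# true model: y = 1 + 2x + e, with error sd proportional to x (heteroskedastic)
ys = [1.0 + 2.0 * x + random.gauss(0.0, x) for x in xs]

w = [1.0 / (x * x) for x in xs]            # weight_i = 1 / Var(e_i), known here
sw = sum(w)
xbw = sum(wi * x for wi, x in zip(w, xs)) / sw       # weighted means
ybw = sum(wi * yv for wi, yv in zip(w, ys)) / sw
b = (sum(wi * (x - xbw) * (yv - ybw) for wi, x, yv in zip(w, xs, ys))
     / sum(wi * (x - xbw) ** 2 for wi, x in zip(w, xs)))
a = ybw - b * xbw
print(round(a, 2), round(b, 2))
```

The weighted estimates recover the true intercept 1 and slope 2 well despite the growing error variance.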
48,687
Variance as a function of parameters
To me the question speaks of straight-up mixed models where the typical homogeneous (homoscedastic) error term is (possibly) decomposed into levels and (possibly) explained at each level using functions. For example, suppose you have a model that looks like this: (1) $y_{i} = \beta_{0} + \beta_{1}x_{1} + \beta_{2}x_{2}...
48,688
Convergence in probability, $X_i$ IID with finite second moment
Actually, we can even show that $\mathbb E|Y_n-\mathbb E[X_1]|^2\to 0$. Indeed, since $\sum_{j=1}^nj=n(n+1)/2$ and $\mathbb E[X_j]=\mathbb E[X_1]$ for all $j$, $$Y_n-\mathbb E[X_1]=\frac 2{n(n+1)}\sum_{j=1}^nj(X_j-\mathbb E[X_j]),$$ hence $$\tag{1}\mathbb E|Y_n-\mathbb E[X_1]|^2=\frac 4{n^2(n+1)^2}\sum_{i,j=1}^n ij\m...
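A quick simulation backs this up: the mean squared error of $Y_n$ shrinks roughly like $1/n$ (here with Uniform(0,1) variates, so $\mathbb E[X_1]=1/2$; the sample sizes and replicate count are arbitrary):

```python
import random

def weighted_avg(xs):
    # Y_n = 2/(n(n+1)) * sum_j j * X_j
    n = len(xs)
    return 2.0 * sum((j + 1) * x for j, x in enumerate(xs)) / (n * (n + 1))

random.seed(1)
mu = 0.5                                   # mean of Uniform(0, 1)
mse = {}
for n in (10, 100, 1000):
    errs = [(weighted_avg([random.random() for _ in range(n)]) - mu) ** 2
            for _ in range(500)]
    mse[n] = sum(errs) / len(errs)
    print(n, round(mse[n], 5))
```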
48,689
Convergence in probability, $X_i$ IID with finite second moment
Much later, here's an updated answer without hints. I mostly wanted to see if I could make sense of the details. This proof of almost sure convergence (which implies convergence in probability) complements the supplied proof of convergence in mean square and the direct proof using Chebyshev's inequality. Proof outline...
48,690
Plotting a categorical response as a function of a continuous predictor using R
This is exploration: we should feel free to be creative and to look in many different ways at the data to develop insight. In this spirit, an attractive approach eschews binning the independent variable. Instead, compute and smooth a running summary of the dependent variable (proportion of incomes less than 50,000 per...
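A bare-bones version of that running-summary idea, in Python rather than R (synthetic data standing in for the income indicator; the window width is arbitrary): sort by the predictor and take a moving-window proportion of the binary response.

```python
import math, random

random.seed(0)
n = 1000
xs = sorted(random.gauss(40.0, 10.0) for _ in range(n))        # stands in for age
# binary response whose true probability rises smoothly with x
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-(x - 45.0) / 5.0)) else 0
      for x in xs]

w = 100                                    # moving-window half-width, in ranks
prop = []
for i in range(n):
    lo, hi = max(0, i - w), min(n, i + w + 1)
    prop.append(sum(ys[lo:hi]) / (hi - lo))
for i in range(0, n, 200):                 # a few points along the running curve
    print(round(xs[i], 1), round(prop[i], 2))
```

Plotting `prop` against `xs` gives the smoothed proportion curve without ever binning the independent variable.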
48,691
Plotting a categorical response as a function of a continuous predictor using R
The plot you highlight in your question reminds me of using a loess (or lowess) curve to visualise a continuous variable against a binary response. Of course, the line corresponds with the histogram example where the two colours meet. I can't see in your example if the data is raw or modelled (as my example is)...
48,692
Checking MCMC convergence with a single chain
First, the Gelman-Rubin test does not check convergence of an MCMC Markov chain but simply agreement between several parallel chains: if all chains miss a highly concentrated but equally highly important mode of the target distribution, the Gelman-Rubin criterion concludes that the chains have converged. Using mul...
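For what a single chain can offer, one common (and likewise imperfect, for exactly the missed-mode reason above) diagnostic is the autocorrelation-based effective sample size; a stdlib Python sketch on an artificial AR(1) "chain":

```python
import random

random.seed(0)
phi, N = 0.9, 5000
x = [0.0]
for _ in range(N - 1):                     # AR(1) series standing in for MCMC output
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

m = sum(x) / N
c0 = sum((xi - m) ** 2 for xi in x) / N    # lag-0 autocovariance
s = 0.0
for k in range(1, N):
    rk = sum((x[i] - m) * (x[i + k] - m) for i in range(N - k)) / N / c0
    if rk < 0.05:                          # truncate the sum once correlation dies out
        break
    s += rk
ess = N / (1 + 2 * s)                      # effective number of independent draws
print(round(ess))
```

With $\phi=0.9$ the 5000 correlated draws are worth only a few hundred independent ones, which a between-chain comparison alone would not reveal.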
48,693
Estimating the error in the average of correlated values
This is an active area of research. The first question is whether a central limit theorem (CLT) even exists, which depends on the mixing properties of your MCMC, e.g. geometric convergence. Typically, this is a nontrivial question. Provided a CLT exists, the second question is how to obtain a consistent estimator of t...
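One standard consistent estimator under suitable mixing conditions is batch means; a stdlib Python sketch on an artificial AR(1) chain, compared with the naive iid standard error (which badly underestimates the error for correlated draws):

```python
import math, random

random.seed(0)
phi, N = 0.9, 10000
x = [0.0]
for _ in range(N - 1):                     # AR(1) series standing in for MCMC draws
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

b = int(math.sqrt(N))                      # batch size ~ sqrt(N) is a common default
nb = N // b
means = [sum(x[i * b:(i + 1) * b]) / b for i in range(nb)]
g = sum(means) / nb
var_hat = b * sum((m - g) ** 2 for m in means) / (nb - 1)  # asymptotic variance est.
se_batch = math.sqrt(var_hat / N)
# the naive iid formula ignores the autocorrelation
mean_all = sum(x) / N
se_naive = math.sqrt(sum((xi - mean_all) ** 2 for xi in x) / (N - 1) / N)
print(round(se_batch, 4), round(se_naive, 4))
```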
48,694
Estimating the error in the average of correlated values
The problem of finding error estimates of statistics in (autocorrelated) time series is usually approached via block bootstrapping. It is the same in spirit as your approach. See Section 5 of this document for a very short summary [1]. There is also some parallel work in the physics community, where ideas from renormal...
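A minimal moving-block bootstrap of the standard error of the mean (stdlib Python, artificial AR(1) data; block length and replication counts are arbitrary), contrasted with an iid bootstrap that ignores the dependence:

```python
import math, random

random.seed(0)
phi, N, L = 0.8, 2000, 50                  # L = block length
x = [0.0]
for _ in range(N - 1):                     # autocorrelated AR(1) data
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

def sd(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((u - m) ** 2 for u in v) / (len(v) - 1))

boot_block, boot_iid = [], []
for _ in range(300):
    resample = []
    for _ in range(N // L):                # glue together random contiguous blocks
        s = random.randrange(N - L + 1)
        resample.extend(x[s:s + L])
    boot_block.append(sum(resample) / len(resample))
    boot_iid.append(sum(random.choice(x) for _ in range(N)) / N)
se_block, se_iid = sd(boot_block), sd(boot_iid)
print(round(se_block, 3), round(se_iid, 3))
```

Resampling whole blocks preserves the short-range dependence, so the block-bootstrap standard error is substantially larger (and much closer to the truth) than the iid one.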
48,695
Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm?
Well, random forest uses bagging, which is specifically designed to reduce problems with overfitting. Ensemble methods like bagging and CV are both ways to avoid overfitting. Cross-validation can be used in random forest modelling in various ways - e.g. to find the optimal number of trees - but I don't know anywhere it...
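One detail behind bagging's built-in validation (my addition; the answer above does not state it explicitly): each bootstrap sample leaves out roughly $1 - 1/e \approx 36.8\%$ of the rows, which is what gives random forests their "free" out-of-bag error estimate in place of cross-validation. A quick stdlib check:

```python
import random

random.seed(0)
n, reps = 1000, 200
oob = []
for _ in range(reps):
    in_bag = {random.randrange(n) for _ in range(n)}   # one bootstrap sample
    oob.append(1.0 - len(in_bag) / n)                  # fraction never drawn
avg = sum(oob) / reps
print(round(avg, 3))
```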
48,696
Is it necessary to use cross-validatation to avoid overfitting when applying random forest algorithm?
As random forests work on the concept of bootstrap aggregating, there is no special need for cross-validation. While dealing with a large number of trees in the forest, cross-validation will take much of your time. And as Glen_b also mentioned, CV and bagging are two approaches to reduce overfitting, so using one of...
48,697
What's the algorithm for finding sequences used by TraMineR?
As stated in Ritschard et al. (2013), the algorithm implemented in TraMineR is an adaptation of the prefix-tree-based search described in Masseglia (2002). Masseglia, F. (2002). Algorithmes et applications pour l'extraction de motifs sequentiels dans le domaine de la fouille de donnees : de l'incremental au temps ...
48,698
Parameter region for existence of solutions of equation
To address the general question, consider using a tool that is well adapted to such calculations and visualizations, such as Mathematica. (This was used to plot the first two and last two figures below.) This particular question is amenable to further analysis which enables R to display $S$: for each $x\in [0,1]$, we ...
48,699
How to choose the right number of parameters in Logistic Regression?
Distortion of statistical properties can occur when you "fit to the data", so I think of this more in terms of specifying the number of parameters that I can afford to estimate and that I want to devote to the portion of the model that pertains to that one predictor. I use regression splines, place knots where $X$ is ...
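For concreteness, one widely used default places the spline knots at fixed quantiles of $X$ (the quantiles below, 0.05/0.35/0.65/0.95 for four knots, follow Harrell's Regression Modeling Strategies as I recall them, so treat them as an assumption); a stdlib Python sketch:

```python
import random

def quantile(sorted_xs, q):
    # linear-interpolation quantile on a pre-sorted sample
    i = q * (len(sorted_xs) - 1)
    lo = int(i)
    hi = min(lo + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] + (i - lo) * (sorted_xs[hi] - sorted_xs[lo])

random.seed(0)
xs = sorted(random.gauss(0.0, 1.0) for _ in range(500))
knots = [quantile(xs, q) for q in (0.05, 0.35, 0.65, 0.95)]
print([round(k, 2) for k in knots])
```

Fixing the knot locations by quantile, rather than hunting for them by fit, avoids "fitting to the data" while still letting the spline bend where the observations are dense.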
48,700
Confidence Interval for predictions for Poisson regression
To address Q1, let's start by making some data to play with:

lo.to.p <- function(lo){   # this function will convert log odds to probabilities
  o <- exp(lo)             # we get odds by exponentiating log odds
  p <- o/(o+1)             # we convert to probabilities
  return(p)
}
set.seed(90)               # t...