Columns: idx: int64 (1–56k) · question: string (length 15–155) · answer: string (length 2–29.2k) · question_cut: string (length 15–100) · answer_cut: string (length 2–200) · conversation: string (length 47–29.3k) · conversation_cut: string (length 47–301)
47,801
Does Fisher's Exact test for a $2\times 2$ table use the Non-central Hypergeometric or the Hypergeometric distribution?
As @Glen_b says, under the null hypothesis of an odds ratio of one, Fisher's non-central hypergeometric distribution reduces to a hypergeometric distribution. However, the fisher.test function, as well as carrying out Fisher's Exact Test, (1) also calculates the conditional maximum-likelihood estimate of, & confidence i...
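The reduction the answer describes can be illustrated with a small self-contained sketch (the answer concerns R's fisher.test; this hand-rolled Python function is an illustration of the idea, not that implementation). Under the null of an odds ratio of one, the first cell of the table follows the central hypergeometric distribution given the margins, and the two-sided p-value sums the probabilities of all tables at least as extreme:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Under H0 (odds ratio = 1) the (1,1) cell follows an ordinary
    (central) hypergeometric distribution given the margins.
    """
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    lo, hi = max(0, col1 - (c + d)), min(row1, col1)
    pmf = {k: comb(row1, k) * comb(n - row1, col1 - k) / denom
           for k in range(lo, hi + 1)}
    p_obs = pmf[a]
    # sum all tables whose probability is <= that of the observed table
    return sum(p for p in pmf.values() if p <= p_obs * (1 + 1e-12))

# tiny worked case: margins (2,2)/(2,2), support pmf = {0: 1/6, 1: 4/6, 2: 1/6}
p = fisher_exact_two_sided(2, 0, 0, 2)   # -> 1/3 (the two tails 1/6 + 1/6)
```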
47,802
coxme proportional hazard assumption
Apologies for making this a separate answer, but I cannot comment because I have less than 50 reputation. Oka suggested using frailty in connection with coxph in order to test the proportional hazard assumption. I believe it is worth noting that the documentation for frailty mentions, "the coxme package has superseded...
47,803
coxme proportional hazard assumption
However, I cannot find the equivalent for coxme models. Based on the documentation, you can add random effects into a Cox or survreg model with the frailty function. As suggested in an SO answer, you can do it like this: # making the model myfit <- coxph(Surv(Time, Censor) ~ fixed + frailty(random), data = mydata) # ...
47,804
coxme proportional hazard assumption
The proportional hazards (PH) assumption for fixed effects in a coxme model can be tested with the same cox.zph() function used for coxph models. As the documentation of cox.zph() states, its fit argument is "the result of fitting a Cox regression model, using the coxph or coxme functions." (Emphasis added.) As another...
47,805
coxme proportional hazard assumption
Based on the documentation on page 31, cox.zph does not work with the frailty function. Therefore, you cannot use cox.zph(myfit) to check mixed-effects Cox models as the answers by Oka or kjg suggested above. Random-effects terms such as frailty, or random effects in a coxme model, are not checked for proportional h...
47,806
Expectation of a matrix for variance-covariance
The expectation of a matrix of variables is not the expectation of the columns of the matrix. What may confuse you is that you treat each column as a variable and estimate its expectation as the average of that column. In this sense you are right. However, the covariance matrix is about covariation between this vari...
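The distinction can be made concrete in a NumPy sketch (illustrative, with made-up data): the column means estimate E[x], but the covariance matrix averages outer products of the centred rows.

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated data: mix i.i.d. standard normals through a triangular matrix
X = rng.normal(size=(10_000, 3)) @ np.array([[1.0, 0.5, 0.0],
                                             [0.0, 1.0, 0.3],
                                             [0.0, 0.0, 1.0]])
mu = X.mean(axis=0)            # E[x] estimated column-wise: a 3-vector
Xc = X - mu                    # centre each variable (column)
C = Xc.T @ Xc / (len(X) - 1)   # average of OUTER products (x - mu)(x - mu)^T
assert np.allclose(C, np.cov(X, rowvar=False))
```

The off-diagonal entries of `C` carry the covariation between variables, which no amount of column averaging alone can produce.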
47,807
Applying an ARIMA model with exogenous variables for forecasting
Once the model has been trained (in this instance ModeloX3), you can produce forecasts with the forecast function. I think you are missing some understanding of how ARMAX models work: the xreg value is simply added as a covariate to the RHS of the equation; see here. This means the value needs to be explicitly provide...
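The answer's point (xreg enters the right-hand side like any covariate, so future values must be supplied at forecast time) is about R's forecast package; the same idea can be sketched in NumPy for a hypothetical AR(1) model with one exogenous regressor:

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi, beta = 500, 0.6, 2.0
x = rng.normal(size=n + 1)             # exogenous series; x[n] is the FUTURE value
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + beta * x[t] + 0.1 * rng.normal()

# fit y_t = phi * y_{t-1} + beta * x_t by ordinary least squares
A = np.column_stack([y[:-1], x[1:n]])
phi_hat, beta_hat = np.linalg.lstsq(A, y[1:], rcond=None)[0]

# forecasting one step ahead REQUIRES the future regressor value x[n],
# just as forecast(fit, xreg = ...) requires future xreg values in R
y_next = phi_hat * y[-1] + beta_hat * x[n]
```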
47,808
Kalman filter has a frequentist or bayesian origin?
The Kalman filter is the analytical implementation of the Bayesian filtering recursions for linear Gaussian state-space models. For this model class the filtering density can be tracked in terms of finite-dimensional sufficient statistics which do not grow in time$^*$. So I would say that it is pretty Bayesian, and as you state...
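A minimal scalar sketch of that recursion (model and notation are my own toy choices): the Gaussian posterior is summarised exactly by its mean and variance, the finite-dimensional sufficient statistics the answer refers to.

```python
def kalman_1d(ys, a=1.0, q=0.1, h=1.0, r=0.5, m0=0.0, p0=1.0):
    """Bayesian filtering for x_t = a x_{t-1} + N(0, q), y_t = h x_t + N(0, r).

    The posterior p(x_t | y_1..t) stays Gaussian, so tracking its
    (mean, variance) pair is an exact, fixed-size representation.
    """
    m, p = m0, p0
    out = []
    for y in ys:
        # predict: push the current posterior through the dynamics
        m, p = a * m, a * a * p + q
        # update: Bayes' rule with the Gaussian likelihood of y
        k = p * h / (h * h * p + r)       # Kalman gain
        m = m + k * (y - h * m)
        p = (1 - k * h) * p
        out.append((m, p))
    return out
```

Feeding it a constant observation stream shows the mean locking on while the variance settles to a fixed point rather than growing with t.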
47,809
Variance estimation for regression coefficients with complex survey data
A sort of 'proof by contradiction' is readily available on consideration of the scaling laws at work here, in light of information concepts. The usual estimator(s) you cite, since they ignore the correlation structure within the survey instrument, yield variance estimates that scale inversely with the number of survey ...
47,810
Variance estimation for regression coefficients with complex survey data
Here are some explicit ways that the model-based estimator can be biased. Heteroskedasticity: let X be binary and Y be continuous. We know that linear regression of Y on X reproduces Student's t-test (the equal-variance t-test), and we know that if the variance of Y differs between the X groups the t-test ha...
47,811
Markov Random Fields vs Hidden Markov Model
They are similar in the sense that they are both graphical models, i.e., both of them describe a factorization of a joint distribution according to some graph structure. However, Markov Random Fields are undirected graphical models (i.e., they describe a factorization of a Gibbs distribution in terms of the clique pote...
47,812
Markov Random Fields vs Hidden Markov Model
Hidden Markov Models can be represented as directed graphs (with Bayesian Networks, letter a) of image below) or as undirected graphs (with Markov Random Fields, letter b) of image below, link here). So yes, you can use Markov Random Fields to represent an HMM.
47,813
Intuition as to why estimates of a covariance matrix are numerically unstable
The reason that the SVD of the original matrix $X$ is preferred to the eigen-decomposition of the covariance matrix $C$ when doing PCA is that the solution of the eigenvalue problem presented by the covariance matrix $C$ (where $C = \frac{1}{N-1}X_0^T X_0$, $X_0$ being the zero-centred version of the origi...
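The numerical point is easy to check directly: forming $X_0^T X_0$ squares the condition number of $X_0$, so an ill-conditioned data matrix becomes doubly ill-conditioned as a covariance matrix. A NumPy check (illustrative, with a deliberately near-degenerate column):

```python
import numpy as np

rng = np.random.default_rng(2)
# five columns, the last one ~1000x smaller in scale
X = rng.normal(size=(100, 5)) * np.array([1.0, 1.0, 1.0, 1.0, 1e-3])
X = X - X.mean(axis=0)          # zero-centre, as in PCA
C = X.T @ X                     # proportional to the covariance matrix
# cond(C) = (sigma_max / sigma_min)^2 = cond(X)^2
assert np.isclose(np.linalg.cond(C), np.linalg.cond(X) ** 2, rtol=1e-4)
```

Working with the SVD of `X` directly keeps the smaller condition number and avoids the precision lost in forming the product.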
47,814
(deep learning) Is there a type of layer that can reverse the max-pooling operation?
As Zeiler says in his paper "Visualizing and Understanding Convolutional Networks" : "In the convnet, the max pooling operation is non-invertible, however we can obtain an approximate inverse by recording the locations of the maxima within each pooling region in a set of switch variables." Check up the Zeiler's paper i...
47,815
(deep learning) Is there a type of layer that can reverse the max-pooling operation?
Have you checked the paper Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction? "Here we introduce a max-pooling layer that introduces sparsity over the hidden representation by erasing all non-maximal values in non-overlapping subregions." Basically it's the same as alviur's answer. Since they ...
47,816
(deep learning) Is there a type of layer that can reverse the max-pooling operation?
MaxPool is not generally invertible, but PyTorch for example provides a function which computes a pseudo-inverse, where all elements other than the max are set to 0: MaxUnpool2d takes in as input the output of MaxPool2d including the indices of the maximal values and computes a partial inverse in which all non-maximal...
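The behaviour MaxUnpool2d implements can be sketched in plain NumPy for the 1-D, non-overlapping case (an illustration of the idea, not the PyTorch API): pooling records the argmax "switches", and unpooling puts each max back, zeroing everything else.

```python
import numpy as np

def max_pool_1d(x, k):
    """Non-overlapping 1-D max pooling; records the argmax 'switch' per window."""
    w = x.reshape(-1, k)
    idx = w.argmax(axis=1)
    return w.max(axis=1), idx

def max_unpool_1d(pooled, idx, k):
    """Partial inverse: each max returns to its recorded slot, the rest is 0."""
    out = np.zeros((len(pooled), k))
    out[np.arange(len(pooled)), idx] = pooled
    return out.ravel()

x = np.array([1., 5., 2., 3., 7., 0.])
pooled, idx = max_pool_1d(x, 2)          # pooled = [5., 3., 7.]
rec = max_unpool_1d(pooled, idx, 2)      # rec = [0., 5., 0., 3., 7., 0.]
```

The round trip recovers the maxima in place but not the erased values, which is exactly why the inverse is only approximate.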
47,817
Modeling multivariate Time Series Count Data in R
I found a reference which answered my question: https://arxiv.org/pdf/1405.3738.pdf. The model is quite complicated; here is the state space representation: So, let's say I have $L$ different products I'm studying across $1,\dots,T$ time periods. $Y_{l,t} \sim z\,\delta_0 + (1-z)\,\mathrm{NB}(\exp(\widetilde{\eta}_{l,t}), \alpha_l)$ is th...
47,818
Comparing Perplexities With Different Data Set Sizes
Would comparing perplexities be invalidated by the different data set sizes? No. I copy below some text on perplexity I wrote with some students for a natural language processing course (assume $\log$ is base 2): In order to assess the quality of a language model, one needs to define evaluation metrics. One evaluatio...
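Because the log-likelihood is normalised by the number of tokens $N$, perplexity is a per-token quantity, which is why differing corpus sizes do not invalidate the comparison. A small sketch (log base 2, as in the quoted course text):

```python
import math

def perplexity(token_probs):
    """Per-token perplexity: 2 ** (-(1/N) * sum(log2 p)).

    The 1/N normalisation makes the value comparable across
    test sets of different sizes.
    """
    n = len(token_probs)
    return 2 ** (-sum(math.log2(p) for p in token_probs) / n)

short = [0.25] * 10        # 10-token test set, every token given prob 1/4
long_ = [0.25] * 10_000    # 1000x larger test set, same per-token quality
assert math.isclose(perplexity(short), 4.0)
assert math.isclose(perplexity(short), perplexity(long_))
```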
47,819
Defining a prior multinomial regression. Case study with `MCMCglmm`
Good question! I've actually tried to get my mind around the same for quite some time, so I'll just share my experiences. I'm still a novice in this area, so I hope I haven't made any mistakes in notation. Always, and I mean always, standardise your continuous variables to have mean = 0 and sd = 1 (or even sd = 2). Look into...
47,820
Understanding the spectral decomposition of a Markov matrix? [closed]
This is all from here: http://cims.nyu.edu/~holmes/teaching/asa15/Lecture2.pdf TLDR: you're still using the spectral decomposition theorem; you just have to find the right symmetric matrix. Detailed Balance Let $P$ be your (say 2x2) transition matrix. It isn't symmetric. Let $\pi$ be the (1x2) stationary distribution v...
47,821
Understanding the spectral decomposition of a Markov matrix? [closed]
The goal is finding the stationary distribution of the states. As the other answer mentioned, symmetry is not the key; "diagonalizable" is more important. See this post and one related post, Properties of spectral decomposition. I think you will be clear if you read the accepted answer in this post. In the particular exa...
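The "right symmetric matrix" from the linked lecture notes can be exhibited numerically: for a reversible chain with stationary distribution $\pi$ (and every 2-state chain satisfies detailed balance), $D^{1/2} P D^{-1/2}$ with $D = \mathrm{diag}(\pi)$ is symmetric. A NumPy sketch with a made-up transition matrix:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.3, 0.7]])             # 2x2 transition matrix, not symmetric
# stationary distribution: left eigenvector of P for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.isclose(w, 1)].ravel())
pi /= pi.sum()                         # pi = [0.75, 0.25] for this P
# detailed balance (pi_i P_ij = pi_j P_ji) makes D^{1/2} P D^{-1/2} symmetric
d = np.sqrt(pi)
S = (d[:, None] * P) / d[None, :]
assert np.allclose(S, S.T)
```

The spectral theorem then applies to `S`, and its eigen-decomposition transforms back to one for `P`.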
47,822
How can I show that the average empirical risk is equal to the true risk for a binary classifier?
Suppose the dataset is $\mathcal{D} = \{X_1, \dots, X_n\}$ where each data point $X_i$ is drawn i.i.d. from some distribution $f_X$. The true risk is: $$R(h) = E_{X \sim f_X}[\mathcal{L}(X, h(X))]$$ Show that $E_{\mathcal{D}_n}[R_e(h)] = R(h)$ Start with the LHS: $$E_{\mathcal{D}_n}[R_e(h)]$$ Plug in the expression ...
47,823
How can I show that the average empirical risk is equal to the true risk for a binary classifier?
It's actually an immediate consequence of the fact that $R_e(h)$ is a Monte Carlo estimator for $R(h)$ (for fixed $h$). This is evident if, instead of the terrible notation often used in some introductory Machine Learning books, where "datasets" are considered, we more properly consider a random vector $\mathbf{X}$ wh...
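The Monte Carlo view is easy to confirm by simulation (a toy setup of my own: $X \sim \mathrm{Bernoulli}(0.3)$, the constant classifier $h(x)=0$, 0-1 loss, so $R(h) = 0.3$): averaging the empirical risk over many independent datasets recovers the true risk.

```python
import numpy as np

rng = np.random.default_rng(3)
p_true = 0.3                   # R(h): probability the constant classifier h=0 errs

def empirical_risk(sample):
    """R_e(h): average 0-1 loss of h(x) = 0 over one dataset (fraction of 1s)."""
    return np.mean(sample)

# draw many independent datasets of size 50 and average their empirical risks
risks = [empirical_risk(rng.random(50) < p_true) for _ in range(20_000)]
# E_{D_n}[R_e(h)] = R(h): the expectation over datasets equals the true risk
assert abs(np.mean(risks) - p_true) < 0.01
```

Each `R_e` fluctuates around 0.3 with sampling noise, but its expectation is exactly 0.3 by linearity, which is the identity the question asks about.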
47,824
How can I fit the parameters of a lognormal distribution knowing the sample mean and one certain quantile?
Let $\mu$ and $\sigma$ be parameters of the corresponding Normal distribution (its mean and standard deviation, respectively). Given the lognormal mean $m$ and the value $z$ for percentile $\alpha$, we need to find $\mu$ and $\sigma \gt 0$. To this end, let $\Phi$ be the standard Normal distribution function. The two...
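The two conditions solve in closed form: $\log m = \mu + \sigma^2/2$ and $\log z = \mu + \sigma\,\Phi^{-1}(\alpha)$, and subtracting them gives a quadratic in $\sigma$. A standard-library Python sketch (function name mine):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def lognormal_from_mean_quantile(m, z, alpha):
    """Recover (mu, sigma) from the lognormal mean m and its alpha-quantile z.

    log m = mu + sigma^2 / 2   and   log z = mu + sigma * q,  q = Phi^{-1}(alpha);
    subtracting gives sigma^2/2 - q*sigma + (log z - log m) = 0.
    """
    q = NormalDist().inv_cdf(alpha)
    disc = q * q - 2.0 * (log(z) - log(m))
    if disc < 0:
        raise ValueError("no real solution for these inputs")
    sigma = q + sqrt(disc)   # for alpha < 0.5 this is the positive root;
                             # for alpha > 0.5 both roots may be admissible
    mu = log(z) - sigma * q
    return mu, sigma

# round-trip check against mu = 0, sigma = 1: mean exp(1/2), 10th pct exp(q)
q10 = NormalDist().inv_cdf(0.1)
mu, sigma = lognormal_from_mean_quantile(exp(0.5), exp(q10), 0.1)   # -> (0.0, 1.0)
```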
47,825
OHE vs Feature Hashing
One hot encoding and feature hashing are both forms of feature engineering where a data scientist is trying to represent categorical information (blood type, country, product ID, word) as an input vector. We might represent Afghanistan as [1,0,0,0], Belarus as [0,1,0,0], Canada as [0,0,1,0], and Denmark as [0,0,0,1]. ...
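The hashing trick the answer contrasts with one-hot encoding can be sketched in a few lines (using hashlib rather than Python's built-in hash(), which is salted per process and so not stable across runs):

```python
import hashlib

def hash_feature(value, n_buckets=8):
    """Hashing trick: map a category string to one of n_buckets indices."""
    digest = hashlib.md5(value.encode()).hexdigest()
    return int(digest, 16) % n_buckets

def hashed_vector(value, n_buckets=8):
    """Fixed-width indicator vector, even for categories never seen before."""
    v = [0] * n_buckets
    v[hash_feature(value, n_buckets)] = 1
    return v

# unlike OHE, the width never grows with the vocabulary, but collisions
# (two countries sharing a bucket) are possible by design
for country in ["Afghanistan", "Belarus", "Canada", "Denmark", "Eswatini"]:
    assert sum(hashed_vector(country)) == 1
```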
47,826
Selecting between ARMA, GARCH and ARMA-GARCH models
the p-value is greater than 0.05 and as such we CAN say that the residuals are a realisation of discrete white noise. Strictly speaking, no. Failure to reject a null hypothesis (here: absence of autocorrelation) does not imply we can accept it. Also, absence of autocorrelation does not imply white noise (although it h...
47,827
Weights to combine different models
Stacking (Wolpert 1992) is a method for combining multiple base models using a high level model. The output of each base model is provided as an input to the high level model, which is then trained to maximize performance. Using the same data to train the base models and high level model would result in overfitting, so...
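A toy sketch of the meta-learner step: two base models' held-out predictions blended with least-squares weights (the data and both base models are made up; real stacking would use out-of-fold predictions as described above):

```python
def fit_blend(preds1, preds2, y):
    # Solve the 2x2 normal equations for (w1, w2) minimising
    # sum_i (y_i - w1*preds1_i - w2*preds2_i)^2.
    a = sum(p * p for p in preds1)
    b = sum(p * q for p, q in zip(preds1, preds2))
    c = sum(q * q for q in preds2)
    d = sum(p * t for p, t in zip(preds1, y))
    e = sum(q * t for q, t in zip(preds2, y))
    det = a * c - b * b
    return (c * d - b * e) / det, (a * e - b * d) / det

f1 = [1.0, 2.0, 3.0, 4.0]   # held-out predictions, base model 1
f2 = [3.0, 2.0, 5.0, 4.0]   # held-out predictions, base model 2
y  = [2.0, 2.0, 4.0, 4.0]   # targets happen to equal the average here
w1, w2 = fit_blend(f1, f2, y)
print(w1, w2)  # -> 0.5 0.5
```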
47,828
Is there any method for choosing the number of layers and neurons?
There is no direct way to find the optimal number of them: people empirically try and see (e.g., using cross-validation). The most common search techniques are random, manual, and grid searches. There exist more advanced techniques such as 1) Gaussian processes. Example: Franck Dernoncourt, Ji Young Lee Optimizing N...
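A sketch of the most common approach mentioned, random search over architectures (the search space and the scoring function are made-up stand-ins for real training plus cross-validation):

```python
import random

random.seed(0)

def validation_score(n_layers, n_units):
    # Stand-in for "train the network and cross-validate"; a made-up
    # objective peaking at 2 layers / 64 units, purely illustrative.
    return -((n_layers - 2) ** 2) - ((n_units - 64) / 32) ** 2

best = None
for _ in range(50):  # 50 random draws from the architecture space
    cand = (random.randint(1, 5), random.choice([16, 32, 64, 128, 256]))
    score = validation_score(*cand)
    if best is None or score > best[0]:
        best = (score, cand)

print("best (layers, units):", best[1])
```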
47,829
Joint distribution of dependent Binomial random variables
There is no unique joint distribution. In fact, there are infinitely many ways to construct the joint distribution. For instance, there exist infinitely many copula functions that can be used to construct a joint distribution with such marginals.
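A small illustration of the non-uniqueness: two different joint distributions, both with exact Binomial(4, 0.5) marginals, enumerated exactly (no simulation; the choice of n = 4, p = 0.5 is just for the example):

```python
from math import comb

n, p = 4, 0.5
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# Joint A: X and Y independent.  Joint B: Y = X (perfect dependence).
joint_indep = {(i, j): pmf[i] * pmf[j]
               for i in range(n + 1) for j in range(n + 1)}
joint_equal = {(i, j): (pmf[i] if i == j else 0.0)
               for i in range(n + 1) for j in range(n + 1)}

def marginal_x(joint):
    # Sum the joint over j to recover the marginal pmf of X.
    return [sum(pr for (i, j), pr in joint.items() if i == k)
            for k in range(n + 1)]
```

Both joints return the same Binomial marginal for X, yet they assign different probabilities, e.g. to the point (0, 1).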
47,830
Is every L-estimator an M-estimator?
A classic example would be a trimmed mean. For concreteness consider a 25% trimmed mean, where we average the middle half of the data. That's an L-estimator, but not an M-estimator. It can in a sense be approximated* by a Huber-type M-estimator but they're not the same. * (perhaps 'analogy' would be a better term than ...
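A quick sketch of the 25% trimmed mean (the data are invented to show the robustness against an outlier):

```python
def trimmed_mean(xs, prop=0.25):
    # Drop the lowest and highest `prop` fraction, average the middle.
    xs = sorted(xs)
    k = int(len(xs) * prop)
    middle = xs[k:len(xs) - k]
    return sum(middle) / len(middle)

data = [1, 2, 3, 4, 5, 6, 7, 1000]      # one gross outlier
print(trimmed_mean(data))               # 4.5 -- averages only the middle half
print(sum(data) / len(data))            # 128.5 -- plain mean is dragged up
```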
47,831
Using p-values reported as inequalities such as $p<0.05$ in meta-analysis; can I convert them to $0.05$?
If you convert $p<0.05$ to $p=0.05$ then your analysis will be conservative but you will at least have been able to include all the studies. Similarly for $p < 0.01$ and so on. The problem comes from the ones which say $p > 0.05$ as the only safe option here is to convert them to $p = 1$, your suggestion of $p = 0.5$ c...
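A sketch of how the conservative conversion plays out when pooling, using Stouffer's method (one common way to combine one-sided p-values; the numbers are invented):

```python
from statistics import NormalDist

def stouffer(pvalues):
    # Pool one-sided p-values: z_i = Phi^{-1}(1 - p_i), combined over sqrt(k).
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in pvalues) / len(pvalues) ** 0.5
    return 1 - nd.cdf(z)

# Two studies reporting only "p < 0.05" entered conservatively as p = 0.05.
# A "p > 0.05" study would go in as p near 1 (exactly 1 breaks inv_cdf,
# so nudge it to e.g. 1 - 1e-12).
combined = stouffer([0.03, 0.05, 0.05])
print(combined)   # well below any individual p-value
```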
47,832
Calculating weights for inverse probability weighting for the treatment effect on the untreated/non-treated
For ATU, the weights on $y_i$ would be $$ w_i = \begin{cases} \frac{1 - \hat p(x_i)}{\hat p(x_i)} & \text{if}\ d_i=1 \\ 1 & \text{if}\ d_i=0, \end{cases} $$ where $d_i$ is the binary treatment indicator. For ATT/ATET, the weights are $$ w_i = \begin{cases} 1 & \text{if}\ d_i=1 \\ \frac{\hat p(...
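The weight formulas translate directly to code; a small sketch (the propensity scores are invented for illustration):

```python
def ipw_weights(d, pscore, estimand="ATU"):
    # d: 1 = treated, 0 = untreated; pscore: estimated P(treated | x).
    if estimand == "ATU":
        # treated units are reweighted by the odds of being untreated
        return [(1 - p) / p if di == 1 else 1.0 for di, p in zip(d, pscore)]
    if estimand == "ATT":
        # untreated units are reweighted by the odds of being treated
        return [1.0 if di == 1 else p / (1 - p) for di, p in zip(d, pscore)]
    raise ValueError(estimand)

d = [1, 1, 0, 0]
ps = [0.8, 0.5, 0.5, 0.2]
print(ipw_weights(d, ps, "ATU"))
print(ipw_weights(d, ps, "ATT"))
```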
47,833
What is the maximum entropy distribution given values for several quantiles of one sample?
Maximum entropy problems do not always admit a solution. The generic expression for the maximum entropy density $f(x)$ given a set of integral constraints \begin{equation} \int dx \, h_i(x) \, f(x) = c_i \end{equation} with $i=1\ldots N$ is \begin{equation} f(x) = e^{\mu + \sum_{i=1}^N \lambda_i h_i(x)} \;. \end{equati...
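For intuition, a sketch of the case where a solution does exist: on a known bounded support [lo, hi] (an assumption of this sketch), the maximum-entropy density matching given quantiles is piecewise uniform between them:

```python
def piecewise_uniform(quantile_probs, quantile_values, lo, hi):
    # Max-entropy density with P(X <= q_k) = p_k on [lo, hi]:
    # constant on each interval, density = probability mass / interval length.
    probs = [0.0] + list(quantile_probs) + [1.0]
    cuts = [lo] + list(quantile_values) + [hi]
    return [(probs[i + 1] - probs[i]) / (cuts[i + 1] - cuts[i])
            for i in range(len(cuts) - 1)]

# Hypothetical quartiles at 2, 3, 5 on support [0, 10]:
dens = piecewise_uniform([0.25, 0.5, 0.75], [2.0, 3.0, 5.0], 0.0, 10.0)
print(dens)
```

On an unbounded support this construction breaks down, which matches the answer's point that maximum entropy problems do not always admit a solution.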
47,834
Does a continuous censored predictor have to be treated as ordinal?
I think most articles on "censored variables" will be related to the response variable which is quite a different story. Being a censored regressor is not automatically a problem. If you are not fully trusting this regressor or if the corresponding "residuals versus variable"-plot shows troubles in the two extreme valu...
47,835
Parameter estimation of a Rayleigh random variable with an offset
One important thing to note is that your data don't appear to be consistent with having been drawn from a Rayleigh population -- the right tail is considerably too heavy. Nevertheless, I'll continue as if the shifted-Rayleigh were a suitable model. If the offset is unknown you can estimate it as a parameter. The dens...
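A rough sketch of the known-offset MLE for sigma plus a grid profile over the offset (the data here are invented, not the sample from the question):

```python
import math

def rayleigh_sigma_mle(xs, offset):
    # With a known offset a, the MLE of sigma is sqrt(sum((x-a)^2) / (2n)).
    n = len(xs)
    return math.sqrt(sum((x - offset) ** 2 for x in xs) / (2 * n))

def profile_loglik(xs, offset):
    # Profile log-likelihood in the offset; only valid for offset < min(xs).
    s2 = rayleigh_sigma_mle(xs, offset) ** 2
    return sum(math.log(x - offset) - math.log(s2) - (x - offset) ** 2 / (2 * s2)
               for x in xs)

data = [2.1, 2.7, 3.0, 3.6, 4.2]            # hypothetical sample
offsets = [o / 10 for o in range(0, 20)]    # grid kept below min(data)
best = max(offsets, key=lambda o: profile_loglik(data, o))
print("offset maximising the profile likelihood:", best)
```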
47,836
What is out-of-fold average?
It's hard to know for sure with such a terse and pithy description, but here's a shot at what he may likely be getting at. Say you have a very high cardinality feature $x$ with some giant set of possible levels $l_1, l_2, \cdots, l_n$. These can be difficult to use in a model directly. One approach to deriving a feat...
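A bare-bones sketch of that out-of-fold mean encoding (hypothetical toy data; the fold assignment by index and the fallback to the overall mean are illustrative choices):

```python
def oof_target_mean(categories, targets, n_folds=2):
    # Encode each row by the mean target of its category, computed only
    # from the *other* folds, to avoid leaking the row's own label.
    n = len(categories)
    encoded = [0.0] * n
    overall = sum(targets) / n
    for fold in range(n_folds):
        train = [i for i in range(n) if i % n_folds != fold]
        sums, counts = {}, {}
        for i in train:
            sums[categories[i]] = sums.get(categories[i], 0.0) + targets[i]
            counts[categories[i]] = counts.get(categories[i], 0) + 1
        for i in range(n):
            if i % n_folds == fold:
                c = categories[i]
                encoded[i] = sums[c] / counts[c] if c in counts else overall
    return encoded

cats = ["a", "a", "a", "a", "b", "b"]
ys   = [1,   0,   1,   0,   1,   1]
print(oof_target_mean(cats, ys))
```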
47,837
Missing data imputation in time series in R
First thing, a lot of imputation packages do not work with whole rows missing. (because their algorithms work on correlations between the variables - if there is no other variable in a row, there is no way to estimate the missing values) You need imputation packages that work on time features. You could use for example...
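If you just need a baseline in plain Python, gap-filling by linear interpolation (roughly what interpolation-based imputation in the R packages mentioned does) can be sketched like this:

```python
def interpolate_gaps(series):
    # Fill runs of None by linear interpolation between the nearest
    # observed neighbours; leading/trailing gaps copy the nearest value.
    xs = list(series)
    known = [i for i, v in enumerate(xs) if v is not None]
    for i, v in enumerate(xs):
        if v is None:
            left = max((k for k in known if k < i), default=None)
            right = min((k for k in known if k > i), default=None)
            if left is None:
                xs[i] = xs[right]
            elif right is None:
                xs[i] = xs[left]
            else:
                frac = (i - left) / (right - left)
                xs[i] = xs[left] + frac * (xs[right] - xs[left])
    return xs

print(interpolate_gaps([1.0, None, None, 4.0]))  # -> [1.0, 2.0, 3.0, 4.0]
```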
47,838
Missing data imputation in time series in R
You can also use the package 'kssa'. It automatically helps you identify the best imputation method for your time series. https://www.est.colpos.mx/web/packages/kssa/index.html
47,839
How to transform one PDF into another graphically?
You're heading in the right direction with your thoughts on considering the cdf. Consider some random variable, $X$ with cdf $F_X(x)$ and density $f_X(x)$. To make things simple, consider applying some monotonic increasing transformation, $t$ on $X$, giving $Y=t(X)$. The new variable $Y$ has cdf $F_Y(y)$ and density $f...
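The cdf picture also gives a constructive recipe: push a sample through $F_Y^{-1} \circ F_X$. A quick Monte Carlo check, using uniform-to-exponential so both cdfs have closed forms (sample size and seed are arbitrary):

```python
import math
import random

random.seed(1)

# X ~ Uniform(0,1), so F_X(x) = x, and for Y ~ Exp(1), F_Y^{-1}(p) = -log(1-p).
def to_exponential(u):
    return -math.log(1 - u)

sample = [to_exponential(random.random()) for _ in range(100_000)]
mean = sum(sample) / len(sample)
print(mean)   # should be close to 1, the mean of Exp(1)
```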
47,840
Martingale process
Let $X_t = M_t^{-1}\mathbb e^{\xi_t}$. Then $$\mathbb E[|X_t|] = \mathbb E\left[\frac{e^{\xi_t}}{\mathbb E\left[e^{\xi_t}\right]}\right] = 1 $$ so that $X_t$ is integrable, and for $s<t$ we have \begin{align} \mathbb E[X_t\mid\mathcal F_s] &= \mathbb E\left[ \frac{e^{\xi_t}}{\mathbb E\left[e^{\xi_t}\right]}\,\big\vert\...
47,841
Variance of the modulus of a random variable
So $$ \def\var{\text{var}} \var\bigl( |X| \bigr) = E\left(X^2\right) - E\bigl( |X| \bigr)^2.$$ You know how to write $E(X^2)$ in terms of $\mu$ and $\sigma$. Now define a new random variable $X^+$ by $X^+ = X$ if $X>0$, and $X^+=0$ if $X\le 0$; similarly let $X^- = X$ if $X < 0$ and $X^-=0$ if $X\ge 0$. Assuming both $...
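Following this recipe for $X \sim N(\mu, \sigma^2)$ gives the folded-normal variance in closed form; a small numeric check (the closed-form $E|X|$ below is the standard folded-normal mean):

```python
import math
from statistics import NormalDist

def var_abs_normal(mu, sigma):
    # Folded-normal moments: E|X| in closed form, E(X^2) = mu^2 + sigma^2.
    e_abs = (sigma * math.sqrt(2 / math.pi) * math.exp(-mu**2 / (2 * sigma**2))
             + mu * (1 - 2 * NormalDist().cdf(-mu / sigma)))
    return mu**2 + sigma**2 - e_abs**2

# Sanity checks: for mu = 0, E|X| = sigma*sqrt(2/pi), so var|X| = sigma^2*(1 - 2/pi);
# for mu >> sigma, |X| ~ X and var|X| -> sigma^2.
print(var_abs_normal(0.0, 1.0))
print(var_abs_normal(10.0, 1.0))
```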
47,842
Variance of the modulus of a random variable
We know that $$\;\;\;\;|X| \geq X \text{ and } |X| \geq -X\\ \Rightarrow E\big(|X|\big) \geq \big|E\big(X\big)\big|\\ \Rightarrow E\big(X\big)^2 \leq E\big(|X|\big)^2 $$ (squaring is valid here because both sides of the middle inequality are non-negative). Using the above in $$ \def\var{\text{var}} \var\bigl( |X| \bigr) = E\left(X^2\right) - E\bigl( |X| \bigr)^2.$$ we get $$ \var\bigl( |X| \bigr) \leq E\left(X^2\right) - E\bigl( X \big...
47,843
Impact of inverting grayscale values on mnist dataset
Here's a quick test on the mnist_softmax implementation from the tensorflow tutorial. You can append this code at the end of the file to reproduce the result. In the MNIST input data, pixel values range from 0 (black background) to 255 (white foreground), which is usually scaled in the [0,1] interval. In tensorflow, the...
47,844
Interpretation of coefficients in logistic regression output
Summary The question misinterprets the coefficients. The software output shows that the log odds of the response don't depend appreciably on $X$, because its coefficient is small and not significant ($p=0.138$). Therefore the proportion of positive results in the data, equal to $100 - 19.95\% \approx 80\%$, ought to ...
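The intercept-to-probability conversion this summary relies on is just the inverse logit; a tiny check (the 1.39 intercept is an illustrative value that gives roughly the 80% base rate, not the output shown in the question):

```python
import math

def inv_logit(z):
    # Convert a log-odds value into a probability.
    return 1 / (1 + math.exp(-z))

# With an X coefficient near zero, the model predicts roughly the same
# probability everywhere: the inverse logit of the intercept.
p = inv_logit(1.39)
print(p)   # roughly 0.80
```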
47,845
Clustering of variables: but they are mixed type, some are numeric, some are categorical
Traditional FA and cluster algorithms were designed for use with continuous (i.e., gaussian) variables. Mixtures of continuous and qualitative variables invariably give erroneous results. In particular and in my experience, the categorical information will dominate the solution. A better approach would be to employ a v...
47,846
Clustering of variables: but they are mixed type, some are numeric, some are categorical
Clusters of correlations are best investigated by factor analysis. There are a number of different implementations of factor analysis in R and I would recommend the package 'psych' on CRAN as a starting point: http://www.personality-project.org/r/psych/ http://www.personality-project.org/r/#factoranal You can trick cor(...
47,847
Clustering of variables: but they are mixed type, some are numeric, some are categorical
So, you have a mixture of categorical boolean and numeric continuous variables. You want to cluster the variables (not data cases) based on their similarity. A correlation coefficient could be assumed the similarity measure. We could, for example, compute Pearson $r$. Given that boolean true/false is convertible into 1...
47,848
Clustering of variables: but they are mixed type, some are numeric, some are categorical
You could one-hot encode your binary features and normalize your data to enable correlation computation:

library(caret)
df <- data.frame(scale(data.frame(predict(dummyVars(~., df), df))))

library(corrplot)
corrplot(cor(df))

Based on this you could apply any clustering approach (example with K-Means, but also look into...
47,849
Clustering of variables: but they are mixed type, some are numeric, some are categorical
Because you have mostly either continuous variables or binary variables, the suggestion made by @geekoverdose is certainly an option. The main issue that arises when taking this approach is dealing with nominal variables with more than two categories (or binary variables with rare classes). In this case, 1-1 matches ar...
47,850
Intuitive explanation of state space models
The good news is that your instincts are right that it would be a useful technique. The bad news is that it's not a technique that you can use without understanding a fair amount of linear algebra. It's all about multiple equations with multiple matrix multiplications. Some tools like R's bsts package make it more acce...
47,851
Testing mediation and moderation; can one variable function as both mediator and moderator?
From definitions, I feel that a variable cannot simultaneously function as mediator and moderator. Let's try to investigate both effects: Mediation Mediation is a hypothesized causal chain in which one variable affects a second variable that, in turn, affects a third variable. The intervening variable, $M$, is the med...
47,852
Testing mediation and moderation; can one variable function as both mediator and moderator?
Here is an article giving an example of a moderating mediator. https://www.sciencedirect.com/science/article/abs/pii/S0005789417301144 This explains how a mediator may later become a moderator; however, I would speculate that under most circumstances (particularly in biopsychology) mediation tests are detecting only sta...
47,853
Testing mediation and moderation; can one variable function as both mediator and moderator?
TLDR: Moderation and mediation are two different things, but nothing prevents to have them both simultaneously. This is because a mediator may interact with treatment, and interaction, if one variable is considered as the treatment and the other as an effect-modifier, means moderation. It seems to me, however, that, in...
Testing mediation and moderation; can one variable function as both mediator and moderator?
TLDR: Moderation and mediation are two different things, but nothing prevents to have them both simultaneously. This is because a mediator may interact with treatment, and interaction, if one variable
Testing mediation and moderation; can one variable function as both mediator and moderator? TLDR: Moderation and mediation are two different things, but nothing prevents to have them both simultaneously. This is because a mediator may interact with treatment, and interaction, if one variable is considered as the treatm...
Testing mediation and moderation; can one variable function as both mediator and moderator? TLDR: Moderation and mediation are two different things, but nothing prevents to have them both simultaneously. This is because a mediator may interact with treatment, and interaction, if one variable
47,854
Normal distribution necessary for linear-mixed effects? (R)
As per the comment by @Roland, there is no requirement for the response variable itself to be normally distributed in a linear mixed model (LMM). It is the distribution of the response, conditional on the random effects, that is assumed to be normally distributed. This means that the residuals should be normally distr...
Normal distribution necessary for linear-mixed effects? (R)
As per the comment by @Roland, there is no requirement for the response variable itself to be normally distributed in a linear mixed model (LMM). It is the distribution of the response, conditional on
Normal distribution necessary for linear-mixed effects? (R) As per the comment by @Roland, there is no requirement for the response variable itself to be normally distributed in a linear mixed model (LMM). It is the distribution of the response, conditional on the random effects, that is assumed to be normally distribu...
Normal distribution necessary for linear-mixed effects? (R) As per the comment by @Roland, there is no requirement for the response variable itself to be normally distributed in a linear mixed model (LMM). It is the distribution of the response, conditional on
47,855
Normal distribution necessary for linear-mixed effects? (R)
If you use something like a generalized linear mixed model, then the response variables don't have to be Gaussian. This fact is the key differentiator between GLMM and LMM.
Normal distribution necessary for linear-mixed effects? (R)
If you use something like a generalized linear mixed model, then the response variables don't have to be Gaussian. This fact is the key differentiator between GLMM and LMM.
Normal distribution necessary for linear-mixed effects? (R) If you use something like a generalized linear mixed model, then the response variables don't have to be Gaussian. This fact is the key differentiator between GLMM and LMM.
Normal distribution necessary for linear-mixed effects? (R) If you use something like a generalized linear mixed model, then the response variables don't have to be Gaussian. This fact is the key differentiator between GLMM and LMM.
47,856
Difference between Log Entropy Model and TF-IDF Model?
Your question brought me to a thread on the Gensim user group where that question was asked. That in turn links to a paper titled An Empirical Evaluation of Models of Text Document Similarity containing a partial answer to your question: The first global weighting function we considered normalized each word using the ...
Difference between Log Entropy Model and TF-IDF Model?
Your question brought me to a thread on the Gensim user group where that question was asked. That in turn links to a paper titled An Empirical Evaluation of Models of Text Document Similarity containi
Difference between Log Entropy Model and TF-IDF Model? Your question brought me to a thread on the Gensim user group where that question was asked. That in turn links to a paper titled An Empirical Evaluation of Models of Text Document Similarity containing a partial answer to your question: The first global weighting...
Difference between Log Entropy Model and TF-IDF Model? Your question brought me to a thread on the Gensim user group where that question was asked. That in turn links to a paper titled An Empirical Evaluation of Models of Text Document Similarity containi
47,857
Parameter n_iter in scikit-learn's SGDClassifier
It must be the second. I always answer these questions by looking at the source code (which in sklearn is of very high quality, and is written extremely clearly). The function in question is here (I searched for SGDClassifier then followed the function calls until I got to this one, which is a low level routine). Brea...
Parameter n_iter in scikit-learn's SGDClassifier
It must be the second. I always answer these questions by looking at the source code (which in sklearn is of very high quality, and is written extremely clearly). The function in question is here (I
Parameter n_iter in scikit-learn's SGDClassifier It must be the second. I always answer these questions by looking at the source code (which in sklearn is of very high quality, and is written extremely clearly). The function in question is here (I searched for SGDClassifier then followed the function calls until I got...
Parameter n_iter in scikit-learn's SGDClassifier It must be the second. I always answer these questions by looking at the source code (which in sklearn is of very high quality, and is written extremely clearly). The function in question is here (I
47,858
Time series analysis of electricity load questions
Electric load typically exhibits intra-daily seasonality, as well as intra-weekly seasonality (weekends have different power demand patterns than weekdays), plus yearly seasonality (high power demands for heating in winter, higher power demands for air conditioning in summer). Plus time-shifting holidays. I'd say your ...
Time series analysis of electricity load questions
Electric load typically exhibits intra-daily seasonality, as well as intra-weekly seasonality (weekends have different power demand patterns than weekdays), plus yearly seasonality (high power demands
Time series analysis of electricity load questions Electric load typically exhibits intra-daily seasonality, as well as intra-weekly seasonality (weekends have different power demand patterns than weekdays), plus yearly seasonality (high power demands for heating in winter, higher power demands for air conditioning in ...
Time series analysis of electricity load questions Electric load typically exhibits intra-daily seasonality, as well as intra-weekly seasonality (weekends have different power demand patterns than weekdays), plus yearly seasonality (high power demands
47,859
Support of likelihood ratio test statistic
This statistic weighs evidence for the two hypotheses by comparing their probability densities at the observed value of $y$. Because the denominator could be zero in this situation, we have to consider two possibilities: The denominator is positive. This means that $H_0$ assigns a positive chance to any tiny neighbo...
Support of likelihood ratio test statistic
This statistic weighs evidence for the two hypotheses by comparing their probability densities at the observed value of $y$. Because the denominator could be zero in this situation, we have to consid
Support of likelihood ratio test statistic This statistic weighs evidence for the two hypotheses by comparing their probability densities at the observed value of $y$. Because the denominator could be zero in this situation, we have to consider two possibilities: The denominator is positive. This means that $H_0$ as...
Support of likelihood ratio test statistic This statistic weighs evidence for the two hypotheses by comparing their probability densities at the observed value of $y$. Because the denominator could be zero in this situation, we have to consid
47,860
Support of likelihood ratio test statistic
Actually both densities are defined over the whole line, they're just 0 elsewhere than the places you mention. You have to think carefully about the density across at least the +ve half-line -- your answer defines what you get when $0<y<1$, but what's the LR when y=4.3? That could happen, if the distribution really we...
Support of likelihood ratio test statistic
Actually both densities are defined over the whole line, they're just 0 elsewhere than the places you mention. You have to think carefully about the density across at least the +ve half-line -- your a
Support of likelihood ratio test statistic Actually both densities are defined over the whole line, they're just 0 elsewhere than the places you mention. You have to think carefully about the density across at least the +ve half-line -- your answer defines what you get when $0<y<1$, but what's the LR when y=4.3? That ...
Support of likelihood ratio test statistic Actually both densities are defined over the whole line, they're just 0 elsewhere than the places you mention. You have to think carefully about the density across at least the +ve half-line -- your a
47,861
Interpret regression coefficients when independent variable is a ratio
Ordinarily, we interpret coefficients in terms of how the expected value of the response should change when we effect tiny changes in the underlying variables. This is done by differentiating the formula, which is $$E\left[\log Y\right] = \beta_0 + \beta_1 x_1 + \beta_2\left(\frac{x_3}{x_1}\right).$$ The derivatives a...
Interpret regression coefficients when independent variable is a ratio
Ordinarily, we interpret coefficients in terms of how the expected value of the response should change when we effect tiny changes in the underlying variables. This is done by differentiating the for
Interpret regression coefficients when independent variable is a ratio Ordinarily, we interpret coefficients in terms of how the expected value of the response should change when we effect tiny changes in the underlying variables. This is done by differentiating the formula, which is $$E\left[\log Y\right] = \beta_0 +...
Interpret regression coefficients when independent variable is a ratio Ordinarily, we interpret coefficients in terms of how the expected value of the response should change when we effect tiny changes in the underlying variables. This is done by differentiating the for
47,862
Interpret regression coefficients when independent variable is a ratio
For a more useful answer you should tell us more about your real application. As the question only seems to be about the role of a ratio variable $x \in [0,1]$, I simplify the question by removing the other parts of the model. It then becomes: $$ \log Y = \beta_0 + \beta x + E $$ which in multiplicative form becom...
Interpret regression coefficients when independent variable is a ratio
For a more useful answer you should tell us more about your real application. As the question only seems to be about the role of a ratio variable $x \in [0,1]$, I simplify the question by removing
Interpret regression coefficients when independent variable is a ratio For a more useful answer you should tell us more about your real application. As the question only seems to be about the role of a ratio variable $x \in [0,1]$, I simplify the question by removing the other parts of the model. It then becomes: $$...
Interpret regression coefficients when independent variable is a ratio For a more useful answer you should tell us more about your real application. As the question only seems to be about the role of a ratio variable $x \in [0,1]$, I simplify the question by removing
47,863
Interpret regression coefficients when independent variable is a ratio
I suppose you could interpret the numerator and denominator with the ratio. If your fraction increases by 1 unit, it means your numerator (x3) increased; if your fraction decreases by 1 unit, it means your denominator (x1) decreased, and that would be its effect on the dependent variable.
Interpret regression coefficients when independent variable is a ratio
I suppose you could interpret the numerator and denominator with the ratio. If your fraction increases by 1 unit, it means your numerator (x3) increased; if your fraction decreases by 1 unit, it means your
Interpret regression coefficients when independent variable is a ratio I suppose you could interpret the numerator and denominator with the ratio. If your fraction increases by 1 unit, it means your numerator (x3) increased; if your fraction decreases by 1 unit, it means your denominator (x1) decreased, and that would be its...
Interpret regression coefficients when independent variable is a ratio I suppose you could interpret the numerator and denominator with the ratio. If your fraction increases by 1 unit, it means your numerator (x3) increased; if your fraction decreases by 1 unit, it means your
47,864
Interpret regression coefficients when independent variable is a ratio
Just as in linear regression it is common to view nonlinear factors such as $x_1^2$ or $x_1 \cdot x_2$ as individual covariates, similarly there is no reason why $x_3/x_1$ can't be a legitimate covariate. As long as your response variable is indeed linear in that ratio, then that is simply how your system behaves. Supp...
Interpret regression coefficients when independent variable is a ratio
Just as in linear regression it is common to view nonlinear factors such as $x_1^2$ or $x_1 \cdot x_2$ as individual covariates, similarly there is no reason why $x_3/x_1$ can't be a legitimate covari
Interpret regression coefficients when independent variable is a ratio Just as in linear regression it is common to view nonlinear factors such as $x_1^2$ or $x_1 \cdot x_2$ as individual covariates, similarly there is no reason why $x_3/x_1$ can't be a legitimate covariate. As long as your response variable is indeed ...
Interpret regression coefficients when independent variable is a ratio Just as in linear regression it is common to view nonlinear factors such as $x_1^2$ or $x_1 \cdot x_2$ as individual covariates, similarly there is no reason why $x_3/x_1$ can't be a legitimate covari
47,865
Overestimation of the noise precision in Bayesian linear regression when $n\gtrsim p$
This problem turns out to be well-known in the frequentist literature. In particular, if we use an improper prior $\Lambda_0=b_0=0$, the posterior scale hyperparameter for the distribution on $\tau$ is $$\begin{align} b_n&=\frac{1}{2}\left(y^Ty - \mu_n^T\Lambda_n\mu_n\right)\\ &=\frac 12\left(y^Ty -\mu_n^TX^Ty-y^TX\mu_n...
Overestimation of the noise precision in Bayesian linear regression when $n\gtrsim p$
This problem turns out to be well-known in the frequentist literature. In particular, if we use an improper prior $\Lambda_0=b_0=0$, the posterior scale hyperparameter for the distribution on $\tau$ is
Overestimation of the noise precision in Bayesian linear regression when $n\gtrsim p$ This problem turns out to be well-known in the frequentist literature. In particular, if we use an improper prior $\Lambda_0=b_0=0$, the posterior scale hyperparameter for the distribution on $\tau$ is $$\begin{align} b_n&=\frac{1}{2}\...
Overestimation of the noise precision in Bayesian linear regression when $n\gtrsim p$ This problem turns out to be well-known in the frequentist literature. In particular, if we use an improper prior $\Lambda_0=b_0=0$, the posterior scale hyperparameter for the distribution on $\tau$ is
47,866
Order Statistics, Expected Value of range, $E(X_{(n)}-X_{(1)})$
You have the joint distribution $(X_{(n)}, X_{(1)})$ and you need to find the distribution of $X_{(n)} - X_{(1)}$. From the link you provided $$ f_{1,n}(x,y) = n(n- 1) \dfrac{(y-x)^{n-2}}{\theta^{n-2}} \dfrac{1}{\theta^2}.$$ Let $y-x = u$ $$ f_{1,n}(x,x+u) = n(n- 1) \dfrac{u^{n-2}}{\theta^{n-2}} \dfrac{1}{\theta^2}...
Order Statistics, Expected Value of range, $E(X_{(n)}-X_{(1)})$
You have the joint distribution $(X_{(n)}, X_{(1)})$ and you need to find the distribution of $X_{(n)} - X_{(1)}$. From the link you provided $$ f_{1,n}(x,y) = n(n- 1) \dfrac{(y-x)^{n-2}}{\theta^{n-2
Order Statistics, Expected Value of range, $E(X_{(n)}-X_{(1)})$ You have the joint distribution $(X_{(n)}, X_{(1)})$ and you need to find the distribution of $X_{(n)} - X_{(1)}$. From the link you provided $$ f_{1,n}(x,y) = n(n- 1) \dfrac{(y-x)^{n-2}}{\theta^{n-2}} \dfrac{1}{\theta^2}.$$ Let $y-x = u$ $$ f_{1,n}(x,x...
Order Statistics, Expected Value of range, $E(X_{(n)}-X_{(1)})$ You have the joint distribution $(X_{(n)}, X_{(1)})$ and you need to find the distribution of $X_{(n)} - X_{(1)}$. From the link you provided $$ f_{1,n}(x,y) = n(n- 1) \dfrac{(y-x)^{n-2}}{\theta^{n-2
47,867
Generating random samples from Huber density
As you suggest, this distribution is a mixture of a truncated Normal distribution and of a truncated Laplace distribution: namely, $$f(x)\propto \exp\left\{-x^2/2\right\}\mathbb{I}_{(-k,k)}(x)+\exp\left\{-k|x|+k^2/2\right\}\mathbb{I}_{(-k,k)^c}(x)$$ implies that the distribution is the mixture of the Normal distributio...
Generating random samples from Huber density
As you suggest, this distribution is a mixture of a truncated Normal distribution and of a truncated Laplace distribution: namely, $$f(x)\propto \exp\left\{-x^2/2\right\}\mathbb{I}_{(-k,k)}(x)+\exp\le
Generating random samples from Huber density As you suggest, this distribution is a mixture of a truncated Normal distribution and of a truncated Laplace distribution: namely, $$f(x)\propto \exp\left\{-x^2/2\right\}\mathbb{I}_{(-k,k)}(x)+\exp\left\{-k|x|+k^2/2\right\}\mathbb{I}_{(-k,k)^c}(x)$$ implies that the distribu...
Generating random samples from Huber density As you suggest, this distribution is a mixture of a truncated Normal distribution and of a truncated Laplace distribution: namely, $$f(x)\propto \exp\left\{-x^2/2\right\}\mathbb{I}_{(-k,k)}(x)+\exp\le
47,868
Poor model fit but significant and high path coefficient values in Structural Equation Modeling
Yes, it's easy. Let's say that this is your population model: +---+ 0.5 +----+ | X +------------> | Y | +-+-+ +-+--+ | +----+ ^ +---->+ M +-------+ 0.5 +----+ 0.5 And you fit this model: +---+ +----+ | X | | Y |...
Poor model fit but significant and high path coefficient values in Structural Equation Modeling
Yes, it's easy. Let's say that this is your population model: +---+ 0.5 +----+ | X +------------> | Y | +-+-+ +-+--+ | +----+ ^ +---->+ M +-------+ 0.5
Poor model fit but significant and high path coefficient values in Structural Equation Modeling Yes, it's easy. Let's say that this is your population model: +---+ 0.5 +----+ | X +------------> | Y | +-+-+ +-+--+ | +----+ ^ +---->+ M +-------+ 0.5 +----+ 0.5 And ...
Poor model fit but significant and high path coefficient values in Structural Equation Modeling Yes, it's easy. Let's say that this is your population model: +---+ 0.5 +----+ | X +------------> | Y | +-+-+ +-+--+ | +----+ ^ +---->+ M +-------+ 0.5
47,869
Help with zero-inflated generalized linear mixed models with random factor in R
No, zeroinfl() currently does not support random effects. So the formula you specified actually means something different: You use a fixed treatment effect in the count part and a fixed site effect in the zero-inflation part. See vignette("countreg", package = "pscl") for more details. If you want random effects, then ...
Help with zero-inflated generalized linear mixed models with random factor in R
No, zeroinfl() currently does not support random effects. So the formula you specified actually means something different: You use a fixed treatment effect in the count part and a fixed site effect in
Help with zero-inflated generalized linear mixed models with random factor in R No, zeroinfl() currently does not support random effects. So the formula you specified actually means something different: You use a fixed treatment effect in the count part and a fixed site effect in the zero-inflation part. See vignette("...
Help with zero-inflated generalized linear mixed models with random factor in R No, zeroinfl() currently does not support random effects. So the formula you specified actually means something different: You use a fixed treatment effect in the count part and a fixed site effect in
47,870
Finding the MLE for a mixture of random variables which are discrete and continuous
You are implicitly assuming the $(X_i,Y_i)$ are iid. Therefore you may freely re-index the observations $(x_i,y_i)$ so that $x_0 = 0 \le x_1 \le x_2 \cdots \le x_n \le 1 = x_{n+1}$. The definition of $Y_i$ implies there exists an index $k$ for which $$y_1 = y_2 = \cdots = y_k = 1;\ y_{k+1}=y_{k+2}=\cdots=y_n = 0.$$ ...
Finding the MLE for a mixture of random variables which are discrete and continuous
You are implicitly assuming the $(X_i,Y_i)$ are iid. Therefore you may freely re-index the observations $(x_i,y_i)$ so that $x_0 = 0 \le x_1 \le x_2 \cdots \le x_n \le 1 = x_{n+1}$. The definition o
Finding the MLE for a mixture of random variables which are discrete and continuous You are implicitly assuming the $(X_i,Y_i)$ are iid. Therefore you may freely re-index the observations $(x_i,y_i)$ so that $x_0 = 0 \le x_1 \le x_2 \cdots \le x_n \le 1 = x_{n+1}$. The definition of $Y_i$ implies there exists an inde...
Finding the MLE for a mixture of random variables which are discrete and continuous You are implicitly assuming the $(X_i,Y_i)$ are iid. Therefore you may freely re-index the observations $(x_i,y_i)$ so that $x_0 = 0 \le x_1 \le x_2 \cdots \le x_n \le 1 = x_{n+1}$. The definition o
47,871
Appropriateness of one-sided hypothesis tests when testing medical treatments
Hypothesis testing: I refer to this answer: What follows if we fail to reject the null hypothesis?. Hypothesis testing is about ''finding statistical evidence for your alternative hypothesis $H_A$'', i.e. whether the data you observe is (statistical) evidence that $H_A$ is true (see What follows if we fail to reject t...
Appropriateness of one-sided hypothesis tests when testing medical treatments
Hypothesis testing: I refer to this answer: What follows if we fail to reject the null hypothesis?. Hypothesis testing is about ''finding statistical evidence for your alternative hypothesis $H_A$'',
Appropriateness of one-sided hypothesis tests when testing medical treatments Hypothesis testing: I refer to this answer: What follows if we fail to reject the null hypothesis?. Hypothesis testing is about ''finding statistical evidence for your alternative hypothesis $H_A$'', i.e. whether the data you observe is (sta...
Appropriateness of one-sided hypothesis tests when testing medical treatments Hypothesis testing: I refer to this answer: What follows if we fail to reject the null hypothesis?. Hypothesis testing is about ''finding statistical evidence for your alternative hypothesis $H_A$'',
47,872
Appropriateness of one-sided hypothesis tests when testing medical treatments
A one-sided confidence interval (CI)/test is as good as a two-sided CI/test: it all depends on your assumptions and goals. Given what you are telling us, ie that the prior knowledge is very limited ('hunch'), using a one-sided approach is almost groundless, and you risk being not conservative enough. I would thus recom...
Appropriateness of one-sided hypothesis tests when testing medical treatments
A one-sided confidence interval (CI)/test is as good as a two-sided CI/test: it all depends on your assumptions and goals. Given what you are telling us, ie that the prior knowledge is very limited ('
Appropriateness of one-sided hypothesis tests when testing medical treatments A one-sided confidence interval (CI)/test is as good as a two-sided CI/test: it all depends on your assumptions and goals. Given what you are telling us, ie that the prior knowledge is very limited ('hunch'), using a one-sided approach is alm...
Appropriateness of one-sided hypothesis tests when testing medical treatments A one-sided confidence interval (CI)/test is as good as a two-sided CI/test: it all depends on your assumptions and goals. Given what you are telling us, ie that the prior knowledge is very limited ('
47,873
Why do we calculate pooled standard deviations by using variances?
We work with variances rather than standard deviations because variances have special properties. In particular, variances of sums and differences of variables have a simple form, and if the variables are independent, the result is even simpler. That is, if two variables are independent, the variance of the difference ...
Why do we calculate pooled standard deviations by using variances?
We work with variances rather than standard deviations because variances have special properties. In particular, variances of sums and differences of variables have a simple form, and if the variables
Why do we calculate pooled standard deviations by using variances? We work with variances rather than standard deviations because variances have special properties. In particular, variances of sums and differences of variables have a simple form, and if the variables are independent, the result is even simpler. That is...
Why do we calculate pooled standard deviations by using variances? We work with variances rather than standard deviations because variances have special properties. In particular, variances of sums and differences of variables have a simple form, and if the variables
47,874
Combining one class classifiers to do multi-class classification
I've done something like this using either of the following: (a) Given three different classes (e.g. A, B, C), create an input column for each class. Place '1' in the A column if the sample is an A, '0' otherwise - do this for B and C classes using the same logic. The foregoing columns will be your target fields for t...
Combining one class classifiers to do multi-class classification
I've done something like this using either of the following: (a) Given three different classes (e.g. A, B, C), create an input column for each class. Place '1' in the A column if the sample is an A,
Combining one class classifiers to do multi-class classification I've done something like this using either of the following: (a) Given three different classes (e.g. A, B, C), create an input column for each class. Place '1' in the A column if the sample is an A, '0' otherwise - do this for B and C classes using the s...
Combining one class classifiers to do multi-class classification I've done something like this using either of the following: (a) Given three different classes (e.g. A, B, C), create an input column for each class. Place '1' in the A column if the sample is an A,
47,875
Combining one class classifiers to do multi-class classification
Two classifiers which do 0 vs 1 and 0 vs 2 classifications intuitively should perform better than a classifier which has to distinguish between all three at once. The intuition being, that the choice of which 2-classifier to use for a given sample is also to be learned when doing the 0 vs 1 vs 2 classification problem....
Combining one class classifiers to do multi-class classification
Two classifiers which do 0 vs 1 and 0 vs 2 classifications intuitively should perform better than a classifier which has to distinguish between all three at once. The intuition being, that the choice
Combining one class classifiers to do multi-class classification Two classifiers which do 0 vs 1 and 0 vs 2 classifications intuitively should perform better than a classifier which has to distinguish between all three at once. The intuition being, that the choice of which 2-classifier to use for a given sample is also...
Combining one class classifiers to do multi-class classification Two classifiers which do 0 vs 1 and 0 vs 2 classifications intuitively should perform better than a classifier which has to distinguish between all three at once. The intuition being, that the choice
47,876
Combining one class classifiers to do multi-class classification
First of all, regarding terminology, you are talking about using multiple two-class classifiers, rather than one-class classifiers. One-class classifiers are a class of models used for anomaly or novelty detection, where you have data coming only from a single class. If you have two classes, it's a two-class classifier....
Combining one class classifiers to do multi-class classification
First of all, regarding terminology, you are talking about using multiple two-class classifiers, rather than one-class classifiers. One-class classifiers are a class of models used for anomaly or novel
Combining one class classifiers to do multi-class classification First of all, regarding terminology, you are talking about using multiple two-class classifiers, rather than one-class classifiers. One-class classifiers are a class of models used for anomaly or novelty detection, where you have data coming only from a si...
Combining one class classifiers to do multi-class classification First of all, regarding terminology, you are talking about using multiple two-class classifiers, rather than one-class classifiers. One-class classifiers are a class of models used for anomaly or novel
47,877
Combining one class classifiers to do multi-class classification
I am not so familiar with Bayes Networks. If you are interested in learning a weighting scheme, I'd propose a meta-linear model to combine those outputs. A perceptron or linear support vector machine may work well here.
Combining one class classifiers to do multi-class classification
I am not so familiar with Bayes Networks. If you are interested in learning a weighting scheme, I'd propose a meta-linear model to combine those outputs. A perceptron or linear support vector machine
Combining one class classifiers to do multi-class classification I am not so familiar with Bayes Networks. If you are interested in learning a weighting scheme, I'd propose a meta-linear model to combine those outputs. A perceptron or linear support vector machine may work well here.
Combining one class classifiers to do multi-class classification I am not so familiar with Bayes Networks. If you are interested in learning a weighting scheme, I'd propose a meta-linear model to combine those outputs. A perceptron or linear support vector machine
47,878
Need more intuition for the curse of dimensionality [duplicate]
I am used to an essentially similar but, in my opinion, a bit more illustrative example. Let $x_1, \ldots, x_l$ be i.i.d. and uniformly distributed in the unit $n$-ball centered at the origin. Then it can be shown (I'm not writing out the derivation now, let me know if you're interested) that the median of the maximum of Euclide...
Need more intuition for the curse of dimensionality [duplicate]
I am used to an essentially similar but, in my opinion, a bit more illustrative example. Let $x_1, \ldots, x_l$ be i.i.d. and uniformly distributed in the unit $n$-ball centered at the origin. Then it can be sh
Need more intuition for the curse of dimensionality [duplicate] I am used to an essentially similar but, in my opinion, a bit more illustrative example. Let $x_1, \ldots, x_l$ be i.i.d. and uniformly distributed in the unit $n$-ball centered at the origin. Then it can be shown (I'm not writing out the derivation now, let me know...
Need more intuition for the curse of dimensionality [duplicate] I am used to an essentially similar but, in my opinion, a bit more illustrative example. Let $x_1, \ldots, x_l$ be i.i.d. and uniformly distributed in the unit $n$-ball centered at the origin. Then it can be sh
47,879
Need more intuition for the curse of dimensionality [duplicate]
There is one respect in which the Euclidean distance is not comfortable because the distance tends to increase with dimension: comparison of distances between two pairs of points when the dimension of the first pair is different than that of the second pair. Suppose there are two points $x$ and $y$ in $\mathbb{R}^n$ ...
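To make the cross-dimension comparison concrete, here is a small simulation (my own illustration, not from the answer) for uniform points in $[0,1]^n$: since $E\lVert x-y\rVert^2 = n/6$, the raw expected distance grows like $\sqrt{n/6}$, while dividing by $\sqrt{n}$ puts pairs from different dimensions on a comparable scale.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_pairwise_distance(n, pairs=20000):
    """Average Euclidean distance between two uniform points in [0, 1]^n."""
    x = rng.random((pairs, n))
    y = rng.random((pairs, n))
    return np.linalg.norm(x - y, axis=1).mean()

d2 = mean_pairwise_distance(2)
d200 = mean_pairwise_distance(200)
# Raw distances blow up with n (E||x - y||^2 = n / 6) ...
ratio_raw = d200 / d2
# ... but dividing by sqrt(n) puts the two dimensions on the same scale.
ratio_scaled = (d200 / np.sqrt(200)) / (d2 / np.sqrt(2))
```

The scaled ratio is not exactly $1$ in low dimension (Jensen's gap between $E\lVert d\rVert$ and $\sqrt{E\lVert d\rVert^2}$), but it is close, whereas the raw ratio is an order of magnitude.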
47,880
Feature extraction for time series classification
From my experience, often the mass calculation of different features with subsequent inspection of their significance can lead to interesting insights. You could use the Python package tsfresh to automatically extract a huge number of features and filter them for their importance. You described that you calculated b...
47,881
slice sampling within a Gibbs sampler
I found two references. This one details the algorithm, but the publicly-available pages that I could see on Google Books don't prove that it works. @inbook{cruz, Author = {Cruz, Marcelo G. and Peters, Gareth W. and Shevchenko, Pavel V.}, Chapter = {7.6.2: Generic univariate auxiliary variable Gibbs sampler: sl...
47,882
slice sampling within a Gibbs sampler
It is generally the case that any valid MCMC scheme for univariate distributions can be applied to a univariate conditional distribution as part of an MCMC scheme for sampling from a multivariate distribution. This fact is used extensively all over the literature on MCMC. Its proof is straightforward. There is no ne...
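As a concrete sketch of the scheme both answers describe (my own minimal implementation of Neal's stepping-out and shrinkage procedure, not code from either reference): a univariate slice sampler is applied in turn to each full conditional of a correlated bivariate normal.

```python
import math
import random

def slice_sample(logf, x0, w=1.0, rng=random):
    """One update of a stepping-out + shrinkage slice sampler for log-density logf."""
    logy = logf(x0) + math.log(rng.random())      # log height of the slice
    # Step out until both ends fall below the slice.
    left = x0 - w * rng.random()
    right = left + w
    while logf(left) > logy:
        left -= w
    while logf(right) > logy:
        right += w
    # Shrinkage: sample uniformly in the bracket until we land inside the slice.
    while True:
        x1 = left + (right - left) * rng.random()
        if logf(x1) > logy:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

# Gibbs scan for a bivariate normal with correlation rho: each full conditional
# x | y ~ N(rho * y, 1 - rho^2) is updated with the slice sampler above.
rho = 0.8
rng = random.Random(42)
x, y = 0.0, 0.0
xs = []
for it in range(20000):
    x = slice_sample(lambda v: -(v - rho * y) ** 2 / (2 * (1 - rho ** 2)), x, rng=rng)
    y = slice_sample(lambda v: -(v - rho * x) ** 2 / (2 * (1 - rho ** 2)), y, rng=rng)
    xs.append(x)
mean_x = sum(xs) / len(xs)
var_x = sum(v * v for v in xs) / len(xs) - mean_x ** 2
```

Each coordinate update leaves its own full conditional invariant, so the full scan leaves the joint invariant — exactly the composition argument made in the answer; the marginal of $x$ should come out as $N(0,1)$.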
47,883
Clarification about no free lunch theorem
As I understand the NFL theorem, the only way a model can outperform the general model is by using predefined knowledge / structure relevant to the problem. These prior assumptions will cause the specialized model to perform worse on average on other subsets that aren't its specialty. This is not entirely accurate,...
47,884
Using re.form= in predict.merMod() for a lmer() model
The short answer is that dropping random effects from predictions does not re-estimate the reduced model, it just sets the other random effects to 0, so it is still "fully conditional". In the first model, for which you controlled for days and random slopes and intercepts, each individual has three contributions to the...
47,885
EFA on one part of the dataset and CFA/SEM on another part of the dataset
I believe you should do the structural equation modeling on the second half of the dataset. As you say in your question, the basic process is: You split the dataset, and the first half you do the EFA on. This is where you explore the data and get a feel for how the structure shapes up. But who knows if this is just du...
47,886
Maximize variance of a distribution subject to constraints
Let $f(x) = (x - \mu)^2$. Since $f$ is convex, we have $$ f(x) = f\bigl( (1-x)\cdot 0 + x\cdot 1 \bigr) \leq (1-x) f(0) + x f(1) $$ for all $x\in[0,1]$ and thus we get the bound $$\begin{align*} \mathrm{Var}(X) &= \mathbb{E}\bigl(f(X)\bigr) \\ &\leq \mathbb{E}(1-X) f(0) + \mathbb{E}(X) f(1) \\ &= (1-\mu)\mu^2 + \mu (1...
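A quick numeric check of the bound (my own addition): Beta distributions live on $[0,1]$ and satisfy $\mathrm{Var} = \mu(1-\mu)/(a+b+1) \le \mu(1-\mu)$, while the two-point (Bernoulli) distribution with mean $\mu$ attains the bound exactly.

```python
# Numeric check of the convexity bound Var(X) <= mu * (1 - mu) for X in [0, 1].

def beta_moments(a, b):
    """Mean and variance of a Beta(a, b) variable (supported on [0, 1])."""
    mu = a / (a + b)
    var = mu * (1 - mu) / (a + b + 1)
    return mu, var

bound_ok = all(
    beta_moments(a, b)[1] <= beta_moments(a, b)[0] * (1 - beta_moments(a, b)[0])
    for a, b in [(0.5, 0.5), (1, 1), (2, 5), (10, 1)]
)

# The Bernoulli(mu) distribution attains the bound exactly.
mu = 0.3
bernoulli_var = (1 - mu) * (0 - mu) ** 2 + mu * (1 - mu) ** 2
```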
47,887
Maximize variance of a distribution subject to constraints
I think I can develop a partial answer for a three-point distribution. Suppose I have ${\rm Prob}[X=0]=p_0, {\rm Prob}[X=1]=p_1$ and ${\rm Prob}[X=a]=p$ for some fixed $a,p\in(0,1)$. Then $$ \mathbb{E}[X] = ap + p_1 = \mu, $$ so that $p_1=\mu-ap, p_0=1-p-\mu+ap$ (some reasonable conditions must be applied so that the s...
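A small numeric check of the three-point construction above, with illustrative values of $a$, $p$, $\mu$ chosen so that all three probabilities are nonnegative (the "reasonable conditions" the answer mentions):

```python
# P(X=0)=p0, P(X=1)=p1, P(X=a)=p with p1 = mu - a*p and p0 = 1 - p - p1.
a, p, mu = 0.4, 0.2, 0.5
p1 = mu - a * p
p0 = 1 - p - p1
support = [0.0, 1.0, a]
probs = [p0, p1, p]

mean = sum(x * q for x, q in zip(support, probs))
var = sum((x - mean) ** 2 * q for x, q in zip(support, probs))
```

The mean comes out as $\mu$ by construction, and the variance sits strictly below the two-point maximum $\mu(1-\mu)$ because some mass is pinned at the interior point $a$.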
47,888
In which cases we can approximate expected value of a function by assuming the function and the expectation commute?
I will use $E$ for expectation, rather than angle brackets. First of all, $E(f(X))$ can always be "approximated" by $f(E(X))$; the only question is the accuracy and adequacy for purpose of that approximation, which can be very context-specific. If $f$ is linear (or more generally, affine), $E(f(X)) = f(E(X))$, and so t...
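One standard way to quantify the quality of the approximation is the second-order correction $E(f(X)) \approx f(\mu) + \tfrac12 f''(\mu)\sigma^2$. For $f = \exp$ and $X \sim N(\mu,\sigma^2)$ the exact value $E(e^X) = e^{\mu+\sigma^2/2}$ is available in closed form, so the naive $f(E(X))$ and the corrected version can be compared directly (my own illustration):

```python
import math

# Second-order ("delta method" style) correction for E f(X) with f = exp.
mu, sigma = 1.0, 0.3
exact = math.exp(mu + sigma ** 2 / 2)              # E e^X for X ~ N(mu, sigma^2)
naive = math.exp(mu)                               # f(E X)
corrected = math.exp(mu) * (1 + sigma ** 2 / 2)    # adds f''(mu) * sigma^2 / 2

err_naive = abs(exact - naive)
err_corrected = abs(exact - corrected)
```

For small $\sigma$ the corrected error is smaller by roughly a factor $\sigma^2$, which is exactly the sense in which "variance small enough" makes the commutation approximation adequate.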
47,889
Convert double differenced forecast into actual value
I found the answer on Stack Overflow. To summarize: instead of doing ARIMAfit <- auto.arima(diff(diff(val.ts)), approximation=FALSE, trace=FALSE, xreg=diff(diff(xreg))) we should instead do ARIMAfit <- auto.arima(val.ts, d=2, approximation=FALSE, trace=FALSE, xreg=xreg) This d=2 will make sure that forecasted values for futu...
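The `d=2` fix lets auto.arima handle the integration internally; the mechanics of undoing a double difference by hand (sketched here in Python purely for illustration) are two integrations seeded with the last two observed levels:

```python
import numpy as np

# Forecasts of diff(diff(y)) are turned back into levels by integrating twice.
y = np.array([10.0, 12.0, 15.0, 19.0, 24.0])   # toy series
d2 = np.diff(y, n=2)                            # what a model on diff(diff(y)) sees

def undiff2(d2_forecasts, last_two):
    """Integrate second-difference forecasts back to the original scale."""
    y_prev2, y_prev1 = last_two
    levels = []
    for d in d2_forecasts:
        # d = (y_t - y_{t-1}) - (y_{t-1} - y_{t-2})  =>  y_t = d + 2*y_{t-1} - y_{t-2}
        y_t = d + 2 * y_prev1 - y_prev2
        levels.append(y_t)
        y_prev2, y_prev1 = y_prev1, y_t
    return levels

# Round-trip: re-integrating the in-sample second differences recovers y.
recovered = undiff2(d2, (y[0], y[1]))
```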
47,890
Use of fixed effects and random effects
Here is a standard linear panel data model: $$ y_{it}=X_{it}\delta+\alpha_i+\eta_{it}, $$ the so-called error component model. Here, $\alpha_i$ is what is sometimes called individual-specific heterogeneity, the error component that is constant over time. The other error component $\eta_{it}$ is "idiosyncratic", varying...
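A small simulation of this error component model (my own illustration): when $\alpha_i$ is correlated with the regressor, pooled OLS is biased, while the within (fixed-effects) transformation, which demeans by individual and so removes $\alpha_i$, recovers $\delta$.

```python
import numpy as np

# y_it = x_it * delta + alpha_i + eta_it, with alpha_i correlated with x.
rng = np.random.default_rng(7)
n_i, n_t, delta = 500, 5, 2.0
alpha = rng.normal(size=(n_i, 1))
x = 0.8 * alpha + rng.normal(size=(n_i, n_t))   # x correlated with alpha
y = delta * x + alpha + 0.5 * rng.normal(size=(n_i, n_t))

# Pooled OLS slope (everything here is mean-zero, so no intercept is needed).
pooled = (x * y).sum() / (x * x).sum()

# Within transformation: demean x and y by individual, which wipes out alpha_i.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
within = (xd * yd).sum() / (xd * xd).sum()
```

The pooled slope absorbs the bias term $\mathrm{cov}(x,\alpha)/\mathrm{var}(x) \approx 0.49$ here, while the within estimator lands close to the true $\delta = 2$.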
47,891
How to tune the "depth" and "min_samples_leaf" of Random Forest with correlated data?
You are doing it wrong -- the essential part of RF is that it basically only requires making # trees large enough to converge and that's it (it becomes obvious once one starts doing proper tuning, i.e. nested cross-validation to check how robust the selection of parameters really is). If the performance is bad it is be...
47,892
GLMER sampling random effects
One of the ways of thinking about random effects (see also this answer) is that they apply to groups that are random draws from a population. So if you studied students' performance across different schools, you could treat schools either as a fixed effect, estimating a parameter for each school, or as a random effect and be...
47,893
How can we calculate the variance inflation factor for a categorical predictor variable when examining multicollinearity in a linear regression model?
The function you requested comes in the package {car} in R. I tried to figure it out running some regression models using the mtcars dataset in R. Evidently, I can get the VIF both using the function and manually, when the regressor is a continuous variable: require(car) attach(mtcars) fit1 <- lm(mpg ~ wt + hp + disp)...
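The same computation can be done from scratch in any language; here is a minimal Python sketch (my own, not from the answer) of the continuous-predictor case, computing $\mathrm{VIF}_j = 1/(1-R_j^2)$ from the auxiliary regression of predictor $j$ on the others. (For a categorical predictor, car's vif does this per set of dummies via the generalized VIF; the mechanics per column are the same.)

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=n)   # strongly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """1 / (1 - R^2) from regressing column j of X on the other columns."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])   # add an intercept
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

vifs = [vif(X, j) for j in range(3)]   # x1 and x2 inflated, x3 near 1
```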
47,894
Testing for classification significance
It is very unusual to perform a significance test on a classifier (it is also very unusual to use 70-fold cross-validation on a 160-observation dataset - the most common choices are 5 or 10 folds; for the number of folds you used, you could have chosen a leave-one-out procedure). The issue is the null hypothesis. You probably want to know if your classif...
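One simple version of the "better than chance?" test, sketched in Python under the assumption of a fixed chance accuracy $p_0$ (e.g. $0.5$ for a balanced two-class problem): under the null, the number of correct test predictions is Binomial$(n, p_0)$, and the one-sided p-value is the upper tail at the observed count.

```python
from math import comb

def binom_pvalue(correct, n, p0):
    """P(K >= correct) for K ~ Binomial(n, p0): one-sided exact binomial p-value."""
    return sum(comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
               for k in range(correct, n + 1))

# E.g. 100 correct out of 160 on a balanced two-class problem (p0 = 0.5).
p = binom_pvalue(100, 160, 0.5)
```

Note this treats the test predictions as independent Bernoulli trials, which is only approximately true when the count is pooled over cross-validation folds.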
47,895
Treating missing values in panel data set
Imputation is very useful for improving the accuracy of your parameter estimates in situations where a significant amount of data would otherwise be deleted. Consider that in a study with, for example, 100 observations and four regressors, each with a 10% missing observation rate, you'll only be missing 10% of the dat...
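The arithmetic behind this point is worth spelling out: with four regressors each independently missing 10% of the time, a row survives listwise deletion with probability $0.9^4 \approx 0.66$, so roughly a third of the sample is lost, not 10%. A quick check:

```python
import numpy as np

# Each of k regressors is missing independently with probability `miss`;
# listwise deletion drops a row if *any* regressor is missing.
expected_loss = 1 - 0.9 ** 4   # about 0.344

rng = np.random.default_rng(11)
n, k, miss = 100_000, 4, 0.1
missing = rng.random((n, k)) < miss
dropped = missing.any(axis=1).mean()
```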
47,896
Ancillary statistics:Beta distribution is free of $\beta$?
There is either a typo in giving the pdf of $Z$ or you are confusing it with the general definition of a Beta $\text{B}(\alpha,\beta)$ as it should be a Beta $\text{B}(\alpha,\alpha)$ distribution. For instance, your link shows why the ratio of two Gamma $\text{G}(\alpha_i,\beta)$ variates is a Beta $\text{B}(\alpha_1,\al...
47,897
Bound for weighted sum of Poisson random variables
We can use the saddlepoint approximation. I will follow closely my answer to Generic sum of Gamma random variables . For the saddlepoint approximation I will follow Ronald W Butler: "Saddlepoint approximations with applications" (Cambridge UP). See also the post How does saddlepoint approximation work? Let $X_1, \do...
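A minimal self-contained version of the approximation (my own sketch, not Butler's code): the CGF of $X=\sum_i w_i N_i$ with $N_i \sim \text{Poisson}(\lambda_i)$ is $K(s)=\sum_i \lambda_i(e^{w_i s}-1)$; solve $K'(\hat s)=x$ and plug into the saddlepoint density $\exp(K(\hat s)-\hat s x)/\sqrt{2\pi K''(\hat s)}$. The bisection bounds below are ad hoc and assumed wide enough for the example.

```python
import math

def saddlepoint_density(x, lams, ws):
    """Saddlepoint density approximation for X = sum_i w_i * N_i, N_i ~ Poisson(lam_i)."""
    K = lambda s: sum(l * (math.exp(w * s) - 1) for l, w in zip(lams, ws))
    K1 = lambda s: sum(l * w * math.exp(w * s) for l, w in zip(lams, ws))
    K2 = lambda s: sum(l * w * w * math.exp(w * s) for l, w in zip(lams, ws))
    # Bisection for K'(s) = x; K' is increasing when all weights are positive.
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if K1(mid) < x:
            lo = mid
        else:
            hi = mid
    s = (lo + hi) / 2
    return math.exp(K(s) - s * x) / math.sqrt(2 * math.pi * K2(s))

# Sanity check against a plain Poisson(20): the saddlepoint value at x = 20
# should be close to the exact pmf (the ratio is the usual Stirling factor).
lam = 20.0
exact = math.exp(-lam) * lam ** 20 / math.factorial(20)
approx = saddlepoint_density(20, [lam], [1.0])
```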
47,898
Kendall's tau derivation from first principles?
Tau is an "indicator" version of covariance. Recall this image from How would you explain covariance to someone who understands only the mean: It shows two possible configurations of pairs of points in a scatterplot. The red pairs are "positively" oriented (or "concordant"): they are at the lower left and upper right...
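The first-principles picture above translates directly into code (my own illustration): count concordant minus discordant pairs and divide by $\binom n2$, assuming no ties.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau from its definition: (concordant - discordant) / (n choose 2)."""
    n = len(x)
    s = 0
    for i, j in combinations(range(n), 2):
        d = (x[i] - x[j]) * (y[i] - y[j])   # sign says concordant vs discordant
        s += (d > 0) - (d < 0)
    return 2 * s / (n * (n - 1))

x = [1, 2, 3, 4, 5]
y = [1, 3, 2, 5, 4]        # two swapped neighbours out of 10 pairs
tau = kendall_tau(x, y)    # (8 - 2) / 10 = 0.6
```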
47,899
Fisher's exact test vs kappa analysis
I know I am answering the question two years later, but I hope some future readers may find the answer helpful. Cohen's $\kappa$ tests whether there is a higher chance that a datum falls on the diagonal of a classification table, whereas Fisher's exact test evaluates the association between two categorical variables. In some cases, C...
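To see the two statistics side by side, here is a small Python sketch (my own, on a made-up $2\times 2$ agreement table): Cohen's $\kappa$ corrects the diagonal agreement for chance, while Fisher's one-sided exact p-value is a hypergeometric tail given the margins.

```python
import math

# Agreement table [[a, b], [c, d]] between two raters.
a, b, c, d = 40, 5, 10, 45
n = a + b + c + d

# Cohen's kappa: chance-corrected diagonal agreement.
po = (a + d) / n                                      # observed agreement
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # agreement expected by chance
kappa = (po - pe) / (1 - pe)

# One-sided Fisher p-value: hypergeometric tail P(A >= a) given the margins.
def hyper_pmf(k):
    return (math.comb(a + b, k) * math.comb(c + d, (a + c) - k)
            / math.comb(n, a + c))

p_fisher = sum(hyper_pmf(k) for k in range(a, min(a + b, a + c) + 1))
```

On this table the two agree qualitatively (high $\kappa$, tiny p-value), but the situations the answer alludes to, where strong association coexists with poor diagonal agreement, are exactly where they diverge.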
47,900
Should I consider time as a fixed or random effect in GLMM?
I think it may be a little more complex than just "fixed" or "random" effect. What you seem to be suggesting is that there is a known decline in bird abundance over the years. What is perhaps not known is whether that can be explained by values of existing variables in your regression. Ideally, you would include all th...