Kruskal-Wallis test: how to calculate the exact p-value?
The general idea for computing p-values over a finite set of possibilities is to run over all possibilities and count how many times the statistic is greater than your observed statistic. Example for a binomial: You want to test if a coin is biased with 10 heads/tails. Compute the statistic $h_0=|heads-5|$ for your dataset. Under...
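The enumeration idea above can be sketched in Python; the function name and the choice of 8 observed heads are illustrative, not from the original answer:

```python
from math import comb

def exact_p_value(n_flips, observed_heads):
    """Exact p-value for a fair-coin test with statistic h = |heads - n/2|,
    obtained by enumerating every possible outcome under the null."""
    h_obs = abs(observed_heads - n_flips / 2)
    # Count the probability mass of all outcomes at least as extreme
    # as the observed one, under the fair-coin null.
    return sum(
        comb(n_flips, k) for k in range(n_flips + 1)
        if abs(k - n_flips / 2) >= h_obs
    ) / 2 ** n_flips

p = exact_p_value(10, 8)  # outcomes with |heads - 5| >= 3
```

For 8 heads out of 10, the extreme outcomes are $k \in \{0,1,2,8,9,10\}$, giving $112/1024 = 0.109375$.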
Why do saddle points become "attractive" in Newtonian dynamics?
There's not really a more intuitive way to think about this. Suppose that you have the eigendecomposition of the Hessian for $P$ an orthonormal matrix of eigenvectors and $D$ diagonal matrix of eigenvalues. $$ \begin{align} \nabla^2 f(x) &= PDP^\top \\ \left[\nabla^2 f(x)\right]^{-1} &= PD^{-1}P^\top \end{align} $$ Thi...
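As a concrete (hypothetical) illustration, take $f(x, y) = x^2 - y^2$: the Newton step solves against the Hessian, so the direction along the negative eigenvalue is reversed, and the iterate lands exactly on the saddle:

```python
import numpy as np

# Toy saddle: f(x, y) = x^2 - y^2, stationary point at the origin.
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

H = np.array([[2.0, 0.0], [0.0, -2.0]])  # Hessian (constant for this f)

p = np.array([1.0, 0.5])                 # start away from the saddle
p_new = p - np.linalg.solve(H, grad(p))  # one Newton step
# Along y the eigenvalue is negative, so the "descent" direction is
# flipped and Newton jumps straight to the saddle point (0, 0).
```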
Training models on biased samples
1) You have no data on the false negatives (i.e. cases that are risky but that the existing model deems 'not risky'), which makes these cases impossible to identify. 2) You can train a new model that would distinguish between the true positives and false positives that you have data about, i.e. the samples that have...
Training models on biased samples
Under certain assumptions, one way to capture some of those hidden false negatives would be to do clustering and possibly outlier detection. Then, in addition to the high-risk examples, you can manually examine some representative examples from each cluster, and/or the outliers. Also you can see if your cluste...
Difference between OOB score and score of random forest model in scikit-learn package?
clf.score(X, y) provides the coefficient of determination ($R^2$) for the trained model on the given data. Since you pass the same data used for training, this is your overall training score. If you passed "unseen" test data here, you would get a validation score. clf.oob_score_ provides the coefficient of determination ...
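A minimal sketch of the difference; the dataset and hyperparameters are arbitrary choices for illustration:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=10, noise=10.0,
                       random_state=0)
clf = RandomForestRegressor(n_estimators=100, oob_score=True,
                            random_state=0).fit(X, y)

train_r2 = clf.score(X, y)   # R^2 on the data used for training
oob_r2 = clf.oob_score_      # R^2 from out-of-bag predictions
# The training R^2 is optimistic; the OOB R^2 is closer to what
# you would see on genuinely held-out data.
```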
Pre-computing feature crosses when using XGBoost?
Both are correct. Tree-based methods cut perpendicular to a feature axis, so if the boundary is actually a diagonal, many perpendicular splits will be used to approximate a diagonal boundary. On the one hand, trees (like xgb) can work out this boundary; on the other hand, lots of splits will be needed to accomplish it....
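A tiny illustration of the point, using the simplest kind of cross (here a difference feature, my choice) for a diagonal boundary; the grid and names are made up:

```python
# Toy diagonal boundary: label = 1 exactly when x1 > x2.
points = [(x1, x2) for x1 in range(10) for x2 in range(10)]
labels = [1 if x1 > x2 else 0 for x1, x2 in points]

# No single axis-perpendicular split on x1 or x2 alone separates
# the classes, but the engineered feature x1 - x2 does with ONE
# split at 0 -- which is why pre-computing the cross can help.
diff = [x1 - x2 for x1, x2 in points]
predicted = [1 if d > 0 else 0 for d in diff]
```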
Calculate $E[XY]$ for $(X,Y)\sim N(\mu_{1},\mu_{2},\sigma_{1}^{2},\sigma_{2}^{2}, \rho)$
The hint you suggest in your question is to use conditional expectations. Although not the only way to solve this problem, it is a quick method if you are comfortable with conditioning arguments. $\mathbb{E}(XY) = \mathbb{E}_X(\mathbb{E}_Y(XY|X))$ where the subscripts denote expectation with respect to which variable (f...
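Carrying the conditioning argument through, using the standard conditional mean of a bivariate normal, $\mathbb{E}(Y \mid X) = \mu_2 + \rho\frac{\sigma_2}{\sigma_1}(X - \mu_1)$:

$$\mathbb{E}(XY) = \mathbb{E}_X\!\big(X\,\mathbb{E}(Y \mid X)\big) = \mathbb{E}_X\!\left(X\Big(\mu_2 + \rho\frac{\sigma_2}{\sigma_1}(X - \mu_1)\Big)\right) = \mu_1\mu_2 + \rho\frac{\sigma_2}{\sigma_1}\big(\mathbb{E}(X^2) - \mu_1^2\big) = \mu_1\mu_2 + \rho\,\sigma_1\sigma_2,$$

since $\mathbb{E}(X^2) - \mu_1^2 = \sigma_1^2$.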
Fitting MA(q) and ARIMA(q) model
When you write [...] to initialize a time series of random white noise (the errors), and then perform a first fit to obtain a first model, then calculate the errors compared to the actual data, and then fit it again with the newly obtained errors, compare it to the actual data to obtain new errors and so on. you a...
Why is p(x|z) tractable but p(z|x) intractable?
In Bayesian inference, when you have some data $x$, you first specify a likelihood, $p(x|z)$, also called a sampling distribution, which will depend on some unknown parameters $z$ (also called latent variables, going with your notation). We then have to specify a prior on these latent variables, $p(z)$, to comple...
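In symbols, the tension the answer describes lives in the normalizing constant of Bayes' rule:

$$p(z \mid x) = \frac{p(x \mid z)\, p(z)}{p(x)}, \qquad p(x) = \int p(x \mid z)\, p(z)\, dz.$$

The numerator is cheap to evaluate pointwise (you chose both factors), but the evidence $p(x)$ integrates over every latent configuration and is typically intractable.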
What's the difference between Random Intercepts Model and linear model with dummies?
Random intercept models (and multi-level models in general) allow us to relax the assumption of independent errors. They do this by having two sorts of errors (labeled $\epsilon$ and $\mu$ in your question).
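One common way to write the random intercept model, matching the $\epsilon$/$\mu$ labels above:

$$y_{ij} = \beta_0 + \beta_1 x_{ij} + \mu_j + \epsilon_{ij}, \qquad \mu_j \sim N(0, \sigma_\mu^2), \qquad \epsilon_{ij} \sim N(0, \sigma_\epsilon^2),$$

so observations within group $j$ share the draw $\mu_j$ and are therefore correlated, unlike in a dummy-coded linear model with a single error term.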
Why do we use sampled points instead of mean to reconstruct outputs in variational autoencoder?
According to Auto-Encoding Variational Bayes (eq. 3), the loss function of a variational autoencoder contains an expectation term. That expectation is usually an intractable integral, so we want to approximate the expected value by drawing samples and then computing the average. The random value is added for generating samples from $q_\phi(z|x^{(...
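The sampling step is usually implemented with the reparameterization trick; a minimal sketch with made-up encoder outputs:

```python
import random

random.seed(0)
mu, sigma = 1.5, 0.5  # encoder outputs for one data point (toy values)

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
# so the randomness is external and gradients flow through mu, sigma.
samples = [mu + sigma * random.gauss(0.0, 1.0) for _ in range(20000)]
mean_z = sum(samples) / len(samples)  # should be close to mu
```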
Test to determine whether a variable changes more in one group than another
Is the affected group significantly more affected than controls by a change in dose - particularly at lower doses? What you are describing is equivalent to assessing the significance of an interaction in a regression model. Namely, the interaction between group and dose in the model: response ~ group * dose Your quest...
Test to determine whether a variable changes more in one group than another
It looks like you have repeated measures for each individual across time which need to be accounted for. The most appropriate model is probably a linear mixed model (or growth model) with individual as your random factor and time and time squared as your random effects, and time and time squared as your fixed effects. ...
When the effect size of a covariate is high and yet not significant
Significance means detectability. That, in turn, depends (among other things) on the amount of data. A common way to see large but insignificant effect sizes, then, is when there isn't much data. Since such examples are numerous and easy to create, I won't dwell on this rather uninteresting point. There are subtler ...
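A small illustration with made-up numbers: the same effect size yields a much larger t statistic, hence easier detection, once there is more data (t statistics only, to keep the sketch self-contained):

```python
from math import sqrt

def t_stat(a, b):
    """Two-sample t statistic with a pooled variance estimate."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * sqrt(1 / na + 1 / nb))

a, b = [1.0, 3.0, 2.0], [4.0, 6.0, 5.0]  # large mean difference
t_small = t_stat(a, b)            # n = 3 per group: |t| ~ 3.7
t_large = t_stat(a * 20, b * 20)  # same effect, n = 60: |t| ~ 20
```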
Complete sufficient statistic and unbiased estimator
Can we always find such an unbiased estimator if we have complete sufficient statistic? A slight modification of the one given in the comments. Let $X_1,X_2,...X_n$ follow $B(m,\theta)$. Then the function $g(\theta)=\frac{1}{\theta}$ doesn't admit an unbiased estimator while $\sum_{i=1}^{n}{X_i}$ is a Complete Suffic...
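Why the binomial example works can be sketched as follows: for $T = \sum_{i=1}^{n} X_i \sim B(nm, \theta)$, any estimator $\delta(T)$ has expectation

$$\mathbb{E}_\theta\,\delta(T) = \sum_{t=0}^{nm} \delta(t)\binom{nm}{t}\theta^t(1-\theta)^{nm-t},$$

a polynomial in $\theta$ of degree at most $nm$, and hence bounded on $(0,1)$. Since $g(\theta) = 1/\theta$ is unbounded as $\theta \to 0$, no choice of $\delta$ can make the two agree for all $\theta$.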
Complete sufficient statistic and unbiased estimator
No, there are examples where there is a complete sufficient statistic but there is some function of the parameter that does not admit an unbiased estimator. One binomial example, noted in comments, is discussed here at math SE. There are many other examples, this is one. A paper discussing the topic is A Class of Para...
Moments of the sample median of a normal distribution
The following is not an exact answer, but it does provide some help in the form of a heuristic for the central moment. As mentioned in the comments by others, a theoretical expression is difficult (except for the odd moments which are zero), if not unobtainable. Yet, for the 2nd moment of the median of samples with 3 ...
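A quick Monte Carlo check of the 2nd moment for samples of size 3 (the simulation settings are arbitrary); the true value is roughly 0.449 for standard normal samples:

```python
import random

random.seed(0)
n_sims, n = 20000, 3  # 3 standard-normal observations per sample

meds = []
for _ in range(n_sims):
    x = sorted(random.gauss(0.0, 1.0) for _ in range(n))
    meds.append(x[1])  # the median of 3 values is the middle one

m2 = sum(m * m for m in meds) / n_sims  # estimate of E[median^2]
```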
Expected value of $X^{-1}$, $X$ being a noncentral $\chi^2$. Cannot understand a step of an equation in a paper
Yes, $EY^{−r}$ stands for $E[Y^{−r}]$. (I dislike not making it explicit because it leaves too many opportunities for misunderstandings and errors.) With respect to the later part, consider: $Y=\frac{Y}{E(Y)}\cdot E(Y)= E(Y)\cdot [\frac{Y}{E(Y)}-1+1]= E(Y)\cdot [\frac{Y-E(Y)}{E(Y)}+1]$ Therefore $EY^{-r} = [E(Y)]^{-r}...
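Continuing the displayed identity: raising to the power $-r$ and taking expectations, with a formal binomial-series expansion of $(1+u)^{-r}$ for $u = \frac{Y - E(Y)}{E(Y)}$ (which needs justification for convergence):

$$E(Y^{-r}) = [E(Y)]^{-r}\, E\!\left[\left(1 + \frac{Y - E(Y)}{E(Y)}\right)^{-r}\right] \approx [E(Y)]^{-r}\left(1 + \frac{r(r+1)}{2}\,\frac{\operatorname{Var}(Y)}{[E(Y)]^2} + \cdots\right),$$

where the first-order term vanishes because $E[Y - E(Y)] = 0$.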
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning
Let me use Hinton's own writing to answer this question: The CD learning procedure is based on ignoring derivatives that come from later steps in the Markov chain (Hinton, Osindero and Teh, 2006), so it tends to approximate maximum likelihood learning better when the mixing is fast. The ignored derivatives are then sm...
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning
Thank you very much for your effort, @fnl! Although I couldn't follow all of your points (probably because I'm quite a beginner), your answer gave me some clarity. Could you point out the problem again in terms of energy? I came across the following equation several times and one drawback of the ML method is that the d...
Why is chi-squared / z-test used for a/b testing in marketing?
A typical A/B test in marketing is fundamentally a test of equal proportions, and there are several ways to perform this test. In a marketing campaign, a certain number of people are contacted or exposed to an impression, and of those a certain number will "convert," which often means purchase something, but can be som...
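In fact, for a 2×2 table the pooled two-proportion z test and the Pearson chi-squared test are the same test: $z^2 = \chi^2$. A sketch with made-up counts:

```python
from math import sqrt

# Hypothetical A/B counts: conversions out of contacts per arm.
n1, c1 = 1000, 120   # arm A
n2, c2 = 1000, 150   # arm B

p1, p2 = c1 / n1, c2 / n2
p = (c1 + c2) / (n1 + n2)                      # pooled proportion
z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

# Pearson chi-squared on the 2x2 table (no continuity correction)
obs = [[c1, n1 - c1], [c2, n2 - c2]]
row = [sum(r) for r in obs]
col = [obs[0][j] + obs[1][j] for j in range(2)]
total = n1 + n2
chi2 = sum((obs[i][j] - row[i] * col[j] / total) ** 2
           / (row[i] * col[j] / total)
           for i in range(2) for j in range(2))
# z**2 equals chi2, so both give the same two-sided p-value.
```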
Why is chi-squared / z-test used for a/b testing in marketing?
Totally marketing-naive, but in general you're testing if the difference between A and B can be attributed to random chance or not, i.e. is the difference significant? A chi-squared test tests whether a ratio is significantly different (e.g. ratio of visitors that click on a link) and a z-test tests whether means are s...
Estimation in STAN - help modelling a multinomial
You are doing the right thing. According to the Stan User Manual, the multinomial distribution figures out what N, the total count, is by calculating the sum of y. In your case, it will know that there were 7 subjects in the first row by calculating 0 + 1 + 6. Stan can't do this for the binomial distribution, since the...
During oversampling of rare events, why are the beta coefficients of the independent variables not affected, but only the intercept?
Let $X$ and $Y$ be two binary random variables, e.g. exposure and disease status. Here is the logistic model: $\operatorname{logit}(\pi(X=x)) = \alpha + \beta x$ where $\pi(X=x)=P(Y=1|X=x)$. Then $\alpha = \operatorname{logit}(\pi(X=0)) = \log \frac{\pi(X=0)}{1-\pi(X=0)}$, so $\alpha$ is simply the log odds for the unexposed, $X=0$. But why is t...
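To finish the argument this answer is building: if cases ($Y=1$) are retained with probability $\tau_1$ and controls with probability $\tau_0$ (symbols mine), Bayes' rule multiplies the odds by a constant factor:

$$\frac{P(Y=1 \mid x, \text{sampled})}{P(Y=0 \mid x, \text{sampled})} = \frac{\tau_1}{\tau_0}\cdot\frac{P(Y=1 \mid x)}{P(Y=0 \mid x)},$$

so the logit of the oversampled data is $\left(\alpha + \log\frac{\tau_1}{\tau_0}\right) + \beta x$: the slope $\beta$ is untouched and only the intercept absorbs the sampling ratio.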
Distance between discrete histograms
This is a partial answer. I don't give a solution, but explanations of why it does not work for some distances. When using a distance between two distributions it is important to distinguish between two kinds of distances. Let's take a simple example. You have a range of values $\{0,1,...,100\}$. Then consider the distributi...
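The distinction can be made concrete: a bin-wise distance (total variation) cannot tell a shift of 1 bin from a shift of 50 bins, while a ground-metric distance (1-Wasserstein) can. A self-contained sketch over the range $\{0,1,...,100\}$:

```python
# Three histograms over bins 0..100: a point mass at 0, at 1, at 50.
def delta(pos, size=101):
    h = [0.0] * size
    h[pos] = 1.0
    return h

a, b, c = delta(0), delta(1), delta(50)

def total_variation(p, q):
    """Bin-wise distance: only compares mass in matching bins."""
    return 0.5 * sum(abs(x - y) for x, y in zip(p, q))

def wasserstein1(p, q):
    """1-Wasserstein on the line, via cumulative distributions:
    it accounts for HOW FAR mass has to move between bins."""
    d, cp, cq = 0.0, 0.0, 0.0
    for x, y in zip(p, q):
        cp += x
        cq += y
        d += abs(cp - cq)
    return d
```

Total variation sees both shifts as maximally far apart (distance 1), while Wasserstein distinguishes them (1 vs. 50).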
Notation for median in a formula
As Wikipedia says, there is no standard notation, so if it is clear in the text you can use any notation you want. For example, if you state precisely that for any set of integers $A$, $\operatorname{med}(A)$ represents the median of the set $A$, then you can use this notation as in your example.
Utility of the Bayesian Cramer-Rao Bound (van Trees inequality)
I would wager that this has a lot to do with the fact that many Bayesian models do not have tractable posterior distributions, so the van Trees inequality is likely to be of little use. Additionally, a major thrust of Bayesian research over the last 30 years or so has been on Markov chain Monte Carlo (MCMC) methods....
49,128
How to understand / calculate FLOPs of the neural network model?
input_shape = (3,300,300)  # Format: (channels, rows, cols)
conv_filter = (64,3,3,3)   # Format: (num_filters, channels, rows, cols)
stride = 1
padding = 1
activation = 'relu'
n = conv_filter[1] * conv_filter[2] * conv_filter[3]  # vector_length
flops_per_instance = n + 1  # general definition for number of flops (n: mu...
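A hedged completion of the snippet above: the "n + 1" per-output count (n multiplications plus one extra operation for the bias/accumulation) is one common convention, and exact FLOP accounting varies between papers, so treat the totals as a sketch rather than a canonical figure.

```python
# Sketch: FLOPs of one 3x3 conv layer with 64 filters on a 3x300x300 input.
input_shape = (3, 300, 300)   # (channels, rows, cols)
conv_filter = (64, 3, 3, 3)   # (num_filters, channels, rows, cols)
stride, padding = 1, 1

n = conv_filter[1] * conv_filter[2] * conv_filter[3]  # 27 multiplies per output
flops_per_instance = n + 1                            # + 1 for the bias add

# Output spatial size with "same"-style padding of 1
out_rows = (input_shape[1] - conv_filter[2] + 2 * padding) // stride + 1
out_cols = (input_shape[2] - conv_filter[3] + 2 * padding) // stride + 1
instances_per_filter = out_rows * out_cols            # 300 * 300 output positions

flops_per_filter = instances_per_filter * flops_per_instance
total_conv_flops = flops_per_filter * conv_filter[0]  # all 64 filters
total_relu_flops = out_rows * out_cols * conv_filter[0]  # one comparison per output

print(total_conv_flops + total_relu_flops)  # 167040000
```

The same loop-free arithmetic extends layer by layer; summing over all layers gives the model's forward-pass FLOP estimate.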
49,129
Prediction on individual cases in survival analysis
There isn't anything unique about survival analysis that prevents individual prediction. Just like other regression techniques, you can make individual predictions. In fact, survival analysis often gives you something better: the full distribution of the duration! Let me explain. Linear regression gives you an estimat...
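To illustrate the "full distribution of the duration" point, here is a toy parametric model of my own (not from the answer): an exponential survival model whose constant hazard depends on one covariate through a log-link. For a single individual you can read off the survival probability at any time, the median duration, or any other summary of the distribution.

```python
# Toy individual prediction (assumptions mine): exponential survival model.
import math

def survival_fn(t, hazard):
    """S(t) = P(T > t) for an exponential model with constant hazard."""
    return math.exp(-hazard * t)

def median_survival(hazard):
    """Solve S(t) = 0.5 for t."""
    return math.log(2) / hazard

# Hypothetical individual: hazard depends on a covariate via a log-link.
beta0, beta1 = -3.0, 0.5      # made-up coefficients
x = 2.0                       # this person's covariate value
hazard = math.exp(beta0 + beta1 * x)

print(survival_fn(12.0, hazard))  # chance this person survives past t = 12
print(median_survival(hazard))    # their predicted median duration
```

With a fitted model (Cox, Weibull, ...) the coefficients come from data instead of being made up, but the individual-level prediction step looks just like this.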
49,130
Prediction on individual cases in survival analysis
As far as I know, individual prediction is a whole other type of analysis. You can't just simply predict for an individual, as you have to take into account all the different predictive determinants/characteristics of that individual case. So you'll have to construct a risk model for individualized prediction (which yo...
49,131
Convolutional neural networks backpropagation
A convolutional network layer is just a fully-connected layer where two things are true: certain connections are removed, meaning their weights are forced to be a constant zero, so you can just ignore these connections/weights; and each of the non-zero weights is shared across multiple connections, i.e. across multiple pairs o...
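The equivalence can be checked in a few lines. The sketch below (my own construction, 1-D for brevity) builds the fully-connected weight matrix that corresponds to a small convolution: mostly zeros for the "removed" connections, with the same kernel values repeated along each row for the shared weights.

```python
# Check: a 1-D "valid" convolution equals a masked, weight-shared FC layer.
kernel = [0.5, -1.0, 2.0]
x = [1.0, 2.0, 3.0, 4.0, 5.0]
out_len = len(x) - len(kernel) + 1  # valid convolution, stride 1

# Direct (cross-correlation style) convolution
conv = [sum(kernel[j] * x[i + j] for j in range(len(kernel)))
        for i in range(out_len)]

# Equivalent fully-connected weight matrix: zeros + shared kernel values
W = [[0.0] * len(x) for _ in range(out_len)]
for i in range(out_len):
    for j in range(len(kernel)):
        W[i][i + j] = kernel[j]

fc = [sum(W[i][k] * x[k] for k in range(len(x))) for i in range(out_len)]
print(conv, fc)  # identical outputs
```

Since the forward passes are identical, backpropagation through the conv layer is just backpropagation through this sparse FC layer, with the gradients of the shared weights summed over all the positions where each weight appears.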
49,132
Convolutional neural networks backpropagation
In the answer you reference, the goal is to calculate $w_k$, one of the weights in one filter in a (convolutional) neural network. The update formula for any parameter in a neural network, using gradient descent, is quite simple if you just express it as a derivative of the cost function $J$: $\Delta w_k = -\eta \fra...
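The update formula can be verified numerically. Here is a one-weight toy version (entirely my own construction): compute $\partial J/\partial w$ via the chain rule, compare it with a finite-difference approximation, then take one step $\Delta w = -\eta\, \partial J/\partial w$.

```python
# Toy gradient-descent step for a single weight (assumptions mine).

def loss(w, x, t):
    y = w * x                  # a one-weight "network"
    return 0.5 * (t - y) ** 2

def grad(w, x, t):
    y = w * x
    return (y - t) * x         # chain rule: dJ/dy * dy/dw

w, x, t, eta = 0.3, 2.0, 1.0, 0.1
eps = 1e-6
numeric = (loss(w + eps, x, t) - loss(w - eps, x, t)) / (2 * eps)
print(grad(w, x, t), numeric)             # the two gradients agree closely

w_new = w - eta * grad(w, x, t)           # one gradient-descent step
print(loss(w_new, x, t) < loss(w, x, t))  # True: the step reduced the loss
```

In a real convolutional layer the only extra wrinkle is that each shared weight $w_k$ collects gradient contributions from every spatial position where it is used.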
49,133
Deal with percentage data
This kind of data is known as compositional data, and you might find this interesting summary of transformation techniques to be helpful. You designate one of the markers as the baseline and it won't be used directly in any analysis, though you can back out its results in the end.
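One of the standard transformations for compositional data is the additive log-ratio (alr), which designates one component as the baseline exactly as described. A minimal sketch (function names and the three-marker example are mine, not from the linked summary):

```python
# Additive log-ratio transform for compositional data (sketch, mine).
import math

def alr(composition):
    """Map a composition (positive parts summing to 1) to unconstrained
    coordinates, using the last part as the baseline."""
    baseline = composition[-1]
    return [math.log(p / baseline) for p in composition[:-1]]

def alr_inverse(coords):
    """Back out the full composition, including the baseline part."""
    expanded = [math.exp(c) for c in coords] + [1.0]
    total = sum(expanded)
    return [e / total for e in expanded]

markers = [0.2, 0.5, 0.3]   # e.g. percentages of three markers
coords = alr(markers)       # two unconstrained numbers for analysis
print(coords)
print(alr_inverse(coords))  # recovers [0.2, 0.5, 0.3] (up to rounding)
```

You analyse the unconstrained `coords` with ordinary multivariate methods, then back-transform to recover results for all components, including the baseline.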
49,134
Iteratively updating the decomposition of a covariance matrix
I think you can achieve part of what you want by using an incremental SVD and/or an online PCA algorithm. Given a known decomposition we update it to take into account a new data-point. In terms of theoretical background, I would suggest you look at computer vision literature. They have nice application papers that ex...
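Not an incremental SVD itself, but a sketch (mine) of the simpler building block those methods rest on: updating the mean and covariance one data-point at a time, in the style of Welford's algorithm, so nothing has to be recomputed from the raw data when a new point arrives.

```python
# Online mean/covariance update, one data-point at a time (sketch, mine).

def update(count, mean, cov_sum, x):
    """One online step; cov_sum accumulates sums of outer products of
    deviations (divide by count - 1 at the end for the sample covariance)."""
    count += 1
    delta = [xi - mi for xi, mi in zip(x, mean)]        # deviation from old mean
    mean = [mi + d / count for mi, d in zip(mean, delta)]
    delta2 = [xi - mi for xi, mi in zip(x, mean)]       # deviation from new mean
    for i in range(len(x)):
        for j in range(len(x)):
            cov_sum[i][j] += delta[i] * delta2[j]
    return count, mean, cov_sum

data = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
n, mean, cov_sum = 0, [0.0, 0.0], [[0.0, 0.0], [0.0, 0.0]]
for x in data:
    n, mean, cov_sum = update(n, mean, cov_sum, x)

cov = [[c / (n - 1) for c in row] for row in cov_sum]
print(mean, cov)  # matches the batch mean and covariance of `data`
```

An incremental SVD/PCA then maintains the eigendecomposition of this running covariance (or of the data matrix directly) under the same one-point-at-a-time updates.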
49,135
How does clogit() (in R) handle incomplete strata?
The residual will be zero for the remaining observation in that stratum. There's no need to remove it, since it doesn't provide any information if there were only two observations in the stratum.
> library(survival)
> data(retinopathy)
> head(retinopathy)
  id laser eye age type trt futime status risk
1  5 argon...
49,136
How does clogit() (in R) handle incomplete strata?
In principle, strata with missing values on the control and/or case observation should be removed from the analysis. I haven't used the "clogit" command recently but I am pretty sure this is done automatically, or at least an error/warning message is displayed during estimation. Otherwise you could remove incomplete strata ...
49,137
Backpropagation algorithm in neural networks (NN) with logistic activation function
Yes you got it right. Just to add (sorry for being nitpicky :), when you write $\frac{\partial E}{\partial y_i}$ it is implied that the output is a vector, so maybe writing $$\frac{\partial\frac{1}{2}\sum_i(t_i - y_i)^2}{\partial y_i} = \frac{\partial\frac{1}{2}(t_i - y_i)^2}{\partial y_i} = y_i - t_i$$ would be more ...
49,138
Uniform distribution of correlation coefficients in correlation matrix
It's not only possible, it's easy to create any distribution $F$ whatsoever supported on the interval $[-1/(N-2), 1]$, provided only that $K \le N-2$. Here's one way. It creates datasets in which all the variables have the same correlation with each other. Let $\rho$ be a random variable with distribution $F$. Defin...
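As a numerical companion, here is one standard construction (my illustration; the answer's actual recipe is more general and lets $\rho$ come from any distribution $F$): variables that share a common latent factor have identical pairwise correlation $\rho$.

```python
# Equicorrelated variables via a shared latent factor (sketch, mine).
import math, random

random.seed(0)
rho, K, N = 0.6, 3, 20000

data = []
for _ in range(N):
    z = random.gauss(0, 1)                      # shared factor
    row = [math.sqrt(rho) * z + math.sqrt(1 - rho) * random.gauss(0, 1)
           for _ in range(K)]
    data.append(row)

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

cols = list(zip(*data))
print(round(pearson(cols[0], cols[1]), 2))  # close to 0.6
print(round(pearson(cols[1], cols[2]), 2))  # close to 0.6
```

Drawing $\rho$ itself from the target distribution $F$ before generating each dataset then yields correlation coefficients distributed according to $F$, which is the construction the answer describes.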
49,139
Terminology for "time-series of events"
The term intermittent comes to mind, reflecting a measure of an activity that takes place but not at fixed intervals, such as the quantity of gas purchases for your auto.
49,140
Terminology for "time-series of events"
Unevenly spaced time series is a term that is used, while most statistics theory is about evenly spaced time series. In the comments a point process is also proposed, but that seems to be a special case where only the times themselves are observed and of interest. So if the observations are occurrence times of earthqu...
49,141
Multinomial logistic regression with class probability as target variable
The polytomous extension of the beta regression is Dirichlet regression. For beta you have just one proportion $y$ which you could also see as a composition of $(y_1, 1 - y_1)$. More generally, one could also have $(y_1, y_2, \dots, y_{k-1}, 1 - \sum_{j = 1}^{k-1} y_j)$ with the additional restriction that $0 < y_j < 1...
49,142
FGLS and time fixed effects
Yes including both violates certain statistical properties. The pggls documentation indirectly states exactly that: Conversely, this structure is assumed identical across groups and thus general FGLS estimation is inefficient under groupwise heteroskedasticity. Note also that this method requires estimation of T...
49,143
Probability of throwing n different numbers in m throws of a die
Let: $T_{n, m}$ be the event "exactly $n$ different numbers in $m$ throws of a die"; $A$ be the event "in the $m^{th}$ throw, a number that has been seen before appears"; $D$ be the number of sides on your die. We assume that the die is fair. The objective is to find $P(T_{n, m})$. Then by the law of total prob...
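Conditioning on the event $A$ as above gives a recursion that is easy to compute. The closed form past the truncation is my reconstruction: with probability $n/D$ the $m$-th throw repeats one of the $n$ numbers already seen, and with probability $(D-n+1)/D$ it brings the count up from $n-1$ to $n$.

```python
# Recursion for P(exactly n different numbers in m throws of a fair
# D-sided die); the recursion itself is my reconstruction of the
# law-of-total-probability step described above.

def prob_distinct(n, m, D=6):
    if m == 0:
        return 1.0 if n == 0 else 0.0
    if n <= 0 or n > min(m, D):
        return 0.0
    repeat = prob_distinct(n, m - 1, D) * (n / D)                 # event A
    fresh  = prob_distinct(n - 1, m - 1, D) * ((D - n + 1) / D)   # not A
    return repeat + fresh

print(prob_distinct(1, 2))   # 1/6: both throws show the same face
print(prob_distinct(2, 2))   # 5/6
print(sum(prob_distinct(n, 5) for n in range(1, 7)))  # 1.0 (sanity check)
```

The base cases (zero throws, impossible counts) pin the recursion down, and summing over $n$ for fixed $m$ returning 1 is a quick correctness check.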
49,144
What does the W value in Wilcoxon test mean
The Wilcoxon test does not test for equality of means, rather it tests $$H_0: P(X_a > X_b) = 0.5$$ namely that a randomly drawn observation of group a has 50% chance of being larger than a randomly drawn observation from group b. Only if you see location-shift (i.e. distributions in both groups have the same shape but ...
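A from-scratch sketch (implementation and example data mine; note that software differs in conventions, e.g. R's `wilcox.test` reports the Mann-Whitney form) of one common definition of W, together with the estimate of $P(X_a > X_b)$ whose null value is 0.5:

```python
# Rank-sum statistic W and the P(X_a > X_b) estimate it corresponds to.

def rank_sum_w(a, b):
    """Sum of the ranks of group a in the pooled, sorted sample
    (midranks for ties)."""
    pooled = sorted(a + b)
    def midrank(v):
        lo = pooled.index(v) + 1
        hi = lo + pooled.count(v) - 1
        return (lo + hi) / 2
    return sum(midrank(v) for v in a)

def prob_a_greater(a, b):
    """Proportion of pairs (x_a, x_b) with x_a > x_b (ties count half):
    the quantity whose null value is 0.5 in the H0 above."""
    wins = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    return wins / (len(a) * len(b))

a = [1.1, 2.5, 3.0, 4.2]
b = [0.9, 1.8, 2.0, 5.0]
print(rank_sum_w(a, b))      # 20.0
print(prob_a_greater(a, b))  # 0.625
```

The p-value then comes from comparing W to its null distribution over all rank assignments; the point here is only what the statistic itself measures.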
49,145
Self Play in Reinforcement Learning
I'm a chess player, so I'll use chess in my answer. (a) and (b) are not identical. In (a), you have two agents playing against each other. Their underlying models might not be comparable. Even if they had the same model, their parameters would most likely not converge simultaneously. This is like matching two differe...
49,146
What is the loss function in neural networks?
The backpropagated deltas are derived via the chain rule of calculus. Notice that, although they are valid over all inputs, in the weight update step they are multiplied with the actual activation of that layer. For the loss function we usually use MSE for linear layers or cross-entropy for softmax layers such that the...
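Concrete versions of the two losses mentioned (a sketch with details of my own choosing): MSE for regression-style linear outputs, and cross-entropy for probability outputs such as those of a softmax layer.

```python
# MSE and cross-entropy losses (sketch, mine).
import math

def mse(targets, outputs):
    return sum((t - y) ** 2 for t, y in zip(targets, outputs)) / len(targets)

def cross_entropy(targets, outputs):
    """targets one-hot, outputs a probability vector (e.g. from a softmax)."""
    return -sum(t * math.log(y) for t, y in zip(targets, outputs) if t > 0)

print(mse([1.0, 0.0], [0.9, 0.2]))                # small: prediction is close
print(cross_entropy([0, 1, 0], [0.1, 0.8, 0.1]))  # -log(0.8), about 0.223
```

With these pairings (MSE + linear, cross-entropy + softmax) the output-layer delta simplifies to `output - target`, which is why those combinations are the usual defaults.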
49,147
What is the loss function in neural networks?
You ask for a simple explanation of how a neural network should train. The cost function used in a network depends on what you want to do and sometimes on the network architecture. For a regression problem, the most common is least-squares. For classification, cross-entropy is popular. Imagine you want your network to output 1...
49,148
Should MCMC posterior be used as my new prior?
One thing to consider would be to re-formulate your model to be hierarchical with some dependence structure that allows you to borrow strength across experiments for parameter estimation. There are many ways to do this, and how you create these dependence structures is problem-specific. But to give you an idea, a simple...
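For background on the question's premise (not the hierarchical approach recommended above), here is a conjugate toy case of my own: with a Beta prior and binomial data, feeding experiment 1's posterior in as experiment 2's prior gives exactly the pooled-data posterior. The difficulty with MCMC output is that the posterior is only available as samples, not in such a closed form.

```python
# Sequential Beta-Binomial updating equals pooled analysis (toy, mine).

def beta_update(alpha, beta, successes, failures):
    return alpha + successes, beta + failures

prior = (1, 1)                       # Beta(1, 1), flat
post1 = beta_update(*prior, 7, 3)    # experiment 1: 7 successes, 3 failures
post2 = beta_update(*post1, 4, 6)    # experiment 2, with post1 as its prior

pooled = beta_update(*prior, 11, 9)  # both experiments analysed at once
print(post2, pooled)                 # (12, 10) (12, 10)
```

When conjugacy is unavailable, the sequential-updating idea has to be approximated (e.g. by fitting a density to the MCMC draws), which is part of why the hierarchical reformulation is often the cleaner route.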
49,149
Is $\min(f(x)g(y),f(y)g(x))$ a positive definite kernel?
Yes, it is. The proof is as follows, $$K(x, y) = \min(f(x)g(y), f(y)g(x))=\min(\frac{f(x)}{g(x)}, \frac{f(y)}{g(y)})g(x)g(y)$$ $$\min(\frac{f(x)}{g(x)}, \frac{f(y)}{g(y)})=\int\mathbb{1}_{[0, \frac{f(x)}{g(x)}]} \mathbb{1}_{[0, \frac{f(y)}{g(y)}]} = \langle \mathbb{1}_{[0, \frac{f(x)}{g(x)}]}, \mathbb{1}_{[0, \frac{f...
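A numerical spot-check of the claim (code and the particular positive functions are my own choices): for sample points, the Gram matrix of $K(x,y)=\min(f(x)g(y), f(y)g(x))$ should make every quadratic form $z^\top K z$ non-negative.

```python
# Spot-check positive definiteness via random quadratic forms (sketch, mine).
import math, random

f = lambda x: math.exp(x)       # arbitrary positive function (my choice)
g = lambda x: 1.0 + x * x       # another arbitrary positive function

xs = [0.3, 1.0, 2.5, 4.0]
K = [[min(f(a) * g(b), f(b) * g(a)) for b in xs] for a in xs]

random.seed(1)
worst = min(
    sum(z[i] * K[i][j] * z[j] for i in range(len(xs)) for j in range(len(xs)))
    for z in ([random.uniform(-1, 1) for _ in xs] for _ in range(1000))
)
print(worst >= -1e-9)           # True: no negative quadratic form found
```

This is evidence, not a proof; the proof above is what establishes positive definiteness for all inputs, via the integral representation of the min.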
49,150
What information can I gain from the fourier transform?
The FFT is used to analyse periodic data. You use the Short Time Fourier Transform (basically the FT over small segments of the time series) to analyse how the frequencies change over time (e.g. in music). Your plots are too low detail to zoom in, but I cannot imagine that a keystroke has any particular repetitive sign...
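As a minimal illustration of what the transform reports (code and test signal are mine), a from-scratch DFT picks out the dominant frequency of a sinusoid:

```python
# Minimal DFT: locate the dominant frequency bin (sketch, mine).
import cmath, math

def dft_magnitudes(signal):
    N = len(signal)
    return [abs(sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]          # keep the non-redundant half

N = 64
signal = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]  # 5 cycles

mags = dft_magnitudes(signal)
print(mags.index(max(mags)))  # 5 -> the dominant frequency bin
```

An FFT computes exactly these magnitudes, just in $O(N \log N)$ instead of $O(N^2)$; for non-stationary signals the STFT applies this per short window.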
49,151
What information can I gain from the fourier transform?
Informally speaking, the frequency domain tells us "how fast things change" and what the different "components" of your data are. You mentioned audio data, which is a perfect example for understanding the frequency domain. The low frequency components can be the sound from a bass guitar, and the high frequency components can be sound fr...
49,152
What information can I gain from the fourier transform?
FFT will give you an idea of the dominant frequencies that are associated with the tap events you mention above. My guess is that you would see different spectral 'signatures' for each tap event (or for small sets of events like consecutive taps of the same number/button).
49,153
Autocorrelation and GLS
GLS is the model that takes autocorrelated residuals into account, while Cochrane-Orcutt is one of the many procedures to estimate such a GLS model. Strictly speaking, the GLS model requires the true value of $\rho$ in $\varepsilon_t = \rho\varepsilon_{t-1} + w_t$ to be known. However, apparently $\rho$ is unobservable. ...
49,154
Autocorrelation and GLS
The solution is known generally as a Transfer Function sometimes as a Dynamic Regression How to forecast a time series which is dependent on different time series? . You might want to also look at ARIMAX model's exogenous components? and An example of autocorrelation in residuals causing misinterpretation.
49,155
Mathematical Principle behind ANOVA?
I would encourage OP to conceptually separate the mathematical and statistical principles of ANOVA. Mathematical Principles of ANOVA Consider variable $Y_k, \; k = 1, \ldots, N,$ with sample variance $s^2 = \sum_{k = 1}^N (Y_k - \bar{Y}_{\centerdot})^2.$ Now consider a grouping index $i = 1, \ldots, I$ with no particu...
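The variance decomposition described here can be checked numerically. A small numpy sketch with made-up group data, with the resulting F ratio compared against scipy's one-way ANOVA:

```python
import numpy as np
from scipy import stats

# Three made-up groups (the grouping index i) with their observations
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0, 10.0]),
          np.array([1.0, 2.0, 3.0])]

y = np.concatenate(groups)          # N = 10 observations in total
grand_mean = y.mean()

# Total sum of squares around the grand mean
ss_total = ((y - grand_mean) ** 2).sum()

# Between-group SS: group sizes times squared deviations of group means
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Within-group SS: deviations around each group's own mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# F ratio with I - 1 = 2 and N - I = 7 degrees of freedom
f_manual = (ss_between / 2) / (ss_within / 7)
f_scipy, _ = stats.f_oneway(*groups)
```

The key identity is that `ss_total == ss_between + ss_within` exactly, which is the algebraic heart of the ANOVA table.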
49,156
Usage of tensor notation in statistics
The most obvious and straightforward application of tensors (that I know of) in statistics is computing high-order moments of a multivariate distribution. For example, consider a random vector $x\sim F$, where $F$ is some $p$-dimensional distribution. Given some data matrix $X \in \mathbb{R}^{n\times p}$ where $n$ is t...
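A minimal numpy sketch of the third-moment computation (data and dimensions invented; `np.einsum` performs the tensor contraction over the sample index):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.normal(size=(n, p))
Xc = X - X.mean(axis=0)          # center the columns

# Third central moment tensor: T[i,j,k] = mean over rows of x_i * x_j * x_k
T = np.einsum('ni,nj,nk->ijk', Xc, Xc, Xc) / n

# Check one entry against a direct elementwise computation
manual = np.mean(Xc[:, 0] * Xc[:, 1] * Xc[:, 2])
```

The resulting `p x p x p` tensor is symmetric under any permutation of its indices, which the direct check and a transpose comparison confirm.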
49,157
Standard equivalent of OLS for minimizing the $L_1$ norm
We know from intuition of the necessity of the Bessel correction that $$\arg\min_x \sum_{j=1}^n (x_j - x)^2 = \bar{x},$$ the sample mean. It similarly turns out that $$\arg\min_x \sum_{j=1}^n |x_j - x| = \mathrm{med}(x_1, \dots, x_j),$$ the sample median. Commonly in regression, we minimize the squared error, giving us...
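The two argmin identities are easy to verify by brute force over a grid of candidate locations (a toy numpy check with invented data, not a regression fit):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 7.0, 50.0])   # note the outlier at 50

# L1 loss over a fine grid of candidate locations; its minimiser is the median
grid = np.linspace(x.min(), x.max(), 100001)
l1_loss = np.abs(x[:, None] - grid[None, :]).sum(axis=0)
best_l1 = grid[np.argmin(l1_loss)]

# L2 loss over the same grid; its minimiser is the mean
l2_loss = ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)
best_l2 = grid[np.argmin(l2_loss)]
```

The outlier drags the L2 minimiser (the mean, 12.6) far from the bulk of the data, while the L1 minimiser (the median, 3.0) stays put, which is the usual motivation for least-absolute-deviation regression.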
49,158
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft?
I tried to implement this at one point in an existing rf framework and found it difficult to get good performance in terms of training time. The standard CART algorithm used in random forests and xgboost works by recursive partitioning of the data. In this case the data (or an array of indexes into it) can be reordered in me...
49,159
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft?
Microsoft's "Decision Jungle" shares its name with the "Random Jungle" tool. The "Random Jungle" software is an implementation of "Random Forest" and can run on many servers in a parallel manner. The rjungle name refers to this "Random Jungle" tool. Source code: https://sourceforge.net/p/randomjungle/code/HEAD/tree/trunk/...
49,160
Why are hidden Markov models (HMM) also called mixture models?
Mixture models are generic probability density functions which are weighted sums of independent component densities, adding up to a total density function with area 1, as is required of any probability density function. Consider, for example, that two people are cutting pencils on an assembly line. The first c...
49,161
Why are hidden Markov models (HMM) also called mixture models?
(This answer would be better as a comment to build on @Eskapp's comment) I think it is important to give the general and simple formula $$p(Y) = \sum_{X} p(X,Y) = \sum_{X} p(X)p(Y|X)$$ (also appearing on Wikipedia). This clearly shows that in HMM, it is the observation process ($Y$) which is modeled as a mixture. Howev...
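The formula can be made concrete for a two-state model with Gaussian emissions (all parameter values below are invented for the illustration):

```python
import numpy as np
from scipy.stats import norm

# Probabilities p(X) of two hypothetical hidden states
p_x = np.array([0.3, 0.7])
means = np.array([-1.0, 2.0])
sds = np.array([0.5, 1.0])

def marginal_pdf(y):
    """p(y) = sum_x p(x) p(y|x): a weighted sum of the emission densities."""
    return np.sum(p_x * norm.pdf(y, loc=means, scale=sds))

# Sanity check: the mixture density integrates to 1 (Riemann sum on a fine grid)
grid = np.linspace(-10.0, 10.0, 20001)
step = grid[1] - grid[0]
dens = (p_x[:, None] * norm.pdf(grid[None, :], means[:, None], sds[:, None])).sum(axis=0)
area = dens.sum() * step
```

This is exactly the sense in which the observation process of an HMM is a mixture: at each time step, `y` is drawn from one of the emission densities, weighted by the (marginal) state probabilities.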
49,162
Relation between: Likelihood, conditional probability and failure rate
My question is about the possibility of showing equivalence between the hazard rate, the conditional probability (of failure) and a likelihood function. TLDR; There is no such equivalence. Likelihood is defined as $$ \mathcal{L}(\theta \mid x_1,\dots,x_n) = \prod_{i=1}^n f_\theta(x_i) $$ so it is a product of pro...
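The product definition is straightforward to check numerically against the summed log-density (a toy example with a standard normal; the data values are invented):

```python
import numpy as np
from scipy.stats import norm

x = np.array([0.2, -1.0, 0.5, 1.3])   # observed sample
mu, sigma = 0.0, 1.0                  # parameters theta being evaluated

# Likelihood: the product of density values at the observed points
lik = np.prod(norm.pdf(x, mu, sigma))

# Equivalently, the exponential of the summed log-densities
loglik = norm.logpdf(x, mu, sigma).sum()
```

Note that each factor is a density value, not a probability, which is one reason the likelihood cannot in general be equated with a conditional probability such as a hazard.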
49,163
What are the advantages of normalizing flow over VAEs with deep latent gaussian models for inference?
So the answer lies in the PhD thesis of Durk Kingma. In his thesis he has mentioned that The framework of normalizing flows [Rezende and Mohamed, 2015] provides an attractive approach for parameterizing flexible approximate posterior distributions in the VAE framework The term "flexible approximation" is useful ...
49,164
"Branching Stick" Regression
I don't know of a fully worked out solution. If the data really look like your example, you could try doing cluster analysis first, then separate regressions in each cluster. You would probably want to try several clusters. Another possibility, if the data set is not too large, is to try to do pairs of regressions o...
49,165
"Branching Stick" Regression
The problem with your "branching stick" regression is that it is very difficult to parametrize, as it is not easily described by a threshold and indicator function as in the segmented regression case. A first way to relax the segmented regression is by not requiring that the lines join; this could give you a first ide...
49,166
"Branching Stick" Regression
The first case mentioned in the question can be solved with a very simple method (not iterative, no initial guess) thanks to the piecewise linear regression given on page 12 of the paper: https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf The result $\quad\begin{cases} y=p_1x+q_1 &...
49,167
Forecast time-series with two seasonal patterns
You have a multiple seasonal time series with seasonalities of length $12$ and $12\times 24=288$. This is the sort of data for which the TBATS method was designed. usage_train_ts <- msts(usage_train, seasonal.periods=c(12,288)) fit <- tbats(usage_train_ts) fc <- forecast(fit) plot(fc) Details of the TBATS model are g...
49,168
Forecast time-series with two seasonal patterns
Please review Robust time-series regression for outlier detection as that problem/question is similar to yours in that there are two seasons in play. You have 12 readings per hour and 24 hours per day for three days or a total of 864 values. What I might suggest is that you build a model for each of the 24 hours (step ...
49,169
Apparent contradiction between t-test and 1-way ANOVA
Both Student's t-test and ANOVA work by evaluating the observed differences between means relative to the observed variation. In this case the ANOVA uses an average variation of all three groups, but the t-test uses only two groups. The two groups tested with the t-test have much lower variation than the third group an...
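This effect is easy to reproduce with scipy. The group values below are invented so that two tight groups clearly differ while a third group is very noisy:

```python
from scipy import stats

# Two tight groups that clearly differ, plus a very noisy third group
g1 = [1.0, 1.1, 0.9, 1.0]
g2 = [2.0, 2.1, 1.9, 2.0]
g3 = [0.0, 5.0, -3.0, 8.0]

# The pairwise t-test uses only the (small) variation within g1 and g2
t_stat, p_t = stats.ttest_ind(g1, g2)

# One-way ANOVA pools the variation of all three groups, so the noisy
# third group inflates the within-group variance estimate
f_stat, p_anova = stats.f_oneway(g1, g2, g3)
```

Here the t-test comparing g1 and g2 is highly significant while the overall ANOVA is not, even though the same two groups are involved in both, because the ANOVA's error term is averaged over all three groups.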
49,170
Apparent contradiction between t-test and 1-way ANOVA
Especially when you have only one way (therefore no interaction effects) and few groups, the ANOVA is not the only valid tool. You could do t-tests as long as you correct for multiple testing. The commonly used Tukey's tests are not really post-hoc tests for an ANOVA, but a collection of t-tests that can be performed ...
49,171
Tuning Order XGBoost
Here is a good article on the topic: Complete Guide to Parameter Tuning in XGBoost (with codes in Python) Also, some people have had good success using hyperopt for tuning hyperparameters. Amine Benhalloum provides some Python code for tuning XGBoost: https://github.com/bamine/Kaggle-stuff/tree/master/otto
49,172
Tuning Order XGBoost
param_grid = {
    'silent': [1],
    'max_depth': [4, 5, 6, 7],
    'learning_rate': [0.001, 0.01, 0.1, 0.2, 0.3],
    'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    'colsample_bylevel': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    'min_child_weight': [0.5, 1.0, ...
49,173
How does FaceNet (Google's facerecognition) handles a new image?
After the network is trained, we can throw away the loss layer. Actually the facenet (and many other networks for facial recognition) is trained for extracting features, that is to represent the image by a fixed length vector (embedding). The triplet loss basically says, the distance between feature vectors of the sam...
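Once the loss layer is discarded, recognising a new image reduces to a nearest-neighbour lookup in embedding space. A toy numpy sketch of that lookup (the embeddings, names, and threshold value here are invented for illustration; FaceNet itself produces L2-normalised 128-dimensional vectors):

```python
import numpy as np

# Toy "gallery" of known-identity embeddings (stand-ins for network outputs)
gallery = {
    'alice': np.array([0.9, 0.1, 0.0]),
    'bob':   np.array([0.0, 0.8, 0.6]),
}
gallery = {k: v / np.linalg.norm(v) for k, v in gallery.items()}

def identify(embedding, threshold=1.0):
    """Return the nearest gallery identity if within the L2 distance
    threshold, else None (an unknown face)."""
    embedding = embedding / np.linalg.norm(embedding)
    name, dist = min(((k, np.linalg.norm(embedding - v))
                      for k, v in gallery.items()),
                     key=lambda kv: kv[1])
    return name if dist < threshold else None

probe = np.array([0.85, 0.2, 0.05])   # a new image's embedding, near 'alice'
```

The triplet loss is what makes this work: it trains the network so that same-identity embeddings end up closer than the threshold and different-identity embeddings end up farther apart.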
49,174
Autoencoders' gradient when using tied weights
Following the notation in the slides, a one layer autoencoder with tied weights is given by $$o(\hat{a}(x))=o(c+W^Th(x))=o(c+W^T\sigma(b+Wx))$$ The gradient wrt $W$ according to the product rule $$\frac{\partial l}{\partial W_{ij}}=\frac{\partial l}{\partial \hat{a}_j}\frac{\partial \hat{a}_j}{\partial W_{ij}}=\frac{\p...
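The two-term product-rule gradient above can be validated against finite differences. A numpy sketch assuming a linear output layer $o$ and squared-error loss (dimensions and initial values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 3
W = rng.normal(scale=0.1, size=(h, d))   # tied weight matrix
b, c = np.zeros(h), np.zeros(d)
x = rng.normal(size=d)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss(W):
    hid = sigmoid(b + W @ x)      # encoder: h(x) = sigma(b + Wx)
    xhat = c + W.T @ hid          # decoder with tied weights: c + W^T h(x)
    return 0.5 * np.sum((xhat - x) ** 2)

# Analytic gradient: the sum of the decoder term (W appears as W^T)
# and the encoder term (W appears inside sigma), as in the product rule
hid = sigmoid(b + W @ x)
delta = (c + W.T @ hid) - x                              # dL/dxhat
grad_dec = np.outer(hid, delta)                          # decoder contribution
grad_enc = np.outer((W @ delta) * hid * (1 - hid), x)    # encoder contribution
grad = grad_dec + grad_enc

# Numerical gradient by central differences, entry by entry
eps = 1e-6
num = np.zeros_like(W)
for i in range(h):
    for j in range(d):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)
```

The match between `grad` and `num` confirms that the tied-weight gradient really is the sum of the two partial gradients, one per occurrence of $W$.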
49,175
Does $r$ overestimate true effects for small sample size datasets?
In fact the sample correlation is biased, but it's not biased upward; it's biased toward 0 (this has been known for at least a century). For example, I just did a little simulation -- in a sample of 10000 simulations of samples of size 3 where the pairs were generated from a bivariate normal population with $\rho= 0.1$...
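A simulation along the same lines in numpy (the seed and simulation count are chosen arbitrarily; the point is only that the mean sample correlation falls below the true $\rho = 0.1$, not above it):

```python
import numpy as np

rng = np.random.default_rng(42)
rho, n, n_sims = 0.1, 3, 20000
cov = np.array([[1.0, rho], [rho, 1.0]])

# Draw all simulated samples at once: shape (n_sims, n, 2)
samples = rng.multivariate_normal([0.0, 0.0], cov, size=(n_sims, n))
xs, ys = samples[:, :, 0], samples[:, :, 1]

# Vectorised sample correlation within each simulated sample of size n
xc = xs - xs.mean(axis=1, keepdims=True)
yc = ys - ys.mean(axis=1, keepdims=True)
rs = (xc * yc).sum(axis=1) / np.sqrt((xc ** 2).sum(axis=1) * (yc ** 2).sum(axis=1))

mean_r = rs.mean()   # noticeably closer to 0 than the true rho = 0.1
```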
49,176
Maximum likelihood estimation of a Poisson binomial distribution
As I see your problem, you have $K$ individuals completing $N$ trials, that result in binary outcomes (success or failure). So you are dealing with $N\times K$ random variables $X_{ij}$. You are interested in computing probabilities of success for each trial $p_i$. So the first thing to notice is that you assume in her...
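The Poisson binomial pmf itself can be computed exactly by convolving the individual Bernoulli distributions, which is useful when evaluating or maximising the likelihood numerically (a sketch; the function name is my own):

```python
import numpy as np
from scipy.stats import binom

def poisson_binomial_pmf(p):
    """PMF of the number of successes in independent Bernoulli(p_i) trials,
    built up by dynamic-programming convolution: pmf[k] is P(K = k)."""
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])
    return pmf

pmf = poisson_binomial_pmf([0.2, 0.5, 0.7])

# When all p_i are equal, the Poisson binomial reduces to the binomial
pmf_equal = poisson_binomial_pmf([0.3] * 5)
ref = binom.pmf(np.arange(6), 5, 0.3)
```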
49,177
For random forest, what's the difference between out-of-bag error and k-fold cross validation?
OOB error will give a misleading indication of performance on a time-series dataset because it will be evaluating performance on past data using future data. This does not give a good indication of the model's ability to perform on future data. Therefore, use a methodology like TimeSeriesSplit. By holding out future da...
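The behaviour of such a splitter can be sketched without the library itself; the helper below mimics an expanding-window TimeSeriesSplit (the helper name and the fold-sizing rule are my own simplification of the sklearn behaviour):

```python
def expanding_window_splits(n_samples, n_splits):
    """Hand-rolled expanding-window splitter: each fold trains only on
    indices strictly before its test window, never on future data."""
    test_size = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train_end = test_size * k + n_samples % (n_splits + 1)
        yield (list(range(train_end)),
               list(range(train_end, train_end + test_size)))

splits = list(expanding_window_splits(10, 3))
# e.g. ([0..3], [4, 5]), ([0..5], [6, 7]), ([0..7], [8, 9])
```

The key property is that every training index precedes every test index, which is exactly what OOB error does not guarantee on time-series data.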
49,178
Resampling/Interpolating monthly rates to daily rate estimates in R
If I understand the question correctly, the idea is to resample the energy-usage rates conservatively. To ensure conservative resampling, you should resample the extensive quantity ("mass" = cumulative energy used) rather than the intensive quantity ("density" = usage rate). This is very similar to how resampling a pr...
49,179
Resampling/Interpolating monthly rates to daily rate estimates in R
What I did is calculate the usage at mid month needed to balance the end month kw usage calculation using linear interpolation. This works out to be a simple formula: $kw_{m-1/2}=\frac{3}{2}kw_m-\frac{1}{2}kw_{m-1}$. That is, the mid month consumption is 3/2 of the end month consumption minus 1/2 of the prior month's u...
49,180
How can one produce many `p-values` in regression analysis?
When you regress on a factor you have an indicator (dummy) variable for each level of the factor bar one (the "baseline" category). As a result the p-values of the coefficients represent p-values for the pairwise comparisons with the baseline. Here's an example in R, a data set on weights of chicks on different feed: ...
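The same mechanism can be reproduced outside R: build the indicator columns by hand and check that the fitted coefficients are exactly the differences of group means from the baseline. A numpy sketch with invented data (the p-values in the regression output would then come from the usual t statistics on these coefficients):

```python
import numpy as np

# Made-up response values for three factor levels
y = {'a': [10.0, 12.0, 11.0], 'b': [20.0, 21.0, 19.0], 'c': [15.0, 14.0, 16.0]}

levels = ['a', 'b', 'c']            # 'a' serves as the baseline category
obs = np.concatenate([y[l] for l in levels])
labels = np.concatenate([[l] * len(y[l]) for l in levels])

# Design matrix: intercept plus one indicator per non-baseline level
X = np.column_stack([np.ones(len(obs))] +
                    [(labels == l).astype(float) for l in levels[1:]])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)

# beta[0] is the baseline group mean; beta[1], beta[2] are the
# pairwise mean differences of 'b' and 'c' against the baseline 'a'
```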
49,181
Sum of truncated Gammas
I'm not sure if the above is correct, or how to calculate the second term in the brackets. Your solution appears to be correct. And the incomplete Beta does not pose a problem ... Given: $X_1$ and $X_2$ each have a $\text{Gamma}(a,b)$ distribution truncated above at $w$, with pdf $f(x)$: Note that your parameter $\b...
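As a rough Monte Carlo sanity check of the setup (numpy; rejection sampling stands in for a proper truncated-gamma sampler, and the parameter values are arbitrary), the sum of two variables truncated above at $w$ is supported on $(0, 2w)$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, w = 2.0, 1.0, 3.0  # shape, scale, truncation point (illustrative)

def truncated_gamma(size):
    # Rejection sampling: draw Gamma(a, b) and keep only draws below w.
    out = []
    while len(out) < size:
        x = rng.gamma(a, b, size)
        out.extend(x[x < w])
    return np.array(out[:size])

# Sum of two independent truncated gammas: never exceeds 2*w,
# unlike the untruncated Gamma(2a, b) sum.
s = truncated_gamma(10_000) + truncated_gamma(10_000)
```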
49,182
How to get the data set size required for neural network training?
There's really no fixed rule that you can apply here. The number of training samples for training depends on the nature of the problem, the number of features, and the complexity of your network architecture. Try "simple" architectures first, i.e., fewer layers, fewer units per layer and experiment a bit with different...
49,183
How to get the data set size required for neural network training?
I'll copy my answer from the very related question How few training examples is too few when training a neural network? (any update will be performed there): It really depends on your dataset, and network architecture. One rule of thumb I have read (e.g., in (2)) was a few thousand samples per class for the neural ...
49,184
How to get the data set size required for neural network training?
The "data set size" is a property of the data set, not of the NN. If you are working with the MNIST data set, the full data set is 60,000 images. If you split off 10% for validation, you'd have 54,000 images for training. The training data set size will be 54,000.
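The split arithmetic can be sketched as:

```python
total = 60_000      # MNIST training images
val_frac = 0.10     # 10% held out for validation

n_val = int(round(total * val_frac))  # 6,000 validation images
n_train = total - n_val               # leaves 54,000 for training
```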
49,185
Formal definitions for nonparametric and parametric models
Mark J. Schervish's Theory of Statistics (1995) puts it like this (p. 1): Most paradigms for statistical inference make at least some use of the following structure. We suppose that some random variables $X_1, ..., X_n$ all have the same distribution [i.e., their induced probability measures are all equal], but we may...
49,186
Formal definitions for nonparametric and parametric models
Parametric (model): data which can be described by a finite number of parameters (for example gaussian with both unknown mean and variance). Nonparametric (model): data which need an infinite number of parameters to be described (for example bounded probability distributions, continuous regression functions...) Paramet...
49,187
Ways of implementing Translation invariance
This answer by Matt Krause on What is translation invariance in computer vision and convolutional neural network? contains some pointers: One can show that the convolution operator commutes with respect to translation. If you convolve $f$ with $g$, it doesn't matter if you translate the convolved output $f*g$, or if yo...
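That commuting property can be verified numerically for the circular-convolution case; this is a sketch using numpy's FFT (the signal and filter are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=32)   # signal
g = rng.normal(size=32)   # filter

def circ_conv(x, y):
    # Circular convolution via the FFT convolution theorem.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

shift = 5
# Convolving a shifted input equals shifting the convolved output:
lhs = circ_conv(np.roll(f, shift), g)
rhs = np.roll(circ_conv(f, g), shift)
```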
49,188
Ways of implementing Translation invariance
I can't give you a specific link, but I'd start looking into convolutional neural networks (CNNs). I don't know whether there are other approaches to this problem.
49,189
Understanding formulation of hypotheses in difference between two sample means (z test)
1) If you want your hypotheses to partition the universe, where do you stop? Imagine you have a dataset drawn from a normal distribution. You might consider the hypotheses: $H_1$: $\mu>0$ $H_0$: $\mu\leq0$ $H_{-1}$: The data weren't Normal after all but followed some other distribution $H_{-2}$: The data didn't even ha...
49,190
Understanding formulation of hypotheses in difference between two sample means (z test)
Your assumption is correct, and you explained it nicely yourself. If you swap the hypotheses, keep in mind that the significance level $\alpha$ and $1-\text{power}$ will swap places.
49,191
How are unobserved components predicted in random effect models?
So the true model has the unobserved individual-level, time-invariant heterogeneity: $y_{it}=\beta x_{it}+c_i+e_{it}$. So we estimate: $y_{it}=\alpha + \beta x_{it}+u_{it}$, where $u_{it}=c_i-\alpha+e_{it}$. Use pooled OLS to get $\hat u_{it}$ and $\hat\alpha$. Let $c_i-\alpha=\mu$; then $\hat\mu=(1/n)\sum \hat u_{it}$ and $\hat e_{it}=...
49,192
How are unobserved components predicted in random effect models?
Ok, I managed to get the answer I wanted, which also explains why the estimator is unbiased and consistent. Here it is: The model is: $$y_{it} = x_{it}\beta + c_{i} + \epsilon_{it} $$ From RE we obtain an estimation of $\beta$. Define the estimation error $\hat{u}_{it}$: $$ \hat{u}_{it} \equiv y_{it} - x_{it}\hat{\beta...
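The recipe above can be checked on simulated panel data; this is a sketch in numpy with made-up parameter values (pooled OLS stands in for the RE estimator, which is consistent here since $c_i$ is independent of $x_{it}$):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, beta = 200, 10, 1.5
c = rng.normal(size=N)                    # individual effects c_i
x = rng.normal(size=(N, T))
e = rng.normal(scale=0.5, size=(N, T))
y = beta * x + c[:, None] + e

# Pooled OLS with an intercept
X = np.column_stack([np.ones(N * T), x.ravel()])
coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
alpha_hat, beta_hat = coef

# u_hat_it = y_it - x_it*beta_hat - alpha_hat; averaging over t for each i
# recovers (up to a common constant absorbed by the intercept) c_i.
u = y - beta_hat * x - alpha_hat
c_hat = u.mean(axis=1)
```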
49,193
Bayesian A/B testing a continuous value (Not a success rate)
Let's take a look at what information you have: Whether the customer purchased anything $(Y_K)$. This is binary. If the customer made a purchase, which incentive threshold they were willing to buy at $(Y_L)$. This is ordinal, and ranges from 0 to 3. The value of this is meaningless when $Y_K=0$. The amount of mon...
49,194
Bayesian A/B testing a continuous value (Not a success rate)
Take the 99% interval (HDR, high-density region) of your posterior distribution at each step, and check whether your $H_0$ (stopping point) is in it. If you don't know the distribution, you can simply take a big sample from the posterior and then take the interval based on the percentiles of your sample.
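A minimal sketch of the percentile version of that check (numpy; the Beta posterior and the stopping value are illustrative stand-ins, and the central 99% percentile interval is used as a simple proxy for the HDR):

```python
import numpy as np

rng = np.random.default_rng(3)
# Example posterior: Beta(40, 60) for a conversion-like rate
draws = rng.beta(40, 60, size=100_000)

lo, hi = np.percentile(draws, [0.5, 99.5])  # central 99% interval
h0 = 0.6                                    # stopping value to test
stop = not (lo <= h0 <= hi)                 # stop once H0 leaves the interval
```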
49,195
Help understanding Standard Error
Edited Q3 answer to clarify. Q1: yes, this is talking about bias/dependence in the observations/errors. Those formulae only hold strictly true if the data is IID (independent and identically distributed). If there is bias then you have to apply a correction. Q2: yes, although the convergence will slow as you approach z...
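The IID standard-error formulae mentioned here can be sanity-checked by simulation; a sketch in numpy (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps, sigma = 50, 20_000, 2.0

# Empirical spread of the sample mean across many IID samples,
# versus the formula sigma / sqrt(n).
means = rng.normal(0.0, sigma, size=(reps, n)).mean(axis=1)
se_empirical = means.std()
se_formula = sigma / np.sqrt(n)
```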
49,196
Help understanding Standard Error
Q1 A: They could be correlated by a measure in the background. For instance, your sample might have been influenced by the time when you took it; the sun standing in a particular position or whatever. Then you would obtain another result than by sampling the whole day (what you might have intended). B: As there is no p...
49,197
Procedure for testing covariate balance for generalized propensity score estimator
The method you describe would be a coarse way to evaluate balance, but a finer way is the following: For each covariate, compute the correlation between the covariate and the treatment variable after conditioning. If it is 0, then the variable will no longer confound the estimate of the treatment effect. Calculating st...
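One way to sketch the weighted-correlation check (numpy; the toy covariate, binary treatment, and weights are constructed so that weighting restores balance):

```python
import numpy as np

def weighted_corr(x, t, w):
    """Correlation between covariate x and treatment t under weights w."""
    w = w / w.sum()
    mx, mt = np.sum(w * x), np.sum(w * t)
    cov = np.sum(w * (x - mx) * (t - mt))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    st = np.sqrt(np.sum(w * (t - mt) ** 2))
    return cov / (sx * st)

x = np.array([1.0, 2.0, 3.0, 2.0, 3.0, 4.0])
t = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

r_unw = weighted_corr(x, t, np.ones(6))  # imbalance before weighting
# Weights chosen so both groups have weighted mean x of 2.5 -> balance.
r_w = weighted_corr(x, t, np.array([1.0, 1.0, 4.0, 4.0, 1.0, 1.0]))
```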
49,198
Can neural networks be used to predict pseudo-random numbers?
A recent paper in this vein can be found in "Learning from Pseudo-Randomness with an Artificial Neural Network – Does God Play Pseudo-Dice?" by Fenglei Fan & Ge Wang. Inspired by the fact that the neural network, as the mainstream method for machine learning, has brought successes in many application areas, here we pr...
49,199
My data are very skew and I can't see any detail in a histogram. How do I see the shape?
With or without exact zeros a histogram of a very skew distribution can look like this. It has nothing to do with the spread, nor with the existence of zeros, but with how far above the bulk of the data the largest observation is. You're dealing with two different problems at once here -- $\:$ a. The distribution is v...
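One standard remedy for this situation (not necessarily the one this answer goes on to describe) is a log transform, or log1p when exact zeros are present; a sketch in numpy with an ad-hoc sample-skewness helper:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.lognormal(mean=0.0, sigma=2.0, size=5_000)  # very right-skewed

def skewness(v):
    # Simple moment-based sample skewness.
    d = v - v.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

raw_skew = skewness(x)
log_skew = skewness(np.log1p(x))  # log1p maps exact zeros to 0
```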
49,200
If we have an auto-differentiation tool, do we still need the EM algorithm?
In some cases yes; autodiff certainly makes life easier in many circumstances. But the EM algorithm may still be more appropriate in other cases. For example, consider fitting mixture models. At each step, the EM algorithm will always return valid parameters for the distribution. Although gradient-based optimization me...
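The point that EM's closed-form updates always return valid parameters can be illustrated with a minimal two-component Gaussian-mixture EM (numpy; data and initialization are made up):

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

# Initial parameters: weights, means, variances
pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibilities under the current Gaussians
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form updates keep pi on the simplex and var > 0,
    # with no projection or constraint handling needed.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```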