idx | question | answer |
|---|---|---|
48,401 | How do I perform diagnostic checks on a beta regression? | I am afraid I have a relatively unsatisfying answer, but I have included a number of references you may explore.
Beta regression models are relatively new, compared to the rest of the common generalized linear models. Ferrari & Cribari-Neto (2004) introduced the parameterization that is used by most statistical package... |
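A minimal R sketch of common beta-regression diagnostics, assuming the betareg package and a hypothetical data frame dat with a response y strictly inside (0, 1):

    library(betareg)
    m <- betareg(y ~ x1 + x2, data = dat)     # 'dat', 'x1', 'x2' are hypothetical
    summary(m)                                # coefficients on the logit scale
    plot(m, which = 1:4)                      # residuals, Cook's distance, leverage
    res <- residuals(m, type = "sweighted2")  # standardized weighted residuals 2
    qqnorm(res); qqline(res)                  # rough normality check of residuals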
48,402 | Significance of relationship between sex and bachelor's degree or higher (2017 U.S. labor force, ages 25 and up) (disparity, difference) | I'll weigh in with one point, but make it an answer so I can include some code and results.
I think the way I would approach the question is to consider the effect size.
If I have understood the question and data correctly, phi for the appropriate table comes out to 0.05. Cohen (1988) interprets this value as less tha... |
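A hedged R sketch of the phi calculation alluded to above, with made-up 2x2 counts (sex by degree attainment), not the answerer's actual figures:

    tab <- matrix(c(25000, 27000, 21000, 20000), nrow = 2,
                  dimnames = list(sex = c("male", "female"),
                                  degree = c("yes", "no")))
    chi <- chisq.test(tab, correct = FALSE)
    sqrt(unname(chi$statistic) / sum(tab))  # phi = sqrt(chi-squared / n)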
48,403 | Significance of relationship between sex and bachelor's degree or higher (2017 U.S. labor force, ages 25 and up) (disparity, difference) | There are a few confusions at the base of this question that need to be addressed at the outset. As stated, your question asks about sex differences in the attainment of tertiary degrees; this question makes no mention of workforce participation, and hence, data on that metric is not relevant to your question as it is... |
48,404 | Significance of relationship between sex and bachelor's degree or higher (2017 U.S. labor force, ages 25 and up) (disparity, difference) | I will focus on the aspect of statistical significance and your calculations, and explain why they are wrong.
Addressing the more refined question of 'practical' significance is a bit more difficult. Not only is this topic a bit more subjective (though you can already find several posts on this website about diff... |
48,405 | How to set up an intercept-only mixed logistic regression in order to test for difference from 50% chance level? | I think you are confused about the role of the intercept in logistic regression.
Logistic regression predicts the probability of some outcome, in your case e.g. the probability of the A choice. To do that, it forms a linear combination of predictors and passes it through a logistic function that "squeezes" real numbers fro... |
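A minimal sketch of such a model in R, assuming the lme4 package and a hypothetical data frame dat with a binary choice (1 = A) and a subject grouping factor:

    library(lme4)
    m <- glmer(choice ~ 1 + (1 | subject), data = dat, family = binomial)
    summary(m)        # the Wald test of the intercept tests logit(p) = 0, i.e. p = 0.5
    plogis(fixef(m))  # back-transform the intercept to a probability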
48,406 | How to set up an intercept-only mixed logistic regression in order to test for difference from 50% chance level? | I suggest using a binomial test, with $p=0.5$.
One common use of the binomial test is in the case where the null hypothesis is that two categories are equally likely to occur (such as a coin toss). Tables are widely available to give the significance of the observed numbers of observations in the categories for this case. How... |
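In R this is a one-liner; the counts here (60 'A' choices in 100 trials) are hypothetical:

    binom.test(x = 60, n = 100, p = 0.5)  # two-sided test against chance level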
48,407 | Investigate overdispersion in a plot for a Poisson regression | One somewhat useful plot would be to plot absolute Pearson residuals against $\sqrt{\hat{y}}$ (or $\hat{y}$ or $\log(\hat{y})$...). It should look flat, and as long as the fitted mean isn't too small, the mean value on the y-axis should be roughly 0.8 (the mean of the squared Pearson residuals should be about 1).
... |
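A hedged R sketch of the plot described, for a Poisson GLM on a hypothetical data frame dat:

    m <- glm(y ~ x, family = poisson, data = dat)  # 'dat', 'x', 'y' are hypothetical
    r <- abs(residuals(m, type = "pearson"))
    plot(sqrt(fitted(m)), r,
         xlab = expression(sqrt(hat(y))), ylab = "|Pearson residual|")
    lines(lowess(sqrt(fitted(m)), r))  # the smooth should be roughly flat
    abline(h = sqrt(2 / pi), lty = 2)  # ~0.8, the mean of |Z| for a standard normal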
48,408 | Principal Components of Random Walk | I actually recently wrote a paper on this subject which will appear at NIPS 2018: https://arxiv.org/abs/1806.08805
My collaborator and I proved that in the limit of an infinite number of dimensions the projection of a random walk onto any PCA component is a sinusoid. You are welcome to read the paper for the proof, bu... |
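A hedged numerical check of the claim (the walk length and number of dimensions are arbitrary choices):

    set.seed(1)
    n <- 500; d <- 50
    walk <- apply(matrix(rnorm(n * d), n, d), 2, cumsum)  # d-dimensional random walk
    pc <- prcomp(walk)
    matplot(pc$x[, 1:3], type = "l", lty = 1, ylab = "projection")
    # each projection should resemble a cosine of increasing frequency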
48,409 | Estimate the variance of Gaussian distribution from noisy sample | I would answer this question through the use of Bayesian estimation with a "non-informative" prior.
Notation and setup
Data model: $(y_i|\mu,\sigma^2)\sim N(\mu,\sigma^2+\sigma_e^2)$ for $i=1,\dots,n$
Prior: $p(\mu,\sigma^2)\propto(\sigma^2+\sigma_e^2)^{-2}$
The prior does not favour one source. We have $\frac... |
48,410 | Estimate the variance of Gaussian distribution from noisy sample | If (using the notation of the answer from @probabilityislogic) $\sigma_e^2$ is known, then the maximum likelihood estimator of $\sigma^2$ is
$$\hat{\sigma}^2=\frac{n-1}{n}s^2-\sigma_e^2$$
(or rather the maximum of this and zero). The estimate of the asymptotic variance of $\hat{\sigma}^2$ is given by
$$\frac{2 (n-1)... |
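A minimal R sketch of the estimator above on simulated data; the true $\sigma^2=1$ and the known $\sigma_e$ are arbitrary choices:

    set.seed(2)
    sigma_e <- 0.5
    y <- rnorm(200, mean = 1, sd = sqrt(1 + sigma_e^2))  # noisy sample, true sigma^2 = 1
    n <- length(y); s2 <- var(y)
    max((n - 1) / n * s2 - sigma_e^2, 0)  # should be near 1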
48,411 | About specifying independent priors for each parameter in Bayesian modeling | They are specified as independent when you do not want to assume that they are a priori informative about each other. That is, knowing the value of one would not change your mind about any of the others, before seeing any data. If, on the other hand, you thought that e.g. larger means tended to correspond to smaller ... |
48,412 | Which regression analysis should I use for ranked data? | It sounds like you have the potential for two different models here, one that predicts rank and one that predicts premium.
For the rank model, something like an ordinal logistic regression may be appropriate. For the premium model, a linear regression may work. Both models can accommodate continuous and categorical pr... |
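A hedged R sketch of both models, assuming a hypothetical data frame dat with predictors x1 and x2:

    library(MASS)
    dat$rank <- factor(dat$rank, ordered = TRUE)             # ranks as an ordered factor
    m_rank <- polr(rank ~ x1 + x2, data = dat, Hess = TRUE)  # ordinal logistic regression
    m_premium <- lm(premium ~ x1 + x2, data = dat)           # linear model for the premium
    summary(m_rank); summary(m_premium)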
48,413 | Naïve Bayes Theorem for multiple features | The Naive Bayes algorithm assumes that your features are independent (hence we call it "naive", since it makes the naive assumption of independence, so we don't have to care about dependencies between them). It follows that we model
$$ \begin{align}
p(C_k, x_1, x_2, \dots, x_n) &\propto p(x_1 | C_k) \, p(x_2 | C_k) \dots p(... |
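A small illustration in R with the e1071 package (my choice of tool, not the asker's) on the built-in iris data; naiveBayes() multiplies per-feature class-conditional densities exactly as in the factorization above:

    library(e1071)
    m <- naiveBayes(Species ~ ., data = iris)
    predict(m, head(iris[, -5]), type = "raw")  # posterior class probabilities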
48,414 | Why do we need density in estimation and cumulative distribution in transformation? | I guess that what you mean is the maximum likelihood estimation scenario, where we consider some value $\hat \theta$ as the best guess for $\theta$ if it maximizes the likelihood function:
$$
L(\theta) = \prod_{i=1}^n f(x_i|\theta)
$$
Imagine a simple model, where we want to estimate the mean of a normal distribution with kn... |
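A minimal R sketch of that scenario: maximizing the log-likelihood for the mean of a normal with known $\sigma = 1$ recovers the sample mean:

    set.seed(3)
    x <- rnorm(100, mean = 3, sd = 1)
    negloglik <- function(mu) -sum(dnorm(x, mean = mu, sd = 1, log = TRUE))
    optimize(negloglik, interval = c(-10, 10))$minimum  # ~ mean(x)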
48,415 | Find the MSE of a true response and its predicted value using OLS estimation | Here are two facts that we can utilize: if $A$ is a constant matrix, then $$\operatorname{E}[AX]=A\operatorname{E}[X],\qquad\operatorname{Var}(AX)=A\operatorname{Var}(X)A'.$$
For convenience let's denote $\hat{y}=f(\mathbf{x};\mathcal{D})$. Notice that $\operatorname{E}_{\mathcal{D}}\left[\left(\hat{y}-\operatorname{E}... |
48,416 | Quantile regression - "check function" | The check function stems from taking an optimization view of the $\tau$-th sample quantile of a sample $\{Y_1, \ldots, Y_n\}$.
Conventionally, given an observed sample $Y_1, \ldots, Y_n$, the $\tau$-th sample quantile $\hat{Q}_Y(\tau)$ is defined by ranking, i.e., $\hat{Q}_Y(\tau)$ is the $\lfloor n\tau \... |
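A hedged numerical check in R that minimizing the check-function loss $\rho_\tau(u) = u(\tau - \mathbf{1}\{u<0\})$ recovers the sample quantile:

    set.seed(4)
    y <- rexp(1000); tau <- 0.25
    loss <- function(q) sum((y - q) * (tau - (y - q < 0)))  # summed check loss
    optimize(loss, range(y))$minimum  # close to the sample quantile below
    quantile(y, tau, type = 1)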
48,417 | Statistics in the context of Search Engine Optimization (SEO)? | First, I would like to explain the history of SEOs and consequently why many SEO books do not provide the exact implementation details for a general audience.
From my understanding, there are very few people who know exactly how search engines work. If someone knows how Google's search works in great detail, he/she ca... |
48,418 | How to conduct sample size calculation for >2 groups in R? | If you do a three-group comparison (Trt1 vs Ctl and Trt2 vs Ctl), then a sample size calculation can be done in the following way (an R sketch follows the steps):
Obtain sample sizes for Trt1 and Ctl in a two group comparison.
Obtain sample sizes for Trt2 and Ctl in a second two group comparison.
Assign Trt1 and Trt2 according to their respective ... |
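A minimal R sketch of these steps; the effect sizes are hypothetical, the Bonferroni split of $\alpha$ across the two comparisons is my addition, and giving the control group the larger of the two sizes is one reasonable reading of the truncated last step:

    n1 <- power.t.test(delta = 0.5, sd = 1, sig.level = 0.05 / 2, power = 0.8)$n
    n2 <- power.t.test(delta = 0.8, sd = 1, sig.level = 0.05 / 2, power = 0.8)$n
    ceiling(c(Trt1 = n1, Trt2 = n2, Ctl = max(n1, n2)))  # per-group sizes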
48,419 | Where can I find more materials on 'binning' after PCA? | Let's draw a picture with two variables. It will illustrate the general idea.
To achieve this, I generated a set of 500 data points with an expected correlation of $0.25$, computed the first principal component (PC), cut it into five equinumerous bins, and computed the first PC for each of the bins.
The first PC (not shown dir... |
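A hedged re-creation of this construction in R (MASS::mvrnorm is my choice for the simulation):

    set.seed(5)
    library(MASS)
    X <- as.data.frame(mvrnorm(500, mu = c(0, 0),
                               Sigma = matrix(c(1, 0.25, 0.25, 1), 2)))
    pc1 <- prcomp(X)$x[, 1]                                  # scores on the first PC
    bins <- cut(pc1, quantile(pc1, 0:5 / 5), include.lowest = TRUE)
    by(X, bins, function(g) prcomp(g)$rotation[, 1])         # per-bin first-PC loadings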
48,420 | Second directional derivative and Hessian matrix | Think carefully about what you mean when you describe the directional derivative as the "slope" of a function $f$ in a certain direction. The concept of a "slope" only really makes sense in the context of a function whose domain is one-dimensional.
It helps to think of it this way. Let $f({\bf x})$ be a scalar-valued f... |
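For reference, the identity this line of thought leads to is standard multivariable calculus (my addition, not part of the original answer):
$$D^2_{\mathbf{u}} f(\mathbf{x}) = \left.\frac{d^2}{dt^2}\right|_{t=0} f(\mathbf{x} + t\mathbf{u}) = \mathbf{u}^\top \nabla^2 f(\mathbf{x})\,\mathbf{u}$$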
48,421 | Why do the leading eigenvectors of $A$ maximize $\text{Tr}(D^TAD)$? | Let us denote $X^\top X$ by $A$. By construction, it is an $n\times n$ square symmetric positive semi-definite matrix, i.e. it has an eigenvalue decomposition $A=V\Lambda V^\top$, where $V$ is the matrix of eigenvectors (each column is an eigenvector) and $\Lambda$ is a diagonal matrix of non-negative eigenvalues $\lamb... |
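A hedged numerical check in R that the trace $\text{Tr}(D^\top A D)$ over orthonormal $k$-column matrices $D$ is maximized by the top-$k$ eigenvectors:

    set.seed(6)
    A <- crossprod(matrix(rnorm(200 * 6), 200, 6))  # X'X, symmetric PSD
    k <- 2
    V <- eigen(A)$vectors[, 1:k]                    # top-k eigenvectors
    sum(diag(t(V) %*% A %*% V))                     # = sum of the two largest eigenvalues
    D <- qr.Q(qr(matrix(rnorm(6 * k), 6, k)))       # a random orthonormal competitor
    sum(diag(t(D) %*% A %*% D))                     # no larger than the value above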
48,422 | Why do the leading eigenvectors of $A$ maximize $\text{Tr}(D^TAD)$? | Define $W=X^TX$, and denote by $v_i$ a unit-norm eigenvector corresponding to its $i$-th largest eigenvalue.
By the variational characterization of eigenvalues,
$$
v_1 = \underset{x,\|x\|_2=1}{\arg\max} ~ ~ x^T W x
$$
Since you are looking for an orthogonal matrix, your next vector should be in a space orthogonal to $v... |
48,423 | Interpretation of standard error of ARIMA parameters | The standard errors of estimated AR parameters have the same interpretation as the standard error of any other estimate: they are (an estimate of) the standard deviation of the estimator's sampling distribution.
The idea is that there is some unknown but fixed underlying data generating process (DGP), governed by an unknown but fi... |
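A hedged R illustration of that interpretation: the s.e. reported for an AR(1) coefficient should track the spread of the estimate across simulated replications:

    set.seed(7)
    est <- replicate(200, coef(arima(arima.sim(list(ar = 0.6), n = 300),
                                     order = c(1, 0, 0)))["ar1"])
    sd(est)                          # spread of the estimate across replications
    fit <- arima(arima.sim(list(ar = 0.6), n = 300), order = c(1, 0, 0))
    sqrt(diag(fit$var.coef))["ar1"]  # reported standard error; should be similar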
48,424 | Multilabel Classification with scikit-learn and Probabilities instead of Simple Labels | Let me try to answer this; I will edit the answer as I have more information. In general scikit-learn does not provide classifiers that handle the multi-label classification problem very well. That's why I started the scikit-multilearn extension of scikit-learn and, together with a lovely team of multi-label classific... |
48,425 | Inference for Dynamic Bayesian Networks | I personally think the question is too broad to be answered well, but I still want to give some suggestions.
I feel Murphy's introduction to graphical models is very useful and it covers Bayesian Networks with discrete time very well. If you have not checked this, I would recommend reading it first.
A Brief Introducti... |
48,426 | Tuning adaboost | Number of weak learners
Train many, many weak learners. Then look at a test-error vs. number of estimators curve to find the optimal number.
Learning rate
Smaller is better, but you will have to fit more weak learners the smaller the learning rate. During initial modeling and EDA, set the learning rate rather larg... |
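A hedged sketch of this workflow in R using the gbm package, which offers AdaBoost's exponential loss via distribution = "adaboost"; the data frame dat with a 0/1 outcome y is hypothetical:

    library(gbm)
    m <- gbm(y ~ ., data = dat, distribution = "adaboost",
             n.trees = 5000, shrinkage = 0.01,  # small learning rate, many weak learners
             interaction.depth = 2, train.fraction = 0.8)
    gbm.perf(m, method = "test")  # plots the test-error vs. n-estimators curve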
48,427 | Diagnostics for General Linear Models | Pearson residuals in general do not follow a normal distribution.
Deviance residuals don't follow a normal distribution, right?
They don't, but they will typically be much closer to being normally distributed than Pearson residuals.
Here's an example with a Poisson model applied to genuinely Poisson data:
Clearly the w... |
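A hedged re-creation of the kind of example described, comparing QQ-plots of the two residual types for a Poisson GLM on genuinely Poisson data:

    set.seed(8)
    x <- runif(500); y <- rpois(500, exp(0.5 + 1.5 * x))
    m <- glm(y ~ x, family = poisson)
    op <- par(mfrow = c(1, 2))
    qqnorm(residuals(m, type = "pearson"), main = "Pearson")
    qqline(residuals(m, type = "pearson"))   # visibly skewed for small means
    qqnorm(residuals(m, type = "deviance"), main = "Deviance")
    qqline(residuals(m, type = "deviance"))  # much closer to the line
    par(op)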
48,428 | Diagnostics for General Linear Models | In a number of texts both Pearson and deviance residuals (or their standardized versions, for example, Sheather (2009)) are used to plot against predicted values. When it comes to the comparison between these two types of residuals, deviance residuals are preferred over Pearson residuals. As an explanation of why this is t... |
48,429 | Bias-corrected percentile confidence intervals | You almost had it. Change your Step 2 code as shown below. You want the value of Z associated with the proportion you computed in Step 1, and that is what qnorm will give you.
rsq.bc <- quantile(mtcar.boot.rsq$boot.rsq,
          c(pnorm((2*qnorm(mean(mtcar.boot.rsq$boot.high))) - 1.96),
          ... |
48,430 | Reasons as to why standard multiple regression would not be appropriate? | The really interesting aspect of this question is that the doses are recorded as intervals and those intervals span sizable portions of the total range. This means we should be concerned that standard procedures, like logistic regression, that represent the doses as individual numbers might be misleading.
By means of v... |
48,431 | Differences between a sequence of simple linear regressions vs a single multiple linear regression | The second strategy is the same linear model, but with a different/inferior estimation procedure.
Let's look at the sequential approach more closely, with two covariates $X_1$ and $X_2$. After regressing $Y$ on $X_1$, we have:
$$\hat Y = b_0 + b_1X_1$$
Now, you want to regress the residuals on $X_2$. Thus, the model w... |
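A hedged numerical demonstration in R: with correlated covariates, the sequential estimate of the $X_2$ coefficient differs from the joint one:

    set.seed(9)
    x1 <- rnorm(300); x2 <- 0.6 * x1 + rnorm(300)
    y <- 1 + 2 * x1 + 3 * x2 + rnorm(300)
    coef(lm(y ~ x1 + x2))   # joint fit recovers roughly (1, 2, 3)
    r <- resid(lm(y ~ x1))
    coef(lm(r ~ x2))["x2"]  # sequential estimate is attenuated, well below 3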
48,432 | With H2O AutoML is it okay to use my test set as the leaderboard? | According to the docs:
leaderboard_frame: This argument allows the user to specify a particular data frame to rank the models on the leaderboard. This frame will not be used for anything besides creating the leaderboard. If this option is not specified, then a leaderboard_frame will be created from the training_frame.... |
48,433 | Recurrent networks mimicking previous / current input | This is a very common problem. Here is an article with a good explanation. People have been trying to do this for a very long time, and it is safe to say there is no such thing as a model that can tell you what the price of a financial instrument will be in the future with good accuracy. Some people believe that there is... |
48,434 | Why can't ARIMA model large lags and/or long range dependence? | As Prof. Hyndman explains in this blog post, there's nothing in the mathematics of ARMA models that would restrict forecasting long seasonal periods. The reason you can't forecast very long periods is the fact that most software tools (including R packages) have a threshold on the allowed seasonal lags due to the high co... |
48,435 | Learning from the flaws in the NHST and p-values | Even if you do get a significant p-value from a test of significance, you are supposed to look at the magnitude of the effect by constructing a confidence interval for it.
Case 1:
When examining the confidence interval, if you notice for example that the interval falls entirely below your predefined threshold for what ... |
48,436 | Learning from the flaws in the NHST and p-values | I am going to change your question slightly to "patient can only notice a change of more than 1 cm" (this makes the null a closed set, but a more complicated argument holds for open sets). Your reasoning does not overcome the issue because what you really want to test is $|\mu_0-\mu_1|\le 1{\rm cm}$ and not $\mu_0=\mu... |
48,437 | Overfitting in neural network | Without knowing a lot more about the model or the data used, it is hard to answer these questions with any rigour. That aside, the values you provide would make me think it is a reasonable model that does not necessarily overfit the training data.
For your second question, my first line of action would always be to p... |
48,438 | Overfitting in neural network | Overfitting is something that happens gradually, so it is sometimes hard to say. Also, whether a model is "good" or not depends a lot on context. If you need 99% accuracy for your model to be used in production, then the values are not "good".
However, the values you show for train and test loss and accuracy do not indicat... |
48,439 | What happens if I do principal components of the principal components? | Turning my comment above into an answer:
Since your first PCA identifies orthogonal vectors, your second PCA should in principle do nothing (since it should basically find the same axes as the first round). But as @NickCox points out, coefficients might be reversed.
However, there may be small differences in practice... |
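A hedged check in R: a second PCA run on the scores of a first PCA returns an (up-to-sign) identity rotation:

    set.seed(10)
    X <- matrix(rnorm(100 * 4), 100, 4) %*% matrix(rnorm(16), 4, 4)
    p1 <- prcomp(X)
    p2 <- prcomp(p1$x)         # PCA of the PC scores
    round(p2$rotation, 3)      # ~ a signed identity matrix (coefficients may flip)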
48,440 | How to compute $\mathbb{E}\left[Y_1Y_2 \mid |U_1-U_2| <a\right]$ for $Y_i\sim N\left(\beta U_i, \sigma^2\right)$ and $U_i \sim Unif(0,1)$? | Suppose you know that $U_1$, $U_2$ obtain the values $u_1$, $u_2$, respectively. Then, because of the independence of the variables,
$$
E[Y_1 Y_2 | U_1 = u_1, U_2 = u_2] = \beta^2 u_1 u_2.
$$
Therefore, we need to calculate
$$
\int_{u_1 = 0}^1 \int_{u_2 = u_1}^{u_1 + a} \beta^2 u_1 u_2 \,\text{d} u_2 \,\text{d} u_1
... |
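A hedged Monte Carlo check of the reduction above ($\beta = 2$, $a = 0.3$, $\sigma = 1$ are arbitrary choices):

    set.seed(11)
    beta <- 2; a <- 0.3; n <- 1e6
    u1 <- runif(n); u2 <- runif(n)
    y1 <- rnorm(n, beta * u1); y2 <- rnorm(n, beta * u2)
    keep <- abs(u1 - u2) < a
    mean(y1[keep] * y2[keep])           # simulated E[Y1 Y2 | |U1 - U2| < a]
    beta^2 * mean(u1[keep] * u2[keep])  # matches beta^2 E[U1 U2 | same event]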
48,441 | Are XGBoost probability outputs based on the number of examples in a terminal leaf | Are XGBoost probability outputs based on the number of examples in a terminal leaf?
No. XGBoost is a gradient boosted tree, so it's estimating weights $c \in \mathbb{R}^M$ that assign a weight to each of the $M$ leaves. A sample prediction (on the logit scale) is the sum of its leaves' weights. In the binary case, the inverse logis... |
48,442 | Are XGBoost probability outputs based on the number of examples in a terminal leaf | No. I do not know how XGBoost estimates probabilities, but from my experience, if you have say 100 samples, the probability estimates will not (necessarily) be multiples of 0.01 (which would be the case for decision trees). Therefore, there must be something else in the estimation of probabilities with XGBoost. |
48,443 | Replacing RNNs with dilated convolutions | Just in case anyone is interested: yes, it is possible to replace the RNN layers by Dilated Convolutions (DCs).
The architectures described in the speech recognition literature did not work out of the box for HTR, but with some modifications results got better.
I will give a short summary.
The NN contains CNN layers and a ... |
48,444 | Low loss and low accuracy. What is the reason? [duplicate] | A lower cost function error does not necessarily mean better accuracy.
The error of the cost function represents how well your model is learning/able to learn with respect to your training examples.
Now the question is:
Is the model learning something that I expect it to learn?
It can show very low learning-curve error, but when you a... |
48,445 | Coverage probability of credible intervals if we take Bayesian model literally | I asked a similar question:
Methods for testing a Bayesian method's software implementation and got this answer from @jaradniemi:
Bayesians don't lose the relative frequency-based interpretation of
probability. In particular, if you define this procedure:
simulate from the prior,
then simulate from the model using ... | Coverage probability of credible intervals if we take Bayesian model literally | I asked a similar question:
Methods for testing a Bayesian method's software implementation and got this answer from @jaradniemi:
Bayesians don't lose the relative frequency-based interpretation of
| Coverage probability of credible intervals if we take Bayesian model literally
I asked a similar question:
Methods for testing a Bayesian method's software implementation and got this answer from @jaradniemi:
Bayesians don't lose the relative frequency-based interpretation of
probability. In particular, if you defin... | Coverage probability of credible intervals if we take Bayesian model literally
I asked a similar question:
Methods for testing a Bayesian method's software implementation and got this answer from @jaradniemi:
Bayesians don't lose the relative frequency-based interpretation of
|
48,446 | Coverage probability of credible intervals if we take Bayesian model literally | As there is no generally accepted / unique way to specify (uninformative) priors, and as different priors will lead to different credible intervals, it seems obvious that the coverage of Bayesian CIs is not fixed, but will depend on the prior that you choose, in relation to the "true" parameter values.
It will gener... | Coverage probability of credible intervals if we take Bayesian model literally | As there is no generally accepted / unique way to specify (uninformative) priors, and as different priors will lead to different credible intervals, it seems obvious that the coverage of Bayesian CI | Coverage probability of credible intervals if we take Bayesian model literally
As there is no generally accepted / unique way to specify (uninformative) priors, and as different priors will lead to different credible intervals, it seems obvious that the coverage of Bayesian CIs is not fixed, but will depend on the pr... | Coverage probability of credible intervals if we take Bayesian model literally
As there is no generally accepted / unique way to specify (uninformative) priors, and as different priors will lead to different credible intervals, it seems obvious that the coverage of Bayesian CI
48,447 | In what situations would one use Approximate Bayesian Computation instead of Bayesian inference? | Quoting the great Wikipedia article on ABC (emphasis added):
Approximate Bayesian computation (ABC) constitutes a class of
computational methods rooted in Bayesian statistics. In all
model-based statistical inference, the likelihood function is of
central importance, since it expresses the probability of the obs... | In what situations would one use Approximate Bayesian Computation instead of Bayesian inference? | Quoting the great Wikipedia article on ABC (emphasis added):
Approximate Bayesian computation (ABC) constitutes a class of
computational methods rooted in Bayesian statistics. In all
model-based | In what situations would one use Approximate Bayesian Computation instead of Bayesian inference?
Quoting the great Wikipedia article on ABC (emphasis added):
Approximate Bayesian computation (ABC) constitutes a class of
computational methods rooted in Bayesian statistics. In all
model-based statistical inference, ... | In what situations would one use Approximate Bayesian Computation instead of Bayesian inference?
Quoting the great Wikipedia article on ABC (emphasis added):
Approximate Bayesian computation (ABC) constitutes a class of
computational methods rooted in Bayesian statistics. In all
model-based |
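To make the "bypass the likelihood" idea concrete, here is a bare-bones ABC rejection sampler for a toy problem where we pretend the likelihood of a normal mean is unavailable; the prior, the summary statistic, and the tolerance are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(2.0, 1.0, size=50)   # data whose likelihood we pretend not to know
obs_summary = observed.mean()              # summary statistic

def abc_rejection(n_draws=100_000, eps=0.05):
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-10, 10)                      # draw from the prior
        sim = rng.normal(theta, 1.0, size=observed.size)  # simulate a data set
        if abs(sim.mean() - obs_summary) < eps:           # compare summaries only
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection()
print(post.mean(), post.std())   # approximate posterior mean and sd for theta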
48,448 | Mapping Frequentist Risk Notation to Regression | Because the main problem concerns applying a fully general and abstract formula to a somewhat complicated model (regression), let's address it by examining a simple concrete case. Ordinary regression is a good choice because it is well known, well understood, and serves as the archetype of all more complex regression ... | Mapping Frequentist Risk Notation to Regression | Because the main problem concerns applying a fully general and abstract formula to a somewhat complicated model (regression), let's address it by examining a simple concrete case. Ordinary regression | Mapping Frequentist Risk Notation to Regression
Because the main problem concerns applying a fully general and abstract formula to a somewhat complicated model (regression), let's address it by examining a simple concrete case. Ordinary regression is a good choice because it is well known, well understood, and serves ... | Mapping Frequentist Risk Notation to Regression
Because the main problem concerns applying a fully general and abstract formula to a somewhat complicated model (regression), let's address it by examining a simple concrete case. Ordinary regression |
48,449 | Higher-dimensional version of variance | It's just the sum of the variances of each component.
Suppose $n=2$ and $X=(X_1,X_2)$. Then
$$\mathbb{E}[\|X\|^2] = \mathbb{E}[X_1^2 + X_2^2] = \mathbb{E}[X_1^2] + \mathbb{E}[X_2^2].$$
Also
$$\|\mathbb{E}[X]\|^2 = \|(\mathbb{E}[X_1],\mathbb{E}[X_2])\|^2 = \mathbb{E}[X_1]^2 + \mathbb{E}[X_2]^2.$$
Therefore
$$\begin{ali... | Higher-dimensional version of variance | It's just the sum of the variances of each component.
Suppose $n=2$ and $X=(X_1,X_2)$. Then
$$\mathbb{E}[\|X\|^2] = \mathbb{E}[X_1^2 + X_2^2] = \mathbb{E}[X_1^2] + \mathbb{E}[X_2^2].$$
Also
$$\|\math | Higher-dimensional version of variance
It's just the sum of the variances of each component.
Suppose $n=2$ and $X=(X_1,X_2)$. Then
$$\mathbb{E}[\|X\|^2] = \mathbb{E}[X_1^2 + X_2^2] = \mathbb{E}[X_1^2] + \mathbb{E}[X_2^2].$$
Also
$$\|\mathbb{E}[X]\|^2 = \|(\mathbb{E}[X_1],\mathbb{E}[X_2])\|^2 = \mathbb{E}[X_1]^2 + \mat... | Higher-dimensional version of variance
It's just the sum of the variances of each component.
Suppose $n=2$ and $X=(X_1,X_2)$. Then
$$\mathbb{E}[\|X\|^2] = \mathbb{E}[X_1^2 + X_2^2] = \mathbb{E}[X_1^2] + \mathbb{E}[X_2^2].$$
Also
$$\|\math |
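The identity is easy to check numerically; the correlated 3-dimensional distribution below is an arbitrary choice for the demonstration.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))                      # arbitrary mixing for correlated components
X = rng.normal(size=(100_000, 3)) @ A.T + 1.0    # nonzero mean makes the test non-trivial

lhs = np.mean(np.sum(X**2, axis=1)) - np.sum(np.mean(X, axis=0)**2)   # E||X||^2 - ||E X||^2
rhs = np.sum(np.var(X, axis=0))                                       # sum of component variances
print(lhs, rhs)                                                       # agree up to simulation noise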
48,450 | Higher-dimensional version of variance | I do not think $f(x)$ has any meaning, and it is absolutely not a generalization of variance. The generalization of variance is the variance-covariance matrix defined by
$E(XX')-E(X)E(X)'$ where $X$ is a random column vector. | Higher-dimensional version of variance | I do not think $f(x)$ has any meaning, and it is absolutely not a generalization of variance. The generalization of variance is the variance-covariance matrix defined by
$E(XX')-E(X)E(X)'$ where $X$ is ran | Higher-dimensional version of variance
I do not think $f(x)$ has any meaning, and it is absolutely not a generalization of variance. The generalization of variance is the variance-covariance matrix defined by
$E(XX')-E(X)E(X)'$ where $X$ is a random column vector. | Higher-dimensional version of variance
I do not think $f(x)$ has any meaning, and it is absolutely not a generalization of variance. The generalization of variance is the variance-covariance matrix defined by
$E(XX')-E(X)E(X)'$ where $X$ is ran
48,451 | SMOTE for multiclass classification | I would agree with running multiple SMOTE passes across the dataset, but with a slightly different view than already expressed. If you merely run SMOTE for each minority class against the predominant class, you're going to be generating a sample that models the difference between each minority class and the predominant c... | SMOTE for multiclass classification | I would agree with running multiple SMOTE passes across the dataset, but with a slightly different view than already expressed. If you merely run SMOTE for each minority class against the predominant | SMOTE for multiclass classification
I would agree with running multiple SMOTE passes across the dataset, but with a slightly different view than already expressed. If you merely run SMOTE for each minority class against the predominant class, you're going to be generating a sample that models the difference between each ... | SMOTE for multiclass classification
I would agree with running multiple SMOTE passes across the dataset, but with a slightly different view than already expressed. If you merely run SMOTE for each minority class against the predominant
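For reference, a sketch of driving SMOTE over a multiclass problem with the imbalanced-learn package, which oversamples each minority class in turn; the toy data and the choice of library call are my assumptions rather than the answer's own procedure.
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(3)
X = rng.normal(size=(620, 4))                  # toy features
y = np.array([0] * 500 + [1] * 80 + [2] * 40)  # 3 classes with counts 500 / 80 / 40

# 'not majority' oversamples every class except the predominant one, so the
# synthetic points for class 1 and class 2 are generated in separate passes.
X_res, y_res = SMOTE(sampling_strategy="not majority", random_state=0).fit_resample(X, y)
print(Counter(y), Counter(y_res))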
48,452 | SMOTE for multiclass classification | As I can see from your question, you are trying to balance the "-1" class and the "1" class, but they seem to be almost equal already. So, it would be better to 1) apply SMOTE on classes -1 and 0, then 2) apply SMOTE on classes 0 and 1. Then you may get all classes balanced. | SMOTE for multiclass classification | As I can see from your question, you are trying to balance the "-1" class and the "1" class, but they seem to be almost equal already. So, it would be better to 1) apply SMOTE on classes -1 and 0, then 2) apply | SMOTE for multiclass classification
As I can see from your question, you are trying to balance the "-1" class and the "1" class, but they seem to be almost equal already. So, it would be better to 1) apply SMOTE on classes -1 and 0, then 2) apply SMOTE on classes 0 and 1. Then you may get all classes balanced. | SMOTE for multiclass classification
As I can see from your question, you are trying to balance the "-1" class and the "1" class, but they seem to be almost equal already. So, it would be better to 1) apply
48,453 | linear regression on exponential distributed dependent variable | I want to use linear regression on [...] independent variables x1,x2,...xn [...]
while the dependent variable y is almost exponentially distributed
If you expect the relationship between y and the x's to be linear, then a nonlinear transformation of y will make the relationship between it and the x's nonlinear. It... | linear regression on exponential distributed dependent variable | I want to use linear regression on [...] independent variables x1,x2,...xn [...]
while the dependent variable y is almost exponentially distributed
If you expect the relationship between y and the | linear regression on exponential distributed dependent variable
I want to use linear regression on [...] independent variables x1,x2,...xn [...]
while the dependent variable y is almost exponentially distributed
If you expect the relationship between y and the x's to be linear, then a nonlinear transformation of y... | linear regression on exponential distributed dependent variable
I want to use linear regression on [...] independent variables x1,x2,...xn [...]
while the dependent variable y is almost exponentially distributed
If you expect the relationship between y and the |
48,454 | How does h2o handle time-series cross validation? | H2O algorithms can optionally use k-fold cross-validation. H2O does not yet support time-series (aka "walk-forward" or "rolling") cross-validation, however there is an open ticket to implement it here.
There is an example of how you can manually implement time-series CV using the h2o R package referenced here, if you ... | How does h2o handle time-series cross validation? | H2O algorithms can optionally use k-fold cross-validation. H2O does not yet support time-series (aka "walk-forward" or "rolling") cross-validation, however there is an open ticket to implement it her | How does h2o handle time-series cross validation?
H2O algorithms can optionally use k-fold cross-validation. H2O does not yet support time-series (aka "walk-forward" or "rolling") cross-validation, however there is an open ticket to implement it here.
There is an example of how you can manually implement time-series C... | How does h2o handle time-series cross validation?
H2O algorithms can optionally use k-fold cross-validation. H2O does not yet support time-series (aka "walk-forward" or "rolling") cross-validation, however there is an open ticket to implement it her |
48,455 | How does h2o handle time-series cross validation? | I implemented it using Sklearn TimeSeriesSplit like this:
from sklearn.model_selection import TimeSeriesSplit
from h2o.estimators import H2ORandomForestEstimator
forest = H2ORandomForestEstimator()  # instantiate the imported class; the original line referenced the class without calling it
forest.set_params(nfolds=0)
tscv = TimeSeriesSplit(n_splits=5)
Xcols=list(set(X.names)-set('NumberOfSales'... | How does h2o handle time-series cross validation? | I implemented it using Sklearn TimeSeriesSplit like this:
from sklearn.model_selection import TimeSeriesSplit
from h2o.estimators import H2ORandomForestEstimator
forest = h2o.estimators.H2ORandomFore | How does h2o handle time-series cross validation?
I implemented it using Sklearn TimeSeriesSplit like this:
from sklearn.model_selection import TimeSeriesSplit
from h2o.estimators import H2ORandomForestEstimator
forest = H2ORandomForestEstimator()  # instantiate the imported class; the original line referenced the class without calling it
forest.set_params(nfolds=0)
tscv = TimeSeriesSplit(n_spli... | How does h2o handle time-series cross validation?
I implemented it using Sklearn TimeSeriesSplit like this:
from sklearn.model_selection import TimeSeriesSplit
from h2o.estimators import H2ORandomForestEstimator
forest = h2o.estimators.H2ORandomFore |
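For readers without h2o, the same walk-forward pattern is self-contained in scikit-learn; the toy series and the choice of RandomForestRegressor are arbitrary.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = X[:, 0].cumsum() + rng.normal(scale=0.1, size=300)   # toy time-series target

for fold, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    # Each fold trains only on the past and tests on the block that follows it.
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    mse = mean_squared_error(y[test_idx], model.predict(X[test_idx]))
    print(f"fold {fold}: train={len(train_idx)} test={len(test_idx)} mse={mse:.3f}")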
48,456 | How does h2o handle time-series cross validation? | Another way to cross validate time series, which is worth sharing. Especially because the question is asked if H2o can support time-series cv. The existing h2o implementation is able to support a variant of time-series cv shown below, with the help of the fold_column variable.
fold 1 : training [4 5 6 7 8 9], test [1 2 3]
fold 2 ... | How does h2o handle time-series cross validation? | Another way to cross validate time series, which is worth sharing. Especially because the question is asked if H2o can support time-series cv. The existing h2o implementation is able to support a variant of | How does h2o handle time-series cross validation?
Another way to cross validate time series, which is worth sharing. Especially because the question is asked if H2o can support time-series cv. The existing h2o implementation is able to support a variant of time-series cv shown below, with the help of the fold_column variable.
fol... | How does h2o handle time-series cross validation?
Another way to cross validate time series, which is worth sharing. Especially because the question is asked if H2o can support time-series cv. The existing h2o implementation is able to support a variant of
48,457 | How to be absolutely sure that features do have predictive power to predict the labels (without domain knowledge) ? Does Mutual information help? | Does this mean that my data is worthless and I should probably look for more data?
No, a small mutual information between a target variable and single features does not render your dataset worthless since it neglects the information contained in the combination of features.
I will give a most simple example (XOR prob... | How to be absolutely sure that features do have predictive power to predict the labels (without doma | Does this mean that my data is worthless and i should probably look for more data ?
No, a small mutual information between a target variable and single features does not render your dataset worthless | How to be absolutely sure that features do have predictive power to predict the labels (without domain knowledge) ? Does Mutual information help?
Does this mean that my data is worthless and I should probably look for more data?
No, a small mutual information between a target variable and single features does not rend... | How to be absolutely sure that features do have predictive power to predict the labels (without doma
Does this mean that my data is worthless and I should probably look for more data?
No, a small mutual information between a target variable and single features does not render your dataset worthless |
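The XOR point is easy to reproduce numerically; here is a sketch using scikit-learn's mutual information estimator on simulated binary features.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(5)
x1 = rng.integers(0, 2, size=5000)
x2 = rng.integers(0, 2, size=5000)
y = x1 ^ x2                                  # XOR: fully determined by the pair (x1, x2)

X = np.column_stack([x1, x2])
print(mutual_info_classif(X, y, discrete_features=True))   # each feature alone: ~0
print(((x1 + x2) % 2 == y).all())                          # yet the pair predicts y perfectly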
48,458 | What does this sampling weight mean? | Let $N$ be the population size and $n$ the sample size, let $N_h$ and $n_h$ be the population and sample sizes for stratum $h$.
Then, the weight you defined is given by
$ W_h = \frac{N_h/N}{n_h/n} = \frac{N_h}{n_h}\frac{n}{N}$
where $\frac{n}{N}$ is the sampling fraction $f$ for the whole sample and $\frac{N_h}{n_h}$ ... | What does this sampling weight mean? | Let $N$ be the population size and $n$ the sample size, let $N_h$ and $n_h$ be the population and sample sizes for stratum $h$.
Then, the weight you defined is given by
$ W_h = \frac{N_h/N}{n_h/n} = \ | What does this sampling weight mean?
Let $N$ be the population size and $n$ the sample size, let $N_h$ and $n_h$ be the population and sample sizes for stratum $h$.
Then, the weight you defined is given by
$ W_h = \frac{N_h/N}{n_h/n} = \frac{N_h}{n_h}\frac{n}{N}$
where $\frac{n}{N}$ is the sampling fraction $f$ for th... | What does this sampling weight mean?
Let $N$ be the population size and $n$ the sample size, let $N_h$ and $n_h$ be the population and sample sizes for stratum $h$.
Then, the weight you defined is given by
$ W_h = \frac{N_h/N}{n_h/n} = \ |
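Plugging in hypothetical stratum sizes shows the equivalence of the two forms:
# Hypothetical: population of 10,000 with a stratum of 1,000;
# total sample of 500 with 100 of them drawn from that stratum.
N, n = 10_000, 500
N_h, n_h = 1_000, 100

W_h = (N_h / N) / (n_h / n)        # definition above
f, f_h = n / N, n_h / N_h          # overall and stratum sampling fractions
print(W_h, f / f_h, (N_h / n_h) * (n / N))   # all three equal 0.5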
48,459 | Interpret predictions of black box models | Ribeiro's "Why should I trust you?" paper and blog post provide a method of interpreting black-box models
paper https://arxiv.org/abs/1602.04938
blog-post https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime
The model is called "LIME": locally interpretable model-agnos... | Interpret predictions of black box models | Ribeiro's "Why should I trust you?" paper and blog post provide a method of interpreting black-box models
paper https://arxiv.org/abs/1602.04938
blog-post https://www.oreilly.com/learning/introductio | Interpret predictions of black box models
Ribeiro's "Why should I trust you?" paper and blog post provide a method of interpreting black-box models
paper https://arxiv.org/abs/1602.04938
blog-post https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime
The model is called... | Interpret predictions of black box models
Ribeiro's "Why should I trust you?" paper and blog post provide a method of interpreting black-box models
paper https://arxiv.org/abs/1602.04938
blog-post https://www.oreilly.com/learning/introductio |
48,460 | Backpropagation algorithm NN with Rectified Linear Unit (ReLU) activation | As for the confusing part. Softmax derivative is simply
$$\frac{\partial L}{\partial t_i} = t_i - y_i$$
where $t_i$ is predicted output. Now, in this case $t_i, y_i \in \Re^3$, but $y_i$ has to be in the one-hot encoding form which looks like this
$$y_i = (0, \dots, \overset{\text{k'th}}{1}, \dots, 0)$$
So, for exampl... | Backpropagation algorithm NN with Rectified Linear Unit (ReLU) activation | As for the confusing part. Softmax derivative is simply
$$\frac{\partial L}{\partial t_i} = t_i - y_i$$
where $t_i$ is predicted output. Now, in this case $t_i, y_i \in \Re^3$, but $y_i$ has to be in | Backpropagation algorithm NN with Rectified Linear Unit (ReLU) activation
As for the confusing part. Softmax derivative is simply
$$\frac{\partial L}{\partial t_i} = t_i - y_i$$
where $t_i$ is predicted output. Now, in this case $t_i, y_i \in \Re^3$, but $y_i$ has to be in the one-hot encoding form which looks like th... | Backpropagation algorithm NN with Rectified Linear Unit (ReLU) activation
As for the confusing part. Softmax derivative is simply
$$\frac{\partial L}{\partial t_i} = t_i - y_i$$
where $t_i$ is predicted output. Now, in this case $t_i, y_i \in \Re^3$, but $y_i$ has to be in |
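Reading the formula as the gradient of softmax plus cross-entropy with respect to the logits, it is easy to verify against a numerical derivative (a sketch; the logits and class index are arbitrary):
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(z, y):
    return -np.log(softmax(z)[y])        # cross-entropy with one-hot target at index y

z = np.array([0.3, -1.2, 2.0])           # logits for 3 classes
y = 1                                    # true class index
analytic = softmax(z) - np.eye(3)[y]     # the claimed gradient: prediction minus one-hot

h = 1e-6
numeric = np.array([(loss(z + h * np.eye(3)[i], y) - loss(z - h * np.eye(3)[i], y)) / (2 * h)
                    for i in range(3)])
print(analytic, numeric)                 # agree to ~1e-9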
48,461 | Backpropagation algorithm NN with Rectified Linear Unit (ReLU) activation | If anyone is concerned, this "answer" will never become the accepted answer. It's more like some notes on the issue, possibly helping other people as well.
This post is very useful:
$f$ is the array of class scores for a single example (e.g. array of 3
numbers here):
$f= X\cdot W + b$,
then the Softmax classifier c... | Backpropagation algorithm NN with Rectified Linear Unit (ReLU) activation | If anyone is concerned, this "answer" will never become the accepted answer. It's more like some notes on the issue, possibly helping other people as well.
This post is very useful:
$f$ is the array | Backpropagation algorithm NN with Rectified Linear Unit (ReLU) activation
If anyone is concerned, this "answer" will never become the accepted answer. It's more like some notes on the issue, possibly helping other people as well.
This post is very useful:
$f$ is the array of class scores for a single example (e.g. ar... | Backpropagation algorithm NN with Rectified Linear Unit (ReLU) activation
If anyone is concerned, this "answer" will never become the accepted answer. It's more like some notes on the issue, possibly helping other people as well.
This post is very useful:
$f$ is the array |
48,462 | Distance measure between two multivariate normal distributions (with differing mean and covariances) | In the end I went for the Bhattacharyya distance. I adapted the R code referenced here:
// In the following, Vec3 and Mat3 are C++ Eigen types.
/// See: https://en.wikipedia.org/wiki/Mahalanobis_distance
double mahalanobis(const Vec3& dist, const Mat3& cov)
{
return (dist.transpose()*cov.inverse()*dist).eval()(0);... | Distance measure between two multivariate normal distributions (with differing mean and covariances) | In the end I went for the Bhattacharyya distance. I adapted the R code referenced here:
// In the following, Vec3 and Mat3 are C++ Eigen types.
/// See: https://en.wikipedia.org/wiki/Mahalanobis_dist | Distance measure between two multivariate normal distributions (with differing mean and covariances)
In the end I went for the Bhattacharyya distance. I adapted the R code referenced here:
// In the following, Vec3 and Mat3 are C++ Eigen types.
/// See: https://en.wikipedia.org/wiki/Mahalanobis_distance
double mahalan... | Distance measure between two multivariate normal distributions (with differing mean and covariances)
In the end I went for the Bhattacharyya distance. I adapted the R code referenced here:
// In the following, Vec3 and Mat3 are C++ Eigen types.
/// See: https://en.wikipedia.org/wiki/Mahalanobis_dist |
48,463 | How to Interpret Interaction Between Two Categorical Variables | Your interpretation is correct. This is another way to interpret these terms:
If the person is male but not white, the wage is increased by $\beta_2$ (or decreased if $\beta_2$ is negative).
If the person is not male but is white, the wage is increased by $\beta_3$.
If the person is male and white, the wage is increased ... | How to Interpret Interaction Between Two Categorical Variables | Your interpretation is correct. This is another way to interpret these terms:
If the person is male but not white, the wage is increased by $\beta_2$ (or decreased if $\beta_2$ is negative).
If the pers | How to Interpret Interaction Between Two Categorical Variables
Your interpretation is correct. This is another way to interpret these terms:
If the person is male but not white, the wage is increased by $\beta_2$ (or decreased if $\beta_2$ is negative).
If the person is not male but is white, the wage is increased by $\b... | How to Interpret Interaction Between Two Categorical Variables
Your interpretation is correct. This is another way to interpret these terms:
If the person is male but not white, the wage is increased by $\beta_2$ (or decreased if $\beta_2$ is negative).
If the pers |
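The four cases can be tabulated directly from fitted coefficients. The values below are made up, and b4 denotes an assumed male-by-white interaction coefficient (the answer's numbering only fixes b2 and b3):
from itertools import product

b0, b2, b3, b4 = 20.0, 2.0, 3.0, 1.5   # hypothetical: wage = b0 + b2*male + b3*white + b4*male*white

for male, white in product((0, 1), repeat=2):
    wage = b0 + b2 * male + b3 * white + b4 * male * white
    print(f"male={male} white={white}: wage={wage}")
# male=1, white=0 -> b0 + b2
# male=0, white=1 -> b0 + b3
# male=1, white=1 -> b0 + b2 + b3 + b4  (the interaction adds b4 on top of both main effects)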
48,464 | What is the difference between rate & probability? | Rate probably can mean different things, see https://en.wikipedia.org/wiki/Rate_(mathematics) for an overview, but in this context you probably mean the rate of occurrence of events in some (temporal) random process.
The rate is simply the expected number of events per some (time) unit (could also be spatial). That could... | What is the difference between rate & probability? | Rate probably can mean different things, see https://en.wikipedia.org/wiki/Rate_(mathematics) for an overview, but in this context you probably mean the rate of occurrence of events in some (temporal) ra | What is the difference between rate & probability?
Rate probably can mean different things, see https://en.wikipedia.org/wiki/Rate_(mathematics) for an overview, but in this context you probably mean the rate of occurrence of events in some (temporal) random process.
The rate is simply the expected number of events per so... | What is the difference between rate & probability?
Rate probably can mean different things, see https://en.wikipedia.org/wiki/Rate_(mathematics) for an overview, but in this context you probably mean the rate of occurrence of events in some (temporal) ra
48,465 | What is the difference between rate & probability? | Rates: The instantaneous potential for the occurrence of an event, expressed per number of patients at risk. Rates can be added and subtracted.
Probabilities: A number ranging between 0 and 1. Represents the likelihood of an event happening over a specific period of time.
Briggs, Andrew. Decision Modelling for Health E... | What is the difference between rate & probability? | Rates: The instantaneous potential for the occurrence of an event, expressed per number of patients at risk. Rates can be added and subtracted.
Probabilities: A number ranging between 0 and 1. Represe | What is the difference between rate & probability?
Rates: The instantaneous potential for the occurrence of an event, expressed per number of patients at risk. Rates can be added and subtracted.
Probabilities: A number ranging between 0 and 1. Represents the likelihood of an event happening over a specific period of ti... | What is the difference between rate & probability?
Rates: The instantaneous potential for the occurrence of an event, expressed per number of patients at risk. Rates can be added and subtracted.
Probabilities: A number ranging between 0 and 1. Represe |
48,466 | What is the difference between rate & probability? | The rate is defined as the relationship between a numerator and a denominator;
a probability is one where the numerator is a part of the denominator, for example, a/(a+b). | What is the difference between rate & probability? | The rate is defined as the relationship between a numerator and a denominator;
a probability is one where the numerator is a part of the denominator, for example, a/(a+b). | What is the difference between rate & probability?
The rate is defined as the relationship between a numerator and a denominator;
a probability is one where the numerator is a part of the denominator, for example, a/(a+b). | What is the difference between rate & probability?
The rate is defined as the relationship between a numerator and a denominator;
a probability is one where the numerator is a part of the denominator, for example, a/(a+b).
48,467 | What is the difference between rate & probability? | On a temporal frame, Probability usually refers to the expectation of occurrence of an event within a given time span (eg. 5 years), whereas Rate is provided for 1 unit of time (eg. yearly rate).
Converting one to the other goes as follows:
Rate = -ln (1 - Prob) / time
Prob = 1 - e^(-Rate * time)
Consider this example:... | What is the difference between rate & probability? | On a temporal frame, Probability usually refers to the expectation of occurrence of an event within a given time span (eg. 5 years), whereas Rate is provided for 1 unit of time (eg. yearly rate).
Conv | What is the difference between rate & probability?
On a temporal frame, Probability usually refers to the expectation of occurrence of an event within a given time span (eg. 5 years), whereas Rate is provided for 1 unit of time (eg. yearly rate).
Converting one to the other goes as follows:
Rate = -ln (1 - Prob) / time... | What is the difference between rate & probability?
On a temporal frame, Probability usually refers to the expectation of occurrence of an event within a given time span (eg. 5 years), whereas Rate is provided for 1 unit of time (eg. yearly rate).
Conv |
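The two formulas translate directly into code, with a round trip as a sanity check (a direct transcription of the expressions above; the 40%-over-5-years figure is made up):
import math

def prob_to_rate(prob, time=1.0):
    return -math.log(1 - prob) / time      # Rate = -ln(1 - Prob) / time

def rate_to_prob(rate, time=1.0):
    return 1 - math.exp(-rate * time)      # Prob = 1 - e^(-Rate * time)

p5 = 0.40                                  # probability of the event within 5 years
rate = prob_to_rate(p5, time=5)            # implied constant yearly rate, ~0.102
print(rate)
print(rate_to_prob(rate, time=5))          # round trip: back to 0.40
print(rate_to_prob(rate, time=1))          # 1-year probability ~0.097, not simply 0.40 / 5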
48,468 | How to add 95% confidence bands to a nonlinear regression model? | I think the propagate package can do what you are looking for.
require(propagate)
pred_model <- predictNLS(model, newdata=mm)
conf_model <- pred_model$summary
plot(v~S)
lines(conf_model$Prop.Mean.1 ~ S, lwd=2)
lines(conf_model$"Sim.2.5%" ~ S, lwd=1)
lines(conf_model$"Sim.97.5%" ~ S, lwd=1) | How to add 95% confidence bands to a nonlinear regression model? | I think the propagate package can do what you are looking for.
require(propagate)
pred_model <- predictNLS(model, newdata=mm)
conf_model <- pred_model$summary
plot(v~S)
lines(conf_model$Prop.Mean.1 ~ S, l | How to add 95% confidence bands to a nonlinear regression model?
I think the propagate package can do what you are looking for.
require(propagate)
pred_model <- predictNLS(model, newdata=mm)
conf_model <- pred_model$summary
plot(v~S)
lines(conf_model$Prop.Mean.1 ~ S, lwd=2)
lines(conf_model$"Sim.2.5%" ~ S, lwd=1)
lines(con... | How to add 95% confidence bands to a nonlinear regression model?
I think the propagate package can do what you are looking for.
require(propagate)
pred_model <- predictNLS(model, newdata=mm)
conf_model <- pred_model$summary
plot(v~S)
lines(conf_model$Prop.Mean.1 ~ S, l |
48,469 | Find the maximum likelihood estimator | You can do an easy check of your answer by writing the density as an exponential family. It has sufficient statistic $x^2$, which means that the maximum likelihood estimate must be $\sqrt{\frac{\sum_i x_i^2}{n}}$ as you found. | Find the maximum likelihood estimator | You can do an easy check of your answer by writing the density as an exponential family. It has sufficient statistic $x^2$, which means that the maximum likelihood estimate must be $\sqrt{\frac{\sum_ | Find the maximum likelihood estimator
You can do an easy check of your answer by writing the density as an exponential family. It has sufficient statistic $x^2$, which means that the maximum likelihood estimate must be $\sqrt{\frac{\sum_i x_i^2}{n}}$ as you found. | Find the maximum likelihood estimator
You can do an easy check of your answer by writing the density as an exponential family. It has sufficient statistic $x^2$, which means that the maximum likelihood estimate must be $\sqrt{\frac{\sum_ |
48,470 | Where does the delta method's name come from? | The name "Delta" is from the symbol $\Delta$ for "change" which is used in limit expressions like let $\Delta X_i\rightarrow 0$, where $\Delta X_i=X_{i+1}-X_i$, and also $\Delta$ or lower case "δ" refers to an inexact, non-zero differential equation (i.e., before limits are taken) which reduces in the limits to a diffe... | Where does the delta method's name come from? | The name "Delta" is from the symbol $\Delta$ for "change" which is used in limit expressions like let $\Delta X_i\rightarrow 0$, where $\Delta X_i=X_{i+1}-X_i$, and also $\Delta$ or lower case "δ" ref | Where does the delta method's name come from?
The name "Delta" is from the symbol $\Delta$ for "change" which is used in limit expressions like let $\Delta X_i\rightarrow 0$, where $\Delta X_i=X_{i+1}-X_i$, and also $\Delta$ or lower case "δ" refers to an inexact, non-zero differential equation (i.e., before limits are... | Where does the delta method's name come from?
The name "Delta" is from the symbol $\Delta$ for "change" which is used in limit expressions like let $\Delta X_i\rightarrow 0$, where $\Delta X_i=X_{i+1}-X_i$, and also $\Delta$ or lower case "δ" ref |
48,471 | solid line from a local average series | What you want is a mean preserving interpolation. John D'Errico has the exact solution you're looking for, written in MATLAB however.
https://www.mathworks.com/matlabcentral/newsreader/view_thread/31378
function [y,spl]=mean_series(ymeans,n,EndConditions)
% mean_series: cubic spline resampling of series in x
% (n times... | solid line from a local average series | What you want is a mean preserving interpolation. John D'Errico has the exact solution you're looking for, written in MATLAB however.
https://www.mathworks.com/matlabcentral/newsreader/view_thread/313 | solid line from a local average series
What you want is a mean preserving interpolation. John D'Errico has the exact solution you're looking for, written in MATLAB however.
https://www.mathworks.com/matlabcentral/newsreader/view_thread/31378
function [y,spl]=mean_series(ymeans,n,EndConditions)
% mean_series: cubic spli... | solid line from a local average series
What you want is a mean preserving interpolation. John D'Errico has the exact solution you're looking for, written in MATLAB however.
https://www.mathworks.com/matlabcentral/newsreader/view_thread/313 |
48,472 | solid line from a local average series | I think there are a number of methods that suit what you're looking for - some options are:
1. kernel smoothing
2. spline smoothing
3. moving average
There are many more, but I think those three are good candidates. Here is a very nice visual explanation of several smoothing techniques.
The problem is that technical... | solid line from a local average series | I think there are a number of methods that suit what you're looking for - some options are:
1. kernel smoothing
2. spline smoothing
3. moving average
There are many more, but I think those three ar | solid line from a local average series
I think there are a number of methods that suit what you're looking for - some options are:
1. kernel smoothing
2. spline smoothing
3. moving average
There are many more, but I think those three are good candidates. Here is a very nice visual explanation of several smoothing te... | solid line from a local average series
I think there are a number of methods that suit what you're looking for - some options are:
1. kernel smoothing
2. spline smoothing
3. moving average
There are many more, but I think those three ar |
48,473 | Time series forecasting using Gaussian Process regression | Answering them in reverse order:
2) let K be the sum (or product, but I think in your case sum) of the two kernel functions. That is, one with each period.
1) you want to minimise the negative log-likelihood as explained in section 5.4.1 of GPML (link here). | Time series forecasting using Gaussian Process regression | Answering them in reverse order:
2) let K be the sum (or product, but I think in your case sum) of the two kernel functions. That is, one with each period.
1) you want to minimise the negative log-lik | Time series forecasting using Gaussian Process regression
Answering them in reverse order:
2) let K be the sum (or product, but I think in your case sum) of the two kernel functions. That is, one with each period.
1) you want to minimise the negative log-likelihood as explained in section 5.4.1 of GPML (link here). | Time series forecasting using Gaussian Process regression
Answering them in reverse order:
2) let K be the sum (or product, but I think in your case sum) of the two kernel functions. That is, one with each period.
1) you want to minimise the negative log-lik
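A sketch of point 2) in scikit-learn terms, where kernel objects can literally be summed and fitting maximises the log marginal likelihood (equivalently, minimises its negative). The two periods, the noise term, and the toy series are assumptions for the example.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

rng = np.random.default_rng(6)
t = np.linspace(0, 60, 200)[:, None]
y = (np.sin(2 * np.pi * t[:, 0] / 7) + 0.5 * np.sin(2 * np.pi * t[:, 0] / 30)
     + rng.normal(scale=0.1, size=200))

# K = periodic(7) + periodic(30): one kernel per period, summed as suggested.
kernel = (ExpSineSquared(periodicity=7.0) + ExpSineSquared(periodicity=30.0)
          + WhiteKernel(noise_level=0.01))
gp = GaussianProcessRegressor(kernel=kernel).fit(t, y)
print(gp.kernel_)   # hyperparameters after the (negative) log-likelihood optimisation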
48,474 | What is the moment generating function of the generalized (multivariate) chi-square distribution? | I will build on my answer from here: https://math.stackexchange.com/questions/442472/sum-of-squares-of-dependent-gaussian-random-variables/442916#442916 and use notation from there. First I will look at the case without the linear and constant term, then we will see how to take them into account.
So let $Q(X)=X^T A... | What is the moment generating function of the generalized (multivariate) chi-square distribution? | I will build on my answer from here: https://math.stackexchange.com/questions/442472/sum-of-squares-of-dependent-gaussian-random-variables/442916#442916 and use notation from there. First I will l | What is the moment generating function of the generalized (multivariate) chi-square distribution?
I will build on my answer from here: https://math.stackexchange.com/questions/442472/sum-of-squares-of-dependent-gaussian-random-variables/442916#442916 and use notation from there. First I will look at the case withou... | What is the moment generating function of the generalized (multivariate) chi-square distribution?
I will build on my answer from here: https://math.stackexchange.com/questions/442472/sum-of-squares-of-dependent-gaussian-random-variables/442916#442916 and use notation from there. First I will l |
48,475 | Different p-values for Chi squared and Fisher's Exact R | You should not run two different tests. Choose your test first, before you run any tests and preferably before you examine your data values (though it would be permissible to consider the marginal totals).
By looking at both p-values before you decide which to use, you are (quite rightly) open to charges of p-hacking. ... | Different p-values for Chi squared and Fisher's Exact R | You should not run two different tests. Choose your test first, before you run any tests and preferably before you examine your data values (though it would be permissible to consider the marginal tot | Different p-values for Chi squared and Fisher's Exact R
You should not run two different tests. Choose your test first, before you run any tests and preferably before you examine your data values (though it would be permissible to consider the marginal totals).
By looking at both p-values before you decide which to use... | Different p-values for Chi squared and Fisher's Exact R
You should not run two different tests. Choose your test first, before you run any tests and preferably before you examine your data values (though it would be permissible to consider the marginal tot |
48,476 | Different p-values for Chi squared and Fisher's Exact R | This is not surprising since the tests have different statistical bases. The Fisher Exact test is a randomization test computed assuming both the row and column marginals are fixed (which they very rarely are) and is very conservative when they are not. The Chi Squared test is an approximation but works well in practic... | Different p-values for Chi squared and Fisher's Exact R | This is not surprising since the tests have different statistical bases. The Fisher Exact test is a randomization test computed assuming both the row and column marginals are fixed (which they very ra | Different p-values for Chi squared and Fisher's Exact R
This is not surprising since the tests have different statistical bases. The Fisher Exact test is a randomization test computed assuming both the row and column marginals are fixed (which they very rarely are) and is very conservative when they are not. The Chi Sq... | Different p-values for Chi squared and Fisher's Exact R
This is not surprising since the tests have different statistical bases. The Fisher Exact test is a randomization test computed assuming both the row and column marginals are fixed (which they very ra |
48,477 | How should I be learning deep learning? | Deep learning is quite a broad topic. I cannot say that you cannot learn everything (human capabilities are unlimited), but working in every field is a little difficult.
As far as I can see you have grasped all the basic concepts, so it's time to choose a field where you want to apply your knowledge or do more research... | How should I be learning deep learning? | Deep learning is quite a broad topic. I cannot say that you cannot learn everything (human capabilities are unlimited), but working in every field is a little difficult.
As far as I can see you have | How should I be learning deep learning?
Deep learning is quite a broad topic. I cannot say that you cannot learn everything (human capabilities are unlimited), but working in every field is a little difficult.
As far as I can see you have grasped all the basic concepts, so it's time to choose a field where you want to ... | How should I be learning deep learning?
Deep learning is quite a broad topic. I cannot say that you cannot learn everything (human capabilities are unlimited), but working in every field is a little difficult.
As far as I can see you have
48,478 | How should I be learning deep learning? | A good place to start is this book, you can download it online.
Quick recap and starting points:
If you're looking for image processing, CNN are a great choice and it seems you already played with it.
For speech recognition - you can take a look at RNN (recurrent neural networks). Basically you can use them for images... | How should I be learning deep learning? | A good place to start is this book, you can download it online.
Quick recap and starting points:
If you're looking for image processing, CNN are a great choice and it seems you already played with it | How should I be learning deep learning?
A good place to start is this book, you can download it online.
Quick recap and starting points:
If you're looking for image processing, CNN are a great choice and it seems you already played with it.
For speech recognition - you can take a look at RNN (recurrent neural networks... | How should I be learning deep learning?
A good place to start is this book, you can download it online.
Quick recap and starting points:
If you're looking for image processing, CNN are a great choice and it seems you already played with it |
48,479 | Why do we interpret neural networks as graphical models? | As a compilation of my comments on the question:
The definition of a graphical model is: "a probabilistic model for which a graph expresses the conditional dependence structure between random variables." As we can draw a dependency graph to represent a NN, it falls into this category of "graphical models".
About the qu... | Why do we interpret neural networks as graphical models? | As a compilation of my comments on the question:
The definition of a graphical model is: "a probabilistic model for which a graph expresses the conditional dependence structure between random variable | Why do we interpret neural networks as graphical models?
As a compilation of my comments on the question:
The definition of a graphical model is: "a probabilistic model for which a graph expresses the conditional dependence structure between random variables." As we can draw a dependency graph to represent a NN, it fal... | Why do we interpret neural networks as graphical models?
As a compilation of my comments on the question:
The definition of a graphical model is: "a probabilistic model for which a graph expresses the conditional dependence structure between random variable |
48,480 | Weighted Least squares, why not use $\frac{1}{e_i^2}$ as weights? | It would be too noisy to estimate weights as squared residuals. Consider this: you're estimating n weights using n observations. It's one observation per parameter. It's like estimating the variance of a population from a sample of size one.
So, instead you observe that the variance seems to increase linearly with X, ... | Weighted Least squares, why not use $\frac{1}{e_i^2}$ as weights? | It would be too noisy to estimate weights as squared residuals. Consider this: you're estimating n weights using n observations. It's one observation per parameter. It's like estimating the variance o | Weighted Least squares, why not use $\frac{1}{e_i^2}$ as weights?
It would be too noisy to estimate weights as squared residuals. Consider this: you're estimating n weights using n observations. It's one observation per parameter. It's like estimating the variance of a population from a sample of size one.
So, instead... | Weighted Least squares, why not use $\frac{1}{e_i^2}$ as weights?
It would be too noisy to estimate weights as squared residuals. Consider this: you're estimating n weights using n observations. It's one observation per parameter. It's like estimating the variance o |
48,481 | Weighted Least squares, why not use $\frac{1}{e_i^2}$ as weights? | I suspect the issue here is that the weighting $w_i = 1/e_i^2$ would make the regression estimates insufficiently sensitive to the response variable, since it would almost be tantamount to treating each deviation as having unit magnitude.
To see what I mean, suppose you let $e_i$ be the residuals from the first model f... | Weighted Least squares, why not use $\frac{1}{e_i^2}$ as weights? | I suspect the issue here is that the weighting $w_i = 1/e_i^2$ would make the regression estimates insufficiently sensitive to the response variable, since it would almost be tantamount to treating ea | Weighted Least squares, why not use $\frac{1}{e_i^2}$ as weights?
I suspect the issue here is that the weighting $w_i = 1/e_i^2$ would make the regression estimates insufficiently sensitive to the response variable, since it would almost be tantamount to treating each deviation as having unit magnitude.
To see what I m... | Weighted Least squares, why not use $\frac{1}{e_i^2}$ as weights?
I suspect the issue here is that the weighting $w_i = 1/e_i^2$ would make the regression estimates insufficiently sensitive to the response variable, since it would almost be tantamount to treating ea |
48,482 | What differentiates the wilcoxon test from t test regarding ordinal variables? | The short answer is that you can always use either test in place of the other--but typically they will produce different results. That demonstrates the issue is not one of applicability, but suitability. The rest of this answer discusses what "suitability" might amount to.
S. S. Stevens' original (but often misunder... | What differentiates the wilcoxon test from t test regarding ordinal variables? | The short answer is that you can always use either test in place of the other--but typically they will produce different results. That demonstrates the issue is not one of applicability, but suitabil | What differentiates the wilcoxon test from t test regarding ordinal variables?
The short answer is that you can always use either test in place of the other--but typically they will produce different results. That demonstrates the issue is not one of applicability, but suitability. The rest of this answer discusses w... | What differentiates the wilcoxon test from t test regarding ordinal variables?
The short answer is that you can always use either test in place of the other--but typically they will produce different results. That demonstrates the issue is not one of applicability, but suitabil |
48,483 | The effect of temperature in temperature sampling | Note that we start with a set of probabilities which sum to 1. We define a function ($f(p)$ where the $i$th probability component $f_\tau(p)_i=\frac{p_i^{1/\tau}}{\sum_j p_j^{1/\tau}}$) in order to modify those probabilities as a function of temperature (for which the original probabilities have temperature $\tau=1$). ... | The effect of temperature in temperature sampling | Note that we start with a set of probabilities which sum to 1. We define a function ($f(p)$ where the $i$th probability component $f_\tau(p)_i=\frac{p_i^{1/\tau}}{\sum_j p_j^{1/\tau}}$) in order to mo | The effect of temperature in temperature sampling
Note that we start with a set of probabilities which sum to 1. We define a function ($f(p)$ where the $i$th probability component $f_\tau(p)_i=\frac{p_i^{1/\tau}}{\sum_j p_j^{1/\tau}}$) in order to modify those probabilities as a function of temperature (for which the o... | The effect of temperature in temperature sampling
Note that we start with a set of probabilities which sum to 1. We define a function ($f(p)$ where the $i$th probability component $f_\tau(p)_i=\frac{p_i^{1/\tau}}{\sum_j p_j^{1/\tau}}$) in order to mo |
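The reweighting function itself is only a few lines (a direct transcription of $f_\tau$ above):
import numpy as np

def temperature_scale(p, tau):
    # f_tau(p)_i = p_i^(1/tau) / sum_j p_j^(1/tau)
    q = np.asarray(p) ** (1.0 / tau)
    return q / q.sum()

p = np.array([0.6, 0.3, 0.1])
print(temperature_scale(p, 0.5))   # tau < 1 sharpens the distribution toward its mode
print(temperature_scale(p, 1.0))   # tau = 1 returns p unchanged
print(temperature_scale(p, 5.0))   # tau > 1 flattens it toward uniform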
48,484 | Understanding oscillating behaviour when using Q-learning on cart-pole problem | I don't believe your features can work. Disregarding your encoding of the action, you are simply using a linear function of the state to learn the value. The state given by OpenAI gym are positions and velocity. The ideal value function (V) would be symmetric around 0 for theta which a linear function cannot represent.... | Understanding oscillating behaviour when using Q-learning on cart-pole problem | I don't believe your features can work. Disregarding your encoding of the action, you are simply using a linear function of the state to learn the value. The state given by OpenAI gym are positions an | Understanding oscillating behaviour when using Q-learning on cart-pole problem
I don't believe your features can work. Disregarding your encoding of the action, you are simply using a linear function of the state to learn the value. The state given by OpenAI gym are positions and velocity. The ideal value function (V) ... | Understanding oscillating behaviour when using Q-learning on cart-pole problem
I don't believe your features can work. Disregarding your encoding of the action, you are simply using a linear function of the state to learn the value. The state given by OpenAI gym are positions an |
48,485 | Understanding oscillating behaviour when using Q-learning on cart-pole problem | I want to clarify that in cartpole (and in gym's cartpole in particular), it is definitely possible to succeed with a linear Q function approximator. For example, taking s = [cart_pos, cart_vel, pole_pos, pole_vel] as in gym, try:
Q(s,0) = -s[3] - 3*s[2]
Q(s,1) = s[3] + 3*s[2]
That will balance the pole for the requ... | Understanding oscillating behaviour when using Q-learning on cart-pole problem | I want to clarify that in cartpole (and in gym's cartpole in particular), it is definitely possible to succeed with a linear Q function approximator. For example, taking s = [cart_pos, cart_vel, pole | Understanding oscillating behaviour when using Q-learning on cart-pole problem
I want to clarify that in cartpole (and in gym's cartpole in particular), it is definitely possible to succeed with a linear Q function approximator. For example, taking s = [cart_pos, cart_vel, pole_pos, pole_vel] as in gym, try:
Q(s,0) =... | Understanding oscillating behaviour when using Q-learning on cart-pole problem
I want to clarify that in cartpole (and in gym's cartpole in particular), it is definitely possible to succeed with a linear Q function approximator. For example, taking s = [cart_pos, cart_vel, pole |
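A sketch of that hand-coded policy in a classic-gym-style loop. Gym's reset/step return signatures changed across versions, so treat the environment plumbing below as an assumption; the policy itself is just the argmax of the two Q values above.
import gym

def action_from_linear_q(s):
    # Q(s,0) = -s[3] - 3*s[2];  Q(s,1) = s[3] + 3*s[2]  ->  act greedily
    return 0 if (-s[3] - 3 * s[2]) > (s[3] + 3 * s[2]) else 1

env = gym.make("CartPole-v1")
s = env.reset()                   # classic API: reset() returns the observation directly
total, done = 0, False
while not done:
    s, reward, done, info = env.step(action_from_linear_q(s))
    total += reward
print(total)                      # the pole stays balanced for the full episode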
48,486 | How do lotteries ensure that the drawings are sufficiently random, and what are some approaches to find possible vulnerabilities? | For the German lottery, it is said that the lottery balls with one numeral are labeled with 15 levels of paint; while balls with two numerals are labeled with 12 levels in order to make sure the balls have nearly equal weight.
Also in contrast to the early years, the numbers are not drawn by hand anymore in most lotter... | How do lotteries ensure that the drawings are sufficiently random, and what are some approaches to f | For the German lottery, it is said that the lottery balls with one numeral are labeled with 15 levels of paint; while balls with two numerals are labeled with 12 levels in order to make sure the balls | How do lotteries ensure that the drawings are sufficiently random, and what are some approaches to find possible vulnerabilities?
For the German lottery, it is said that the lottery balls with one numeral are labeled with 15 levels of paint; while balls with two numerals are labeled with 12 levels in order to make sure... | How do lotteries ensure that the drawings are sufficiently random, and what are some approaches to f
For the German lottery, it is said that the lottery balls with one numeral are labeled with 15 levels of paint; while balls with two numerals are labeled with 12 levels in order to make sure the balls |
48,487 | How do lotteries ensure that the drawings are sufficiently random, and what are some approaches to find possible vulnerabilities? | It really depends on the meaning you are giving to the word "random". The wikipedia definition of randomness is
Randomness is the lack of pattern or predictability in events. A random sequence
of events, symbols or steps has no order and does not follow an intelligible
pattern or combination.
The problem with this ... | How do lotteries ensure that the drawings are sufficiently random, and what are some approaches to f | It really depends on the meaning you are giving to the word "random". The wikipedia definition of randomness is
Randomness is the lack of pattern or predictability in events. A random sequence
of even | How do lotteries ensure that the drawings are sufficiently random, and what are some approaches to find possible vulnerabilities?
It really depends on the meaning you are giving to the word "random". The wikipedia definition of randomness is
Randomness is the lack of pattern or predictability in events. A random sequen... | How do lotteries ensure that the drawings are sufficiently random, and what are some approaches to f
It really depends on the meaning you are giving to the word "random". The wikipedia definition of randomness is
Randomness is the lack of pattern or predictability in events. A random sequence
of even |
48,488 | Intuitive explanation of desirable properties (Unbiasedness, Consistency, Efficiency) of statistical estimators? | Unbiasedness means that under the assumptions regarding the population distribution the estimator in repeated sampling will equal the population parameter on average. This is a nice property for the theory of minimum variance unbiased estimators. However, I think unbiasedness is overemphasized. The mean square error i... | Intuitive explanation of desirable properties (Unbiasedness, Consistency, Efficiency) of statistical | Unbiasedness means that under the assumptions regarding the population distribution the estimator in repeated sampling will equal the population parameter on average. This is a nice property for the t | Intuitive explanation of desirable properties (Unbiasedness, Consistency, Efficiency) of statistical estimators?
Unbiasedness means that under the assumptions regarding the population distribution the estimator in repeated sampling will equal the population parameter on average. This is a nice property for the theory o... | Intuitive explanation of desirable properties (Unbiasedness, Consistency, Efficiency) of statistical
Unbiasedness means that under the assumptions regarding the population distribution the estimator in repeated sampling will equal the population parameter on average. This is a nice property for the t |
48,489 | Intuitive explanation of desirable properties (Unbiasedness, Consistency, Efficiency) of statistical estimators? | Think of firing at a target. If you consistently hit the target too low you have a bias. If your arrows are closely grouped you have an efficient estimate. You might be interested in or amused by Maurice Kendall's poem on the subject http://www.columbia.edu/~to166/hiawatha.html | Intuitive explanation of desirable properties (Unbiasedness, Consistency, Efficiency) of statistical | Think of firing at a target. If you consistently hit the target too low you have a bias. If your arrows are closely grouped you have an efficient estimate. You might be interested in or amused by Maur | Intuitive explanation of desirable properties (Unbiasedness, Consistency, Efficiency) of statistical estimators?
Think of firing at a target. If you consistently hit the target too low you have a bias. If your arrows are closely grouped you have an efficient estimate. You might be interested in or amused by Maurice Ken... | Intuitive explanation of desirable properties (Unbiasedness, Consistency, Efficiency) of statistical
Think of firing at a target. If you consistently hit the target too low you have a bias. If your arrows are closely grouped you have an efficient estimate. You might be interested in or amused by Maur |
48,490 | textbook example of KL Divergence [duplicate] | An enlightening example is its use in Stochastic Neighborhood Embedding devised by Hinton and Roweis.
Essentially the authors are trying to represent data on a two or three dimensional manifold so that the data can be visually represented (similar in aim as PCA, for instance). The difference is that rather than preser... | textbook example of KL Divergence [duplicate] | An enlightening example is its use in Stochastic Neighborhood Embedding devised by Hinton and Roweis.
Essentially the authors are trying to represent data on a two or three dimensional manifold so th | textbook example of KL Divergence [duplicate]
An enlightening example is its use in Stochastic Neighborhood Embedding devised by Hinton and Roweis.
Essentially the authors are trying to represent data on a two or three dimensional manifold so that the data can be visually represented (similar in aim as PCA, for instan... | textbook example of KL Divergence [duplicate]
An enlightening example is its use in Stochastic Neighborhood Embedding devised by Hinton and Roweis.
Essentially the authors are trying to represent data on a two or three dimensional manifold so th |
48,491 | textbook example of KL Divergence [duplicate] | The K-L distance is also called relative entropy
Books on Information Theory where it is discussed
Elements of Information Theory, Second Edition by Thomas Cover and Joy Thomas, Wiley 2006.
Information Theory and Statistics by Solomon Kullback, Dover paperback 1997. A reprint of an earlier book (Wiley 1959).
Statistic... | textbook example of KL Divergence [duplicate] | The K-L distance is also called relative entropy
Books on Information Theory where it is discussed
Elements of Information Theory, Second Edition by Thomas Cover and Joy Thomas, Wiley 2006.
Informati | textbook example of KL Divergence [duplicate]
The K-L distance is also called relative entropy
Books on Information Theory where it is discussed
Elements of Information Theory, Second Edition by Thomas Cover and Joy Thomas, Wiley 2006.
Information Theory and Statistics by Solomon Kullback, Dover paperback 1997. A repr... | textbook example of KL Divergence [duplicate]
The K-L distance is also called relative entropy
Books on Information Theory where it is discussed
Elements of Information Theory, Second Edition by Thomas Cover and Joy Thomas, Wiley 2006.
Informati |
48,492 | Difference between forecasting accuracy and forecasting error? | I love your quote:
He was told to evaluate the whole supply chain demand with this metric but cannot explain why.
You are completely correct that truncating "accuracy" makes no sense. It throws information away for no good reason. Much better to either accept negative "accuracy" or deal with the MAPE directly, and a...
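A minimal sketch of the point, assuming the common supply-chain convention "accuracy" $= 1 -$ MAPE (all numbers invented):

```python
# One badly over-forecast period pushes MAPE above 1, so "accuracy" = 1 - MAPE
# goes negative; truncating it to zero discards exactly that information.
import numpy as np

actual   = np.array([100.0, 80.0, 10.0])
forecast = np.array([110.0, 70.0, 45.0])  # last period off by 350%

ape = np.abs(forecast - actual) / np.abs(actual)  # [0.10, 0.125, 3.50]
mape = ape.mean()                                 # ~1.242
print(f"'accuracy' = 1 - MAPE = {1 - mape:.3f}")  # ~ -0.242
print(f"truncated version     = {max(1 - mape, 0.0):.3f}")  # 0.000
```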
48,493 | Difference between forecasting accuracy and forecasting error? | Actually, this is described in the link you provided:
Error above 100% implies a zero forecast accuracy or a very inaccurate forecast. [...]
What is the impact of Large Forecast Errors?
Is Negative accuracy meaningful? Regardless of huge errors, and errors much higher than 100% of the Actuals or Forecast, we inte...
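A made-up two-forecast example of the quoted "error above 100%" case: truncation at zero maps very different misses to the same score.

```python
# Both forecasts overshoot by more than 100%, so both truncate to zero
# "accuracy", even though one miss is far worse than the other.
for actual, forecast in [(10, 25), (10, 100)]:
    ape = abs(actual - forecast) / actual  # 1.5 (150%) and 9.0 (900%)
    raw = 1 - ape                          # -0.5 and -8.0
    print(actual, forecast, f"raw = {raw:+.1f}", f"truncated = {max(raw, 0.0):.1f}")
```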
48,494 | Can we prove Weierstrass Approximation using Strong Law of Large Numbers? | See also this question; the proof is sketched in the related comments by @cardinal.
Without loss of generality we can assume that the interval is $[0,\,1]$. Consider the following Bernstein polynomial
$$ B_n(x) := \sum_{k=0}^n f(k/n) \binom{n}{k} x^k (1 - x)^{n-k} $$
which will provide an approximation of $f(x)$:...
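A minimal numerical sketch (my own, not from the linked comments) of the uniform convergence of $B_n$ for a continuous but non-smooth $f$:

```python
# Evaluate the Bernstein polynomial B_n on a grid and watch the sup-norm
# error against f shrink as n grows.
import numpy as np
from math import comb

def bernstein(f, n, x):
    k = np.arange(n + 1)
    coeffs = np.array([comb(n, j) for j in k], dtype=float)
    x = np.atleast_1d(np.asarray(x, float))
    # sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k), for each grid point x
    basis = coeffs * x[:, None] ** k * (1 - x[:, None]) ** (n - k)
    return basis @ f(k / n)

f = lambda t: np.abs(t - 0.5)  # continuous on [0, 1] but not differentiable
grid = np.linspace(0.0, 1.0, 1001)
for n in (10, 50, 200):
    err = np.max(np.abs(bernstein(f, n, grid) - f(grid)))
    print(f"n = {n:3d}: sup-norm error = {err:.4f}")  # decreases with n
```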
48,495 | Can we prove Weierstrass Approximation using Strong Law of Large Numbers? | Weierstrass Approximation Theorem: Suppose $f$ is a continuous real-valued function defined on the real interval $[a,b]$. For every $\varepsilon > 0$ there exists a polynomial $p$ such that for all $x \in [a, b]$ we have $|f(x) - p(x)| < \varepsilon$ (or equivalently, the supremum norm $\|f - p\| < \varepsilon$).
The not...
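The probabilistic bridge behind this thread (standard textbook material, stated here because the argument above is truncated): if $S_n \sim \mathrm{Binomial}(n, x)$, the Bernstein polynomial from the previous answer is exactly an expectation,
$$B_n(x) = \mathbb{E}\left[f\!\left(\frac{S_n}{n}\right)\right], \qquad \frac{S_n}{n} \xrightarrow{\text{a.s.}} x \quad (n \to \infty)$$
by the strong law of large numbers, so continuity of $f$ (uniform continuity on $[0,1]$, for the uniform bound) yields $B_n(x) \to f(x)$.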
48,496 | Can we prove Weierstrass Approximation using Strong Law of Large Numbers? | If $S_n$ satisfies the strong law, then it satisfies the weak la...
48,497 | Simulating the impact of non-IID data on a model | I have data that is non-IID, and I want to estimate if the dependence is bad enough that it will have a noticeable effect on a fitted classifier. I don't think the exact model type will matter in this case, but for argument's sake let's say I'm using elastic-net logistic regression.
On the importance of the i.i.d. ass...
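Since the question names elastic-net logistic regression, a minimal simulation sketch (all data-generating choices are mine) of one practical way to estimate the impact: compare cross-validated accuracy under random splits versus group-aware splits; a large gap suggests the within-group dependence matters.

```python
# Rows are correlated within groups; random K-fold CV puts rows from the same
# group in both train and test folds, which inflates the estimate.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n_groups, per_group = 60, 10
groups = np.repeat(np.arange(n_groups), per_group)

group_fx = rng.normal(0.0, 2.0, size=(n_groups, 3))[groups]  # shared within group
X = group_fx + rng.normal(0.0, 1.0, size=(n_groups * per_group, 3))
y = (rng.normal(0.0, 1.0, size=n_groups)[groups] + 0.5 * X[:, 0] > 0).astype(int)

clf = LogisticRegression(penalty="elasticnet", l1_ratio=0.5, solver="saga",
                         max_iter=5000)
naive   = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
grouped = cross_val_score(clf, X, y, cv=GroupKFold(5), groups=groups)
print(f"random-split CV accuracy: {naive.mean():.3f}")
print(f"group-split CV accuracy : {grouped.mean():.3f}")  # typically lower
```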
48,498 | Evaluating models by Log-loss, AUC, and Accuracy | A very non-mathematical intuition:
A has a higher accuracy than B, but a lower log-loss: it means A is shy, i.e. its probabilities tend to be closer to 0.5 than to 0/1. B is bolder, i.e. its probabilities are closer to 0/1, but it makes more mistakes than A.
A has a higher accuracy than B, but a lower AUROC: it means A is "be...
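A made-up numeric sketch of the shy-versus-bold intuition (labels and probabilities invented; metrics from scikit-learn):

```python
# The bold model's single confident mistake dominates its log-loss, while the
# shy model's hedged-but-correct predictions keep its log-loss moderate.
import numpy as np
from sklearn.metrics import accuracy_score, log_loss, roc_auc_score

y = np.array([1, 1, 1, 0, 0, 0])
shy  = np.array([0.60, 0.55, 0.52, 0.40, 0.45, 0.35])  # hedges near 0.5
bold = np.array([0.95, 0.99, 0.90, 0.05, 0.10, 0.97])  # confident, one bad miss

for name, p in [("shy", shy), ("bold", bold)]:
    print(f"{name:>4}: accuracy = {accuracy_score(y, p >= 0.5):.2f}, "
          f"log-loss = {log_loss(y, p):.2f}, AUC = {roc_auc_score(y, p):.2f}")
```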
48,499 | How to normalize a similarity matrix? | Assuming it's composed solely of positive values, and if your diagonal isn't already composed solely of ones, do:
$$A_{ij} := \frac{A_{ij}}{\sqrt{A_{jj} \cdot A_{ii}}}$$
This is analogous to the transformation from a covariance matrix to a correlation matrix, i.e. the diagonal becomes one and the off-diagonal entries are rescaled.
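A minimal numeric sketch of the transformation (example matrix invented):

```python
# Divide each entry by the geometric mean of its two diagonal entries,
# exactly as a covariance matrix is rescaled into a correlation matrix.
import numpy as np

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 9.0, 3.0],
              [1.0, 3.0, 1.0]])  # symmetric similarity matrix, positive values

d = np.sqrt(np.diag(A))
A_norm = A / np.outer(d, d)
print(A_norm)  # diagonal is exactly 1; off-diagonal entries are rescaled
```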
48,500 | Are Neural Nets viable to extract Date patterns in a text | This is technically possible, but there would be several issues you would run into, for example:
What would be your output? You could use a soft-max and then do classification for the days and months. However, classification on the year number would limit you to a specific time range and might lead to thousands of un...
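A hedged toy sketch (my own design, not the answer's) of the output-head issue just described: per-field softmax heads are straightforward for day and month, but a softmax over years hard-codes a finite range.

```python
# Day and month have small fixed vocabularies; a year classifier must
# enumerate a range up front, which is the limitation noted above.
import torch
import torch.nn as nn

class DateHead(nn.Module):
    def __init__(self, hidden=128, year_min=1900, year_max=2100):
        super().__init__()
        self.day = nn.Linear(hidden, 31)                        # days 1..31
        self.month = nn.Linear(hidden, 12)                      # months 1..12
        self.year = nn.Linear(hidden, year_max - year_min + 1)  # fixed range!

    def forward(self, h):
        return self.day(h), self.month(h), self.year(h)  # per-field logits

h = torch.randn(1, 128)  # stand-in for an encoder's representation of the text
day_logits, month_logits, year_logits = DateHead()(h)
print(day_logits.shape, month_logits.shape, year_logits.shape)
```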