7,401 | Proof of convergence of k-means

To add something: Whether the algorithm converges or not also depends on your stopping criterion. If you stop the algorithm once the cluster assignments do not change any more, then you can actually prove that the algorithm does not necessarily converge (provided that the cluster assignment does not have a deterministic ti...
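The role of that stopping criterion can be made concrete. Below is a minimal pure-Python sketch of Lloyd's algorithm (hypothetical 1-D data and starting centers) that stops exactly when the assignments stop changing:

```python
def kmeans(points, centers, max_iter=100):
    """Lloyd's algorithm, stopping when cluster assignments no longer change."""
    assign = None
    for _ in range(max_iter):
        # Assignment step: each point goes to its nearest center.
        new_assign = [
            min(range(len(centers)), key=lambda k: (p - centers[k]) ** 2)
            for p in points
        ]
        if new_assign == assign:  # the stopping criterion discussed above
            break
        assign = new_assign
        # Update step: each center moves to the mean of its assigned points.
        for k in range(len(centers)):
            members = [p for p, a in zip(points, assign) if a == k]
            if members:
                centers[k] = sum(members) / len(members)
    return centers, assign

# Toy 1-D example (made-up data): two well-separated groups.
centers, assign = kmeans([1.0, 1.5, 8.0, 8.5], [0.0, 10.0])
# centers -> [1.25, 8.25], assign -> [0, 0, 1, 1]
```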
7,402 | In survival analysis, why do we use semi-parametric models (Cox proportional hazards) instead of fully parametric models?

If you know the parametric distribution that your data follows then using a maximum likelihood approach and the distribution makes sense. The real advantage of Cox Proportional Hazards regression is that you can still fit survival models without knowing (or assuming) the distribution. You give an example using the no...
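The distribution-free part of the Cox model is its partial likelihood, which depends only on the ordering of event times and never on a baseline hazard. A minimal sketch (pure Python, one covariate, no tied times, made-up data):

```python
import math

def neg_log_partial_likelihood(beta, times, events, x):
    """Cox negative log partial likelihood for one covariate (no tied times).
    Only the order of event times enters -- no baseline hazard is specified,
    which is exactly why the model is semi-parametric."""
    nll = 0.0
    for i, (t_i, d_i) in enumerate(zip(times, events)):
        if not d_i:  # censored subjects contribute only via risk sets
            continue
        risk = [j for j, t_j in enumerate(times) if t_j >= t_i]
        nll -= beta * x[i] - math.log(sum(math.exp(beta * x[j]) for j in risk))
    return nll

# Hypothetical data: times, event indicators (1 = event, 0 = censored), covariate.
times = [2.0, 3.0, 5.0, 7.0]
events = [1, 1, 0, 1]
x = [0.0, 1.0, 0.0, 1.0]
# At beta = 0 every subject at risk is equally likely to fail next,
# so the NLL reduces to the sum of log(risk-set sizes): log 4 + log 3 + log 1.
nll0 = neg_log_partial_likelihood(0.0, times, events, x)
```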
7,403 | In survival analysis, why do we use semi-parametric models (Cox proportional hazards) instead of fully parametric models?

"We" don't necessarily. Survival analysis tools range from the fully non-parametric, like the Kaplan-Meier method, to fully parametric models where you specify the distribution of the underlying hazard. Each has its advantages and disadvantages.
Semi-parametric methods, like the Cox proportional hazards...
7,404 | Overfitting a logistic regression model

Yes, you can overfit logistic regression models. But first, I'd like to address the point about the AUC (Area Under the Receiver Operating Characteristic Curve):
There are no universal rules of thumb with the AUC, ever ever ever.
The AUC is the probability that a randomly sampled positive (or case) will have a...
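That probabilistic reading of the AUC can be checked directly: it equals the fraction of positive/negative pairs in which the positive is ranked higher, with ties counting one half. A small sketch on toy scores (hypothetical numbers):

```python
def auc(scores, labels):
    """AUC computed literally as the probability that a randomly sampled
    positive outscores a randomly sampled negative (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores: one mis-ranked pair out of 2 * 2 = 4 -> AUC = 3/4.
a = auc([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0])
# a -> 0.75
```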
7,405 | Overfitting a logistic regression model

You can overfit with any method, even if you fit the whole population (if the population is finite).
There are two general solutions to the problem:
penalized maximum likelihood estimation (ridge regression, elastic net, lasso, etc.) and
the use of informative priors with a Bayesian model.
When $Y$ has limited in...
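A minimal pure-Python sketch of the first solution: the logistic log-likelihood with an L2 (ridge) penalty, fit by gradient ascent on hypothetical one-covariate data. The penalty shrinks the coefficient toward zero; the data, learning rate, and penalty strength below are all made up for illustration.

```python
import math

def fit_logistic(xs, ys, lam, lr=0.1, iters=2000):
    """One-covariate logistic regression (no intercept) with L2 penalty lam:
    maximizes sum of log-likelihoods minus lam/2 * b^2 by gradient ascent."""
    b = 0.0
    for _ in range(iters):
        # Log-likelihood gradient sum((y - p) * x) minus penalty gradient lam * b.
        grad = sum((y - 1 / (1 + math.exp(-b * x))) * x for x, y in zip(xs, ys))
        b += lr * (grad - lam * b)
    return b

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 1, 0, 1, 1]          # hypothetical, not perfectly separated
b_mle = fit_logistic(xs, ys, lam=0.0)    # unpenalized fit
b_ridge = fit_logistic(xs, ys, lam=5.0)  # penalized fit: shrunk toward 0
```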
7,406 | Overfitting a logistic regression model

In simple words:
an overfitted logistic regression model has large variance, meaning the decision boundary changes greatly for a small change in variable magnitude. Consider the following image: the rightmost one is an overfitted logistic model; its decision boundary has a large number of ups and downs, while the middle one is just fit it...
7,407 | Overfitting a logistic regression model

Is there any model, logistic regression aside, that it is not possible to overfit?
Overfitting arises fundamentally because you fit to a sample and not the whole population. Artifacts of your sample can seem like features of the population when they are not, and hence overfitting hurts.
It is akin to a question of ...
7,408 | Overfitting a logistic regression model

What we do with the ROC to check for overfitting is to randomly separate the dataset into training and validation sets and compare the AUC between those groups. If the AUC is "much" (there is also no rule of thumb) bigger in training, then there might be overfitting.
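A toy illustration of that check, with hand-made data and a deliberately overfit "model" (a 1-nearest-neighbour memorizer): the training AUC is perfect while the held-out AUC is not, which is the gap the answer describes.

```python
def auc(scores, labels):
    """AUC as the probability a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def score_1nn(x, train_x, train_y):
    """A memorizing 'model': score = label of the nearest training point."""
    return train_y[min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))]

# Hypothetical random split into training and validation halves.
train_x, train_y = [0.0, 1.0, 2.0, 3.0], [0, 1, 0, 1]
valid_x, valid_y = [0.4, 1.4, 2.4, 2.6], [0, 0, 1, 1]

train_auc = auc([score_1nn(x, train_x, train_y) for x in train_x], train_y)
valid_auc = auc([score_1nn(x, train_x, train_y) for x in valid_x], valid_y)
# train_auc -> 1.0 (memorized), valid_auc -> 0.5 (no better than chance here)
```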
7,409 | Why is a sample covariance matrix singular when sample size is less than number of variables?

Some facts about matrix ranks, offered without proof (but proofs of all or almost all of them should be either given in standard linear algebra texts, or in some cases set as exercises after giving enough information to be able to do so):
If $A$ and $B$ are two conformable matrices, then:
(i) column rank of $A$ = row ...
7,410 | Why is a sample covariance matrix singular when sample size is less than number of variables?

The short answer to your question is that $\operatorname{rank}(S) \le n - 1$. So if $p > n$, then $S$ is singular.
For a more detailed answer, recall that the (unbiased) sample covariance matrix can be written as
$$
S = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^T.
$$
Effectively, we are summing $n$ matrices, each havin...
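The rank bound is easy to verify numerically. A quick NumPy check with synthetic data, $n = 4$ observations on $p = 6$ variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 6                        # fewer observations than variables
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)        # p x p unbiased sample covariance, as above
rank = np.linalg.matrix_rank(S)
# rank <= n - 1 = 3, so the 6 x 6 matrix S is singular.
```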
7,411 | Why is a sample covariance matrix singular when sample size is less than number of variables?

When you look at the situation the right way, the conclusion is intuitively obvious and immediate.
This post offers two demonstrations. The first, immediately below, is in words. It is equivalent to a simple drawing, appearing at the very end. In between is an explanation of what the words and the drawing mean.
The...
7,412 | Why is a sample covariance matrix singular when sample size is less than number of variables?

The conclusion can be made even slightly more general: the sample covariance matrix is singular as long as $n \color{red}{\leq} p$. The key of the following proof, which is also contained in this answer, is to express the sample covariance matrix $S$ in terms of a nice product form of the data matrix $X$ and an i...
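One standard product form of this kind uses the centering matrix $C = I - \frac{1}{n}\mathbf{1}\mathbf{1}^T$, with $S = \frac{1}{n-1} X^T C X$ for an $n \times p$ data matrix $X$; $C$ is symmetric, idempotent, and has rank $n-1$ (it annihilates the all-ones direction), which is where the rank bound comes from. A quick NumPy check with an illustrative $n$:

```python
import numpy as np

n = 5
C = np.eye(n) - np.ones((n, n)) / n   # centering matrix
idempotent = np.allclose(C @ C, C)    # C is a projection
rank_C = np.linalg.matrix_rank(C)     # projects onto an (n-1)-dim subspace
```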
7,413 | What is the difference between generalized estimating equations and GLMM?

In terms of the interpretation of the coefficients, there is a difference in the binary case (among others). What differs between GEE and GLMM is the target of inference: population-average or subject-specific.
Let's consider a simple made-up example related to yours. You want to model the failure rate between boys...
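For random-intercept logistic models there is a widely cited approximation (Zeger, Liang & Albert, 1988) relating the two targets: the population-average coefficient is an attenuated version of the subject-specific one. The numbers below are purely illustrative:

```python
import math

def marginal_from_conditional(beta_ss, sigma_b):
    """Approximate population-average (GEE-style) logit coefficient implied by
    a subject-specific (GLMM) coefficient beta_ss with random-intercept SD
    sigma_b: beta_pa ~= beta_ss / sqrt(1 + c^2 * sigma_b^2),
    with c = 16*sqrt(3) / (15*pi)."""
    c = 16 * math.sqrt(3) / (15 * math.pi)
    return beta_ss / math.sqrt(1 + c**2 * sigma_b**2)

# Illustrative values: conditional log-odds ratio 1.0, random-intercept SD 2.0.
beta_pa = marginal_from_conditional(beta_ss=1.0, sigma_b=2.0)
# beta_pa -> about 0.648: the marginal effect is smaller in magnitude.
```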
7,414 | Propensity score matching - What is the problem?

It's true that there are not only other ways of performing matching but also ways of adjusting for confounding using just the treatment and potential confounders (e.g., weighting, with or without propensity scores). Here I'll just mention the documented problems with propensity score (PS) matching. Matching, in general...
7,415 | Propensity score matching - What is the problem?

@Noah's answer is superb and qualifies as a mini review article. To me, the severe problems with PS matching are topped off by (1) it does not represent reproducible research in that the choice of the matching algorithm is too much up in the air and most matching algorithms give different results depending on how you ...
7,416 | Propensity score matching - What is the problem?

A special case where propensity score matching alone may produce biased estimates is pre/post or difference-in-differences analysis.
When matching on the continuous or integer count outcome variable $Y_{pre}$ in the baseline period (e.g., total healthcare expenditures or number of inpatient visits during the 12 month p...
7,417 | Building an autoencoder in Tensorflow to surpass PCA

Here is the key figure from the 2006 Science paper by Hinton and Salakhutdinov:
It shows dimensionality reduction of the MNIST dataset ($28\times 28$ black and white images of single digits) from the original 784 dimensions to two.
Let's try to reproduce it. I will not be using Tensorflow directly, because it's much e...
7,418 | Building an autoencoder in Tensorflow to surpass PCA

Huge props to @amoeba for making this great example. I just want to show that the auto-encoder training and reconstruction procedure described in that post can also be done in R with similar ease. The auto-encoder below is set up so it emulates amoeba's example as closely as possible - same optimiser and overall architect...
7,419 | Building an autoencoder in Tensorflow to surpass PCA

Here is my jupyter notebook where I try to replicate your result, with the following differences:
instead of using tensorflow directly, I use it via keras
leaky relu instead of relu to avoid saturation (i.e. encoded output being 0)
this might be a reason for poor performance of the AE
autoencoder input is data scaled ...
7,420 | Interpretation of simple predictions to odds ratios in logistic regression

It seems self-evident to me that
$$
\exp(\beta_0 + \beta_1x) \neq \frac{\exp(\beta_0 + \beta_1x)}{1+\exp(\beta_0 + \beta_1x)}
$$
unless $\exp(\beta_0 + \beta_1x)=0$. So, I'm less clear about what the confusion might be. What I can say is that the left hand side (LHS) of the (not) equals sign is the odds of being under...
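The two sides of that inequality are the two standard readings of a fitted linear predictor: exponentiating gives the odds, the logistic transform gives the probability, and the two are linked by odds = p / (1 - p). A numeric sketch with made-up coefficients:

```python
import math

b0, b1, x = -1.0, 0.8, 2.0
log_odds = b0 + b1 * x             # the linear predictor: here 0.6
odds = math.exp(log_odds)          # LHS above: the odds of the outcome
prob = odds / (1 + odds)           # RHS above: the probability of the outcome
back = prob / (1 - prob)           # odds recovered from the probability
# odds != prob in general; they agree only in the limit of tiny odds.
```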
7,421 | Interpretation of simple predictions to odds ratios in logistic regression

Odds ratio OR=Exp(b) translates to Probability A = SQRT(OR)/(SQRT(OR)+1), where Probability A is the probability of event A and OR is the ratio of happening event A/not happening event A (or exposed/not exposed by insurance as in the question above).
It took me quite a while to solve; I'm not sure why that is not well-known f...
7,422 | How do I use the SVD in collaborative filtering?

However: With pure vanilla SVD you might have problems recreating the original matrix, let alone predicting values for missing items. The useful rule of thumb in this area is calculating the average rating per movie and subtracting this average for each user/movie combination, that is, subtracting the movie bias from each u...
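A minimal NumPy sketch of that bias-subtraction idea on a small, fully observed, made-up ratings matrix: subtract per-movie means, take a truncated SVD of the centered matrix, and add the means back when predicting.

```python
import numpy as np

# Hypothetical ratings matrix (users x movies), fully observed for simplicity.
R = np.array([[5.0, 3.0, 1.0],
              [4.0, 2.0, 1.0],
              [5.0, 4.0, 2.0],
              [3.0, 1.0, 1.0]])

movie_mean = R.mean(axis=0)            # the per-movie "bias"
U, s, Vt = np.linalg.svd(R - movie_mean, full_matrices=False)

k = 1                                  # keep only the top singular direction
R_hat = U[:, :k] * s[:k] @ Vt[:k] + movie_mean   # prediction: bias + low-rank part
```

By Eckart-Young, the rank-k truncation is the best low-rank fit to the centered matrix, so `R_hat` cannot do worse (in Frobenius norm) than predicting the movie means alone.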
7,423 | How do I use the SVD in collaborative filtering?

I would like to offer a dissenting opinion:
Missing Edges as Missing Values
In a collaborative filtering problem, the connections that do not exist (user $i$ has not rated item $j$, person $x$ has not friended person $y$) are generally treated as missing values to be predicted, rather than as zeros. That is, if user $i...
7,424 | How do I use the SVD in collaborative filtering?

The reason no one tells you what to do with it is because if you know what SVD does, then it is a bit obvious what to do with it :-).
Since your rows and columns are the same set, I will explain this through a different matrix A. Let the matrix A be such that rows are the users and the columns are the items that the us...
7,425 | How do I use the SVD in collaborative filtering?

This is to try and answer the "how to" part of the question for those who want to practically implement sparse-SVD recommendations or inspect source code for the details. You can use off-the-shelf FOSS software to model sparse-SVD. For example, vowpal wabbit, libFM, or redsvd.
vowpal wabbit has 3 implementations of ...
7,426 | How do I use the SVD in collaborative filtering?

I would say that the name SVD is misleading.
In fact, the SVD method in recommender systems doesn't directly use SVD factorization. Instead, it uses stochastic gradient descent to train the biases and factor vectors.
The details of the SVD and SVD++ algorithms for recommender systems can be found in Sections 5.3.1 and 5...
7,427 | In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set? | Let's say you've trained your Naive Bayes Classifier on 2 classes, "Ham" and "Spam" (i.e. it classifies emails). For the sake of simplicity, we'll assume prior probabilities to be 50/50.
Now let's say you have an email $(w_1, w_2,...,w_n)$ which your classifier rates very highly as "Ham", say $$P(Ham|w_1,w_2,...w_n) =...
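A minimal numeric sketch of the failure mode (the word counts and vocabulary size below are invented, not from any real corpus): without smoothing, a single word that never appeared in the "Ham" training data drives the whole product of likelihoods to zero, no matter how Ham-like the rest of the email is.

```python
# Toy word counts for the "Ham" class; VOCAB is an assumed vocabulary size
ham_counts = {"meeting": 4, "lunch": 3, "report": 3}
VOCAB = 5

def class_score(words, counts, alpha):
    # product of per-word likelihoods with add-alpha smoothing
    total = sum(counts.values())
    score = 1.0
    for w in words:
        score *= (counts.get(w, 0) + alpha) / (total + alpha * VOCAB)
    return score

email = ["meeting", "lunch", "winner"]          # "winner" never occurs in Ham
unsmoothed_ham = class_score(email, ham_counts, alpha=0)  # one zero kills it
smoothed_ham = class_score(email, ham_counts, alpha=1)    # stays positive
```

With `alpha=0` the score is exactly 0 because of the single unseen word; with `alpha=1` it is small but positive, so the class can still compete.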
7,428 | In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set? | You always need this 'fail-safe' probability.
To see why, consider the worst case where none of the words in the training sample appear in the test sentence. In this case, under your model we would conclude that the sentence is impossible, but it clearly exists, creating a contradiction.
Another extreme example is the t...
7,429 | In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set? | This question is rather simple if you are familiar with Bayes estimators, since it is the direct conclusion of the Bayes estimator.
In the Bayesian approach, parameters are considered to be a quantity whose variation can be described by a probability distribution (or prior distribution).
So, if we view the procedure of pi...
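This view can be made concrete: under a uniform Dirichlet (add-one) prior, the posterior mean of a word's probability is exactly the Laplace-smoothed estimate, with the unsmoothed MLE as the prior-free special case. A sketch with made-up counts:

```python
def mle(count, total):
    # maximum likelihood estimate: zero probability for unseen words
    return count / total

def posterior_mean(count, total, vocab_size):
    # posterior mean under a uniform Dirichlet prior = add-one smoothing
    return (count + 1) / (total + vocab_size)
```

For an unseen word in a hypothetical class with 100 tokens over a 10-word vocabulary, `mle(0, 100)` is 0 while `posterior_mean(0, 100, 10)` is 1/110, and the smoothed probabilities still sum to 1 over the vocabulary.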
7,430 | In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set? | Disregarding those words is another way to handle it. It corresponds to averaging (integrating out) over all missing variables. So the result is different. How?
Assuming the notation used here:
$$
P(C^{*}|d) = \arg\max_{C} \frac{\prod_{i}p(t_{i}|C)P(C)}{P(d)} \propto \arg\max_{C} \prod_{i}p(t_{i}|C)P(C)
$$
where $t_{i}$ ...
7,431 | In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set? | You want to know why we bother with smoothing at all in a Naive Bayes classifier (when we can throw away the unknown features instead).
The answer to your question is: not all words have to be unknown in all classes.
Say there are two classes M and N with features A, B and C, as follows:
M: A=3, B=1, C=0
(In the class...
7,432 | In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set? | I also came across the same problem while studying Naive Bayes.
As I understand it, whenever we encounter a test example which we hadn't come across during training, our posterior probability will become 0.
So by adding the 1, even if we never train on a particular feature/class, the posterior probability will never be...
7,433 | In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set? | Matt, you are correct: you raise a very good point - yes, Laplace smoothing is quite frankly nonsense! Simply throwing away those features can be a valid approach, particularly when the denominator is also a small number - there is simply not enough evidence to support the probability estimation.
I have a strong ave...
7,434 | In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set? | You may not have enough data for the task, and hence the estimate would not be accurate or the model would overfit the training data; for example, we may end up with a black swan problem. There is no black swan in our training examples, but that doesn't mean no black swan exists in the world. We can just add a p...
7,435 | Why is AUC higher for a classifier that is less accurate than for one that is more accurate? | Improper scoring rules such as proportion classified correctly, sensitivity, and specificity are not only arbitrary (in choice of threshold) but are improper, i.e., they have the property that maximizing them leads to a bogus model, inaccurate predictions, and selecting the wrong features. It is good that they disagre...
7,436 | Why is AUC higher for a classifier that is less accurate than for one that is more accurate? | Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy?
Accuracy is computed at the threshold value of 0.5, while AUC is computed by aggregating the "accuracies" computed for all the possible threshold values. ROC can be seen as an average (expected value) of those accuracies when ar...
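A toy calculation makes the disagreement concrete (labels and scores invented): model B ranks every positive above every negative, so its AUC is 1, yet all its scores fall below 0.5, so at that threshold it predicts "negative" for everything and its accuracy is only 0.5.

```python
def accuracy_at_half(y, scores):
    # fraction correct when predicting class 1 iff score >= 0.5
    return sum((s >= 0.5) == bool(t) for t, s in zip(y, scores)) / len(y)

def auc(y, scores):
    # probability a random positive outranks a random negative (ties count 1/2)
    pos = [s for t, s in zip(y, scores) if t]
    neg = [s for t, s in zip(y, scores) if not t]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [0, 0, 1, 1]                    # invented labels
scores_a = [0.2, 0.6, 0.55, 0.9]    # model A: accuracy 0.75, AUC 0.75
scores_b = [0.1, 0.2, 0.3, 0.4]     # model B: accuracy 0.50, AUC 1.00
```

Model B is "less accurate" only because the 0.5 cut-off is in the wrong place for its score scale; its ranking, which is all AUC measures, is perfect.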
7,437 | Why is AUC higher for a classifier that is less accurate than for one that is more accurate? | Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy?
First, although the cut-off (0.5) is the same, it is not comparable at all between A and B. In fact, it looks pretty different from your histograms! Look at B: all your predictions are < 0.5.
Second, why is B so accurate? Beca...
7,438 | Why are the weights of RNN/LSTM networks shared across time? | The accepted answer focuses on the practical side of the question: it would require a lot of resources if the parameters are not shared. However, the decision to share parameters in an RNN was made when any serious computation was a problem (1980s according to wiki), so I believe it wasn't the main argument (th...
7,439 | Why are the weights of RNN/LSTM networks shared across time? | The 'shared weights' perspective comes from thinking about RNNs as feedforward networks unrolled across time. If the weights were different at each moment in time, this would just be a feedforward network. But, I suppose another way to think about it would be as an RNN whose weights are a time-varying function (and tha...
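The unrolled view can be sketched with a one-unit RNN (toy weights, no training): the same two weights are reused at every time step, so the parameter count is independent of sequence length.

```python
import math

def rnn_final_state(xs, w_in, w_rec, h0=0.0):
    # unrolled recurrence: the SAME two weights are applied at every time step
    h = h0
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
    return h

# two parameters handle sequences of any length (weights are invented)
short = rnn_final_state([1.0, -1.0, 0.5], w_in=0.8, w_rec=0.3)
long_ = rnn_final_state([1.0, -1.0, 0.5, 0.2, 0.2, 0.2], w_in=0.8, w_rec=0.3)
```

If the weights differed per step, the loop body would need a fresh `(w_in, w_rec)` pair for each iteration and the model could no longer be applied to sequences longer than it was defined for.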
7,440 | Why are the weights of RNN/LSTM networks shared across time? | I think since RNNs with hidden-to-hidden recurrences (and time-shared weights) are equivalent to Universal Turing Machines, letting them have different weights for different time steps does not make them more powerful.
7,441 | Why are the weights of RNN/LSTM networks shared across time? | I am trying hard to visualize how weight sharing, combined with recurrence and word embeddings, behaves in a high-dimensional space.
Taking the example from @Maxim and visualizing a network that suggests the next word in the sequence:
"On Monday it was" when accumulated using recurrence will be a point in a...
7,442 | Why are the weights of RNN/LSTM networks shared across time? | An RNN is a time-based neural network. At the end of the time steps (the length of the input) it forms a vector which represents a thought, preserving sequence information across time. Thinking of the thought vector as some sort of figure or object might help: it gets its proper shape (depending on the input sequence) thr...
7,443 | How to tell the difference between linear and non-linear regression models? | There are (at least) three senses in which a regression can be considered "linear." To distinguish them, let's start with an extremely general regression model
$$Y = f(X,\theta,\varepsilon).$$
To keep the discussion simple, take the independent variables $X$ to be fixed and accurately measured (rather than random vari...
7,444 | How to tell the difference between linear and non-linear regression models? | A model is linear if it is linear in parameters or can be transformed to be linear in parameters (linearizable). Linear models can model linear or non-linear relationships. Let's expand on each of these.
A model is linear in parameters if it can be written as the sum of terms, where each term is either a constant or a ...
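A quick illustration of "linear in parameters" with invented data: the model y = a + b·x² is non-linear in x but linear in a and b, so ordinary least squares on the transformed feature z = x² recovers the coefficients in closed form.

```python
xs = [-2, -1, 0, 1, 2]                 # invented data, exactly quadratic
ys = [1 + 3 * x ** 2 for x in xs]      # true model: y = 1 + 3*x^2

zs = [x ** 2 for x in xs]              # transformed feature: fit is linear in (a, b)
z_bar = sum(zs) / len(zs)
y_bar = sum(ys) / len(ys)
b = sum((z - z_bar) * (y - y_bar) for z, y in zip(zs, ys)) \
    / sum((z - z_bar) ** 2 for z in zs)   # closed-form OLS slope
a = y_bar - b * z_bar                     # closed-form OLS intercept
```

Because the data are exactly quadratic, the fit recovers a = 1 and b = 3; no iterative non-linear optimizer is needed, which is the practical payoff of linearity in the parameters.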
7,445 | How to tell the difference between linear and non-linear regression models? | You should start right now by distinguishing between reality and the model you're using to describe it.
The equation you just mentioned is a polynomial equation (x^power), i.e. non-linear ... but you can still model it using a generalized linear model (using a link function) or polynomial regression, since the parame...
7,446 | Why is the Expectation Maximization algorithm guaranteed to converge to a local optimum? | EM is not guaranteed to converge to a local minimum. It is only guaranteed to converge to a point with zero gradient with respect to the parameters. So it can indeed get stuck at saddle points.
7,447 | Why is the Expectation Maximization algorithm guaranteed to converge to a local optimum? | First of all, it is possible that EM converges to a local min, a local max, or a saddle point of the likelihood function. More precisely, as Tom Minka pointed out, EM is guaranteed to converge to a point with zero gradient.
I can think of two ways to see this; the first view is pure intuition, and the second view is t...
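The "likelihood never decreases" property is easy to check empirically. Below is a minimal EM sketch for a two-component Gaussian mixture (equal weights and unit variances are assumed; the data and starting means are invented): the log-likelihood trace is non-decreasing at every iteration, but nothing in that argument forces the fixed point to be a global, or even local, maximum.

```python
import math

data = [-2.2, -1.9, -2.1, 1.8, 2.0, 2.3]          # arbitrary toy sample

def dens(x, m):                                    # N(m, 1) density
    return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)

def loglik(m1, m2):
    return sum(math.log(0.5 * dens(x, m1) + 0.5 * dens(x, m2)) for x in data)

m1, m2 = -0.5, 0.5                                 # arbitrary starting means
ll_trace = [loglik(m1, m2)]
for _ in range(25):
    # E-step: responsibility of component 1 for each point
    r = [dens(x, m1) / (dens(x, m1) + dens(x, m2)) for x in data]
    # M-step: responsibility-weighted means
    m1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
    m2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))
    ll_trace.append(loglik(m1, m2))
```

Every entry of `ll_trace` is at least the previous one (up to floating-point rounding), which is exactly the monotonicity guarantee, and no more.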
7,448 | Generating data with a given sample covariance matrix | There are two different typical situations for these kinds of problems:
i) you want to generate a sample from a given distribution whose population characteristics match the ones specified (but due to sampling variation, you don't have the sample characteristics exactly matching).
ii) you want to generate a sample whos...
7,449 | Generating data with a given sample covariance matrix | @Glen_b gave a good answer (+1), which I want to illustrate with some code.
How to generate $n$ samples from a $d$-dimensional multivariate Gaussian distribution with a given covariance matrix $\boldsymbol \Sigma$? This is easy to do by generating samples from a standard Gaussian and multiplying them by a square root o...
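For the exact-sample-covariance situation (Glen_b's case ii), one standard recipe is to center the data, whiten it with the inverse Cholesky factor of its own sample covariance, and recolor it with a Cholesky factor of the target matrix. A self-contained 2-D sketch in plain Python (the target matrix and sample size are arbitrary):

```python
import math, random

random.seed(0)
n = 200
# 1) any centered starting data (here: iid standard normals)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
means = [sum(row[j] for row in X) / n for j in (0, 1)]
X = [[row[0] - means[0], row[1] - means[1]] for row in X]

def sample_cov(data):
    m = len(data)
    c = [[0.0, 0.0], [0.0, 0.0]]
    for row in data:
        for i in (0, 1):
            for j in (0, 1):
                c[i][j] += row[i] * row[j] / (m - 1)
    return c

def chol2(S):
    # Cholesky factor of a 2x2 symmetric positive-definite matrix
    l11 = math.sqrt(S[0][0])
    l21 = S[0][1] / l11
    return [[l11, 0.0], [l21, math.sqrt(S[1][1] - l21 * l21)]]

def inv_lower2(L):
    return [[1 / L[0][0], 0.0],
            [-L[1][0] / (L[0][0] * L[1][1]), 1 / L[1][1]]]

def apply_rows(data, A):
    # map each row r to A @ r, so the sample covariance maps S -> A S A^T
    return [[A[0][0] * r[0] + A[0][1] * r[1],
             A[1][0] * r[0] + A[1][1] * r[1]] for r in data]

target = [[1.0, 0.7], [0.7, 1.0]]     # arbitrary target sample covariance
# 2) whiten with the inverse Cholesky factor of the *sample* covariance,
# 3) recolor with the Cholesky factor of the target
Y = apply_rows(apply_rows(X, inv_lower2(chol2(sample_cov(X)))), chol2(target))
```

After the transform, `sample_cov(Y)` equals the target up to floating-point rounding, whatever data you started from; generating from `multivariate_normal` alone would only match the target in expectation.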
7,450 | Generating data with a given sample covariance matrix | I am very late to the party here, but I recently had to do this in Python. Here is how you can generate a dataset with exact means and covariances/correlations:
import numpy as np
# Define a vector of means and a matrix of covariances
mean = [3, 3]
Sigma = [[1, 0.70],
         [0.70, 1]]
# Generate 100 cases
X = np.r...
7,451 | Relative importance of a set of predictors in a random forests classification in R | First I would like to clarify what the importance metric actually measures.
MeanDecreaseGini is a measure of variable importance based on the Gini impurity index used for the calculation of splits during training. A common misconception is that the variable importance metric refers to the Gini used for asserting model ...
7,452 | Relative importance of a set of predictors in a random forests classification in R | The function defined above as G=sum over classes[pi(1−pi)] is actually the entropy, which is another way of evaluating a split. The difference between the entropy in children nodes and the parent node is the Information Gain.
The GINI impurity function is G = 1- sum over classes[pi^2]. | Relative importance of a set of predictors in a random forests classification in R | The function defined above as G=sum over classes[pi(1−pi)] is actually the entropy, which is another way of evaluating a split. The difference between the entropy in children nodes and the parent node | Relative importance of a set of predictors in a random forests classification in R
The function defined above as G=sum over classes[pi(1−pi)] is actually the entropy, which is another way of evaluating a split. The difference between the entropy in children nodes and the parent node is the Information Gain.
The GINI i... | Relative importance of a set of predictors in a random forests classification in R
The function defined above as G=sum over classes[pi(1−pi)] is actually the entropy, which is another way of evaluating a split. The difference between the entropy in children nodes and the parent node |
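For reference, $\sum_i p_i(1-p_i)$ expands algebraically to $1-\sum_i p_i^2$, so the two formulas in this exchange are both the Gini impurity, while Shannon entropy is the separate quantity $-\sum_i p_i \log_2 p_i$. A minimal sketch of the two impurity measures:

```python
from math import log2

def gini(p):
    # Gini impurity: sum_i p_i*(1 - p_i), which equals 1 - sum_i p_i^2
    return 1.0 - sum(pi * pi for pi in p)

def entropy(p):
    # Shannon entropy: -sum_i p_i * log2(p_i); terms with p_i = 0 contribute nothing
    return -sum(pi * log2(pi) for pi in p if pi > 0)

p = [0.5, 0.5]
# gini(p) -> 0.5, entropy(p) -> 1.0: both peak at an even split, but on different scales
```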
7,453 | How to make a reward function in reinforcement learning? | Reward functions describe how the agent "ought" to behave. In other words, they have "normative" content, stipulating what you want the agent to accomplish. For example, some rewarding state $s$ might represent the taste of food. Or perhaps, $(s,a)$ might represent the act of tasting the food. So, to the extent that th... | How to make a reward function in reinforcement learning? | Reward functions describe how the agent "ought" to behave. In other words, they have "normative" content, stipulating what you want the agent to accomplish. For example, some rewarding state $s$ might | How to make a reward function in reinforcement learning?
Reward functions describe how the agent "ought" to behave. In other words, they have "normative" content, stipulating what you want the agent to accomplish. For example, some rewarding state $s$ might represent the taste of food. Or perhaps, $(s,a)$ might represe... | How to make a reward function in reinforcement learning?
Reward functions describe how the agent "ought" to behave. In other words, they have "normative" content, stipulating what you want the agent to accomplish. For example, some rewarding state $s$ might |
7,454 | How to make a reward function in reinforcement learning? | Designing reward functions is a hard problem indeed. Generally, sparse reward functions are easier to define (e.g., get +1 if you win the game, else 0). However, sparse rewards also slow down learning because the agent needs to take many actions before getting any reward. This problem is also known as the credit assign... | How to make a reward function in reinforcement learning? | Designing reward functions is a hard problem indeed. Generally, sparse reward functions are easier to define (e.g., get +1 if you win the game, else 0). However, sparse rewards also slow down learning | How to make a reward function in reinforcement learning?
Designing reward functions is a hard problem indeed. Generally, sparse reward functions are easier to define (e.g., get +1 if you win the game, else 0). However, sparse rewards also slow down learning because the agent needs to take many actions before getting an... | How to make a reward function in reinforcement learning?
Designing reward functions is a hard problem indeed. Generally, sparse reward functions are easier to define (e.g., get +1 if you win the game, else 0). However, sparse rewards also slow down learning |
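To make the sparse-versus-dense contrast concrete, here is a toy sketch; the game, state names, and the 0.1 shaping weight are all hypothetical:

```python
def sparse_reward(state):
    # Sparse: +1 only on winning, 0 everywhere else. Easy to define,
    # but the agent gets no signal until it stumbles onto a win.
    return 1.0 if state == "win" else 0.0

def shaped_reward(state, dist_to_goal, prev_dist):
    # Denser variant: also reward progress toward the goal, which gives
    # feedback on every step at the cost of more design choices.
    return sparse_reward(state) + 0.1 * (prev_dist - dist_to_goal)
```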
7,455 | How to perform isometric log-ratio transformation | The ILR (Isometric Log-Ratio) transformation is used in the analysis of compositional data. Any given observation is a set of positive values summing to unity, such as the proportions of chemicals in a mixture or proportions of total time spent in various activities. The sum-to-unity invariant implies that although t... | How to perform isometric log-ratio transformation | The ILR (Isometric Log-Ratio) transformation is used in the analysis of compositional data. Any given observation is a set of positive values summing to unity, such as the proportions of chemicals in | How to perform isometric log-ratio transformation
The ILR (Isometric Log-Ratio) transformation is used in the analysis of compositional data. Any given observation is a set of positive values summing to unity, such as the proportions of chemicals in a mixture or proportions of total time spent in various activities. ... | How to perform isometric log-ratio transformation
The ILR (Isometric Log-Ratio) transformation is used in the analysis of compositional data. Any given observation is a set of positive values summing to unity, such as the proportions of chemicals in |
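One concrete way to compute ILR coordinates is via the pivot balances, which form one of many equally valid orthonormal bases; a pure-Python sketch (the three-part example is hypothetical):

```python
from math import exp, log, sqrt

def ilr(x):
    # Pivot ILR coordinates of a D-part composition (strictly positive parts):
    #   z_i = sqrt(i/(i+1)) * ln( geometric_mean(x_1..x_i) / x_{i+1} ),  i = 1..D-1
    z = []
    for i in range(1, len(x)):
        g = exp(sum(log(v) for v in x[:i]) / i)    # geometric mean of the first i parts
        z.append(sqrt(i / (i + 1)) * log(g / x[i]))
    return z

hours = [8.0, 10.0, 6.0]                   # e.g. sleep / sedentary / active, summing to 24
comp = [v / sum(hours) for v in hours]     # closed to unit sum
# ILR depends only on ratios, so ilr(hours) and ilr(comp) coincide
```

The scale invariance in the last comment is exactly why the sum-to-one (or sum-to-24) constraint causes no trouble after the transformation.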
7,456 | How to perform isometric log-ratio transformation | For your use case, it is probably ok to just scale everything down to one. The fact the numbers don't add up exactly to 24 will add a little extra noise to the data, but it shouldn't mess things up that much.
As @whuber correctly stated, since we are dealing with proportions, we have to account for dependencies betwee... | How to perform isometric log-ratio transformation | For your use case, it is probably ok to just scale everything down to one. The fact the numbers don't add up exactly to 24 will add a little extra noise to the data, but it shouldn't mess things up t | How to perform isometric log-ratio transformation
For your use case, it is probably ok to just scale everything down to one. The fact the numbers don't add up exactly to 24 will add a little extra noise to the data, but it shouldn't mess things up that much.
As @whuber correctly stated, since we are dealing with propo... | How to perform isometric log-ratio transformation
For your use case, it is probably ok to just scale everything down to one. The fact the numbers don't add up exactly to 24 will add a little extra noise to the data, but it shouldn't mess things up t |
7,457 | How to perform isometric log-ratio transformation | The above posts answer the question about how to construct an ILR basis and get your ILR balances. To add to this, the choice of which basis can ease the interpretation of your results.
You may be interested in the following partition:
(1) (sleeping,sedentary|physical_activity)
(2) (sleeping|sedentary).
Sin... | How to perform isometric log-ratio transformation | The above posts answer the question about how to construct an ILR basis and get your ILR balances. To add to this, the choice of which basis can ease the interpretation of your results.
You may be int | How to perform isometric log-ratio transformation
The above posts answer the question about how to construct an ILR basis and get your ILR balances. To add to this, the choice of which basis can ease the interpretation of your results.
You may be interested in the following partition:
(1) (sleeping,sedentar... | How to perform isometric log-ratio transformation
The above posts answer the question about how to construct an ILR basis and get your ILR balances. To add to this, the choice of which basis can ease the interpretation of your results.
You may be int |
7,458 | "Frequency" value for seconds/minutes intervals data in R | The "frequency" is the number of observations per "cycle" (normally a year, but sometimes a week, a day, an hour, etc). This is the opposite of the definition of frequency in physics, or in Fourier analysis, where "period" is the length of the cycle, and "frequency" is the inverse of period. When using the ts() functio... | "Frequency" value for seconds/minutes intervals data in R | The "frequency" is the number of observations per "cycle" (normally a year, but sometimes a week, a day, an hour, etc). This is the opposite of the definition of frequency in physics, or in Fourier an | "Frequency" value for seconds/minutes intervals data in R
The "frequency" is the number of observations per "cycle" (normally a year, but sometimes a week, a day, an hour, etc). This is the opposite of the definition of frequency in physics, or in Fourier analysis, where "period" is the length of the cycle, and "freque... | "Frequency" value for seconds/minutes intervals data in R
The "frequency" is the number of observations per "cycle" (normally a year, but sometimes a week, a day, an hour, etc). This is the opposite of the definition of frequency in physics, or in Fourier an |
7,459 | Can you overfit by training machine learning algorithms using CV/Bootstrap? | There is a definitive answer to this question which is "yes, it is certainly possible to overfit a cross-validation based model selection criterion and end up with a model that generalises poorly!". In my view, this appears not to be widely appreciated, but is a substantial pitfall in the application of machine learni... | Can you overfit by training machine learning algorithms using CV/Bootstrap? | There is a definitive answer to this question which is "yes, it is certainly possible to overfit a cross-validation based model selection criterion and end up with a model that generalises poorly!". | Can you overfit by training machine learning algorithms using CV/Bootstrap?
There is a definitive answer to this question which is "yes, it is certainly possible to overfit a cross-validation based model selection criterion and end up with a model that generalises poorly!". In my view, this appears not to be widely ap... | Can you overfit by training machine learning algorithms using CV/Bootstrap?
There is a definitive answer to this question which is "yes, it is certainly possible to overfit a cross-validation based model selection criterion and end up with a model that generalises poorly!". |
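The mechanism can be seen without any real learner at all: score many skill-free candidate "models" on the same finite sample and keep the best, and the winning score is optimistically biased even though every candidate's true accuracy is exactly 0.5. This is a toy sketch of the selection bias, not a full cross-validation loop:

```python
import random

random.seed(0)
n, n_candidates = 50, 200
y = [random.randint(0, 1) for _ in range(n)]          # pure-noise labels

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

# Every candidate "model" is a random labelling, so its true skill is 0.5.
scores = [accuracy([random.randint(0, 1) for _ in range(n)], y)
          for _ in range(n_candidates)]

best = max(scores)
# best lands well above 0.5: the selection criterion itself has been overfitted
```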
7,460 | Can you overfit by training machine learning algorithms using CV/Bootstrap? | Cross validation and bootstrap have been shown to give estimates of error rate that are nearly unbiased and in some cases more accurately by the bootstrap over cross-validation. The problem with other methods like resubstitution is that by estimating error on the same data set that you fit the classifier with you can ... | Can you overfit by training machine learning algorithms using CV/Bootstrap? | Cross validation and bootstrap have been shown to give estimates of error rate that are nearly unbiased and in some cases more accurately by the bootstrap over cross-validation. The problem with othe | Can you overfit by training machine learning algorithms using CV/Bootstrap?
Cross-validation and the bootstrap have been shown to give estimates of the error rate that are nearly unbiased, with the bootstrap in some cases being more accurate than cross-validation. The problem with other methods like resubstitution is that by estimating error on the same data set that you fit the classifier with you can ... | Can you overfit by training machine learning algorithms using CV/Bootstrap?
Cross-validation and the bootstrap have been shown to give estimates of the error rate that are nearly unbiased, with the bootstrap in some cases being more accurate than cross-validation. The problem with othe
7,461 | Can you overfit by training machine learning algorithms using CV/Bootstrap? | I suspect one answer here is that, in the context of optimisation, what you are trying to find is a global minimum on a noisy cost function. So you have all the challenges of a multi-dimensional global optimistation plus a stochastic component added to the cost function.
Many of the approaches to deal with challenges ... | Can you overfit by training machine learning algorithms using CV/Bootstrap? | I suspect one answer here is that, in the context of optimisation, what you are trying to find is a global minimum on a noisy cost function. So you have all the challenges of a multi-dimensional globa | Can you overfit by training machine learning algorithms using CV/Bootstrap?
I suspect one answer here is that, in the context of optimisation, what you are trying to find is a global minimum on a noisy cost function. So you have all the challenges of a multi-dimensional global optimisation plus a stochastic component ... | Can you overfit by training machine learning algorithms using CV/Bootstrap?
I suspect one answer here is that, in the context of optimisation, what you are trying to find is a global minimum on a noisy cost function. So you have all the challenges of a multi-dimensional globa |
7,462 | Can you overfit by training machine learning algorithms using CV/Bootstrap? | It strongly depends on the algorithm, but you certainly can -- though in most cases it will be just a benign waste of effort.
The core of this problem is that this is not a strict optimization -- you don't have any $f(\mathbf{x})$ defined on some domain which simply has an extremum for at least one value of $\mathbf{x}... | Can you overfit by training machine learning algorithms using CV/Bootstrap? | It strongly depends on the algorithm, but you certainly can -- though in most cases it will be just a benign waste of effort.
The core of this problem is that this is not a strict optimization -- you | Can you overfit by training machine learning algorithms using CV/Bootstrap?
It strongly depends on the algorithm, but you certainly can -- though in most cases it will be just a benign waste of effort.
The core of this problem is that this is not a strict optimization -- you don't have any $f(\mathbf{x})$ defined on so... | Can you overfit by training machine learning algorithms using CV/Bootstrap?
It strongly depends on the algorithm, but you certainly can -- though in most cases it will be just a benign waste of effort.
The core of this problem is that this is not a strict optimization -- you |
7,463 | Can you overfit by training machine learning algorithms using CV/Bootstrap? | Yes, the parameters can be „overfitted” onto training and test set during crossvalidation or bootstrapping. However, there are some methods to prevent this.
First simple method is, you divide your dataset into 3 partitions, one for testing (~20%), one for testing optimized parameters (~20%) and one for fitting the clas... | Can you overfit by training machine learning algorithms using CV/Bootstrap? | Yes, the parameters can be „overfitted” onto training and test set during crossvalidation or bootstrapping. However, there are some methods to prevent this.
First simple method is, you divide your dat | Can you overfit by training machine learning algorithms using CV/Bootstrap?
Yes, the parameters can be „overfitted” onto training and test set during crossvalidation or bootstrapping. However, there are some methods to prevent this.
First simple method is, you divide your dataset into 3 partitions, one for testing (~20... | Can you overfit by training machine learning algorithms using CV/Bootstrap?
Yes, the parameters can be „overfitted” onto training and test set during crossvalidation or bootstrapping. However, there are some methods to prevent this.
First simple method is, you divide your dat |
7,464 | Comparing hierarchical clustering dendrograms obtained by different distances & methods | To compare the similarity of two hierarchical (tree-like) structures, measures based on the cophenetic correlation idea are used. But is it correct to compare dendrograms in order to select the "right" method or distance measure in hierarchical clustering?
There are some points - hidden snags - regarding hier... | Comparing hierarchical clustering dendrograms obtained by different distances & methods | To compare the similarity of two hierarchical (tree-like) structures, measures based on cophenetic correlation idea are used. But is it correct to perform comparison of dendrograms in order to select | Comparing hierarchical clustering dendrograms obtained by different distances & methods
To compare the similarity of two hierarchical (tree-like) structures, measures based on cophenetic correlation idea are used. But is it correct to perform comparison of dendrograms in order to select the "right" method or distance m... | Comparing hierarchical clustering dendrograms obtained by different distances & methods
To compare the similarity of two hierarchical (tree-like) structures, measures based on cophenetic correlation idea are used. But is it correct to perform comparison of dendrograms in order to select |
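The cophenetic correlation itself is just a Pearson correlation between the original pairwise distances and the dendrogram's cophenetic (merge-height) distances; a sketch with made-up distance vectors for four points:

```python
from math import sqrt

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))

# Hypothetical upper-triangle distance vectors for 4 points (6 pairs each):
original_d   = [1.0, 4.0, 5.0, 4.2, 5.1, 2.0]   # input dissimilarities
cophenetic_d = [1.0, 4.6, 4.6, 4.6, 4.6, 2.0]   # heights at which pairs first merge

coph_corr = pearson(original_d, cophenetic_d)   # near 1: dendrogram preserves distances
```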
7,465 | Raw or orthogonal polynomial regression? | I feel like several of these answers miss the point. Haitao's answer addresses the computational problems with fitting raw polynomials, but it's clear that OP is asking about the statistical differences between the two approaches. That is, if we had a perfect computer that could represent all values exactly, why would ... | Raw or orthogonal polynomial regression? | I feel like several of these answers miss the point. Haitao's answer addresses the computational problems with fitting raw polynomials, but it's clear that OP is asking about the statistical differenc | Raw or orthogonal polynomial regression?
I feel like several of these answers miss the point. Haitao's answer addresses the computational problems with fitting raw polynomials, but it's clear that OP is asking about the statistical differences between the two approaches. That is, if we had a perfect computer that could... | Raw or orthogonal polynomial regression?
I feel like several of these answers miss the point. Haitao's answer addresses the computational problems with fitting raw polynomials, but it's clear that OP is asking about the statistical differenc |
7,466 | Raw or orthogonal polynomial regression? | I believe the answer is less about numeric stability (though that plays a role) and more about reducing correlation.
In essence -- the issue boils down to the fact that when we regress against a bunch of high order polynomials, the covariates we are regressing against become highly correlated. Example code below:
x = ... | Raw or orthogonal polynomial regression? | I believe the answer is less about numeric stability (though that plays a role) and more about reducing correlation.
In essence -- the issue boils down to the fact that when we regress against a bunc | Raw or orthogonal polynomial regression?
I believe the answer is less about numeric stability (though that plays a role) and more about reducing correlation.
In essence -- the issue boils down to the fact that when we regress against a bunch of high order polynomials, the covariates we are regressing against become hi... | Raw or orthogonal polynomial regression?
I believe the answer is less about numeric stability (though that plays a role) and more about reducing correlation.
In essence -- the issue boils down to the fact that when we regress against a bunc |
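The cut-off example can be reproduced in a few lines: raw $x$ and $x^2$ are nearly collinear, and a single Gram-Schmidt step (the idea that orthogonal polynomial bases generalize) removes that correlation. Toy data and variable names are my own:

```python
from math import sqrt

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def corr(a, b):
    return cov(a, b) / sqrt(cov(a, a) * cov(b, b))

x = [float(i) for i in range(1, 11)]
x2 = [v * v for v in x]
r_raw = corr(x, x2)                          # ~0.97: x and x^2 nearly collinear

# One Gram-Schmidt step: subtract from x^2 its least-squares projection on [1, x]
beta = cov(x, x2) / cov(x, x)
x2_orth = [(b - mean(x2)) - beta * (a - mean(x)) for a, b in zip(x, x2)]
r_orth = corr(x, x2_orth)                    # ~0: the quadratic term is now uncorrelated with x
```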
7,467 | Raw or orthogonal polynomial regression? | Why can't I just do a "normal" regression to get the coefficients?
Because it is not numerically stable. Remember that the computer uses a fixed number of bits to represent a floating-point number. Check IEEE 754 for details; you may be surprised that even a simple number such as $0.4$ must be stored by the computer as $0.4000000059604644775390625$. Y... | Raw or orthogonal polynomial regression? | Why can't I just do a "normal" regression to get the coefficients?
Because it is not numerically stable. Remember that the computer uses a fixed number of bits to represent a floating-point number. Check IEEE 754 for | Raw or orthogonal polynomial regression?
Why can't I just do a "normal" regression to get the coefficients?
Because it is not numerically stable. Remember that the computer uses a fixed number of bits to represent a floating-point number. Check IEEE 754 for details; you may be surprised that even a simple number such as $0.4$ must be sto... | Raw or orthogonal polynomial regression?
Why can't I just do a "normal" regression to get the coefficients?
Because it is not numerically stable. Remember that the computer uses a fixed number of bits to represent a floating-point number. Check IEEE 754 for
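The stored value quoted above can be checked from the standard library by round-tripping 0.4 through IEEE 754 single precision:

```python
import struct
from decimal import Decimal

# Round-trip 0.4 through IEEE 754 single precision (binary32):
x32 = struct.unpack("f", struct.pack("f", 0.4))[0]
print(Decimal(x32))          # 0.4000000059604644775390625

# Double precision (Python's float) is closer, but 0.4 is still not exact:
print(Decimal(0.4) == Decimal("0.4"))   # False
```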
7,468 | Raw or orthogonal polynomial regression? | I would have just commented to mention this, but I do not have enough rep, so I'll try to expand into an answer. You might be interested to see that in Lab Section 7.8.1 in "Introduction to Statistical Learning" (James et. al., 2017, corrected 8th printing), they do discuss some differences between using orthogonal pol... | Raw or orthogonal polynomial regression? | I would have just commented to mention this, but I do not have enough rep, so I'll try to expand into an answer. You might be interested to see that in Lab Section 7.8.1 in "Introduction to Statistica | Raw or orthogonal polynomial regression?
I would have just commented to mention this, but I do not have enough rep, so I'll try to expand into an answer. You might be interested to see that in Lab Section 7.8.1 in "Introduction to Statistical Learning" (James et. al., 2017, corrected 8th printing), they do discuss some... | Raw or orthogonal polynomial regression?
I would have just commented to mention this, but I do not have enough rep, so I'll try to expand into an answer. You might be interested to see that in Lab Section 7.8.1 in "Introduction to Statistica |
7,469 | When to choose SARSA vs. Q Learning | They mostly look the same except that in SARSA we take actual action and in Q Learning we take the action with highest reward.
Actually in both you "take" the actual single generated action $a_{t+1}$ next. In Q learning, you update the estimate from the maximum estimate of possible next actions, regardless of which ac... | When to choose SARSA vs. Q Learning | They mostly look the same except that in SARSA we take actual action and in Q Learning we take the action with highest reward.
Actually in both you "take" the actual single generated action $a_{t+1}$ | When to choose SARSA vs. Q Learning
They mostly look the same except that in SARSA we take actual action and in Q Learning we take the action with highest reward.
Actually in both you "take" the actual single generated action $a_{t+1}$ next. In Q learning, you update the estimate from the maximum estimate of possible ... | When to choose SARSA vs. Q Learning
They mostly look the same except that in SARSA we take actual action and in Q Learning we take the action with highest reward.
Actually in both you "take" the actual single generated action $a_{t+1}$ |
7,470 | Interpretation of biplots in principal components analysis | PCA is one of the many ways to analyse the structure of a given correlation matrix. By construction, the first principal axis is the one which maximizes the variance (reflected by its eigenvalue) when data are projected onto a line (which stands for a direction in the $p$-dimensional space, assuming you have $p$ variab... | Interpretation of biplots in principal components analysis | PCA is one of the many ways to analyse the structure of a given correlation matrix. By construction, the first principal axis is the one which maximizes the variance (reflected by its eigenvalue) when | Interpretation of biplots in principal components analysis
PCA is one of the many ways to analyse the structure of a given correlation matrix. By construction, the first principal axis is the one which maximizes the variance (reflected by its eigenvalue) when data are projected onto a line (which stands for a direction... | Interpretation of biplots in principal components analysis
PCA is one of the many ways to analyse the structure of a given correlation matrix. By construction, the first principal axis is the one which maximizes the variance (reflected by its eigenvalue) when |
7,471 | Interpretation of biplots in principal components analysis | The plot is showing:
the score of each case (i.e., athlete) on the first two principal components
the loading of each variable (i.e., each sporting event) on the first two principal components.
The left and bottom axes are showing [normalized] principal component scores; the top and right axes are showing the loading... | Interpretation of biplots in principal components analysis | The plot is showing:
the score of each case (i.e., athlete) on the first two principal components
the loading of each variable (i.e., each sporting event) on the first two principal components.
The | Interpretation of biplots in principal components analysis
The plot is showing:
the score of each case (i.e., athlete) on the first two principal components
the loading of each variable (i.e., each sporting event) on the first two principal components.
The left and bottom axes are showing [normalized] principal compo... | Interpretation of biplots in principal components analysis
The plot is showing:
the score of each case (i.e., athlete) on the first two principal components
the loading of each variable (i.e., each sporting event) on the first two principal components.
The |
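For two variables, the scores (cases) and loadings (variables) that a biplot overlays can be computed by hand; the data below are hypothetical, and the closed-form 2x2 eigendecomposition keeps the sketch dependency-free:

```python
from math import sqrt

# Toy 2-variable data (hypothetical measurements)
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
        (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
cx = [(x - mx, y - my) for x, y in data]

# Sample covariance matrix [[a, b], [b, c]]
a = sum(u * u for u, _ in cx) / (n - 1)
c = sum(v * v for _, v in cx) / (n - 1)
b = sum(u * v for u, v in cx) / (n - 1)

# Largest eigenvalue/eigenvector of the symmetric 2x2 matrix, in closed form
lam1 = (a + c) / 2 + sqrt(((a - c) / 2) ** 2 + b ** 2)
ex, ey = b, lam1 - a                           # valid when b != 0, as here
norm = sqrt(ex * ex + ey * ey)
ex, ey = ex / norm, ey / norm                  # first principal axis

scores = [u * ex + v * ey for u, v in cx]      # case positions along PC1
loadings = (ex * sqrt(lam1), ey * sqrt(lam1))  # variable arrows for PC1
# var(scores) equals lam1: PC1 carries the maximal projected variance
```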
7,472 | Is this the state of art regression methodology? | It is well-known, at least from the late 1960', that if you take several forecasts† and average them, then the resulting aggregate forecast in many cases will outperform the individual forecasts. Bagging, boosting and stacking are all based exactly on this idea. So yes, if your aim is purely prediction then in most cas... | Is this the state of art regression methodology? | It is well-known, at least from the late 1960', that if you take several forecasts† and average them, then the resulting aggregate forecast in many cases will outperform the individual forecasts. Bagg | Is this the state of art regression methodology?
It is well-known, at least from the late 1960s, that if you take several forecasts† and average them, then the resulting aggregate forecast in many cases will outperform the individual forecasts. Bagging, boosting and stacking are all based exactly on this idea. So yes, ... | Is this the state of art regression methodology?
It is well-known, at least from the late 1960s, that if you take several forecasts† and average them, then the resulting aggregate forecast in many cases will outperform the individual forecasts. Bagg
7,473 | Is this the state of art regression methodology? | Arthur (1994) has a nice short paper/thought experiment that is well-known in the complexity literature.
One of the conclusions there is that agents cannot select better predictive models (even if they have a "forest" of these) under non-equilibrium conditions. For example, if the question is applied to stock market pe... | Is this the state of art regression methodology? | Arthur (1994) has a nice short paper/thought experiment that is well-known in the complexity literature.
One of the conclusions there is that agents cannot select better predictive models (even if the | Is this the state of art regression methodology?
Arthur (1994) has a nice short paper/thought experiment that is well-known in the complexity literature.
One of the conclusions there is that agents cannot select better predictive models (even if they have a "forest" of these) under non-equilibrium conditions. For examp... | Is this the state of art regression methodology?
Arthur (1994) has a nice short paper/thought experiment that is well-known in the complexity literature.
One of the conclusions there is that agents cannot select better predictive models (even if the |
7,474 | Diagnostics for generalized linear (mixed) models (specifically residuals) | This answer is not based on my knowledge but rather quotes what Bolker et al. (2009) wrote in an influential paper in the journal Trends in Ecology and Evolution. Since the article is not open access (although searching for it on Google scholar may prove successful, I thought I cite important passages that may be helpf... | Diagnostics for generalized linear (mixed) models (specifically residuals) | This answer is not based on my knowledge but rather quotes what Bolker et al. (2009) wrote in an influential paper in the journal Trends in Ecology and Evolution. Since the article is not open access | Diagnostics for generalized linear (mixed) models (specifically residuals)
This answer is not based on my knowledge but rather quotes what Bolker et al. (2009) wrote in an influential paper in the journal Trends in Ecology and Evolution. Since the article is not open access (although searching for it on Google scholar ... | Diagnostics for generalized linear (mixed) models (specifically residuals)
This answer is not based on my knowledge but rather quotes what Bolker et al. (2009) wrote in an influential paper in the journal Trends in Ecology and Evolution. Since the article is not open access |
7,475 | Diagnostics for generalized linear (mixed) models (specifically residuals) | This is an old question, but I thought it would be useful to add that option 4 suggested by the OP is now available in the DHARMa R package (available from CRAN, see here).
The package makes the visual residual checks suggested by the accepted answer a lot more reliable / easy.
From the package description:
The DHARM... | Diagnostics for generalized linear (mixed) models (specifically residuals) | This is an old question, but I thought it would be useful to add that option 4 suggested by the OP is now available in the DHARMa R package (available from CRAN, see here).
The package makes the visua | Diagnostics for generalized linear (mixed) models (specifically residuals)
This is an old question, but I thought it would be useful to add that option 4 suggested by the OP is now available in the DHARMa R package (available from CRAN, see here).
The package makes the visual residual checks suggested by the accepted a... | Diagnostics for generalized linear (mixed) models (specifically residuals)
This is an old question, but I thought it would be useful to add that option 4 suggested by the OP is now available in the DHARMa R package (available from CRAN, see here).
The package makes the visua |
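The core idea behind such simulation-based residual checks can be sketched in a few lines; this is a hand-rolled toy, not the DHARMa API, and the normal model and values are hypothetical:

```python
import random

random.seed(1)

def quantile_residuals(y_obs, simulate, n_sim=250):
    # For each observation, find where it falls among n_sim replicates
    # simulated from the fitted model; under a correct model these
    # quantiles are approximately uniform on (0, 1).
    out = []
    for i, y in enumerate(y_obs):
        sims = [simulate(i) for _ in range(n_sim)]
        out.append(sum(s < y for s in sims) / n_sim)
    return out

# Hypothetical "fitted model": y_i ~ Normal(mu_i, 1), with data truly drawn from it
mu = [0.5 * i for i in range(20)]
y = [random.gauss(m, 1.0) for m in mu]
res = quantile_residuals(y, lambda i: random.gauss(mu[i], 1.0))
# res scatters uniformly in (0, 1); systematic clumping near 0 or 1 flags misfit
```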
7,476 | Good sources for learning Markov chain Monte Carlo (MCMC) | For online tutorials, there are
A tutorial in MCMC, by Sahut (2000)
Tutorial on Markov Chain Monte Carlo, by Hanson (2000)
Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005)
Introduction to Markov Chain Monte Carlo simulations and their statistical analysis, by Berg (2004).
A Tutorial on Markov Chain M... | Good sources for learning Markov chain Monte Carlo (MCMC) | For online tutorials, there are
A tutorial in MCMC, by Sahut (2000)
Tutorial on Markov Chain Monte Carlo, by Hanson (2000)
Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005)
Introduct | Good sources for learning Markov chain Monte Carlo (MCMC)
For online tutorials, there are
A tutorial in MCMC, by Sahut (2000)
Tutorial on Markov Chain Monte Carlo, by Hanson (2000)
Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005)
Introduction to Markov Chain Monte Carlo simulations and their statisti... | Good sources for learning Markov chain Monte Carlo (MCMC)
For online tutorials, there are
A tutorial in MCMC, by Sahut (2000)
Tutorial on Markov Chain Monte Carlo, by Hanson (2000)
Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005)
Introduct |
7,477 | Good sources for learning Markov chain Monte Carlo (MCMC)
I haven't read it (yet), but if you're into R, there is Christian P. Robert's and George Casella's book:
Introducing Monte Carlo Methods with R (Use R)
I know of it from following his (very good) blog.
7,478 | Good sources for learning Markov chain Monte Carlo (MCMC)
Gilks W.R., Richardson S., Spiegelhalter D.J. Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC, 1996.
A relative oldie now, but still a goodie.
7,479 | Good sources for learning Markov chain Monte Carlo (MCMC)
Handbook of Markov Chain Monte Carlo, Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, eds. 2011 CRC Press.
Chapter 4, 'Inference from simulations and monitoring convergence' by Gelman and Shirley, is available online.
7,480 | Good sources for learning Markov chain Monte Carlo (MCMC)
Dani Gamerman & Hedibert F. Lopes. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (2nd ed.). Boca Raton, FL: Chapman & Hall/CRC, 2006. 344 pp. ISBN 0-412-81820-5.
-- a more recently updated book than Gilks, Richardson & Spiegelhalter. I haven't read it myself, but it was well reviewed in Technom...
7,481 | Good sources for learning Markov chain Monte Carlo (MCMC)
Another classic (to accompany the already mentioned Introducing Monte Carlo Methods with R):
Monte Carlo Statistical Methods by Robert and Casella (2004)
In the Use R! series there is also:
Introduction to Probability Simulation and Gibbs Sampling with R by Suess and Trumbo (2010)
7,482 | Good sources for learning Markov chain Monte Carlo (MCMC)
The text I have found most accessible is Bayesian Cognitive Modeling: A Practical Course. Very clear exposition. The book has great examples in BUGS, and they have been ported to Stan on its github examples page.
7,483 | Outlier Detection on skewed Distributions
Under a classical definition of an outlier as a data point outside the 1.5*IQR from the upper or lower quartile,
This is the rule for identifying points outside the ends of the whiskers in a boxplot. Tukey himself would no doubt object to calling them outliers on this basis (he didn't necessarily regard points outside...
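To make the 1.5*IQR rule concrete, here is a short Python sketch (standard library only, synthetic lognormal data, purely illustrative). On a right-skewed sample the upper fence flags a sizable number of unexceptional tail points, which is exactly the concern with applying the symmetric rule to skewed data.

```python
import random
import statistics

random.seed(7)

# Synthetic right-skewed data: lognormal(0, 1).
data = [random.lognormvariate(0.0, 1.0) for _ in range(1000)]

q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

flagged = [x for x in data if x < lower or x > upper]
print(f"{len(flagged)} of {len(data)} points fall outside the fences")
```

Note that for this positive, right-skewed sample the lower fence is negative, so it never fires; all of the "outliers" come from the perfectly ordinary long right tail.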
7,484 | Outlier Detection on skewed Distributions
I will answer your questions in the opposite order in which you asked them, so that the exposition proceeds from the specific to the general.
First, let us consider a situation where you can assume that except for a minority of outliers, the bulk of your data can be well described by a known distribution (in your cas...
7,485 | Outlier Detection on skewed Distributions
First, I'd question the definition, classical or otherwise. An "outlier" is a surprising point. Using any particular rule (even for symmetric distributions) is a flawed idea, especially nowadays when there are so many huge data sets. In a data set of (say) one million observations (not all that big, in some fields), t...
7,486 | How are the standard errors computed for the fitted values from a logistic regression?
The prediction is just a linear combination of the estimated coefficients. The coefficients are asymptotically normal, so a linear combination of those coefficients will be asymptotically normal as well. So if we can obtain the covariance matrix for the parameter estimates, we can obtain the standard error for a linea...
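A sketch of that computation with hypothetical numbers (the coefficient vector and covariance matrix below are made up; in practice they come from the fitted model): compute the standard error on the linear-predictor scale as sqrt(x'Vx), build the interval there, then map it through the inverse logit.

```python
import math

# Hypothetical logistic-regression output (stand-ins for real model estimates).
beta = [-1.2, 0.8]                 # intercept, slope
vcov = [[0.10, -0.02],             # covariance matrix of the estimates
        [-0.02, 0.04]]
x = [1.0, 2.5]                     # covariate row, including the intercept

# Linear predictor eta = x'beta and its variance x'Vx.
eta = sum(xi * bi for xi, bi in zip(x, beta))
var_eta = sum(x[i] * vcov[i][j] * x[j] for i in range(2) for j in range(2))
se_eta = math.sqrt(var_eta)

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

# 95% interval built on the link scale, then transformed; this keeps the
# interval inside (0, 1), unlike a symmetric interval on the probability scale.
lo, hi = expit(eta - 1.96 * se_eta), expit(eta + 1.96 * se_eta)
print(expit(eta), lo, hi)
```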
7,487 | Things to consider about masters programs in statistics
Here is a somewhat blunt set of general thoughts and recommendations on masters programs in statistics. I don't intend for them to be polemic, though some of them may sound like that.
I am going to assume that you are interested in a terminal masters degree to later go into industry and are not interested in potential...
7,488 | Things to consider about masters programs in statistics
I would advise to either get in the best school possible with a brand name (like MIT), or the best overall deal (e.g. a decent public school with in-state tuition). I would not waste money on second-rate private schools.
The brand-name schools pay off. The price difference between a school like MIT and second-tier schoo...
7,489 | Things to consider about masters programs in statistics
Take a look at Pharmacoepidemiology, in particular as it relates to drug safety. This is a very new area of research with lots of very interesting questions.
7,490 | Distributions other than the normal where mean and variance are independent
Note: Please read the answer by @G. Jay Kerns, and see Carlin and Lewis 1996 or your favorite probability reference for background on the calculation of mean and variance as the expected value and second moment of a random variable.
A quick scan of Appendix A in Carlin and Lewis (1996) provides the following distribution...
7,491 | Distributions other than the normal where mean and variance are independent
In fact, the answer is "no". Independence of the sample mean and variance characterizes the normal distribution. This was shown by Eugene Lukacs in "A Characterization of the Normal Distribution", The Annals of Mathematical Statistics, Vol. 13, No. 1 (Mar., 1942), pp. 91-93.
I didn't know this, but Feller, "Introduct...
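A quick simulation sketch of this characterization (standard library only, illustrative rather than a proof): across repeated samples, the sample mean and sample variance are essentially uncorrelated under a normal (a necessary consequence of independence), but clearly correlated under a skewed distribution such as the exponential.

```python
import math
import random
import statistics

random.seed(0)

def mean_var_corr(draw, n=30, reps=4000):
    """Pearson correlation between sample mean and sample variance
    over many independent samples from the same distribution."""
    means, variances = [], []
    for _ in range(reps):
        sample = [draw() for _ in range(n)]
        means.append(statistics.mean(sample))
        variances.append(statistics.variance(sample))
    mx, my = statistics.mean(means), statistics.mean(variances)
    num = sum((a - mx) * (b - my) for a, b in zip(means, variances))
    den = math.sqrt(sum((a - mx) ** 2 for a in means) *
                    sum((b - my) ** 2 for b in variances))
    return num / den

corr_normal = mean_var_corr(lambda: random.gauss(0.0, 1.0))
corr_exponential = mean_var_corr(lambda: random.expovariate(1.0))
print(corr_normal, corr_exponential)  # near 0 vs. strongly positive
```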
7,492 | How exactly is the "effectiveness" in the Moderna and Pfizer vaccine trials estimated?
Moderna
Based on the press release we can assume that there were 30 000 patients total, with 90 infections observed in the placebo group and 5 infections in the vaccinated group.
Let's assume that the vaccine group and placebo group were each of the same size, 15 000.
So, calculated on the back of an envelope, instead of...
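That back-of-envelope calculation can be sketched in Python (the equal arm sizes are an assumption, as stated above; the case counts are the ones from the press release):

```python
# Vaccine efficacy: VE = 1 - (attack rate, vaccinated) / (attack rate, placebo).
n_vaccine = n_placebo = 15_000          # assumed equal arm sizes
cases_vaccine, cases_placebo = 5, 90    # counts from the press release

rate_v = cases_vaccine / n_vaccine
rate_p = cases_placebo / n_placebo
ve = 1 - rate_v / rate_p
print(f"estimated efficacy: {ve:.1%}")  # → estimated efficacy: 94.4%
```

With equal arm sizes the denominators cancel, so the estimate reduces to 1 - 5/90 regardless of the exact enrollment split between arms.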
7,493 | What is difference between 'transfer learning' and 'domain adaptation'?
It seems that there is some disagreement between researchers on what the difference between 'transfer learning' and 'domain adaptation' is.
From {0}:
The notion of domain adaptation is closely related to transfer learning. Transfer learning is a general term that refers to a class of machine learning problems that in...
7,494 | What is difference between 'transfer learning' and 'domain adaptation'?
From Hal Daume's article [1]:
The standard classification setting is an input distribution p(X) and a label distribution p(Y|X). Domain adaptation: when p(X) changes between training and test. Transfer learning: when p(Y|X) changes between training and test.
In other words, in DA the input distribution changes...
7,495 | What is difference between 'transfer learning' and 'domain adaptation'?
Throughout the literature on transfer learning, there are a number of terminology inconsistencies. Phrases such as transfer learning and domain adaptation are used to refer to similar processes. Domain adaptation is the process of adapting one or more source domains for the means of transferring information to improve t...
7,496 | What is difference between 'transfer learning' and 'domain adaptation'?
I think that "Transfer Learning" is a more general term, and "Domain Adaptation" is a scenario of "Transfer Learning".
[1] Transferable Attention for Domain Adaptation. http://ise.thss.tsinghua.edu.cn/~mlong/doc/transferable-attention-aaai19.pdf
7,497 | What is difference between 'transfer learning' and 'domain adaptation'?
According to [1], domain adaptation is simply the NLP community's name for transfer learning: "Transfer learning in the NLP domain is sometimes referred to as domain adaptation."
[1] Pan, S. J., and Q. Yang. "A Survey on Transfer Learning." IEEE Transactions on Knowledge and Data Engineering 22, no. 10 (October 2010): 1345–59. https://doi.org/...
7,498 | What is difference between 'transfer learning' and 'domain adaptation'?
It seems wikipedia has the most concise answer:
Domain adaptation is a subcategory of transfer learning. In domain adaptation, the source and target domains all have the same feature space (but different distributions); in contrast, transfer learning includes cases where the target domain's feature space is different
...
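The p(X)-shift versus p(Y|X)-shift distinction drawn in the answers above can be made concrete with a toy sketch (entirely synthetic; a fixed 1-D threshold rule stands in for a trained classifier, and it happens to extrapolate perfectly, which is why only the p(Y|X) change hurts it here):

```python
import random

random.seed(1)

def accuracy(xs, ys, threshold=0.0):
    """Fixed source-domain rule: predict 1 iff x > threshold."""
    return sum((x > threshold) == bool(y) for x, y in zip(xs, ys)) / len(ys)

# Source domain: x ~ N(0, 1), true rule y = [x > 0].
xs = [random.gauss(0, 1) for _ in range(5000)]
ys = [int(x > 0) for x in xs]

# Domain-adaptation flavour: p(X) shifts (x ~ N(2, 1)) but p(Y|X) is unchanged.
xa = [random.gauss(2, 1) for _ in range(5000)]
ya = [int(x > 0) for x in xa]

# Transfer-learning flavour: same p(X), but p(Y|X) changes (rule becomes [x > 1]).
yt = [int(x > 1) for x in xs]

print(accuracy(xs, ys))  # 1.0 on the source
print(accuracy(xa, ya))  # the rule still holds; only the inputs moved
print(accuracy(xs, yt))  # degrades: the labeling rule itself changed
```

Real covariate shift does hurt learned models (a boundary fit on source data may extrapolate badly); the sketch only illustrates which distribution moves in each scenario.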
7,499 | Should training samples randomly drawn for mini-batch training neural nets be drawn without replacement?
A good theoretical analysis of with- and without-replacement schemas in the context of iterative algorithms based on random draws (which is how many discriminative Deep Neural Networks (DNNs) are trained) can be found here
In short, it turns out that sampling without replacement leads to faster convergence th...
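A small sketch of the practical difference between the two schemes (illustrative, standard library only): one shuffled pass touches every example exactly once, while drawing the same number of indices with replacement misses roughly a 1/e fraction of the data in that pass.

```python
import random

random.seed(0)
n = 10_000
indices = list(range(n))

# Without replacement: shuffle and walk through; each index appears exactly once.
shuffled = indices[:]
random.shuffle(shuffled)

# With replacement: n i.i.d. draws; duplicates crowd out unseen examples.
with_replacement = [random.randrange(n) for _ in range(n)]

coverage = len(set(with_replacement)) / n
print(f"coverage with replacement: {coverage:.1%}")  # close to 1 - 1/e ≈ 63.2%
```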
7,500 | Should training samples randomly drawn for mini-batch training neural nets be drawn without replacement?
According to the code in Nielsen's repository, mini-batches are drawn without replacement:
def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None):
    n = len(training_data)
    for j in range(epochs):
        # Shuffle once per epoch, then slice into consecutive mini-batches,
        # so every training example is used exactly once per epoch.
        random.shuffle(training_data)
        mini_batches = [
            training_data[k:k+mini_batch_size]
            for k in range(0, n, mini_batch_size)]
        for mini_batch in mini_batches:
            self.update_mini_batch(mini_batch, eta)