| idx (int64, 1-56k) | question (string, 15-155 chars) | answer (string, 2-29.2k chars, nullable) | question_cut (string, 15-100 chars) | answer_cut (string, 2-200 chars, nullable) | conversation (string, 47-29.3k chars) | conversation_cut (string, 47-301 chars) |
|---|---|---|---|---|---|---|
43,001 | How to compute confidence intervals from *weighted* samples? | You have a scheme of two-level sampling, first sampling the urls, then sampling from that empirical distribution over urls to detect some website property. I will assume that website property, White (W) is constant in time, so it is enough to visit each site once. So it would be best with sampling without replacement, ... | How to compute confidence intervals from *weighted* samples? | You have a scheme of two-level sampling, first sampling the urls, then sampling from that empirical distribution over urls to detect some website property. I will assume that website property, White ( | How to compute confidence intervals from *weighted* samples?
You have a scheme of two-level sampling, first sampling the urls, then sampling from that empirical distribution over urls to detect some website property. I will assume that website property, White (W) is constant in time, so it is enough to visit each site ... | How to compute confidence intervals from *weighted* samples?
You have a scheme of two-level sampling, first sampling the urls, then sampling from that empirical distribution over urls to detect some website property. I will assume that website property, White ( |
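As a concrete (and intentionally simplified) illustration of one common way to get a confidence interval from weighted samples, the sketch below combines the weighted mean with Kish's effective sample size. This is an assumption-laden shortcut, not the two-level sampling scheme the answer goes on to discuss, and the data values are made up.

```python
import math

def weighted_mean_ci(values, weights, z=1.96):
    """Normal-approximation CI for a weighted mean, using Kish's effective
    sample size to account for unequal weights. A rough sketch only."""
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    # weighted variance around the weighted mean
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, values)) / wsum
    n_eff = wsum ** 2 / sum(w * w for w in weights)   # Kish's effective n
    se = math.sqrt(var / n_eff)
    return mean - z * se, mean + z * se

lo, hi = weighted_mean_ci([0, 1, 1, 0, 1], [1.0, 2.0, 2.0, 1.0, 1.0])
```

The interval is centered on the weighted mean and widens as the weights become more unequal (smaller effective sample size).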
43,002 | Distribution of random variable with multinomial sampling distribution and parameters $(n,p)$, where $n\sim$ Poisson with truncation | At the moment your model is not clearly defined, so it is premature to seek the distribution of the count vector $X$. You are seeking to generalise from the multinomial distribution, which is the distribution of count values over categories for an underlying sequence of independent categorical random variables with a ... | Distribution of random variable with multinomial sampling distribution and parameters $(n,p)$, where | At the moment your model is not clearly defined, so it is premature to seek the distribution of the count vector $X$. You are seeking to generalise from the multinomial distribution, which is the dis | Distribution of random variable with multinomial sampling distribution and parameters $(n,p)$, where $n\sim$ Poisson with truncation
At the moment your model is not clearly defined, so it is premature to seek the distribution of the count vector $X$. You are seeking to generalise from the multinomial distribution, whi... | Distribution of random variable with multinomial sampling distribution and parameters $(n,p)$, where
At the moment your model is not clearly defined, so it is premature to seek the distribution of the count vector $X$. You are seeking to generalise from the multinomial distribution, which is the dis |
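To make the setup concrete, here is a small sketch that draws the total count $n$ from a zero-truncated Poisson and then distributes it multinomially. The rate, category probabilities, and function names are illustrative choices, not taken from the question.

```python
import math
import random

random.seed(0)

def truncated_poisson(lam, lower=1):
    """Zero-truncated Poisson via rejection sampling (fine for small lam)."""
    while True:
        # Knuth's method for a plain Poisson draw
        limit, k, p = math.exp(-lam), 0, 1.0
        while p > limit:
            k += 1
            p *= random.random()
        if k - 1 >= lower:
            return k - 1

def multinomial_with_random_n(lam, probs):
    """Draw n ~ truncated Poisson, then a multinomial count vector given n."""
    n = truncated_poisson(lam)
    counts = [0] * len(probs)
    for _ in range(n):
        u, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            # last category acts as a catch-all against float rounding
            if u < acc or i == len(probs) - 1:
                counts[i] += 1
                break
    return counts

x = multinomial_with_random_n(4.0, [0.5, 0.3, 0.2])
```

Marginally, each count is then a Poisson mixture over $n$, which is exactly why the unconditional distribution of $X$ differs from a plain multinomial.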
43,003 | Implicit Feedback Factorization Machines : Format of Input and Recommendations | Your intuition is correct: unless we have only a few tens of thousands of items, it can quickly become too expensive to generate hundreds of thousands of rankings and then sort them. What is done is commonly referred to as "candidate items selection"; estimates are generated for a subset of the available items. We aim to avoid unnec... | Implicit Feedback Factorization Machines : Format of Input and Recommendations | Your intuition is correct: unless we have only a few tens of thousands of items, it can quickly become too expensive to generate hundreds of thousands of rankings and then sort them. What is done is commonly ref | Implicit Feedback Factorization Machines : Format of Input and Recommendations
Your intuition is correct: unless we have only a few tens of thousands of items, it can quickly become too expensive to generate hundreds of thousands of rankings and then sort them. What is done is commonly referred to as "candidate items selection"; est... | Implicit Feedback Factorization Machines : Format of Input and Recommendations
Your intuition is correct: unless we have only a few tens of thousands of items, it can quickly become too expensive to generate hundreds of thousands of rankings and then sort them. What is done is commonly ref |
43,004 | Does Approximate Bayesian Computation (ABC) follow the Likelihood Principle? | The "when the likelihood function is tractable" is somewhat self-defeating, as the reason for using ABC is that it is intractable.
As for the likelihood principle, ABC is definitely not respecting it, since it requires a simulation of the data from its sampling distribution. It thus uses the frequentist properties of t... | Does Approximate Bayesian Computation (ABC) follow the Likelihood Principle? | The "when the likelihood function is tractable" is somewhat self-defeating, as the reason for using ABC is that it is intractable.
As for the likelihood principle, ABC is definitely not respecting it, | Does Approximate Bayesian Computation (ABC) follow the Likelihood Principle?
The "when the likelihood function is tractable" is somewhat self-defeating, as the reason for using ABC is that it is intractable.
As for the likelihood principle, ABC is definitely not respecting it, since it requires a simulation of the data... | Does Approximate Bayesian Computation (ABC) follow the Likelihood Principle?
The "when the likelihood function is tractable" is somewhat self-defeating, as the reason for using ABC is that it is intractable.
As for the likelihood principle, ABC is definitely not respecting it, |
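The point that ABC simulates data from the sampling distribution can be seen in a minimal rejection-ABC sketch. The Bernoulli model, uniform prior, and zero tolerance below are illustrative choices of mine, not from the answer.

```python
import random

random.seed(1)

def abc_rejection(observed_sum, n, n_draws=20000, tol=0):
    """Plain rejection ABC for Bernoulli(p) data with a uniform prior on p:
    draw p from the prior, SIMULATE data from the sampling distribution,
    and keep p when the simulated summary matches the observed one."""
    accepted = []
    for _ in range(n_draws):
        p = random.random()                               # prior draw
        sim = sum(random.random() < p for _ in range(n))  # forward simulation
        if abs(sim - observed_sum) <= tol:
            accepted.append(p)
    return accepted

post = abc_rejection(observed_sum=7, n=10)
post_mean = sum(post) / len(post)   # exact posterior here is Beta(8, 4), mean 2/3
```

The forward-simulation step is precisely what ties ABC to the sampling distribution rather than to the likelihood function alone.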
43,005 | How can we verify the intuition that in the RW-Metropolis-Hastings algorithm with Gaussian proposal too small and too large variances are bad choices | The original approach by Gareth Roberts et al. is to investigate the limiting distribution of the first coordinate process $X^{(1)}_n$, accelerated by a factor $d$. This leads to the limiting process $Z_t = X^{(1)}_{\lfloor t d \rfloor}$.
If you put $\alpha < 1/2$ (large steps), it can be shown that asymptotically no... | How can we verify the intuition that in the RW-Metropolis-Hastings algorithm with Gaussian proposal | The original approach by Gareth Roberts et al. is to investigate the limiting distribution of the first coordinate process $X^{(1)}_n$, accelerated by a factor $d$. This leads to the limiting process | How can we verify the intuition that in the RW-Metropolis-Hastings algorithm with Gaussian proposal too small and too large variances are bad choices
The original approach by Gareth Roberts et al. is to investigate the limiting distribution of the first coordinate process $X^{(1)}_n$, accelerated by a factor $d$. This ... | How can we verify the intuition that in the RW-Metropolis-Hastings algorithm with Gaussian proposal
The original approach by Gareth Roberts et al. is to investigate the limiting distribution of the first coordinate process $X^{(1)}_n$, accelerated by a factor $d$. This leads to the limiting process |
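Before any diffusion-limit argument, the intuition can be checked empirically: a toy one-dimensional random-walk Metropolis run shows that tiny steps accept almost everything (but barely move) while huge steps almost never accept. The standard-normal target, step sizes, and iteration count below are arbitrary illustrative choices.

```python
import math
import random

random.seed(2)

def rw_metropolis(sigma, n_iter=20000):
    """Random-walk Metropolis targeting a standard normal; returns acceptance rate."""
    x, accepts = 0.0, 0
    for _ in range(n_iter):
        prop = x + random.gauss(0.0, sigma)
        # accept with probability min(1, target(prop) / target(x))
        if random.random() < math.exp(min(0.0, 0.5 * (x * x - prop * prop))):
            x, accepts = prop, accepts + 1
    return accepts / n_iter

small = rw_metropolis(0.05)   # tiny steps: almost always accepted
medium = rw_metropolis(2.4)   # moderate steps
large = rw_metropolis(50.0)   # huge steps: almost always rejected
```

Neither extreme explores the target efficiently, which is the intuition the limiting-process analysis makes precise.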
43,006 | How to calculate causal effects with repeated exogenous shocks over a time series | The situation that you describe sounds like a simple treatment-effect story, where the exogenous binary shock represents the treatment. If you have the shock, $T=1$; otherwise $T=0$. The blue line represents the level of $x$, your outcome variable. In a time-series sense $x$ seems stationary, so it comes back to the long-run me... | How to calculate causal effects with repeated exogenous shocks over a time series | The situation that you describe sounds like a simple treatment-effect story, where the exogenous binary shock represents the treatment. If you have the shock, $T=1$; otherwise $T=0$. The blue line represen | How to calculate causal effects with repeated exogenous shocks over a time series
The situation that you describe sounds like a simple treatment-effect story, where the exogenous binary shock represents the treatment. If you have the shock, $T=1$; otherwise $T=0$. The blue line represents the level of $x$, your outcome varia... | How to calculate causal effects with repeated exogenous shocks over a time series
The situation that you describe sounds like a simple treatment-effect story, where the exogenous binary shock represents the treatment. If you have the shock, $T=1$; otherwise $T=0$. The blue line represen |
43,007 | How to calculate causal effects with repeated exogenous shocks over a time series | Basically you need a good model for the process without shocks, and a good model for a) the effect of the shock on the process, and probably b) the dependence of the effect of the shock on the process. Any answer will be contingent on these. As far as I can see, the 'difference in conditional means' approach discussed ... | How to calculate causal effects with repeated exogenous shocks over a time series | Basically you need a good model for the process without shocks, and a good model for a) the effect of the shock on the process, and probably b) the dependence of the effect of the shock on the process | How to calculate causal effects with repeated exogenous shocks over a time series
Basically you need a good model for the process without shocks, and a good model for a) the effect of the shock on the process, and probably b) the dependence of the effect of the shock on the process. Any answer will be contingent on the... | How to calculate causal effects with repeated exogenous shocks over a time series
Basically you need a good model for the process without shocks, and a good model for a) the effect of the shock on the process, and probably b) the dependence of the effect of the shock on the process |
43,008 | How to calculate causal effects with repeated exogenous shocks over a time series | The problem you describe in your question is discussed in detail in the following paper:
Bojinov, Iavor, and Neil Shephard. "Time series experiments and causal estimands: exact randomization tests and trading." Journal of the American Statistical Association 114, no. 528 (2019): 1665-1682.
It would not be very useful t... | How to calculate causal effects with repeated exogenous shocks over a time series | The problem you describe in your question is discussed in detail in the following paper:
Bojinov, Iavor, and Neil Shephard. "Time series experiments and causal estimands: exact randomization tests and | How to calculate causal effects with repeated exogenous shocks over a time series
The problem you describe in your question is discussed in detail in the following paper:
Bojinov, Iavor, and Neil Shephard. "Time series experiments and causal estimands: exact randomization tests and trading." Journal of the American Sta... | How to calculate causal effects with repeated exogenous shocks over a time series
The problem you describe in your question is discussed in detail in the following paper:
Bojinov, Iavor, and Neil Shephard. "Time series experiments and causal estimands: exact randomization tests and |
43,009 | Can double dipping be reasonable? | Looking at the overall pattern you describe: yes this could be reasonable (or, it's not obviously unreasonable).
Why? The starting position for MCMC can be arbitrarily chosen. So long as the chain is run to stationarity. Choosing a reasonable starting position will reduce compute time.
You do have to look out for some... | Can double dipping be reasonable? | Looking at the overall pattern you describe: yes this could be reasonable (or, it's not obviously unreasonable).
Why? The starting position for MCMC can be arbitrarily chosen. So long as the chain is | Can double dipping be reasonable?
Looking at the overall pattern you describe: yes this could be reasonable (or, it's not obviously unreasonable).
Why? The starting position for MCMC can be arbitrarily chosen. So long as the chain is run to stationarity. Choosing a reasonable starting position will reduce compute time.... | Can double dipping be reasonable?
Looking at the overall pattern you describe: yes this could be reasonable (or, it's not obviously unreasonable).
Why? The starting position for MCMC can be arbitrarily chosen. So long as the chain is |
43,010 | Is Structurally Missing Data a subset of Missing at Random Data? | No, I would consider Structurally Missing Data to be a separate category, with distinct methods of dealing with it in analyses.
It is definitely not Missing at Random. By definition, it is non-random, being instead logically associated with specific values of a different variable. Let's use a lightly modified version ... | Is Structurally Missing Data a subset of Missing at Random Data? | No, I would consider Structurally Missing Data to be a separate category, with distinct methods of dealing with it in analyses.
It is definitely not Missing at Random. By definition, it is non-random | Is Structurally Missing Data a subset of Missing at Random Data?
No, I would consider Structurally Missing Data to be a separate category, with distinct methods of dealing with it in analyses.
It is definitely not Missing at Random. By definition, it is non-random, being instead logically associated with specific valu... | Is Structurally Missing Data a subset of Missing at Random Data?
No, I would consider Structurally Missing Data to be a separate category, with distinct methods of dealing with it in analyses.
It is definitely not Missing at Random. By definition, it is non-random |
43,011 | When does my autoencoder start to overfit? | Validation performance tells you about the generalization of the algorithm. From your graph, Adam is working very well, though with a biased response. But there is certainly no sign of overfitting there.
To check for bias, you can try the k-fold method and check the response of the algorithm for each fold. Then you can find whether this is ... | When does my autoencoder start to overfit? | Validation performance tells you about the generalization of the algorithm. From your graph, Adam is working very well, though with a biased response. But there is certainly no sign of overfitting there.
To check for bias | When does my autoencoder start to overfit?
Validation performance tells you about the generalization of the algorithm. From your graph, Adam is working very well, though with a biased response. But there is certainly no sign of overfitting there.
To check for bias, you can try the k-fold method and check the response of the algorithm for eac... | When does my autoencoder start to overfit?
Validation performance tells you about the generalization of the algorithm. From your graph, Adam is working very well, though with a biased response. But there is certainly no sign of overfitting there.
To check for bias |
43,012 | When does my autoencoder start to overfit? | @Sycorax's answer comes closest to answering this.
Usually, overfitting is described as the model training error going down while validation error goes up, which means the model is learning patterns that don't generalize beyond the training set.
In the case of an autoencoder, you're training the model to reproduc... | When does my autoencoder start to overfit? | @Sycorax's answer comes closest to answering this.
Usually, overfitting is described as the model training error going down while validation error goes up, which means the model is learning patt | When does my autoencoder start to overfit?
@Sycorax's answer comes closest to answering this.
Usually, overfitting is described as the model training error going down while validation error goes up, which means the model is learning patterns that don't generalize beyond the training set.
In the case of an autoenc... | When does my autoencoder start to overfit?
@Sycorax's answer comes closest to answering this.
Usually, overfitting is described as the model training error going down while validation error goes up, which means the model is learning patt |
43,013 | Data Augmentation in Keras: How many training observations do I end up with? | Data augmentation is used to artificially increase the number of samples in the training set (because small datasets are more vulnerable to over-fitting).
Keras is using an online data-augmentation process, where every single image is augmented at the start of every epoch (they are probably processed in batches, but th... | Data Augmentation in Keras: How many training observations do I end up with? | Data augmentation is used to artificially increase the number of samples in the training set (because small datasets are more vulnerable to over-fitting).
Keras is using an online data-augmentation pr | Data Augmentation in Keras: How many training observations do I end up with?
Data augmentation is used to artificially increase the number of samples in the training set (because small datasets are more vulnerable to over-fitting).
Keras is using an online data-augmentation process, where every single image is augmente... | Data Augmentation in Keras: How many training observations do I end up with?
Data augmentation is used to artificially increase the number of samples in the training set (because small datasets are more vulnerable to over-fitting).
Keras is using an online data-augmentation pr |
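The "online" behavior described above can be mimicked in a few lines of plain Python: each epoch yields freshly transformed copies, so the stored dataset never grows even though the model effectively never sees the exact same sample twice. The toy flip-and-jitter transform below is a stand-in of mine, not Keras' actual ImageDataGenerator pipeline.

```python
import random

random.seed(3)

def augment(image):
    """Toy augmentation: random horizontal flip plus small brightness jitter
    (a stand-in for the transforms an image pipeline might apply)."""
    rows = [list(reversed(r)) for r in image] if random.random() < 0.5 else [list(r) for r in image]
    delta = random.uniform(-0.1, 0.1)
    return [[px + delta for px in r] for r in rows]

dataset = [[[0.1, 0.9], [0.4, 0.6]]]   # a single tiny 2x2 "image"
seen = []
for _epoch in range(3):
    # online augmentation: every epoch streams a freshly transformed copy,
    # so the number of *stored* training observations never grows
    seen.extend(augment(img) for img in dataset)

n_stored = len(dataset)   # still 1: augmentation happens on the fly
n_streamed = len(seen)    # epochs x len(dataset) augmented views
```

So the answer to "how many training observations do I end up with" is: the stored count stays the same; only the stream of (randomized) views grows with the number of epochs.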
43,014 | Weighted Cosine Similarity | scipy.spatial.distance.cosine has implemented weighted cosine similarity as follows (source):
$$\frac{\sum_{i}{w_i u_i v_i}}{\sqrt{\sum_{i}w_i u_i^2}\sqrt{\sum_{i}w_i v_i^2}}$$
I know this doesn't actually answer this question, but since scipy has implemented like this, may be this is better than both of your approache... | Weighted Cosine Similarity | scipy.spatial.distance.cosine has implemented weighted cosine similarity as follows (source):
$$\frac{\sum_{i}{w_i u_i v_i}}{\sqrt{\sum_{i}w_i u_i^2}\sqrt{\sum_{i}w_i v_i^2}}$$
I know this doesn't act | Weighted Cosine Similarity
scipy.spatial.distance.cosine has implemented weighted cosine similarity as follows (source):
$$\frac{\sum_{i}{w_i u_i v_i}}{\sqrt{\sum_{i}w_i u_i^2}\sqrt{\sum_{i}w_i v_i^2}}$$
I know this doesn't actually answer this question, but since scipy has implemented like this, may be this is better ... | Weighted Cosine Similarity
scipy.spatial.distance.cosine has implemented weighted cosine similarity as follows (source):
$$\frac{\sum_{i}{w_i u_i v_i}}{\sqrt{\sum_{i}w_i u_i^2}\sqrt{\sum_{i}w_i v_i^2}}$$
I know this doesn't act |
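The quoted formula is easy to implement directly; a pure-Python version (assuming the inputs are plain lists of floats) might look like this. Note that scipy's function returns the corresponding *distance*, i.e. one minus this similarity.

```python
import math

def weighted_cosine_similarity(u, v, w):
    """Direct implementation of the formula above; note that
    scipy.spatial.distance.cosine(u, v, w) returns 1 minus this value."""
    num = sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))
    den = (math.sqrt(sum(wi * ui * ui for wi, ui in zip(w, u)))
           * math.sqrt(sum(wi * vi * vi for wi, vi in zip(w, v))))
    return num / den

s = weighted_cosine_similarity([1.0, 0.0, 1.0], [1.0, 1.0, 0.0], [2.0, 1.0, 1.0])
```

As a sanity check, any vector compared with itself gives similarity 1 regardless of the weights.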
43,015 | Intuition behind perplexity parameter in t-SNE | If you write down the equation for perplexity defined by the conditional distribution in the original paper, it does not increase with the entropy, simply because the conditional distribution is discrete and is not Gaussian. It is not a rigorous term, and in the original paper they didn't even talk about it in detail... | Intuition behind perplexity parameter in t-SNE | If you write down the equation for perplexity defined by the conditional distribution in the original paper, it does not increase with the entropy, simply because the conditional distribution is disc | Intuition behind perplexity parameter in t-SNE
If you write down the equation for perplexity defined by the conditional distribution in the original paper, it does not increase with the entropy, simply because the conditional distribution is discrete and is not Gaussian. It is not a rigorous term, and in the original p... | Intuition behind perplexity parameter in t-SNE
If you write down the equation for perplexity defined by the conditional distribution in the original paper, it does not increase with the entropy, simply because the conditional distribution is disc |
43,016 | Intuition behind perplexity parameter in t-SNE | Yeah, I can't agree with you more, in my view, perplex... | Intuition behind perplexity parameter in t-SNE | Yeah, I can't agree with you more, in my view, perplex | Intuition behind perplexity parameter in t-SNE
Yeah, I can't agree with you more, in my view, perplex... | Intuition behind perplexity parameter in t-SNE
Yeah, I can't agree with you more, in my view, perplex |
43,017 | how to deal with correlated/colinear features when using Permutation feature importance? | Regularize. If you regularize your model and inputs, you should remove the issue of multi-collinearity.
Correlation is not causation, however, and so while you can use your method to assign "predictive power" to your features, you cannot establish any sort of causal relationship. | how to deal with correlated/colinear features when using Permutation feature importance? | Regularize. If you regularize your model and inputs, you should remove the issue of multi-collinearity.
Correlation is not causation, however, and so while you can use your method to assign "predictiv | how to deal with correlated/colinear features when using Permutation feature importance?
Regularize. If you regularize your model and inputs, you should remove the issue of multi-collinearity.
Correlation is not causation, however, and so while you can use your method to assign "predictive power" to your features, you ... | how to deal with correlated/colinear features when using Permutation feature importance?
Regularize. If you regularize your model and inputs, you should remove the issue of multi-collinearity.
Correlation is not causation, however, and so while you can use your method to assign "predictiv |
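One way to see why regularization helps with collinearity: with a perfectly duplicated feature, ridge regression splits the coefficient evenly between the two copies instead of leaving it unidentified, so a permutation-style importance score gets shared rather than assigned arbitrarily. A closed-form two-feature sketch with made-up data (my illustration, not from the answer):

```python
def ridge_2feature(x1, x2, y, lam):
    """Closed-form ridge (no intercept) for two features: solve (X'X + lam*I) w = X'y."""
    a = sum(v * v for v in x1) + lam
    b = sum(u * v for u, v in zip(x1, x2))
    d = sum(v * v for v in x2) + lam
    c1 = sum(u * v for u, v in zip(x1, y))
    c2 = sum(u * v for u, v in zip(x2, y))
    det = a * d - b * b          # positive whenever lam > 0
    return (d * c1 - b * c2) / det, (a * c2 - b * c1) / det

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0 * v for v in x]                     # true relationship: y = 2x
w1, w2 = ridge_2feature(x, x, y, lam=1.0)    # feed the SAME feature twice
```

With `lam = 0` the system would be singular; any positive penalty makes the solution unique and symmetric in the two collinear copies.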
43,018 | Clarification of the intuition behind backpropagation | What you want to compute, is
$$\frac{\partial \sigma(\hat{x})}{\partial \vec{x}}=\left[\frac{\partial \sigma(\hat{x})}{\partial x_0},\frac{\partial \sigma(\hat{x})}{\partial x_1}\right]$$
and
$$\frac{\partial \sigma({\hat{x}})}{\partial \vec{w}}=\left[\frac{\partial \sigma(\hat{x})}{\partial w_0},\frac{\partial \sig... | Clarification of the intuition behind backpropagation | What you want to compute, is
$$\frac{\partial \sigma(\hat{x})}{\partial \vec{x}}=\left[\frac{\partial \sigma(\hat{x})}{\partial x_0},\frac{\partial \sigma(\hat{x})}{\partial x_1}\right]$$
and
$$\fr | Clarification of the intuition behind backpropagation
What you want to compute, is
$$\frac{\partial \sigma(\hat{x})}{\partial \vec{x}}=\left[\frac{\partial \sigma(\hat{x})}{\partial x_0},\frac{\partial \sigma(\hat{x})}{\partial x_1}\right]$$
and
$$\frac{\partial \sigma({\hat{x}})}{\partial \vec{w}}=\left[\frac{\part... | Clarification of the intuition behind backpropagation
What you want to compute, is
$$\frac{\partial \sigma(\hat{x})}{\partial \vec{x}}=\left[\frac{\partial \sigma(\hat{x})}{\partial x_0},\frac{\partial \sigma(\hat{x})}{\partial x_1}\right]$$
and
$$\fr |
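Numerically, those two gradients are just the sigmoid's derivative at $\hat{x}=\vec{w}\cdot\vec{x}$ multiplied by $\vec{w}$ or $\vec{x}$ respectively. A quick sketch with a finite-difference check (the numeric values are arbitrary):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, w = [0.3, -1.2], [0.7, 0.4]       # arbitrary example values
z = w[0] * x[0] + w[1] * x[1]        # z = w . x  (the x-hat above)
s = sigmoid(z)
ds = s * (1.0 - s)                   # sigmoid'(z)

grad_x = [ds * w[0], ds * w[1]]      # d sigma / d x_i = sigma'(z) * w_i
grad_w = [ds * x[0], ds * x[1]]      # d sigma / d w_i = sigma'(z) * x_i

# finite-difference check of d sigma / d x_0
h = 1e-6
num = (sigmoid(w[0] * (x[0] + h) + w[1] * x[1])
       - sigmoid(w[0] * (x[0] - h) + w[1] * x[1])) / (2 * h)
```

The two gradient vectors differ only in whether the weights or the inputs multiply the shared local derivative, which is the symmetry backpropagation exploits.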
43,019 | Clarification of the intuition behind backpropagation | The best way to understand backpropagation for a programmer is in terms of the chain rule as a recursion.
Here's the chain rule. You have a nested function expression $y=f(g(x))$. First you look at it as two different functions:
$$f(x)\\g(x)$$
When you do forward propagation, it's nothing but this pseudo code:
$$t=g(x)\\... | Clarification of the intuition behind backpropagation | The best way to understand backpropagation for a programmer is in terms of the chain rule as a recursion.
Here's the chain rule. You have a nested function expression $y=f(g(x))$. First you look at it | Clarification of the intuition behind backpropagation
The best way to understand backpropagation for a programmer is in terms of the chain rule as a recursion.
Here's the chain rule. You have a nested function expression $y=f(g(x))$. First you look at it as two different functions:
$$f(x)\\g(x)$$
When you do forward pr... | Clarification of the intuition behind backpropagation
The best way to understand backpropagation for a programmer is in terms of the chain rule as a recursion.
Here's the chain rule. You have a nested function expression $y=f(g(x))$. First you look at it |
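That pseudo code turns into a runnable sketch in a few lines: store the intermediate $t=g(x)$ on the forward pass, then multiply the local derivatives on the way back. The concrete choices $g(x)=x^2$ and $f(t)=\sin(t)$ are illustrative, not from the answer.

```python
import math

def g(x):  return x * x            # inner function
def dg(x): return 2.0 * x
def f(t):  return math.sin(t)      # outer function
def df(t): return math.cos(t)

x = 0.5
t = g(x)                  # forward pass: t = g(x)
y = f(t)                  # forward pass: y = f(t)
dy_dx = df(t) * dg(x)     # backward pass: chain rule f'(g(x)) * g'(x)

# finite-difference check of the same derivative
h = 1e-6
numeric = (f(g(x + h)) - f(g(x - h))) / (2.0 * h)
```

Backpropagation is this pattern applied recursively through arbitrarily deep compositions, reusing each stored intermediate exactly once.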
43,020 | Machine Learning and Missing Data: Impute, and If So When? | The inclination of some collaborators is to go with the complete case type analysis, where only subjects with full data are used, but this makes me slightly nervous, as I feel like those missing data patterns might have an impact.
I would argue that your intuition is correct, missing data can have strong predictive po... | Machine Learning and Missing Data: Impute, and If So When? | The inclination of some collaborators is to go with the complete case type analysis, where only subjects with full data are used, but this makes me slightly nervous, as I feel like those missing data | Machine Learning and Missing Data: Impute, and If So When?
The inclination of some collaborators is to go with the complete case type analysis, where only subjects with full data are used, but this makes me slightly nervous, as I feel like those missing data patterns might have an impact.
I would argue that your intui... | Machine Learning and Missing Data: Impute, and If So When?
The inclination of some collaborators is to go with the complete case type analysis, where only subjects with full data are used, but this makes me slightly nervous, as I feel like those missing data |
43,021 | Feature Selection in unbalanced data | In my experience feature selection tends to make performance worse rather than better if you are using a modern machine learning method that has some feature, such as regularisation, to avoid over-fitting. Miller's monograph on feature selection has similar advice hidden away in the appendices (sadly someone has borro... | Feature Selection in unbalanced data | In my experience feature selection tends to make performance worse rather than better if you are using a modern machine learning method that has some feature, such as regularisation, to avoid over-fit | Feature Selection in unbalanced data
In my experience feature selection tends to make performance worse rather than better if you are using a modern machine learning method that has some feature, such as regularisation, to avoid over-fitting. Miller's monograph on feature selection has similar advice hidden away in th... | Feature Selection in unbalanced data
In my experience feature selection tends to make performance worse rather than better if you are using a modern machine learning method that has some feature, such as regularisation, to avoid over-fit |
43,022 | Feature Selection in unbalanced data | It seems that you are mixing two problems: 1) performing feature selection with an ensemble learning algorithm (e.g. random forest, RF); 2) balancing your dataset so that your algorithm learns as well as possible.
For the first one, perhaps you could take a look at this paper, in which the authors propose a modifica... | Feature Selection in unbalanced data | It seems that you are mixing two problems: 1) performing feature selection with an ensemble learning algorithm (e.g. random forest, RF); 2) balancing your dataset so that your algor | Feature Selection in unbalanced data
It seems that you are mixing two problems: 1) performing feature selection with an ensemble learning algorithm (e.g. random forest, RF); 2) balancing your dataset so that your algorithm learns as well as possible.
For the first one, perhaps you could take a look at this paper, in... | Feature Selection in unbalanced data
It seems that you are mixing two problems: 1) performing feature selection with an ensemble learning algorithm (e.g. random forest, RF); 2) balancing your dataset so that your algor |
43,023 | Feature Selection in unbalanced data | The question discusses more than one important topic.
For the first one: There are many techniques to handle imbalanced classes before learning a model or after the learning process. Techniques for balancing classes before learning include SMOTE and cost-sensitive learning; after learning a model, options include the choice of performance... | Feature Selection in unbalanced data | The question discusses more than one important topic.
For the first one: There are many techniques to handle imbalanced classes before learning a model or after the learning process. Techniques for bal | Feature Selection in unbalanced data
The question discusses more than one important topic.
For the first one: There are many techniques to handle imbalanced classes before learning a model or after the learning process. Techniques for balancing classes before learning include SMOTE and cost-sensitive learning; after learning a model... | Feature Selection in unbalanced data
The question discusses more than one important topic.
For the first one: There are many techniques to handle imbalanced classes before learning a model or after the learning process. Techniques for bal |
43,024 | Finite state machine with gamma distributed waiting times | Note that this is NOT an attempt to fully answer the problem, but to show how to overcome the lack of the Markov property for a special case that may not apply - one that is far too long to put in comments.
Unfortunately, as you have realized, this is not a Markov process, but a semi-Markov process. If you a) have in... | Finite state machine with gamma distributed waiting times | Note that this is NOT an attempt to fully answer the problem, but to show how to overcome the lack of the Markov property for a special case that may not apply - one that is far too long to put in com | Finite state machine with gamma distributed waiting times
Note that this is NOT an attempt to fully answer the problem, but to show how to overcome the lack of the Markov property for a special case that may not apply - one that is far too long to put in comments.
Unfortunately, as you have realized, this is not a Mark... | Finite state machine with gamma distributed waiting times
Note that this is NOT an attempt to fully answer the problem, but to show how to overcome the lack of the Markov property for a special case that may not apply - one that is far too long to put in com |
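For intuition, a semi-Markov process with gamma-distributed holding times is easy to simulate even though it is not Markov: the current state alone does not determine how much of the holding time has elapsed. A two-state sketch with made-up shapes and scales (not the problem's actual transition structure):

```python
import random

random.seed(7)

def simulate_semi_markov(t_end):
    """Two-state process with gamma holding times; the gamma clock is what
    breaks the Markov property. Jumps alternate states for simplicity."""
    shapes = {0: 2.0, 1: 3.0}   # gamma shape per state (illustrative)
    scales = {0: 1.0, 1: 0.5}   # gamma scale per state (illustrative)
    t, state, path = 0.0, 0, []
    while t < t_end:
        hold = random.gammavariate(shapes[state], scales[state])
        path.append((state, hold))
        t += hold
        state = 1 - state
    return path

path = simulate_semi_markov(50.0)
total_time = sum(hold for _, hold in path)
```

With integer (Erlang) shapes, each state can also be split into phases to recover a genuine Markov chain, which is the kind of special-case trick the answer alludes to.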
43,025 | Is there a "central distribution" for distributions for which the CLT doesn't apply? | An answer is already found in the Wikipedia link from the first comment by Smith.
Q: Are there situations where this sample average nevertheless converges in distribution to some other (non-normal) distribution?
A: Yes: a distribution is a limit of sums of the following type if and only if it is a stable distribution:
$$\zeta_n = \fr... | Is there a "central distribution" for distributions for which the CLT doesn't apply? | An answer is already found in the Wikipedia link from the first comment by Smith.
Q: Are there situations where this sample average nevertheless converges in distribution to some other (non-normal) di | Is there a "central distribution" for distributions for which the CLT doesn't apply?
An answer is already found in the Wikipedia link from the first comment by Smith.
Q: Are there situations where this sample average nevertheless converges in distribution to some other (non-normal) distribution?
A: Yes: a distribut... | Is there a "central distribution" for distributions for which the CLT doesn't apply?
An answer is already found in the Wikipedia link from the first comment by Smith.
Q: Are there situations where this sample average nevertheless converges in distribution to some other (non-normal) di |
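The Cauchy distribution (stable with $\alpha=1$) gives a quick numerical illustration: the sample mean of standard Cauchy draws is again standard Cauchy, so its spread does not shrink as $n$ grows. The sample sizes and replication counts below are arbitrary.

```python
import math
import random

random.seed(4)

def cauchy():
    # standard Cauchy via the inverse CDF
    return math.tan(math.pi * (random.random() - 0.5))

def iqr_of_means(n, reps=2000):
    """Interquartile range of the sample mean of n Cauchy draws."""
    means = sorted(sum(cauchy() for _ in range(n)) / n for _ in range(reps))
    return means[3 * reps // 4] - means[reps // 4]

spread_small = iqr_of_means(5)     # mean of 5 draws
spread_big = iqr_of_means(500)     # mean of 500 draws: same spread (IQR ~ 2)
```

For a distribution with finite variance the second spread would be roughly ten times smaller; here both stay near the standard Cauchy's IQR of 2.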
43,026 | Difference between the Wold Decomposition and MA representation | The Wold decomposition does not say what you state. It says that for any weakly stationary $(x_t)_{t=-\infty}^{\infty}$, there exists a white noise process $\{\epsilon_t\}_{t=-\infty}^{+\infty}$ such that $(x_t)_{t=-\infty}^{\infty}$ has a two-sided MA representation
$$
x_t=\sum_{-\infty < j < \infty} b_j\epsilon_{t-j}.
$$
... | Difference between the Wold Decomposition and MA representation | The Wold decomposition does not say what you state. It says that for any weakly stationary $(x_t)_{t=-\infty}^{\infty}$, there exists a white noise process $\{\epsilon_t\}_{t=-\infty}^{+\infty}$ such that | Difference between the Wold Decomposition and MA representation
The Wold decomposition does not say what you state. It says that for any weakly stationary $(x_t)_{t=-\infty}^{\infty}$, there exists a white noise process $\{\epsilon_t\}_{t=-\infty}^{+\infty}$ such that $(x_t)_{t=-\infty}^{\infty}$ has a two-sided MA represen... | Difference between the Wold Decomposition and MA representation
The Wold decomposition does not say what you state. It says that for any weakly stationary $(x_t)_{t=-\infty}^{\infty}$, there exists a white noise process $\{\epsilon_t\}_{t=-\infty}^{+\infty}$ such that |
43,027 | Difference between the Wold Decomposition and MA representation | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Yes, conditional on ARMA(p,q) being the true model, wh... | Difference between the Wold Decomposition and MA representation | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| Difference between the Wold Decomposition and MA representation
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
... | Difference between the Wold Decomposition and MA representation
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
43,028 | Binary Genetic Algorithm in R, with strong cardinality constraints | I will try to solve it using the cross-entropy algorithm, a method suggested by Reuven Rubinstein (the CE method).
Basically the idea is to keep good solutions and update the parameters based on them. So, in general, suppose you want to find the 4 lowest SDs; the algorithm is as follows:
1. Create a vector of p... | Binary Genetic Algorithm in R, with strong cardinality constraints | I will try to solve it using the cross-entropy algorithm, a method suggested by Reuven Rubinstein (the CE method).
Basically the idea is to keep good solutions and update the parameters based on th | Binary Genetic Algorithm in R, with strong cardinality constraints
I will try to solve it using the cross-entropy algorithm, a method suggested by Reuven Rubinstein (the CE method).
Basically the idea is to keep good solutions and update the parameters based on them. So, in general, suppose you want to find the 4 lowe... | Binary Genetic Algorithm in R, with strong cardinality constraints
I will try to solve it using the cross-entropy algorithm, a method suggested by Reuven Rubinstein (the CE method).
Basically the idea is to keep good solutions and update the parameters based on th
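The CE loop described in the answer can be written out for a toy version of the problem (selecting the k columns with the lowest scores); every tuning constant below (sample sizes, elite fraction, smoothing weight, clipping) is an illustrative assumption, not from the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: choose the k = 4 of these 10 "columns" with the lowest scores
# (the scores stand in for the standard deviations in the question).
scores = rng.uniform(1.0, 5.0, size=10)
k, n_samples, n_elite = 4, 60, 10

p = np.full(10, 0.5)                       # inclusion probabilities
best_subset, best_cost = None, np.inf

for _ in range(30):
    # Sample candidate subsets of exactly k columns, biased by p.
    subsets = [rng.choice(10, size=k, replace=False, p=p / p.sum())
               for _ in range(n_samples)]
    costs = np.array([scores[s].sum() for s in subsets])
    order = np.argsort(costs)
    if costs[order[0]] < best_cost:
        best_cost = float(costs[order[0]])
        best_subset = set(subsets[order[0]].tolist())
    # Re-estimate p from the inclusion frequencies of the elite subsets.
    freq = np.zeros(10)
    for i in order[:n_elite]:
        freq[subsets[i]] += 1.0
    freq /= n_elite
    p = 0.7 * freq + 0.3 * p               # smoothed update
    p = np.clip(p, 0.01, None)             # keep every column reachable
```

The cardinality constraint is enforced by construction: every sampled subset has exactly k members, so no penalty term is needed.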
43,029 | Explanation for MSE formula for vector comparison with "Euclidean distance"? | What is the connection between the formula and the Euclidean distance?
Consider the formula of the Euclidean Distance between $\hat{y}$ and $ y $ when they have same dimensionality:
$ D = \sqrt{\sum_{i=0}^n (\hat{y}_{i} - y_{i})^2 } $
so the square is:
$ D^2 = {\sum_{i=0}^n (\hat{y}_{i} - y_{i})^2 } $
that is very c... | Explanation for MSE formula for vector comparison with "Euclidean distance"? | What is the connection between the formula and the Euclidean distance?
Consider the formula of the Euclidean Distance between $\hat{y}$ and $ y $ when they have same dimensionality:
$ D = \sqrt{\sum_ | Explanation for MSE formula for vector comparison with "Euclidean distance"?
What is the connection between the formula and the Euclidean distance?
Consider the formula of the Euclidean Distance between $\hat{y}$ and $ y $ when they have same dimensionality:
$ D = \sqrt{\sum_{i=0}^n (\hat{y}_{i} - y_{i})^2 } $
so the... | Explanation for MSE formula for vector comparison with "Euclidean distance"?
What is the connection between the formula and the Euclidean distance?
Consider the formula of the Euclidean Distance between $\hat{y}$ and $ y $ when they have same dimensionality:
$ D = \sqrt{\sum_ |
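The relationship is easy to verify numerically (the vectors here are made up):

```python
import numpy as np

y_hat = np.array([2.5, 0.0, 2.1, 7.8])
y = np.array([3.0, -0.5, 2.0, 7.0])

d = np.linalg.norm(y_hat - y)       # Euclidean distance D
mse = np.mean((y_hat - y) ** 2)     # mean squared error

# MSE is exactly the squared Euclidean distance divided by the dimension n.
ratio_check = d ** 2 / len(y)
```

So minimising MSE and minimising the Euclidean distance between prediction and target are the same problem, up to the constant factor n.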
43,030 | Can Kalman Filtering be done hierarchically - estimated from multiple time series with the same parameters? | If the parameters can be assumed to have the same value across all trials, the total likelihood (assuming independence between trials) is just the product of the likelihood for each trial. So just write a function that computes this product (or the sum of the logs) taking the unknown parameter values as a vector first... | Can Kalman Filtering be done hierarchically - estimated from multiple time series with the same para | If the parameters can be assumed to have the same value across all trials, the total likelihood (assuming independence between trials) is just the product of the likelihood for each trial. So just wr | Can Kalman Filtering be done hierarchically - estimated from multiple time series with the same parameters?
If the parameters can be assumed to have the same value across all trials, the total likelihood (assuming independence between trials) is just the product of the likelihood for each trial. So just write a functi... | Can Kalman Filtering be done hierarchically - estimated from multiple time series with the same para
If the parameters can be assumed to have the same value across all trials, the total likelihood (assuming independence between trials) is just the product of the likelihood for each trial. So just wr |
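A minimal numerical sketch of the summed-likelihood idea, using a toy Gaussian model in place of a real Kalman filter (in practice `trial_loglik` would run the filter on one trial and return its prediction-error log-likelihood; the data, grid, and θ here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Five "trials", each assumed to share the same parameter theta:
# observations y ~ N(theta, 1).
trials = [rng.normal(2.0, 1.0, size=30) for _ in range(5)]

def trial_loglik(theta, y):
    # Gaussian log-likelihood up to an additive constant.
    return -0.5 * np.sum((y - theta) ** 2)

def total_loglik(theta):
    # Independence across trials => total log-likelihood is the sum.
    return sum(trial_loglik(theta, y) for y in trials)

# Maximise over a grid; for this toy model the optimum is the pooled mean.
grid = np.linspace(0.0, 4.0, 4001)
theta_hat = grid[np.argmax([total_loglik(t) for t in grid])]
pooled_mean = np.concatenate(trials).mean()
```

With a real state-space model you would hand `total_loglik` to a numerical optimiser instead of a grid, but the structure (one shared parameter vector, summed per-trial log-likelihoods) is the same.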
43,031 | Asymptotic joint distribution of the sample medians of a collection and a sub-collection of i.i.d. random variables | If each $X$ is Bernoulli with probability $p$, then the correlation between $M_k$ and $M_n$ has the following graph for $k=21$, $n=41$:
So also for smooth distributions approximating the Bernoulli distributions, the correlations of medians would depend on $p$, and not just on $k$ and $n$. In general, the correlations ... | Asymptotic joint distribution of the sample medians of a collection and a sub-collection of i.i.d. r | If each $X$ is Bernoulli with probability $p$, then the correlation between $M_k$ and $M_n$ has the following graph for $k=21$, $n=41$:
So also for smooth distributions approximating the Bernoulli di | Asymptotic joint distribution of the sample medians of a collection and a sub-collection of i.i.d. random variables
If each $X$ is Bernoulli with probability $p$, then the correlation between $M_k$ and $M_n$ has the following graph for $k=21$, $n=41$:
So also for smooth distributions approximating the Bernoulli distri... | Asymptotic joint distribution of the sample medians of a collection and a sub-collection of i.i.d. r
If each $X$ is Bernoulli with probability $p$, then the correlation between $M_k$ and $M_n$ has the following graph for $k=21$, $n=41$:
So also for smooth distributions approximating the Bernoulli di |
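The kind of correlation shown in the answer's graph can be reproduced by simulation; k = 21 and n = 41 follow the answer, while p = 0.3 and the number of simulations are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, p, sims = 21, 41, 0.3, 20000

# One row per simulated sequence of n Bernoulli(p) variables.
x = (rng.random((sims, n)) < p).astype(float)
m_k = np.median(x[:, :k], axis=1)   # median of the first k observations
m_n = np.median(x, axis=1)          # median of all n observations

corr = float(np.corrcoef(m_k, m_n)[0, 1])
```

Re-running this over a range of p values traces out the dependence on p that the answer's graph illustrates.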
43,032 | Does taking the logs of the dependent and/or independent variable affect the model errors and thus the validity of inference? | You're seeing ringing, which is the result of passing a high-frequency change, i.e. a step function, through a low-pass filter, i.e. the GAM.
When you apply the log transformation, you change the gradient of the near-vertical section of the graph, on the left hand side, so that it is slightly less steep, with fewer implici... | Does taking the logs of the dependent and/or independent variable affect the model errors and thus t | You're seeing ringing, which is the result of passing a high-frequency change, ie a step-function, through a low-pass filter, ie the GAM.
When you apply the log transformation, you change the gradient | Does taking the logs of the dependent and/or independent variable affect the model errors and thus the validity of inference?
You're seeing ringing, which is the result of passing a high-frequency change, ie a step-function, through a low-pass filter, ie the GAM.
When you apply the log transformation, you change the gr... | Does taking the logs of the dependent and/or independent variable affect the model errors and thus t
You're seeing ringing, which is the result of passing a high-frequency change, ie a step-function, through a low-pass filter, ie the GAM.
When you apply the log transformation, you change the gradient |
43,033 | What is statistic in statistics? | A statistic is a function of your data.
That's all it is. In different contexts, you may be interested in different statistics. Maybe T = number of observations. That's a valid statistic. Or T = max value observed. T = seventh observation. T = sixth largest observation. I guess it's valid to say T=1 too, just a constan...
That's all it is. In different context, you may be interested in different statistics. Maybe T = number of observations. That's a valid statistic. Or T = max v | What is statistic in statistics?
A statistic is a function of your data.
That's all it is. In different context, you may be interested in different statistics. Maybe T = number of observations. That's a valid statistic. Or T = max value observed. T = seventh observation. T = sixth largest observation. I guess it's val... | What is statistic in statistics?
A statistic is a function of your data.
That's all it is. In different context, you may be interested in different statistics. Maybe T = number of observations. That's a valid statistic. Or T = max v |
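A tiny numpy illustration of the statistics listed in the answer (the data vector is made up):

```python
import numpy as np

data = np.array([3.1, 4.7, 0.2, 5.5, 2.8, 4.7, 1.9])

# Each of these is a statistic: a function of the data and nothing else.
t_count = len(data)                   # number of observations
t_max = data.max()                    # largest value observed
t_seventh = data[6]                   # the seventh observation
t_sixth_largest = np.sort(data)[-6]   # sixth largest observation
t_const = 1                           # a (useless but valid) constant statistic
```

Note that none of these definitions mention any population parameter; whether a statistic is *useful* for estimating one is a separate question.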
43,034 | What is statistic in statistics? | Asking questions in class is never a bad idea.
A statistic is an estimate of a population parameter based on a sample. So if mu is the population mean, the sample mean x-bar is the statistic. | What is statistic in statistics? | Asking questions in class is never a bad idea.
A statistic is an estimate of a population parameter based on a sample. So if mu is the population mean, the sample mean x-bar is the statistic. | What is statistic in statistics?
Asking questions in class is never a bad idea.
A statistic is an estimate of a population parameter based on a sample. So if mu is the population mean, the sample mean x-bar is the statistic. | What is statistic in statistics?
Asking questions in class is never a bad idea.
A statistic is an estimate of a population parameter based on a sample. So if mu is the population mean, the sample mean x-bar is the statistic. |
43,035 | Lagged dependent variable in linear regression | Hi: Your model is also called a Koyck distributed lag, and it can be difficult to estimate with small samples. With larger samples, my experience is that there is not a problem with bias (I used simulation to check this).
The link discusses the statistical properties of the estimates briefly on pages 12 and 13. Essen... | Lagged dependent variable in linear regression | Hi: Your model is also called a koyck distributed lag and it can be difficult to estimate with small samples. With larger samples, my experience is that there is not a problem with bias. ( I used simu | Lagged dependent variable in linear regression
Hi: Your model is also called a koyck distributed lag and it can be difficult to estimate with small samples. With larger samples, my experience is that there is not a problem with bias. ( I used simulation to check this ).
The link discusses the statistical properties of ... | Lagged dependent variable in linear regression
Hi: Your model is also called a koyck distributed lag and it can be difficult to estimate with small samples. With larger samples, my experience is that there is not a problem with bias. ( I used simu |
43,036 | Lagged dependent variable in linear regression | From what I have read, the Yule-Walker equations use least squares to estimate the AR-1 lag coefficient (what you call $\beta_1$ in display 1 and $\theta$ in display 2). The joint estimation of the lag coefficient and the $X$ coefficient is correctly done using a least squares model adjusting for the lag and the concur... | Lagged dependent variable in linear regression | From what I have read, the Yule-Walker equations use least squares to estimate the AR-1 lag coefficient (what you call $\beta_1$ in display 1 and $\theta$ in display 2). The joint estimation of the la | Lagged dependent variable in linear regression
From what I have read, the Yule-Walker equations use least squares to estimate the AR-1 lag coefficient (what you call $\beta_1$ in display 1 and $\theta$ in display 2). The joint estimation of the lag coefficient and the $X$ coefficient is correctly done using a least squ... | Lagged dependent variable in linear regression
From what I have read, the Yule-Walker equations use least squares to estimate the AR-1 lag coefficient (what you call $\beta_1$ in display 1 and $\theta$ in display 2). The joint estimation of the la |
43,037 | Why is the coefficient of determination ($R^2$) so called? | This Google search turns up an interesting result.
What follows is my speculation.
A deterministic model is a "Mathematical model in which outcomes are precisely determined through known relationships among states and events, without any room for random variation." From: What is a deterministic model?
I think the coeff... | Why is the coefficient of determination ($R^2$) so called? | This Google search turns up an interesting result.
What follows is my speculation.
A deterministic model is a "Mathematical model in which outcomes are precisely determined through known relationships | Why is the coefficient of determination ($R^2$) so called?
This Google search turns up an interesting result.
What follows is my speculation.
A deterministic model is a "Mathematical model in which outcomes are precisely determined through known relationships among states and events, without any room for random variati... | Why is the coefficient of determination ($R^2$) so called?
This Google search turns up an interesting result.
What follows is my speculation.
A deterministic model is a "Mathematical model in which outcomes are precisely determined through known relationships |
43,038 | Why is the coefficient of determination ($R^2$) so called? | Here are my 2 cents.
The variance that our model deals with is of 2 types: stochastic variance (purely probabilistic, varying with the sample we select) and variance that can be quantified, or determined, by our modelling techniques. Now as our calculation deals with understanding the val... | Why is the coefficient of determination ($R^2$) so called? | Here are my 2 cents.
The variance that our model consists of is of 2 types: stochastic (totally probabilistic that may vary according to the sample we select) whereas there will be variance that can b | Why is the coefficient of determination ($R^2$) so called?
Here are my 2 cents.
The variance that our model consists of is of 2 types: stochastic (totally probabilistic that may vary according to the sample we select) whereas there will be variance that can be quantified or determined possibly by our modelling techniqu... | Why is the coefficient of determination ($R^2$) so called?
Here are my 2 cents.
The variance that our model consists of is of 2 types: stochastic (totally probabilistic that may vary according to the sample we select) whereas there will be variance that can b |
43,039 | Consistency of a sequence of Bernoullis | Consider the simpler case where $X_i \sim N(\theta a_i,1)$, then the MLE
$$\hat{\theta} = \sum_{i=1}^n a_iX_i/\sum_{i=1}^n a_i^2 \sim N(\theta, 1/\sum_{i=1}^n a_i^2).$$ Note that if $\sum_{i=1}^n a_i^2 $ diverges, then $\hat{\theta}$ is consistent since the variance goes to 0 and it is unbiased; if $\sum_{i=1}^n a_i^... | Consistency of a sequence of Bernoullis | Consider the simpler case where $X_i \sim N(\theta a_i,1)$, then the MLE
$$\hat{\theta} = \sum_{i=1}^n a_iX_i/\sum_{i=1}^n a_i^2 \sim N(\theta, 1/\sum_{i=1}^n a_i^2).$$ Note that if $\sum_{i=1}^n a_ | Consistency of a sequence of Bernoullis
Consider the simpler case where $X_i \sim N(\theta a_i,1)$, then the MLE
$$\hat{\theta} = \sum_{i=1}^n a_iX_i/\sum_{i=1}^n a_i^2 \sim N(\theta, 1/\sum_{i=1}^n a_i^2).$$ Note that if $\sum_{i=1}^n a_i^2 $ diverges, then $\hat{\theta}$ is consistent since the variance goes to 0 a... | Consistency of a sequence of Bernoullis
Consider the simpler case where $X_i \sim N(\theta a_i,1)$, then the MLE
$$\hat{\theta} = \sum_{i=1}^n a_iX_i/\sum_{i=1}^n a_i^2 \sim N(\theta, 1/\sum_{i=1}^n a_i^2).$$ Note that if $\sum_{i=1}^n a_ |
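A quick Monte Carlo check of the sampling distribution stated above, using the illustrative choice $a_i = 1/i$ (for which $\sum a_i^2$ converges, so the variance does not shrink to zero and consistency fails); θ and the simulation sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.7
a = 1.0 / np.arange(1.0, 201.0)      # a_i = 1/i, so sum(a_i^2) converges
s2 = float(np.sum(a ** 2))

# Monte Carlo check that Var(theta_hat) = 1 / sum(a_i^2) for the Gaussian MLE
# theta_hat = sum(a_i X_i) / sum(a_i^2), with X_i ~ N(theta * a_i, 1).
sims = 20000
x = theta * a + rng.normal(size=(sims, a.size))
theta_hat = x @ a / s2

emp_mean = float(theta_hat.mean())
emp_var = float(theta_hat.var())
```

Here `emp_var` stays near $1/\sum a_i^2 \approx 0.61$ no matter how many further terms of $a_i = 1/i$ are appended, which is exactly the failure of consistency the answer describes.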
43,040 | Consistency of a sequence of Bernoullis | Here is an estimator which can consistently estimate $\theta$ and for which the condition $\sum_{i=1}^{\infty}a_i ^2 =\infty$ is relevant.
We have
$$E(X_i) = \frac 12 + \theta a_i \implies \frac{2E(X_i)-1}{2a_i} = \theta$$
Set
$$Z_i = \frac{2X_i-1}{2a_i} \implies E(Z_i) = \theta$$
and
$$\text{Var}(Z_i) = \text{Var}(... | Consistency of a sequence of Bernoullis | Here is an estimator which can consistently estimate $\theta$ and for which the condition $\sum_{i=1}^{\infty}a_i ^2 =\infty$ is relevant.
We have
$$E(X_i) = \frac 12 + \theta a_i \implies \frac{2E( | Consistency of a sequence of Bernoullis
Here is an estimator which can consistently estimate $\theta$ and for which the condition $\sum_{i=1}^{\infty}a_i ^2 =\infty$ is relevant.
We have
$$E(X_i) = \frac 12 + \theta a_i \implies \frac{2E(X_i)-1}{2a_i} = \theta$$
Set
$$Z_i = \frac{2X_i-1}{2a_i} \implies E(Z_i) = \the... | Consistency of a sequence of Bernoullis
Here is an estimator which can consistently estimate $\theta$ and for which the condition $\sum_{i=1}^{\infty}a_i ^2 =\infty$ is relevant.
We have
$$E(X_i) = \frac 12 + \theta a_i \implies \frac{2E( |
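A simulation check that the $Z_i$ defined above are unbiased for $\theta$; the choices $a_i = 1/\sqrt{i}$ (so $\sum a_i^2$ diverges), $\theta = 0.2$, and the plain average of the $Z_i$ are illustrative assumptions, since the answer's final estimator is cut off in this record:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, sims = 0.2, 500, 5000

a = 1.0 / np.sqrt(np.arange(1.0, n + 1.0))   # sum(a_i^2) diverges (harmonic)
p = 0.5 + theta * a                          # all success probabilities in (0, 1)

x = (rng.random((sims, n)) < p).astype(float)  # X_i ~ Bernoulli(1/2 + theta a_i)
z = (2.0 * x - 1.0) / (2.0 * a)                # Z_i with E(Z_i) = theta
zbar = z.mean(axis=1)                          # plain average of the Z_i

bias = float(zbar.mean() - theta)
```

The average of the unbiased $Z_i$ recovers θ on average, though its variance depends on the $a_i$, which is where the divergence condition enters.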
43,041 | Consistency of a sequence of Bernoullis | I think I have finally figured out a way to solve this problem. The details are yet to be worked out but I feel if the sum is finite then for two different $\theta_1,\theta_2$ the sequence of likelihood ratios will be mutually contiguous, which would imply there is no consistent estimator. A complete solution will be a... | Consistency of a sequence of Bernoullis | I think I have finally figured out a way to solve this problem. The details are yet to be worked out but I feel if the sum is finite then for two different $\theta_1,\theta_2$ the sequence of likeliho | Consistency of a sequence of Bernoullis
I think I have finally figured out a way to solve this problem. The details are yet to be worked out but I feel if the sum is finite then for two different $\theta_1,\theta_2$ the sequence of likelihood ratios will be mutually contiguous, which would imply there is no consistent ... | Consistency of a sequence of Bernoullis
I think I have finally figured out a way to solve this problem. The details are yet to be worked out but I feel if the sum is finite then for two different $\theta_1,\theta_2$ the sequence of likeliho |
43,042 | How to check the linearity assumption? | If you want to see if the relationship between (the conditional expectation of) $y$ and $x_0$ is linear, after adjusting for control variables $x_1, x_2, \dots, x_p$, a simple graphical approach is to create an added-variable plot using the following procedure.
First, regress $y$ on $x_1, x_2, \dots, x_p$ and obtain th... | How to check the linearity assumption? | If you want to see if the relationship between (the conditional expectation of) $y$ and $x_0$ is linear, after adjusting for control variables $x_1, x_2, \dots, x_p$, a simple graphical approach is to | How to check the linearity assumption?
If you want to see if the relationship between (the conditional expectation of) $y$ and $x_0$ is linear, after adjusting for control variables $x_1, x_2, \dots, x_p$, a simple graphical approach is to create an added-variable plot using the following procedure.
First, regress $y$ ... | How to check the linearity assumption?
If you want to see if the relationship between (the conditional expectation of) $y$ and $x_0$ is linear, after adjusting for control variables $x_1, x_2, \dots, x_p$, a simple graphical approach is to |
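The numerical core of this added-variable procedure can be sketched with plain numpy on synthetic data (all coefficients below are made up); the final step would be to scatter `ry` against `rx`:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
x1, x2 = rng.normal(size=(2, n))                 # control variables
x0 = 0.5 * x1 + rng.normal(size=n)               # variable of interest
y = 2.0 * x0 + 1.0 * x1 - 0.5 * x2 + rng.normal(size=n)

def residuals(target, controls):
    # Residuals from an OLS regression of `target` on `controls` (+ intercept).
    X = np.column_stack([np.ones(len(target))] + list(controls))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ beta

ry = residuals(y, [x1, x2])      # step 1: y adjusted for the controls
rx = residuals(x0, [x1, x2])     # step 2: x0 adjusted for the controls

# The OLS slope of ry on rx equals the multiple-regression coefficient of x0
# (the Frisch-Waugh-Lovell theorem), here near the true value 2.
slope = float(rx @ ry / (rx @ rx))
```

Curvature in the `ry`-vs-`rx` scatter (e.g. as revealed by a smoother drawn through it) is the visual signal that the linearity assumption for $x_0$ is in doubt.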
43,043 | How to check the linearity assumption? | As @stephan-kolassa mentioned, an added spline term can be more beneficial than a quadratic term, since a quadratic will not explicitly capture the form of the nonlinearity in the model. A likelihood ratio test or F-test can be performed from there.
Now, there are problems with such a method that I think need to be considered.
The ... | How to check the linearity assumption? | As @stephan-kolassa mentioned. An added spline portion can be more beneficial than a quadratic term, since that will not explicitly determine the nonlinearity of the model. A likelihood ratio test or | How to check the linearity assumption?
As @stephan-kolassa mentioned. An added spline portion can be more beneficial than a quadratic term, since that will not explicitly determine the nonlinearity of the model. A likelihood ratio test or F-test can be performed from there.
Now, there are problems with such a method th... | How to check the linearity assumption?
As @stephan-kolassa mentioned. An added spline portion can be more beneficial than a quadratic term, since that will not explicitly determine the nonlinearity of the model. A likelihood ratio test or |
43,044 | Dealing with sparse categories in binary cross-entropy | I think the problem is the sigmoid activation function in your output layer. Binary crossentropy computes the sigmoid again as part of the loss computation (see the description in tensor flow: https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits). Just changing the activation function in t... | Dealing with sparse categories in binary cross-entropy | I think the problem is the sigmoid activation function in your output layer. Binary crossentropy computes the sigmoid again as part of the loss computation (see the description in tensor flow: https:/ | Dealing with sparse categories in binary cross-entropy
I think the problem is the sigmoid activation function in your output layer. Binary crossentropy computes the sigmoid again as part of the loss computation (see the description in tensor flow: https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_w... | Dealing with sparse categories in binary cross-entropy
I think the problem is the sigmoid activation function in your output layer. Binary crossentropy computes the sigmoid again as part of the loss computation (see the description in tensor flow: https:/ |
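The effect described (a sigmoid applied twice, once in the output layer and once again inside the loss) can be demonstrated numerically without Keras; the logits and labels here are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # Binary cross-entropy for predicted probabilities p and 0/1 labels y.
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

logits = np.array([-4.0, -1.0, 0.5, 3.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

p_once = sigmoid(logits)     # what the loss expects: one sigmoid
p_twice = sigmoid(p_once)    # the same sigmoid applied a second time

loss_once = bce(p_once, y)
loss_twice = bce(p_twice, y)
```

Squashing twice confines every "probability" to the interval (0.5, 0.731), so even a perfect classifier cannot drive the loss toward zero, which matches the stalled training the question describes.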
43,045 | Are there any seemingly simple probability question that are actually intractable? | The survival function $S_{t}$ is a quantity of interest in many (most?) kinds of event history analysis. It is commonly estimated, and 'survival curves' depicting $S_{t}$ versus time are often used to compare the cumulative probability of events among different groups. Statistical comparisons are often facilitated by i... | Are there any seemingly simple probability question that are actually intractable? | The survival function $S_{t}$ is a quantity of interest in many (most?) kinds of event history analysis. It is commonly estimated, and 'survival curves' depicting $S_{t}$ versus time are often used to | Are there any seemingly simple probability question that are actually intractable?
The survival function $S_{t}$ is a quantity of interest in many (most?) kinds of event history analysis. It is commonly estimated, and 'survival curves' depicting $S_{t}$ versus time are often used to compare the cumulative probability o... | Are there any seemingly simple probability question that are actually intractable?
The survival function $S_{t}$ is a quantity of interest in many (most?) kinds of event history analysis. It is commonly estimated, and 'survival curves' depicting $S_{t}$ versus time are often used to |
43,046 | Are there any seemingly simple probability question that are actually intractable? | You've got 5 variables and you're doing a "multivariate" analysis. You assume multivariate normality and enjoy a complete data set. Then the maximum likelihood estimates of the mean and covariance matrix are closed form and easy to calculate.
Oh wait, you didn't want to assume joint normality. You meant to assume that,... | Are there any seemingly simple probability question that are actually intractable? | You've got 5 variables and you're doing a "multivariate" analysis. You assume multivariate normality and enjoy a complete data set. Then the maximum likelihood estimates of the mean and covariance mat | Are there any seemingly simple probability question that are actually intractable?
You've got 5 variables and you're doing a "multivariate" analysis. You assume multivariate normality and enjoy a complete data set. Then the maximum likelihood estimates of the mean and covariance matrix are closed form and easy to calcu... | Are there any seemingly simple probability question that are actually intractable?
You've got 5 variables and you're doing a "multivariate" analysis. You assume multivariate normality and enjoy a complete data set. Then the maximum likelihood estimates of the mean and covariance mat |
43,047 | Are there any seemingly simple probability question that are actually intractable? | A simple probability problem that is intractable could be the following for a horse race.
If the horse trainer has a win rate of 25%, the jockey a 10% win rate, and the horse a 40% win rate, what is the un-normalised probability of success of the horse in today's race?
The trainer has trained the horse to have a 40... | Are there any seemingly simple probability question that are actually intractable? | A simple probability problem that is intractable could be the following for a horse race.
If the horse trainer has a win rate of 25% and the jockey a 10% win rate and the horse has a 40% win rate what | Are there any seemingly simple probability question that are actually intractable?
A simple probability problem that is intractable could be the following for a horse race.
If the horse trainer has a win rate of 25% and the jockey a 10% win rate and the horse has a 40% win rate what is the un-normalised probabilty of s... | Are there any seemingly simple probability question that are actually intractable?
A simple probability problem that is intractable could be the following for a horse race.
If the horse trainer has a win rate of 25% and the jockey a 10% win rate and the horse has a 40% win rate what |
43,048 | Computer vision algorithm that maps the positions of objects in 3D onto 2D image | I do not know of a publication in this area.
In my opinion it is a computer vision problem, comprising several smaller problems. You need a model of the pitch, the ability to segment and track the players, and to keep track of where the camera is looking.
Ideally, the camera is calibrated, so you have a mapping ... | Computer vision algorithm that maps the positions of objects in 3D onto 2D image | I do not know of a publication on this area.
In my opinion it is a computer vision problem, comprising several smaller problems. You need a model of the pitch, be able to segment and keep track of the | Computer vision algorithm that maps the positions of objects in 3D onto 2D image
I do not know of a publication on this area.
In my opinion it is a computer vision problem, comprising several smaller problems. You need a model of the pitch, be able to segment and keep track of the players, and to keep track of where th... | Computer vision algorithm that maps the positions of objects in 3D onto 2D image
I do not know of a publication on this area.
In my opinion it is a computer vision problem, comprising several smaller problems. You need a model of the pitch, be able to segment and keep track of the |
43,049 | What is a generalized confidence interval? | The two properties imply each other. Indeed, the implication is nearly trivial provided we formulate them mathematically, as you have requested: let's begin there.
I would like to remark that the language is confusing because it is attempting to make statements about probabilities by referring to "a large number of":... | What is a generalized confidence interval? | The two properties imply each other. Indeed, the implication is nearly trivial provided we formulate them mathematically, as you have requested: let's begin there.
I would like to remark that the la | What is a generalized confidence interval?
The two properties imply each other. Indeed, the implication is nearly trivial provided we formulate them mathematically, as you have requested: let's begin there.
I would like to remark that the language is confusing because it is attempting to make statements about probabi... | What is a generalized confidence interval?
The two properties imply each other. Indeed, the implication is nearly trivial provided we formulate them mathematically, as you have requested: let's begin there.
I would like to remark that the la |
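The long-run coverage property discussed above can be checked directly by simulation; the normal model with known σ and all constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, n, sims = 10.0, 2.0, 25, 10000

# Many repeated experiments, each yielding one 95% interval for mu.
x = rng.normal(mu, sigma, size=(sims, n))
xbar = x.mean(axis=1)
half = 1.96 * sigma / np.sqrt(n)          # known-sigma 95% half-width

covered = (xbar - half <= mu) & (mu <= xbar + half)
coverage = float(covered.mean())
```

Across the repetitions, close to 95% of the intervals contain the true parameter, which is the frequentist sense in which "95%" is meant.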
43,050 | What is a generalized confidence interval? | under construction
This is very similar to Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Your first property is more strict because it requires that the confidence interval contains 95% of the time the true parameter, conditional on the true parameter. The second property does n... | What is a generalized confidence interval? | under construction
This is very similar to Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Your first property is more strict because it requires that the confid | What is a generalized confidence interval?
under construction
This is very similar to Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Your first property is more strict because it requires that the confidence interval contains 95% of the time the true parameter, conditional on the... | What is a generalized confidence interval?
under construction
This is very similar to Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Your first property is more strict because it requires that the confid |
43,051 | How to prepare data for input to a sparse categorical cross entropy multiclassification model [closed] | The problem is that
metrics=['accuracy']
defaults to categorical accuracy. You need sparse categorical accuracy:
from keras import metrics
model.compile(loss='sparse_categorical_crossentropy',
optimizer=sgd,
metrics=[metrics.sparse_categorical_accuracy]) | How to prepare data for input to a sparse categorical cross entropy multiclassification model [close | The problem is that
metrics=['accuracy']
defaults to categorical accuracy. You need sparse categorical accuracy:
from keras import metrics
model.compile(loss='sparse_categorical_crossentropy',
| How to prepare data for input to a sparse categorical cross entropy multiclassification model [closed]
The problem is that
metrics=['accuracy']
defaults to categorical accuracy. You need sparse categorical accuracy:
from keras import metrics
model.compile(loss='sparse_categorical_crossentropy',
optimizer=sgd,
... | How to prepare data for input to a sparse categorical cross entropy multiclassification model [close
The problem is that
metrics=['accuracy']
defaults to categorical accuracy. You need sparse categorical accuracy:
from keras import metrics
model.compile(loss='sparse_categorical_crossentropy',
|
43,052 | Why ROC Curve on test set? | Why do we want to calculate ROC curve on test set? In many other resources that I read, they calculated ROC curve on
either training set or test set without a clear definition of "test
set", so pardon me if I read it wrong.
You want to calculate the ROC on the test set because that's actually the set of data that can ...
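A minimal sketch of the point above: the ROC/AUC should be computed from held-out labels and scores. The rank-based AUC below is the standard Mann-Whitney formulation (it assumes no tied scores); the test labels and scores are hypothetical:

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC (Mann-Whitney statistic); assumes no tied scores."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical held-out test labels and model scores: the ROC/AUC is computed
# on data the model never saw during training.
y_test = np.array([0, 0, 1, 1, 0, 1])
scores_test = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])
test_auc = auc(y_test, scores_test)
```

Computing the same quantity on the training set would answer a different (and usually optimistic) question about fit rather than generalization.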
43,053 | Interpretation of calibration curve | The curve labeled bias-corrected appears to be "over confident": its predictions for Predicted P(Class=1)<0.5 are too low and its predictions for Predicted P(Class=1)>0.5 are too high relative to Actual probability.
This is also the case for the curve labeled apparent, except at the extremes (roughly: x<=0.28 or x>=0....
43,054 | In cross-validation, which is the AUC population parameter I really want to estimate? | It is the first case, i.e. the expected value of the AUC and CI with the same test set size.
We can rule out the third case (infinite models) immediately because the cross-validation is done using only the trained model. Hence, it is not valid for any other model.
While the AUC for the first and second cases would be t...
43,055 | Covariance matrix for missing data | Another approach is to compute the maximum likelihood mean and covariance matrix, given all observed data. This requires an iterative algorithm, such as the expectation maximization algorithm. Accelerated variants and other types of optimization algorithms exist too. Compared to imputation, this approach can produce es...
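A minimal sketch of the EM approach mentioned above, for the multivariate-normal case with entries missing at random (coded as NaN). This is a textbook-style implementation, not the answer's own code; it assumes every row has at least one observed entry:

```python
import numpy as np

def em_mean_cov(X, n_iter=100):
    """EM estimate of the MVN mean and covariance with NaNs marking missing values."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)
    Xf = np.where(miss, mu, X)                 # initial fill with column means
    sigma = np.cov(Xf, rowvar=False, bias=True)
    for _ in range(n_iter):
        C = np.zeros((d, d))                   # accumulated conditional covariances
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            Soo = sigma[np.ix_(o, o)]
            Smo = sigma[np.ix_(m, o)]
            K = np.linalg.solve(Soo, Smo.T).T           # regression coefficients
            Xf[i, m] = mu[m] + K @ (X[i, o] - mu[o])    # conditional mean fill
            C[np.ix_(m, m)] += sigma[np.ix_(m, m)] - K @ Smo.T
        mu = Xf.mean(axis=0)
        diff = Xf - mu
        sigma = (diff.T @ diff + C) / n        # M-step with the correction term C
    return mu, sigma
```

Unlike single imputation, the `C` term keeps the covariance from being biased downward by treating the filled-in values as if they were observed.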
43,056 | How would I compute the standard deviation of data with errors? | This answer will assume your errors are standard deviations.
If you have a data set $x_1,\ldots,x_n$, then we can define the discrete mean and variance as
$$\langle{x}\rangle\equiv\frac{1}{n}\sum_ix_i \,,\,\hat{\sigma}^2\equiv\langle{x^2}\rangle-\langle{x}\rangle^2$$
which means
$$n\langle{x}\rangle=\sum_ix_i \,,\, \...
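The definitions above are easy to check numerically (hypothetical data): the discrete variance ⟨x²⟩ − ⟨x⟩² coincides with the population variance:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])        # hypothetical data set
mean_x = x.mean()                          # <x>
var_x = (x**2).mean() - x.mean()**2        # <x^2> - <x>^2
```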
43,057 | How would I compute the standard deviation of data with errors? | If your "errors" are standard deviations, you should use a weighted mean, where the weights are the inverse of the data variances, and compute the variance of the weighted mean.
For the formulae, cf. Wikipedia. This results from the law of uncertainty propagation (Wikipedia)
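A sketch of the inverse-variance weighting described above (measurements `x` and standard deviations `s` are hypothetical):

```python
import numpy as np

# Hypothetical measurements with per-point standard deviations.
x = np.array([10.2, 9.8, 10.5, 10.0])
s = np.array([0.1, 0.2, 0.4, 0.1])

w = 1.0 / s**2                       # inverse-variance weights
mean_w = np.sum(w * x) / np.sum(w)   # weighted mean
se_w = np.sqrt(1.0 / np.sum(w))      # its standard error, by error propagation
```

Note that the standard error of the weighted mean is smaller than any single measurement's error, and equal weights recover the plain mean.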
43,058 | How would I compute the standard deviation of data with errors? | Yes, the typical approach will not necessarily be the best estimate.
You are saying that there is a random variable, $x$, that is IID with some mean $\bar{x}$ and std.dev. $\sigma$. However, there is noise added to $x$, so that the observed variable is $y = x + e$, where $e$ is the error term.
If you know (or are wil...
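A sketch of the correction this model suggests: with independent noise, var(y) = var(x) + var(e), so subtracting the known error variance estimates the spread of $x$ itself (the data below are simulated, with Gaussian noise assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sigma_x = 2.0     # true spread of the underlying variable x
sigma_e = 1.0     # known measurement-error std. dev.

# Observed y = x + e
y = rng.normal(5.0, sigma_x, n) + rng.normal(0.0, sigma_e, n)

# Subtract the known error variance; clip at zero in case sampling noise
# pushes the estimate slightly negative.
var_x_hat = np.var(y, ddof=1) - sigma_e**2
sd_x_hat = np.sqrt(max(var_x_hat, 0.0))
```

The naive `np.std(y)` would instead estimate sqrt(σ_x² + σ_e²) ≈ 2.24 here.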
43,059 | expectation of log of expectation by Monte Carlo | The methods presented in this work (https://people.maths.ox.ac.uk/gilesm/files/SLOAN80-056.pdf) concern Multi-Level Monte Carlo (MLMC) methods for expectations of this form. MLMC is typically not designed to provide unbiased estimators per se, but can usually be modified to do so using the trick of McLeish.
Broadly, if...
43,060 | interval censored survival analysis with time dependent covariates | Semi-parametric models such as Cox regression are not easily applied in the presence of interval censoring. In this situation the default choice is to use parametric models.
On the other hand, time-dependent covariates are easily included in a right-censored setting and not with interval censoring. To include time-dep...
43,061 | What causes the elbow shape of the loss curve? | There are 3 reasons learning can slow, when considering the learning rate:
the optimal value has been reached (or at least a local minimum)
The learning rate is too big and we are overshooting our target
We are at a plateau with a very small gradient and the learning rate is too small to get us out of there quickly.
N...
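The elbow can be reproduced on a toy problem: gradient descent on a 1-D quadratic loses loss quickly while the gradient is large, then flattens as the gradient shrinks:

```python
import numpy as np

# Gradient descent on J(t) = t**2; dJ/dt = 2t, so each step multiplies t by
# (1 - 2*lr). The loss drops fast at first, then flattens into the elbow.
lr = 0.1
t = 5.0
losses = []
for _ in range(50):
    losses.append(t**2)
    t -= lr * 2 * t

early_drop = losses[0] - losses[10]   # steep part of the curve
late_drop = losses[10] - losses[20]   # flat part of the curve
```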
43,062 | Setting custom three-way interaction contrasts in R | This can be done in the lsmeans package pretty simply:
lsm = lsmeans(fit, ~A*B|C)
contrast(lsm, interaction = "pairwise")
This code generates and tests the contrast with coefficients $(1,-1,-1,1)$ at each level of factor $C$. This contrast is generated by taking the product of coefficients $(1,-1,1,-1)$ (for factor $A...
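The coefficient construction described above (an interaction contrast as the product of per-factor pairwise contrasts) can be sketched outside R via a Kronecker product; the cell means below are hypothetical:

```python
import numpy as np

# For two 2-level factors A and B, the product of the pairwise contrasts
# (1, -1) and (1, -1) gives the interaction contrast (1, -1, -1, 1).
c_A = np.array([1, -1])
c_B = np.array([1, -1])
c_AB = np.kron(c_A, c_B)

# Applied to the four A:B cell means (order A1B1, A1B2, A2B1, A2B2), this
# estimates the difference of differences at one level of C.
means = np.array([10.0, 12.0, 11.0, 16.0])   # hypothetical cell means
interaction_effect = c_AB @ means
```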
43,063 | What is the role of feature engineering in statistical inference? | I will try to illustrate the reason behind feature engineering in general, say I would like to analyze images.
When we design features, we have to keep in mind that they are a representation of the original data/image. Now, if I know which kind of information matters for the task I have to do, I need the features to ref...
43,064 | What is the role of feature engineering in statistical inference? | As this Wiki article makes clear (https://en.wikipedia.org/wiki/Feature_engineering), feature engineering is a key step in machine learning, involving the generation and cultivation of a set of features or attributes that may prove empirically (not necessarily theoretically) useful in the prediction or classification o...
43,065 | What is the role of feature engineering in statistical inference? | Predictors, dummy variables, or features are important in predictive modeling as they help capture genuine patterns in a data set and therefore make a better prediction, since a model that has shown a certain behavior will likely continue to show it. And feature engineering helps capture this behavior.
Now for s...
43,066 | What is the role of feature engineering in statistical inference? | Feature engineering, broadly speaking, does at least 2 things.
First, you might clean, restructure, or transform your features in such a way that the useful information is enhanced and redundant or noise information is minimized. Perhaps you know that one category of people/products/widgets is totally irrelevant and r...
43,067 | Tversky and Kahneman eye color problem | Great question! So,
P(Blue-Eyed Mom AND Blue-Eyed Daughter) = P(Blue-Eyed Mom | Blue-Eyed Daughter) * P( Blue-Eyed Daughter) = P( Blue-Eyed Daughter | Blue-Eyed Mom ) * P( Blue-Eyed Mom )
If P( Blue-Eyed Mom ) = P( Blue-Eyed Daughter ), then the conditional probabilities should, indeed, hold.
However, I think your exa...
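The identity in the answer can be checked numerically; all probabilities below are hypothetical:

```python
# Both factorizations of the joint probability must agree:
# P(Mom AND Daughter) = P(Daughter | Mom) * P(Mom) = P(Mom | Daughter) * P(Daughter).
p_mom = 0.3                  # hypothetical P(Blue-Eyed Mom)
p_daughter_given_mom = 0.6   # hypothetical P(Blue-Eyed Daughter | Blue-Eyed Mom)
p_daughter_given_not_mom = 0.1  # hypothetical P(Daughter | not Mom)

p_joint = p_daughter_given_mom * p_mom
p_daughter = p_joint + p_daughter_given_not_mom * (1 - p_mom)  # total probability
p_mom_given_daughter = p_joint / p_daughter                    # Bayes' rule
```

With these numbers P(Mom) differs from P(Daughter), so the two conditionals differ even though both factorizations of the joint agree exactly.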
43,068 | Any soft version for precision/recall? | There are different scenarios that make such "partial class memberships" sensible (in different ways) for both prediction [that is quite straightforward] and reference.
Remote sensing discusses the "problem of mixed pixels" which are not probabilities but fractions as in fuzzy sets - true classes are mixed because of ...
43,069 | Gibbs sampling for spike and slab priors | The notation in the paper uses $\mathcal J_k$ instead of $\lambda_k$. I am going to use $\lambda_k$ as in the question. I am going to drop subscript $k$ for simplicity. The model is then
\begin{align*}
\beta \mid \lambda &\sim N(0, \lambda \tau^2) \\
\lambda &\sim (1-w) \delta_{\nu_0} + w \delta_1.
\end{align*}
The res...
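For the two-point prior above, the Gibbs update for $\lambda$ is a draw from a two-point distribution whose success probability weighs the slab likelihood against the spike likelihood. A numpy sketch (parameter values hypothetical):

```python
import numpy as np

def lambda_posterior(beta, tau2, w, nu0):
    """P(lambda = 1 | beta) for the mixture prior above: lambda = 1 (slab) with
    prior weight w, lambda = nu0 (spike) with weight 1-w, and
    beta | lambda ~ N(0, lambda * tau2). One ingredient of a Gibbs sweep."""
    def normal_pdf(x, var):
        return np.exp(-0.5 * x**2 / var) / np.sqrt(2 * np.pi * var)
    a = w * normal_pdf(beta, tau2)              # slab likelihood * prior weight
    b = (1 - w) * normal_pdf(beta, nu0 * tau2)  # spike likelihood * prior weight
    return a / (a + b)

# A large |beta| favors the slab; a tiny |beta| favors the spike.
p_slab_large = lambda_posterior(beta=2.0, tau2=1.0, w=0.5, nu0=0.001)
p_slab_small = lambda_posterior(beta=0.001, tau2=1.0, w=0.5, nu0=0.001)
```

In an actual Gibbs sweep one would then set `lambda = 1` with this probability and `lambda = nu0` otherwise, before updating `beta` from its own full conditional.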
43,070 | Inconsistency in unit on gradient descent equation | As was concluded in the discussion in comments, dimensional analysis would necessitate that the relevant component of $\alpha$ is in fact in the units necessary to make
$$\alpha_j \frac{\partial}{\partial \theta_j}J(\theta)$$
have the same units as $\theta_j$.
43,071 | Inconsistency in unit on gradient descent equation | For the same reason that the slope of a line is not the "run", but is instead "rise" over "run", a gradient isn't a displacement in your theta-parameter space... anyone telling you otherwise is wrong. This is why the units don't match as you noted. However, the fundamental property of a gradient is that the directional...
43,072 | anomaly detection with gaussian mixture models | Gaussian Mixture Models allow assigning a probability to each datapoint of being created by one of k gaussian distributions.
These are normalized to sum up to one, allowing interpretation as "Which cluster is most probably responsible for this datapoint?"
If you do not normalize, you have absolute probabilities which...
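The absolute-density idea above can be sketched with a fixed (hypothetically fitted) two-component 1-D model: a point far from every component gets a low absolute density even though its normalized responsibilities still sum to one:

```python
import numpy as np

def mixture_density(x, means, sds, weights):
    """Absolute density of a 1-D Gaussian mixture -- the quantity to
    threshold for anomaly detection (not the normalized responsibilities)."""
    x = np.asarray(x, dtype=float)[:, None]
    comp = np.exp(-0.5 * ((x - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    return comp @ weights

# Hypothetical fitted 2-component model.
means = np.array([0.0, 10.0])
sds = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

dens = mixture_density([0.0, 10.0, 5.0], means, sds, weights)
is_anomaly = dens < 1e-3      # hypothetical density threshold
```

The point at 5.0 sits between the components: its responsibilities are 0.5/0.5, yet its absolute density is tiny, which is exactly what flags it.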
43,073 | Can all neural network with DAG topology be trained by Back-prop? | Can all neural networks having directed acyclic graph (DAG) topology be trained by back propagation methods? I mean by the back propagation methods like Stochastic gradient descent, AdaGrad, Adam, etc.
The methods you mention are gradient-based, and consequently won't work if one activation function used by the artifici...
43,074 | Likelihood Factorization | I don't know whether this winds up being a good thing to do, but you can express the distribution of $y_3$ conditional on $y_1,y_2$, which is a 1D Normal, using the standard Schur complement approach shown in https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions .
Let $\mu_1,\mu_2,\m...
43,075 | Poisson regression: how do number of observations and offset affect variance of betas? | For Poisson likelihoods estimated in log-linear models, the number of observations does not affect the variance of the betas. This is because there is a mean-variance relationship. The variance-covariance estimate of the coefficients of a Poisson regression model is given by:
$$ \text{var}(\hat{\beta}) = \left( \mathbf{X}^...
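The mean-variance relationship above implies var(β̂) ≈ (XᵀWX)⁻¹ with W = diag(μ) for the log link. A numpy sketch (hypothetical design and coefficients) showing that doubling the offsets/exposure halves the coefficient covariance even though the number of rows is unchanged:

```python
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # hypothetical design
beta = np.array([0.5, 0.2])                                     # hypothetical coefficients

def beta_cov(X, beta, offset):
    mu = np.exp(X @ beta + offset)     # fitted Poisson means under the log link
    W = np.diag(mu)
    return np.linalg.inv(X.T @ W @ X)  # asymptotic var-cov of beta_hat

cov1 = beta_cov(X, beta, offset=np.zeros(4))
cov2 = beta_cov(X, beta, offset=np.full(4, np.log(2.0)))  # exposure doubled
```

Because W scales linearly with the exposure, `cov2` is exactly `cov1 / 2`: the information lives in the expected counts, not the row count.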
43,076 | Does it make a difference to run xgboost on hot encoded variables or single factor variable? | Yes, it makes a difference. There is no good answer, you should try both. If you are looking for performance, you may even need to try various encodings and stack the models with the different encodings...
Another approach could be to replace the factors (with a relatively large number of occurrences) by the conditional ...
43,077 | Should L2 regularization be corrected for scale? | Recall where this term actually comes from: the amount of weight decay we want to have for each weight at every iteration
del(E)/del(w(j,k,l)) = del(cross entropy error)/del(w(j,k,l)) + λ*w(j,k,l)
say the λ is 0.001; essentially it means if the Error is not affected by this particular weight, decay it by 0.1%. The number of units an...
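The scale sensitivity in question can be illustrated with closed-form ridge regression (a linear stand-in for the network weight decay above; the data are simulated): rescaling a feature changes the effective amount of shrinkage unless λ is adjusted:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form L2-penalized solution: (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=100)

b = ridge(X, y, lam=10.0)
b_scaled = ridge(X * np.array([1.0, 100.0]), y, lam=10.0)  # blow up one feature

# With lam = 0 the fit is scale-equivariant (coefficient just divides by 100);
# with lam > 0 the blown-up feature is penalized relatively less, so the
# rescaled coefficient is no longer the old one divided by 100.
```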
43,078 | Are spectral decompositions of time-series useful for modeling/forecasting, or are they more of a tool for analysis? | I'd like to informally try to approach a few of these.
1) Are spectral decompositions useful for modeling/forecasting, or are they typically used only for analysis purposes.
1A) When modelling, I use the spectrum to give information about the seasonal components of my data. Simplistically, I might consider a model of ...
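Using the spectrum to find seasonal components, as this answer suggests, can be sketched as below. This is an illustrative addition; the series, its period of 12, and the noise level are all invented:

```python
import numpy as np

# A monthly-like series with a period-12 seasonal component plus noise.
rng = np.random.default_rng(0)
n = 240
t = np.arange(n)
x = np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(n)

# Periodogram via the FFT; frequency k/n is in cycles per observation.
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(n)

peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero frequency
print(f"dominant period ≈ {1 / peak:.1f} observations")
```

The location of the dominant peak then suggests which seasonal terms to include in the model.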
43,079 | ideas on machine-learning algorithms to classify products | Addressing your issues one by one:
1) OCR: This is probably the easiest of your problems as there are many algorithms that perform well in this task. As a reference, in the best known handwritten digit dataset, MNIST, several algorithms have achieved over 99.5% accuracy (the state-of-the-art being Convolutional Neural ...
43,080 | ideas on machine-learning algorithms to classify products | It seems that you should define similarity among the entities.
You have plenty of sources for similarity. You mentioned distance on the names (edit distance) and membership in groups. Note that you can extend the similarity by groups to many groups and many similarity types. Groups can be belonging to the same recipe, s...
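The edit-distance similarity mentioned above can be sketched with a standard dynamic-programming Levenshtein distance. This is an illustrative addition; the product names compared at the end are invented:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(edit_distance("tomato sauce", "tomatoe sauce"))  # → 1
```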
43,081 | Causality and stationarity of AR models | A linear process $X_t$ is defined to be causal if $X_t=\psi(B)w_t$ where $w_t$ are white noises and $\sum_{j=1}^{\infty}|\psi(j)|<\infty$.
$X_t$ is defined to be invertible if we can write $w_t=\pi(B) X_t$ where $\pi(B)=\pi_0 + \pi_1 B+\pi_2 B^2 + \cdots$ and $\sum_{j=0}^{\infty}|\pi(j)|<\infty$.
Apparently, an arbitra...
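As a hedged illustration of the causality condition above: an AR model is causal (with a stationary solution) when the roots of its AR polynomial $\phi(z)=1-\phi_1 z-\dots-\phi_p z^p$ lie outside the unit circle. The coefficients below are invented for the example:

```python
import numpy as np

def ar_is_causal(phi):
    """True if all roots of 1 - phi_1 z - ... - phi_p z^p lie outside |z| = 1."""
    # np.roots expects coefficients ordered from the highest power down.
    coeffs = np.r_[-np.asarray(phi, dtype=float)[::-1], 1.0]
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

print(ar_is_causal([0.5]))   # AR(1) with |phi| < 1: causal
print(ar_is_causal([1.2]))   # explosive AR(1): not causal
```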
43,082 | Estimating a variable from its cosine corrupted by additive Gaussian noise | The CRLB in the general scalar case where we want to estimate $\theta=g(a)$, is given by:
$$\mathbb{E}[(\theta-\hat{\theta})^2]\geq \frac{\left(\frac{\partial g}{\partial a}\right)^2}{I(a)}$$
where $I(a)$ is the Fisher information associated with $a$. Here $a=\cos\theta$. Since $\theta=\cos^{-1}(a)$, one must square t...
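As a hedged continuation of the truncated derivation above (assuming the observation model $y=a+w$ with $w\sim\mathcal N(0,\sigma^2)$, which the truncated text does not confirm), the bound works out to:

```latex
% For y = a + w with w ~ N(0, sigma^2), the Fisher information is I(a) = 1/sigma^2.
% With theta = g(a) = arccos(a), we have dg/da = -1/sqrt(1 - a^2), so
\mathbb{E}\!\left[(\theta-\hat{\theta})^2\right]
  \;\geq\; \left(\frac{-1}{\sqrt{1-a^2}}\right)^{\!2} \sigma^2
  \;=\; \frac{\sigma^2}{1-\cos^2\theta}
  \;=\; \frac{\sigma^2}{\sin^2\theta}.
```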
43,083 | Learning functional analysis for studying kernels | You haven't given us much information about your current mathematical background. Do you have the background of a typical undergraduate science or engineering student (single and multivariable calculus, ordinary differential equations and perhaps an exposure to Fourier series)? Have you taken any introductory courses ...
43,084 | Exercise on finding the joint probability distribution | Statistical reasoning provides an elegant solution.
Because the integral of $f$ is used to define inverse trig functions, one is immediately tempted to interpret $X=\sin^2(A)$ for a random variable $A$ ranging from (say) $0$ to $\pi/2$. Substituting $\sin(a)$ for $x$ in $f$ gives
$$f(x)\,\mathrm{d}x = f(\sin^2(a))\mat...
43,085 | Why aren't we simply using $R_j^2$ instead the VIF? | You make a good point. I'd like to point out that one thing we like to use VIF for is its relationship to the standard error of the beta coefficient estimates. We can say that the standard error is a function of MSE (the total variability around the model), $s^2\left\{X_k\right\}$ (the variability of the kth variab...
43,086 | Why aren't we simply using $R_j^2$ instead the VIF? | When I learned it, I was told that the larger numbers made it easier to identify to the naked eye. My instructor also used 10 as the cut off and not 5. So if you had many VIF calculations in a matrix of some sort, you would round to the digit and then numbers with 2 digits = multicollinearity.
Also I think the VIF intuit...
43,087 | Is it valid to use an ARMAX model for TV Attribution? | This is an excellent question. I recommend that you get a cup of coffee and carefully read through Rob Hyndman's blog post on "The ARIMAX model muddle".
Basically, the answer is no. If you fit a straightforward AR(I)MAX model, your covariate coefficients cannot be interpreted as the promotion effect. The problem is tha...
43,088 | How to down-weight older data in time series regression | A common method is to use an exponentially weighted cost function:
$$ \sum_i \lambda^{i} e(t-i)^2 $$
where $e(t)$ is the residual error, and $\lambda$ is the forgetting rate. If $\lambda=1$, you get back least squares regression.
You can use recursive least squares (RLS) to find a solution efficiently.
43,089 | Bonferroni Correction - When not to use it | You should generally address the issue of multiple testing in some way. That doesn't mean Bonferroni is the best approach in all cases, however. Different methods address different error rates and the proper method depends on the goals of the testing and the consequences of making a Type I error. Try this paper:
Frane,...
43,090 | Bonferroni Correction - When not to use it | The Bonferroni correction is a pretty conservative approach to hypothesis testing. For $n$ tests, it requires a p-value of $p/n$ where $p$ is your significance level. This guarantees that the probability of you getting a positive by pure chance stays below $p$, but sometimes it makes it go way below $p$, thus also ...
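The $p/n$ threshold described above in a minimal form. This is an illustrative addition; the p-values and significance level are invented:

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject H0_i iff p_i < alpha / n, controlling the family-wise error rate."""
    n = len(p_values)
    return [p < alpha / n for p in p_values]

# With n = 3 tests the per-test threshold is 0.05 / 3 ≈ 0.0167.
print(bonferroni_reject([0.001, 0.02, 0.04]))  # → [True, False, False]
```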
43,091 | Do I need to use multivariate regression or several regression analyses? | Let $Y_i$ denote the vector of $i$th response, wehre $i = 1, \dots, r$. In your example $r$ is 5 since you have 5 test scores. Let $X$ be an $n \times p$ matrix of predictors. If you implement $r$ separate regressions (one for each $Y_i$),
$$Y_i = X\beta_i + \epsilon_i, $$
where $\epsilon_i \sim N_n(0, \sigma^2_iI_n)$....
43,092 | Multi-class classification easier than binary classification? | This is actually true as it is possible from this simulated example using R
library(mvtnorm)
sigma <- matrix(c(1,0,0,1), ncol=2)
x1 <- rmvnorm(n=500, mean=c(0,0), sigma=sigma, method="chol")
x2 <- rmvnorm(n=500, mean=c(3,0), sigma=sigma, method="chol")
x3 <- rmvnorm(n=500, mean=c(1.5,3), sigma=sigma, method="chol")
x4 <...
43,093 | Displaying mean +/- st. error or confidence interval on bar charts | SE and CI give us different - albeit related - information about the data. SE tells us about the variability of the mean values, e.g. if we were to repeat the study. CI tells us about the accuracy of our estimates. They are related because SE is used to calculate the CIs.
The why for using one or the other thus comes d...
43,094 | In RNN Back Propagation through time, why is the D(h_t)/D(h_(t-1)) diagonal? | I'll provide a sketch of the derivation. Omitting the bias term (since we take derivatives later anyway), the recursion looks like:
$$\mathbf{h_{t+1}}=\tanh(\mathbf{U x_{t} + W h_{t}})$$ where the $\tanh$ is taken elementwise.
Now, since $\mathbf{h_{t}}$ and $\mathbf{h_{t+1}}$ are vectors, the derivative $\frac{\partial...
43,095 | Box-Cox transformation for repeated measures ANOVA (rANOVA) in R | As @kjetil b halvorsen suggests I would go with linear mixed models: here are a relevant paper and post.
43,096 | Combining multiple classifiers | This may be helpful as well: Kuncheva, L. I. (2004). Combining pattern classifiers: methods and algorithms
Edit: For my similar problem, I ended up finding classifier probabilities based on accuracy values as described in the question here: Assigning probabilities to ensemble experts (classification)
using Theore...
43,097 | Equality vs. Equality in Distribution ($t$-distribution for example) | $5.5$ years after posting this question, I've since taken measure-theoretic probability and can answer this question.
The very definition of a random variable $T \sim t_{\nu}$ is
$$T = \dfrac{Z}{\sqrt{V/\nu}}$$
for some $Z \sim \mathcal{N}(0, 1)$ and $V \sim \chi^2_\nu$ independent, with probability one ("almost surely...
43,098 | How is it logically possible to sample a single value from a continuous distribution? | It's because zero probability should not be conflated with impossibility. Of course some value has to be sampled, so rather than observing that number and saying to yourself "what was the probability I would have observed this?" and then being confounded by the answer, pick a number arbitrarily and then draw samples u...
43,099 | Principal component analysis with group data | In general, I don't see a reason why you couldn't do a PCA to visualize and interpret your multivariate dataset (however, since you didn't provide data, I cannot say for sure). As for your second question, I would keep the two groups (drought, control) and not subtract them from each other. That way you will be able...
43,100 | Is the inductive bias a prior? | A prior is a property of the data and not the algorithm used on the data.
"Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence. This is the [inductive] bias used in the Naive Bayes classifier." - Wikipedia
Inductive biases can be thought of ...