idx | question | answer
14,201 | Observed information matrix is a consistent estimator of the expected information matrix?

The answer above using stochastic equicontinuity works very well, but here I am answering my own question by using a uniform law of large numbers to show that the observed information matrix is a strongly consistent estimator of the expected information matrix, i.e. $N^{-1}J_{N}(\hat{\theta}_{N}(Y))\overset{a.s.}{\longrightarrow}$ ...
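As a hedged numerical check of this claim (my own illustration, not part of the original answer): for i.i.d. Bernoulli($p$) data the scaled observed information at the MLE equals $1/(\hat{p}(1-\hat{p}))$, which converges to the Fisher information $1/(p(1-p))$.

```python
import random

# Hedged illustration (not from the original answer): for i.i.d. Bernoulli(p)
# data, check that the scaled observed information N^{-1} J_N(p_hat) is close
# to the expected (Fisher) information I(p) = 1/(p(1-p)) for large N.
random.seed(0)
p_true = 0.3
N = 200_000
x = [1 if random.random() < p_true else 0 for _ in range(N)]

s = sum(x)
p_hat = s / N                                  # MLE of p
# Observed information: J_N(p) = -d^2/dp^2 log-likelihood
#                              = sum(x)/p^2 + (N - sum(x))/(1-p)^2
J_N = s / p_hat**2 + (N - s) / (1 - p_hat) ** 2
fisher = 1 / (p_true * (1 - p_true))

print(J_N / N, fisher)
```

For this model the two quantities agree exactly at $\hat{p}$, so consistency reduces to $\hat{p} \to p$.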
14,202 | How to predict when the next event occurs, based on times of previous events?

Hidden Markov models would apply if the data were random emissions from some underlying unobserved Markov model; I wouldn't rule that out, but it doesn't seem a very natural model.
I would think about point processes, which match your particular data well. There is a great deal of work on predicting earthquakes (thoug...
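As a hedged minimal example of the point-process idea (the event times below are made up for illustration): under the simplest model, a homogeneous Poisson process, interarrival times are exponential, so the rate can be estimated from the history and the expected wait to the next event is one over that rate.

```python
# Hypothetical observed event times (illustration only).
event_times = [1.2, 3.5, 4.1, 7.8, 9.0, 12.4]

# Homogeneous Poisson process: interarrivals are i.i.d. Exponential(rate),
# so estimate the rate as (number of interarrivals) / (elapsed time).
n_gaps = len(event_times) - 1
elapsed = event_times[-1] - event_times[0]
rate = n_gaps / elapsed

# Memorylessness: expected time of the next event is the last event + 1/rate.
expected_next = event_times[-1] + 1 / rate
print(rate, expected_next)
```

Richer point-process models (e.g. self-exciting Hawkes processes) replace the constant rate with a history-dependent intensity.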
14,203 | How to predict when the next event occurs, based on times of previous events?

Predicting the likely time using the multivariate Bayesian scan statistic (MBSS) could be of assistance. MBSS has the advantage of improving the timeliness and accuracy of event detection.
14,204 | Analysis of time series with many zero values

To restate your question: “How does the analyst deal with long periods of no demand that follow no specific pattern?”
The answer to your question is Intermittent Demand Analysis or Sparse Data Analysis. This arises normally when you have "lots of zeros" relative to the number of non-zeros. The issue is that there are tw...
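One standard intermittent-demand technique is Croston's method; here is a hedged sketch (my own minimal implementation, not from the answer): exponentially smooth the nonzero demand sizes and the intervals between them separately, and forecast their ratio.

```python
def croston(demand, alpha=0.1):
    """Croston's method sketch: per-period forecast for intermittent demand.

    Smooths nonzero demand sizes and inter-demand intervals separately;
    the forecast is (smoothed size) / (smoothed interval).
    """
    size = None       # smoothed nonzero demand size
    interval = None   # smoothed inter-demand interval
    periods_since = 1
    for d in demand:
        if d > 0:
            if size is None:              # initialize on the first nonzero demand
                size, interval = d, periods_since
            else:
                size += alpha * (d - size)
                interval += alpha * (periods_since - interval)
            periods_since = 1
        else:
            periods_since += 1
    if size is None:
        return 0.0
    return size / interval

print(croston([0, 0, 5, 0, 0, 0, 3, 0, 4]))
```

Variants such as the Syntetos-Boylan approximation apply a small bias correction to this ratio.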
14,205 | What does the term "sparse prior" refer to (FBProphet Paper)?

Sparse data is data with many zeros. Here the authors seem to call the prior sparse because it favors zeros. This is pretty self-explanatory if you look at the shape of the Laplace (aka double exponential) distribution, which is peaked around zero.
(image source: Tibshirani, 1996)
This effect is true for any...
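To make the "favors zeros" point concrete (a hedged illustration of the standard lasso/ridge contrast, not code from the paper): with a Gaussian likelihood, the MAP estimate under a Laplace prior soft-thresholds small coefficients exactly to zero, while a Gaussian prior only shrinks them proportionally.

```python
import math

def map_laplace(beta_ls, lam):
    """MAP under a Laplace prior (soft thresholding): small values become exactly 0."""
    return math.copysign(max(abs(beta_ls) - lam, 0.0), beta_ls)

def map_gaussian(beta_ls, lam):
    """MAP under a Gaussian prior (ridge-style scaling): shrinks but never zeroes."""
    return beta_ls / (1.0 + lam)

for b in [2.0, 0.3, -0.1]:
    print(b, map_laplace(b, 0.5), map_gaussian(b, 0.5))
```

The exact zeros from the Laplace prior are what makes the resulting posterior mode "sparse".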
14,206 | Minimizing bias in explanatory modeling, why? (Galit Shmueli's "To Explain or to Predict")

This is indeed a great question, which requires a tour into the world of the use of statistical models in econometric and social science research (from what I have seen, applied statisticians and data miners who do descriptive or predictive work typically don't deal with bias of this form). The term "bias" that I used ...
14,207 | Minimizing bias in explanatory modeling, why? (Galit Shmueli's "To Explain or to Predict")

“In what sense does minimizing the bias in estimates give the most accurate representation of the underlying theory?”
In the usual sense intended in econometrics. In typical economic models some parameters are involved; the original role of econometrics was to quantify them. So in economics/econometrics models the param...
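A hedged simulation of "unbiased quantification of a structural parameter" (the parameter values are made up for illustration): averaging the OLS slope across many simulated samples recovers the true coefficient.

```python
import random

random.seed(1)
beta0, beta1 = 2.0, 0.5   # "true" structural parameters (illustrative)

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Draw many samples from the "true" model and estimate the slope each time.
estimates = []
for _ in range(2000):
    xs = [random.uniform(0, 10) for _ in range(50)]
    ys = [beta0 + beta1 * x + random.gauss(0, 1) for x in xs]
    estimates.append(ols_slope(xs, ys))

print(sum(estimates) / len(estimates))  # close to beta1 = 0.5
```

Individual estimates scatter around the truth, but their average does not systematically miss it; that is the sense in which minimizing bias serves explanatory accuracy.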
14,208 | What is the significance of the number of convolution filters in a convolutional network?

“What does the number of filters in a convolution layer convey?” I usually like to think of filters as feature detectors. Although it depends on the problem domain, the significance of the number of feature detectors intuitively is the number of features (like edges, lines, object parts, etc.) that the network can potentially lea...
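The bookkeeping is easy to sketch (a standard parameter count, with made-up layer sizes): each filter yields one output channel, so both the number of output channels and the layer's parameter count scale linearly with the filter count.

```python
def conv2d_params(in_channels, n_filters, kernel_size, bias=True):
    """Parameter count of a standard 2-D convolution layer.

    Each of the n_filters filters has kernel_size x kernel_size x in_channels
    weights (plus one bias term), and produces one output channel.
    """
    weights = kernel_size * kernel_size * in_channels * n_filters
    return weights + (n_filters if bias else 0)

# Doubling the filter count doubles both output channels and parameters:
print(conv2d_params(3, 32, 3))   # 3*3*3*32 + 32 = 896
print(conv2d_params(3, 64, 3))   # 3*3*3*64 + 64 = 1792
```

More filters mean more learnable feature detectors, at a linear cost in parameters and computation.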
14,209 | How to apply Bayes' theorem to the search for a fisherman lost at sea

Assuming independence between the grid cells, then yes, it appears Bayes' theorem has been properly applied.
The denominator can be expanded, e.g.
$$P(X) = P(X|A)P(A) + P(X|A^c)P(A^c)$$
using the law of total probability, where $A^c$ is the complement of $A$, i.e. the person is not there. Likely you would assume $P(X|A^...
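A hedged sketch of this update (the function name and numbers are mine): take $X$ to be "a search of the cell finds nothing" and $A$ "the person is in the cell"; assuming a search of the wrong cell surely finds nothing, $P(X|A^c) = 1$.

```python
def posterior_after_failed_search(prior, p_detect):
    """P(A | X): probability the person is in the cell given a failed search.

    prior    = P(A), prior probability the person is in the cell
    p_detect = probability the search finds them *if* they are there,
               so P(X|A) = 1 - p_detect and we assume P(X|A^c) = 1.
    """
    p_x_given_a = 1 - p_detect
    p_x = p_x_given_a * prior + 1.0 * (1 - prior)  # law of total probability
    return p_x_given_a * prior / p_x

# A failed search lowers the cell's probability, but not to zero:
print(posterior_after_failed_search(prior=0.4, p_detect=0.8))
```

The probability mass removed from the searched cell is redistributed across the remaining cells when the whole grid is renormalized.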
14,210 | How to apply Bayes' theorem to the search for a fisherman lost at sea

I was pointed to a book that has a whole chapter dedicated to my question - Naval Operations Analysis - by a former professor who used to be a helicopter pilot and has actually performed search and rescue missions, no less!
In chapter 8 an example is provided something like this (I customized it a bit):
To start with, ...
14,211 | Why not always use bootstrap CIs?

It is beneficial to look at the motivation for the BCa interval and its mechanisms (i.e. the so-called "correction factors"). The BCa intervals are one of the most important aspects of the bootstrap because they are the more general case of the Bootstrap Percentile Intervals (i.e. the confidence interval based solely up...
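For contrast with BCa, here is a hedged sketch of the plain percentile interval it generalizes (my own minimal implementation): resample with replacement, recompute the statistic, and read off empirical quantiles.

```python
import random

def percentile_ci(data, stat, n_boot=5000, alpha=0.05, seed=0):
    """Plain bootstrap percentile CI -- the special case that BCa generalizes."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

def sample_mean(xs):
    return sum(xs) / len(xs)

random.seed(2)
sample = [random.gauss(10, 2) for _ in range(100)]
lo, hi = percentile_ci(sample, sample_mean)
print(lo, hi)
```

BCa adjusts these raw quantiles with bias and acceleration corrections, which is why it achieves better coverage on skewed statistics.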
14,212 | Why not always use bootstrap CIs?

This is like many situations that arise when comparing fully nonparametric methods with parametric methods that rely on broad assumptions (e.g., a distribution with finite variance leading to the CLT). Assuming that both methods are constructed appropriately, we usually find three things: (1) the paramet...
14,213 | Why not always use bootstrap CIs?

The other day, I came across a situational constraint where bootstrap analysis would not work on my presumed normally distributed sample.
I was at the park with my four-year-old daughter, who started gathering acorns like they were treasures. Her hands were quickly full, so I gestured that it would be okay to deposit th...
14,214 | Why not always use bootstrap CIs?

OP: “This makes me wonder whether there is any good reason not to always use bootstrapping. Given the difficulty of assessing whether a distribution is normal...”
Traditional parametric methods rely on the CLT. The data don't have to be Normal, but the sampling distribution should be (asymptotically) Normal.
Alas, boot...
14,215 | What do normal residuals mean and what does this tell me about my data?

Linear regression in fact models the conditional expected values of your outcome. That means: if you knew the true values of the regression parameters (say $\beta_0$ and $\beta_1$), given a value of your predictor $X$, filling that out in the equation
$$E[Y|X] = \beta_0 + \beta_1 X$$
will have you calculate the expecte...
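A hedged simulation of this point (the true parameter values are made up): the fitted line evaluated at a given $X$ estimates the conditional mean $E[Y|X]$ there.

```python
import random

random.seed(3)
b0, b1 = 1.0, 2.0                    # "true" parameters (made up)
xs = [random.uniform(0, 5) for _ in range(5000)]
ys = [b0 + b1 * x + random.gauss(0, 1) for x in xs]

# Closed-form OLS estimates of the intercept and slope.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b1_hat = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
b0_hat = my - b1_hat * mx

# The fitted value at X = 3 estimates E[Y | X = 3] = 1 + 2*3 = 7.
print(b0_hat + b1_hat * 3)
```

The residuals are the deviations of the observations from these estimated conditional means, which is why their distribution carries information about the error term.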
14,216 | What do normal residuals mean and what does this tell me about my data?

Normality of the residuals is an assumption of running a linear model. So, if your residuals are normal, it means that your assumption is valid and model inference (confidence intervals, model predictions) should also be valid. It's that simple!
14,217 | What do normal residuals mean and what does this tell me about my data?

It could mean a lot or it could mean nothing. If you fit a model to get the highest R-squared, it could mean that you have been foolish. If you fit a parsimonious model in which the variables are necessary and needed, and take care to identify outliers, then you have done a good job. Take a look here for more on th...
14,218 | What do normal residuals mean and what does this tell me about my data?

In some cases, the assumption that the data is approximately linear allows us to use OLS to minimize the number of observations in the data that are far from a straight line.
Then the residual is the difference between the true value and the fitted value, and we hope this difference is approximately zero.
But in most ...
14,219 | Elementary statistics for jurors

I very much enjoyed reading Gerd Gigerenzer's book "Das Einmaleins der Skepsis" - I believe there are two English versions, Reckoning with Risk and Calculated Risks.
I think that could be a good brush-up in basic statistics which I'd recommend to everyone. What may be even more important in the context of a jury is th...
14,220 | Elementary statistics for jurors

I don't think you should study anything, unless your goal is to be kicked off during the Voir Dire. Personally, telling lawyers that I am a psychometrician has gotten me removed from a few juries.
14,221 | Elementary statistics for jurors

I am not sure that specific statistical knowledge is crucial for jurors. Jurors need to understand the strength of evidence and decide what "preponderance of the evidence" and "beyond a reasonable doubt" mean. These are subjective notions. It is up to the prosecution and the defense to present evidence and explain any s...
14,222 | Interpreting proportions that sum to one as independent variables in linear regression

As a follow-up, and what I think is the correct answer (it seems reasonable to me): I posted this question to the ASA Connect listserv, and got the following response from Thomas Sexton at Stony Brook:
"Your estimated linear regression model looks like:
ln(Radon) = (a linear expression in other variables) + 0.43M + 0.92I
...
14,223 | Online estimation of quartiles without storing observations

The median is the point at which half the observations fall below and half above. Similarly, the 25th percentile is the median of the data between the min and the median, and the 75th percentile is the median of the data between the median and the max, so yes, I think you're on solid ground applying whatever median algorithm you use f...
14,224 | Online estimation of quartiles without storing observations

A very slight change to the method you posted and you can compute any arbitrary percentile, without having to compute all of the quantiles. Here's the Python code:

    class RunningPercentile:
        def __init__(self, percentile=0.5, step=0.1):
            self.step = step
            self.step_up = 1.0 - percentile
            self.ste...
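The truncated class can be completed along the following lines; this is a hedged reconstruction of the idea rather than the original code, with the two step sizes chosen so that the up- and down-drifts balance exactly at the requested percentile.

```python
import random

class RunningPercentile:
    """Streaming percentile estimate in O(1) memory (hedged reconstruction).

    On each observation the estimate moves up by step * percentile when the
    sample is above it, or down by step * (1 - percentile) otherwise; the
    drift is zero exactly when a `percentile` fraction of the data lies below.
    """
    def __init__(self, percentile=0.5, step=0.1):
        self.percentile = percentile
        self.step = step
        self.x = None

    def push(self, observation):
        if self.x is None:
            self.x = observation
        elif observation > self.x:
            self.x += self.step * self.percentile
        else:
            self.x -= self.step * (1.0 - self.percentile)
        return self.x

random.seed(4)
rp = RunningPercentile(percentile=0.75, step=0.01)
for _ in range(100_000):
    rp.push(random.gauss(0.0, 1.0))
print(rp.x)  # near the 75th percentile of N(0,1), about 0.67
```

The fixed step trades accuracy for adaptivity: a smaller step gives a tighter estimate but tracks distribution drift more slowly.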
14,225 | Is there any statistical test that is parametric and non-parametric?

It is fundamentally difficult to tell exactly what is meant by a "parametric test" and a "non-parametric test", though there are many concrete examples where most will agree on whether a test is parametric or non-parametric (but never both). A quick search gave this table, which I imagine represents a common practical ...
14,226 | Is there any statistical test that is parametric and non-parametric?

Parametric is used in (at least) two meanings: (A) to declare that you are assuming the family of the noise distribution up to its parameters; (B) to declare that you are assuming the specific functional relationship between the explanatory variables and the outcome.
Some examples:
A quantile regression with a linear link would...
14,227 | Is there any statistical test that is parametric and non-parametric? | I suppose that depends on what they mean by "parametric and non-parametric"? At the same time exactly both, or a blend of the two?
Many consider the Cox proportional hazards model to be semi-parametric, as it doesn't parametrically estimate the baseline hazard.
Or you might choose to view many non-parametric statistics... | Is there any statistical test that is parametric and non-parametric? | I suppose that depends on what they mean by "parametric and non-parametric"? At the same time exactly both, or a blend of the two?
Many consider the Cox proportional hazards model to be semi-parametri | Is there any statistical test that is parametric and non-parametric?
I suppose that depends on what they mean by "parametric and non-parametric"? At the same time exactly both, or a blend of the two?
Many consider the Cox proportional hazards model to be semi-parametric, as it doesn't parametrically estimate the baseli... | Is there any statistical test that is parametric and non-parametric?
I suppose that depends on what they mean by "parametric and non-parametric"? At the same time exactly both, or a blend of the two?
Many consider the Cox proportional hazards model to be semi-parametri |
14,228 | Is there any statistical test that is parametric and non-parametric? | Bradley, in his classic Distribution-Free Statistical Tests (1968, p. 15–16 - see this question for a quote) clarifies the difference between distribution-free and nonparametric tests, which he says are often conflated with each other, and gives an example of a parametric distribution-free test as the Sign test for the... | Is there any statistical test that is parametric and non-parametric? | Bradley, in his classic Distribution-Free Statistical Tests (1968, p. 15–16 - see this question for a quote) clarifies the difference between distribution-free and nonparametric tests, which he says a | Is there any statistical test that is parametric and non-parametric?
Bradley, in his classic Distribution-Free Statistical Tests (1968, p. 15–16 - see this question for a quote) clarifies the difference between distribution-free and nonparametric tests, which he says are often conflated with each other, and gives an ex... | Is there any statistical test that is parametric and non-parametric?
Bradley, in his classic Distribution-Free Statistical Tests (1968, p. 15–16 - see this question for a quote) clarifies the difference between distribution-free and nonparametric tests, which he says a |
14,229 | Why does Q-learning overestimate action values? | $$Q(s, a) = r + \gamma \max_{a'}[Q(s', a')]$$
Since Q values are very noisy, when you take the max over all actions, you're probably getting an overestimated value. Think of it like this: the expected value of a dice roll is 3.5, but if you throw the dice 100 times and take the max over all throws, you're very likely t... | Why does Q-learning overestimate action values? | $$Q(s, a) = r + \gamma \max_{a'}[Q(s', a')]$$
Since Q values are very noisy, when you take the max over all actions, you're probably getting an overestimated value. Think of it like this: the expected | Why does Q-learning overestimate action values?
$$Q(s, a) = r + \gamma \max_{a'}[Q(s', a')]$$
Since Q values are very noisy, when you take the max over all actions, you're probably getting an overestimated value. Think of it like this: the expected value of a dice roll is 3.5, but if you throw the dice 100 times and ta... | Why does Q-learning overestimate action values?
$$Q(s, a) = r + \gamma \max_{a'}[Q(s', a')]$$
Since Q values are very noisy, when you take the max over all actions, you're probably getting an overestimated value. Think of it like this: the expected
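The dice analogy in the answer above is easy to check with a short simulation (plain Python; the number of actions and samples are made up): estimate each "action" by averaging a few die rolls, then take the max over actions. Every action's true value is 3.5, yet the max estimate is biased upward.

```python
import random

def max_of_noisy_estimates(n_actions=10, n_samples=5, rng=random):
    """Estimate each action's value from a few noisy die rolls, return the max estimate."""
    estimates = [
        sum(rng.randint(1, 6) for _ in range(n_samples)) / n_samples
        for _ in range(n_actions)
    ]
    return max(estimates)

rng = random.Random(0)
# Averaged over many repeats, the max estimate sits well above the true
# value 3.5 of every single action: the positive bias behind Q-learning's
# overestimation.
avg_max = sum(max_of_noisy_estimates(rng=rng) for _ in range(2000)) / 2000
print(round(avg_max, 2))
```

With more samples per action the individual estimates get less noisy and the bias shrinks, which matches the intuition in the answer.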
14,230 | Why does Q-learning overestimate action values? | I am not very familiar with reinforcement learning, but the very next line in the Wikipedia article you cite (currently) refers to the paper Double Q-learning (NIPS 2010). The abstract to that paper says
These overestimations result from a positive bias that is introduced because Q-learning uses the maximum action val... | Why does Q-learning overestimate action values? | I am not very familiar with reinforcement learning, but the very next line in the Wikipedia article you cite (currently) refers to the paper Double Q-learning (NIPS 2010). The abstract to that paper s | Why does Q-learning overestimate action values?
I am not very familiar with reinforcement learning, but the very next line in the Wikipedia article you cite (currently) refers to the paper Double Q-learning (NIPS 2010). The abstract to that paper says
These overestimations result from a positive bias that is introduce... | Why does Q-learning overestimate action values?
I am not very familiar with reinforcement learning, but the very next line in the Wikipedia article you cite (currently) refers to the paper Double Q-learning (NIPS 2010). The abstract to that paper s |
14,231 | Why does Q-learning overestimate action values? | First, I want to quote from Sutton and Barto book
... In these algorithms, a maximum over estimated values is used
implicitly as an estimate of the maximum value, which can lead to a
significant positive bias. To see why, consider a single state s where
there are many actions a whose true values, q(s, a), are al... | Why does Q-learning overestimate action values? | First, I want to quote from Sutton and Barto book
... In these algorithms, a maximum over estimated values is used
implicitly as an estimate of the maximum value, which can lead to a
significant | Why does Q-learning overestimate action values?
First, I want to quote from Sutton and Barto book
... In these algorithms, a maximum over estimated values is used
implicitly as an estimate of the maximum value, which can lead to a
significant positive bias. To see why, consider a single state s where
there are m... | Why does Q-learning overestimate action values?
First, I want to quote from Sutton and Barto book
... In these algorithms, a maximum over estimated values is used
implicitly as an estimate of the maximum value, which can lead to a
significant |
14,232 | Why does Q-learning overestimate action values? | It is based on the Optimizer's Curse (OC from now on). (And a lot of other math, which correlates the OC to Q-learning. Here is an article written by the original author of the DDQN algorithm covering this correlation).
Normal Explanation:
Essentially, the OC states that if we constantly choose the maximum estimate of... | It is based on the Optimizer's Curse (OC from now on). (And a lot of other math, which correlates the OC to Q-learning. Here is an article written by the original author of the DDQN algorithm covering | Why does Q-learning overestimate action values?
It is based on the Optimizer's Curse (OC from now on). (And a lot of other math, which correlates the OC to Q-learning. Here is an article written by the original author of the DDQN algorithm covering this correlation).
Normal Explanation:
Essentially, the OC states, that... | Why does Q-learning overestimate action values?
It is based on the Optimizer's Curse (OC from now on). (And a lot of other math, which correlates the OC to Q-learning. Here is an article written by the original author of the DDQN algorithm covering |
14,233 | Diagnostic plot for assessing homogeneity of variance-covariance matrices | An article Visualizing Tests for Equality of Covariance Matrices, by Michael Friendly and Matthew Sigal, has just appeared in print in The American Statistician (Volume 74, 2020 - Issue 2, pp 144-155). It suggests several graphical procedures to compare covariance matrices.
The authors' R package heplot supports these... | Diagnostic plot for assessing homogeneity of variance-covariance matrices | An article Visualizing Tests for Equality of Covariance Matrices, by Michael Friendly and Matthew Sigal, has just appeared in print in The American Statistician (Volume 74, 2020 - Issue 2, pp 144-155) | Diagnostic plot for assessing homogeneity of variance-covariance matrices
An article Visualizing Tests for Equality of Covariance Matrices, by Michael Friendly and Matthew Sigal, has just appeared in print in The American Statistician (Volume 74, 2020 - Issue 2, pp 144-155). It suggests several graphical procedures to... | Diagnostic plot for assessing homogeneity of variance-covariance matrices
An article Visualizing Tests for Equality of Covariance Matrices, by Michael Friendly and Matthew Sigal, has just appeared in print in The American Statistician (Volume 74, 2020 - Issue 2, pp 144-155) |
14,234 | How could one develop a stopping rule in a power analysis of two independent proportions? | This is an interesting problem and the associated techniques have lots of applications. They are often called "interim monitoring" strategies or "sequential experimental design" (the wikipedia article, which you linked to, is unfortunately a little sparse), but there are several ways to go about this. I think @user... | How could one develop a stopping rule in a power analysis of two independent proportions? | This is an interesting problem and the associated techniques have lots of applications. They are often called "interim monitoring" strategies or "sequential experimental design" (the wikipedia art | How could one develop a stopping rule in a power analysis of two independent proportions?
This is an interesting problem and the associated techniques have lots of applications. They are often called "interim monitoring" strategies or "sequential experimental design" (the wikipedia article, which you linked to, is ... | How could one develop a stopping rule in a power analysis of two independent proportions?
This is an interesting problem and the associated techniques have lots of applications. They are often called "interim monitoring" strategies or "sequential experimental design" (the wikipedia art
14,235 | How could one develop a stopping rule in a power analysis of two independent proportions? | You can stop early, but if you do, your p-values aren't easily interpreted. If you don't care about the interpretation of your p-value, then the way in which the answers to your first two questions are 'no' doesn't matter (too much). Your client seems pragmatic, so the true interpretation of a p-value is probably not ... | How could one develop a stopping rule in a power analysis of two independent proportions? | You can stop early, but if you do, your p-values aren't easily interpreted. If you don't care about the interpretation of your p-value, then the way in which the answers to your first two questions ar | How could one develop a stopping rule in a power analysis of two independent proportions?
You can stop early, but if you do, your p-values aren't easily interpreted. If you don't care about the interpretation of your p-value, then the way in which the answers to your first two questions are 'no' doesn't matter (too muc... | How could one develop a stopping rule in a power analysis of two independent proportions?
You can stop early, but if you do, your p-values aren't easily interpreted. If you don't care about the interpretation of your p-value, then the way in which the answers to your first two questions ar
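A quick Monte Carlo sketch (standard library only; the look schedule and sample sizes are hypothetical) shows why the unadjusted p-value in the answer above loses its meaning when you test at every interim look: under the null hypothesis, the chance of ever crossing the nominal 1.96 cutoff is well above 5%.

```python
import random
import statistics

def type_i_error_with_peeking(n_looks=5, n_per_look=20, n_sims=2000, z_crit=1.96, seed=1):
    """Monte Carlo type I error when an unadjusted z-test (known sigma=1)
    is run at every interim look; under H0 the true mean is 0."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        data = []
        for _ in range(n_looks):
            data.extend(rng.gauss(0.0, 1.0) for _ in range(n_per_look))
            z = statistics.fmean(data) * len(data) ** 0.5  # mean / (1 / sqrt(n))
            if abs(z) > z_crit:
                rejections += 1
                break
    return rejections / n_sims

print(type_i_error_with_peeking())  # noticeably above the nominal 0.05
```

Group-sequential boundaries (Pocock, O'Brien-Fleming, etc., mentioned in a later answer) exist precisely to bring this inflated error rate back down to the nominal level.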
14,236 | How could one develop a stopping rule in a power analysis of two independent proportions? | maybe some methods could be used there like
Pocock
O’Brien and Fleming
Peto
this will adjust the P cutoff based on results and will help you stop collecting data and economize resources and time.
maybe other works could be added here. | How could one develop a stopping rule in a power analysis of two independent proportions? | maybe some methods could be used there like
Pocock
O’Brien and Fleming
Peto
this will adjust the P cutoff based on results and will help you stop collecting data and economize resources and time.
| How could one develop a stopping rule in a power analysis of two independent proportions?
maybe some methods could be used there like
Pocock
O’Brien and Fleming
Peto
this will adjust the P cutoff based on results and will help you stop collecting data and economize resources and time.
maybe other works could be add... | How could one develop a stopping rule in a power analysis of two independent proportions?
maybe some methods could be used there like
Pocock
O’Brien and Fleming
Peto
this will adjust the P cutoff based on results and will help you stop collecting data and economize resources and time.
|
14,237 | How could one develop a stopping rule in a power analysis of two independent proportions? | The questions you have are typical questions emerging in statistical tests. There are two 'flavours' of statistics out there, the frequentist and the bayesian. The frequentist answer to both of your questions is easy:
NO
No, you can't stop early
No, you can't measure just longer
Once you defined your setup, you are... | How could one develop a stopping rule in a power analysis of two independent proportions? | The questions you have are typical questions emerging in statistical tests. There are two 'flavours' of statistics out there, the frequentist and the bayesian. The frequentist answer to both of your q | How could one develop a stopping rule in a power analysis of two independent proportions?
The questions you have are typical questions emerging in statistical tests. There are two 'flavours' of statistics out there, the frequentist and the bayesian. The frequentist answer to both of your questions is easy:
NO
No, yo... | How could one develop a stopping rule in a power analysis of two independent proportions?
The questions you have are typical questions emerging in statistical tests. There are two 'flavours' of statistics out there, the frequentist and the bayesian. The frequentist answer to both of your q |
14,238 | How should I mentally deal with Borel's paradox? | As a Bayesian, I would say Borel's paradox has nothing (or very little) to do with Bayesian statistics. Except that Bayesian statistics uses conditional distributions, of course. The fact that there is no paradox in defining a posterior distribution as conditional on a set of measure zero $\{X=x\}$ is that $x$ is not c... | How should I mentally deal with Borel's paradox? | As a Bayesian, I would say Borel's paradox has nothing (or very little) to do with Bayesian statistics. Except that Bayesian statistics uses conditional distributions, of course. The fact that there i | How should I mentally deal with Borel's paradox?
As a Bayesian, I would say Borel's paradox has nothing (or very little) to do with Bayesian statistics. Except that Bayesian statistics uses conditional distributions, of course. The fact that there is no paradox in defining a posterior distribution as conditional on a s... | How should I mentally deal with Borel's paradox?
As a Bayesian, I would say Borel's paradox has nothing (or very little) to do with Bayesian statistics. Except that Bayesian statistics uses conditional distributions, of course. The fact that there i |
14,239 | How should I mentally deal with Borel's paradox? | I'm not sure we ever do condition on events of probability zero in real life. Suppose I measure a person's mass as 123.45678kg. Going forwards, I'm not conditioning on their mass being exactly 123.45678kg. I'm conditioning on myself having measured their mass as 123.45678kg, something which is consistent with thei... | How should I mentally deal with Borel's paradox? | I'm not sure we ever do condition on events of probability zero in real life. Suppose I measure a person's mass as 123.45678kg. Going forwards, I'm not conditioning on their mass being exactly 12 | How should I mentally deal with Borel's paradox?
I'm not sure we ever do condition on events of probability zero in real life. Suppose I measure a person's mass as 123.45678kg. Going forwards, I'm not conditioning on their mass being exactly 123.45678kg. I'm conditioning on myself having measured their mass as 123.45678kg, something which is consistent with thei... | How should I mentally deal with Borel's paradox?
I'm not sure we ever do condition on events of probability zero in real life. Suppose I measure a person's mass as 123.45678kg. Going forwards, I'm not conditioning on their mass being exactly 12
14,240 | How to define number of clusters in K-means clustering? | The method I use is to use CCC (Cubic Clustering Criteria). I look for CCC to increase to a maximum as I increment the number of clusters by 1, and then observe when the CCC starts to decrease. At that point I take the number of clusters at the (local) maximum. This would be similar to using a scree plot to picking t... | How to define number of clusters in K-means clustering? | The method I use is to use CCC (Cubic Clustering Criteria). I look for CCC to increase to a maximum as I increment the number of clusters by 1, and then observe when the CCC starts to decrease. At t | How to define number of clusters in K-means clustering?
The method I use is to use CCC (Cubic Clustering Criteria). I look for CCC to increase to a maximum as I increment the number of clusters by 1, and then observe when the CCC starts to decrease. At that point I take the number of clusters at the (local) maximum. ... | How to define number of clusters in K-means clustering?
The method I use is to use CCC (Cubic Clustering Criteria). I look for CCC to increase to a maximum as I increment the number of clusters by 1, and then observe when the CCC starts to decrease. At t |
14,241 | What is bits per dimension (bits/dim) exactly (in pixel CNN papers)? | It is explained on page 12 here in great detail.
and is also discussed
here although not in as much detail.
Compute the negative log likelihood in base e, apply change of base
for converting log base e to log base 2, then divide by the number of
pixels (e.g. 3072 pixels for a 32x32 rgb image).
To change base for the ... | What is bits per dimension (bits/dim) exactly (in pixel CNN papers)? | It is explained on page 12 here in great detail.
and is also discussed
here although not in as much detail.
Compute the negative log likelihood in base e, apply change of base
for converting log bas | What is bits per dimension (bits/dim) exactly (in pixel CNN papers)?
It is explained on page 12 here in great detail.
and is also discussed
here although not in as much detail.
Compute the negative log likelihood in base e, apply change of base
for converting log base e to log base 2, then divide by the number of
pix... | What is bits per dimension (bits/dim) exactly (in pixel CNN papers)?
It is explained on page 12 here in great detail.
and is also discussed
here although not in as much detail.
Compute the negative log likelihood in base e, apply change of base
for converting log bas |
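The recipe quoted above (negative log-likelihood in nats, change of base from log e to log 2, then divide by the number of pixels) is a one-liner; the NLL value in the demo below is made up for illustration.

```python
import math

def bits_per_dim(nll_nats, n_dims):
    """Change of base from nats to bits, then average over dimensions."""
    return nll_nats / (n_dims * math.log(2.0))

# A 32x32 RGB image has 3 * 32 * 32 = 3072 dimensions; the NLL is hypothetical.
print(round(bits_per_dim(7000.0, 3 * 32 * 32), 3))
```

A sanity check: an NLL of exactly ln(2) nats per dimension comes out as 1 bit per dimension.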
14,242 | What is bits per dimension (bits/dim) exactly (in pixel CNN papers)? | To add to the answer above, the log-likelihood is your reconstruction loss. In the case of a 256-way softmax it is the categorical cross-entropy.
If you are using tensorflow eg: tf.nn.sparse_softmax_cross_entropy_with_logits the log-likelihood is in natural log so you need to divide by np.log(2.)
If your reconstructio... | What is bits per dimension (bits/dim) exactly (in pixel CNN papers)? | To add to the answer above, the log-likelihood is your reconstruction loss. In the case of a 256-way softmax it is the categorical cross-entropy.
If you are using tensorflow eg: tf.nn.sparse_softmax_ | What is bits per dimension (bits/dim) exactly (in pixel CNN papers)?
To add to the answer above, the log-likelihood is your reconstruction loss. In the case of a 256-way softmax it is the categorical cross-entropy.
If you are using tensorflow eg: tf.nn.sparse_softmax_cross_entropy_with_logits the log-likelihood is in ... | What is bits per dimension (bits/dim) exactly (in pixel CNN papers)?
To add to the answer above, the log-likelihood is your reconstruction loss. In the case of a 256-way softmax it is the categorical cross-entropy.
If you are using tensorflow eg: tf.nn.sparse_softmax_ |
14,243 | Stan $\hat{R}$ versus Gelman-Rubin $\hat{R}$ definition | I followed the specific link given for Gelman & Rubin (1992) and it has
$$
\hat{\sigma} = \frac{n-1}{n}W+ \frac{1}{n}B
$$
as in the later versions, although $\hat{\sigma}$ replaced with $\hat{\sigma}_+$ in Brooks & Gelman (1998) and with $\widehat{\rm var}^+$ in BDA2 (Gelman et al, 2003) and BDA3 (Gelman et al, 2013).
... | Stan $\hat{R}$ versus Gelman-Rubin $\hat{R}$ definition | I followed the specific link given for Gelman & Rubin (1992) and it has
$$
\hat{\sigma} = \frac{n-1}{n}W+ \frac{1}{n}B
$$
as in the later versions, although $\hat{\sigma}$ replaced with $\hat{\sigma}_ | Stan $\hat{R}$ versus Gelman-Rubin $\hat{R}$ definition
I followed the specific link given for Gelman & Rubin (1992) and it has
$$
\hat{\sigma} = \frac{n-1}{n}W+ \frac{1}{n}B
$$
as in the later versions, although $\hat{\sigma}$ replaced with $\hat{\sigma}_+$ in Brooks & Gelman (1998) and with $\widehat{\rm var}^+$ in B... | Stan $\hat{R}$ versus Gelman-Rubin $\hat{R}$ definition
I followed the specific link given for Gelman & Rubin (1992) and it has
$$
\hat{\sigma} = \frac{n-1}{n}W+ \frac{1}{n}B
$$
as in the later versions, although $\hat{\sigma}$ replaced with $\hat{\sigma}_ |
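For concreteness, here is a minimal pure-Python version of the estimator built from the quoted formula $\hat{\sigma} = \frac{n-1}{n}W+ \frac{1}{n}B$ (toy chains; real diagnostics such as Stan's additionally use split chains and, in recent versions, rank-normalization).

```python
import statistics

def gelman_rubin_rhat(chains):
    """Potential scale reduction factor from sigma_hat = (n-1)/n * W + B/n.
    `chains` is a list of equal-length lists of draws (no splitting)."""
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    W = statistics.fmean(statistics.variance(c) for c in chains)  # within-chain variance
    B = n * statistics.variance(means)                            # between-chain variance
    sigma_hat = (n - 1) / n * W + B / n
    return (sigma_hat / W) ** 0.5

# Two short, well-mixed toy chains: R-hat should be close to 1.
chains = [[0.10, 0.20, 0.15, 0.30], [0.12, 0.25, 0.20, 0.22]]
print(round(gelman_rubin_rhat(chains), 3))
```

Values far above 1 indicate the chains disagree and have not converged to the same distribution.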
14,244 | Differences between logistic regression and perceptrons | You already mentioned the important differences. So the results should not differ that much. | Differences between logistic regression and perceptrons | You already mentioned the important differences. So the results should not differ that much. | Differences between logistic regression and perceptrons
You already mentioned the important differences. So the results should not differ that much. | Differences between logistic regression and perceptrons
You already mentioned the important differences. So the results should not differ that much.
14,245 | Differences between logistic regression and perceptrons | There is actually a big substantial difference, which is related to the technical differences that you mentioned. Logistic regression models a function of the mean of a Bernoulli distribution as a linear equation (the mean being equal to the probability p of a Bernoulli event). By using the logit link as a function of ... | Differences between logistic regression and perceptrons | There is actually a big substantial difference, which is related to the technical differences that you mentioned. Logistic regression models a function of the mean of a Bernoulli distribution as a lin | Differences between logistic regression and perceptrons
There is actually a big substantial difference, which is related to the technical differences that you mentioned. Logistic regression models a function of the mean of a Bernoulli distribution as a linear equation (the mean being equal to the probability p of a Ber... | Differences between logistic regression and perceptrons
There is actually a big substantial difference, which is related to the technical differences that you mentioned. Logistic regression models a function of the mean of a Bernoulli distribution as a lin |
14,246 | Differences between logistic regression and perceptrons | I believe one difference you're missing is the fact that logistic regression returns a principled classification probability whereas perceptrons classify with a hard boundary.
This is mentioned in the Wiki article on Multinomial logistic regression. | Differences between logistic regression and perceptrons | I believe one difference you're missing is the fact that logistic regression returns a principled classification probability whereas perceptrons classify with a hard boundary.
This is mentioned in the | Differences between logistic regression and perceptrons
I believe one difference you're missing is the fact that logistic regression returns a principled classification probability whereas perceptrons classify with a hard boundary.
This is mentioned in the Wiki article on Multinomial logistic regression. | Differences between logistic regression and perceptrons
I believe one difference you're missing is the fact that logistic regression returns a principled classification probability whereas perceptrons classify with a hard boundary.
This is mentioned in the |
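The contrast drawn in the answers above (a principled probability versus a hard boundary) is already visible in the two prediction rules; the weights below are made up for illustration.

```python
import math

def linear_score(w, x, b=0.0):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def logistic_predict(w, x, b=0.0):
    """Logistic regression: a probability via the sigmoid of the linear score."""
    return 1.0 / (1.0 + math.exp(-linear_score(w, x, b)))

def perceptron_predict(w, x, b=0.0):
    """Perceptron: a hard 0/1 label from the same linear score."""
    return 1 if linear_score(w, x, b) >= 0 else 0

w, b = [0.8, -0.4], 0.1   # hypothetical weights and bias
x = [1.0, 2.0]
print(logistic_predict(w, x, b), perceptron_predict(w, x, b))
```

Near the boundary the logistic output hovers around 0.5, signalling uncertainty, while the perceptron still commits to one class.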
14,247 | Basic questions about discrete time survival analysis | Assume $K$ is the largest value of $k$ (i.e. the largest month/period observed in your data).
Here is the hazard function with a fully discrete parametrization of time, and with a vector of parameters $\mathbf{B}$ and a vector of conditioning variables $\mathbf{X}$: $h_{j,k} = \frac{e^{\alpha_{k} + \mathbf{BX}}}{1 + e^{\a... | Basic questions about discrete time survival analysis | Assume $K$ is the largest value of $k$ (i.e. the largest month/period observed in your data).
Here is the hazard function with a fully discrete parametrization of time, and with a vector of parameter | Basic questions about discrete time survival analysis
Assume $K$ is the largest value of $k$ (i.e. the largest month/period observed in your data).
Here is the hazard function with a fully discrete parametrization of time, and with a vector of parameters $\mathbf{B}$ and a vector of conditioning variables $\mathbf{X}$: $h... | Basic questions about discrete time survival analysis
Assume $K$ is the largest value of $k$ (i.e. the largest month/period observed in your data).
Here is the hazard function with a fully discrete parametrization of time, and with a vector of parameter |
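The hazard in the answer above is just an inverse-logit of a period-specific intercept plus a linear predictor; a sketch with hypothetical coefficients:

```python
import math

def discrete_hazard(alpha_k, beta, x):
    """h_{j,k} = exp(alpha_k + B'X) / (1 + exp(alpha_k + B'X)),
    i.e. a logistic (inverse-logit) hazard for period k."""
    eta = alpha_k + sum(b * xi for b, xi in zip(beta, x))
    return math.exp(eta) / (1.0 + math.exp(eta))

# hypothetical period intercept and covariate effects
print(round(discrete_hazard(-2.0, [0.5, -0.3], [1.0, 2.0]), 4))
```

Because the hazard is a logistic function of the linear predictor, such models can be fit with ordinary logistic regression on a person-period data set.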
14,248 | Standard error of random effects in R (lme4) vs Stata (xtmixed) | According to the [XT] manual for Stata 11:
Standard errors for BLUPs are calculated based on the iterative
technique of Bates and Pinheiro (1998, sec. 3.3) for estimating the
BLUPs themselves. If estimation is done by REML, these standard errors
account for uncertainty in the estimate of $\beta$, while for ML th... | Standard error of random effects in R (lme4) vs Stata (xtmixed) | According to the [XT] manual for Stata 11:
Standard errors for BLUPs are calculated based on the iterative
technique of Bates and Pinheiro (1998, sec. 3.3) for estimating the
BLUPs themselves. If | Standard error of random effects in R (lme4) vs Stata (xtmixed)
According to the [XT] manual for Stata 11:
Standard errors for BLUPs are calculated based on the iterative
technique of Bates and Pinheiro (1998, sec. 3.3) for estimating the
BLUPs themselves. If estimation is done by REML, these standard errors
acc... | Standard error of random effects in R (lme4) vs Stata (xtmixed)
According to the [XT] manual for Stata 11:
Standard errors for BLUPs are calculated based on the iterative
technique of Bates and Pinheiro (1998, sec. 3.3) for estimating the
BLUPs themselves. If |
14,249 | Which robust correlation methods are actually used? | Coming from a psychology perspective, Pearson and Spearman's correlation do appear to be the most common. However, I think a lot of researchers in psychology engage in various data manipulation procedures on constituent variables prior to performing Pearson's correlation. I imagine any examination of robustness should ... | Which robust correlation methods are actually used? | Coming from a psychology perspective, Pearson and Spearman's correlation do appear to be the most common. However, I think a lot of researchers in psychology engage in various data manipulation proced | Which robust correlation methods are actually used?
Coming from a psychology perspective, Pearson and Spearman's correlation do appear to be the most common. However, I think a lot of researchers in psychology engage in various data manipulation procedures on constituent variables prior to performing Pearson's correlat... | Which robust correlation methods are actually used?
Coming from a psychology perspective, Pearson and Spearman's correlation do appear to be the most common. However, I think a lot of researchers in psychology engage in various data manipulation proced |
14,250 | Which robust correlation methods are actually used? | I would recommend you this excellent article published in Science in 2011 that I previously posted here. There is a proposal of one new robust measure together with an exhaustive and excellent comparison with other ones. Moreover, all measures are tested on robustness. Note that this new measure is also capable of identifying ... | Which robust correlation methods are actually used? | I would recommend you this excellent article published in Science in 2011 that I previously posted here. There is a proposal of one new robust measure together with an exhaustive and excellent comparison w | Which robust correlation methods are actually used?
I would recommend you this excellent article published in Science in 2011 that I previously posted here. There is a proposal of one new robust measure together with an exhaustive and excellent comparison with other ones. Moreover, all measures are tested on robustness. Not... | Which robust correlation methods are actually used?
I would recommend you this excellent article published in Science in 2011 that I previously posted here. There is a proposal of one new robust measure together with an exhaustive and excellent comparison w
14,251 | Which robust correlation methods are actually used? | Kendall's tau is very widely used in copula theory, probably because it is a very natural thing to consider for archimedean copulas. Plots of the cumulative Kendall tau were introduced by Genest and Rivest as a way to choose a model among families of bivariate copulas.
Link to Genest Rivest (1993) paper | Which robust correlation methods are actually used? | Kendall's tau is very widely used in copula theory, probably because it is a very natural thing to consider for archimedean copulas. Plots of the cumulative Kendall tau were introduced by Genest and R | Which robust correlation methods are actually used?
Kendall's tau is very widely used in copula theory, probably because it is a very natural thing to consider for archimedean copulas. Plots of the cumulative Kendall tau were introduced by Genest and Rivest as a way to choose a model among families of bivariate copulas... | Which robust correlation methods are actually used?
Kendall's tau is very widely used in copula theory, probably because it is a very natural thing to consider for archimedean copulas. Plots of the cumulative Kendall tau were introduced by Genest and R |
14,252 | Which robust correlation methods are actually used? | Some robust measures of correlation are:
Spearman’s Rank Correlation Coefficient
Signum (Blomqvist) Correlation Coefficient
Kendall’s Tau
Bradley’s Absolute Correlation Coefficient
Shevlyakov Correlation Coefficient
References:
• Blomqvist, N. (1950) "On a Measure of Dependence between Two Random Variables", Anna... | Which robust correlation methods are actually used? | Some robust measures of correlation are:
Spearman’s Rank Correlation Coefficient
Signum (Blomqvist) Correlation Coefficient
Kendall’s Tau
Bradley’s Absolute Correlation Coefficient
Shevlyakov Corre | Which robust correlation methods are actually used?
Some robust measures of correlation are:
Spearman’s Rank Correlation Coefficient
Signum (Blomqvist) Correlation Coefficient
Kendall’s Tau
Bradley’s Absolute Correlation Coefficient
Shevlyakov Correlation Coefficient
References:
• Blomqvist, N. (1950) "On a Measu... | Which robust correlation methods are actually used?
Some robust measures of correlation are:
Spearman’s Rank Correlation Coefficient
Signum (Blomqvist) Correlation Coefficient
Kendall’s Tau
Bradley’s Absolute Correlation Coefficient
Shevlyakov Corre |
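Two of the listed measures, Spearman's rank correlation and Kendall's tau, can be sketched in a few lines of plain Python; note how a single gross outlier leaves the rank-based measures at 1 while it pulls Pearson's r well below 1.

```python
def rankdata(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    return pearson(rankdata(x), rankdata(y))

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total number of pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

x = [1, 2, 3, 4, 100]   # one gross outlier in x
y = [2, 4, 6, 8, 10]
print(round(pearson(x, y), 3), round(spearman(x, y), 3), round(kendall_tau(x, y), 3))
```

Because the data stay monotone despite the outlier, both rank-based coefficients equal 1 exactly, while Pearson's r is dragged down by the single extreme value.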
14,253 | Which robust correlation methods are actually used? | Biweight midcorrelation implemented in R (very fast) via WGCNA and in Python (not so fast) via astropy. That's my go-to for network analysis.
For sparse compositional data, there's also SparCC and FastSpar
14,254 | Bounding mutual information given bounds on pointwise mutual information | My contribution consists of an example. It illustrates some limits on how the mutual information can be bounded given bounds on the pointwise mutual information.
Take $X = Y = \{1,\ldots, n\}$ and $p(x) = 1/n$ for all $x \in X$. For any $m \in \{1,\ldots, n/2\}$ let $k > 0$ be the solution to the equation
$$m e^{k} + ...
14,255 | Bounding mutual information given bounds on pointwise mutual information | I'm not sure if this is what you are looking for, as it is mostly algebraic and not really leveraging the properties of p being a probability distribution, but here is something you can try.
Due to the bounds on pmi, clearly $\frac{p(x,y)}{p(x)p(y)}\leq e^k$ and thus $p(x,y)\leq p(x)p(y)\cdot e^k$. We can substitute f...
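Both answers exploit the fact that mutual information is the expectation of the pointwise mutual information, so it always lies between the smallest and largest pmi value of the joint distribution. A small sketch (with a made-up 2×2 joint table) that computes both quantities:

```python
from math import log

def mi_and_pmi(joint):
    """Mutual information and the list of pointwise MI values
    for a joint table joint[x][y] (entries sum to 1)."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    pmis, mi = [], 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                v = log(p / (px[i] * py[j]))   # pmi(x, y)
                pmis.append(v)
                mi += p * v                    # MI = E[pmi]
    return mi, pmis
```

Since MI is a convex combination of the pmi values, a uniform bound $|\mathrm{pmi}| \le k$ immediately gives $|I(X;Y)| \le k$ (and in fact $I \ge 0$ always holds).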
14,256 | What's the typical range of possible values for the shrinkage parameter in penalized regression? | You don't really need to bother. In most packages (like glmnet) if you do not specify $\lambda$, the software package generates its own sequence (which is often recommended). The reason I stress this answer is that during the running of the LASSO the solver generates a sequence of $\lambda$, so while it may counterint...
14,257 | What's the typical range of possible values for the shrinkage parameter in penalized regression? | For those trying to figure this out:
I have found that there is a great difference between allowing glmnet to calculate $\lambda$, and for when we create a range for it to choose from (grid).
Here is an example using "applicants" in the College data set from ISLR
# Don't forget to set seed
set.seed(1)
train <- sample(1...
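For intuition on where a sensible grid comes from: glmnet-style solvers start at the smallest penalty that zeroes out every coefficient ($\lambda_{\max}$) and decay log-linearly down to a small fraction of it. The following is a rough pure-Python sketch of that idea only; the exact formula glmnet uses (standardization, the elastic-net mixing parameter, weights) differs.

```python
import math

def lambda_grid(X, y, n_lambda=100, eps=1e-4):
    """Log-spaced lambda sequence from lambda_max down to eps * lambda_max.

    lambda_max is taken as max_j |x_j' y| / n (assuming centered data):
    roughly the smallest penalty at which all lasso coefficients are zero.
    """
    n = len(y)
    p = len(X[0])
    lam_max = max(abs(sum(X[i][j] * y[i] for i in range(n))) / n
                  for j in range(p))
    step = math.log(eps) / (n_lambda - 1)          # negative log step
    return [lam_max * math.exp(step * k) for k in range(n_lambda)]
```

The useful part is the shape: a decreasing, log-spaced sequence anchored at a data-driven $\lambda_{\max}$, which is why hand-picked grids often cover the wrong range.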
14,258 | Least squares logistic regression [duplicate] | It is a well known fact that if the model is parametric (that is, specified completely up to a finite number of unknown parameters), and certain regularity conditions hold, then Maximum Likelihood estimation is asymptotically optimal (in the class of regular estimators). I have doubts about the UMVUE concept, since MLE...
14,259 | Least squares logistic regression [duplicate] | In ordinary linear regression, maximizing the likelihood is equivalent to minimizing the sum of squared errors across the board (and consequently the estimated variance of errors).
In logistic regression, the errors are not expected to have the same variance: we should have high variance for p near .5, lower variance t...
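The varying error variance referred to here is just the Bernoulli variance $p(1-p)$, which peaks at $p = 0.5$; this is why logistic regression is fit by (iteratively re)weighted rather than plain least squares. A one-liner to make the point:

```python
def bernoulli_variance(p):
    """Variance of a Bernoulli(p) outcome: p(1 - p), maximal at p = 0.5."""
    return p * (1 - p)
```

Observations with fitted probabilities near 0 or 1 are nearly deterministic, so an unweighted squared-error criterion over- or under-counts them relative to the likelihood.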
14,260 | Geometric understanding of PCA in the subject (dual) space | All the summaries of $\mathbf X$ displayed in the question depend only on its second moments; or, equivalently, on the matrix $\mathbf{X^\prime X}$. Because we are thinking of $\mathbf X$ as a point cloud--each point is a row of $\mathbf X$--we may ask what simple operations on these points preserve the properties of ...
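The claim that these summaries depend only on $\mathbf{X^\prime X}$ can be checked numerically: rotating the point cloud (left-multiplying $\mathbf X$ by an orthogonal $\mathbf Q$) leaves the Gram matrix unchanged, since $(\mathbf{QX})^\prime\mathbf{QX} = \mathbf{X^\prime X^{\vphantom\prime}}$. A small pure-Python check on an arbitrary made-up 2×2 example:

```python
from math import cos, sin

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# two observations (rows) of two variables
X = [[1.0, 2.0],
     [3.0, 4.0]]
theta = 0.7
Q = [[cos(theta), -sin(theta)],   # orthogonal rotation acting on the observations
     [sin(theta),  cos(theta)]]

gram = matmul(transpose(X), X)
QX = matmul(Q, X)
gram_rotated = matmul(transpose(QX), QX)
```

Any second-moment summary (variances, covariances, principal components) is therefore invariant under rotations of the subjects in the dual space.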
14,261 | Least stupid way to forecast a short multivariate time series | I understand that this question has been sitting here for years, but still, the following ideas may be useful:
If there are links between variables (and the theoretical formula does not work so well), PCA can be used to look for (linear) dependencies in a systematic way. I will show that this works well for the given...
14,262 | Estimating R-squared and statistical significance from penalized regression model | My first reaction to Jelle's comments given is "bias-schmias". You have to be careful about what you mean by "large amount of predictors". This could be "large" with respect to:
The number of data points ("big p small n")
The amount of time you have to investigate the variables
The computational cost of inverting a ...
14,263 | Estimating R-squared and statistical significance from penalized regression model | The R package hdm and the Stata package lassopack support a joint significance test for the lasso. The theory allows for the number of predictors to be large relative to the number of observations. The theory behind the test and how to apply it is briefly explained in the hdm documentation. In short, it's based on a fr...
14,264 | Why is LASSO not finding my perfect predictor pair at high dimensionality? | This problem is well-known by academics and researchers. The answer, however, is not simple and pertains more—in my opinion—to optimization than it does to statistics. People have attempted to overcome these drawbacks by including an additional ridge penalty, hence the elastic net regression. This Tibshirani paper is a...
14,265 | Can regularization be helpful if we are interested only in modeling, not in forecasting? | Yes, when we want biased low variance estimations. I particularly like gung's post here What problem do shrinkage methods solve? Please allow me to paste gung's figure here...
If you check the plot gung made, you will be clear on why we need regularization / shrinkage. At first, I feel strange that why we need biased ...
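The picture referred to here is the bias-variance decomposition $\mathrm{MSE} = \mathrm{bias}^2 + \mathrm{variance}$. For the simplest possible case, shrinking a sample mean toward zero by a factor $c$, the trade-off can be written in closed form; the numbers below are made up purely for illustration.

```python
def mse_shrunk_mean(c, mu, sigma2, n):
    """MSE of the shrunken estimator c * xbar of a mean mu,
    where xbar ~ Normal(mu, sigma2 / n): squared bias plus variance."""
    bias2 = ((c - 1.0) * mu) ** 2        # shrinkage introduces bias...
    variance = c ** 2 * sigma2 / n       # ...but also reduces variance
    return bias2 + variance
```

With mu = 1, sigma2 = 4, n = 4, the unbiased estimator (c = 1) has MSE 1.0 while c = 0.5 achieves MSE 0.5: accepting some bias buys a larger reduction in variance, which is exactly the point of regularization even when no forecasting is involved.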
14,266 | Can regularization be helpful if we are interested only in modeling, not in forecasting? | Can cross-validation be helpful if we are interested only in modeling (i.e. estimating parameters), not in forecasting?
Yes, it can.
For instance, the other day I was using parameter importance estimation through Decision Trees. Every time I build a tree, I check the cross-validation error. I try to decrease the error...
14,267 | What prior distributions could/should be used for the variance in a hierarchical Bayesian model when the mean variance is of interest? | I disagree with the way you interpret Gelman concerning the choice of the Gamma for scale parameter. The basis of hierarchical modeling is to relate individual parameters to a common one through a structure with unknown (typically mean and variance) parameters. In this sense, using a gamma distribution for the individu...
14,268 | What prior distributions could/should be used for the variance in a hierarchical Bayesian model when the mean variance is of interest? | Shortly, Gelman outlines problems in using Gamma distributions as vague (he uses the word noninformative) priors for the variance. On the contrary, your problem (and the Kruschke's example) seems to refer to the case where some knowledge about the variance exists. Also notice that the picture of the distribution of the...
14,269 | How to tell if girlfriend can tell the future (i.e. predict stocks)? | Interesting question. This isn’t really an answer, but it’s too long to be a comment.
I think your experimental design is challenged for these reasons:
1) This does not reflect the way that stock picking is actually evaluated in the “real world”. As an extreme example, suppose stock picker A chose 1 stock that went up...
14,270 | How to tell if girlfriend can tell the future (i.e. predict stocks)? | A very simple test would be as follows: Whenever she picks a stock, you pick one stock as well. I reckon you don't think of yourself as being an expert in the stock market. Hence, your choice will be approx. random.
Using this method, you can improve the statistical power by imposing some rules:
Both of you assign the...
14,271 | How to tell if girlfriend can tell the future (i.e. predict stocks)? | How much power do you want your statistical test to have? That is, if she does have the ability, with what probability do you want to detect the ability? Defining power is essential to determining sample size.
To provide an answer, let's make some assumptions
Let's assume we want a power of 80%, and confidence lev...
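To make the power calculation concrete: under a one-sided exact binomial test of H0: p = 0.5 (picks are guesses), the power at an assumed true hit rate can be computed directly with the standard library. The alternative p = 0.7 below is a made-up number standing in for the answer's assumptions.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def power_one_sided(n, p0=0.5, p1=0.7, alpha=0.05):
    """Power of the exact one-sided binomial test of H0: p = p0 against p = p1."""
    # smallest critical value k with P(X >= k | p0) <= alpha
    k = next(k for k in range(n + 1) if binom_sf(k, n, p0) <= alpha)
    return binom_sf(k, n, p1)
```

With these numbers, 20 picks gives power of only about 0.4, while 50 picks gives over 0.8, which is why the required number of picks grows quickly with the power you demand.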
14,272 | Plot and interpret ordinal logistic regression | My Regression Modeling Strategies course notes has two...
14,273 | Standardization vs. Normalization for Lasso/Ridge Regression | Normalization is very important for methods with regularization. This is because the scale of the variables affects how much regularization will be applied to a specific variable.
For example, suppose one variable is on a very large scale, say on the order of millions, and another variable is from 0 to 1. Then, we can think t...
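The fix described here is to put every predictor on a common scale before penalizing. A minimal sketch of column standardization (using the population standard deviation; glmnet does this internally by default via its standardize argument):

```python
def standardize(column):
    """Center a predictor to mean 0 and scale it to unit standard deviation,
    so the penalty treats every coefficient on the same footing."""
    n = len(column)
    mean = sum(column) / n
    sd = (sum((v - mean) ** 2 for v in column) / n) ** 0.5
    return [(v - mean) / sd for v in column]
```

After this transform, a coefficient of a "millions" variable and a coefficient of a 0-to-1 variable are penalized comparably; coefficients are usually reported back on the original scale afterwards.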
14,274 | From Bayesian Networks to Neural Networks: how multivariate regression can be transposed to a multi-output network | For the record, I don't view this as an answer, but just a long comment!
The PDE (heat equation) that is used to model the flow of heat through a metal rod can also be used to model option pricing. No one that I know of has ever tried to suggest a connection between option pricing and heat flow per se. I think tha...
14,275 | Clustered standard errors vs. multilevel modeling? | This post is based on personal experiences which might be specific to my data, so I'm not sure it qualifies as an answer.
I suggest using simulations if possible to assess which method works best for your data. I did this and was surprised to find that tests (regarding parameters in the first level) based on multilevel m...
14,276 | Estimation of ARMA: state space vs. alternatives | If you manage to use a Kalman filter, you can marginalize or optimize out the state at each time analytically. Thus the remaining likelihood is much simpler, having only the ARMA process variables, i.e., tens of parameters.
If you use the direct variables, you have one (or more) parameters per state, so if your time se...
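As a concrete miniature of this point: for a scalar AR(1)-plus-noise model, the Kalman filter integrates the state out analytically and leaves a likelihood in only three parameters (phi, state noise q, measurement noise r), however long the series is. This is a bare-bones sketch of the prediction-error decomposition, not a full ARMA(p,q) state-space mapping.

```python
from math import log, pi

def kalman_loglik(y, phi, q, r):
    """Gaussian log-likelihood of y_t = a_t + v_t, a_t = phi * a_{t-1} + w_t,
    Var(w) = q, Var(v) = r, via a scalar Kalman filter."""
    a, p = 0.0, q / (1.0 - phi ** 2)   # stationary prior for the state
    ll = 0.0
    for obs in y:
        f = p + r                       # variance of the one-step prediction error
        e = obs - a                     # one-step prediction error
        ll += -0.5 * (log(2 * pi * f) + e * e / f)
        k = p / f                       # Kalman gain
        a = phi * (a + k * e)           # predicted state for the next period
        p = phi ** 2 * (1 - k) * p + q  # predicted state variance for the next period
    return ll
```

The states never appear as free parameters: each step replaces them by their conditional mean and variance, which is exactly the marginalization the answer describes.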
14,277 | Physical/pictoral interpretation of higher-order moments | If by graphical representation you meant histograms, I...
14,278 | Why do we worry about overfitting even if "all models are wrong"? | The quote by Box is along the lines of "All models are wrong, but some are useful."
If we have bad overfitting, our model will not be useful in making predictions on new data.
14,279 | Why do we worry about overfitting even if "all models are wrong"? | Why do we worry about overfitting even if “all models are wrong”?
Your question appears to be a variation of the Nirvana fallacy, implicitly suggesting that if there is no perfect model, then every model is equally satisfactory (and therefore flaws in models are irrelevant). Observe that you could just as easily ask ...
14,280 | Why do we worry about overfitting even if "all models are wrong"? | The full quote is "All models are wrong, but some are useful". We care about overfitting, because we still want our models to be useful.
If you are familiar with the Bias-variance tradeoff, the "all models are wrong" statement is roughly equivalent to saying "all models have non-zero bias". Overfitting is the issue th...
14,281 | Why do we worry about overfitting even if "all models are wrong"? | A Citroën 2CV is, in many respects, a poor car. Slow, unrefined and cheap. But it is versatile and can operate effectively on both paved road and freshly ploughed fields.
An F1 car by comparison, is seen as the pinnacle of automotive engineering. Fast, precise and using only the finest components. I wouldn't fancy driv...
14,282 | Why do we worry about overfitting even if "all models are wrong"? | As others have noted, the full quote is "all models are wrong, but some are useful."
When we overfit a data set, we create a model that is not useful. For instance, let's make up some data:
set.seed(123)
x1 <- rnorm(6)
x2 <- rnorm(6)
x3 <- rnorm(6)
x4 <- rnorm(6)
y <- rnorm(6)
which creates 5 variables, each a standar...
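The R snippet in that answer stops before showing the failure; here is a hedged Python analogue of the same idea (my own variable names, and I add one more noise predictor so the 6 parameters interpolate the 6 training points exactly): a perfect in-sample fit of pure noise, with no predictive value on fresh data.

```python
import numpy as np

rng = np.random.default_rng(123)

def noise_design(n):
    """Intercept plus five pure-noise predictors (one more than the R example,
    so the 6 parameters fit the 6 training points exactly)."""
    return np.column_stack([np.ones(n)] + [rng.normal(size=n) for _ in range(5)])

X_train = noise_design(6)
y_train = rng.normal(size=6)              # the response is noise too
beta = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

def r_squared(X, y):
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_in = r_squared(X_train, y_train)       # ~1.0: a "perfect" model of noise
X_test = noise_design(1000)
y_test = rng.normal(size=1000)
r2_out = r_squared(X_test, y_test)        # typically at or below zero
```

In-sample the model explains the noise perfectly; on new draws it typically does worse than just predicting the mean, which is exactly why the overfit model is "not useful".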
14,283 | Why do we worry about overfitting even if "all models are wrong"? | Every model has an error. The best model is that which minimizes the error associated with its predictions. This is why models are typically constructed using only a proportion of the data (in-sample), and then applied to the remaining 'out of sample' data set. An over-fitted model will typically have a greater pred...
14,284 | Why do we worry about overfitting even if "all models are wrong"? | All models are wrong, but some are less wrong than others.
Overfitting generally makes your model more wrong in dealing with real-world data.
If a doctor were to try to diagnose whether you have cancer, would you rather have them be wrong 50% of the time (very wrong) or 0.1% of the time (much less wrong)?
Or, let's say...
14,285 | Do we actually take random line in first step of linear regression? | NO
What we want to find are the parameters that result in the least amount of error, and OLS defines error as the squared differences between observed values $y_i$ and predicted values $\hat y_i$. Error often gets denoted by an $L$ for "loss".
$$
L(y, \hat y) = \sum_{i = 1}^N \bigg(y_i - \hat y_i\bigg)^2
$$
We have our...
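The loss in that answer is minimized in closed form, with no search over candidate lines; a small sketch (data and variable names are mine) computes the textbook formulas and checks them against `numpy.polyfit`:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=50)

def loss(b0, b1):
    """The squared-error loss L(y, y_hat) for the line y_hat = b0 + b1 * x."""
    return np.sum((y - (b0 + b1 * x)) ** 2)

# Closed-form OLS estimates: no random starting line is ever needed
b1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0_hat = y.mean() - b1_hat * x.mean()
```

No candidate line can do better: `loss(b0_hat, b1_hat)` is the global minimum of L, since the loss is a convex quadratic in the two parameters.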
14,286 | Do we actually take random line in first step of linear regression? | We, sort of, do something like this effectively, especially in Gradient descent algorithms. A random line is simply a set of random parameters $\beta_0,\beta_1$. The gradient descent algorithm has to start somewhere looking for the optimal parameters, and the random set of parameters is one place to start.
So, in a way...
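As a sketch of what "start from a random line and improve it" looks like in practice (learning rate, iteration count, and data are my own choices), gradient descent on the mean squared error, initialized at random parameters, converges to the same line the closed-form solution gives:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=80)
y = 1.5 - 2.0 * x + rng.normal(scale=0.3, size=80)

b0, b1 = rng.normal(size=2)            # the "random line" to start from

lr = 0.05
for _ in range(5000):
    resid = y - (b0 + b1 * x)
    # gradient-descent steps on mean squared error w.r.t. beta_0 and beta_1
    b0 += lr * 2.0 * resid.mean()
    b1 += lr * 2.0 * (resid * x).mean()

# closed-form OLS solution for comparison
b1_ols = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0_ols = y.mean() - b1_ols * x.mean()
```

The random start only affects how many steps are needed, not where the algorithm ends up, because the squared-error surface is convex with a single minimum.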
14,287 | Do we actually take random line in first step of linear regression? | Sometimes it is more intuitive to show things graphically, mostly for beginners. You can do it this way, of course, but in practice this is not how it is done, as there is a closed form solution, as Frank Harrel mentioned in the comment. If you have a single independent variable, as in simple linear regression, $\hat y...
14,288 | Do we actually take random line in first step of linear regression? | That example is definitely NOT the way linear regression is typically done, but I suppose it is an algorithm to find a regression line. As other answers have correctly stated, there is a closed form solution for finding the Least Squares Regression equation for a set of points.
That being said, what's being shown in t...
14,289 | Do we actually take random line in first step of linear regression? | This clearly looks like an attempt by an instructor to introduce some intuition behind linear regression and iterative optimisation to computer science students not familiar with derivatives or without a mathematical background in general.
If it was up to me I would do it in a slightly different way - start with some "...
14,290 | Do we actually take random line in first step of linear regression? | To be clear, there's a closed form solution for linear regression that is almost always used to find the fit, so there's no need for a "guess" to start with at all. This example is more of an illustrative example of how Stochastic Algorithms work rather than how to best fit a linear regression model.
However, linear regre...
14,291 | Do we actually take random line in first step of linear regression? | Some methods for robust regression, notably RANSAC (Random sample consensus) are actually built around fitting random lines. But this is, of course, far from what is happening here - I agree with those who say that
it is a pedagogical tool
the problem can be solved exactly (optimal least squares)
it is reminiscent of ...
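A toy version of the RANSAC idea (thresholds, outlier fractions, and names are my own, not from any library) makes "fitting random lines" literal: repeatedly draw the line through two random points, keep the one with the largest consensus set, then refit on its inliers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(-5.0, 5.0, n)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=n)
y[:20] += rng.uniform(-30.0, 30.0, 20)        # gross outliers on 20 points

best_inliers = np.zeros(n, dtype=bool)
for _ in range(200):
    i, j = rng.choice(n, size=2, replace=False)
    if x[i] == x[j]:
        continue                               # vertical line, skip it
    slope = (y[j] - y[i]) / (x[j] - x[i])      # random line through 2 points
    intercept = y[i] - slope * x[i]
    inliers = np.abs(y - (slope * x + intercept)) < 0.5
    if inliers.sum() > best_inliers.sum():
        best_inliers = inliers

# final model: ordinary least squares on the consensus set only
slope_hat, intercept_hat = np.polyfit(x[best_inliers], y[best_inliers], 1)
```

Unlike plain least squares, which the outliers would drag badly off course, the consensus step lets the randomly sampled lines ignore the corrupted points.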
14,292 | What are essential rules for designing and producing plots? | Substance over Form: Choose the appropriate plot, style, coloring or other graphical parameters to show what you want the plot to show, rather than what your graphing package necessarily allows.
14,293 | What are essential rules for designing and producing plots? | Being familiar with the three dimensions of colour can be helpful.
If you use several colours, they should ideally differ on several of those dimensions, not just one.
Value. The graph should remain readable even in black and white.
This simple rule should account for colour blindness, low-quality printers
and bad lig...
14,294 | What are essential rules for designing and producing plots? | Place as much of the required information within the figure itself. Do not require the reader to reference the caption, e.g. to identify the meaning of various symbols or colors. Place whatever information (or supplementary information) that cannot go into the figure itself in the caption. The idea is to minimize the e...
14,295 | What are essential rules for designing and producing plots? | Make the plot as simple as possible. In Tufte's words, 'maximize the data-ink ratio'.
For example, avoid:
more colors or shapes than required
more tick marks than necessary
3-D effects on a 2-D plot.
using a legend when objects can be labeled directly
14,296 | What are essential rules for designing and producing plots? | Leave time to edit. Making a good graph takes time and it often takes (at least for me) multiple tries.
14,297 | What are essential rules for designing and producing plots? | Don't oppose red and green. Color can be helpful, but when using color always bear in mind that a substantial minority of people are red-green colorblind. I once was showing some data to someone, and he couldn't make out what was going on in my graphs--it was a waste and I felt pretty stupid. Other forms of colorbli...
14,298 | What are essential rules for designing and producing plots? | Don't use stacked bar graphs. And on a related note, if you have a Likert scale item, don't feel the need to show the proportion for every response to each item. Those graphs make my eyes bleed.
Don't use pie-charts.
Don't duplicate data that is contained in a graph by throwing in a table.
Use a sans serif font like A...
14,299 | What are essential rules for designing and producing plots? | Don't mess with the axes. Don't cut off the first hundred units just because then the slope of the graph looks more impressive. The image will stick and people will remember a much larger effect than was actually measured.
14,300 | In cluster analysis, how does Gaussian mixture model differ from K Means when we know the clusters are spherical? | Ok, we need to start off by talking about models and estimators and algorithms.
A model is a set of probability distributions, usually chosen because you think the data came from a distribution like one in the set. Models typically have parameters that specify which model you mean from the set. I'll write $\theta$ fo...
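To make the model-versus-algorithm point concrete, here is a toy EM fit of a two-component (1-D, so trivially "spherical") Gaussian mixture; the data, initialization, and names are my own. The E-step computes soft responsibilities, where k-means would instead make a hard nearest-centre assignment (the variance-to-zero limit):

```python
import numpy as np

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.0, 1.0, 200),    # cluster A
                       rng.normal(6.0, 1.0, 200)])   # cluster B

mu = np.array([data.min(), data.max()])              # crude initial means
sigma2 = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: soft responsibilities under the current Gaussian components
    dens = pi / np.sqrt(2.0 * np.pi * sigma2) * \
        np.exp(-(data[:, None] - mu) ** 2 / (2.0 * sigma2))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted means, variances and mixing weights
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma2 = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(data)
```

K-means can be read as this same scheme with hard assignments and fixed, equal, spherical variances, which is why the two find similar centres on well-separated spherical clusters but can disagree once clusters overlap or have unequal weights.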