idx | question | answer
|---|---|---|
13,701 | Visualizing Likert responses using R or SPSS | @RJ's code produces a plot like this, which is really a table with shaded cells. It's rather busy and a bit tricky to decipher. A plain table without shading might be more effective (and you can put the data in a more meaningful order also).
Of course it depends on what main message you're trying to communicate, but I... |
13,702 | Interpreting exp(B) in multinomial logistic regression | It will take us a while to get there, but in summary, a one-unit change in the variable corresponding to B will multiply the relative risk of the outcome (compared to the base outcome) by 6.012.
One might express this as a "501.2%" increase in relative risk, but that's a confusing and potentially misleading way to do it... |
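A quick sketch of the arithmetic in the answer above. The coefficient value here is hypothetical, back-derived from the quoted exp(B) = 6.012:

```python
import math

# Hypothetical coefficient whose exponential matches the answer's 6.012.
B = math.log(6.012)

# A one-unit increase in the predictor multiplies the relative risk by exp(B).
multiplier = math.exp(B)               # 6.012
pct_increase = (multiplier - 1) * 100  # 501.2, i.e. a 501.2% increase

print(round(multiplier, 3), round(pct_increase, 1))
```

This is why reporting the multiplier (6.012) is usually clearer than reporting the percentage increase.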
13,703 | Interpreting exp(B) in multinomial logistic regression | Try considering this bit of explanation in addition to what @whuber has already written so well. If exp(B) = 6, then the odds ratio associated with an increase of 1 on the predictor in question is 6. In a multinomial context, by "odds ratio" we mean the ratio of these two quantities: a) the odds (not probability, bu... |
13,704 | Interpreting exp(B) in multinomial logistic regression | I was also looking for the same answer, but the ones above were not satisfying for me. It seemed too complex for what it really is. So I will give my interpretation; please correct me if I am wrong.
Do however read to the end, since it is important.
First of all, the values B and Exp(B) are the ones you are looking for. ... |
13,705 | Interpreting exp(B) in multinomial logistic regression | Say that exp(b) in an mlogit is 1.04. If you multiply a number by 1.04, then it increases by 4%. That is the relative risk of being in category a instead of b. I suspect that part of the confusion here might have to do with "by 4%" (multiplicative meaning) versus "by 4 percentage points" (additive meaning). The % interpretation... |
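The multiplicative/additive distinction the answer draws can be made concrete with a toy number (the 0.50 baseline is just for illustration):

```python
p = 0.50  # a baseline quantity, e.g. a risk of 50%

multiplicative = p * 1.04  # "increases by 4%": ~0.52
additive = p + 0.04        # "increases by 4 percentage points": ~0.54

print(multiplicative, additive)
```

The two readings diverge more and more as the baseline grows, which is why the wording matters.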
13,706 | What does "permutation invariant" mean in the context of neural networks doing image recognition? | In this context this refers to the fact that the model does not assume any spatial relationships between the features. E.g. for a multilayer perceptron, you can permute the pixels and the performance would be the same. This is not the case for convolutional networks, which assume neighbourhood relations. |
13,707 | What does "permutation invariant" mean in the context of neural networks doing image recognition? | A function $f$ of a vector argument $x=(x_1, \dots, x_n)$ is permutation invariant if the value of $f$ does not change if we permute the components of $x$, that is, for instance, when $n=3$:
$$
f((x_1, x_2, x_3))=f((x_2, x_1, x_3))=f((x_3, x_1, x_2))
$$
and so on. |
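The definition above is easy to check directly. A minimal sketch with the sum (which is permutation invariant) and the first component (which is not):

```python
from itertools import permutations

def f(x):  # permutation invariant: the sum ignores order
    return sum(x)

def g(x):  # NOT permutation invariant: depends on position
    return x[0]

x = (3.0, 1.0, 2.0)
values = {f(p) for p in permutations(x)}
g_values = {g(p) for p in permutations(x)}
print(values)    # a single distinct value across all orderings
print(g_values)  # three distinct values
```

An MLP applied to shuffled pixels behaves like `f` at the level of achievable performance; a CNN does not, because its convolutions encode which pixels are neighbours.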
13,708 | Difference between histogram and pdf? | To clarify Dirk's point:
Say your data is a sample of a normal distribution. You could construct the following plot:
The red line is the empirical density estimate, the blue line is the theoretical pdf of the underlying normal distribution. Note that the histogram is expressed in densities and not in frequencies here.... |
13,709 | Difference between histogram and pdf? | A histogram is a pre-computer-age estimate of a density. A density estimate is an alternative.
These days we use both, and there is a rich literature about which defaults one should use.
A pdf, on the other hand, is a closed-form expression for a given distribution. That is different from describing your dataset with a... |
13,710 | Difference between histogram and pdf? | There's no hard and fast rule here. If you know the density of your population, then a PDF is better. On the other hand, often we deal with samples and a histogram might convey some information that an estimated density covers up. For example, Andrew Gelman makes this point:
Variations on the histogram
A key benefi... |
13,711 | Difference between histogram and pdf? | Relative frequency histogram (discrete)
'y' axis is Normalized count
'y' axis is discrete probability for that particular bin/range
Normalized counts sum up to 1
Density Histogram (discrete)
'y' axis is density value ('Normalized count' divided by 'bin width')
Bar areas sum to 1
Probability Density Function PDF (... |
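The two normalizations listed above can be verified numerically. A standard-library sketch (bin count and sample size are arbitrary choices):

```python
import random

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(1000)]

lo, hi, nbins = min(sample), max(sample), 20
width = (hi - lo) / nbins
counts = [0] * nbins
for x in sample:
    i = min(int((x - lo) / width), nbins - 1)  # clamp the maximum into the last bin
    counts[i] += 1

rel_freq = [c / len(sample) for c in counts]  # relative frequencies: sum to 1
density = [f / width for f in rel_freq]       # densities: bar AREAS sum to 1

print(sum(rel_freq))                     # 1.0
print(sum(d * width for d in density))   # 1.0
```

The density histogram is the one that is directly comparable to a pdf curve drawn on the same axes.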
13,712 | logit - interpreting coefficients as probabilities | These odds ratios are the exponential of the corresponding regression coefficient:
$$\text{odds ratio} = e^{\hat\beta}$$
For example, if the logistic regression coefficient is $\hat\beta=0.25$ the odds ratio is $e^{0.25} = 1.28$.
The odds ratio is the multiplier that shows how the odds change for a one-unit increase in... |
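The numerical example above, sketched out (the baseline odds of 2.0 is a hypothetical value for illustration):

```python
import math

beta_hat = 0.25
odds_ratio = math.exp(beta_hat)
print(round(odds_ratio, 2))  # 1.28

# A one-unit increase in the predictor multiplies the odds by this factor:
odds_before = 2.0  # hypothetical baseline odds
odds_after = odds_before * odds_ratio
print(round(odds_after, 2))
```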
13,713 | logit - interpreting coefficients as probabilities | Part of the problem is that you're taking a sentence from Gelman and Hill out of context. Here's a Google books screenshot:
Note that the heading says "Interpreting Poisson regression coefficients" (emphasis added). Poisson regression uses a logarithmic link, in contrast to logistic regression, which uses a logit (log... |
13,714 | logit - interpreting coefficients as probabilities | If you want to interpret in terms of the percentages, then you need the y-intercept ($\beta_0$). Taking the exponential of the intercept gives the odds when all the covariates are 0; then you can multiply by the odds ratio of a given term to determine what the odds would be when that covariate is 1 instead of 0.
The... |
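A sketch of that calculation, using hypothetical fitted coefficients (both values are made up for illustration):

```python
import math

b0 = -2.0  # hypothetical intercept
b1 = 0.25  # hypothetical coefficient of a single binary covariate

odds_at_0 = math.exp(b0)              # odds when the covariate is 0
odds_at_1 = odds_at_0 * math.exp(b1)  # multiply by the odds ratio for covariate = 1

# Convert odds back to probabilities: p = odds / (1 + odds)
p0 = odds_at_0 / (1 + odds_at_0)
p1 = odds_at_1 / (1 + odds_at_1)
print(round(p0, 3), round(p1, 3))  # ~0.119 and ~0.148
```

Note that the percentage-point change in probability (here about 3 points) depends on the intercept, while the odds ratio itself does not.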
13,715 | Expected value of x in a normal distribution, GIVEN that it is below a certain value | A normally distributed variable $X$ with mean $\mu$ and variance $\sigma^2$ has the same distribution as $\sigma Z + \mu$ where $Z$ is a standard normal variable. All you need to know about $Z$ is that
its cumulative distribution function is called $\Phi$,
it has a probability density function $\phi(z) = \Phi^\prime(... |
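The answer's setup leads to the standard inverse-Mills-ratio result, $E[X \mid X < a] = \mu - \sigma\,\phi(\alpha)/\Phi(\alpha)$ with $\alpha = (a-\mu)/\sigma$. A stdlib sketch of that formula:

```python
import math

def phi(z):  # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):  # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def mean_below(mu, sigma, a):
    """E[X | X < a] for X ~ N(mu, sigma^2), via the inverse Mills ratio."""
    alpha = (a - mu) / sigma
    return mu - sigma * phi(alpha) / Phi(alpha)

print(round(mean_below(0, 1, 1), 4))  # ~ -0.2876
```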
13,716 | Expected value of x in a normal distribution, GIVEN that it is below a certain value | In general, let $X$ have distribution function $F(X)$.
We have, for $x\in[c_1,c_2]$,
\begin{eqnarray*}
P(X\leq x|c_1\leq X \leq c_2)&=&\frac{P(X\leq x\cap c_1\leq X \leq c_2)}{P(c_1\leq X \leq c_2)}=\frac{P(c_1\leq X \leq x)}{P(c_1\leq X \leq c_2)}\\&=&\frac{F(x)-F(c_1)}{F(c_2)-F(c_1)}
\end{eqnarray*}
You may obtain s... |
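The conditional distribution function derived above is easy to compute for any $F$; here is a sketch using the standard normal cdf as $F$:

```python
import math

def Phi(z):  # standard normal cdf, used here as F
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def cond_cdf(x, c1, c2, F=Phi):
    """P(X <= x | c1 <= X <= c2) = (F(x) - F(c1)) / (F(c2) - F(c1))."""
    return (F(x) - F(c1)) / (F(c2) - F(c1))

print(cond_cdf(-1.0, -1.0, 1.0))           # 0.0 at the lower endpoint
print(cond_cdf(1.0, -1.0, 1.0))            # 1.0 at the upper endpoint
print(round(cond_cdf(0.0, -1.0, 1.0), 2))  # 0.5 by symmetry
```

Differentiating this in $x$ gives the truncated density $f(x)/(F(c_2)-F(c_1))$, from which the conditional mean follows by integration.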
13,717 | Error bars on error bars? | You are interested in standard errors, which describe the variability in a parameter estimate, and are related to your sampling approach. This is distinct from the parameters themselves (e.g. mean and standard deviation), which are functions of the underlying population only, and are not dependent on how large your sam... |
13,718 | Error bars on error bars? | The objects we use to make inferences (e.g., estimates, confidence intervals, error bars, test statistics, p-values, etc.) are statistics, meaning that they are functions of the observed data. Since they are already functions of the observed data, these objects do not have any uncertainty in them --- they represent in... |
13,719 | Error bars on error bars? | The short answer is "no."
However you construct your error bars, they are a rule. You cannot be unsure of them. Let us imagine that they are confidence intervals. There are multiple standard ways to create confidence intervals. They are different rules with slightly different properties. However, they are a chosen... |
13,720 | Error bars on error bars? | The traditional design of error bars gives an unfortunate impression of some linear distribution of uncertainty, and places a lot of visual emphasis on the end of the bar, which is where the distribution of the location of your estimate is least likely. Claus Wilke (in his book Fundamentals of Data Visualization, ... |
13,721 | Error bars on error bars? | Review of confidence intervals
Let $\theta \in \mathbb{R}$ be a parameter of interest which we study based on a random variable $X$. An exact $1-\alpha$ confidence interval $(L(X),U(X))$ is defined by the property that
\begin{equation*}
\mathbb{P}\left[ L(X) < \theta < U(X) \right] = 1-\alpha,
\end{equation*}
where $L$... |
13,722 | Error bars on error bars? | TLDR;
Below is a simulation where we repeated an experiment of estimating the mean of a normal distribution with $\mu = 0$ and $\sigma = 1$. We did 200 repetitions with samples of size 10.
We can indeed see that the estimate of the standard deviation is different in each experiment. We are not certain about the exact val... |
13,723 | How does one find the mean of a sum of dependent variables? | Expectation (taking the mean) is a linear operator.
This means that, amongst other things, $\mathbb{E}(X + Y) = \mathbb{E}(X) + \mathbb{E}(Y)$ for any two random variables $X$ and $Y$ (for which the expectations exist), regardless of whether they are independent or not.
We can generalise (e.g. by induction) so that $\m... |
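The "regardless of dependence" part can be illustrated numerically: take $Y = X^2$, which is strongly dependent on $X$, and note that the sample analogue of $\mathbb{E}(X+Y)=\mathbb{E}(X)+\mathbb{E}(Y)$ still holds exactly, because averaging is itself linear:

```python
import random

random.seed(1)
n = 10_000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [x * x for x in X]  # Y is strongly dependent on X

def mean(v):
    return sum(v) / len(v)

lhs = mean([x + y for x, y in zip(X, Y)])  # sample mean of X + Y
rhs = mean(X) + mean(Y)                    # sum of the sample means
print(abs(lhs - rhs) < 1e-9)               # True: linearity needs no independence
```

Contrast this with the variance of a sum, which does pick up a covariance term when the variables are dependent.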
13,724 | How does one find the mean of a sum of dependent variables? | TL; DR:
Assuming it exists, the mean is an expected value, and the expected value is an integral, and the integrals have the linearity property with respect to sums.
TS; DR:
Since we are dealing with the sum of random variables $Y_n = \sum_{i=1}^n X_i$, i.e. of a function of many of them, the mean of the sum $E(Y_n)$ i... |
13,725 | How to name the ticks in a python matplotlib boxplot | Use the second argument of xticks to set the labels:
import numpy as np
import matplotlib.pyplot as plt

data = [np.random.rand(100) for i in range(3)]
plt.boxplot(data)
plt.xticks([1, 2, 3], ['mon', 'tue', 'wed'])
edited to remove pylab because pylab is a convenience module that bulk imports matplotlib.pyplot (for plott... |
13,726 | How to name the ticks in a python matplotlib boxplot | ars has the right, and succinct answer. I'll add that when learning how to use matplotlib, I found the thumbnail gallery to be really useful for finding relevant code and examples.
For your case, I submitted this boxplot example that shows you other functionality that could be useful (like rotating the tick mark text, ... |
13,727 | Why is best subset selection not favored in comparison to lasso? | In subset selection, the nonzero parameters will only be unbiased if you have chosen a superset of the correct model, i.e., if you have removed only predictors whose true coefficient values are zero. If your selection procedure led you to exclude a predictor with a true nonzero coefficient, all coefficient estimates wi... |
13,728 | Why is best subset selection not favored in comparison to lasso? | In principle, if the best subset can be found, it is indeed better than the LASSO, in terms of (1) selecting the variables that actually contribute to the fit, (2) not selecting the variables that do not contribute to the fit, (3) prediction accuracy and (4) producing essentially unbiased estimates for the selected var... | Why is best subset selection not favored in comparison to lasso? | In principle, if the best subset can be found, it is indeed better than the LASSO, in terms of (1) selecting the variables that actually contribute to the fit, (2) not selecting the variables that do | Why is best subset selection not favored in comparison to lasso?
In principle, if the best subset can be found, it is indeed better than the LASSO, in terms of (1) selecting the variables that actually contribute to the fit, (2) not selecting the variables that do not contribute to the fit, (3) prediction accuracy and ... | Why is best subset selection not favored in comparison to lasso?
In principle, if the best subset can be found, it is indeed better than the LASSO, in terms of (1) selecting the variables that actually contribute to the fit, (2) not selecting the variables that do |
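One standard way to make the bias contrast concrete: with an orthonormal design, best subset selection reduces to hard-thresholding the OLS coefficients (survivors are left untouched), while the lasso reduces to soft-thresholding (survivors are shrunk toward zero by the penalty). The coefficients and penalty below are made up for illustration:

```python
# Hard vs. soft thresholding of OLS coefficients (orthonormal-design case).
def hard_threshold(b, lam):
    # Best-subset-style rule: keep the coefficient unchanged or drop it.
    return b if abs(b) > lam else 0.0

def soft_threshold(b, lam):
    # Lasso rule: shrink toward zero by lam, truncating at zero.
    sign = 1.0 if b >= 0 else -1.0
    return sign * max(abs(b) - lam, 0.0)

ols = [3.0, -0.4, 1.2]  # hypothetical OLS coefficients
lam = 1.0
print([hard_threshold(b, lam) for b in ols])  # [3.0, 0.0, 1.2]
print([soft_threshold(b, lam) for b in ols])
```

The hard-threshold survivors are the unshrunk estimates the answers describe; the lasso's nonzero estimates are biased toward zero by exactly `lam`.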
13,729 | Power analysis for ordinal logistic regression | I prefer to do power analyses beyond the basics by simulation. With precanned packages, I am never quite sure what assumptions are being made.
Simulating for power is quite straightforward (and affordable) using R.
decide what you think your data should look like and how you will analyze it
write a function or set... | Power analysis for ordinal logistic regression | I prefer to do power analyses beyond the basics by simulation. With precanned packages, I am never quite sure what assumptions are being made.
Simulating for power is quite straightforward (and af | Power analysis for ordinal logistic regression
I prefer to do power analyses beyond the basics by simulation. With precanned packages, I am never quite sure what assumptions are being made.
Simulating for power is quite straightforward (and affordable) using R.
decide what you think your data should look like and ... | Power analysis for ordinal logistic regression
I prefer to do power analyses beyond the basics by simulation. With precanned packages, I am never quite sure what assumptions are being made.
Simulating for power is quite straightforward (and af
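The answer's recipe is sketched here in Python rather than the R it suggests; the scenario (one-sample mean test, effect size, and the normal critical value 1.96) is an assumed stand-in, not the original poster's setup:

```python
# Simulation-based power: simulate data under the assumed effect, analyze
# each simulated data set, and count how often the test rejects.
import math
import random

random.seed(1)

def one_sample_test_rejects(n, effect, crit=1.96):
    # Generate data as we think it will look (normal, shifted mean),
    # then analyze it (t-like statistic against a normal critical value).
    xs = [random.gauss(effect, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return abs(mean / (sd / math.sqrt(n))) > crit

def simulated_power(n, effect, nsim=2000):
    # The rejection rate over many repetitions estimates power.
    return sum(one_sample_test_rejects(n, effect) for _ in range(nsim)) / nsim

print(simulated_power(n=30, effect=0.5))  # theory puts this near 0.78
```

The same skeleton works for an ordinal logistic model: swap the data-generation and analysis steps, keep the repeat-and-count loop.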
13,730 | Power analysis for ordinal logistic regression | Besides Snow's excellent example, I believe you can also do a power simulation by resampling from an existing dataset which has your effect. Not quite a bootstrap, since you're not sampling-with-replacement the same n, but the same idea.
So here's an example: I ran a little self-experiment which turned in a positive po... | Power analysis for ordinal logistic regression | Besides Snow's excellent example, I believe you can also do a power simulation by resampling from an existing dataset which has your effect. Not quite a bootstrap, since you're not sampling-with-repla | Power analysis for ordinal logistic regression
Besides Snow's excellent example, I believe you can also do a power simulation by resampling from an existing dataset which has your effect. Not quite a bootstrap, since you're not sampling-with-replacement the same n, but the same idea.
So here's an example: I ran a littl... | Power analysis for ordinal logistic regression
Besides Snow's excellent example, I believe you can also do a power simulation by resampling from an existing dataset which has your effect. Not quite a bootstrap, since you're not sampling-with-repla |
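A sketch of this resampling idea in Python (the "pilot" data here are simulated stand-ins for an existing dataset showing the effect, and the one-sided test is an assumed analysis): draw repeated subsamples of a candidate size n, re-run the test on each, and the rejection rate estimates power at that n.

```python
# Resampling an existing dataset to estimate power at candidate sample sizes.
import math
import random

random.seed(2)

# Stand-in for observed paired differences that show a positive effect.
pilot = [random.gauss(0.5, 1.0) for _ in range(120)]

def rejects(sample, crit=1.96):
    # One-sided test that the mean is above zero (normal critical value).
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return mean / (sd / math.sqrt(n)) > crit

def resampled_power(n, nsim=1000):
    # Sample n observations with replacement from the pilot data each time.
    return sum(rejects(random.choices(pilot, k=n)) for _ in range(nsim)) / nsim

print(resampled_power(10), resampled_power(80))  # power grows with n
```

Unlike a bootstrap confidence interval, the resample size n varies by design here, which is exactly the "not sampling-with-replacement the same n" point above.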
13,731 | Power analysis for ordinal logistic regression | I would add one other thing to Snow's answer (and this applies to any power analysis via simulation) - pay attention to whether you are looking for a 1 or 2 tailed test. Popular programs like G*Power default to 1-tailed test, and if you are trying to see if your simulations match them (always a good idea when you are l... | Power analysis for ordinal logistic regression | I would add one other thing to Snow's answer (and this applies to any power analysis via simulation) - pay attention to whether you are looking for a 1 or 2 tailed test. Popular programs like G*Power | Power analysis for ordinal logistic regression
I would add one other thing to Snow's answer (and this applies to any power analysis via simulation) - pay attention to whether you are looking for a 1 or 2 tailed test. Popular programs like G*Power default to 1-tailed test, and if you are trying to see if your simulation... | Power analysis for ordinal logistic regression
I would add one other thing to Snow's answer (and this applies to any power analysis via simulation) - pay attention to whether you are looking for a 1 or 2 tailed test. Popular programs like G*Power |
13,732 | 100-sided dice roll problem | This question is ambiguous. Does it mean
You can play this game only once and you wish to maximize the expected difference between what you collect at the end and the cost of the rolls needed to get there? Or,
You can play this game an unlimited number of times and you wish to maximize your expected profit per roll ... | 100-sided dice roll problem | This question is ambiguous. Does it mean
You can play this game only once and you wish to maximize the expected difference between what you collect at the end and the cost of the rolls needed to get | 100-sided dice roll problem
This question is ambiguous. Does it mean
You can play this game only once and you wish to maximize the expected difference between what you collect at the end and the cost of the rolls needed to get there? Or,
You can play this game an unlimited number of times and you wish to maximize yo... | 100-sided dice roll problem
This question is ambiguous. Does it mean
You can play this game only once and you wish to maximize the expected difference between what you collect at the end and the cost of the rolls needed to get |
13,733 | 100-sided dice roll problem | Let $t \in [0,99]$ be our rejection threshold value. In other words, if the value we rolled is $> t$, then we stop.
Then $p = 1 - \frac{t}{100}$ is the probability that we stop. This then means that on average it will take us $\frac{1}{p}$ rolls to finish. Note that when we stop, we received a value uniformly distribut... | 100-sided dice roll problem | Let $t \in [0,99]$ be our rejection threshold value. In other words, if the value we rolled is $> t$, then we stop.
Then $p = 1 - \frac{t}{100}$ is the probability that we stop. This then means that o | 100-sided dice roll problem
Let $t \in [0,99]$ be our rejection threshold value. In other words, if the value we rolled is $> t$, then we stop.
Then $p = 1 - \frac{t}{100}$ is the probability that we stop. This then means that on average it will take us $\frac{1}{p}$ rolls to finish. Note that when we stop, we received... | 100-sided dice roll problem
Let $t \in [0,99]$ be our rejection threshold value. In other words, if the value we rolled is $> t$, then we stop.
Then $p = 1 - \frac{t}{100}$ is the probability that we stop. This then means that o |
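Following this answer's setup, the expected profit at threshold $t$ is the mean of the stopped value, $(t+101)/2$, minus the expected cost of the rolls, $100/(100-t)$. A quick scan over all thresholds (Python used here just to do the arithmetic):

```python
# Expected profit as a function of the rejection threshold t (reroll if <= t).
def expected_profit(t):
    value_if_stop = (t + 101) / 2      # mean of a uniform draw on t+1..100
    expected_rolls = 100 / (100 - t)   # mean of a geometric with p = (100-t)/100
    return value_if_stop - expected_rolls

best_t = max(range(100), key=expected_profit)
print(best_t, round(expected_profit(best_t), 4))  # 86 86.3571
```

So one keeps rolling on anything 86 or below and stops at 87 or above, matching the \$86.36 average reported by the simulation answer in this thread.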
13,734 | 100-sided dice roll problem | I coded this in Python and obtained the following results from 1,000,000 runs for each test:
Test 1: Stopping when throw >= 50:
Average winnings: \$73.07
Minimum winnings: \$35
Maximum throws: 20
Test 2: Stopping when throw >= 87:
Average winnings: \$86.36
Minimum winnings: \$-4
Maximum throws: 92
I tested a few stoppi... | 100-sided dice roll problem | I coded this in Python and obtained the following results from 1,000,000 runs for each test:
Test 1: Stopping when throw >= 50:
Average winnings: \$73.07
Minimum winnings: \$35
Maximum throws: 20
Test | 100-sided dice roll problem
I coded this in Python and obtained the following results from 1,000,000 runs for each test:
Test 1: Stopping when throw >= 50:
Average winnings: \$73.07
Minimum winnings: \$35
Maximum throws: 20
Test 2: Stopping when throw >= 87:
Average winnings: \$86.36
Minimum winnings: \$-4
Maximum thro... | 100-sided dice roll problem
I coded this in Python and obtained the following results from 1,000,000 runs for each test:
Test 1: Stopping when throw >= 50:
Average winnings: \$73.07
Minimum winnings: \$35
Maximum throws: 20
Test |
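The original code is not reproduced in the dump, but the experiment is easy to re-create. This hedged re-implementation (run count and seed are my assumptions, not the answer's) gives averages close to the \$73.07 and \$86.36 reported above:

```python
# Monte Carlo check of the two stopping rules quoted above.
import random

random.seed(3)

def play(stop_at):
    cost = 0
    while True:
        cost += 1                   # every roll costs $1
        roll = random.randint(1, 100)
        if roll >= stop_at:
            return roll - cost      # winnings when we stop

def average_winnings(stop_at, runs=100_000):
    return sum(play(stop_at) for _ in range(runs)) / runs

print(round(average_winnings(50), 2))  # near 73.04 in expectation
print(round(average_winnings(87), 2))  # near 86.36 in expectation
```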
13,735 | 100-sided dice roll problem | The past is past and doesn't matter for your strategy, so after roll $i$ you have the option of $\$X_i$ if $X_i$ is showing, or paying \$1 to get the random $\$X_{i+1}$, for a total of $\$X_{i+1}-1$. The expected value of the next roll, and every future roll, is \$50-1=\$49.
Thus, if you are currently getting \$50 or... | 100-sided dice roll problem | The past is past and doesn't matter for your strategy, so after roll $i$ you have the option of $\$X_i$ if $X_i$ is showing, or paying \$1 to get the random $\$X_{i+1}$, for a total of $\$X_{i+1}-1$. | 100-sided dice roll problem
The past is past and doesn't matter for your strategy, so after roll $i$ you have the option of $\$X_i$ if $X_i$ is showing, or paying \$1 to get the random $\$X_{i+1}$, for a total of $\$X_{i+1}-1$. The expected value of the next roll, and every future roll, is \$50-1=\$49.
Thus, if you a... | 100-sided dice roll problem
The past is past and doesn't matter for your strategy, so after roll $i$ you have the option of $\$X_i$ if $X_i$ is showing, or paying \$1 to get the random $\$X_{i+1}$, for a total of $\$X_{i+1}-1$. |
13,736 | 100-sided dice roll problem | First of all, the only thing that matters as far as deciding when to stop is the last roll. Others have mentioned this without proving it, so here's an argument for it: your winnings depend only on your last roll. Your previous rolls don't affect it at all. Furthermore, your marginal cost is not affected by the rolls. ... | 100-sided dice roll problem | First of all, the only thing that matters as far as deciding when to stop is the last roll. Others have mentioned this without proving it, so here's an argument for it: your winnings depend only on y | 100-sided dice roll problem
First of all, the only thing that matters as far as deciding when to stop is the last roll. Others have mentioned this without proving it, so here's an argument for it: your winnings depend only on your last roll. Your previous rolls don't affect it at all. Furthermore, your marginal cost is... | 100-sided dice roll problem
First of all, the only thing that matters as far as deciding when to stop is the last roll. Others have mentioned this without proving it, so here's an argument for it: your winnings depend only on y
13,737 | 100-sided dice roll problem | Maybe I don't understand your question, in which case I apologise. The expected payoff after $n$ rolls is the value of the last roll. This is,
$$
\mathbb{E}[R_n]=\sum_{i=1}^{100} p_i i = 50.5
$$
where $R_n$ is the revenue from the last roll; $R_n$ takes values $i=1,\ldots,100$ with each
value having probability $p_i=1/... | 100-sided dice roll problem | Maybe I don't understand your question, in which case I apologise. The expected payoff after $n$ rolls is the value of the last roll. This is,
$$
\mathbb{E}[R_n]=\sum_{i=1}^{100} p_i i = 50.5
$$
where | 100-sided dice roll problem
Maybe I don't understand your question, in which case I apologise. The expected payoff after $n$ rolls is the value of the last roll. This is,
$$
\mathbb{E}[R_n]=\sum_{i=1}^{100} p_i i = 50.5
$$
where $R_n$ is the revenue from the last roll; $R_n$ takes values $i=1,\ldots,100$ with each
valu... | 100-sided dice roll problem
Maybe I don't understand your question, in which case I apologise. The expected payoff after $n$ rolls is the value of the last roll. This is,
$$
\mathbb{E}[R_n]=\sum_{i=1}^{100} p_i i = 50.5
$$
where |
13,738 | 100-sided dice roll problem | As a stats learner some of the answers here went far above my head, but with my intuition I came to a similar conclusion so I thought it could be worth sharing my mental process in case it might help someone or to get it commented on by someone more expert.
With every new dice roll you are paying \$1 so you want to incr... | 100-sided dice roll problem | As a stats learner some of the answers here went far above my head, but with my intuition I came to a similar conclusion so I thought it could be worth sharing my mental process in case it might help s | 100-sided dice roll problem
As a stats learner some of the answers here went far above my head, but with my intuition I came to a similar conclusion so I thought it could be worth sharing my mental process in case it might help someone or to get it commented on by someone more expert.
With every new dice roll you are pa... | 100-sided dice roll problem
As a stats learner some of the answers here went far above my head, but with my intuition I came to a similar conclusion so I thought it could be worth sharing my mental process in case it might help s
13,739 | 100-sided dice roll problem | Here is a function to compute the expected best profit of the game recursively, in Python. This value is 86.35, and it is also the case that for all values of last_roll greater than or equal to 87, the most profitable option is to stop playing right away (best_profit(last_roll, rolls) == last_roll - rolls). I do not kn... | 100-sided dice roll problem | Here is a function to compute the expected best profit of the game recursively, in Python. This value is 86.35, and it is also the case that for all values of last_roll greater than or equal to 87, th | 100-sided dice roll problem
Here is a function to compute the expected best profit of the game recursively, in Python. This value is 86.35, and it is also the case that for all values of last_roll greater than or equal to 87, the most profitable option is to stop playing right away (best_profit(last_roll, rolls) == las... | 100-sided dice roll problem
Here is a function to compute the expected best profit of the game recursively, in Python. This value is 86.35, and it is also the case that for all values of last_roll greater than or equal to 87, th |
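That recursion can also be phrased as a fixed-point computation: the value $V$ of the game satisfies $V = -1 + \frac{1}{100}\sum_{x=1}^{100}\max(x, V)$, since each extra roll costs \$1 and you keep the better of the new roll and the option of continuing. A short sketch iterating this to convergence (not the answer's original code):

```python
# Value iteration for the dice game: V = -1 + E[max(roll, V)].
def game_value(tol=1e-10):
    v = 0.0
    while True:
        v_new = -1 + sum(max(x, v) for x in range(1, 101)) / 100
        if abs(v_new - v) < tol:
            return v_new
        v = v_new

v = game_value()
threshold = min(x for x in range(1, 101) if x >= v)  # smallest roll worth keeping
print(round(v, 2), threshold)  # 86.36 87
```

The update map is a contraction, so the iteration converges; the fixed point reproduces the 86.35-ish value and the stop-at-87 rule quoted above.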
13,740 | 100-sided dice roll problem | Here we generalize on the other approaches but realize the same solution. The difference is that here we do not presume a stopping rule of the form suggested, but rather prove it is optimal.
We note that however many prior turns there have been should not impact our current decision. It follows immediately that we should tak... | 100-sided dice roll problem | Here we generalize on the other approaches but realize the same solution. The difference is that here we do not presume a stopping rule of the form suggested, but rather prove it is optimal.
We note t | 100-sided dice roll problem
Here we generalize on the other approaches but realize the same solution. The difference is that here we do not presume a stopping rule of the form suggested, but rather prove it is optimal.
We note that however many prior turns there have been should not impact our current decision. It follows im... | 100-sided dice roll problem
Here we generalize on the other approaches but realize the same solution. The difference is that here we do not presume a stopping rule of the form suggested, but rather prove it is optimal.
We note t |
13,741 | Algorithm for sampling fixed number of samples from a finite population | Yes.
Collect the first $k$ items encountered into the cache. At steps $j=k+1, \ldots, n,$ place item $j$ in the cache with probability $k/j,$ in which case you will remove one of the existing items uniformly at random. After you have been through the entire population, the cache will be the desired random sample.
This... | Algorithm for sampling fixed number of samples from a finite population | Yes.
Collect the first $k$ items encountered into the cache. At steps $j=k+1, \ldots, n,$ place item $j$ in the cache with probability $k/j,$ in which case you will remove one of the existing items u | Algorithm for sampling fixed number of samples from a finite population
Yes.
Collect the first $k$ items encountered into the cache. At steps $j=k+1, \ldots, n,$ place item $j$ in the cache with probability $k/j,$ in which case you will remove one of the existing items uniformly at random. After you have been through ... | Algorithm for sampling fixed number of samples from a finite population
Yes.
Collect the first $k$ items encountered into the cache. At steps $j=k+1, \ldots, n,$ place item $j$ in the cache with probability $k/j,$ in which case you will remove one of the existing items u |
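This procedure is the classic reservoir sampling algorithm (Algorithm R); a direct Python transcription of the steps above, with an arbitrary population and seed:

```python
# Reservoir sampling: a uniform random k-subset in one pass, n unknown upfront.
import random

def reservoir_sample(stream, k, rng=random):
    cache = []
    for j, item in enumerate(stream, start=1):
        if j <= k:
            cache.append(item)              # the first k items fill the cache
        elif rng.random() < k / j:          # keep item j with probability k/j
            cache[rng.randrange(k)] = item  # evict a uniformly chosen item
    return cache

random.seed(4)
print(reservoir_sample(range(1000), 5))  # each item ends up kept w.p. 5/1000
```

Only the cache of size $k$ is ever stored, which is what makes the method suitable when the population is too large to hold in memory.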
13,742 | Algorithm for sampling fixed number of samples from a finite population | I think the most intuitive solution is that you have an ordered list, and every time you see a new item, you place the item into the list at a random location. Then you take the first $k$ elements of that list.
Since you're taking the first $k$ elements, you don't need to keep track of the elements after that, so you c... | Algorithm for sampling fixed number of samples from a finite population | I think the most intuitive solution is that you have an ordered list, and every time you see a new item, you place the item into the list at a random location. Then you take the first $k$ elements of | Algorithm for sampling fixed number of samples from a finite population
I think the most intuitive solution is that you have an ordered list, and every time you see a new item, you place the item into the list at a random location. Then you take the first $k$ elements of that list.
Since you're taking the first $k$ ele... | Algorithm for sampling fixed number of samples from a finite population
I think the most intuitive solution is that you have an ordered list, and every time you see a new item, you place the item into the list at a random location. Then you take the first $k$ elements of |
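A naive transcription of this mental model (deliberately inefficient, since it keeps the whole ordered list rather than the truncated version the answer goes on to describe): insert each arriving item at a uniformly random position, then keep the first $k$.

```python
# Intuitive (inefficient) version: random insertion into an ordered list.
import random

def random_insertion_sample(stream, k, rng=random):
    ordered = []
    for item in stream:
        # A uniform position among the len(ordered)+1 slots keeps every
        # ordering of the items seen so far equally likely.
        ordered.insert(rng.randrange(len(ordered) + 1), item)
    return ordered[:k]

random.seed(5)
print(random_insertion_sample(range(100), 3))  # a uniform 3-subset of 0..99
```

Because the list is a uniformly random permutation at every step, its first $k$ elements form a uniform random $k$-subset, which is why the truncated, memory-light version of this idea coincides with reservoir sampling.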
13,743 | Algorithm for sampling fixed number of samples from a finite population | Complementary to @whuber's and @Accumulation's good answers (+1).
Sampling techniques used to address such tasks are usually categorised under the umbrella of reservoir sampling; these sampling methodologies have been strongly motivated by the need to sample streaming data where the overall sample size $n$ is unknown o... | Algorithm for sampling fixed number of samples from a finite population | Complementary to @whuber's and @Accumulation's good answers (+1).
Sampling techniques used to address such tasks are usually categorised under the umbrella of reservoir sampling; these sampling method | Algorithm for sampling fixed number of samples from a finite population
Complementary to @whuber's and @Accumulation's good answers (+1).
Sampling techniques used to address such tasks are usually categorised under the umbrella of reservoir sampling; these sampling methodologies have been strongly motivated by the need... | Algorithm for sampling fixed number of samples from a finite population
Complementary to @whuber's and @Accumulation's good answers (+1).
Sampling techniques used to address such tasks are usually categorised under the umbrella of reservoir sampling; these sampling method |
13,744 | Difference between Anomaly and Outlier | The two terms are synonyms according to:
Aggarwal, Charu C. Outlier Analysis. Springer New York, 2017, doi: http://dx.doi.org/10.1007/978-3-319-47578-3_1
Quotation from page 1:
Outliers are also referred to as abnormalities, discordants, deviants, or anomalies in the data mining and statistics literature.
Bold ... | Difference between Anomaly and Outlier | The two terms are synonyms according to:
Aggarwal, Charu C. Outlier Analysis. Springer New York, 2017, doi: http://dx.doi.org/10.1007/978-3-319-47578-3_1
Quotation from page 1:
Outliers are also | Difference between Anomaly and Outlier
The two terms are synonyms according to:
Aggarwal, Charu C. Outlier Analysis. Springer New York, 2017, doi: http://dx.doi.org/10.1007/978-3-319-47578-3_1
Quotation from page 1:
Outliers are also referred to as abnormalities, discordants, deviants, or anomalies in the data mi... | Difference between Anomaly and Outlier
The two terms are synonyms according to:
Aggarwal, Charu C. Outlier Analysis. Springer New York, 2017, doi: http://dx.doi.org/10.1007/978-3-319-47578-3_1
Quotation from page 1:
Outliers are also |
13,745 | Difference between Anomaly and Outlier | A tongue-in-cheek answer:
Outlier: a value that you predictably find in your data that indicates your model does not work properly
Anomaly: a value that against all odds you find in your data that indicates your model does work properly
A more serious, less cryptic answer:
The concept of outliers starts from the issue ... | Difference between Anomaly and Outlier | A tongue-in-cheek answer:
Outlier: a value that you predictably find in your data that indicates your model does not work properly
Anomaly: a value that against all odds you find in your data that ind | Difference between Anomaly and Outlier
A tongue-in-cheek answer:
Outlier: a value that you predictably find in your data that indicates your model does not work properly
Anomaly: a value that against all odds you find in your data that indicates your model does work properly
A more serious, less cryptic answer:
The con... | Difference between Anomaly and Outlier
A tongue-in-cheek answer:
Outlier: a value that you predictably find in your data that indicates your model does not work properly
Anomaly: a value that against all odds you find in your data that ind |
13,746 | Difference between Anomaly and Outlier | An anomaly is a result that can't be explained given the base distribution (an impossibility if our assumptions are correct). An outlier is an unlikely event given the base distribution (an improbability). | Difference between Anomaly and Outlier | An anomaly is a result that can't be explained given the base distribution (an impossibility if our assumptions are correct). An outlier is an unlikely event given the base distribution (an improbabil | Difference between Anomaly and Outlier
An anomaly is a result that can't be explained given the base distribution (an impossibility if our assumptions are correct). An outlier is an unlikely event given the base distribution (an improbability). | Difference between Anomaly and Outlier
An anomaly is a result that can't be explained given the base distribution (an impossibility if our assumptions are correct). An outlier is an unlikely event given the base distribution (an improbabil |
13,747 | Difference between Anomaly and Outlier | The terms are largely used in an interchangeable way.
"Outlier" refers to something lying outside the norm - so it is "anomalous".
But I have the impression that "outlier" is usually used for very rare observations. In statistics, on a normal distribution, you would consider three sigma to be outliers. That is 99.7% of... | Difference between Anomaly and Outlier | The terms are largely used in an interchangeable way.
"Outlier" refers to something lying outside the norm - so it is "anomalous".
But I have the impression that "outlier" is usually used for very rar | Difference between Anomaly and Outlier
The terms are largely used in an interchangeable way.
"Outlier" refers to something lying outside the norm - so it is "anomalous".
But I have the impression that "outlier" is usually used for very rare observations. In statistics, on a normal distribution, you would consider three... | Difference between Anomaly and Outlier
The terms are largely used in an interchangeable way.
"Outlier" refers to something lying outside the norm - so it is "anomalous".
But I have the impression that "outlier" is usually used for very rar
13,748 | Difference between Anomaly and Outlier | Just to muddy the waters further, in climatology anomaly just implies the difference between value and mean, or a deviation:
The term temperature anomaly means a departure from a reference
value or long-term average. A positive anomaly indicates that the
observed temperature was warmer than the reference value, w... | Difference between Anomaly and Outlier | Just to muddy the waters further, in climatology anomaly just implies the difference between value and mean, or a deviation:
The term temperature anomaly means a departure from a reference
value o | Difference between Anomaly and Outlier
Just to muddy the waters further, in climatology anomaly just implies the difference between value and mean, or a deviation:
The term temperature anomaly means a departure from a reference
value or long-term average. A positive anomaly indicates that the
observed temperature... | Difference between Anomaly and Outlier
Just to muddy the waters further, in climatology anomaly just implies the difference between value and mean, or a deviation:
The term temperature anomaly means a departure from a reference
value o |
13,749 | Difference between Anomaly and Outlier | Good question. However, google search on "difference between outliers and anomalies site:.edu" shows that there is no theoretical difference between these two terms. They are being used interchangeably in literature. | Difference between Anomaly and Outlier | Good question. However, google search on "difference between outliers and anomalies site:.edu" shows that there is no theoretical difference between these two terms. They are being used interchangeabl | Difference between Anomaly and Outlier
Good question. However, google search on "difference between outliers and anomalies site:.edu" shows that there is no theoretical difference between these two terms. They are being used interchangeably in literature. | Difference between Anomaly and Outlier
Good question. However, google search on "difference between outliers and anomalies site:.edu" shows that there is no theoretical difference between these two terms. They are being used interchangeabl |
13,750 | Difference between Anomaly and Outlier | An outlier is a data point that makes it hard to fit a model. You face outliers, often unwillingly, when you are trying to fit a model on your dataset. Removing outliers enables building better (i.e. more generalizable) models. A point $(1,5)$ would be an outlier for the model $y=x$. You ignore it in light of the fact ... | Difference between Anomaly and Outlier | An outlier is a data point that makes it hard to fit a model. You face outliers, often unwillingly, when you are trying to fit a model on your dataset. Removing outliers enables building better (i.e. | Difference between Anomaly and Outlier
An outlier is a data point that makes it hard to fit a model. You face outliers, often unwillingly, when you are trying to fit a model on your dataset. Removing outliers enables building better (i.e. more generalizable) models. A point $(1,5)$ would be an outlier for the model $y=... | Difference between Anomaly and Outlier
An outlier is a data point that makes it hard to fit a model. You face outliers, often unwillingly, when you are trying to fit a model on your dataset. Removing outliers enables building better (i.e. |
13,751 | What is a good book about the philosophy behind Bayesian thinking? | Jay Kadane's Principles of uncertainty is a recent and highly coherent introduction to subjective Bayesian thinking. I reviewed it there and definitely recommend it. | What is a good book about the philosophy behind Bayesian thinking? | Jay Kadane's Principles of uncertainty is a recent and highly coherent introduction to subjective Bayesian thinking. I reviewed it there and definitely recommend it. | What is a good book about the philosophy behind Bayesian thinking?
Jay Kadane's Principles of uncertainty is a recent and highly coherent introduction to subjective Bayesian thinking. I reviewed it there and definitely recommend it. | What is a good book about the philosophy behind Bayesian thinking?
Jay Kadane's Principles of uncertainty is a recent and highly coherent introduction to subjective Bayesian thinking. I reviewed it there and definitely recommend it. |
13,752 | What is a good book about the philosophy behind Bayesian thinking? | I'm a particular fan of Understanding Uncertainty by Dennis Lindley. I actually emailed Jay Kadane a while back to ask the same question you did, and he recommended me this book. | What is a good book about the philosophy behind Bayesian thinking? | I'm a particular fan of Understanding Uncertainty by Dennis Lindley. I actually emailed Jay Kadane a while back to ask the same question you did, and he recommended me this book. | What is a good book about the philosophy behind Bayesian thinking?
I'm a particular fan of Understanding Uncertainty by Dennis Lindley. I actually emailed Jay Kadane a while back to ask the same question you did, and he recommended me this book. | What is a good book about the philosophy behind Bayesian thinking?
I'm a particular fan of Understanding Uncertainty by Dennis Lindley. I actually emailed Jay Kadane a while back to ask the same question you did, and he recommended me this book. |
13,753 | What is a good book about the philosophy behind Bayesian thinking? | Probability, The Logic of Science by E.T. Jaynes, provides excellent discussions around this subject. Jaynes is on the side of Objective Bayesianism.
Related books that influenced Jaynes' book are Jeffreys' Theory of Probability of 1939 (1948, 1961), Good's Probability & the Weighing of Evidence of 1950 and Savage's Fo... | What is a good book about the philosophy behind Bayesian thinking? | Probability, The Logic of Science by E.T. Jaynes, provides excellent discussions around this subject. Jaynes is on the side of Objective Bayesianism.
Related books that influenced Jaynes' book are Jef | What is a good book about the philosophy behind Bayesian thinking?
Probability, The Logic of Science by E.T. Jaynes, provides excellent discussions around this subject. Jaynes is on the side of Objective Bayesianism.
Related books that influenced Jaynes' book are Jeffreys' Theory of Probability of 1939 (1948, 1961), Go... | What is a good book about the philosophy behind Bayesian thinking?
Probability, The Logic of Science by E.T. Jaynes, provides excellent discussions around this subject. Jaynes is on the side of Objective Bayesianism.
Related books that influenced Jaynes' book are Jef |
13,754 | What is a good book about the philosophy behind Bayesian thinking? | Here is a recent title with a focus on regression: Bayesian and Frequentist Regression Methods | What is a good book about the philosophy behind Bayesian thinking? | Here is a recent title with a focus on regression: Bayesian and Frequentist Regression Methods | What is a good book about the philosophy behind Bayesian thinking?
Here is a recent title with a focus on regression: Bayesian and Frequentist Regression Methods | What is a good book about the philosophy behind Bayesian thinking?
Here is a recent title with a focus on regression: Bayesian and Frequentist Regression Methods |
13,755 | What is a good book about the philosophy behind Bayesian thinking? | One of the most lucid expositions of Bayesian Thinking can be found in "Bayes' Rule" by Jim Stone. The same book comes in several versions, with accompanying R, Python and MATLAB code.
http://jim-stone.staff.shef.ac.uk/BookBayes2012/BayesRuleBookMain.html | What is a good book about the philosophy behind Bayesian thinking? | One of the most lucid expositions of Bayesian Thinking can be found in "Bayes' Rule" by Jim Stone. The same book comes in several versions, with accompanying R, Python and MATLAB code.
http://jim-s | What is a good book about the philosophy behind Bayesian thinking?
One of the most lucid expositions of Bayesian Thinking can be found in "Bayes' Rule" by Jim Stone. The same book comes in several versions, with accompanying R, Python and MATLAB code.
http://jim-stone.staff.shef.ac.uk/BookBayes2012/BayesRuleBookMain... | What is a good book about the philosophy behind Bayesian thinking?
One of the most lucid expositions of Bayesian Thinking can be found in "Bayes' Rule" by Jim Stone. The same book comes in several versions, with accompanying R, Python and MATLAB code.
http://jim-s |
13,756 | Influence functions and OLS | Influence functions are basically an analytical tool that can be used to assess the effect (or "influence") of removing an observation on the value of a statistic without having to re-calculate that statistic. They can also be used to create asymptotic variance estimates. If influence equals $I$ then asymptotic varia... | Influence functions and OLS | Influence functions are basically an analytical tool that can be used to assess the effect (or "influence") of removing an observation on the value of a statistic without having to re-calculate that s | Influence functions and OLS
Influence functions are basically an analytical tool that can be used to assess the effect (or "influence") of removing an observation on the value of a statistic without having to re-calculate that statistic. They can also be used to create asymptotic variance estimates. If influence equa... | Influence functions and OLS
Influence functions are basically an analytical tool that can be used to assess the effect (or "influence") of removing an observation on the value of a statistic without having to re-calculate that s |
13,757 | Influence functions and OLS | Here is a super general way to talk about influence functions of a regression. First I'm going to tackle one way of presenting influence functions:
Suppose $F$ is a distribution on $\Sigma$. The contaminated distribution function, $F_\epsilon(x)$ can be defined as:
$$
F_\epsilon(x)=(1-\epsilon)F+\epsilon\delta_x
$$
whe... | Influence functions and OLS | Here is a super general way to talk about influence functions of a regression. First I'm going to tackle one way of presenting influence functions:
Suppose $F$ is a distribution on $\Sigma$. The conta | Influence functions and OLS
Here is a super general way to talk about influence functions of a regression. First I'm going to tackle one way of presenting influence functions:
Suppose $F$ is a distribution on $\Sigma$. The contaminated distribution function, $F_\epsilon(x)$ can be defined as:
$$
F_\epsilon(x)=(1-\epsil... | Influence functions and OLS
Here is a super general way to talk about influence functions of a regression. First I'm going to tackle one way of presenting influence functions:
Suppose $F$ is a distribution on $\Sigma$. The conta |
13,758 | Influence functions and OLS | Consider a simple linear model $$Y_i=X_i\beta +u$$
where for simplicity, we assume $\mu_X=\mu_Y=0$, and $X_i,Y_i$ are scalar random variables with independent and identical joint measure $P$ of $X,Y$.
Consider the mapping $$\phi:\mathbb{D} \mapsto \mathbb{E} $$
such that $\phi$ maps the joint distribution of $X,Y, {P}$... | Influence functions and OLS | Consider a simple linear model $$Y_i=X_i\beta +u$$
where for simplicity, we assume $\mu_X=\mu_Y=0$, and $X_i,Y_i$ are scalar random variables with independent and identical joint measure $P$ of $X,Y$. | Influence functions and OLS
Consider a simple linear model $$Y_i=X_i\beta +u$$
where for simplicity, we assume $\mu_X=\mu_Y=0$, and $X_i,Y_i$ are scalar random variables with independent and identical joint measure $P$ of $X,Y$.
Consider the mapping $$\phi:\mathbb{D} \mapsto \mathbb{E} $$
such that $\phi$ maps the join... | Influence functions and OLS
Consider a simple linear model $$Y_i=X_i\beta +u$$
where for simplicity, we assume $\mu_X=\mu_Y=0$, and $X_i,Y_i$ are scalar random variables with independent and identical joint measure $P$ of $X,Y$. |
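A finite-sample way to see what the influence function measures is to add one contaminating point, the empirical counterpart of mixing in $\epsilon\delta_x$, and watch the statistic move. A small Python sketch (the data, seed, and outlier location are invented for illustration, not taken from the thread):

```python
import random

def ols_slope(xs, ys):
    # OLS slope (with intercept): sample covariance over sample variance
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(50)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

slope_clean = ols_slope(xs, ys)  # near the true slope of 2

# contaminate the sample with one point far from the bulk of the data,
# the finite-sample analogue of the contaminated distribution F_epsilon
xs.append(10.0)
ys.append(-20.0)
slope_contaminated = ols_slope(xs, ys)
print(slope_clean, slope_contaminated)
```

The contaminated slope is dragged far from 2 by a single point, which is the sense in which the OLS slope has an unbounded influence function.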
13,759 | How will studying "stochastic processes" help me as a statistician? | Stochastic processes underlie many ideas in statistics such as time series, Markov chains, Markov processes, Bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic processes will be useful in two ways:
Enable you to develop models for situations of interest to you.
An exposure to s... | How will studying "stochastic processes" help me as a statistician? | Stochastic processes underlie many ideas in statistics such as time series, markov chains, markov processes, bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic | How will studying "stochastic processes" help me as a statistician?
Stochastic processes underlie many ideas in statistics such as time series, markov chains, markov processes, bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic processes will be useful in two ways:
Enable you t... | How will studying "stochastic processes" help me as a statistician?
Stochastic processes underlie many ideas in statistics such as time series, markov chains, markov processes, bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic |
13,760 | How will studying "stochastic processes" help me as a statistician? | You need to be careful how you ask this question, since you could substitute almost anything in place of stochastic processes and it would still be potentially useful. For example, a course in biology could help with biological statistical consultancy since you know more biology!
I presume that you have a choice of mod... | How will studying "stochastic processes" help me as a statistician? | You need to be careful how you ask this question. Since you could substitute almost anything in place of stochastic processes and it would still be potentially useful. For example, a course in biology | How will studying "stochastic processes" help me as a statistician?
You need to be careful how you ask this question. Since you could substitute almost anything in place of stochastic processes and it would still be potentially useful. For example, a course in biology could help with biological statistical consultancy ... | How will studying "stochastic processes" help me as a statistician?
You need to be careful how you ask this question. Since you could substitute almost anything in place of stochastic processes and it would still be potentially useful. For example, a course in biology |
13,761 | How will studying "stochastic processes" help me as a statistician? | A deep understanding of survival analysis requires knowledge of counting processes, martingales, Cox processes... See e.g. Odd O. Aalen, Ørnulf Borgan, Håkon K. Gjessing. Survival and event history analysis: a process point of view. Springer, 2008. ISBN 9780387202877
Having said that, many applied statisticians (inclu... | How will studying "stochastic processes" help me as a statistician? | A deep understanding of survival analysis requires knowledge of counting processes, martingales, Cox processes... See e.g. Odd O. Aalen, Ørnulf Borgan, Håkon K. Gjessing. Survival and event history an | How will studying "stochastic processes" help me as a statistician?
A deep understanding of survival analysis requires knowledge of counting processes, martingales, Cox processes... See e.g. Odd O. Aalen, Ørnulf Borgan, Håkon K. Gjessing. Survival and event history analysis: a process point of view. Springer, 2008. ISB... | How will studying "stochastic processes" help me as a statistician?
A deep understanding of survival analysis requires knowledge of counting processes, martingales, Cox processes... See e.g. Odd O. Aalen, Ørnulf Borgan, Håkon K. Gjessing. Survival and event history an |
13,762 | How will studying "stochastic processes" help me as a statistician? | The short answer probably is that all observable processes, which we may want to analyze with statistical tools, are stochastic processes, that is, they contain some element of randomness. The course will probably teach you the mathematics behind these stochastic processes, e. g. distribution functions, which will allo... | How will studying "stochastic processes" help me as a statistician? | The short answer probably is that all observable processes, which we may want to analyze with statistical tools, are stochastic processes, that is, they contain some element of randomness. The course | How will studying "stochastic processes" help me as a statistician?
The short answer probably is that all observable processes, which we may want to analyze with statistical tools, are stochastic processes, that is, they contain some element of randomness. The course will probably teach you the mathematics behind these... | How will studying "stochastic processes" help me as a statistician?
The short answer probably is that all observable processes, which we may want to analyze with statistical tools, are stochastic processes, that is, they contain some element of randomness. The course |
13,763 | How will studying "stochastic processes" help me as a statistician? | Just for the sake of completeness, an IID sequence of random variables is also a stochastic process (a very simple one). | How will studying "stochastic processes" help me as a statistician? | Just for the sake of completeness, an IID sequence of random variables is also a stochastic process (a very simple one). | How will studying "stochastic processes" help me as a statistician?
Just for the sake of completeness, an IID sequence of random variables is also a stochastic process (a very simple one). | How will studying "stochastic processes" help me as a statistician?
Just for the sake of completeness, an IID sequence of random variables is also a stochastic process (a very simple one). |
13,764 | How will studying "stochastic processes" help me as a statistician? | In medical statistics, you need stochastic processes to calculate how to adjust significance levels when stopping a clinical trial early. In fact, the whole area of monitoring clinical trials as emerging evidence points to one hypothesis or another, is based on the theory of stochastic processes. So yes, this course is... | How will studying "stochastic processes" help me as a statistician? | In medical statistics, you need stochastic processes to calculate how to adjust significance levels when stopping a clinical trial early. In fact, the whole area of monitoring clinical trials as emerg | How will studying "stochastic processes" help me as a statistician?
In medical statistics, you need stochastic processes to calculate how to adjust significance levels when stopping a clinical trial early. In fact, the whole area of monitoring clinical trials as emerging evidence points to one hypothesis or another, is... | How will studying "stochastic processes" help me as a statistician?
In medical statistics, you need stochastic processes to calculate how to adjust significance levels when stopping a clinical trial early. In fact, the whole area of monitoring clinical trials as emerg |
13,765 | How will studying "stochastic processes" help me as a statistician? | Other areas of application for stochastic processes: (1) Asymptotic theory: This builds on PeterR's comment about an IID sequence. Law of large numbers and central limit theorem results require an understanding of stochastic processes. This is so fundamental in so many areas of application that I am inclined to say th... | How will studying "stochastic processes" help me as a statistician? | Other areas of application for stochastic processes: (1) Asymptotic theory: This builds on PeterR's comment about an IID sequence. Law of large numbers and central limit theorem results require an un | How will studying "stochastic processes" help me as a statistician?
Other areas of application for stochastic processes: (1) Asymptotic theory: This builds on PeterR's comment about an IID sequence. Law of large numbers and central limit theorem results require an understanding of stochastic processes. This is so fund... | How will studying "stochastic processes" help me as a statistician?
Other areas of application for stochastic processes: (1) Asymptotic theory: This builds on PeterR's comment about an IID sequence. Law of large numbers and central limit theorem results require an un |
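As a toy complement to these answers, a two-state Markov chain illustrates the kind of long-run statement a stochastic-processes course makes precise (the transition probabilities, seed, and run length below are arbitrary choices of mine):

```python
import random

random.seed(42)
# Two-state Markov chain: P(0 -> 1) = 0.2 and P(1 -> 0) = 0.4, so the
# stationary distribution puts mass 0.2 / (0.2 + 0.4) = 1/3 on state 1.
state = 0
visits_to_1 = 0
steps = 200_000
for _ in range(steps):
    if state == 0:
        state = 1 if random.random() < 0.2 else 0
    else:
        state = 0 if random.random() < 0.4 else 1
    visits_to_1 += state
print(visits_to_1 / steps)  # long-run fraction of time in state 1, near 1/3
```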
13,766 | (Why) Is absolute loss not a proper scoring rule? | Let's first make sure we agree on definitions. Consider a binary random variable $Y \sim \text{Ber}(p)$, and consider a loss function $L(y_i|s)$, where $s$ is an estimate of $p$ given the data. In your examples, $s$ is a function of observed data $y_1,\dots,y_n$ with $s = \hat{p}$. The Brier score loss function is $L_b... | (Why) Is absolute loss not a proper scoring rule? | Let's first make sure we agree on definitions. Consider a binary random variable $Y \sim \text{Ber}(p)$, and consider a loss function $L(y_i|s)$, where $s$ is an estimate of $p$ given the data. In you | (Why) Is absolute loss not a proper scoring rule?
Let's first make sure we agree on definitions. Consider a binary random variable $Y \sim \text{Ber}(p)$, and consider a loss function $L(y_i|s)$, where $s$ is an estimate of $p$ given the data. In your examples, $s$ is a function of observed data $y_1,\dots,y_n$ with $s... | (Why) Is absolute loss not a proper scoring rule?
Let's first make sure we agree on definitions. Consider a binary random variable $Y \sim \text{Ber}(p)$, and consider a loss function $L(y_i|s)$, where $s$ is an estimate of $p$ given the data. In you |
13,767 | (Why) Is absolute loss not a proper scoring rule? | Take a simple example where $p_i$ are known probabilities and $y_i$ are Bernoulli($p_i$).
What is $\hat y_i$? The best choice is obviously $\hat y_i=p_i$. Alternatively, we might take $\check y_i = 1$ if $p_i>0.5$ and $\check y_i=0$ if $p_i<0.5$.
Suppose $p_i>0.5$ (for simplicity).
The expected Brier loss of $\hat y... | (Why) Is absolute loss not a proper scoring rule? | Take a simple example where $p_i$ are known probabilities and $y_i$ are Bernoulli($p_i$).
What is $\hat y_i$? The best choice is obviously $\hat y_i=p_i$. Alternatively, we might take $\check y_i = | (Why) Is absolute loss not a proper scoring rule?
Take a simple example where $p_i$ are known probabilities and $y_i$ are Bernoulli($p_i$).
What is $\hat y_i$? The best choice is obviously $\hat y_i=p_i$. Alternatively, we might take $\check y_i = 1$ if $p_i>0.5$ and $\check y_i=0$ if $p_i<0.5$.
Suppose $p_i>0.5$ (f... | (Why) Is absolute loss not a proper scoring rule?
Take a simple example where $p_i$ are known probabilities and $y_i$ are Bernoulli($p_i$).
What is $\hat y_i$? The best choice is obviously $\hat y_i=p_i$. Alternatively, we might take $\check y_i = |
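The comparison in this answer can be checked by direct computation. In the Python sketch below (the value p = 0.7 and the forecast grid are my choices, not from the thread), the expected Brier loss is minimized by reporting the true probability, while the expected absolute loss is minimized by a degenerate 0/1 forecast:

```python
# For y ~ Bernoulli(p), expected loss of reporting forecast s, computed exactly
p = 0.7

def expected_brier(s):
    return p * (1 - s) ** 2 + (1 - p) * s ** 2

def expected_absolute(s):
    return p * (1 - s) + (1 - p) * s  # linear in s, slope (1 - 2p) < 0

grid = [i / 100 for i in range(101)]
best_brier = min(grid, key=expected_brier)        # 0.7: the true probability
best_absolute = min(grid, key=expected_absolute)  # 1.0: a degenerate forecast
print(best_brier, best_absolute)
```

This is exactly the propriety failure: absolute loss rewards reporting a certainty you do not have.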
13,768 | (Why) Is absolute loss not a proper scoring rule? | In a slightly different direction, one way to look at this is to consider more generally the continuous ranked probability score (CRPS), which is a proper scoring rule.
For a predicted CDF $F$ and an observation $y$, the CRPS is defined like this:
$$\text{CRPS}(F,y) = \int (F(z)-I(y\leq z))^2dz$$
Intuitively it is a me... | (Why) Is absolute loss not a proper scoring rule? | In a slightly different direction, one way to look at this is to consider more generally the continuous ranked probability score (CRPS), which is a proper scoring rule.
For a predicted CDF $F$ and an | (Why) Is absolute loss not a proper scoring rule?
In a slightly different direction, one way to look at this is to consider more generally the continuous ranked probability score (CRPS), which is a proper scoring rule.
For a predicted CDF $F$ and an observation $y$, the CRPS is defined like this:
$$\text{CRPS}(F,y) = \... | (Why) Is absolute loss not a proper scoring rule?
In a slightly different direction, one way to look at this is to consider more generally the continuous ranked probability score (CRPS), which is a proper scoring rule.
For a predicted CDF $F$ and an |
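A well-known special case, checked numerically below, is that for a point forecast, a CDF jumping from 0 to 1 at m, the CRPS reduces to the absolute error |m - y|. The integration helper, its grid, and the example values are my own construction:

```python
def crps_numeric(cdf, y, lo=-10.0, hi=10.0, n=200_000):
    # midpoint Riemann sum of (F(z) - 1{y <= z})^2 over [lo, hi]
    dz = (hi - lo) / n
    total = 0.0
    for i in range(n):
        z = lo + (i + 0.5) * dz
        indicator = 1.0 if y <= z else 0.0
        total += (cdf(z) - indicator) ** 2 * dz
    return total

m, y = 1.0, 3.5
point_forecast = lambda z: 1.0 if z >= m else 0.0  # CDF of a point mass at m
score = crps_numeric(point_forecast, y)
print(score)  # close to |m - y| = 2.5
```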
13,769 | Split data into N equal groups | If I understand the question correctly, this will get you what you want. Assuming your data frame is called df and you have N defined, you can do this:
split(df, sample(1:N, nrow(df), replace=T))
This will return a list of data frames where each data frame consists of randomly selected rows from df. By default samp... | Split data into N equal groups | If I understand the question correctly, this will get you what you want. Assuming your data frame is called df and you have N defined, you can do this:
split(df, sample(1:N, nrow(df), replace=T))
Thi | Split data into N equal groups
If I understand the question correctly, this will get you what you want. Assuming your data frame is called df and you have N defined, you can do this:
split(df, sample(1:N, nrow(df), replace=T))
This will return a list of data frames where each data frame consists of randomly selecte... | Split data into N equal groups
If I understand the question correctly, this will get you what you want. Assuming your data frame is called df and you have N defined, you can do this:
split(df, sample(1:N, nrow(df), replace=T))
Thi |
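A rough Python analogue of this split(df, sample(...)) idiom, with a plain list standing in for the data frame (the names and sizes are invented for the sketch):

```python
import random
from collections import defaultdict

random.seed(0)
rows = list(range(12))  # stand-ins for data-frame row indices
N = 3

# analogue of split(df, sample(1:N, nrow(df), replace=TRUE)):
# each row gets a random label in 1..N, so group sizes vary
groups = defaultdict(list)
for row in rows:
    groups[random.randint(1, N)].append(row)
print({label: len(members) for label, members in sorted(groups.items())})
```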
13,770 | Split data into N equal groups | Edit: The minDiff package has been superseded by the anticlust package.
This is a very late answer, but I found this page while googling whether
the problem as stated has ever been discussed anywhere. Maybe my answer
will help if someone finds this page from now on.
I wrote an R package, which does exactly what the qu... | Split data into N equal groups | Edit: The minDiff package has been superseded by the anticlust package.
This is a very late answer, but I found this page while googling whether
the problem as stated has ever been discussed anywhere | Split data into N equal groups
Edit: The minDiff package has been superseded by the anticlust package.
This is a very late answer, but I found this page while googling whether
the problem as stated has ever been discussed anywhere. Maybe my answer
will help if someone finds this page from now on.
I wrote an R package,... | Split data into N equal groups
Edit: The minDiff package has been superseded by the anticlust package.
This is a very late answer, but I found this page while googling whether
the problem as stated has ever been discussed anywhere |
13,771 | Split data into N equal groups | Although Alex A's answer gives an equal probability for each group, it does not meet the question's request for the groups to have an equal number of rows. In R:
stopifnot(nrow(df) %% N == 0)
df <- df[order(runif(nrow(df))), ]
bins <- rep(1:N, nrow(df) / N)
split(df, bins) | Split data into N equal groups | Although Alex A's answer gives an equal probability for each group, it does not meet the question's request for the groups to have an equal number of rows. In R:
stopifnot(nrow(df) %% N == 0)
df <- | Split data into N equal groups
Although Alex A's answer gives an equal probability for each group, it does not meet the question's request for the groups to have an equal number of rows. In R:
stopifnot(nrow(df) %% N == 0)
df <- df[order(runif(nrow(df))), ]
bins <- rep(1:N, nrow(df) / N)
split(df, bins) | Split data into N equal groups
Although Alex A's answer gives an equal probability for each group, it does not meet the question's request for the groups to have an equal number of rows. In R:
stopifnot(nrow(df) %% N == 0)
df <- |
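The equal-size version can be mimicked in Python by shuffling and then cutting contiguous bins, which after shuffling is distributionally equivalent to the rep(1:N, ...) labelling (toy data again, names my own):

```python
import random

random.seed(0)
rows = list(range(12))
N = 3
assert len(rows) % N == 0  # mirrors the stopifnot() precondition

random.shuffle(rows)       # mirrors df[order(runif(nrow(df))), ]
size = len(rows) // N
parts = [rows[i * size:(i + 1) * size] for i in range(N)]
print([len(part) for part in parts])  # exactly len(rows) / N rows per group
```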
13,772 | Split data into N equal groups | This can be solved with nesting using tidyr/dplyr
require(dplyr)
require(tidyr)
num_groups = 10
iris %>%
group_by((row_number()-1) %/% (n()/num_groups)) %>%
nest %>% pull(data) | Split data into N equal groups | This can be solved with nesting using tidyr/dplyr
require(dplyr)
require(tidyr)
num_groups = 10
iris %>%
group_by((row_number()-1) %/% (n()/num_groups)) %>%
nest %>% pull(data) | Split data into N equal groups
This can be solved with nesting using tidyr/dplyr
require(dplyr)
require(tidyr)
num_groups = 10
iris %>%
group_by((row_number()-1) %/% (n()/num_groups)) %>%
nest %>% pull(data) | Split data into N equal groups
This can be solved with nesting using tidyr/dplyr
require(dplyr)
require(tidyr)
num_groups = 10
iris %>%
group_by((row_number()-1) %/% (n()/num_groups)) %>%
nest %>% pull(data) |
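For comparison, a Python sketch of what the (row_number()-1) %/% (n()/num_groups) grouping computes, namely deterministic contiguous chunks rather than a random split (150 rows, like iris, and 10 groups are just example sizes):

```python
# group i collects rows with (row_index) // (n / num_groups) == i
rows = list(range(150))
num_groups = 10
chunk = len(rows) / num_groups  # 15.0 here; need not be an integer in general
parts = {}
for i, row in enumerate(rows):
    parts.setdefault(int(i // chunk), []).append(row)
print(sorted(len(part) for part in parts.values()))
```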
13,773 | Significance testing or cross validation? | First, let's be explicit and put the question into the context of multiple linear regression where we regress a response variable, $y$, on several different variables $x_1, \ldots, x_p$ (correlated or not), with parameter vector $\beta = (\beta_0, \beta_1, \ldots, \beta_p)$ and regression function
$$f(x_1, \ldots, x_p) ... | Significance testing or cross validation? | First, lets be explicit and put the question into the context of multiple linear regression where we regress a response variable, $y$, on several different variables $x_1, \ldots, x_p$ (correlated or | Significance testing or cross validation?
First, lets be explicit and put the question into the context of multiple linear regression where we regress a response variable, $y$, on several different variables $x_1, \ldots, x_p$ (correlated or not), with parameter vector $\beta = (\beta_0, \beta_1, \ldots, \beta_p)$ and ... | Significance testing or cross validation?
First, lets be explicit and put the question into the context of multiple linear regression where we regress a response variable, $y$, on several different variables $x_1, \ldots, x_p$ (correlated or |
13,774 | Significance testing or cross validation? | Simply using significance tests and a stepwise procedure to perform model selection can lead you to believe that you have a very strong model with significant predictors when you, in fact, do not; you may get strong correlations by chance and these correlations can seemingly be enhanced as you remove other unnecessary ... | Significance testing or cross validation? | Simply using significance tests and a stepwise procedure to perform model selection can lead you to believe that you have a very strong model with significant predictors when you, in fact, do not; you | Significance testing or cross validation?
Simply using significance tests and a stepwise procedure to perform model selection can lead you to believe that you have a very strong model with significant predictors when you, in fact, do not; you may get strong correlations by chance and these correlations can seemingly be... | Significance testing or cross validation?
Simply using significance tests and a stepwise procedure to perform model selection can lead you to believe that you have a very strong model with significant predictors when you, in fact, do not; you |
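The failure mode described here, chance correlations that stepwise selection mistakes for signal, is easy to simulate. In the Python sketch below every predictor is pure noise, and the sizes and seed are arbitrary choices of mine:

```python
import math
import random

def corr(xs, ys):
    # Pearson correlation, written out by hand
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

random.seed(7)
n_obs, n_preds = 100, 50
y = [random.gauss(0, 1) for _ in range(n_obs)]
preds = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_preds)]

# stepwise selection in miniature: keep the predictor with the best in-sample fit
best = max(preds, key=lambda x: abs(corr(x, y)))
in_sample = abs(corr(best, y))

# the same predictor against a fresh response from the same pure-noise process
y_new = [random.gauss(0, 1) for _ in range(n_obs)]
out_of_sample = abs(corr(best, y_new))
print(in_sample, out_of_sample)  # the in-sample "signal" does not replicate
```

This is the situation cross-validation (or any held-out evaluation) is designed to expose.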
13,775 | Is there ever a reason to solve a regression problem as a classification problem? | In line with @delaney's reply: I have not seen and I'm unable to imagine a reason for doing so.
Borrowing from the discussion in https://github.com/scikit-learn/scikit-learn/issues/15850#issuecomment-896285461 :
One loses information by binning the response. Why would one want to do that in the first place (except dat... | Is there ever a reason to solve a regression problem as a classification problem? | In line with @delaney's reply: I have not seen and I'm unable to imagine a reason for doing so.
Borrowing from the discussion in https://github.com/scikit-learn/scikit-learn/issues/15850#issuecomment- | Is there ever a reason to solve a regression problem as a classification problem?
In line with @delaney's reply: I have not seen and I'm unable to imagine a reason for doing so.
Borrowing from the discussion in https://github.com/scikit-learn/scikit-learn/issues/15850#issuecomment-896285461 :
One loses information by ... | Is there ever a reason to solve a regression problem as a classification problem?
In line with @delaney's reply: I have not seen and I'm unable to imagine a reason for doing so.
Borrowing from the discussion in https://github.com/scikit-learn/scikit-learn/issues/15850#issuecomment- |
13,776 | Is there ever a reason to solve a regression problem as a classification problem? | In general, there is no good reason. Grouping the data as you describe means that some information is being thrown away, and that can't be a good thing.
The reason you see people do this is probably out of practical convenience. Libraries for classification might be more common and easily accessible, and they also auto... | Is there ever a reason to solve a regression problem as a classification problem? | In general, there is no good reason. Grouping the data as you describe means that some information is being thrown away, and that can't be a good thing.
The reason you see people do this is probably o | Is there ever a reason to solve a regression problem as a classification problem?
In general, there is no good reason. Grouping the data as you describe means that some information is being thrown away, and that can't be a good thing.
The reason you see people do this is probably out of practical convenience. Libraries... | Is there ever a reason to solve a regression problem as a classification problem?
In general, there is no good reason. Grouping the data as you describe means that some information is being thrown away, and that can't be a good thing.
The reason you see people do this is probably o |
13,777 | Is there ever a reason to solve a regression problem as a classification problem? | In addition to the good answers by users J. Delaney and Soeren Soerensen: One motivation for doing this might be that they think the response will not work well with a linear model, that its expectation is badly modeled as a linear function of the predictors. But then there are better alternatives, like response transf... | Is there ever a reason to solve a regression problem as a classification problem? | In addition to the good answers by users J. Delaney and Soeren Soerensen: One motivation for doing this might be that they think the response will not work well with a linear model, that its expectati | Is there ever a reason to solve a regression problem as a classification problem?
In addition to the good answers by users J. Delaney and Soeren Soerensen: One motivation for doing this might be that they think the response will not work well with a linear model, that its expectation is badly modeled as a linear functi... | Is there ever a reason to solve a regression problem as a classification problem?
In addition to the good answers by users J. Delaney and Soeren Soerensen: One motivation for doing this might be that they think the response will not work well with a linear model, that its expectati |
13,778 | Is there ever a reason to solve a regression problem as a classification problem? | One counter-example that I see often:
Outcomes that are proportions (e.g. 10% = 2/20, 20% = 1/5, etc.) should not get dumped through OLS; instead use a logistic regression with the denominator specified. This will weight the cases correctly even though they have different variances.
OTOH, logistic regression is a proper re... | Is there ever a reason to solve a regression problem as a classification problem? | One counter-example that I see often:
Outcomes that are proportions (eg 10% = 2/20, 20%= 1/5, etc) should not get dumped through OLS, instead use a logistic regression with the denominator specified. | Is there ever a reason to solve a regression problem as a classification problem?
One counter-example that I see often:
Outcomes that are proportions (eg 10% = 2/20, 20%= 1/5, etc) should not get dumped through OLS, instead use a logistic regression with the denominator specified. This will weight the cases correctly e... | Is there ever a reason to solve a regression problem as a classification problem?
One counter-example that I see often:
Outcomes that are proportions (eg 10% = 2/20, 20%= 1/5, etc) should not get dumped through OLS, instead use a logistic regression with the denominator specified. |
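A tiny numeric illustration of why the denominator matters, using the proportions from this answer; the pooled estimate below is a sketch of the binomial weighting idea, not a full logistic regression fit:

```python
# Two observed proportions with different denominators: 2/20 and 1/5
counts = [(2, 20), (1, 5)]

# Averaging the raw proportions ignores how precise each one is...
naive = sum(k / n for k, n in counts) / len(counts)

# ...whereas pooling successes over trials (what the binomial likelihood
# behind logistic regression effectively does) weights by the denominator
pooled = sum(k for k, _ in counts) / sum(n for _, n in counts)
print(naive, pooled)  # roughly 0.15 vs 0.12
```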
13,779 | Is there ever a reason to solve a regression problem as a classification problem? | I found this a very interesting question and I struggled to think of scenarios where binning a response variable would lead to better predictions.
The best I could come up with is a scenario like this one (all code is attached at the end), where the red class corresponds to $y \leq 1$ and the blue class to $y>1$ and we... | Is there ever a reason to solve a regression problem as a classification problem? | I found this a very interesting question and I struggled to think of scenarios where binning a response variable would lead to better predictions.
The best I could come up with is a scenario like this | Is there ever a reason to solve a regression problem as a classification problem?
I found this a very interesting question and I struggled to think of scenarios where binning a response variable would lead to better predictions.
The best I could come up with is a scenario like this one (all code is attached at the end)... | Is there ever a reason to solve a regression problem as a classification problem?
I found this a very interesting question and I struggled to think of scenarios where binning a response variable would lead to better predictions.
The best I could come up with is a scenario like this |
13,780 | Is there ever a reason to solve a regression problem as a classification problem? | Bayesian regression does something like this on a continuous scale.
To each value of the parameter a probability is assigned indicating how likely it is that the parameter has that value.
For instance, for each value of sales (a continuum of classes) a probability is assigned predicting how likely that sales value/class is.
H... | Is there ever a reason to solve a regression problem as a classification problem? | Bayesian regression does something like this on a continuous scale.
To each value of the parameter a probability is assigned indicating how likely it is that the parameter has that value.
For instance, for each | Is there ever a reason to solve a regression problem as a classification problem?
Bayesian regression does something like this on a continuous scale.
To each value of the parameter a probability is assigned indicating how likely it is that the parameter has that value.
For instance, for each value of sales (a continuum of classe... | Is there ever a reason to solve a regression problem as a classification problem?
Bayesian regression does something like this on a continuous scale.
To each value of the parameter a probability is assigned indicating how likely it is that the parameter has that value.
For instance, for each |
13,781 | Is there ever a reason to solve a regression problem as a classification problem? | You can discretize the regression problem, for example into the classification of having an illness as "yes" or "no", thereby making it possible to read the probabilities of each class (yes/no) from an ML classification model.
You might have perhaps ten different intensities of this illness and you know the thresholds for... | Is there ever a reason to solve a regression problem as a classification problem? | You can discretize the regression problem for example into the classification of having an illness "yes" and "no", by this making it possible to read the probabilities of each class (yes/no) from an M | Is there ever a reason to solve a regression problem as a classification problem?
You can discretize the regression problem for example into the classification of having an illness "yes" and "no", by this making it possible to read the probabilities of each class (yes/no) from an ML classification model.
You might have... | Is there ever a reason to solve a regression problem as a classification problem?
You can discretize the regression problem for example into the classification of having an illness "yes" and "no", by this making it possible to read the probabilities of each class (yes/no) from an M |
13,782 | Is there ever a reason to solve a regression problem as a classification problem? | I actually do this quite often, in general because the data may work for regression, but the scenario isn't necessarily a regression problem even if it could be. Here's a common scenario:
Let's pretend you're a data scientist at a company and they say to you that they want to forecast monthly sales. They hand you a bunc...
13,783 | Variability in cv.glmnet results | The point here is that in cv.glmnet the K folds ("parts") are picked randomly.
In K-fold cross validation the dataset is divided into $K$ parts, and $K-1$ parts are used to predict the $K$-th part (this is done $K$ times, using a different part each time). This is done for all the lambdas, and the lambda.min is the on...
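The mechanism behind this run-to-run variability can be reproduced outside glmnet. Below is a small Python sketch that substitutes closed-form ridge regression for the elastic net (an assumption made purely to keep the code self-contained): the only thing that changes between runs is the random fold assignment, yet the selected penalty (the analogue of lambda.min) can differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 10
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0, 0.5] + [0.0] * (p - 3))
y = X @ beta + rng.normal(size=n)

lambdas = np.logspace(-3, 2, 30)  # candidate penalty values

def cv_best_lambda(seed, k=10):
    # Random fold assignment: each seed yields a different partition.
    fold = np.random.default_rng(seed).permutation(np.repeat(np.arange(k), n // k))
    mse = np.empty(len(lambdas))
    for j, lam in enumerate(lambdas):
        errs = []
        for f in range(k):
            tr, te = fold != f, fold == f
            # Closed-form ridge fit on the K-1 training parts.
            b = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(p), X[tr].T @ y[tr])
            errs.append(np.mean((y[te] - X[te] @ b) ** 2))
        mse[j] = np.mean(errs)          # analogue of fit$cvm
    return lambdas[np.argmin(mse)]      # analogue of lambda.min

picks = {cv_best_lambda(s) for s in range(5)}
print(picks)  # may contain several distinct values purely from fold randomness
```

Fixing the fold assignment (as cv.glmnet's foldid argument does, per a later answer) removes this source of variability.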
13,784 | Variability in cv.glmnet results | Lately I faced the same problem. I tried repeating the CV many times, like 100, 200, 1000 on my data set trying to find the best $\lambda$ and $\alpha$ (I'm using an elastic net). But even if I create 3 CV tests each with 1000 iterations averaging the min MSEs for each $\alpha$, I get 3 different best ($\lambda$, $\alph...
13,785 | Variability in cv.glmnet results | I'll add another solution, which handles the bug in @Alice's answer due to missing lambdas, but doesn't require extra packages like @Max Ghenis. Thanks are owed to all the other answers - everyone makes useful points!

library(glmnet)   # provides cv.glmnet

lambdas = NULL
for (i in 1:n)    # repeat cross-validation n times
{
  fit <- cv.glmnet(xs, ys)
  # collect this run's lambda sequence and mean cross-validated error
  errors = data.frame(fit$lambda, fit$cvm)
  ...
13,786 | Variability in cv.glmnet results | Alice's answer works well in most cases, but sometimes errors out because cv.glmnet$lambda can return results of different lengths, e.g.:
Error in `rownames<-`(`*tmp*`, value = c(0.135739830284452, 0.12368107787663, : length of 'dimnames' [1] not equal to array extent.
OptimLambda below should work in the general...
13,787 | Variability in cv.glmnet results | You can control the randomness if you explicitly set foldid. Here is an example for 5-fold CV:

library(caret)
set.seed(284)
cvfold <- 5   # 5-fold CV
flds <- createFolds(responseDiffs, k = cvfold, list = TRUE, returnTrain = FALSE)
# build a fold-id vector: entry i gives the fold of observation i
foldids = rep(1, length(responseDiffs))
foldids[flds$Fold2] = 2
foldids[flds$Fold3] = 3
foldids[flds$Fold4] = 4
foldids[...
13,788 | How to assess skewness from a boxplot? | One measure of skewness is based on mean-median - Pearson's second skewness coefficient.
Another measure of skewness is based on the relative quartile differences (Q3-Q2) vs (Q2-Q1) expressed as a ratio
When (Q3-Q2) vs (Q2-Q1) is instead expressed as a difference (or equivalently midhinge-median), that must be scaled t...
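Both measures mentioned in this answer are easy to compute directly from a sample; here is a short Python sketch (the quartile-based ratio is Bowley's quartile skewness, and the exponential sample is just an illustrative right-skewed example):

```python
import numpy as np

def pearson_second_skewness(x):
    """Pearson's second skewness coefficient: 3 * (mean - median) / std."""
    x = np.asarray(x, dtype=float)
    return 3.0 * (x.mean() - np.median(x)) / x.std(ddof=1)

def quartile_skewness(x):
    """Bowley's quartile skewness: ((Q3-Q2) - (Q2-Q1)) / (Q3 - Q1)."""
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return ((q3 - q2) - (q2 - q1)) / (q3 - q1)

rng = np.random.default_rng(0)
right_skewed = rng.exponential(size=2000)      # positive skew expected
print(pearson_second_skewness(right_skewed))   # positive for right skew
print(quartile_skewness(right_skewed))         # positive for right skew
```

Since the quartile version uses only Q1, Q2, and Q3, it is exactly the quantity one can read off a boxplot.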
13,789 | How to assess skewness from a boxplot? | No, you did not miss anything: you are actually seeing beyond the simplistic summaries that were presented. These data are both positively and negatively skewed (in the sense of "skewness" suggesting some form of asymmetry in the data distribution).
John Tukey described a systematic way to explore asymmetry in batches...
13,790 | How to assess skewness from a boxplot? | The mean being less than or greater than the median is a shortcut that often works for determining the direction of skew so long as there are no outliers. In this case, the distribution is negatively skewed but the mean is larger than the median due to the outlier.
13,791 | Mann-Whitney U test with unequal sample sizes | Yes, the Mann-Whitney test works fine with unequal sample sizes.
13,792 | Mann-Whitney U test with unequal sample sizes | @HarveyMotulsky is right, you can use the Mann-Whitney U-test with unequal sample sizes. Note however, that your statistical power (i.e., the ability to detect a difference that really is there) will diminish as the group sizes become more unequal. For an example, I have a simulation (actually of a t-test, but the pr...
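To see concretely that nothing in the statistic requires equal group sizes, here is a hand-rolled Python sketch of the Mann-Whitney U statistic with a normal-approximation p-value (no tie correction; this is an illustration, not scipy's implementation, and the two simulated groups are hypothetical):

```python
import numpy as np
from math import erf, sqrt

def mann_whitney_u(x, y):
    """Mann-Whitney U with a normal-approximation two-sided p-value.

    Hand-rolled sketch with no tie handling; group sizes may differ freely.
    """
    n1, n2 = len(x), len(y)
    both = np.concatenate([x, y])
    ranks = both.argsort().argsort() + 1.0   # 1-based ranks, assuming no ties
    u1 = ranks[:n1].sum() - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0                        # mean of U under H0
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0) # sd of U under H0
    z = (u1 - mu) / sd
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return u1, p

rng = np.random.default_rng(0)
small = rng.normal(0.0, 1.0, size=12)    # n1 = 12
large = rng.normal(1.0, 1.0, size=40)    # n2 = 40 -- unequal sizes are fine
u, p = mann_whitney_u(small, large)
print(u, p)
```

The null mean and variance of U depend on n1 and n2 separately, which is exactly why unequal sizes pose no problem, even though power suffers as the imbalance grows.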
13,793 | How is the Bayesian framework better in interpretation when we usually use uninformative or subjective priors? | To give a narrower response than the excellent ones that have already been posted, and to focus on the advantage in interpretation: the Bayesian interpretation of, e.g., a "95% credible interval" is that the probability that the true parameter value lies within the interval equals 95%. One of the two common frequent...
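A concrete (entirely hypothetical) illustration of that reading: with a uniform Beta(1, 1) prior on a success probability and 7 successes in 20 trials, the posterior is Beta(8, 14), and a 95% credible interval can be read off its quantiles by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative posterior: 7 successes in 20 trials with a uniform
# Beta(1, 1) prior gives Beta(1 + 7, 1 + 13) = Beta(8, 14).
draws = rng.beta(8, 14, size=100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
# Bayesian reading: P(lo < theta < hi | data) = 0.95 -- a direct
# probability statement about the parameter itself.
```

No such direct probability statement about the parameter is available for a frequentist confidence interval, which is the interpretive advantage this answer points at.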
13,794 | How is the Bayesian framework better in interpretation when we usually use uninformative or subjective priors? | In my opinion, the reason that Bayesian statistics are "better" for interpretation is nothing to do with the priors, but is due to the definition of a probability. The Bayesian definition (the relative plausibility of the truth of some proposition) is more closely in accord with our everyday usage of the word than is t...
13,795 | How is the Bayesian framework better in interpretation when we usually use uninformative or subjective priors? | The Bayesian framework has a big advantage over the frequentist one because it does not depend on having a "crystal ball" in terms of knowing the correct distributional assumptions to make. Bayesian methods depend on using what information you have, and knowing how to encode that information into a probability distribution.
U...
13,796 | How is the Bayesian framework better in interpretation when we usually use uninformative or subjective priors? | I have typically seen the uniform prior used in either "instructive" type examples, or in cases in which truly nothing is known about a particular hyperparameter. Typically, I see uninformed priors that provide little information about what the solution will be, but which encode mathematically what a good solution prob...
13,797 | What is the intuitive meaning behind a random variable being defined as a "lattice"? | It means that $X$ is discrete, and there is some kind of regular spacing to its distribution; that is, the probability mass is concentrated on a finite/countable set of points $\{d, 2d, 3d, \dots\}$.
Note that not all discrete distributions are lattices. E.g. if $X$ can take on the values $\{1, e, \pi, 5\}$, this is not a ...
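As a sketch of the idea, the span $d$ of a candidate lattice support can be computed exactly for rational-valued points (the helper below is hypothetical, written only to illustrate the definition; irrational points like $e$ and $\pi$ have no exact rational form, which is why $\{1, e, \pi, 5\}$ fails to be a lattice):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def lattice_span(support):
    """Return the span d if every support point is an integer multiple of d.

    Works on exact rational supports only; returns None when all points
    are zero.
    """
    pts = [Fraction(p) for p in support]
    # common denominator (lcm of denominators), then gcd of the numerators
    lcm = reduce(lambda a, b: a * b // gcd(a, b), [p.denominator for p in pts], 1)
    g = reduce(gcd, [abs(int(p * lcm)) for p in pts])
    return Fraction(g, lcm) if g else None

print(lattice_span([Fraction(1, 2), Fraction(3, 2), Fraction(5, 2)]))  # 1/2
print(lattice_span([2, 4, 10]))                                        # 2
```

The span plays the role of $d$ in the set $\{d, 2d, 3d, \dots\}$ from the answer above.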
13,798 | What is the intuitive meaning behind a random variable being defined as a "lattice"? | This terminology connects the random variable with concepts of group theory used to study geometric symmetries. You might therefore enjoy seeing the more general connection, which will illuminate the meaning and potential applications of lattice random variables.
Background
In mathematics, a "lattice" $\mathcal{L}$ is...
13,799 | Big disagreement in the slope estimate when groups are treated as random vs. fixed in a mixed model | There are several things going on here. These are interesting issues, but it will take a fair amount of time/space to explain it all.
First of all, this all becomes a lot easier to understand if we plot the data. Here is a scatter plot where the data points are colored by group. Additionally, we have a separate group-s...
13,800 | Big disagreement in the slope estimate when groups are treated as random vs. fixed in a mixed model | After considerable contemplation, I believe I have discovered my own answer. I believe an econometrician would define my independent variable to be endogenous, and thus correlated with both the independent and the dependent variables. In this case, those variables are omitted or unobserved. However, I do observe the grou...
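The gap between a pooled (between-group-dominated) slope and the within-group slope that drives this kind of disagreement can be simulated in a few lines. The numbers below are invented for illustration: group means of x and y trend upward together while the slope inside each group is -1.

```python
import numpy as np

rng = np.random.default_rng(0)
k, per = 5, 40                      # 5 groups, 40 observations each
gx = np.arange(k) * 2.0             # group means of x trend upward...
gy = 3.0 * gx                       # ...and group means of y rise with them
x = np.concatenate([m + rng.normal(size=per) for m in gx])
group = np.repeat(np.arange(k), per)
# Within each group the slope is -1 (opposite to the between-group trend).
y = gy[group] - 1.0 * (x - gx[group]) + rng.normal(scale=0.3, size=k * per)

# Pooled OLS slope: ignores grouping, dominated by the between-group trend.
pooled = np.polyfit(x, y, 1)[0]

# Within-group slope: demean x and y by group (equivalent to including
# group fixed effects), which isolates the within-group relationship.
xm = np.array([x[group == g].mean() for g in range(k)])
ym = np.array([y[group == g].mean() for g in range(k)])
within = np.polyfit(x - xm[group], y - ym[group], 1)[0]

print(pooled, within)
```

A random-intercept model's slope estimate falls between these two extremes (it partially pools the between- and within-group information), which is one way the fixed- and random-effects estimates can end up far apart.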