idx int64 1 56k | question stringlengths 15 155 | answer stringlengths 2 29.2k ⌀
12,001 | How to tell if my data distribution is symmetric?

No doubt you have been told otherwise, but mean $=$ median does not imply symmetry.

There's a measure of skewness based on mean minus median (the second Pearson skewness), but it can be 0 when the distribution is not symmetric (like any of the common skewness measures). Similarly, the relationship between mean and medi...
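The claim that mean $=$ median does not guarantee symmetry is easy to check numerically. The sketch below (pure Python; the dataset is an invented illustration, not from the answer) builds a sample whose mean and median coincide, yet whose third central moment — the basis of the usual skewness measure — is clearly nonzero:

```python
import statistics

# An asymmetric sample deliberately built so that mean == median == 0.
data = [-3, 0, 0, 1, 2]

mean = statistics.mean(data)       # 0
median = statistics.median(data)   # 0
# Third central moment: nonzero here, so the sample is skewed
# even though mean and median agree.
m3 = sum((x - mean) ** 3 for x in data) / len(data)   # -3.6

print(mean, median, m3)
```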
12,002 | How to tell if my data distribution is symmetric?

The easiest thing is to compute the sample skewness. There's a function in Minitab for that. Symmetrical distributions have zero skewness; zero skewness doesn't necessarily mean symmetrical, but in most practical cases it would.

As @NickCox noted, there's more than one definition of skewness. I use the one tha...
12,003 | How to tell if my data distribution is symmetric?

Center your data around zero by subtracting off the sample mean. Now split your data into two parts, the negative and the positive. Take the absolute value of the negative data points. Now do a two-sample Kolmogorov-Smirnov test by comparing the two partitions to each other. Make your conclusion based on the p-value.
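The recipe above can be sketched in pure Python. The two-sample Kolmogorov–Smirnov statistic is computed by hand here to stay dependency-free (`ks_statistic` and `symmetry_statistic` are hypothetical helpers; a real analysis would use a library routine such as `scipy.stats.ks_2samp`, which also supplies the p-value the answer relies on):

```python
import statistics

def ks_statistic(a, b):
    """Two-sample KS statistic: the largest gap between the two empirical CDFs."""
    points = sorted(set(a) | set(b))
    ecdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

def symmetry_statistic(data):
    """Center the data, split by sign, and compare |negatives| with positives."""
    m = statistics.mean(data)
    centered = [x - m for x in data]
    neg = [-x for x in centered if x < 0]
    pos = [x for x in centered if x > 0]
    return ks_statistic(neg, pos)

# A perfectly symmetric sample: the two halves have identical ECDFs.
print(symmetry_statistic([-2.0, -1.0, 1.0, 2.0]))  # 0.0
```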
12,004 | How to tell if my data distribution is symmetric?

Put your observations sorted in increasing order in one column, then put them sorted in decreasing order in another column.

Then compute the correlation coefficient (call it Rm) between these two columns.

Compute the chiral index: CHI = (1 + Rm)/2.

CHI takes values in the interval [0..1].

CHI is null IF and ONLY IF your...
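The procedure translates directly into code. A minimal Python sketch (the Pearson correlation is computed by hand so the example is stdlib-only):

```python
import math

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def chiral_index(data):
    asc = sorted(data)                  # column 1: increasing
    desc = sorted(data, reverse=True)   # column 2: decreasing
    rm = pearson(asc, desc)
    return (1 + rm) / 2

# A symmetric sample: Rm = -1, so CHI is 0 (up to floating-point error).
print(chiral_index([-2, -1, 0, 1, 2]))
```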
12,005 | difference between R square and rmse in linear regression [duplicate]

Assume that you have $n$ observations $y_i$ and an estimator that estimates the values $\hat{y}_i$.

The mean squared error is $MSE=\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2$; the root mean squared error is its square root, thus $RMSE=\sqrt{MSE}$.

The $R^2$ is equal to $R^2=1-\frac{SSE}{TSS}$ where $SS...
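Both quantities as defined above fit in a few lines of Python (stdlib only; `y` and `y_hat` are made-up numbers for illustration):

```python
import math

def rmse(y, y_hat):
    mse = sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)
    return math.sqrt(mse)

def r_squared(y, y_hat):
    mean_y = sum(y) / len(y)
    sse = sum((a - b) ** 2 for a, b in zip(y, y_hat))   # residual sum of squares
    tss = sum((a - mean_y) ** 2 for a in y)             # total sum of squares
    return 1 - sse / tss

y = [1.0, 2.0, 3.0]
y_hat = [1.1, 1.9, 3.2]
print(rmse(y, y_hat), r_squared(y, y_hat))
```

A perfect fit gives RMSE = 0 and $R^2 = 1$; unlike $R^2$, the RMSE is in the units of $y$ itself.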
12,006 | difference between R square and rmse in linear regression [duplicate]

Both indicate the goodness of the fit.

R-squared is conveniently scaled between 0 and 1, whereas RMSE is not scaled to any particular values. This can be good or bad; obviously R-squared can be more easily interpreted, but with RMSE we explicitly know how much our predictions deviate, on average, from the actual value...
12,007 | How to setup xreg argument in auto.arima() in R? [closed]

The main problem is that your xreg is not a matrix. I think the following code does what you want. I've used some artificial data to check that it works.

library(forecast)
# create some artificial data
modelfitsample <- data.frame(Customer_Visit=rpois(49,3000),Weekday=rep(1:7,7),
                             Christmas=c...
12,008 | Low classification accuracy, what to do next?

First of all, if your classifier doesn't do better than a random choice, there is a risk that there simply is no connection between features and class. A good question to ask yourself in such a position is whether you or a domain expert could infer the class (with an accuracy greater than a random classifier) based on...
12,009 | Low classification accuracy, what to do next?

I would suggest taking a step back and doing some exploratory data analysis prior to attempting classification.

It is worth examining your features on an individual basis to see if there is any relationship with the outcome of interest - it may be that the features you have do not have any association with the class label...
12,010 | Low classification accuracy, what to do next?

Why not follow the principle "look at plots of the data first"? One thing you can do is a 2-D scatterplot of the two class conditional densities for two covariates. If you look at these and see practically no separation, that could indicate a lack of predictability; you can do this with all the pairs of covariates. T...
12,011 | Low classification accuracy, what to do next?

It's good that you separated your data into the training data and test data.

Did your training error go down when you trained? If not, then you may have a bug in your training algorithm. You expect the error on your test set to be greater than the error on your training set, so if you have an unacceptably high error on...
12,012 | Is AR(1) a Markov process?

The following result holds: if $\epsilon_1, \epsilon_2, \ldots$ are independent taking values in $E$ and $f_1, f_2, \ldots$ are functions $f_n: F \times E \to F$, then with $X_n$ defined recursively as
$$X_n = f_n(X_{n-1}, \epsilon_n), \quad X_0 = x_0 \in F$$
the process $(X_n)_{n \geq 0}$ in $F$ is a Markov process st...
12,013 | Is AR(1) a Markov process?

A process $X_{t}$ is an AR(1) process if
$$X_{t} = c + \varphi X_{t-1} + \varepsilon_{t}$$
where the errors $\varepsilon_{t}$ are iid. A process has the Markov property if
$$P(X_{t} = x_t | {\rm entire \ history \ of \ the \ process }) = P(X_{t}=x_t| X_{t-1}=x_{t-1})$$
From the first equation, the probability distr...
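The Markov property is visible directly in code: to generate $X_t$ you only ever touch $X_{t-1}$, never the earlier history. A minimal simulation sketch (the parameter values are arbitrary illustrations, not from the answer):

```python
import random

def simulate_ar1(c, phi, x0, n, rng):
    """Simulate X_t = c + phi * X_{t-1} + eps_t with iid Gaussian errors."""
    xs = [x0]
    for _ in range(n):
        eps = rng.gauss(0.0, 1.0)
        xs.append(c + phi * xs[-1] + eps)  # depends on the last value only
    return xs

path = simulate_ar1(c=1.0, phi=0.5, x0=0.0, n=5, rng=random.Random(0))
print(path)
```

With the noise switched off, the recursion is fully deterministic given the previous value, which is exactly the first-order dependence the Markov property describes.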
12,014 | Is AR(1) a Markov process?

What is a Markov process? (Loosely speaking) a stochastic process is a first-order Markov process if the condition
$$P\left[ X(t) = x(t) \,|\, X(0) = x(0), \ldots, X(t-1) = x(t-1) \right] = P\left[ X(t) = x(t) \,|\, ...
12,015 | How can I calculate the confidence interval of a mean in a non-normally distributed sample?

First of all, I would check whether the mean is an appropriate index for the task at hand. If you are looking for "a typical or central value" of a skewed distribution, the mean might point you to a rather non-representative value. Consider the log-normal distribution:

x <- rlnorm(1000)
plot(density(x), xlim=c(0, 10))...
12,016 | How can I calculate the confidence interval of a mean in a non-normally distributed sample?

If you are open to a semi-parametric solution, here's one: Johnson, N. (1978). Modified t Tests and Confidence Intervals for Asymmetrical Populations. JASA. The center of the confidence interval is shifted by $\hat\kappa/(6s^2n)$, where $\hat\kappa$ is the estimate of the population third moment, and the width stays the...
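Taking the shift formula at face value, a stdlib-only Python sketch looks as follows. The 1.96 critical value for an approximate 95% interval is my assumption (Johnson's exact construction uses the t distribution), and `johnson_interval` is a hypothetical helper name:

```python
import math
import statistics

def johnson_interval(data, z=1.96):
    n = len(data)
    xbar = statistics.mean(data)
    s2 = statistics.variance(data)                      # sample variance s^2
    kappa = sum((x - xbar) ** 3 for x in data) / n      # third central moment estimate
    shift = kappa / (6 * s2 * n)                        # center shift from the paper
    half = z * math.sqrt(s2 / n)                        # usual half-width
    center = xbar + shift
    return center - half, center + half

# For a symmetric sample kappa is 0, so we recover the ordinary interval.
lo, hi = johnson_interval([-2.0, -1.0, 0.0, 1.0, 2.0])
print(lo, hi)
```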
12,017 | How can I calculate the confidence interval of a mean in a non-normally distributed sample?

Try a log-normal distribution, calculating:

1. Logarithm of the data;
2. Mean and standard deviation of (1);
3. Confidence interval corresponding to (2);
4. Exponential of (3).

You'll end up with an asymmetric confidence interval around the expected value (which is not the mean of the raw data).
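The four steps can be sketched in Python (stdlib only; the 1.96 normal critical value is an assumption for a 95% level, since the answer doesn't fix a confidence level):

```python
import math
import statistics

def lognormal_ci(data, z=1.96):
    logs = [math.log(x) for x in data]             # step 1: log the data
    m = statistics.mean(logs)                      # step 2: mean and sd of the logs
    s = statistics.stdev(logs)
    half = z * s / math.sqrt(len(logs))            # step 3: CI on the log scale
    return math.exp(m - half), math.exp(m + half)  # step 4: back-transform

lo, hi = lognormal_ci([1.2, 0.8, 3.5, 0.9, 2.2, 1.1])
print(lo, hi)
```

Because the exponential is applied last, the interval is symmetric on the log scale but asymmetric on the original scale, as the answer notes.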
12,018 | James-Stein estimator: How did Efron and Morris calculate $\sigma^2$ in shrinkage factor for their baseball example?

The parameter $\sigma^2$ is the (unknown) common variance of the vector components, each of which we assume are normally distributed. For the baseball data we have $45 \cdot Y_i \sim \mathsf{binom}(45,p_i)$, so the normal approximation to the binomial distribution gives (taking $\hat{p_{i}} = Y_{i}$)
$$\hat{p}_{i}\...
12,019 | James-Stein estimator: How did Efron and Morris calculate $\sigma^2$ in shrinkage factor for their baseball example?

I am not quite sure about the $c = 0.212$, but the following article provides a much more detailed description of these data:

Efron, B., & Morris, C. (1975). Data analysis using Stein's estimator and its generalizations. Journal of the American Statistical Association, 70(350), 311-319 (link to pdf)

or more detailed

Ef...
12,020 | Is a decision stump a linear model?

No, unless you transform the data.

It is a linear model if you transform $x$ using the indicator function:
$$
x' = \mathbb I \left(\{x>2\}\right) = \begin{cases}\begin{align} 0 \quad &x\leq 2\\ 1 \quad &x>2 \end{align}\end{cases}
$$
Then $f(x) = 2x' + 3 = \left(\matrix{3 \\2}\right)^T \left(\matrix{1 \\x'}\right)$

Edit: th...
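The transformation in this answer is easy to verify numerically. A sketch, assuming the stump in question predicts 3 for $x \le 2$ and 5 for $x > 2$ (consistent with $f(x) = 2x' + 3$ above):

```python
def stump(x):
    # the decision stump: a single threshold at x = 2
    return 5 if x > 2 else 3

def linear_in_transformed(x):
    x_prime = 1 if x > 2 else 0   # indicator feature x' = I(x > 2)
    return 2 * x_prime + 3        # affine, hence linear, in x'

# The stump and the linear model in x' agree everywhere.
checks = [-1, 0, 2, 2.5, 10]
print(all(stump(x) == linear_in_transformed(x) for x in checks))  # True
```

The model is linear in the *transformed* feature $x'$, not in the original $x$ — which is exactly the answer's point.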
12,021 | Is a decision stump a linear model?

Answers to your questions:

1. A decision stump is not a linear model.
2. The decision boundary can be a line, even if the model is not linear. Logistic regression is an example.
3. The boosted model does not have to be the same kind of model as the base learner. If you think about it, your example of boosting, plus the questio...
12,022 | Is a decision stump a linear model?

This answer is more verbose than is needed to just answer the question. I hope to provoke some comments from real experts.

I once was in a courtroom and the judge asked (for good reason in context), if we call a dog's tail a leg, does that mean a dog has 5 legs? So what is a linear model?

In the context of sta...
12,023 | Constructing a discrete r.v. having as support all the rationals in $[0,1]$

Consider the discrete distribution $F$ with support on the set $\{(p,q)\,|\, q \ge p \ge 1\}\subset \mathbb{N}^2$ with probability masses
$$F(p,q) = \frac{3}{2^{1+p+q}}.$$
This is easily summed (all series involved are geometric) to demonstrate it really is a distribution (the total probability is unity).

For any nonze...
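The "easily summed" claim can be confirmed numerically: truncating the double geometric series at a modest depth already gives total mass 1 to machine precision. A quick Python check (the truncation depth of 40 is an arbitrary choice; the tail beyond it is of order $2^{-40}$):

```python
def total_mass(depth=40):
    # sum of F(p, q) = 3 / 2**(1 + p + q) over q >= p >= 1, truncated at `depth`
    return sum(3 / 2 ** (1 + p + q)
               for p in range(1, depth + 1)
               for q in range(p, depth + 1))

print(total_mass())  # very close to 1.0
```

Analytically, summing the inner geometric series over $q \ge p$ gives $\sum_p 3 \cdot 4^{-p} = 3 \cdot \tfrac{1}{3} = 1$, matching the answer.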
12,024 | Constructing a discrete r.v. having as support all the rationals in $[0,1]$

I'll put my comments together and post them as an answer just for clarity. I expect you won't be very satisfied, however, as all I do is reduce your problem to another problem.

My notation:

$Q$ is a RV whose support is $\mathbb{Q}\cap\left[0,1\right]$ -- my $Q$ is not the same as the $Q$ the OP constructs from his $\fr...
12,025 | Constructing a discrete r.v. having as support all the rationals in $[0,1]$

One obvious way to construct discrete distributions on the set $\mathbb{Q}_* \equiv \mathbb{Q} \cap [0,1]$ is to do so via sequences of values on that set that cover that set. Suppose you have some arbitrary surjective function $H: \mathbb{N} \rightarrow \mathbb{Q}_*$, which can be interpreted as a sequence on $\mathb...
12,026 | How to calculate cumulative distribution in R? | The ecdf function applied to a data sample returns a function representing the empirical cumulative distribution function. For example:
> X = rnorm(100) # X is a sample of 100 normally distributed random variables
> P = ecdf(X)   # P is a function giving the empirical CDF of X
> P(0.0)        # This returns the empir...
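For readers working outside R, the same idea can be sketched in Python. This is a hypothetical stdlib-only analogue of R's `ecdf()`, not part of the answers above:

```python
import bisect

def ecdf(sample):
    """Return the empirical CDF of `sample` as a callable, like R's ecdf()."""
    xs = sorted(sample)
    n = len(xs)
    # F(x) = fraction of sample values <= x
    return lambda x: bisect.bisect_right(xs, x) / n

F = ecdf([3, 1, 4, 1, 5])
# F(1) -> 0.4, F(4) -> 0.8, F(0) -> 0.0, F(5) -> 1.0
```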
12,027 | How to calculate cumulative distribution in R? | What you appear to need is this, to get the accumulated distribution (the probability of getting a value <= x in a sample): ecdf returns a function, but it appears to be made for plotting, and so the argument of that function, if it were a stair, would be the index of the tread.
You can use this:
acumulated.distrib= fu...
12,028 | How to calculate cumulative distribution in R? | I always found ecdf() to be a little confusing. Plus I think it only works in the univariate case. Ended up rolling my own function for this instead.
First install data.table. Then install my package, mltools (or just copy the empirical_cdf() method into your R environment.)
Then it's as easy as
# load packages
library...
12,029 | How to calculate cumulative distribution in R? | Friend, you can read the code on this blog.
sample.data = read.table ('data.txt', header = TRUE, sep = "\t")
cdf <- ggplot (data=sample.data, aes(x=Delay, group =Type, color = Type)) + stat_ecdf()
cdf
More details can be found at the following link:
r cdf and histogram
12,030 | Poisson regression vs. log-count least-squares regression? | On the one hand, in a Poisson regression, the left-hand side of the model equation is the logarithm of the expected count: $\log(E[Y|x])$.
On the other hand, in a "standard" linear model, the left-hand side is the expected value of the normal response variable: $E[Y|x]$. In particular, the link function is the identity...
12,031 | Poisson regression vs. log-count least-squares regression? | I see two important differences.
First, the predicted values (on the original scale) behave differently; in loglinear least-squares they represent conditional geometric means; in the log-Poisson model they represent conditional means. Since data in this type of analysis are often skewed right, the conditional geometric m...
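The distinction between conditional geometric means and conditional means can be made concrete with a tiny numeric sketch (hypothetical data, Python used purely for illustration):

```python
import math

# Hypothetical right-skewed counts
y = [1, 2, 2, 3, 4, 4, 5, 20]

# A Poisson GLM with log link models the (conditional) mean E[Y]
arithmetic_mean = sum(y) / len(y)

# OLS on log(y), back-transformed, targets the (conditional) geometric mean
geometric_mean = math.exp(sum(math.log(v) for v in y) / len(y))

# For right-skewed data the geometric mean sits below the arithmetic mean,
# so the two models' back-transformed predictions differ systematically.
```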
12,032 | C++ libraries for statistical computing | We have spent some time making the wrapping from C++ into R (and back for that matter) a lot easier via our Rcpp package.
And because linear algebra is already such a well-understood and coded-for field, Armadillo, a current, modern, pleasant, well-documented, small, templated, ... library was a very natural fit for our...
12,033 | C++ libraries for statistical computing | I would strongly suggest that you have a look at the Rcpp and RcppArmadillo packages for R. Basically, you would not need to worry about the wrappers as they are already "included". Furthermore, the syntactic sugar is really sweet (pun intended).
As a side remark, I would recommend that you have a look at JAGS, which does M...
12,034 | C++ libraries for statistical computing | Boost Random from the Boost C++ libraries could be a good fit for you. In addition to many types of RNGs, it offers a variety of different distributions to draw from, such as
Uniform (real)
Uniform (unit sphere or arbitrary dimension)
Bernoulli
Binomial
Cauchy
Gamma
Poisson
Geometric
Triangle
Exponential
Normal
Lognor...
12,035 | C++ libraries for statistical computing | There are numerous C/C++ libraries out there, most focusing on a particular problem domain (e.g. PDE solvers). There are two comprehensive libraries I can think of that you may find especially useful because they are written in C but have excellent Python wrappers already written.
1) IMSL C and PyIMSL
2) trilinos an...
12,036 | Minimum sample size per cluster in a random effect model | TL;DR: The minimum sample size per cluster in a mixed-effects model is 1, provided that the number of clusters is adequate, and the proportion of singleton clusters is not "too high".
Longer version:
In general, the number of clusters is more important than the number of observations per cluster. With 700, clearly you ha...
12,037 | Minimum sample size per cluster in a random effect model | In mixed models the random effects are most often estimated using empirical Bayes methodology. A feature of this methodology is shrinkage. Namely, the estimated random effects are shrunk towards the overall mean of the model described by the fixed-effects part. The degree of shrinkage depends on two components:
The ma...
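The truncated answer above points to shrinkage; a standard textbook form of the shrinkage weight for a random-intercept model can be sketched as follows (the variance values are hypothetical, not taken from the answer):

```python
# Empirical-Bayes shrinkage for a cluster mean in a random-intercept model:
# the cluster's estimate is a weighted average of its own mean and the grand
# mean, with weight tau2 / (tau2 + sigma2 / n_i) on the cluster's own mean.
tau2 = 4.0    # hypothetical between-cluster variance
sigma2 = 9.0  # hypothetical within-cluster (residual) variance

def shrinkage_factor(n_i):
    # grows toward 1 as the cluster size n_i increases
    return tau2 / (tau2 + sigma2 / n_i)

# A singleton cluster is shrunk heavily toward the grand mean;
# a cluster of 50 observations keeps most of its own mean.
```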
12,038 | What is the difference between univariate and multivariate time series? | Univariate time series: Only one variable is varying over time. For example, data collected from a sensor measuring the temperature of a room every second. Therefore, each second, you will only have a one-dimensional value, which is the temperature.
Multivariate time series: Multiple variables are varying over time. Fo...
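As a data-structure illustration (hypothetical numbers): a univariate series holds one value per time step, while a multivariate series holds a vector of values per time step.

```python
# One value per second: room temperature
univariate = [21.3, 21.4, 21.6, 21.5]

# A vector per second: (temperature, humidity) from two sensors
multivariate = [(21.3, 45.1), (21.4, 44.8), (21.6, 44.9), (21.5, 45.0)]

# Same length in time, but each multivariate observation is 2-dimensional
```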
12,039 | Texas sharpshooter fallacy in exploratory data analysis | If one views the role of EDA strictly as generating hypotheses, then no the sharpshooter fallacy does not apply. However, it is very important that subsequent confirmatory trials are indeed independent. Many researchers attempt to "reconcile differences" with things like pooled analyses, meta analyses, and Bayesian met...
12,040 | Texas sharpshooter fallacy in exploratory data analysis | This paints a very negative view of exploratory data analysis. While the argument is not wrong, it's really saying "what can go wrong when I use a very important tool in the wrong manner?"
Accepting unadjusted p-values from EDA methods will lead to vastly inflated type I error rates. But I think Tukey would not be happ...
12,041 | Texas sharpshooter fallacy in exploratory data analysis | It looks like any exploratory process performed without having a hypothesis beforehand is prone to generate spurious hypotheses.
I would temper this statement and express it a little differently: Choosing a hypothesis to test based on the data undermines the test if one doesn't use the correct null hypothesis. The thr...
12,042 | Texas sharpshooter fallacy in exploratory data analysis | Almost by definition, yes, of course EDA without CDA attracts Texas sharpshooters.
The difficulty when CDA is not possible (perhaps no further data can be obtained) is in being honest with yourself about how many tests you've really performed, and thus in assigning some kind of $p$-value to your discovery. Even in cas...
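The "being honest about how many tests you've really performed" point can be sketched with a simple Bonferroni-style adjustment (hypothetical numbers, one of several possible corrections):

```python
# A raw p-value of 0.01 looks impressive in isolation, but if it was found
# after informally scanning ~50 candidate patterns, a conservative
# Bonferroni-style correction multiplies it by the number of looks.
raw_p = 0.01
tests_performed = 50

adjusted_p = min(1.0, raw_p * tests_performed)
# adjusted_p is 0.5: nowhere near conventional significance
```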
12,043 | Texas sharpshooter fallacy in exploratory data analysis | Just to add to the already great answers: There is a middle ground between a full CDA and just accepting your EDA results at face value. Once you've found a possible feature of interest (or hypothesis), you can get a sense of its robustness by performing cross-validation (CV) or bootstrap simulations. If your findings ...
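A minimal sketch of the bootstrap idea mentioned above (hypothetical data; stdlib Python only):

```python
import random

random.seed(0)
data = [2.1, 2.5, 3.0, 2.8, 3.5, 2.2, 4.1, 3.3, 2.9, 3.7]  # hypothetical EDA sample

# Resample with replacement many times to see how stable the statistic is
boot_means = sorted(
    sum(random.choice(data) for _ in data) / len(data)
    for _ in range(2000)
)

lo, hi = boot_means[49], boot_means[1949]  # rough 95% percentile interval
# If the interval is wide, the "interesting" EDA finding may not be robust
```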
12,044 | Texas sharpshooter fallacy in exploratory data analysis | The most rigorous criterion for data model selection is the degree to which it approximates the Kolmogorov complexity of the data -- which is to say the degree to which it losslessly compresses the data. This can, in theory, result from exploratory data analysis alone.
See "Causal deconvolution by algorithmic generative...
12,045 | Neyman-Pearson lemma | I think you understood the lemma well.
Why does it not work for a composite alternative?
As you can see in the likelihood ratio, we need to plug in the parameter(s) for the alternative hypothesis. If the alternative is composite, which parameter are you going to plug in?
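A small numeric sketch of this point (hypothetical binomial data; both hypotheses simple, so the likelihood ratio is computable):

```python
import math

# H0: p = 0.5 versus H1: p = 0.7, observing k = 15 successes in n = 20 trials
n, k = 20, 15

def binom_lik(p):
    # Binomial likelihood of the observed data at success probability p
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

lr = binom_lik(0.7) / binom_lik(0.5)  # Neyman-Pearson likelihood ratio
# lr is about 12: the data favor H1, and we reject H0 when lr is large.
# With a composite H1 (say p > 0.5) there is no single value to plug into
# the numerator -- which is exactly the difficulty noted above.
```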
12,046 | Neyman-Pearson lemma | I recently wrote an entry in a LinkedIn blog stating the Neyman-Pearson lemma in plain words and providing an example. I found the example eye-opening in the sense of providing a clear intuition on the lemma. As often in probability, it is based on a discrete probability mass function, so it is easier to understand than when w...
12,047 | Neyman-Pearson lemma | The Context
(In this section I'm just going to explain hypothesis testing, type one and two errors, etc, in my own style. If you're comfortable with this material, skip to the next section)
The Neyman-Pearson lemma comes up in the problem of simple hypothesis testing. We have two different probability distributions on ...
12,048 | How to calculate purity? | Within the context of cluster analysis, purity is an external evaluation criterion of cluster quality. It is the percent of the total number of objects (data points) that were classified correctly, in the unit range [0..1]:
$$\text{Purity} = \frac{1}{N} \sum_{i=1}^{k} \max_{j} |c_i \cap t_j|$$
where $N$ = number of objects (data p...
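The formula can be sketched in Python (hypothetical labels; `clusters` plays the role of the algorithm's assignments $c_i$ and `classes` the ground-truth partitions $t_j$):

```python
from collections import Counter

def purity(clusters, classes):
    # For each cluster, count its most frequent true class; purity is the
    # sum of those counts divided by the total number of objects N.
    total = 0
    for c in set(clusters):
        members = [cls for k, cls in zip(clusters, classes) if k == c]
        total += Counter(members).most_common(1)[0][1]
    return total / len(clusters)

clusters = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
classes  = ['a', 'a', 'b', 'b', 'b', 'a', 'c', 'c', 'a', 'c']
# purity(clusters, classes) -> 0.7  (majorities: 2 'a' + 2 'b' + 3 'c' out of 10)
```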
12,049 | What is the difference between "margin of error" and "standard error"? | Short answer: they differ by a quantile of the reference (usually, the standard normal) distribution.
Long answer: you are estimating a certain population parameter (say, proportion of people with red hair; it may be something far more complicated, from say a logistic regression parameter to the 75th percentile of the ...
12,050 | What is the difference between "margin of error" and "standard error"? | This is an expanded attempt at the question (an exegetical expansion of @StasK's answer), focusing on proportions.
Standard Error:
The standard error (SE) of the sampling distribution of a proportion $p$ is defined as:
$\text{SE}_p=\sqrt{\frac{p\,(1-p)}{n}}$. This can be contrasted to the standard deviation (SD) of the sampl...
12,051 | What is the difference between "margin of error" and "standard error"? | The margin of error is the amount added and subtracted in a confidence interval.
The standard error is the standard deviation of the sample statistics if we could take many samples of the same size.
12,052 | What is the difference between "margin of error" and "standard error"? | Sampling error measures the extent to which a sample statistic differs from the parameter being estimated.
The standard error, on the other hand, tries to quantify the variation among sample statistics drawn from the same population.
12,053 | What is the difference between "margin of error" and "standard error"? | Using @Antoni Parellada's example of sampling proportion, $\hat{p}$,
The standard error of the sample is defined as:
$$
\hat{SE} = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
$$
The margin of error utilizes the z-score at a specified level of confidence $\alpha$ (e.g., $\alpha =$ 0.05 corresponds to a 95% CI):
$$
M = z_{\alph...
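The standard error and margin of error defined above can be illustrated numerically. A minimal sketch — the sample counts are invented, and `statistics.NormalDist` is used here only to supply the two-sided z-score:

```python
import math
from statistics import NormalDist

# invented sample: 520 successes out of 1000 trials
n, successes = 1000, 520
p_hat = successes / n

# standard error of the sample proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)

# margin of error at 95% confidence (z-score for alpha = 0.05, two-sided)
z = NormalDist().inv_cdf(1 - 0.05 / 2)
margin = z * se

print(round(se, 4), round(margin, 4))
```

The margin of error is just the standard error scaled by the z-score, which is why the two quantities are so often conflated.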
12,054 | Are mediation analyses inherently causal? | A. "Mediation" conceptually means causation (as Kenny quote indicates). Path models that treat a variable as a mediator thus mean to convey that some treatment is influencing an outcome variable through its effect on the mediator, variance in which in turn causes the outcome to vary. But modeling something as a "mediat...
12,055 | Are mediation analyses inherently causal? | Causality and Mediation
A mediation model makes theoretical claims about causality.
The model proposes that the IV causes the DV and that this effect is totally or partially explained by a chain of causality whereby the IV causes the MEDIATOR which in turn causes the DV.
Support for a mediational model does not prov...
12,056 | Are mediation analyses inherently causal? | I believe that those variables you are talking about should perhaps be considered 'control' variables if the IV doesn't cause them, or moderators if you expect an interaction effect. Try it out on paper and work it over in your mind a couple of times, or draw the hypothesized effects.
12,057 | Are mediation analyses inherently causal? | Perhaps better language, or at least a lot less confusing, is spurious correlation. A typical example of this is that ice-cream consumption correlates with drowning. Therefore, someone might think, ice-cream consumption causes drowning. Spurious correlation occurs when a third "moderating" variable is actually causal w...
12,058 | Are mediation analyses inherently causal? | I came across this post in my own research relating to causal inference in the context of genomics. The attempt at discerning causality in this domain often stems from playing with how a person's genetic code can be thought of as randomized (due to how sex cells are formed and ultimately pair up). Coupling this with kn...
12,059 | Assessing the significance of differences in distributions | I believe that this calls for a two-sample Kolmogorov–Smirnov test, or the like. The two-sample Kolmogorov–Smirnov test is based on comparing differences in the empirical distribution functions (ECDF) of two samples, meaning it is sensitive to both the location and shape of the two samples. It also generalizes out to...
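The ECDF comparison this answer describes is easy to sketch directly. A minimal illustration with invented samples — the statistic below is the largest vertical gap between the two empirical distribution functions, which is the two-sample KS statistic (no p-value is computed here):

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample KS statistic: the largest vertical gap between the two ECDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    ecdf_a = np.searchsorted(a, grid, side="right") / len(a)
    ecdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(ecdf_a - ecdf_b))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)        # sample one
y = rng.normal(0.5, 1.0, 500)        # location-shifted sample two

print(ks_stat(x, x), ks_stat(x, y))  # 0.0 vs a clearly positive gap
```

Because the statistic looks at the whole ECDF, it picks up both location and shape differences, as the answer notes.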
12,060 | Assessing the significance of differences in distributions | I'm going to ask the consultant's dumb question. Why do you want to know if these distributions are different in a statistically significant way?
Is it that the data that you are using are representative samples from populations or processes, and you want to assess the evidence that those populations or processes di...
12,061 | Assessing the significance of differences in distributions | You might be interested in applying relative distribution methods. Call one group the reference group, and the other the comparison group. In a way similar to constructing a probability-probability plot, you can construct a relative CDF/PDF, which is a ratio of the densities. This relative density can be used for in...
12,062 | Assessing the significance of differences in distributions | One measure of the difference between two distributions is the "maximum mean discrepancy" criterion, which basically measures the difference between the empirical means of the samples from the two distributions in a Reproducing Kernel Hilbert Space (RKHS). See the paper "A kernel method for the two-sample problem".
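The empirical (biased) squared-MMD estimate mentioned in this answer can be sketched in a few lines. The RBF kernel, bandwidth, and samples below are all illustrative choices, not anything prescribed by the cited paper:

```python
import numpy as np

def mmd2_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD between 1-D samples x, y with an RBF kernel."""
    def k(p, q):
        d2 = (p[:, None] - q[None, :]) ** 2          # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(1)
same = mmd2_rbf(rng.normal(0, 1, 400), rng.normal(0, 1, 400))
diff = mmd2_rbf(rng.normal(0, 1, 400), rng.normal(2, 1, 400))
print(same, diff)  # the shifted pair shows a much larger discrepancy
```

Near-identical distributions give an estimate close to zero; the mean-shifted pair gives a clearly positive value, which is exactly the "difference of RKHS means" idea.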
12,063 | Assessing the significance of differences in distributions | I don't know how to use SAS/R/Orange, but it sounds like the kind of test you need is a chi-square test.
12,064 | Why is the James-Stein estimator called a "shrinkage" estimator? | A picture is sometimes worth a thousand words, so let me share one with you. Below you can see an illustration that comes from Bradley Efron's (1977) paper Stein's paradox in statistics. As you can see, what Stein's estimator does is move each of the values closer to the grand average. It makes values greater than the ...
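The "move each value closer to the grand average" behaviour can be sketched numerically. The data and the assumed sampling variance below are invented, and this is one common James-Stein-style form (shrinking toward the grand mean), not a reproduction of Efron's exact figure:

```python
import numpy as np

# invented observed values (e.g., early-season batting averages), one per player
z = np.array([0.40, 0.38, 0.36, 0.33, 0.30, 0.28, 0.26, 0.22])
sigma2 = 0.002                        # assumed known sampling variance of each value

# shrinkage toward the grand average
zbar = z.mean()
s2 = np.sum((z - zbar) ** 2)
c = 1 - (len(z) - 3) * sigma2 / s2    # shrinkage factor, between 0 and 1 here
js = zbar + c * (z - zbar)

print(round(zbar, 3), js.round(3))    # every estimate moves toward the grand mean
```

Large values are pulled down and small values pulled up, while the ordering of the estimates is preserved — which is why "shrinkage" is the standard name.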
12,065 | Importance of the bias node in neural networks | Removing the bias will definitely affect the performance and here is why...
Each neuron is like a simple logistic regression and you have $y=\sigma(W x + b)$. The input values are multiplied with the weights, and the bias affects the initial level of squashing in the sigmoid function (tanh etc.), which results in the desi...
12,066 | Importance of the bias node in neural networks | I disagree with the other answer in the particular context of your question. Yes, a bias node matters in a small network. However, in a large model, removing the bias inputs makes very little difference because each node can make a bias node out of the average activation of all of its inputs, which by the law of larg...
12,067 | Importance of the bias node in neural networks | I'd comment on @NeilG's answer if I had enough reputation, but alas...
I disagree with you, Neil, on this. You say:
... the average activation of all of its inputs, which by the law of large numbers will be roughly normal.
I'd argue against that, and say that the law of large numbers necessitates that all observation...
12,068 | What are efficient ways to organize R code and output? [closed] | You are not the first person to ask this question.
Managing a statistical analysis project – guidelines and best practices
A workflow for R
R Workflow: Slides from a Talk at Melbourne R Users by Jeromy Anglim (including another much longer list of webpages dedicated to R Workflow)
My own stuff: Dynamic documents wit...
12,069 | What are efficient ways to organize R code and output? [closed] | I for one organize everything into 4 files for every project or analysis.
(1) 'code' Where I store text files of R functions.
(2) 'sql' Where I keep the queries used to gather my data.
(3) 'dat' Where I keep copies (usually csv) of my raw and processed data.
(4) 'rpt' Where I store the reports I've distributed.
ALL of ...
12,070 | What are efficient ways to organize R code and output? [closed] | Now that I've made the switch to Sweave I never want to go back. Especially if you have plots as output, it's so much easier to keep track of the code used to create each plot. It also makes it much easier to correct one minor thing at the beginning and have it ripple through the output without having to rerun anythi...
12,071 | What are efficient ways to organize R code and output? [closed] | For structuring single .R code files, you can also use strcode, an RStudio add-in I created to insert code separators (with optional titles) and, based on them, obtain summaries of code files. I explain the usage of it in more detail in this blog post.
12,072 | A Measure Theoretic Formulation of Bayes' Theorem | One precise formulation of Bayes' Theorem is the following, taken verbatim from Schervish's Theory of Statistics (1995).
The conditional distribution of $\Theta$ given $X=x$ is called the posterior distribution of $\Theta$.
The next theorem shows us how to calculate the posterior distribution of a parameter in the c...
12,073 | Is decision threshold a hyperparameter in logistic regression? | The decision threshold creates a trade-off between the number of positives that you predict and the number of negatives that you predict -- because, tautologically, increasing the decision threshold will decrease the number of positives that you predict and increase the number of negatives that you predict.
The decisio...
12,074 | Is decision threshold a hyperparameter in logistic regression? | But varying the threshold will change the predicted classifications. Does this mean the threshold is a hyperparameter?
Yup, it does, sorta. It's a hyperparameter of your decision rule, but not the underlying regression.
If so, why is it (for example) not possible to easily search over a grid of thresholds using sciki...
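The separation between model and decision rule that this answer draws can be shown concretely: the fitted model's probabilities never change, only the thresholded labels do. A sketch with invented predicted probabilities:

```python
import numpy as np

# made-up predicted probabilities from some already-fitted classifier
proba = np.array([0.15, 0.35, 0.48, 0.52, 0.70, 0.90])

# the decision rule sits on top of the model: classify by thresholding
counts = {t: int((proba >= t).sum()) for t in (0.3, 0.5, 0.7)}
print(counts)  # {0.3: 5, 0.5: 3, 0.7: 2} — raising the threshold shrinks the positive class
```

Tuning the threshold therefore needs no refitting at all, which is why it is usually handled after model selection rather than inside a hyperparameter grid.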
12,075 | Is the median a type of mean, for some generalization of "mean"? | Here's one way that you might regard a median as a "general sort of mean" -- first, carefully define your ordinary arithmetic mean in terms of order statistics:
$$\bar{x} = \sum_i w_i x_{(i)},\qquad w_i=\frac{_1}{^n}\,.$$
Then by replacing that ordinary average of order statistics with some other weight function, we ge...
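The order-statistic view above can be made concrete: the mean puts weight $1/n$ on every order statistic, while the median (for odd $n$) puts all its weight on the middle one. A minimal sketch with invented data:

```python
import numpy as np

x = np.array([3.0, 1.0, 7.0, 4.0, 10.0])
xs = np.sort(x)                  # order statistics x_(1) <= ... <= x_(n)
n = len(xs)

# arithmetic mean: equal weights 1/n on the order statistics
mean_w = np.full(n, 1 / n)

# median (odd n): all weight on the middle order statistic
med_w = np.zeros(n)
med_w[n // 2] = 1.0

print(xs @ mean_w, xs @ med_w)   # 5.0 and 4.0
```

Other weight functions over the order statistics give trimmed and Winsorized means in the same way.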
12,076 | Is the median a type of mean, for some generalization of "mean"? | If you think of the mean as the point minimizing the quadratic loss function SSE, then the median is the point minimizing the linear loss function MAD, and the mode is the point minimizing some 0-1 loss function. No transformations required.
So the median is an example of a Fréchet mean.
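The loss-minimization view in this answer can be checked numerically over a grid: squared loss is minimized at the mean, absolute loss at the median. A small sketch with invented data:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
grid = np.linspace(0, 12, 1201)  # candidate points, step 0.01

# sum of squared deviations from each candidate -> minimized at the mean
sse = ((x[None, :] - grid[:, None]) ** 2).sum(axis=1)
# sum of absolute deviations from each candidate -> minimized at the median
sad = np.abs(x[None, :] - grid[:, None]).sum(axis=1)

print(grid[sse.argmin()], x.mean())      # both 3.6
print(grid[sad.argmin()], np.median(x))  # both 2.0
```

Note how the outlier at 10 drags the squared-loss minimizer (the mean) upward while the absolute-loss minimizer (the median) stays put.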
12,077 | Is the median a type of mean, for some generalization of "mean"? | The question invites us to characterize the concept of "mean" in a sufficiently broad sense to encompass all the usual means--power means, $L^p$ means, medians, trimmed means--but not so broadly that it becomes almost useless for data analysis. This reply discusses some of the axiomatic properties that any reasonably ...
12,078 | Is the median a type of mean, for some generalization of "mean"? | One easy but fruitful generalization is to weighted means, $\sum_{i=1}^n w_i x_i / \sum_{i=1}^n w_i,$ where $\sum_{i=1}^n w_i = 1$. Clearly the common or garden mean is the simplest special case with equal weights $w_i = 1/n$.
Letting the weights depend on the order of values in magnitude, from smallest to largest, po...
12,079 | Is the median a type of mean, for some generalization of "mean"? | I think the median can be considered a type of a generalization of the arithmetic mean. Specifically, the arithmetic mean and the median (among others) can be unified as special cases of the Chisini mean. If you are going to perform some operation over a set of values, the Chisini mean is a number that you can substi... | Is the median a type of mean, for some generalization of "mean"? | I think the median can be considered a type of a generalization of the arithmetic mean. Specifically, the arithmetic mean and the median (among others) can be unified as special cases of the Chisini | Is the median a type of mean, for some generalization of "mean"?
I think the median can be considered a type of a generalization of the arithmetic mean. Specifically, the arithmetic mean and the median (among others) can be unified as special cases of the Chisini mean. If you are going to perform some operation over ... | Is the median a type of mean, for some generalization of "mean"?
I think the median can be considered a type of a generalization of the arithmetic mean. Specifically, the arithmetic mean and the median (among others) can be unified as special cases of the Chisini |
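The Chisini idea can be sketched directly: the Chisini mean for an operation is the value you can substitute for every observation without changing the operation's result. The two operations below (sum and product, giving the arithmetic and geometric means) are standard illustrations chosen for this sketch, not taken from the answer:

```python
import math

def chisini_mean_for_sum(xs):
    # M such that M + M + ... + M == sum(xs)  ->  the arithmetic mean
    return sum(xs) / len(xs)

def chisini_mean_for_product(xs):
    # M such that M * M * ... * M == product of xs  ->  the geometric mean
    return math.prod(xs) ** (1 / len(xs))

xs = [2.0, 8.0]
print(chisini_mean_for_sum(xs), chisini_mean_for_product(xs))
```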
12,080 | Is the median a type of mean, for some generalization of "mean"? | The question is not well defined. If we agree on the common "street" definition of mean as the sum of n numbers divided by n then we have a stake in the ground. Further, if we would look at measures of central tendency we could say both Mean and Median are generalizations, but not of each other. Part of my background is ... | Is the median a type of mean, for some generalization of "mean"? | The question is not well defined. If we agree on the common "street" definition of mean as the sum of n numbers divided by n then we have a stake in the ground. Further, if we would look at measures of | Is the median a type of mean, for some generalization of "mean"?
The question is not well defined. If we agree on the common "street" definition of mean as the sum of n numbers divided by n then we have a stake in the ground. Further, if we would look at measures of central tendency we could say both Mean and Median are generalizations, but not of each other. Part of my background is ... | Is the median a type of mean, for some generalization of "mean"?
The question is not well defined. If we agree on the common "street" definition of mean as the sum of n numbers divided by n then we have a stake in the ground. Further, if we would look at measures of
12,081 | CHAID vs CRT (or CART) | I will list some properties and later give you my appraisal for what it's worth:
CHAID uses multiway splits by default (multiway splits mean that the current node is split into more than two nodes). This may or may not be desired (it can lead to better segments or easier interpretation). What it definitely does, t... | CHAID vs CRT (or CART) | I will list some properties and later give you my appraisal for what it's worth:
CHAID uses multiway splits by default (multiway splits mean that the current node is split into more than two node | CHAID vs CRT (or CART)
I will list some properties and later give you my appraisal for what it's worth:
CHAID uses multiway splits by default (multiway splits mean that the current node is split into more than two nodes). This may or may not be desired (it can lead to better segments or easier interpretation). Wha... | CHAID vs CRT (or CART)
I will list some properties and later give you my appraisal for what it's worth:
CHAID uses multiway splits by default (multiway splits mean that the current node is split into more than two node
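CHAID's split search is driven by chi-squared tests of candidate child nodes against the target. A bare-bones version of the Pearson statistic it relies on (significance testing and CHAID's Bonferroni adjustment omitted; the example counts are made up):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table --
    the kind of criterion CHAID uses to compare candidate splits."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Success/failure counts in a hypothetical three-way candidate split:
print(chi_square_statistic([[30, 10], [20, 20], [10, 30]]))
```

A larger statistic means the candidate multiway split separates the outcome more sharply.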
12,082 | CHAID vs CRT (or CART) | All single-tree methods involve a staggering number of multiple comparisons that bring great instability to the result. That is why to achieve satisfactory predictive discrimination some form of tree averaging (bagging, boosting, random forests) is necessary (except that you lose the advantage of trees - interpretabil... | CHAID vs CRT (or CART) | All single-tree methods involve a staggering number of multiple comparisons that bring great instability to the result. That is why to achieve satisfactory predictive discrimination some form of tree | CHAID vs CRT (or CART)
All single-tree methods involve a staggering number of multiple comparisons that bring great instability to the result. That is why to achieve satisfactory predictive discrimination some form of tree averaging (bagging, boosting, random forests) is necessary (except that you lose the advantage o... | CHAID vs CRT (or CART)
All single-tree methods involve a staggering number of multiple comparisons that bring great instability to the result. That is why to achieve satisfactory predictive discrimination some form of tree |
12,083 | Whether to use structural equation modelling to analyse observational studies in psychology | My disclaimer: I realize this question has laid dormant for some time, but it seems to be an important one, and one that you intended to elicit multiple responses. I am a Social Psychologist, and from the sounds of it, probably a bit more comfortable with such designs than Henrik (though his concerns about causal inter... | Whether to use structural equation modelling to analyse observational studies in psychology | My disclaimer: I realize this question has laid dormant for some time, but it seems to be an important one, and one that you intended to elicit multiple responses. I am a Social Psychologist, and from | Whether to use structural equation modelling to analyse observational studies in psychology
My disclaimer: I realize this question has laid dormant for some time, but it seems to be an important one, and one that you intended to elicit multiple responses. I am a Social Psychologist, and from the sounds of it, probably ... | Whether to use structural equation modelling to analyse observational studies in psychology
My disclaimer: I realize this question has laid dormant for some time, but it seems to be an important one, and one that you intended to elicit multiple responses. I am a Social Psychologist, and from |
12,084 | Whether to use structural equation modelling to analyse observational studies in psychology | Disclaimer: I consider myself an experimental psychologist with an emphasis on experimental. Hence, I have a natural unease with designs like this.
To answer your first and second question: I think for a design like this an SEM or, depending on the number of variables involved, mediation or moderation analyses is the nat... | Whether to use structural equation modelling to analyse observational studies in psychology | Disclaimer: I consider myself an experimental psychologist with an emphasis on experimental. Hence, I have a natural unease with designs like this.
To answer your first and second question: I think for | Whether to use structural equation modelling to analyse observational studies in psychology
Disclaimer: I consider myself an experimental psychologist with an emphasis on experimental. Hence, I have a natural unease with designs like this.
To answer your first and second question: I think for a design like this an SEM or... | Whether to use structural equation modelling to analyse observational studies in psychology
Disclaimer: I consider myself an experimental psychologist with an emphasis on experimental. Hence, I have a natural unease with designs like this.
To answer your first and second question: I think for
12,085 | What is/are the implicit priors in frequentist statistics? | In frequentist decision theory, there exist complete class results that characterise admissible procedures as Bayes procedures or as limits of Bayes procedures. For instance, Stein's necessary and sufficient
condition (Stein, 1955; Farrell, 1968b) states that, under the following
assumptions
the sampling density $f(x|\t... | What is/are the implicit priors in frequentist statistics? | In frequentist decision theory, there exist complete class results that characterise admissible procedures as Bayes procedures or as limits of Bayes procedures. For instance, Stein's necessary and suffi | What is/are the implicit priors in frequentist statistics?
In frequentist decision theory, there exist complete class results that characterise admissible procedures as Bayes procedures or as limits of Bayes procedures. For instance, Stein's necessary and sufficient
condition (Stein, 1955; Farrell, 1968b) states that, un... | What is/are the implicit priors in frequentist statistics?
In frequentist decision theory, there exist complete class results that characterise admissible procedures as Bayes procedures or as limits of Bayes procedures. For instance, Stein's necessary and suffi
12,086 | What is/are the implicit priors in frequentist statistics? | @Xi'an's answer is more complete. But since you also asked for a pithy take-away, here's one. (The concepts I mention are not exactly the same as the admissibility setting above.)
Frequentists often (but not always) like to use estimators that are "minimax": if I want to estimate $\theta$, my estimator $\hat{\theta}$'s... | What is/are the implicit priors in frequentist statistics? | @Xi'an's answer is more complete. But since you also asked for a pithy take-away, here's one. (The concepts I mention are not exactly the same as the admissibility setting above.)
Frequentists often ( | What is/are the implicit priors in frequentist statistics?
@Xi'an's answer is more complete. But since you also asked for a pithy take-away, here's one. (The concepts I mention are not exactly the same as the admissibility setting above.)
Frequentists often (but not always) like to use estimators that are "minimax": if... | What is/are the implicit priors in frequentist statistics?
@Xi'an's answer is more complete. But since you also asked for a pithy take-away, here's one. (The concepts I mention are not exactly the same as the admissibility setting above.)
Frequentists often ( |
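The minimax-as-implicit-prior point can be checked numerically in the classic Binomial example: under squared error, the Bayes estimator for a proportion with a Beta(√n/2, √n/2) prior has risk that does not depend on p, which is exactly what makes it minimax. This is a standard textbook illustration, not taken from the answer itself:

```python
from math import comb, sqrt

def risk(estimator, n, p):
    """Exact mean squared error of an estimator of p under Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) * (estimator(x, n) - p)**2
               for x in range(n + 1))

mle = lambda x, n: x / n
# Bayes estimator under a Beta(sqrt(n)/2, sqrt(n)/2) prior -- the minimax rule:
minimax = lambda x, n: (x + sqrt(n) / 2) / (n + sqrt(n))

n = 20
for p in (0.1, 0.3, 0.5):
    print(p, round(risk(mle, n, p), 6), round(risk(minimax, n, p), 6))
```

The minimax column is flat in p (the constant-risk property), while the MLE's risk peaks at p = 0.5 -- the "worst case" a minimax frequentist is implicitly guarding against.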
12,087 | r glmer warnings: model fails to converge & model is nearly unidentifiable | There is a nice description of how to troubleshoot this issue here:
https://rstudio-pubs-static.s3.amazonaws.com/33653_57fc7b8e5d484c909b615d8633c01d51.html
Basically, the recommendations are to rescale and center your variables, check for singularity, double-check gradient calculations, add more iterations by restarti... | r glmer warnings: model fails to converge & model is nearly unidentifiable | There is a nice description of how to troubleshoot this issue here:
https://rstudio-pubs-static.s3.amazonaws.com/33653_57fc7b8e5d484c909b615d8633c01d51.html
Basically, the recommendations are to resca | r glmer warnings: model fails to converge & model is nearly unidentifiable
There is a nice description of how to troubleshoot this issue here:
https://rstudio-pubs-static.s3.amazonaws.com/33653_57fc7b8e5d484c909b615d8633c01d51.html
Basically, the recommendations are to rescale and center your variables, check for singu... | r glmer warnings: model fails to converge & model is nearly unidentifiable
There is a nice description of how to troubleshoot this issue here:
https://rstudio-pubs-static.s3.amazonaws.com/33653_57fc7b8e5d484c909b615d8633c01d51.html
Basically, the recommendations are to resca |
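The rescale-and-center step is language-agnostic; a minimal sketch of standardizing one predictor column (in R this is what `scale()` does before refitting the `glmer` model):

```python
def standardize(column):
    """Center a predictor at 0 and scale it to unit standard deviation --
    the rescaling step recommended before refitting the model."""
    n = len(column)
    m = sum(column) / n
    sd = (sum((x - m) ** 2 for x in column) / (n - 1)) ** 0.5
    return [(x - m) / sd for x in column]

z = standardize([10.0, 20.0, 30.0, 40.0])
print(z)
```

Predictors on wildly different scales are a common cause of the "very large eigenvalue" warning, since they produce badly conditioned Hessians.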
12,088 | r glmer warnings: model fails to converge & model is nearly unidentifiable | The correlation of fixed effects in your last output suggests
that there is a problem of multicollinearity. Some of the
fixed effects are almost perfectly correlated (r = 1 or r =
-1). Especially, group1 and its interactions seem to be
problematic. You could check some descriptive statistics and
plots of your fixed ef... | r glmer warnings: model fails to converge & model is nearly unidentifiable | The correlation of fixed effects in your last output suggests
that there is a problem of multicollinearity. Some of the
fixed effects are almost perfectly correlated (r = 1 or r =
-1). Especially, gr | r glmer warnings: model fails to converge & model is nearly unidentifiable
The correlation of fixed effects in your last output suggests
that there is a problem of multicollinearity. Some of the
fixed effects are almost perfectly correlated (r = 1 or r =
-1). Especially, group1 and its interactions seem to be
problema... | r glmer warnings: model fails to converge & model is nearly unidentifiable
The correlation of fixed effects in your last output suggests
that there is a problem of multicollinearity. Some of the
fixed effects are almost perfectly correlated (r = 1 or r =
-1). Especially, gr |
12,089 | Interpreting estimates of cloglog regression | With a complementary-log-log link function, it's not logistic regression -- the term "logistic" implies a logit link. It's still a binomial regression of course.
the estimate of time is 0.015. Is it correct to say the odds of mortality per unit time is multiplied by exp(0.015) = 1.015113 (~1.5% increase per unit time)... | Interpreting estimates of cloglog regression | With a complementary-log-log link function, it's not logistic regression -- the term "logistic" implies a logit link. It's still a binomial regression of course.
the estimate of time is 0.015. Is it | Interpreting estimates of cloglog regression
With a complementary-log-log link function, it's not logistic regression -- the term "logistic" implies a logit link. It's still a binomial regression of course.
the estimate of time is 0.015. Is it correct to say the odds of mortality per unit time is multiplied by exp(0.0... | Interpreting estimates of cloglog regression
With a complementary-log-log link function, it's not logistic regression -- the term "logistic" implies a logit link. It's still a binomial regression of course.
the estimate of time is 0.015. Is it |
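Under a cloglog link, exp(β) multiplies the cumulative hazard −log(1 − p), not the odds. A quick numeric check (the baseline linear predictor `eta0` is a made-up value; the exact equality of the ratio holds for any baseline):

```python
from math import exp, log

def p_from_eta(eta):
    # inverse complementary log-log link: p = 1 - exp(-exp(eta))
    return 1 - exp(-exp(eta))

beta_time = 0.015   # the coefficient from the question
eta0 = -2.0         # hypothetical baseline linear predictor

p0 = p_from_eta(eta0)
p1 = p_from_eta(eta0 + beta_time)

# exp(beta) scales the cumulative hazard -log(1 - p), not the odds:
hazard_ratio = log(1 - p1) / log(1 - p0)
print(hazard_ratio, exp(beta_time))
```

So "multiplied by exp(0.015) ≈ 1.015" is right for the hazard-style quantity −log(1 − p), but calling it an odds ratio would mix up the logit and cloglog links.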
12,090 | Can we use leave one out mean and standard deviation to reveal the outliers? | It might seem counter-intuitive, but using the approach you describe doesn't make sense (to take your wording, I would rather write "can lead to outcomes very different from those intended") and one should never do it: the risks of it not working are large and besides, there exists a simpler, much safer and better esta... | Can we use leave one out mean and standard deviation to reveal the outliers? | It might seem counter-intuitive, but using the approach you describe doesn't make sense (to take your wording, I would rather write "can lead to outcomes very different from those intended") and one s | Can we use leave one out mean and standard deviation to reveal the outliers?
It might seem counter-intuitive, but using the approach you describe doesn't make sense (to take your wording, I would rather write "can lead to outcomes very different from those intended") and one should never do it: the risks of it not work... | Can we use leave one out mean and standard deviation to reveal the outliers?
It might seem counter-intuitive, but using the approach you describe doesn't make sense (to take your wording, I would rather write "can lead to outcomes very different from those intended") and one s |
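The answer's alternative is cut off in this excerpt; one commonly used robust rule (not necessarily the exact one the answer goes on to describe) flags points by their distance from the median in MAD units, since the median and MAD barely move when an outlier is present:

```python
from statistics import median

def mad_outliers(xs, threshold=3.5):
    """Flag points far from the median in robust (MAD) units. Unlike the
    leave-one-out mean/sd approach, the outlier cannot inflate the scale
    estimate and thereby mask itself."""
    med = median(xs)
    mad = median(abs(x - med) for x in xs)
    scale = 1.4826 * mad   # makes the MAD consistent with the sd for normal data
    return [x for x in xs if abs(x - med) > threshold * scale]

print(mad_outliers([1, 2, 2, 3, 3, 3, 4, 4, 5, 100]))
```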
12,091 | Appropriateness of ANOVA after k-means cluster analysis | No!
You must not use the same data to 1) perform clustering and 2) hunt for significant differences between the points in the clusters. Even if there's no actual structure in the data, the clustering will impose one by grouping together points which are nearby. This shrinks the within-group variance and grows the acros... | Appropriateness of ANOVA after k-means cluster analysis | No!
You must not use the same data to 1) perform clustering and 2) hunt for significant differences between the points in the clusters. Even if there's no actual structure in the data, the clustering | Appropriateness of ANOVA after k-means cluster analysis
No!
You must not use the same data to 1) perform clustering and 2) hunt for significant differences between the points in the clusters. Even if there's no actual structure in the data, the clustering will impose one by grouping together points which are nearby. Th... | Appropriateness of ANOVA after k-means cluster analysis
No!
You must not use the same data to 1) perform clustering and 2) hunt for significant differences between the points in the clusters. Even if there's no actual structure in the data, the clustering |
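A quick simulation makes the point: cluster pure noise, then test the clusters, and the t statistic explodes. The 1-D 2-means below is a stand-in for k-means, and all sample sizes and seeds are illustrative:

```python
import random
from statistics import mean, stdev

random.seed(0)
x = [random.gauss(0, 1) for _ in range(200)]   # one homogeneous population

# Tiny 1-D 2-means (Lloyd's algorithm):
centers = [min(x), max(x)]
for _ in range(50):
    groups = [[], []]
    for v in x:
        groups[abs(v - centers[0]) > abs(v - centers[1])].append(v)
    centers = [mean(g) for g in groups]

# Welch t statistic between the two "discovered" clusters:
g0, g1 = groups
se = (stdev(g0) ** 2 / len(g0) + stdev(g1) ** 2 / len(g1)) ** 0.5
t = (mean(g1) - mean(g0)) / se
print(t)   # far beyond any conventional critical value, with no real groups
```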
12,092 | Appropriateness of ANOVA after k-means cluster analysis | Your real problem is data snooping. You can't apply ANOVA or KW if the observations were assigned to groups (clusters) based on the input data set itself. What you can do is to use something like Gap statistic to estimate the number of clusters.
On the other hand, the snooped p-values are biased downward, so if ANOVA o... | Appropriateness of ANOVA after k-means cluster analysis | Your real problem is data snooping. You can't apply ANOVA or KW if the observations were assigned to groups (clusters) based on the input data set itself. What you can do is to use something like Gap | Appropriateness of ANOVA after k-means cluster analysis
Your real problem is data snooping. You can't apply ANOVA or KW if the observations were assigned to groups (clusters) based on the input data set itself. What you can do is to use something like Gap statistic to estimate the number of clusters.
On the other hand,... | Appropriateness of ANOVA after k-means cluster analysis
Your real problem is data snooping. You can't apply ANOVA or KW if the observations were assigned to groups (clusters) based on the input data set itself. What you can do is to use something like Gap |
12,093 | Appropriateness of ANOVA after k-means cluster analysis | I think you could apply such an approach (i.e. using the statistics, such as F-statistics or t-statistics or whatever), if you toss out the usual null distributions.
What you'd need to do is simulate from the situation in which your null is true, apply the whole procedure (clustering, etc), and then calculate whicheve... | Appropriateness of ANOVA after k-means cluster analysis | I think you could apply such an approach (i.e. using the statistics, such as F-statistics or t-statistics or whatever), if you toss out the usual null distributions.
What you'd need to do is simulate | Appropriateness of ANOVA after k-means cluster analysis
I think you could apply such an approach (i.e. using the statistics, such as F-statistics or t-statistics or whatever), if you toss out the usual null distributions.
What you'd need to do is simulate from the situation in which your null is true, apply the whole ... | Appropriateness of ANOVA after k-means cluster analysis
I think you could apply such an approach (i.e. using the statistics, such as F-statistics or t-statistics or whatever), if you toss out the usual null distributions.
What you'd need to do is simulate |
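A sketch of that recipe: treat "cluster, then compute the test statistic" as one procedure, and build the statistic's null distribution by rerunning the whole procedure on data simulated under "no structure" (the 1-D 2-means, sample sizes, and seeds below are illustrative):

```python
import random
from statistics import mean, stdev

def cluster_t(x):
    """The whole procedure: split x with 1-D 2-means, then return the
    absolute Welch t statistic between the two resulting groups."""
    c = [min(x), max(x)]
    for _ in range(25):
        g = [[], []]
        for v in x:
            g[abs(v - c[0]) > abs(v - c[1])].append(v)
        c = [mean(gi) for gi in g]
    se = (stdev(g[0]) ** 2 / len(g[0]) + stdev(g[1]) ** 2 / len(g[1])) ** 0.5
    return abs(mean(g[1]) - mean(g[0])) / se

random.seed(1)
observed = cluster_t([random.gauss(0, 1) for _ in range(100)])

# Null distribution: rerun the *entire* procedure on structureless data:
null = [cluster_t([random.gauss(0, 1) for _ in range(200 // 2)]) for _ in range(200)]
p = sum(s >= observed for s in null) / len(null)
print(observed, p)   # nowhere near the tiny p-value a naive t-table would give
```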
12,094 | Appropriateness of ANOVA after k-means cluster analysis | Not exactly an answer, but a proposal on how one would find the solution.
I was thinking about that cluster problem. The test would require sampling from the full dataset and deriving kmeans and seeing if the same kmeans occurs within a distribution (example with clustergram) from various samples (normally kmeans itsel... | Appropriateness of ANOVA after k-means cluster analysis | Not exactly an answer, but a proposal on how one would find the solution.
I was thinking about that cluster problem. The test would require sampling from the full dataset and deriving kmeans and seein | Appropriateness of ANOVA after k-means cluster analysis
Not exactly an answer, but a proposal on how one would find the solution.
I was thinking about that cluster problem. The test would require sampling from the full dataset and deriving kmeans and seeing if the same kmeans occurs within a distribution (example with ... | Appropriateness of ANOVA after k-means cluster analysis
Not exactly an answer, but a proposal on how one would find the solution.
I was thinking about that cluster problem. The test would require sampling from the full dataset and deriving kmeans and seein |
12,095 | Test for IID sampling | What you conclude about whether data are IID comes from outside information, not the data itself. You as the scientist need to determine if it is reasonable to assume the data are IID based on how the data were collected and other outside information.
Consider some examples.
Scenario 1: We generate a set of data independently... | Test for IID sampling | What you conclude about whether data are IID comes from outside information, not the data itself. You as the scientist need to determine if it is reasonable to assume the data w | Test for IID sampling
What you conclude about whether data are IID comes from outside information, not the data itself. You as the scientist need to determine if it is reasonable to assume the data are IID based on how the data were collected and other outside information.
Consider some examples.
Scenario 1: We generate a set... | Test for IID sampling
What you conclude about whether data are IID comes from outside information, not the data itself. You as the scientist need to determine if it is reasonable to assume the data w
12,096 | Test for IID sampling | If the data have an index ordering, you can use white noise tests for time series. Essentially that means testing that the autocorrelations at all non-zero lags are 0. This handles the independence part. I think your approach is mainly trying to address the identically distributed part of the assumption. I think the... | Test for IID sampling | If the data have an index ordering, you can use white noise tests for time series. Essentially that means testing that the autocorrelations at all non-zero lags are 0. This handles the independence p
If the data have an index ordering, you can use white noise tests for time series. Essentially that means testing that the autocorrelations at all non-zero lags are 0. This handles the independence part. I think your approach is mainly trying to address the identically distributed part of the assumption. I think the... | Test for IID sampling
If the data have an index ordering, you can use white noise tests for time series. Essentially that means testing that the autocorrelations at all non-zero lags are 0. This handles the independence p
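A bare-bones version of that check: compute sample autocorrelations and compare them with the approximate ±1.96/√n white-noise limits (the simulated series, seed, and lag range are illustrative):

```python
import random

def acf(x, lag):
    """Sample autocorrelation of a series at the given lag."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / c0

random.seed(2)
x = [random.gauss(0, 1) for _ in range(500)]   # stand-in for the observed series

bound = 1.96 / len(x) ** 0.5                   # approx. 95% white-noise limits
flagged = [k for k in range(1, 21) if abs(acf(x, k)) > bound]
print(flagged)   # for IID data, roughly 1 lag in 20 lands outside by chance
```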
12,097 | When mathematical statistics outsmarts probability theory | I do not find this any more surprising than saying that if $Y \sim \mathcal N(0,1)$ then $\mathbb E\left[\frac1Y\right]$ is undefined even though $\frac1Y$ has a distribution symmetric about $0$.
So let's use this to construct an example using $Y\sim \mathcal N(0,1)$:
let $X=\pm\frac1Y$ with equal probability (or $0$ ... | When mathematical statistics outsmarts probability theory | I do not find this any more surprising than saying that if $Y \sim \mathcal N(0,1)$ then $\mathbb E\left[\frac1Y\right]$ is undefined even though $\frac1Y$ has a distribution symmetric about $0$.
So l | When mathematical statistics outsmarts probability theory
I do not find this any more surprising than saying that if $Y \sim \mathcal N(0,1)$ then $\mathbb E\left[\frac1Y\right]$ is undefined even though $\frac1Y$ has a distribution symmetric about $0$.
So let's use this to construct an example using $Y\sim \mathcal N(... | When mathematical statistics outsmarts probability theory
I do not find this any more surprising than saying that if $Y \sim \mathcal N(0,1)$ then $\mathbb E\left[\frac1Y\right]$ is undefined even though $\frac1Y$ has a distribution symmetric about $0$.
So l |
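A simulation shows the symptoms of such a distribution: 1/Y with Y standard normal is symmetric about 0 (the sign is a fair coin), yet it is so heavy-tailed that its mean does not exist, so sample averages never settle (seed and sample size are illustrative):

```python
import random

random.seed(3)
n = 100_000
x = []
while len(x) < n:
    y = random.gauss(0, 1)
    if y != 0.0:            # guard against an (essentially impossible) exact zero
        x.append(1 / y)

neg_frac = sum(v < 0 for v in x) / n     # symmetry: close to 1/2
biggest = max(abs(v) for v in x)         # heavy tails: enormous values occur
print(neg_frac, biggest, sum(x[: n // 2]) / (n // 2), sum(x) / n)
```

The two printed averages (first half vs. full sample) typically disagree wildly, which is the practical face of an undefined expectation.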
12,098 | When mathematical statistics outsmarts probability theory | While it is a pleasant remark, I do not find this occurrence that surprising or paradoxical, and this for several reasons:
(i) $\mathbb E^X[0]=0$ remains true, where $\mathbb E^X[\cdot]$ denotes the expectation under the distribution of $X$;
(ii) conditional distributions and therefore expectations are only defined wit... | When mathematical statistics outsmarts probability theory | While it is a pleasant remark, I do not find this occurrence that surprising or paradoxical, and this for several reasons:
(i) $\mathbb E^X[0]=0$ remains true, where $\mathbb E^X[\cdot]$ denotes the e | When mathematical statistics outsmarts probability theory
While it is a pleasant remark, I do not find this occurrence that surprising or paradoxical, and this for several reasons:
(i) $\mathbb E^X[0]=0$ remains true, where $\mathbb E^X[\cdot]$ denotes the expectation under the distribution of $X$;
(ii) conditional dis... | When mathematical statistics outsmarts probability theory
While it is a pleasant remark, I do not find this occurrence that surprising or paradoxical, and this for several reasons:
(i) $\mathbb E^X[0]=0$ remains true, where $\mathbb E^X[\cdot]$ denotes the e |
12,099 | When mathematical statistics outsmarts probability theory | Note that we have
$$E(X|Y)=0.$$
$E(X|y) = E[X|1] \cdot y$, which when $y\to \infty$ seems to become like a case of the undefined $0 \times \infty$.
In a way the example poses the naive statement that a distribution must have $E[X] = m$ when it is symmetric about $m$. And it does this via the expression $E (X) = E [E (... | When mathematical statistics outsmarts probability theory | Note that we have
$$E(X|Y)=0.$$
$E(X|y) = E[X|1] \cdot y$, which when $y\to \infty$ seems to become like a case of the undefined $0 \times \infty$.
In a way the example poses the naive statement that | When mathematical statistics outsmarts probability theory
Note that we have
$$E(X|Y)=0.$$
$E(X|y) = E[X|1] \cdot y$, which when $y\to \infty$ seems to become like a case of the undefined $0 \times \infty$.
In a way the example poses the naive statement that a distribution must have $E[X] = m$ when it is symmetric abou... | When mathematical statistics outsmarts probability theory
Note that we have
$$E(X|Y)=0.$$
$E(X|y) = E[X|1] \cdot y$, which when $y\to \infty$ seems to become like a case of the undefined $0 \times \infty$.
In a way the example poses the naive statement that |
12,100 | The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$ | # A geometrical interpretation
The estimator described in the question is the Lagrange multiplier equivalent of the following optimization problem:
$$\text{minimize $f(\beta)$ subject to $g(\beta) \leq t$ and $h(\beta) = 1$ } $$
$$\begin{align}
f(\beta) &= \lVert y-X\beta \rVert^2 \\
g(\beta) &= \lVert \beta \rVert^... | The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$ | # A geometrical interpretation
The estimator described in the question is the Lagrange multiplier equivalent of the following optimization problem:
$$\text{minimize $f(\beta)$ subject to $g(\beta) \leq | The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$
# A geometrical interpretation
The estimator described in the question is the Lagrange multiplier equivalent of the following optimization problem:
$$\text{minimize $f(\beta)$ subject to $g(\beta) \leq t$ and $h(\beta) = 1$ } $$
$$\begin{ali... | The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$
# A geometrical interpretation
The estimator described in the question is the Lagrange multiplier equivalent of the following optimization problem:
$$\text{minimize $f(\beta)$ subject to $g(\beta) \leq |
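One related limiting fact is easy to verify numerically: for the plain ridge estimator $(X'X+\lambda I)^{-1}X'y$ (a simplification of the question's unit-norm variant), the direction converges to that of $X'y$ as $\lambda\to\infty$, because the inverse tends to $I/\lambda$. A two-predictor sketch with made-up moment matrices:

```python
def ridge_direction(xtx, xty, lam):
    """Unit-norm direction of the two-predictor ridge estimator
    (X'X + lam*I)^(-1) X'y, via the explicit 2x2 inverse."""
    a, b = xtx[0][0] + lam, xtx[0][1]
    c, d = xtx[1][0], xtx[1][1] + lam
    det = a * d - b * c
    beta = [(d * xty[0] - b * xty[1]) / det,
            (-c * xty[0] + a * xty[1]) / det]
    norm = (beta[0] ** 2 + beta[1] ** 2) ** 0.5
    return [v / norm for v in beta]

xtx = [[4.0, 1.5], [1.5, 2.0]]   # hypothetical X'X
xty = [3.0, 1.0]                  # hypothetical X'y

big = ridge_direction(xtx, xty, 1e9)
norm_xty = (xty[0] ** 2 + xty[1] ** 2) ** 0.5
print(big, [v / norm_xty for v in xty])   # the two directions agree
```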