idx | question | answer |
|---|---|---|
46,301 | Performing multiple linear regressions, in Excel, that have a common x-intercept? | I'm only a chemist, not a statistician, but the easiest way I know is to use dummy variables: for each of the (n-1) slopes, assign a 1 in its z column and multiply x*z. This means one slope will have 0 in all z's. For a common intercept with 3 slopes it would look like this.
batch y x xz 48 xz 58
48 0.9 0 0...
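The dummy-variable trick is not Excel-specific; here is a hypothetical two-batch sketch in Python/numpy (all numbers and names are made up for illustration): one shared intercept column, one baseline slope column, and one x*z column that shifts the slope for the second batch.

```python
import numpy as np

# Two batches sharing intercept a = 2, with slopes 3 (batch A) and 5 (batch B).
x = np.array([0., 1., 2., 3., 0., 1., 2., 3.])
z = np.array([0., 0., 0., 0., 1., 1., 1., 1.])   # dummy: 1 for batch B
y = 2 + 3 * x + 2 * (x * z)                      # batch B slope = 3 + 2

# Design matrix: common intercept, baseline slope, extra slope for batch B.
X = np.column_stack([np.ones_like(x), x, x * z])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # approximately [2, 3, 2]: intercept, slope A, slope shift for B
```

The fitted coefficients recover the shared intercept and the per-batch slopes (slope B = baseline + shift).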
46,302 | Assessing the accuracy of a deterministic mathematical model | A decent first step might be to compute the correlation between your model's predictions ("data A") and the observed temperatures ("data B"). Correlations range from -1 to +1: 0 indicates no (linear) relationship between the predicted and observed values, while higher values suggest that your model better agrees (up to...
46,303 | Assessing the accuracy of a deterministic mathematical model | Your question sounds confusing. When you say accuracy of the model, are you just referring to how well it predicts, or do you mean how well it simulates the behavior of weather in New York City? I don't think you can assess the latter. As to the former, I would compute the mean square prediction error. By that I mean...
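The mean square prediction error the answer refers to is just the average squared gap between prediction and observation; a minimal sketch with made-up numbers:

```python
# Mean square prediction error between model output and observations.
predicted = [20.1, 22.4, 19.8, 25.0]
observed  = [19.5, 23.0, 20.2, 24.1]

mspe = sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)
print(round(mspe, 4))  # 0.4225
```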
46,304 | Assessing the accuracy of a deterministic mathematical model | I would suggest two approaches to assessing whether or not a deterministic mathematical model is performing well - neither of which actually involve a statistical test, and which especially do not involve trying to reduce model performance to a p-value.
How well does your model predict parameters? If your model estima...
46,305 | Assessing the accuracy of a deterministic mathematical model | I recently devised a validation frame for deterministic solar irradiance forecasts. It is based on the insight that the outcome and the prediction of a perfect forecast must be mathematically exchangeable. It is generally applicable to forecasts of continuous stochastic variables.
See https://doi.org/10.1016/j.renene.2021.08....
46,306 | Plotting interval censored follow-up time as a line chart | There must be many ways to make follow-up time plots with interval censored data, although a quick Google search only found this image in an overview of censoring, which looks a bit busy to my eye.
Just to give another perspective, here's an approach using the ggplot2 package.
require(ggplot2)
# Your example data
da...
46,307 | Plotting interval censored follow-up time as a line chart | Well, that was...fairly easy. Inspired by an unrelated graph in Visualize This:
plot(data$t2, type="h", col="grey", lwd=2, xlab="Subject", ylab="Days Since Start")
lines(data$t1, type="h", col="lightskyblue", lwd=2)
points(data$atime, pch=19, cex=0.75, col="Black")
Decided that adding markers for both A and B for ...
46,308 | Visualizing high dimensional binary data | Even if this is binary, you can do a scaled Principal Component Analysis (PCA). By projecting the results on the 2D plane of the first Principal Components you get an idea of the clustering of your data.
In R:
# data is your data.frame/matrix of data
pca <- prcomp(data, scale.=TRUE)
# Screeplot to see how much variance...
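A rough Python counterpart of the R call `prcomp(data, scale.=TRUE)` (the binary matrix below is randomly generated just for illustration): standardize the columns, take the SVD, and keep the first two component scores for a 2-D scatter plot.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 100 x 20 binary data matrix.
data = rng.integers(0, 2, size=(100, 20)).astype(float)

# Centre and scale each column (what prcomp(..., scale.=TRUE) does).
Z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)

# SVD of the standardized matrix; principal component scores are U * S.
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * S

# First two PCs give the 2-D view of the clustering structure.
pc2d = scores[:, :2]
print(pc2d.shape)  # (100, 2)
```

Because each scaled column has unit variance, the squared singular values divided by (n-1) sum to the number of variables, which is a handy check on the decomposition.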
46,309 | Visualizing high dimensional binary data | Sometimes for binary data Parallel Coordinate Plots can work quite well (you will still have to play around with it, but it would work much better than with non-binary data).
46,310 | What is a method to calculate precisely $P(Y \geq X, Y\leq Z)$, given three independent random variables $X, Y$, and $Z$ | One relatively easy approach is to consider $X$, $Y$, and $Z$ as having a joint multivariate normal distribution.
$\left[\begin{array}{c}X\\Y\\Z\end{array}\right]\sim\mathrm{MVN}\left(\left[\begin{array}{c}\mu_{X}\\\mu_{Y}\\\mu_{Z}\end{array}\right],\left[\begin{array}{ccc}\sigma_{X}^{2} & 0 & 0\\0 & \sigma_{Y}^{2} & 0\\0 & 0 & \sigma_{Z}^{2}\end{array}\right]\right)$ ...
46,311 | What is a method to calculate precisely $P(Y \geq X, Y\leq Z)$, given three independent random variables $X, Y$, and $Z$ | I might just make many draws from the distribution and calculate the rate at which the event you are interested in occurs. In R:
N = 10^7
x = rnorm(N, mu_x, sig_x)
y = rnorm(N, mu_y, sig_y)
z = rnorm(N, mu_z, sig_z)
sum(x <= y & y <= z) / N
(Note the event $Y \geq X$, $Y \leq Z$ means $x \le y \le z$, so both inequalities point the same way.) It is just an estimate, so maybe do it a couple of times. Quick and dirty.
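The same quick-and-dirty estimate can be sketched in Python with only the standard library. As a sanity check (an addition here, not part of the original answer): if X, Y, Z are i.i.d. standard normal, every ordering of the three is equally likely, so P(X ≤ Y ≤ Z) = 1/3! = 1/6 ≈ 0.167.

```python
import random

random.seed(42)
N = 200_000

# Draw three independent standard normals and count how often x <= y <= z.
hits = 0
for _ in range(N):
    x, y, z = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    if x <= y <= z:
        hits += 1

estimate = hits / N
print(estimate)  # close to 1/6
```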
46,312 | Why is generalized linear model (GLM) a semi-parametric model? | A GLM isn't a semi-parametric model, but the output from typical use of GLMs can be justified with only semi-parametric assumptions.
If one only assumes that the observations $Y_1, Y_2, ... Y_n$ are independent and that
$$
g(\mathbb{E}[\,Y_i|X_i=x_i\,]) = x_i^T\beta
$$
then, under mild regularity conditions, solving...
46,313 | A Stats 101 question with a real world application | A couple of things you can do to confirm that these are really oddballs. They actually might not be, since someone has to rank #1 and #2.
(1) Express the profit ratios as a multiplier (1+rateOfReturn) and plot them to see if they follow some likely distribution (you might start with a Q-Q plot for normality, and a Q-Q...
46,314 | A Stats 101 question with a real world application | Before using Excel for something like this, first read the Spreadsheet Addiction page.
One problem you will have with whatever analysis you do is that you first identified the top 2 as unusual, then want to test them. This will always lead to some skepticism compared to if you formulated the question before looking at...
46,315 | A Stats 101 question with a real world application | It is difficult to use a significance test for this problem (i.e., one data vector with unusual observations). But oops, when I describe the problem in that way I have an idea:
You want to know whether these two datapoints are clear outliers (i.e., conceptually what Mike Anderson says). The easiest way to do this is to...
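The original answer is truncated, so the specific method is lost; one common rule of thumb for flagging "clear outliers" in a single data vector (an assumption here, not recovered from the answer) is the boxplot fence: points beyond 1.5 IQR above the third quartile. A standard-library sketch with made-up return multipliers:

```python
from statistics import quantiles

# Hypothetical return multipliers: most near 1.0, two suspiciously large.
returns = [0.9, 1.0, 1.1, 0.95, 1.05, 1.02, 0.98, 1.0, 3.5, 4.2]

q1, _, q3 = quantiles(returns, n=4)      # quartiles (exclusive method)
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr
outliers = [r for r in returns if r > upper_fence]
print(outliers)  # [3.5, 4.2]
```

As the answer cautions, since the two points were singled out before testing, any such flag deserves some skepticism.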
46,316 | Is there a reliable recursive formula for a simple moving average (moving mean)? | Just remove the oldest value in the window and add the new one.
If
$$MA(t)=\frac{1}{w}\sum\limits_{i=t-w+1}^t{y_i}$$
then
$$MA(t+1)=MA(t)+\frac{y_{t+1}-y_{t-w+1}}{w}.$$
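The recursion above can be sketched as a small Python class (an illustration, assuming a buffer of the last w values is acceptable; the deque is needed to know which value falls out of the window). On the "reliable" point: the recursion accumulates floating-point rounding over very long streams, so a robust implementation occasionally recomputes the mean from the buffer.

```python
from collections import deque

class MovingAverage:
    """Recursive simple moving average over a fixed window w.

    Each update is O(1): MA += (y_new - y_old) / w once the window is full.
    """

    def __init__(self, w):
        self.w = w
        self.window = deque()
        self.ma = 0.0

    def update(self, y):
        self.window.append(y)
        if len(self.window) > self.w:
            y_old = self.window.popleft()
            self.ma += (y - y_old) / self.w
        else:
            # Window still filling: plain running mean of what we have.
            self.ma += (y - self.ma) / len(self.window)
        return self.ma

ma = MovingAverage(3)
for y in [1, 2, 3, 4, 5]:
    print(ma.update(y))  # 1.0, 1.5, 2.0, 3.0, 4.0
```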
46,317 | Is there a reliable recursive formula for a simple moving average (moving mean)? | double mean(const double F, const double C, unsigned int *n)
{
    /* Increment first: reading and writing *n in a single expression
       such as (F*(*n)+C)/(++*n) is undefined behaviour in C. */
    ++*n;
    return (F * (*n - 1) + C) / *n;
}
F is the old average, C is a new addition to the average, and *n is the number of values in F. This does not need a buffer. (Note this is a running mean of all values seen so far, not a fixed-window moving mean.)
46,318 | Probability and log probability in hidden Markov models | A Markov Model has probabilities for each individual transition (the transfer function). In the case of a Hidden Markov Model (HMM) there is also a probability function mapping the hidden state(s) to observations.
These probabilities have to be combined to produce the sequence probability. Therefore they are multiplied...
46,319 | Probability and log probability in hidden Markov models | Yes, the probability of an observation sequence can be computed using the forward algorithm.
Note however that the forward algorithm is an iterative algorithm where a bunch of summations and multiplications are carried out in each iteration. So the answer to your second question is yes as well.
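Because those per-step probabilities multiply, long sequences underflow ordinary floating point, which is why implementations work with log probabilities; the summations inside the forward algorithm are then handled with the log-sum-exp trick. A minimal standard-library sketch (the 0.1-probability example is made up for illustration):

```python
import math

def logsumexp(log_ps):
    """log(sum(exp(x) for x in log_ps)), computed stably."""
    m = max(log_ps)
    return m + math.log(sum(math.exp(x - m) for x in log_ps))

# 1000 transition probabilities of 0.1 each: the plain product underflows
# to 0.0, but the log-domain value is perfectly representable.
logs = [math.log(0.1)] * 1000
print(sum(logs))    # about -2302.59, i.e. log(0.1**1000)
print(0.1 ** 1000)  # 0.0 -- underflow

# Summing probabilities in log space (e.g. one forward-algorithm step):
print(logsumexp([math.log(0.5), math.log(0.25)]))  # log(0.75)
```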
46,320 | Is using a questionnaire score (EuroQol's EQ-5D) with a bimodal distribution as outcome in linear regression a problem? | First, categorizing continuous variables is generally a bad idea; Royston, Altman and Sauerbrei wrote a good article on why dichotomizing is bad, and the same arguments apply to more categories. Altman wrote an article on categorizing variables, but only the abstract is freely available, and I have not read the whole ar...
46,321 | Is using a questionnaire score (EuroQol's EQ-5D) with a bimodal distribution as outcome in linear regression a problem? | One question that you ought to ask first is whether or not using a weighted score is correct for the kind of analysis that you want to do. There is a discussion of that in Parkin, D., Rice, N. and Devlin, N. (2010) Statistical analysis of EQ-5D profiles: Does the use of value sets bias inference? Medical Decision Maki...
46,322 | Is using a questionnaire score (EuroQol's EQ-5D) with a bimodal distribution as outcome in linear regression a problem? | If you have some predictor variables (which I'm assuming you have, as you mention regression in your question), I'm wondering if an ordinal logistic regression, using the Paretian measure as the dependent variable (which appears to be ordered categories of pre- versus post- differences), is the best way forward. I love ...
46,323 | Sphering data with SVD components of covariance matrix | I think I figured out the answer after seeing cardinal's suggestion and reading the Wikipedia page on whitening.
$cov(X^*) = E[X^*X^{*T}]$
$= E[D^{-\frac{1}{2}}U^TXX^TUD^{-\frac{1}{2}T}]$
$D^{-\frac{1}{2}T} = D^{-\frac{1}{2}}$ because it's a diagonal matrix
$= D^{-\frac{1}{2}}U^TE[XX^T]UD^{-\frac{1}{2}}$
$= D^{-\fra...
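The derivation can be checked numerically. A sketch in Python/numpy (the data are randomly generated here, and rows are assumed to be zero-mean observations): eigendecompose the covariance as $UDU^T$, apply $X^* = XUD^{-1/2}$, and confirm the whitened covariance is the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 zero-mean samples of 3 correlated variables.
A = rng.standard_normal((3, 3))
X = rng.standard_normal((1000, 3)) @ A.T
X -= X.mean(axis=0)

# Eigendecomposition of the sample covariance: cov = U D U^T.
cov = np.cov(X, rowvar=False)
D, U = np.linalg.eigh(cov)

# Whitening (sphering) transform: X* = X U D^{-1/2}.
X_white = X @ U @ np.diag(D ** -0.5)
print(np.allclose(np.cov(X_white, rowvar=False), np.eye(3)))  # True
```

This matches the algebra above: $cov(X^*) = D^{-1/2}U^T\,cov(X)\,UD^{-1/2} = D^{-1/2}DD^{-1/2} = I$.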
46,324 | Temporal analysis of variation in random effects | New answer: 2020!
the main interest lies in the changes in hospital-level variation
You have 15 years of data, with over 100 hospitals and around 100,000 observations per year, so an average of around 1000 observations per hospital per year.
I think there is only one approach that will answer the research question, ...
46,325 | Temporal analysis of variation in random effects | Some more information is needed to figure out the best solution here, so I'm simply answering a number of scenarios with example R code.
Modeling the outcome
If the outcome is binary, use family = binomial(). If it is count data, use family = poisson() since it's a fixed time interval. You could also consider aggregati...
46,326 | Temporal analysis of variation in random effects | You've really been thrown in the deep end!
It doesn't seem like a time series problem, but it does seem as though it could naturally be modelled as a multilevel regression. As a first step (after the usual data exploration etc. of course) I would probably fit a generalised linear mixed effects model. To include a time compn...
46,327 | How to treat holidays when working with time series data? | There is little detail here, so a generic answer.
First of all, check if this problem exists; look at the residuals during holidays and test whether there is any significant problem with accuracy there. Your model might already take holidays into account (for instance through some other predictors), or they are ...
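The residual check suggested above can be sketched numerically. A minimal numpy illustration, with synthetic residuals and a hypothetical holiday calendar, comparing mean residuals on holidays against other days with a Welch-style t statistic:

```python
import numpy as np

rng = np.random.default_rng(1)
resid = rng.normal(0.0, 1.0, size=365)      # one model residual per day (synthetic)
is_holiday = np.zeros(365, dtype=bool)
is_holiday[[0, 120, 180, 359]] = True       # hypothetical holiday dates
resid[is_holiday] += 3.0                    # inject a holiday bias so there is something to find

r_h, r_o = resid[is_holiday], resid[~is_holiday]
# Welch-style t statistic: is the mean residual different on holidays?
t = (r_h.mean() - r_o.mean()) / np.sqrt(
    r_h.var(ddof=1) / len(r_h) + r_o.var(ddof=1) / len(r_o)
)
print(t)
```

A large |t| flags a holiday accuracy problem worth modelling explicitly; a small one suggests the model already absorbs it.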
46,328 | How to treat holidays when working with time series data? | When dealing with electricity data, I think the simplest option is to treat holidays as weekends (e.g. you have a dummy variable where 1 is a normal weekday, and 0 is a weekend or holiday). A more complicated option would be to have separate dummy variables (0/1) for weekday vs. weekend and normal day vs. holiday.
The...
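The dummy-variable coding described above can be sketched with the standard library alone; the holiday set here is hypothetical:

```python
from datetime import date, timedelta

holidays = {date(2024, 1, 1), date(2024, 12, 25)}  # hypothetical holiday calendar

def day_dummies(d):
    """Return (is_workday, is_holiday) indicators for one day.

    The simple option is the single is_workday dummy: 1 on a normal
    weekday, 0 on a weekend or holiday. The richer option keeps the
    holiday effect separate via the second indicator."""
    weekend = d.weekday() >= 5          # Saturday=5, Sunday=6
    holiday = d in holidays
    workday = int(not weekend and not holiday)
    return workday, int(holiday)

days = [date(2024, 1, 1) + timedelta(days=i) for i in range(7)]
print([day_dummies(d) for d in days])
```

These columns would then enter the regression alongside the other predictors.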
46,329 | How to treat holidays when working with time series data? | I have done a lot of work with hourly data and have concluded that a two-prong approach seems to deliver useful models. First of all we model the daily totals taking into account any day-of-the-week effects, any fixed-day of the month effects that can be identified along with any holiday effects. Each holiday can have...
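The two-prong idea above (model the daily total first, then spread it over the hours) can be sketched as follows. The synthetic history and the use of average hourly proportions are my own simplifications for illustration, not the answerer's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
# history: 28 days x 24 hourly loads sharing a fixed within-day shape
shape = np.sin(np.linspace(0, np.pi, 24)) + 1.0
hist = shape * rng.uniform(900, 1100, size=(28, 1)) / shape.sum()

# prong 1: a daily-total forecast (here simply the historical mean total)
daily_forecast = hist.sum(axis=1).mean()
# prong 2: average hourly proportions of the daily total
props = (hist / hist.sum(axis=1, keepdims=True)).mean(axis=0)
hourly_forecast = daily_forecast * props
print(hourly_forecast.sum(), daily_forecast)  # the hours add back to the total
```

Holiday and day-of-week effects would enter in prong 1 (the daily model), while prong 2 could use separate profiles for different day types.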
46,330 | How to treat holidays when working with time series data? | I've always found it tough to handle the multiple (annual, weekly, daily) seasonality of electricity load/price data using time series methods. I use an approach (very) similar to IrishStat's except that I forecast the daily peak (MW) and total energy (MWh) using machine learning methods (as opposed to time series), a...
46,331 | Is it legal to publish the code of a published algorithm? [closed] | As mentioned by the OP, this is probably not the right place for expert advice on legal issues, but we all have to live with such things as software licenses and try not to get into trouble, so here are a few things that I have learned.
On the NR homepage you can find the license information and information on redist...
46,332 | Is it legal to publish the code of a published algorithm? [closed] | The classic example of a patented algorithm is RSA, by the way. Rules for patents of algorithms are rather nebulous and changing quite a bit. In practice, implementations are okay, but distribution (including commercialization and free release) is where one tends to run afoul of things. What's more, release can be c...
46,333 | Is it legal to publish the code of a published algorithm? [closed] | My understanding is that publication disqualifies an invention for patenting. Thus any algorithm that has been published can be used freely. That does not apply to the code itself! If you learn the algorithm, understand it, teach it to another person without ever letting them see the code in the original, and they i...
46,334 | How to plot results from text mining (e.g. classification or clustering)? | If you're doing classification, this should be fairly straightforward. Just select some aggregate measure of performance (e.g. accuracy), and plot a distribution of that measure for different random initializations of k-means. This gives you some information about how well the algorithm would perform on average.
If yo...
46,335 | How to plot results from text mining (e.g. classification or clustering)? | I use two different techniques for projecting all the data points in an n-dimensional space down onto two dimensions: PCA or MDS (multidimensional scaling). I use PCA if I have an n-dimensional vector that corresponds to each data point. I use MDS if it's more convenient to generate a distance matrix than exact n-dim...
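A minimal sketch of the PCA half of this answer: project the points to 2-D using the SVD of the centered data matrix. The "documents" below are synthetic stand-ins for whatever feature vectors the text mining produced:

```python
import numpy as np

rng = np.random.default_rng(3)
docs = rng.normal(size=(100, 50))   # 100 "documents", 50 features each
docs[:50] += 4.0                    # two loose clusters, so there is structure to see

Xc = docs - docs.mean(axis=0)                  # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T                         # 2-D coordinates for plotting
print(coords.shape)
```

Scatter-plotting coords (colored by cluster label) is then the standard visualization; MDS would instead start from a pairwise distance matrix.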
46,336 | Assessing error of a spatial interpolation algorithm | One option may be to split the original data into two subsets: one that will be used in interpolating values and one that will be used to validate the interpolation results. The error is then estimated by comparing interpolated values at the validation point locations with the actual validation point values. Note that ...
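The holdout idea above can be sketched as follows, using a simple k-nearest inverse-distance-weighted interpolator as a stand-in for whatever method is actually being assessed (everything here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(0, 10, size=(200, 2))      # all measurement locations
z = np.sin(pts[:, 0]) + np.cos(pts[:, 1])    # a smooth field sampled there

train, valid = pts[:150], pts[150:]          # interpolate from 150 points, check on 50
z_tr, z_va = z[:150], z[150:]

def idw(q, xy, val, k=8, power=2.0):
    """Inverse-distance-weighted estimate at q from the k nearest known points."""
    d = np.linalg.norm(xy - q, axis=1)
    near = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[near], 1e-12) ** power
    return np.sum(w * val[near]) / np.sum(w)

pred = np.array([idw(q, train, z_tr) for q in valid])
rmse = np.sqrt(np.mean((pred - z_va) ** 2))
print(rmse)  # holdout error estimate of the interpolator
```

Repeating the split several times (or leave-one-out) gives a less split-dependent error estimate, with the caveat the answer raises about spatial dependence between the subsets.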
46,337 | Assessing error of a spatial interpolation algorithm | I have brief answers to the two points in your question, and encourage you to see the reference below for details.
Most surface estimation algorithms estimate a point cloud P' to approximate the input set P. The point to point distance between the estimated point and corresponding input point may suffice for your e...
46,338 | How to apply a soft coefficient constraint to an OLS regression? | Differentiating the objective function with respect to $b$ and equating to $0$ shows that the solution to the modified equation is obtained by solving
$$(X'X + \lambda)b = X'y + \lambda\tilde{\beta}.$$
If your software won't do that directly, you can get the same results with this trick:
Include a column of 1's in the...
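Reading the $\lambda$ in the equation above as $\lambda I$, the solve is a few lines of linear algebra. A numpy sketch on made-up data (all names illustrative), checking the two limits: $\lambda = 0$ recovers OLS, and a huge $\lambda$ pins the coefficients at $\tilde{\beta}$:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=100)
beta_tilde = np.array([0.0, 0.0, 1.0])    # soft prior guess for the coefficients

def soft_constrained(X, y, beta_tilde, lam):
    """Solve (X'X + lam*I) b = X'y + lam*beta_tilde."""
    p = X.shape[1]
    A = X.T @ X + lam * np.eye(p)
    return np.linalg.solve(A, X.T @ y + lam * beta_tilde)

b_ols = soft_constrained(X, y, beta_tilde, 0.0)   # lam=0: plain OLS
b_big = soft_constrained(X, y, beta_tilde, 1e8)   # huge lam: b -> beta_tilde
print(b_ols, b_big)
```

This is exactly ridge regression, except shrunk toward $\tilde{\beta}$ rather than toward zero.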
46,339 | How to apply a soft coefficient constraint to an OLS regression? | This looks a lot like ridge regression; the lm.ridge function in the MASS package for R does ridge regression, and the ols function in the rms package also does penalized regression. If neither of those does exactly what you want they could be used as a starting point. You could also look at the lasso and lars algori...
46,340 | Fitting a probability distribution to zero inflated data in R | You can use the Vuong test in the pscl package to compare non-nested models. Here is an example
> library(pscl)
> m1 <- zeroinfl(i.vec ~ 1 | 1, dist = "negbin")
> summary(m1)
Call:
zeroinfl(formula = i.vec ~ 1 | 1, dist = "negbin")
Pearson residuals:
    Min      1Q  Median      3Q     Max
-0.3730 -0.3730 -0.3730 -0.2503  7.3544
Count mo...
46,341 | Fitting a probability distribution to zero inflated data in R | I don't think you necessarily need to inflate zeros.
Your data seem quite consistent with a negative binomial:
> library(MASS)
> table(rnegbin(49,mu=3.1,theta=0.075))
 0  1  2  3  5 18 20 21 31 61
36  4  2  1  1  1  1  1  1  1
> table(i.vec)
i.vec
 0  1  2  3  4  6 11 44 63
30  8  5  1  1  1  1  1  1
So let's get...
46,342 | Fitting a probability distribution to zero inflated data in R | I'm not sure you can do much better than just plugging in the empirical measure in that case, without further information on your data (especially since you have very few observations). And in that case the variance of your error should be of the order of the inverse of the number of your observations (via Efron-Stein).
Mayb...
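Following up the negative-binomial suggestion a little further up (rnegbin's mu/theta parameterisation): one quick way to obtain such parameter values is the method of moments, since Var(X) = mu + mu²/theta. A numpy sketch on synthetic gamma-Poisson (i.e. negative binomial) draws, with parameter values chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
mu, theta = 3.1, 0.75   # illustrative parameters, not fitted to the thread's data
# a negative binomial is a gamma-mixed Poisson, which is what rnegbin draws from
lam = rng.gamma(shape=theta, scale=mu / theta, size=200_000)
x = rng.poisson(lam)

m, v = x.mean(), x.var()
mu_hat = m                        # E(X) = mu
theta_hat = m * m / (v - m)       # from Var(X) = mu + mu^2 / theta
print(mu_hat, theta_hat)
```

With only 49 observations, as in the question, these moment estimates would of course be far noisier than in this large synthetic sample.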
46,343 | Are Fisher's linear discriminant and logistic regression related? | Just to elaborate (maybe redundantly) on Frank's answer. LDA is based on assumptions of multivariate normality and equality of covariance matrices of the 2 groups (in population); it is also sensitive to outliers and to unbalanced n's; the predictors should normally be interval scale. All that is not required by LR which i...
46,344 | Are Fisher's linear discriminant and logistic regression related? | For many reasons, classification is not a good goal for most problems; prediction is. Logistic regression (LR) is a more direct probability model to use for prediction, with fewer assumptions. Linear discriminant analysis (LDA) assumes that X has a multivariate normal distribution given Y. Using Bayes' rule to get P...
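The Bayes'-rule link being described can be made concrete: with equal covariances, the discriminant direction is $\Sigma^{-1}(\mu_1 - \mu_0)$, and the implied log-odds of class membership is linear in x with that slope, which is why LDA and logistic regression produce the same functional form. A numpy sketch on made-up Gaussian classes:

```python
import numpy as np

rng = np.random.default_rng(8)
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
cov = np.array([[1.0, 0.3], [0.3, 1.0]])          # shared covariance
X0 = rng.multivariate_normal(mu0, cov, size=4000)
X1 = rng.multivariate_normal(mu1, cov, size=4000)

# pooled covariance and the Fisher/LDA direction Sigma^{-1}(mu1 - mu0)
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))
w = np.linalg.solve(S, m1 - m0)
print(w)  # under the normal model, the log-odds of class 1 is linear with slope w
```

Logistic regression estimates the same linear log-odds directly by maximum likelihood, without assuming normality of X.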
46,345 | Do image recognition efforts always rely on machine learning and statistics? | No, or at least I would say not necessarily explicitly. If you have an image formation model (e.g. derived from the physics of the imaging process), you can pose recognition, reconstruction or detection as an inverse problem using parametric or implicit representations of your "pattern" or object of interest without ma...
46,346 | Do image recognition efforts always rely on machine learning and statistics? | Amusingly, there are also some image recognition research efforts that do not rely on mechanistic identification - either by machine learning, statistics, or other automated methods. Instead, they contract the identification efforts out to human beings - who are fairly good at some forms of recognition, using a service...
46,347 | Do image recognition efforts always rely on machine learning and statistics? | Yes and no. Nothing is ideal. Uncertainties come from everywhere. There is no exact mathematical model for everything, and even when there is one, it can take a long time to work out. The easiest way is to fit probabilistic models. Machine learning is the learning (training of the parameters) using statistics of the given samples. ...
46,348 | How to visualize iterative parameter constraint? | One option would be to use color to show the progression, specifically by highlighting the final result in red - inspired by sparklines, including those on p. 51 of Beautiful Evidence.
Tufte's sparklines:
Translation as a probability distribution:
Tufte might suggest reducing the height to make the posterior angles a...
46,349 | How to visualize iterative parameter constraint? | Personally, I kind of like the facet_grid() from ggplot for showing how elements change over different experiments - especially if there's a visually noticeable progression. Here's an example using some of your numbers:
library(ggplot2)
n = 10000; set.seed(0)
x <- data.frame(theta1 = rnorm(n, 10, 3),
               theta2 = ...
46,350 | How to visualize iterative parameter constraint? | Another possibility is animating the graphics building one on top of the other. This is really just for shits and giggles though, not sure how it would fly for a stats heavy crowd or on paper...
library(ggplot2)
library(animation)
n = 10000; set.seed(0)
x <- data.frame(theta1 = rnorm(n, 10, 3),
               theta2 = rnorm(n, 20...
46,351 | What distribution would lead to this highly peaked and skewed density plot? | It looks rather like an exponential distribution (assuming that the bit below 0 is an artifact of smoothing in the density estimation).
I would look at a qqplot. In R, if x contains your data:
n <- length(x)
qqplot(x, qexp( (1:n - 0.5)/n ) )
Note that in the use of density() for the non-negative case, it is best to...
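The qqplot(x, qexp((1:n - 0.5)/n)) check above translates directly into other languages: compare the sorted data against the exponential quantiles -log(1 - p). A numpy sketch on synthetic exponential data (the scale is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(scale=5.0, size=2000)   # stand-in for the observed data

n = len(x)
p = (np.arange(1, n + 1) - 0.5) / n         # plotting positions, as in the R snippet
theo = -np.log(1.0 - p)                     # unit-exponential quantiles (qexp in R)
r = np.corrcoef(np.sort(x), theo)[0, 1]     # Q-Q correlation
print(r)  # near 1 when the data are exponential
```

Plotting np.sort(x) against theo gives the Q-Q plot itself; a straight line through the origin supports the exponential hypothesis, with the slope estimating the scale.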
46,352 | What distribution would lead to this highly peaked and skewed density plot? | It's not usually possible to identify a distribution from looking at a histogram like this.
As a start, plot the density on a log scale:
The tail of this density (from around 40 onward) is close to linear, showing it is close to exponential. That's part of the characterization. To go further, compare the density to ... | What distribution would lead to this highly peaked and skewed density plot? | It's not usually possible to identify a distribution from looking at a histogram like this.
As a start, plot the density on a log scale:
The tail of this density (from around 40 onward) is close to l | What distribution would lead to this highly peaked and skewed density plot?
It's not usually possible to identify a distribution from looking at a histogram like this.
As a start, plot the density on a log scale:
The tail of this density (from around 40 onward) is close to linear, showing it is close to exponential. ... | What distribution would lead to this highly peaked and skewed density plot?
It's not usually possible to identify a distribution from looking at a histogram like this.
As a start, plot the density on a log scale:
The tail of this density (from around 40 onward) is close to l |
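The log-scale point can be checked numerically: for an exponential tail, the log of histogram bin counts falls off linearly with slope roughly minus the rate. A small Python sketch (simulated data; the bin choices are arbitrary):

```python
import math
import random

random.seed(2)
rate = 0.5
data = [random.expovariate(rate) for _ in range(20000)]

# Unit-width histogram bins over the tail; for an exponential density the
# log of the bin counts falls off linearly with slope approximately -rate.
los = list(range(2, 12))                       # bins [2,3), ..., [11,12)
counts = [sum(1 for v in data if lo <= v < lo + 1) for lo in los]
xs = [lo + 0.5 for lo in los]                  # bin centres
ys = [math.log(c) for c in counts]

# Least-squares slope of log(count) against bin centre.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))     # roughly -0.5 here
```

A markedly curved log-density instead of a straight tail would point away from the exponential family.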
46,353 | What distribution would lead to this highly peaked and skewed density plot? | Assuming, as others have, that the small blip below zero is an artifact of a density smoothing process, rather than a small amount of negative data, your distribution looks like an exponential distribution.
I'd start with either an exponential distribution, or the slightly more flexible Weibull distribution, and see if ... | What distribution would lead to this highly peaked and skewed density plot? | Assuming, as others have, that the small blip below zero is an artifact of a density smoothing process, rather than a small amount of negative data, your distribution looks like an exponential distrib
Assuming, as others have, that the small blip below zero is an artifact of a density smoothing process, rather than a small amount of negative data, your distribution looks like an exponential distribution.
I'd start with either an exponential d... | What distribution would lead to this highly peaked and skewed density plot?
Assuming, as others have, that the small blip below zero is an artifact of a density smoothing process, rather than a small amount of negative data, your distribution looks like an exponential distrib |
46,354 | What distribution would lead to this highly peaked and skewed density plot? | This is a long-tail distribution.
GB2 (Generalized beta of second kind) with four parameters has a good flexibility for this kind of data.
It's in package GB2. | What distribution would lead to this highly peaked and skewed density plot? | This is a long-tail distribution.
GB2 (Generalized beta of second kind) with four parameters has a good flexibility for this kind of data.
It's in package GB2. | What distribution would lead to this highly peaked and skewed density plot?
This is a long-tail distribution.
GB2 (Generalized beta of second kind) with four parameters has a good flexibility for this kind of data.
It's in package GB2. | What distribution would lead to this highly peaked and skewed density plot?
This is a long-tail distribution.
GB2 (Generalized beta of second kind) with four parameters has a good flexibility for this kind of data.
It's in package GB2. |
46,355 | Visualizing k-nearest neighbour? | If you want to visualize KNN classification, there's a good example here taken from the book An Introduction to Statistical Learning, which can be downloaded freely from their webpage.
They also have several neat examples for KNN regression, but I have not found the code for those.
More to the point, the package you m... | Visualizing k-nearest neighbour? | If you want to visualize KNN classification, there's a good example here taken from the book An Introduction to Statistical Learning, which can be downloaded freely from their webpage.
They also have | Visualizing k-nearest neighbour?
If you want to visualize KNN classification, there's a good example here taken from the book An Introduction to Statistical Learning, which can be downloaded freely from their webpage.
They also have several neat examples for KNN regression, but I have not found the code for those.
Mor... | Visualizing k-nearest neighbour?
If you want to visualize KNN classification, there's a good example here taken from the book An Introduction to Statistical Learning, which can be downloaded freely from their webpage.
They also have |
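For readers without the book's code: the usual KNN picture is just the classifier evaluated over a grid of points. A minimal pure-Python sketch (no plotting; the grid of predicted labels is what such a plot would colour in):

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))[:k]
    vote = Counter(labels[i] for i in nearest)
    return vote.most_common(1)[0][0]

# Two well-separated toy clusters; predictions over a coarse grid sketch
# the decision regions.
train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]
grid = [(x, y) for x in range(7) for y in range(7)]
regions = {p: knn_predict(train, labels, p) for p in grid}
```

Feeding `regions` to any 2-D plotting library, coloured by label, reproduces the familiar KNN decision-boundary figure.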
46,356 | Visualizing k-nearest neighbour? | kNN is just a simple interpolation of feature space, so its visualization would in fact be equivalent to just drawing a train set in some more or less funky manner, and unless the problem is simple this would be rather hard to decipher.
You may do this by counting the distances between train objects the way you did i... | Visualizing k-nearest neighbour? | kNN is just a simple interpolation of feature space, so its visualization would be in fact equivalent to just drawing a train set in some less or more funky manner, and unless the problem is simple th | Visualizing k-nearest neighbour?
kNN is just a simple interpolation of feature space, so its visualization would in fact be equivalent to just drawing a train set in some more or less funky manner, and unless the problem is simple this would be rather hard to decipher.
You may do this by counting the distances betwee... | Visualizing k-nearest neighbour?
kNN is just a simple interpolation of feature space, so its visualization would be in fact equivalent to just drawing a train set in some less or more funky manner, and unless the problem is simple th |
46,357 | Anyone know of a simple dendrogram visualizer? | TreeView -- it is not a statistical tool, but it is very light and I have a great sentiment to it; and it is easy to make output to Newick format, which TV eats without problems.
A more powerful solution is to use R, but here you would have to invest some time in making the conversion to the dendrogram object (basically list... | Anyone know of a simple dendrogram visualizer? | TreeView -- it is not a statistical tool, but it is very light and I have a great sentiment to it; and it is easy to make output to Newick format, which TV eats without problems.
More powerful solutio | Anyone know of a simple dendrogram visualizer?
TreeView -- it is not a statistical tool, but it is very light and I have a great sentiment to it; and it is easy to make output to Newick format, which TV eats without problems.
A more powerful solution is to use R, but here you would have to invest some time in making conv... | Anyone know of a simple dendrogram visualizer?
TreeView -- it is not a statistical tool, but it is very light and I have a great sentiment to it; and it is easy to make output to Newick format, which TV eats without problems.
More powerful solutio |
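Since TreeView (and most phylogenetics viewers) read Newick, the only work is serialising your cluster tree. A toy Python serialiser for a nested-tuple tree (topology only, no branch lengths; real dendrograms would also carry merge heights):

```python
def to_newick(node):
    """Serialise a binary tree to Newick. Leaves are strings; internal
    nodes are (left, right) pairs. Branch lengths are omitted."""
    if isinstance(node, str):
        return node
    left, right = node
    return "(%s,%s)" % (to_newick(left), to_newick(right))

tree = (("A", "B"), ("C", ("D", "E")))
newick = to_newick(tree) + ";"   # "((A,B),(C,(D,E)));"
```

Writing `newick` to a `.nwk` file gives something TreeView-style tools can open; adding `:length` suffixes per node would carry the merge heights too.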
46,358 | Anyone know of a simple dendrogram visualizer? | You can use PhyFi web server for generating dendrograms from Newick files.
Sample output using your data from PhyFi: | Anyone know of a simple dendrogram visualizer? | You can use PhyFi web server for generating dendrograms from Newick files.
Sample output using your data from PhyFi: | Anyone know of a simple dendrogram visualizer?
You can use PhyFi web server for generating dendrograms from Newick files.
Sample output using your data from PhyFi: | Anyone know of a simple dendrogram visualizer?
You can use PhyFi web server for generating dendrograms from Newick files.
Sample output using your data from PhyFi: |
46,359 | Anyone know of a simple dendrogram visualizer? | Archaeopteryx is a Java application that you can use standalone or embed in an application. Dendroscope is also pretty good. Both can read files in Newick format, and provide many ways of manipulating the display. | Anyone know of a simple dendrogram visualizer? | Archaeopteryx is a Java application that you can use standalone or embed in an application. Dendroscope is also pretty good. Both can read files in Newick format, and provide many ways of manipulati | Anyone know of a simple dendrogram visualizer?
Archaeopteryx is a Java application that you can use standalone or embed in an application. Dendroscope is also pretty good. Both can read files in Newick format, and provide many ways of manipulating the display. | Anyone know of a simple dendrogram visualizer?
Archaeopteryx is a Java application that you can use standalone or embed in an application. Dendroscope is also pretty good. Both can read files in Newick format, and provide many ways of manipulati |
46,360 | Anyone know of a simple dendrogram visualizer? | While it's not a tool per se, ascii art is a fairly safe option actually, and not as hard as it seems at first. It's not pretty, but it gets the point across. | Anyone know of a simple dendrogram visualizer? | While it's not a tool per se, ascii art is a fairly safe option actually, and not as hard as it seems at first. It's not pretty, but it gets the point across. | Anyone know of a simple dendrogram visualizer?
While it's not a tool per se, ascii art is a fairly safe option actually, and not as hard as it seems at first. It's not pretty, but it gets the point across. | Anyone know of a simple dendrogram visualizer?
While it's not a tool per se, ascii art is a fairly safe option actually, and not as hard as it seems at first. It's not pretty, but it gets the point across.
46,361 | Possible identifiability issue in hierarchical model | Your notation is a little strange (what do you mean by "diffuse"?), but I suspect that your prior on $\sigma^2_\theta$ is leading to an improper or nearly improper posterior, for one thing. See here for a detailed exposition of just this model and appropriate prior specification.
In short, yes, this model can be very u... | Possible identifiability issue in hierarchical model | Your notation is a little strange (what do you mean by "diffuse"?), but I suspect that your prior on $\sigma^2_\theta$ is leading to an improper or nearly improper posterior, for one thing. See here f | Possible identifiability issue in hierarchical model
Your notation is a little strange (what do you mean by "diffuse"?), but I suspect that your prior on $\sigma^2_\theta$ is leading to an improper or nearly improper posterior, for one thing. See here for a detailed exposition of just this model and appropriate prior s... | Possible identifiability issue in hierarchical model
Your notation is a little strange (what do you mean by "diffuse"?), but I suspect that your prior on $\sigma^2_\theta$ is leading to an improper or nearly improper posterior, for one thing. See here f |
46,362 | Possible identifiability issue in hierarchical model | Because you are dealing with normal-normal model, its not to hard to work out analytically whats going on. Now the standard argument for "diffuse" priors is usually $\frac{1}{\sigma}$ for variance parameters ("jeffreys" prior). But you will be able to see that if you were to use jeffreys prior for both parameters, yo... | Possible identifiability issue in hierarchical model | Because you are dealing with normal-normal model, its not to hard to work out analytically whats going on. Now the standard argument for "diffuse" priors is usually $\frac{1}{\sigma}$ for variance pa | Possible identifiability issue in hierarchical model
Because you are dealing with a normal-normal model, it's not too hard to work out analytically what's going on. Now the standard argument for "diffuse" priors is usually $\frac{1}{\sigma}$ for variance parameters ("Jeffreys" prior). But you will be able to see that if y... | Possible identifiability issue in hierarchical model
Because you are dealing with a normal-normal model, it's not too hard to work out analytically what's going on. Now the standard argument for "diffuse" priors is usually $\frac{1}{\sigma}$ for variance pa
46,363 | Predicting forecasts for next 12 months using Box-Jenkins | If you are at all familiar with R (if you're building time series models, you should be), check out the forecast package. It's designed to choose parameters for Arima as well as exponential smoothing models, and uses a solid methodology to do so. It will probably get you a lot farther than what you are building in exc... | Predicting forecasts for next 12 months using Box-Jenkins | If you are at all familiar with R (if you're building time series models, you should be), check out the forecast package. It's designed to choose parameters for Arima as well as exponential smoothing | Predicting forecasts for next 12 months using Box-Jenkins
If you are at all familiar with R (if you're building time series models, you should be), check out the forecast package. It's designed to choose parameters for Arima as well as exponential smoothing models, and uses a solid methodology to do so. It will probab... | Predicting forecasts for next 12 months using Box-Jenkins
If you are at all familiar with R (if you're building time series models, you should be), check out the forecast package. It's designed to choose parameters for Arima as well as exponential smoothing |
46,364 | Predicting forecasts for next 12 months using Box-Jenkins | The time series are usually decomposed into 3 parts, trend, seasonality and irregular. (The link gives 4 parts, but cyclical and seasonality are usually lumped together). Strictly speaking ARIMA type of models are only used for irregular part and by their design these model do not incorporate any trend (I am assuming t... | Predicting forecasts for next 12 months using Box-Jenkins | The time series are usually decomposed into 3 parts, trend, seasonality and irregular. (The link gives 4 parts, but cyclical and seasonality are usually lumped together). Strictly speaking ARIMA type | Predicting forecasts for next 12 months using Box-Jenkins
The time series are usually decomposed into 3 parts, trend, seasonality and irregular. (The link gives 4 parts, but cyclical and seasonality are usually lumped together). Strictly speaking ARIMA type of models are only used for irregular part and by their design... | Predicting forecasts for next 12 months using Box-Jenkins
The time series are usually decomposed into 3 parts, trend, seasonality and irregular. (The link gives 4 parts, but cyclical and seasonality are usually lumped together). Strictly speaking ARIMA type |
46,365 | Predicting forecasts for next 12 months using Box-Jenkins | Your approach suggests initially adjusting in a deterministic manner the impact of seasonality. This approach may or may not be applicable as the impact of seasonality may be auto-projective in form. The best way to answer this question is to evaluate alternative final models for adequacy in terms of separating the obs... | Predicting forecasts for next 12 months using Box-Jenkins | Your approach suggests initially adjusting in a deterministic manner the impact of seasonality. This approach may or may not be applicable as the impact of seasonality may be auto-projective in form. | Predicting forecasts for next 12 months using Box-Jenkins
Your approach suggests initially adjusting in a deterministic manner the impact of seasonality. This approach may or may not be applicable as the impact of seasonality may be auto-projective in form. The best way to answer this question is to evaluate alternativ... | Predicting forecasts for next 12 months using Box-Jenkins
Your approach suggests initially adjusting in a deterministic manner the impact of seasonality. This approach may or may not be applicable as the impact of seasonality may be auto-projective in form. |
46,366 | Predicting forecasts for next 12 months using Box-Jenkins | As mentioned, use R, not Excel.
Here is my understanding of the process you are asking for.
Say you have a data set with a linear trend. Let's assume the trend is Y = 3t+1, and also assume you have 15 data points.
Use that model and find the residuals from that. Fit your time series model to these residuals. To forecast, u... | Predicting forecasts for next 12 months using Box-Jenkins | As mentioned, Use R, not excel.
My understanding of this process you are asking for.
Say you have a data set with a linear trend. Let's assume that trend to be Y = 3t+1, also assume you have 15 data | Predicting forecasts for next 12 months using Box-Jenkins
As mentioned, Use R, not excel.
My understanding of this process you are asking for.
Say you have a data set with a linear trend. Let's assume that trend to be Y = 3t+1, also assume you have 15 data points.
Use that model and find the residuals from that. Fit ... | Predicting forecasts for next 12 months using Box-Jenkins
As mentioned, Use R, not excel.
My understanding of this process you are asking for.
Say you have a data set with a linear trend. Let's assume that trend to be Y = 3t+1, also assume you have 15 data |
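The procedure described (fit a trend, model the residuals, add the two forecasts back together) can be sketched in plain Python. This is only illustrative: real work should use R's forecast package or statsmodels, and the hand-rolled AR(1) below stands in for a properly identified ARIMA model.

```python
import random

random.seed(3)

# Simulated series: linear trend 3t + 1 plus AR(1) noise.
n = 60
noise, e = [], 0.0
for _ in range(n):
    e = 0.6 * e + random.gauss(0, 1)
    noise.append(e)
y = [3 * t + 1 + noise[t] for t in range(n)]
t = list(range(n))

# Step 1: fit the trend by ordinary least squares.
mt, my = sum(t) / n, sum(y) / n
b = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
     / sum((ti - mt) ** 2 for ti in t))
a = my - b * mt
resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]

# Step 2: fit an AR(1) to the residuals (lag-1 regression through the origin).
phi = (sum(resid[i] * resid[i - 1] for i in range(1, n))
       / sum(r * r for r in resid[:-1]))

# Step 3: forecast = trend forecast + decaying AR(1) residual forecast.
h = 12
r_next = resid[-1]
forecast = []
for step in range(1, h + 1):
    r_next *= phi
    forecast.append(a + b * (n - 1 + step) + r_next)
```

The residual forecast decays geometrically toward zero, so far-ahead forecasts revert to the fitted trend line.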
46,367 | Deriving mathematical model of pLSA | I am assuming you want to derive:
\begin{align*}
P(w,d) &= \sum_{c} P(c) P(d|c) P(w|c) \\
&= P(d) \sum_{c} P(c|d) P(w|c)
\end{align*}
Further, this is similar to Probabilistic latent semantic indexing (cf. Blei, Jordan, and Ng (2003) Latent Dirichlet Allocation. JMLR section 4.3). PLSI posits that a document label $d$ and... | Deriving mathematical model of pLSA | I am assuming you want to derive:
\begin{align*}
P(w,d) &= \sum_{c} P(c) P(d|c) P(w|c) \\
&= P(d) \sum_{c} P(c|d) P(w|c)
\end{align*}
Further, this is similar to Probabilistic latent semantic indexing (c | Deriving mathematical model of pLSA
I am assuming you want to derive:
\begin{align*}
P(w,d) &= \sum_{c} P(c) P(d|c) P(w|c) \\
&= P(d) \sum_{c} P(c|d) P(w|c)
\end{align*}
Further, this is similar to Probabilistic latent semantic indexing (cf. Blei, Jordan, and Ng (2003) Latent Dirichlet Allocation. JMLR section 4.3). PLSI ... | Deriving mathematical model of pLSA
I am assuming you want to derive:
\begin{align*}
P(w,d) &= \sum_{c} P(c) P(d|c) P(w|c) \\
&= P(d) \sum_{c} P(c|d) P(w|c)
\end{align*}
Further, this is similar to Probabilistic latent semantic indexing (c |
46,368 | Deriving mathematical model of pLSA | The line $P(c|d)P(c) = P(d|c)P(c)$ (your eq 2) should be $P(c|d)P(d) = P(d|c)P(c)$.
I'm not sure why you don't think Bayes theorem and basic probability rules are useful:
Eq 1 is Bayes theorem (ie recognizing that $P(d|c)P(c) = P(c,d)$ and plugging in to the definition of conditional probability)
Eq 2 follows immediat... | Deriving mathematical model of pLSA | The line $P(c|d)P(c) = P(d|c)P(c)$ (your eq 2) should be $P(c|d)P(d) = P(d|c)P(c)$.
I'm not sure why you don't think Bayes theorem and basic probability rules are useful:
Eq 1 is Bayes theorem (ie re | Deriving mathematical model of pLSA
The line $P(c|d)P(c) = P(d|c)P(c)$ (your eq 2) should be $P(c|d)P(d) = P(d|c)P(c)$.
I'm not sure why you don't think Bayes theorem and basic probability rules are useful:
Eq 1 is Bayes theorem (ie recognizing that $P(d|c)P(c) = P(c,d)$ and plugging in to the definition of conditiona... | Deriving mathematical model of pLSA
The line $P(c|d)P(c) = P(d|c)P(c)$ (your eq 2) should be $P(c|d)P(d) = P(d|c)P(c)$.
I'm not sure why you don't think Bayes theorem and basic probability rules are useful:
Eq 1 is Bayes theorem (ie re |
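The equivalence of the two pLSA parameterisations is easy to verify numerically; here is a small Python check with random, made-up probability tables:

```python
import random

random.seed(4)
C, D, W = 3, 4, 5    # number of topics, documents, words (arbitrary)

def normalise(v):
    s = sum(v)
    return [x / s for x in v]

p_c = normalise([random.random() for _ in range(C)])
p_d_given_c = [normalise([random.random() for _ in range(D)]) for _ in range(C)]
p_w_given_c = [normalise([random.random() for _ in range(W)]) for _ in range(C)]

d, w = 1, 2

# Symmetric form: P(w,d) = sum_c P(c) P(d|c) P(w|c)
lhs = sum(p_c[c] * p_d_given_c[c][d] * p_w_given_c[c][w] for c in range(C))

# Asymmetric form: P(w,d) = P(d) sum_c P(c|d) P(w|c), where P(d) and
# P(c|d) are obtained from Bayes' rule.
p_d = sum(p_c[c] * p_d_given_c[c][d] for c in range(C))
p_c_given_d = [p_c[c] * p_d_given_c[c][d] / p_d for c in range(C)]
rhs = p_d * sum(p_c_given_d[c] * p_w_given_c[c][w] for c in range(C))
```

The two forms agree to floating-point precision for any valid tables, which is exactly the Bayes-rule step in the derivation.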
46,369 | Kruskal-Wallis test data considerations | With small, and possibly unequal group sizes, I'd go with chl's and onestop's suggestion and do a Monte-Carlo permutation test. For the permutation test to be valid, you need exchangeability under $H_{0}$. If all distributions have the same shape (and are therefore identical under $H_{0}$), this is true.
Here's a first... | Kruskal-Wallis test data considerations | With small, and possibly unequal group sizes, I'd go with chl's and onestop's suggestion and do a Monte-Carlo permutation test. For the permutation test to be valid, you need exchangeability under $H_ | Kruskal-Wallis test data considerations
With small, and possibly unequal group sizes, I'd go with chl's and onestop's suggestion and do a Monte-Carlo permutation test. For the permutation test to be valid, you need exchangeability under $H_{0}$. If all distributions have the same shape (and are therefore identical unde... | Kruskal-Wallis test data considerations
With small, and possibly unequal group sizes, I'd go with chl's and onestop's suggestion and do a Monte-Carlo permutation test. For the permutation test to be valid, you need exchangeability under $H_ |
46,370 | Kruskal-Wallis test data considerations | You need not check homoscedasticity. Kruskal and Wallis stated in their original paper that the “test may be fairly insensitive to differences in variability”.
If there is no exact test available, you can use bootstrap. | Kruskal-Wallis test data considerations | You need not check homoscedasticity. Kruskal and Wallis stated in their original paper that the “test may be fairly insensitive to differences in variability”.
If there is no exact test available, you | Kruskal-Wallis test data considerations
You need not check homoscedasticity. Kruskal and Wallis stated in their original paper that the “test may be fairly insensitive to differences in variability”.
If there is no exact test available, you can use bootstrap. | Kruskal-Wallis test data considerations
You need not check homoscedasticity. Kruskal and Wallis stated in their original paper that the “test may be fairly insensitive to differences in variability”.
If there is no exact test available, you |
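A Monte-Carlo permutation version of the Kruskal-Wallis test is short to write. The sketch below (pure Python, no tie correction, toy data) permutes group labels and compares the H statistic:

```python
import random

def kruskal_h(groups):
    """Kruskal-Wallis H statistic; assumes all values are distinct (no ties)."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    N = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (N * (N + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (N + 1)

def perm_pvalue(groups, n_perm=2000, seed=0):
    """Monte-Carlo permutation p-value for the observed H statistic."""
    rng = random.Random(seed)
    sizes = [len(g) for g in groups]
    obs = kruskal_h(groups)
    pooled = [v for g in groups for v in g]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        shuffled, i = [], 0
        for s in sizes:
            shuffled.append(pooled[i:i + s])
            i += s
        if kruskal_h(shuffled) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one rule avoids p = 0

a = [0.7, 1.1, 1.4, 1.9, 2.3]   # toy groups with an obvious shift in b
b = [4.8, 5.2, 5.9, 6.1]
c = [3.0, 3.5, 4.0]
p = perm_pvalue([a, b, c])
```

Because only group labels are shuffled, this is valid under the exchangeability condition stated above, without relying on the chi-square approximation.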
46,371 | Discrepancy measures for transition matrices | As long as your matrices represent conditional probabilities, I think that using a general matrix norm is a bit artificial. Using some sort of geodesic distance on the set of transition matrices might be more relevant, but I clearly prefer to come back to probabilities.
I assume you want to compare $Q=(Q_{ij})$ and $P=(P_{ij}... | Discrepancy measures for transition matrices | As long as your matrix represent conditional probability I think that using a general matrix norm is a bit artificial. Using some sort of geodesic distance on the set of transition matrix might be mor | Discrepancy measures for transition matrices
As long as your matrices represent conditional probabilities, I think that using a general matrix norm is a bit artificial. Using some sort of geodesic distance on the set of transition matrices might be more relevant, but I clearly prefer to come back to probabilities.
I assume yo... | Discrepancy measures for transition matrices
As long as your matrix represent conditional probability I think that using a general matrix norm is a bit artificial. Using some sort of geodesic distance on the set of transition matrix might be mor |
46,372 | Discrepancy measures for transition matrices | Why does one want the measure of discrepancy to be a true metric? There is a huge literature on axiomatic characterizations of I-divergence as a measure of distance. It is neither symmetric nor does it satisfy the triangle inequality.
I hope by 'transition matrix' you mean 'probability transition matrix'. Never mind, as long as th... | Discrepancy measures for transition matrices | Why does one want the measure of discrepancy to be a true metric? There is a huge literature on axiomatic characterizations of I-divergence as measure of distance. It is neither symmetric nor satisfie | Discrepancy measures for transition matrices
Why does one want the measure of discrepancy to be a true metric? There is a huge literature on axiomatic characterizations of I-divergence as a measure of distance. It is neither symmetric nor does it satisfy the triangle inequality.
I hope by 'transition matrix' you mean 'probability ... | Discrepancy measures for transition matrices
Why does one want the measure of discrepancy to be a true metric? There is a huge literature on axiomatic characterizations of I-divergence as measure of distance. It is neither symmetric nor satisfie |
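Coming back to probabilities, as suggested: one common choice is the row-wise Kullback-Leibler divergence weighted by the stationary distribution of the first chain. A Python sketch (assumes both chains are ergodic and that Q has no zero entries where P has mass):

```python
import math

def row_kl(p_row, q_row):
    """KL divergence between two discrete rows; needs q > 0 wherever p > 0."""
    return sum(p * math.log(p / q) for p, q in zip(p_row, q_row) if p > 0)

def stationary(P, iters=500):
    """Stationary distribution by power iteration (assumes an ergodic chain)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def chain_divergence(P, Q):
    """Divergence rate between chains: sum_i pi_P(i) * KL(P[i] || Q[i])."""
    return sum(w * row_kl(p, q) for w, p, q in zip(stationary(P), P, Q))

P = [[0.9, 0.1], [0.2, 0.8]]
Q = [[0.7, 0.3], [0.4, 0.6]]
d = chain_divergence(P, Q)   # 0 iff the chains agree row-by-row on visited states
```

Note this is not a true metric (it is asymmetric), which is exactly the trade-off discussed above.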
46,373 | How to choose number of dummy variables when encoding several categorical variables? | You would make k-1 dummy variables for each of your categorical variables. The textbook argument holds; if you were to make k dummies for any of your variables, you would have a collinearity. You can think of the k-1 dummies as being contrasts between the effects of their corresponding levels, and the level whose dummy... | How to choose number of dummy variables when encoding several categorical variables? | You would make k-1 dummy variables for each of your categorical variables. The textbook argument holds; if you were to make k dummies for any of your variables, you would have a collinearity. You can | How to choose number of dummy variables when encoding several categorical variables?
You would make k-1 dummy variables for each of your categorical variables. The textbook argument holds; if you were to make k dummies for any of your variables, you would have a collinearity. You can think of the k-1 dummies as being c... | How to choose number of dummy variables when encoding several categorical variables?
You would make k-1 dummy variables for each of your categorical variables. The textbook argument holds; if you were to make k dummies for any of your variables, you would have a collinearity. You can |
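As a concrete illustration of k-1 coding, here is a small Python helper (the names are mine; `pandas.get_dummies(..., drop_first=True)` does the same thing in practice):

```python
def dummy_encode(values, levels=None):
    """k-1 dummy coding: the first level is the reference (row of zeros)."""
    if levels is None:
        levels = sorted(set(values))
    rest = levels[1:]                 # one column per non-reference level
    return [[1 if v == lvl else 0 for lvl in rest] for v in values], rest

ages = ["18-24", "25-34", "35-44", "25-34"]
rows, columns = dummy_encode(ages)
# columns == ["25-34", "35-44"]; "18-24" is the reference category
```

Each categorical predictor gets its own k-1 columns like this; keeping all k columns per variable would reproduce the collinearity described above.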
46,374 | How to choose number of dummy variables when encoding several categorical variables? | In building logistic regression, you have to bear in mind that the dependent value must assume exactly two values on the cases being processed. In your question, you did not provide enough information on your dependent variable or if you are using binary or multi logistic regression. Nevertheless, if you are using Gend...
In building logistic regression, you have to bear in mind that the dependent value must assume exactly two values on the cases being processed. In your question , you did not provide enough information on your dependent variable or if ... | How to choose number of dummy variables when encoding several categorical variables?
In building logistic regression, you have to bear in mind that the dependent value must assume exactly two values on the cases being processed. In your question, you did not provide enough informatio
46,375 | Estimate the Kullback-Leibler divergence | Mathematica, using symbolic integration (not an approximation!), reports a value that equals 1.6534640367102553437 to 20 decimal digits.
red = GammaDistribution[20/17, 17/20];
gray = InverseGaussianDistribution[1, 832/1000];
kl[pF_, qF_] := Module[{p, q},
p[x_] := PDF[pF, x];
q[x_] := PDF[qF, x];
Integrate[p[x... | Estimate the Kullback-Leibler divergence | Mathematica, using symbolic integration (not an approximation!), reports a value that equals 1.6534640367102553437 to 20 decimal digits.
red = GammaDistribution[20/17, 17/20];
gray = InverseGaussianDi | Estimate the Kullback-Leibler divergence
Mathematica, using symbolic integration (not an approximation!), reports a value that equals 1.6534640367102553437 to 20 decimal digits.
red = GammaDistribution[20/17, 17/20];
gray = InverseGaussianDistribution[1, 832/1000];
kl[pF_, qF_] := Module[{p, q},
p[x_] := PDF[pF, x];... | Estimate the Kullback-Leibler divergence
Mathematica, using symbolic integration (not an approximation!), reports a value that equals 1.6534640367102553437 to 20 decimal digits.
red = GammaDistribution[20/17, 17/20];
gray = InverseGaussianDi |
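For readers without Mathematica, the same divergence can be estimated by ordinary numerical integration. A Python sketch (log-pdfs written out by hand; trapezoid rule after the substitution x = exp(u), which tames the integrable singularity at zero):

```python
import math

SHAPE, SCALE = 20 / 17, 17 / 20      # Gamma parameters from the answer above
MU, LAM = 1.0, 0.832                 # Inverse Gaussian parameters

def log_gamma_pdf(x):
    return ((SHAPE - 1) * math.log(x) - x / SCALE
            - math.lgamma(SHAPE) - SHAPE * math.log(SCALE))

def log_invgauss_pdf(x):
    return (0.5 * math.log(LAM / (2 * math.pi)) - 1.5 * math.log(x)
            - LAM * (x - MU) ** 2 / (2 * MU ** 2 * x))

def kl_numeric(lo=-80.0, hi=6.0, steps=20000):
    """Trapezoid estimate of KL(Gamma || IG) after substituting x = exp(u)."""
    du = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = math.exp(lo + i * du)
        lp = log_gamma_pdf(x)
        f = math.exp(lp) * (lp - log_invgauss_pdf(x)) * x   # * x: Jacobian
        total += f / 2 if i in (0, steps) else f
    return total * du

kl = kl_numeric()   # should land near the symbolic value quoted above
```

Working with log-densities avoids underflow in the inverse Gaussian's extremely thin left tail, which is where most of the divergence comes from here.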
46,376 | Is it possible to do meta analysis of only two studies | Yes, it is possible, but whether it is appropriate depends on the intent of your analysis.
Meta-analysis is a method of combining information from different sources, so it is technically possible to do a meta-analysis of only two studies - even of multiple results within a single paper. The key concern is not if you ca... | Is it possible to do meta analysis of only two studies | Yes, it is possible, but whether it is appropriate depends on the intent of your analysis.
Meta-analysis is a method of combining information from different sources, so it is technically possible to d | Is it possible to do meta analysis of only two studies
Yes, it is possible, but whether it is appropriate depends on the intent of your analysis.
Meta-analysis is a method of combining information from different sources, so it is technically possible to do a meta-analysis of only two studies - even of multiple results ... | Is it possible to do meta analysis of only two studies
Yes, it is possible, but whether it is appropriate depends on the intent of your analysis.
Meta-analysis is a method of combining information from different sources, so it is technically possible to d |
46,377 | Is it possible to do meta analysis of only two studies | If you compute a likelihood ratio for the effect of interest in each study, you can simply multiply them together to obtain the aggregate weight of evidence for the effect. | Is it possible to do meta analysis of only two studies | If you compute a likelihood ratio for the effect of interest in each study, you can simply multiply them together to obtain the aggregate weight of evidence for the effect. | Is it possible to do meta analysis of only two studies
If you compute a likelihood ratio for the effect of interest in each study, you can simply multiply them together to obtain the aggregate weight of evidence for the effect. | Is it possible to do meta analysis of only two studies
If you compute a likelihood ratio for the effect of interest in each study, you can simply multiply them together to obtain the aggregate weight of evidence for the effect. |
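Multiplying likelihood ratios is one route; the more conventional one is inverse-variance (fixed-effect) pooling, which works the same for two studies as for twenty. A Python sketch with made-up numbers:

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance weighted pooled estimate and its standard error."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# Two hypothetical studies reporting the same effect as a log odds ratio.
est, se = fixed_effect_meta([0.40, 0.55], [0.20, 0.25])
# The pooled standard error is smaller than either study's alone.
```

With only two studies a random-effects model is hard to justify, since the between-study variance is essentially inestimable; the fixed-effect pooling above is the defensible default.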
46,378 | Threshold models and flu epidemic recognition | The CDC uses the epidemic threshold of
1.645 standard deviations above the baseline for that time of year.
The definition may have multiple sorts of detection or mortality endpoints. (The one you are pointing to is pneumonia and influenza mortality. The lower black curve is not really a series, but rather a modeled ... | Threshold models and flu epidemic recognition | The CDC uses the epidemic threshold of
1.645 standard deviations above the baseline for that time of year.
The definition may have multiple sorts of detection or mortality endpoints. (The one you a | Threshold models and flu epidemic recognition
The CDC uses the epidemic threshold of
1.645 standard deviations above the baseline for that time of year.
The definition may have multiple sorts of detection or mortality endpoints. (The one you are pointing to is pneumonia and influenza mortality. The lower black curve... | Threshold models and flu epidemic recognition
The CDC uses the epidemic threshold of
1.645 standard deviations above the baseline for that time of year.
The definition may have multiple sorts of detection or mortality endpoints. (The one you a |
46,379 | Threshold models and flu epidemic recognition | A quick rundown of how these things go. What you're seeing is called 'Serfling Regression'. It is a linear regression with at least one linear term for a time trend, and several harmonics added to it.
What happens is, you have a linear model with a Poisson (or negative binomial) distribution in roughly the following ... | Threshold models and flu epidemic recognition | A quick rundown of how these things go. What you're seeing is called 'Serfling Regression'. What it is is a linear regression with at least one linear term for a time trend, and several harmonics to i | Threshold models and flu epidemic recognition
A quick rundown of how these things go. What you're seeing is called 'Serfling Regression'. It is a linear regression with at least one linear term for a time trend, and several harmonics added to it.
What happens is, you have a linear model with a Poisson (or negative bi... | Threshold models and flu epidemic recognition
A quick rundown of how these things go. What you're seeing is called 'Serfling Regression'. What it is is a linear regression with at least one linear term for a time trend, and several harmonics to i |
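The "1.645 standard deviations above the baseline" rule is simple once a baseline exists. A full Serfling model would estimate the baseline from a regression with a trend and seasonal harmonics, but the thresholding step itself is just this (hypothetical numbers):

```python
import statistics

def epidemic_threshold(history, z=1.645):
    """Baseline mean of past seasons plus z standard deviations (CDC-style)."""
    return statistics.mean(history) + z * statistics.stdev(history)

# Hypothetical %P&I-mortality values for the same week in five past seasons.
past = [6.1, 6.4, 5.9, 6.2, 6.0]
threshold = epidemic_threshold(past)
is_epidemic = 7.4 > threshold        # this week's observed value vs. threshold
```

1.645 is the one-sided 95% normal quantile, so under the baseline model roughly 5% of non-epidemic weeks would still cross the line by chance.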
46,380 | Can the multiple linear correlation coefficient be negative? | The multiple correlation in standard linear regression cannot be negative, the maths are easy to show it, although it depends on what "multiple correlation" is taken to mean. The usual way you would calculate $R^{2}$ is:
$$R^2=\frac{SSR}{TSS}$$
where
$$ SSR = \sum_{i} (\hat{Y_i}-\bar{Y})^2$$
and
$$ TSS = \sum_{i} (Y_i-\bar{Y})^2$$
Both $SSR$ and $TSS$ are sums of squares, so the ratio $R^2$ cannot be negative.
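The identity above is easy to check numerically; a small sketch with simulated data and an OLS fit via numpy (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2.0 - 3.0 * x + rng.normal(size=100)   # note: strongly negative slope

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

ssr = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
tss = np.sum((y - y.mean()) ** 2)       # total sum of squares
r2 = ssr / tss   # ratio of two sums of squares, hence nonnegative
```

Even though the slope (and the simple correlation) is negative here, $R^2$ stays in $[0,1]$.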
46,381 | Can the multiple linear correlation coefficient be negative? | $R$ can indeed be negative - if two variables are negatively related. $R^2$ can only be between 0 and 1, for the simple reason that it is the square of a real number.
For example, if we correlated income and time spent in jail throughout life, I would guess we would get a negative correlation (I haven't done this, I'm...
46,382 | Discerning between two different linear regression models in one sample | You need to model the observations as a mixture model. Define:
$p$ as the probability that a sample belongs to the first data generating process.
Thus, the density function of $y_i$ is given by:
$f(y_i|-) \sim p f_1(y_i|-) + (1-p) f_2(y_i|-)$
where
$f_1(.)$ is the density that arises because of the first data generating process...
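A bare-bones EM fit of this two-component mixture of regression lines can be sketched as follows, assuming for brevity a known common noise scale $\sigma$ (simulated data and a toy fitter, not production code; in practice one would also estimate the variances, e.g. with the R package flexmix mentioned in the next answer):

```python
import numpy as np

rng = np.random.default_rng(2)

# One sample generated by two different lines; the latent labels z are
# what the mixture model has to infer.
n = 400
x = rng.uniform(-3, 3, n)
z = rng.random(n) < 0.5
y = np.where(z, 1 + 2 * x, 4 - x) + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones_like(x), x])
sigma = 0.3                                            # treated as known here
betas = [np.array([0.0, 1.0]), np.array([0.0, -1.0])]  # crude starting lines
p = 0.5

for _ in range(100):
    # E-step: responsibility of component 1 under p*f1 + (1-p)*f2
    f1, f2 = (np.exp(-0.5 * ((y - X @ b) / sigma) ** 2) for b in betas)
    w1 = p * f1 / (p * f1 + (1 - p) * f2 + 1e-300)
    # M-step: one weighted least-squares line per component, new weight p
    for k, w in enumerate((w1, 1 - w1)):
        Xw = X * w[:, None]
        betas[k] = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    p = w1.mean()

slopes = sorted(b[1] for b in betas)   # should approach -1 and 2
```

The E-step computes each point's posterior probability of belonging to line 1; the M-step refits both lines by weighted least squares with those probabilities as weights.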
46,383 | Discerning between two different linear regression models in one sample | The first hit on Rseek with keywords "mixture regression" brings up the flexmix package, which does what you want. I seem to recall that there were other packages for this as well.
46,384 | Might be an unbalanced within subjects repeated measures? | It's not imbalanced because your repeated measures should be averaged across such subgroups within subject beforehand. The only thing imbalanced is the quality of the estimates of your means.
Just as you aggregated your accuracies to get a percentage correct and do your ANOVA in the first place, you average your latencies...
46,385 | Might be an unbalanced within subjects repeated measures? | I just want to emphasize the importance of not analyzing accuracies on the proportion scale. While lamentably pervasive across a number of disciplines, this practice can yield frankly incorrect conclusions. See: http://dx.doi.org/10.1016/j.jml.2007.11.004
As John Christie notes, the best way to approach analysis of acc...
46,386 | Might be an unbalanced within subjects repeated measures? | So this is a one-way repeated measures ANOVA - with the "Y" being time till answer was given, and the first factor having 3 levels (each subject having three of them).
I think the easiest way for doing this would be to take the mean response time for each subject for each of the three levels (which will result in 3 nu...
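The aggregate-first step both answers describe (collapse the unbalanced trials to one mean per subject per level before running the repeated-measures ANOVA) can be sketched as follows; the subject labels, level names, and latencies are made up:

```python
from collections import defaultdict
from statistics import mean

# Toy trial-level data: (subject, level, latency in ms).
trials = [
    ("s1", "A", 520), ("s1", "A", 540), ("s1", "B", 610),
    ("s1", "C", 700), ("s2", "A", 480), ("s2", "B", 590),
    ("s2", "B", 570), ("s2", "C", 660),
]

# Collapse to one cell mean per subject x level, even though the number
# of trials per cell is unbalanced; the ANOVA then sees 3 numbers per
# subject.
cells = defaultdict(list)
for subject, level, latency in trials:
    cells[(subject, level)].append(latency)
cell_means = {key: mean(values) for key, values in cells.items()}
```

With two subjects and three levels this yields six cell means, the balanced table the repeated-measures test expects.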
46,387 | When should normalization never be used? | Whether one can normalize a non-normal data set depends on the application. For example, data normalization is required for many statistical tests (i.e. calculating a z-score, t-score, etc.) Some tests are more prone to failure when normalizing non-normal data, while some are more resistant ("robust" tests).
One le...
46,388 | When should normalization never be used? | Of course one should never try to blindly normalize data if the data does not follow a (single) normal distribution.
For example one might want to rescale observables $X$ to all be normal with $(X-\mu)/\sigma$, but this can only work if the data is normal and if both $\mu$ and $\sigma$ are the same for all data points ...
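A minimal sketch of the $(X-\mu)/\sigma$ rescaling, with the answer's caveat intact: a single $\mu$ and $\sigma$ are only meaningful if one distribution generated all the points. The data below are simulated from a single normal population.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=10.0, scale=4.0, size=1000)   # one normal population

z = (x - x.mean()) / x.std()   # z-scores: sample mean 0, sample sd 1
```

If `x` were instead a mixture of two populations, this global rescaling would not make the result standard normal, which is exactly the failure mode the answer warns about.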
46,389 | When should normalization never be used? | I thought this was too obvious, until I saw this question!
When you normalise data, make sure you always have access to the raw data after normalisation. Of course, you could break this rule if you have a good reason, e.g. storage.
46,390 | Intro to statistics for an MD? | This is the one I've used successfully:
Statistics Without Maths for Psychology: Using Spss for Windows.
I just stumbled on this too, this might be useful:
Statistics Notes in the British Medical Journal.
I'm sure I knew of a free pdf that some doctors I know use, but I can't seem to find it at the moment. I will try t...
46,391 | Intro to statistics for an MD? | My book, Intuitive Biostatistics, is written partly from a medical point of view. It focusses on the practical parts of interpreting statistical results, with almost no math.
46,392 | Intro to statistics for an MD? | I assume your friend prefers something that's biostatistics oriented. Glantz's Primer of Biostatistics is a small book, an easy and quick read, and tends to get rave reviews from a similar audience. If an online reference works, I like Gerard Dallal's Handbook of Statistical Practice, which may do the trick if he's j...
46,393 | Intro to statistics for an MD? | Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models, Edition 2
Eric Vittinghoff · David V. Glidden · Stephen C. Shiboski · Charles E. McCulloch
Logistic Regression: A Self-Learning Text (Statistics for Biology and Health) Third (3rd) Edition
David G. Kleinbaum, Mitchel Klein
...
46,394 | What is the adequate regression model for bounded, continuous but poisson-like data? | Consider ordinal regression. You have data that are ordered but, from your description, it doesn't seem that the difference between scores of 1 and 2 is the same as the difference between scores of, say, 4 and 5, or between 8 and 9. Frank Harrell recommends this as an approach when you have this type of data, ev...
46,395 | What 's the $(\Omega,\mathcal{F},P_{\theta} )$ those $T_{n}$ defined on? | I deem a generalized framework formalizing the concepts at work is apt here. For more details, refer to $\rm [I].$
Let $(\Omega, \boldsymbol{\mathfrak A}, \Pr)$ be a probability space. Consider the sequence of probability spaces $\langle (\mathcal X_i, \boldsymbol{\mathfrak A}_i, \mathbf P_i)\rangle_{i=1}^\infty,$ where...
46,396 | What 's the $(\Omega,\mathcal{F},P_{\theta} )$ those $T_{n}$ defined on? | It is customary in probability or mathematical statistics to encounter statements such as
Let $X$ be an absolutely continuous random variable with density $f$
with no reference to underlying probability space. However, we can always supply an appropriate space as follows.
Take $\Omega = \mathbb{R}$, $\mathcal{F} =$ Borel...
46,397 | What 's the $(\Omega,\mathcal{F},P_{\theta} )$ those $T_{n}$ defined on? | Let's start by setting up each individual $X_i$ as a function $X_i: \Omega_i \to S_i$, with $S_i$ being a set and $\mathcal{F}_i$ and $P_{i, \theta}$ defined appropriately. Now $T_n = T_n(X_1, X_2, ..., X_n)$ is a small abuse of notation as random variables are functions from $\Omega$, while the right hand side is a "...
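The construction common to these three answers (one product space carrying the whole sequence, with each statistic a measurable composition on it) can be written out as a sketch; the notation loosely follows the first answer and is only illustrative.

```latex
% One space carrying the whole sequence: the product of the
% coordinate spaces, with coordinate projections as the X_i.
\[
  (\Omega,\mathcal{F},P_\theta)
  = \Bigl(\prod_{i=1}^{\infty}\mathcal{X}_i,\;
          \bigotimes_{i=1}^{\infty}\mathcal{F}_i,\;
          \bigotimes_{i=1}^{\infty}P_{i,\theta}\Bigr),
  \qquad X_i(\omega)=\omega_i .
\]
% Each statistic is then a measurable composition on that one space:
\[
  T_n = t_n\circ(X_1,\dots,X_n)\colon \Omega\to\mathbb{R},
  \qquad n=1,2,\dots
\]
```

On this single $(\Omega,\mathcal{F},P_\theta)$, every $T_n$ in the sequence is defined simultaneously, which is what statements about the whole sequence $\langle T_n\rangle$ implicitly assume.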
46,398 | Equivalent definition of stochastic dominance | The integration by parts formula still holds for general distribution functions (under appropriate technical conditions). For example, Theorem 18.4 in Probability and Measure by Patrick Billingsley (do not confuse $F, G$ in this theorem with $F, G$ in your question):
Let $F$ and $G$ be two nondecreasing, right-continuous...
46,399 | Equivalent definition of stochastic dominance | This is not really a theorem about stochastic dominance: it's a property of areas. It comes down to this lemma, which will be applied in the last two paragraphs:
When $f:\mathbb R\to\mathbb R$ is an integrable function with non-zero norm $|f|=\int |f(x)|\,\mathrm dx \lt \infty$ and $\mathcal A$ is a set of positive measure...
46,400 | How to compute a prediction interval from ordinary least squares regression output alone? | $S_{xx},$ the sum of squares of the explanatory variable, is easy to obtain from the formula
$$\operatorname{se}(\hat\beta_1) = \sqrt{\frac{MS_{Res}}{S_{xx}}}$$
where the left hand side is the standard error of the slope, given as $1373$ in the question, and $MS_{Res}$ is the mean squared residual, whose square root (t...
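Solving the display above for $S_{xx}$ and plugging it into the usual simple-regression prediction interval can be sketched as follows. Only the $1373$ standard error comes from the answer; the other summary numbers are hypothetical stand-ins, and $n$ and $\bar x$ also have to be read off the regression output.

```python
import math

# Summary output: only the 1373 slope standard error is from the
# answer above; the remaining values are illustrative stand-ins.
se_slope = 1373.0    # se(beta1_hat)
root_mse = 3230.0    # sqrt(MS_Res), the residual standard error
n = 25               # sample size, also read off the output
x_bar = 6.4          # mean of the explanatory variable, likewise

ms_res = root_mse ** 2
s_xx = ms_res / se_slope ** 2     # invert se(b1) = sqrt(MS_Res / S_xx)

def pred_se(x0):
    """Standard error for predicting one new response at x0."""
    return math.sqrt(ms_res * (1 + 1 / n + (x0 - x_bar) ** 2 / s_xx))

# A 95% prediction interval is then yhat(x0) +/- t_{n-2, 0.975} * pred_se(x0)
```

The interval widens as $x_0$ moves away from $\bar x$, via the $(x_0-\bar x)^2/S_{xx}$ term.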