| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
linear regression
|
Linear regression on the results of linear regression
|
https://stats.stackexchange.com/questions/324376/linear-regression-on-the-results-of-linear-regression
|
<p>I created a model for predicting a scalar variable from a set of features. I trained a linear regression on a training set, and used the resulting coefficients to produce predictions for a test set.</p>
<p>Then, I did simple linear regression to the predictions as a function of the ground truth values of the test set, expecting a slope of 1 and an intercept of 0. Although I got $R^2 \approx 1$, the slope was significantly different from 1 and the intercept with the vertical axis was significantly different from 0.</p>
<p>What does this tell me about the original linear regression?</p>
<p>Would it be more informative to "force intercept to 0" for the second linear regression? This forces the intercept to be exactly 0; the slope becomes closer to 1 and the $R^2$ becomes somewhat smaller.</p>
|
<p>It just tells you that you won't get a perfect fit from a linear regression on realistic datasets. </p>
<p>The first linear regression will not fit the data completely, so there will be some unexplained variance remaining between the predictions and the original output values (on the training as well as the test set). Your second model will try to fit that unexplained variance in the regression, so you are seeing a non-unit slope and a non-zero intercept.</p>
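As a quick numerical illustration of this (made-up data, numpy only): regressing OLS predictions on the held-out ground truth gives a slope of roughly var(signal)/(var(signal)+noise variance), i.e. below 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 3
beta = np.array([1.0, 2.0, 3.0])

# training data: y = X beta + noise
X_tr = rng.standard_normal((n, p))
y_tr = X_tr @ beta + rng.normal(0, 3.0, n)

# fit OLS on the training set
beta_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# predictions on a fresh test set
X_te = rng.standard_normal((n, p))
y_te = X_te @ beta + rng.normal(0, 3.0, n)
y_pred = X_te @ beta_hat

# regress predictions on the ground truth: slope = cov(y_pred, y_te) / var(y_te)
slope = np.cov(y_pred, y_te)[0, 1] / np.var(y_te, ddof=1)
print(slope)  # noticeably below 1, roughly var(signal) / (var(signal) + sigma^2)
```

Here the signal variance is 14 and the noise variance is 9, so the slope lands near 0.6 even though the fit looks excellent.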
| 0
|
linear regression
|
Log-linear regression vs. Poisson regression
|
https://stats.stackexchange.com/questions/261946/log-linear-regression-vs-poisson-regression
|
<p>In this <a href="https://stats.stackexchange.com/questions/86720/log-linear-regression-vs-logistic-regression">post</a>, OP asked the difference between log linear regression and logistic regression. Two answers in the post are very clear and directly address OP's question. </p>
<p>I understand log-linear regression and logistic regression are quite different but do not understand <strong>what's the difference between log-linear regression and Poisson regression?</strong> </p>
<p>I think AdamO and Gung's answer do not explain my question in detail.</p>
<p>From AdamO</p>
<blockquote>
<p>the log-linear model is actually just a Poisson regression model</p>
</blockquote>
<p>From Gung</p>
<blockquote>
<p>"log-linear regression" is usually understood to be a Poisson GLiM applied to multi-way contingency tables.</p>
</blockquote>
<hr>
<p>Update: I am reading some source code from the <a href="https://cran.r-project.org/web/packages/R0/index.html" rel="nofollow noreferrer">R0 package</a> in R. The author was trying to estimate the exponential growth rate using different methods:</p>
<pre><code> ##details<< method "poisson" uses Poisson regression of incidence.
## method "linear" uses linear regression of log(incidence)
if (reg.met == "linear") {
tmp <-lm((log(incid)) ~ t, data=epid)
...
}
# Method 2 == Poisson regression
else if (reg.met == "poisson") {
tmp <- glm(incid ~ t, family=poisson(), data=epid)
...
}
</code></pre>
<p>Is there any relationship between linear regression on the log scale and Poisson regression? What is the reason for using different methods?</p>
|
<p>A Poisson regression is a regression where the outcome variable consists of non-negative integers, and it is sensible to assume that the variance and mean of the model are the same. </p>
<p>A log-linear regression is usually a model estimated using linear regression, where the response variable is replaced by a new variable that is the natural logarithm of the original response variable. Or, if using a GLM, this is done via a logarithmic link function (essentially the same idea, but the mechanics of fitting the model are different). </p>
<p>The Poisson regression and log-linear regression are not the same thing, but are often used for very similar problems, particularly among older statisticians (the Poisson regression model only became widely available in software in the 1980s). </p>
<p>Most people these days prefer a Poisson regression because it can deal with 0 values (the logarithm of 0 is undefined), whereas you will get an error using a log-linear regression.</p>
<p>It is possible to use a Poisson regression to model data from a contingency table, where the predictor variables are the dimensions (e.g., row and column labels) of a contingency table. This can be referred to as a log-linear model. Perhaps some people call it a log-linear regression (one of the challenges of statistics is that the language is used rather loosely, but many people act as if the language is precise).</p>
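The contrast between the two approaches can be sketched with a numpy-only simulation (hypothetical data). The IRLS loop below is a standard way to fit a Poisson GLM with a log link; the log-linear OLS fit has to drop the zero counts.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])
mu_true = np.exp(0.5 + 0.8 * x)
y = rng.poisson(mu_true)

# Poisson regression via IRLS (log link): handles y == 0 naturally
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu           # working response
    W = mu                                  # IRLS weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
print(beta)  # close to the true (0.5, 0.8)

# log-linear OLS requires dropping (or fudging) the zero counts
ok = y > 0
beta_log, *_ = np.linalg.lstsq(X[ok], np.log(y[ok]), rcond=None)
print(beta_log)  # similar in spirit, but biased by discarding the zeros
```

Both fits target an exponential mean curve, but they make different assumptions about the error structure, which is why the R0 package offers both.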
| 1
|
linear regression
|
Difference between Univariate Linear Regression and Simple Linear Regression?
|
https://stats.stackexchange.com/questions/351325/difference-between-univariate-linear-regression-and-simple-linear-regression
|
<p>Is there any difference between <strong>Univariate Linear Regression</strong> and <strong>Simple Linear Regression</strong>? If so, what is the difference exactly? It seems both of them are exactly same. I would appreciate if anyone could cite a scientific paper that defines Univariate Linear Regression.</p>
|
<p>A good start (I hope uncontentious) on this is simply to note that <strong>univariate</strong>, <strong>bivariate</strong> and <strong>multivariate</strong> denote focus on one, two or many variables respectively. (Other words such as <strong>trivariate</strong> can be found but seem much more rarely used and rarely needed.)</p>
<p>Then it might follow that univariate techniques are those for which only one variable is needed (e.g. calculating mean or median, drawing a histogram); bivariate techniques are those for which two variables are needed (e.g. correlation); while multivariate techniques are those for which many variables are needed, or (similarly if not identically) those which can be applied to many variables at once. Many and two sometimes overlap, as when principal component analysis could be applied to two variables, not just many. (Or even one...)</p>
<p>I'd say that <strong>regression</strong> without qualification implies just one outcome or response variable, whereas <strong>multivariate regression</strong> implies two or more outcome or response variables (although regression with one predictor could naturally be seen as a special case).</p>
<p>Historically <strong>multiple regression</strong> has been used to refer to any set-up with several predictor variables and in contrast <strong>simple regression</strong> can be used when only one predictor is used. I'd assert, as a mixture of personal impressions from reading and personal opinions about good terminology, that references to multiple regression are fading slowly (it's routine in many fields, scientifically, statistically and computationally, and long since not very special) and that neither term, multiple or simple, fills an important gap.</p>
<p>A clear distinction in literature between multiple and multivariate regression doesn't stop multivariate being misused for multiple, a common confusion often seen on this site.</p>
<p>So much context, now the question:</p>
<p>The expression <strong>univariate regression</strong> seems to have crept into informal discussions fairly recently. Can anyone cite a good textbook reference? (*) I'd presume that it could only mean one outcome, one predictor. So it's at best redundant or a term for the simplest kind of regression. It's also at odds with the idea that such regression is a bivariate technique.</p>
<p>(*) The original question over 5 years ago as I write asked for scientific paper references, which no-one seems to have provided. I don't have one either.</p>
| 2
|
linear regression
|
Time series linear regression vs Linear regression
|
https://stats.stackexchange.com/questions/619730/time-series-linear-regression-vs-linear-regression
|
<p>Is it okay if the output of my time series linear regression and my linear regression is the same?</p>
<p>I have time series data with 756 observations and for each year there are 252 observations. The time series data is from 2015-2021. It is an individual data type.</p>
<p>All my independent variables are categorical and I categorized them.</p>
<p>I created a time series linear regression model and a linear regression model. My goal is to find the best model that fits the time series data from the two models.</p>
<p><a href="https://i.sstatic.net/LFzBw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LFzBw.png" alt="multiple linear regression model" /></a>
<a href="https://i.sstatic.net/U2P6P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U2P6P.png" alt="Time series linear regression model" /></a></p>
<p>The outputs are the same. Isn't it supposed to be different?</p>
|
<p>As said in the comment, "time-series linear regression" is not a different model. As <a href="https://www.rdocumentation.org/packages/forecast/versions/8.21/topics/tslm" rel="noreferrer">its documentation says</a></p>
<blockquote>
<p><code>tslm</code> is largely a wrapper for <code>lm()</code> except that it allows variables "trend" and "season" which are created on the fly from the time series characteristics of the data. The variable "trend" is a simple time trend and "season" is a factor indicating the season (e.g., the month or the quarter depending on the frequency of the data).</p>
</blockquote>
<p>If you didn't use those extra terms, the two models are exactly the same, not only mathematically, but it is also exactly the same code underneath.</p>
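For intuition, the "trend" and "season" variables that tslm creates on the fly can be built by hand; a hypothetical Python sketch for quarterly data:

```python
import numpy as np

n_obs, freq = 12, 4                      # 3 years of quarterly data
trend = np.arange(1, n_obs + 1)          # simple time index: 1, 2, ..., n
season = (np.arange(n_obs) % freq) + 1   # quarter labels 1..4, repeating

# one-hot encode season, dropping the first level (as a factor in lm() would)
season_dummies = (season[:, None] == np.arange(2, freq + 1)).astype(float)

design = np.column_stack([np.ones(n_obs), trend, season_dummies])
print(design.shape)  # (12, 5): intercept, trend, 3 seasonal dummies
```

If you never add these columns, the design matrix is identical to the plain regression's, which is why the two outputs match exactly.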
| 3
|
linear regression
|
compare Bayesian linear regression vs standard linear regression
|
https://stats.stackexchange.com/questions/393313/compare-bayesian-linear-regression-vs-standard-linear-regression
|
<p><strong>1st question,</strong></p>
<p>I recently learnt Bayesian linear regression, but I'm confused about when we should use Bayesian linear regression and when to use standard linear regression. What is the advantage of Bayesian linear regression over the standard one?</p>
<hr>
<p><strong>2nd question,</strong></p>
<p>Also, another thing I'm confused with: for a simple linear regression whose formula is <span class="math-container">$y_i = \alpha + \beta x_i + \varepsilon$</span>, why is the Bayesian version written as:</p>
<p><span class="math-container">$\mu_i = \alpha + \beta x_i$</span></p>
<p><span class="math-container">$y_i \sim \mathcal{N}(\mu_i, \sigma)$</span></p>
<p>I read elsewhere that <span class="math-container">$\mu_i$</span> corresponds to <span class="math-container">$y_i = \alpha + \beta x_i$</span>; what does <span class="math-container">$\sigma$</span> correspond to? And how is this transformation realized?</p>
<hr>
<p><strong>3rd question,</strong></p>
<p>Last question: does <span class="math-container">$y_i \sim \mathcal{N}(\mu_i, \sigma)$</span> mean that each value <span class="math-container">$y \in Y$</span> follows a normal distribution, rather than the observed data as a whole being normally distributed?</p>
<hr>
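The two formulations in the question describe the same model; a small numpy simulation (with hypothetical parameter values) makes this concrete. Here σ plays the role of the standard deviation of the residual ε: drawing ε and adding it to the line, or drawing y directly around the line, produces identical data.

```python
import numpy as np

alpha, beta, sigma = 1.0, 2.0, 0.5
x = np.linspace(0, 1, 5)

rng = np.random.default_rng(42)
# formulation 1: y_i = alpha + beta * x_i + eps_i, with eps_i ~ N(0, sigma)
y1 = alpha + beta * x + rng.normal(0, sigma, x.size)

rng = np.random.default_rng(42)
# formulation 2: mu_i = alpha + beta * x_i, and y_i ~ N(mu_i, sigma)
mu = alpha + beta * x
y2 = rng.normal(mu, sigma)

print(np.allclose(y1, y2))  # True: same seed, identical draws, identical model
```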
| 4
|
|
linear regression
|
Linear Regression vs Keras
|
https://stats.stackexchange.com/questions/563448/linear-regression-vs-keras
|
<p>I created a dummy dataset and compared the performance of SKLearn LinearRegression and Keras.
Why is Keras producing horrible results compared to Linear Regression?</p>
<p>Code:</p>
<pre><code># Create Dataset
from sklearn.datasets import make_regression
X, y = make_regression(n_samples=5000, n_features=10, noise=0.1)
# Build Linear Regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
lr = LinearRegression()
lr.fit(X,y)
prediction_lr = lr.predict(X)
# Build Keras Linear Regression
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(1, activation='relu', input_dim=10))
model.compile(optimizer='rmsprop', loss='mse')
model.fit(X,y, epochs=100, verbose=0)
prediction_nn = model.predict(X)
print(f'LR MSE: {mean_squared_error(prediction_lr, y)}')
print(f'NN MSE: {mean_squared_error(prediction_nn, y)}')
Output:
LR MSE: 0.010068399696132291
NN MSE: 26936.27829985695
</code></pre>
<p>Why is there a dramatic difference of MSE? How can we replicate Linear Regression using Keras?</p>
<p>Thanks</p>
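As a point of comparison (this is not the Keras code from the question, just an illustrative numpy sketch), a single *linear* unit trained by full-batch gradient descent on MSE converges to the OLS solution:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + rng.normal(0, 0.1, 500)

# single linear unit (no activation), full-batch gradient descent on MSE
w, b = np.zeros(10), 0.0
lr = 0.1
for _ in range(2000):
    err = X @ w + b - y
    w -= lr * (2 / len(y)) * (X.T @ err)
    b -= lr * (2 / len(y)) * err.sum()

# closed-form OLS solution (with intercept column) for reference
w_ols, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(500)]), y, rcond=None)
print(np.allclose(w, w_ols[:10], atol=1e-3))  # gradient descent matches OLS
```

The key point the sketch illustrates: with a linear output and enough training, a one-unit network is exactly linear regression; a ReLU on the output clips all negative predictions to zero, which breaks that equivalence.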
| 5
|
|
linear regression
|
Linear Regression Doubt
|
https://stats.stackexchange.com/questions/507053/linear-regression-doubt
|
<p>I'm studying linear regression and, searching about it, I found an example with a graph where the X axis was Year and the Y axis was Price. My doubt is: when we are talking about Year, do we need to treat this as a time series problem? Also, does linear regression apply only when the variables are continuous?</p>
|
<p>You can relate Price to Year using <em>time series regression</em>.</p>
<p>A time series regression model for the setting you mention could be formulated like so:</p>
<p><span class="math-container">$Price_t = \alpha_0 + \beta_0 Year_t + \epsilon_t$</span></p>
<p>where <span class="math-container">$\epsilon_t$</span> could be temporally correlated and t = 1, ..., n. For example, <span class="math-container">$\epsilon_t$</span> could follow an AR(1) process.</p>
<p>In practice, you would need to determine what kind of process best captures the temporal dependence of the model errors.</p>
<p>Usually, with data collected every year, people assume that the underlying process is something like an ARIMA(p,d, q) process and then they determine the values of p, d and q which are best supported by the data.</p>
<p>See here for an example of time series regression with ARIMA errors in R:<a href="https://otexts.com/fpp2/regarima.html" rel="nofollow noreferrer">https://otexts.com/fpp2/regarima.html</a>.</p>
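A tiny simulation of the model above, with hypothetical values for alpha_0, beta_0 and the AR(1) coefficient phi, shows what "temporally correlated errors" means in practice:

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha0, beta0, phi = 50, 10.0, 1.5, 0.7

year = np.arange(n)
# AR(1) errors: eps_t = phi * eps_{t-1} + w_t, with white noise w_t
eps = np.zeros(n)
w = rng.normal(0, 1.0, n)
for t in range(1, n):
    eps[t] = phi * eps[t - 1] + w[t]

price = alpha0 + beta0 * year + eps
# the lag-1 autocorrelation of the errors is roughly phi
r1 = np.corrcoef(eps[:-1], eps[1:])[0, 1]
print(round(r1, 2))
```

Ordinary least squares ignores this correlation structure; a time series regression models it explicitly, which is why the standard errors (and forecasts) differ.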
| 6
|
linear regression
|
Linear regression VS linear modeling
|
https://stats.stackexchange.com/questions/126165/linear-regression-vs-linear-modeling
|
<p>Can I claim that linear regression and linear modeling are the same topics? If not, what is the difference?</p>
|
<p>Comment made into an answer per suggestion of gung.</p>
<p>Linear modeling can have meanings outside statistics well beyond the Wikipedia entry <a href="https://en.wikipedia.org/wiki/Linear_model" rel="nofollow" title="Linear Model">Linear Model</a> in whuber's comment above. For instance, Linear Programming <a href="https://en.wikipedia.org/wiki/Linear_programming" rel="nofollow">https://en.wikipedia.org/wiki/Linear_programming</a> is the minimization or maximization of a linear function of several (possibly millions of) variables subject to linear constraints on those variables. Creation of the model to be solved by Linear Programming is considered to be linear modeling. </p>
<p>Without Linear Programming (it is widely used in oil refining), the gasoline (petrol) you buy for your car would be more expensive, and transportation would cost more (aside from petrol cost). I would venture to say that Linear Programming (including Mixed Integer Linear Programming) plays a far more important role in the U.S. and world economies than does linear regression, and is THE most important and highest-impact linear modeling performed. </p>
<p>That said, I'm a nonlinear guy, so I see nonlinearity everywhere. On the other hand, I sometimes see how to restrict linearity to cost functions (input data to optimization), and thereby still perform "linear modeling" and solution, even though I have managed to get (sneak) significant and vital nonlinearity into the "linear" model.</p>
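As a concrete toy instance of the linear-programming setup described above (a hypothetical objective and constraints, not a real refining model), using scipy:

```python
from scipy.optimize import linprog

# maximize x + 2y  subject to  x + y <= 4,  0 <= x <= 2,  y >= 0
# linprog minimizes, so we negate the objective
res = linprog(c=[-1, -2],
              A_ub=[[1, 1]], b_ub=[4],
              bounds=[(0, 2), (0, None)])
print(res.x, -res.fun)  # optimum at x = 0, y = 4, objective value 8
```

Formulating the objective vector and constraint matrix for a real problem is the "linear modeling" step; the solver itself is routine.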
| 7
|
linear regression
|
Linear Regression in groups / Multivariate regression
|
https://stats.stackexchange.com/questions/357540/linear-regression-in-groups-multivariate-regression
|
<p>I am trying to do a linear regression on 84 patients, with one numeric variable = threshold and one nominal variable = Group, to predict the target numeric variable = dist.</p>
<p>In the multivariate linear regression, threshold is statistically significant and group isn't.
However, when I split the data into the 2 different groups that Group is made of and run the linear regression just for threshold and dist:</p>
<ul>
<li>Group1 -threshold remains significant</li>
<li>Group 2 - threshold is not significant.</li>
</ul>
<p>I wonder: in that case, why wasn't this seen in the multivariate linear regression? Which way is the "right" way to draw conclusions from this experiment?</p>
| 8
|
|
linear regression
|
Linear regression vs. Individual linear regression
|
https://stats.stackexchange.com/questions/523851/linear-regression-vs-individual-linear-regression
|
<p>If we want to do multiple individual (componentwise) regressions (like the one used in <a href="https://fan.princeton.edu/papers/06/SIS.pdf" rel="nofollow noreferrer">Sure Independence Screening</a>, Fan & Lv 2007), we have that:</p>
<p><span class="math-container">$$\hat\beta_{ind} = \frac{1}{n}X^Ty$$</span></p>
<p>(assuming normalized <span class="math-container">$X$</span>)</p>
<p>i.e., <span class="math-container">$\hat\beta_{j,ind} = $</span> regression estimate made for a model of <span class="math-container">$y=\beta_jx_j$</span></p>
<p><a href="https://i.sstatic.net/YVRRX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YVRRX.png" alt="enter image description here" /></a></p>
<p>Compare this with the full linear regression (normal equation) which is:
<span class="math-container">$$\hat\beta = (X^TX)^{-1}X^Ty
$$</span></p>
<p>It appears that the full linear estimator is actually some linear transformation (projection even?) of the individual regression coefficients! I.e.,
<span class="math-container">$$\hat\beta = P\hat\beta_{ind}
$$</span></p>
<p>I was wondering if there is more insight on the nature of this transformation?</p>
|
<p><strong>Edit</strong>: I think my old answer is a bit inaccurate.
First of all, regarding my question: <span class="math-container">$(X^TX)^{-1}$</span> is obviously not a projection matrix, by the mere fact that <span class="math-container">$P^2 \neq P$</span>.</p>
<p>Second, there seems to be a bit of confusion because of the standardizing/normalizing stuff. If I regress only 1 covariate w/o intercept, I get <span class="math-container">$\hat\beta_j=(x^Tx)^{-1}x^Ty$</span>. If I do this to all the covariates separately, I will get what I called the "individual regression", i.e., in matrix form:
<span class="math-container">$$\hat\beta_{ind}=\begin{pmatrix} x_1^Tx_1 & \dots &0 \\
\vdots &\ddots & \vdots \\
0 & \dots & x_p^Tx_p
\end{pmatrix}^{-1}X^Ty$$</span>
I.e., it's as if we are assuming that the covariance between the covariates is 0 (that they are uncorrelated), which in reality, of course, is not true.
Compare this to the full regression, which doesn't assume this:
<span class="math-container">$$\hat\beta=\begin{pmatrix} x_1^Tx_1 & \dots &x_1^Tx_p \\
\vdots &\ddots & \vdots \\
x_p^Tx_1 & \dots & x_p^Tx_p
\end{pmatrix}^{-1}X^T y
$$</span>
I'm not sure it's possible to break this down to some matrix times <span class="math-container">$\hat\beta_{ind}$</span>...</p>
<p>In the case where we standardize the columns of <span class="math-container">$X$</span>, then this is possible. <span class="math-container">$\hat\beta_{ind}$</span> is reduced to <span class="math-container">$\frac{1}{n}X^Ty$</span> and <span class="math-container">$\hat\beta$</span> can be written as <span class="math-container">$(\frac{1}{n}X^TX)^{-1}\hat\beta_{ind}$</span></p>
<hr />
<p><strong>Old post</strong>:
So, this is what I think:</p>
<ul>
<li><span class="math-container">$X^Ty$</span> finds the individual regression, if <span class="math-container">$X$</span> is normalized. It is also the complete regression in an orthogonal design (i.e., if <span class="math-container">$X^TX=I$</span>).</li>
<li><span class="math-container">$(X^TX)^{-1}$</span> is actually normalizing the <span class="math-container">$X$</span>'s anyway, i.e., <span class="math-container">$(X^TX)^{-1}X^T y$</span> will be normalized. You can see this clearly if you take the columns of <span class="math-container">$X$</span> to be orthogonal but <strong>not orthonormal</strong>. <span class="math-container">$X^TX$</span> will be a diagonal matrix but without 1's in the diagonal. Taking the inverse of that, and multiplying that by <span class="math-container">$X^Ty$</span> we get again the individual regression.</li>
<li>This means that if <span class="math-container">$X$</span> has no correlation between the features, and is normalized, than <span class="math-container">$X^Ty$</span> reveals the coefficients.</li>
<li>If <span class="math-container">$X$</span> has features with a <strong>positive</strong> correlation, then <span class="math-container">$X(X^TX)^{-1}$</span> has <strong>negative</strong> correlation. And vice versa.</li>
<li>I would expect that <span class="math-container">$(X^TX)^{-1}$</span> also serves to <strong>de-correlate the structure of the <span class="math-container">$X$</span>'s</strong> to a new space <span class="math-container">$X^*=X(X^TX)^{-1}$</span>, and in this new space we use individual regression to recover the coefficients.
<ul>
<li>The thing that bothers me is why isn't <span class="math-container">${X^*}^TX^*=I$</span>?</li>
<li>Maybe it's a 2-way trip - <span class="math-container">$(X^TX)^{-1}X^T y$</span> goes to this new space, preforms individual regression there, and then comes back. Perhaps using SVD-decomposition we can see this?
<span class="math-container">$$X = UDV'\Rightarrow (X^TX)^{-1}X^T y = V(D)^{-1}U'y
$$</span>
where <span class="math-container">$U'y$</span> is the individual regression for the <span class="math-container">$U$</span>'s, <span class="math-container">$D^{-1}$</span> is the normalization, and <span class="math-container">$V$</span> is the projection back?</li>
<li>It is true that if you regress <span class="math-container">$U$</span> to <span class="math-container">$y$</span> you get individual regression = regular regression, which is not so surprising given that the columns of <span class="math-container">$U$</span> are orthonormal.</li>
</ul>
</li>
</ul>
<p>So in the end the difference between component wise regression, <span class="math-container">$\hat\beta_{ind} = VDU'y$</span> and normal regression <span class="math-container">$\hat\beta = VD^{-1}U'y$</span> - is the that the <span class="math-container">$D$</span> is inverted.</p>
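The identity from the edit above, that for standardized columns the full estimator is $\hat\beta = (\frac{1}{n}X^TX)^{-1}\hat\beta_{ind}$, can be checked numerically on made-up data:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 300, 4
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))  # correlated columns
X = (X - X.mean(0)) / X.std(0)           # standardize: each column has x'x = n
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(0, 1, n)
y = y - y.mean()                         # center so no intercept is needed

beta_ind = X.T @ y / n                   # componentwise (individual) estimates
beta_full = np.linalg.solve(X.T @ X, X.T @ y)
beta_from_ind = np.linalg.solve(X.T @ X / n, beta_ind)

print(np.allclose(beta_full, beta_from_ind))  # True: the identity holds exactly
```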
| 9
|
linear regression
|
Linear regression
|
https://stats.stackexchange.com/questions/661411/linear-regression
|
<p>If I have a single model, say y = ax^2 + bx + c, can I use 3 linear regression algorithms, y=ax^2, y=ax and y=a, to learn the original function if I use the same data set? Please help me out here.</p>
| 10
|
|
linear regression
|
Linear regression questions
|
https://stats.stackexchange.com/questions/577915/linear-regression-questions
|
<p>I am new to the field of machine learning and am just learning linear regression, and I have some questions about this concept:</p>
<p>Does linear regression allow vector-valued target variables?</p>
<p>Does linear regression not assume that the features are uncorrelated?</p>
|
<blockquote>
<p>Does linear regression allow vector-valued target variables?</p>
</blockquote>
<p>You can formulate it that way. It'll be a parallel set of equations, <span class="math-container">$y=X\beta$</span>, where <span class="math-container">$\beta$</span> is of size <span class="math-container">$f\times t$</span> (<span class="math-container">$f$</span> is the number of features and <span class="math-container">$t$</span> is the number of targets).</p>
<blockquote>
<p>Does linear regression not assume that the features are uncorrelated?</p>
</blockquote>
<p>I believe @Dave's comment and the associated post clear this question up; but to reiterate, features can be correlated or uncorrelated. Linear regression itself assumes nothing about correlation among the features (it only requires that they not be perfectly collinear).</p>
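A quick sketch of the vector-valued-target setup described above (hypothetical shapes), using numpy's least-squares solver, which handles a matrix of targets column by column:

```python
import numpy as np

rng = np.random.default_rng(5)
n, f, t = 200, 3, 2                       # samples, features, targets
X = rng.standard_normal((n, f))
B_true = rng.standard_normal((f, t))      # coefficient matrix, f x t
Y = X @ B_true + rng.normal(0, 0.1, (n, t))

# lstsq solves each target column independently, in one call
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(B_hat.shape)  # (3, 2)
```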
| 11
|
linear regression
|
Linear regression explanations
|
https://stats.stackexchange.com/questions/59782/linear-regression-explanations
|
<p>In explaining simple linear regression, isn't it a bit misleading for many examples to illustrate a straight line going through some scatterplot? This seems to suggest that linear regression only works if your independent and dependent variables have some sort of straight-line relationship, whereas the "linear" in linear regression really refers to linear in the parameters of the model right?</p>
|
<p>Well, it's also linear in the predictors. </p>
<p>For example, if you fit a quadratic you might say 'see, not linear!'... but it is! If <span class="math-container">$x_1 = x$</span> and <span class="math-container">$x_2 = x^2$</span>, and you regress on <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span>, it's certainly linear in <span class="math-container">$(1,x_1,x_2)$</span>. It's linear in the predictors you gave it.</p>
<p>If you regress on <span class="math-container">$x_1 = \sin(\pi x)$</span> and <span class="math-container">$x_2 = \cos(\pi x)$</span>... well, it's still linear in <span class="math-container">$(1,x_1,x_2)$</span>.</p>
<p>and so on.</p>
<p>By judicious choices of your <span class="math-container">$x$</span>'s you can use it to fit curves, but it's still linear in what you give it.</p>
<p>Even a local polynomial (kernel-type) fit is actually linear in the predictors. You can write the whole thing as one large linear model.</p>
<p>If <span class="math-container">$E(y) = X\beta$</span>, <span class="math-container">$X\beta$</span> is clearly linear in either <span class="math-container">$X$</span> (in the columns of X) or <span class="math-container">$\beta$</span>.</p>
<p>But yes, the linear-in-the-parameters is what the 'linear' in linear regression 'means'.</p>
<p>Is it at least partly misleading that the elementary presentations are always drawing straight line relationships when regression can fit curves? Perhaps, but you pretty much have to start with lines.</p>
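The quadratic example above as code (made-up data): regressing on the constructed columns $x_1 = x$ and $x_2 = x^2$ is still an ordinary linear fit.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, 200)

# design matrix (1, x1, x2) with x1 = x, x2 = x^2: linear in the predictors
X = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))  # close to [1, 2, -3]
```

The fitted curve is a parabola in $x$, but the model is linear in the columns of the design matrix (and in the parameters).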
| 12
|
linear regression
|
Simple linear regression vs Multiple Linear regression interpretation
|
https://stats.stackexchange.com/questions/579066/simple-linear-regression-vs-multiple-linear-regression-interpretation
|
<p>Suppose we have a multiple linear regression model with two predictors, <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span>:
<span class="math-container">$$Y = \beta_0 + \beta_1X_1 + \beta_2X_2 + \epsilon.$$</span></p>
<p>We can interpret <span class="math-container">$\beta_1$</span> as the expected increase in <span class="math-container">$Y$</span> with a unit increase in <span class="math-container">$X_1$</span> when <span class="math-container">$X_2$</span> is held constant. This is because <span class="math-container">$\beta_1$</span> is the partial derivative of the expected value of <span class="math-container">$Y$</span> with respect to <span class="math-container">$X_1$</span>.</p>
<p>Further, suppose that we also compute the simple linear regression of <span class="math-container">$Y$</span> against <span class="math-container">$X_1$</span>:
<span class="math-container">$$Y = b_0 + b_1X_1 + \epsilon.$$</span></p>
<p>Then I've seen some authors state that <span class="math-container">$b_1$</span> is the expected increase in <span class="math-container">$Y$</span> with a unit increase in <span class="math-container">$X_1$</span> without holding <span class="math-container">$X_2$</span> constant.</p>
<p>But I really don't see this last point, because to me the simple linear regression is like holding <span class="math-container">$X_2$</span> constant by giving it a zero value.</p>
<p>So why do they say that in the simple linear regression all other predictors not considered are not held constant?</p>
<p>I would really appreciate if you can help me clarify this idea.</p>
|
<p>For the most part, you should read my answer to: <a href="https://stats.stackexchange.com/a/78830/7290">Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression?</a>, of which, this is nearly a duplicate.</p>
<hr />
<p>To address your explicit question more directly, <span class="math-container">$X_2$</span> is <em>not</em> being held constant. What you have done is set <span class="math-container">$\beta_2 = 0$</span>, not adjust the data to account for what they would be like if <span class="math-container">$X_2$</span> were <span class="math-container">$0$</span> for all data in the dataset.</p>
<p>Unless <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> were perfectly uncorrelated <em>in your dataset</em>, controlling for <span class="math-container">$X_2$</span> would amount to shifting the <span class="math-container">$X_1$</span> values to some degree. As a result, the estimated <span class="math-container">$\hat\beta_1$</span>s between the two models would differ.</p>
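A simulation (with hypothetical coefficients) of the effect described above: when $X_1$ and $X_2$ are correlated, the simple-regression slope absorbs part of $X_2$'s effect.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + rng.normal(0, 0.6, n)     # x2 correlated with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(0, 1.0, n)

X_full = np.column_stack([np.ones(n), x1, x2])
b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

X_simple = np.column_stack([np.ones(n), x1])
b_simple, *_ = np.linalg.lstsq(X_simple, y, rcond=None)

print(round(b_full[1], 2))    # near 2: effect of x1 holding x2 fixed
print(round(b_simple[1], 2))  # near 2 + 3*0.8 = 4.4: x1 absorbs x2's effect
```

Dropping $X_2$ from the model does not set it to a constant; its variation is simply folded into the $X_1$ coefficient (and the error term).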
| 13
|
linear regression
|
Is linear regression obsolete?
|
https://stats.stackexchange.com/questions/305116/is-linear-regression-obsolete
|
<p>I am currently in a linear regression class, but I can't shake the feeling that what I am learning is no longer relevant in either modern statistics or machine learning. Why is so much time spent on doing inference on simple or multiple linear regression when so many interesting datasets these days frequently violate many of the unrealistic assumptions of linear regression? Why not instead teach inference on more flexible, modern tools like regression using support vector machines or Gaussian process? Though more complicated than finding a hyperplane in a space, wouldn't this give students a much better background for which to tackle modern day problems?</p>
|
<p>It is true that the assumptions of linear regression aren't realistic. However, this is true of all statistical models. "All models are wrong, but some are useful."</p>
<p>I guess you're under the impression that there's no reason to use linear regression when you could use a more complex model. This isn't true, because in general, more complex models are more vulnerable to overfitting, and they use more computational resources, which are important if, e.g., you're trying to do statistics on an embedded processor or a web server. Simpler models are also easier to understand and interpret; by contrast, complex machine-learning models such as neural networks tend to end up as black boxes, more or less.</p>
<p>Even if linear regression someday becomes no longer practically useful (which seems extremely unlikely in the foreseeable future), it will still be theoretically important, because more complex models tend to build on linear regression as a foundation. For example, in order to understand a regularized mixed-effects logistic regression, you need to understand plain old linear regression first.</p>
<p>This isn't to say that more complex, newer, and shinier models aren't useful or important. Many of them are. But the simpler models are more widely applicable and hence more important, and clearly make sense to present first if you're going to present a variety of models. There are a lot of bad data analyses conducted these days by people who call themselves "data scientists" or something but don't even know the foundational stuff, like what a confidence interval really is. Don't be a statistic!</p>
| 14
|
linear regression
|
What is stepwise linear regression?
|
https://stats.stackexchange.com/questions/317625/what-is-stepwise-linear-regression
|
<p>I am reading about 'interaction effects on linear regression' <a href="https://jp.mathworks.com/help/stats/linear-regression-with-interaction-effects.html?lang=en" rel="nofollow noreferrer">here</a> and came across 'stepwise linear regression'. </p>
<p>There are originally 5 predictors in the model. This means that by using ordinary linear regression, we have $Y = c + \sum_{i=1}^{5} a_iX_i$.</p>
<p>Then it says here: For the initial model, use the full model with all terms and their pairwise interactions.</p>
<p>The succeeding steps involve what it calls 'stepwise linear regression'.</p>
<p>I am confused by this statement. Can anyone please give an insight on what 'stepwise linear regression' is all about? What are its advantages and why does it need to be done? </p>
|
<p>Stepwise linear regression is a method by which you leave it up to an automated procedure to test each predictor variable in a stepwise fashion: one predictor is inserted into the model and kept if it "improves" the model. "Improves" is defined by the type of stepwise regression being done; it can be judged by AIC, BIC, or other criteria. If a predictor worsens the model, it is taken back out. It does some of the work for you. <strong>DON'T SKIP THE NEXT PARAGRAPH!!!!</strong></p>
<p><strong>HOWEVER!!!!</strong> this method should be avoided. Nothing is wrong with the mathematics, but the logical thinking about how and why each variable should be in the model is not taken into account. What is your reasoning for putting this or that variable in the model? Questions of that nature, which probe our uncertainty about some variable of interest, are not accounted for by the stepwise process, and the criteria it uses to include or exclude variables can't answer them either. People loved it because they could dump 20+ predictor variables in and get an "equation", but they didn't know whether it was any good, and the thinking behind what was in the equation was lost. Otherwise, it's like predicting shoe size from ice cream scoops (totally bogus, even if r^2 = 1.00).</p>
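<p>As an illustration of the mechanics described above (not an endorsement), here is a minimal forward-stepwise sketch in Python/NumPy. The helper names and the synthetic data are hypothetical; a real analysis would use a packaged routine such as R's <code>step()</code>.</p>

```python
import numpy as np

def aic(X, y):
    """AIC of an OLS fit of y on X (X already contains an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def forward_stepwise(X, y):
    """Greedily add the candidate predictor that most lowers AIC; stop when none helps."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    current = np.ones((n, 1))          # start from the intercept-only model
    best = aic(current, y)
    while remaining:
        score, j = min((aic(np.column_stack([current, X[:, c]]), y), c) for c in remaining)
        if score >= best:              # no candidate improves AIC
            break
        best = score
        selected.append(j)
        remaining.remove(j)
        current = np.column_stack([current, X[:, j]])
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)   # only predictors 0 and 1 matter
print(sorted(forward_stepwise(X, y)))
```

<p>Note that the procedure happily reports a model even when spurious predictors slip in, which is exactly the criticism above.</p>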
| 15
|
linear regression
|
Coefficients linear and log-linear regression model
|
https://stats.stackexchange.com/questions/221910/coefficients-linear-and-log-linear-regression-model
|
<p>I performed both a linear and log-linear regression to predict the price of a smartphone based on its characteristics.
Now I have a question concerning the coefficients between the two models.</p>
<p>In the linear regression model, the coefficient of the dummy variable GPS (included or not) is 47.7.
This means that smartphone users pay on average 47.7 euro more for a smartphone
with a GPS built in than for one without, while holding the other variables in the model constant.</p>
<pre><code>lm <- lm(Price ~ ., data=data_price2)
summary(lm)
Call:
lm(formula = Price ~ ., data = data_price2)
Residuals:
Min 1Q Median 3Q Max
-702.43 -46.68 -6.49 37.59 1522.53
Coefficients: (38 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 44.62802 70.21355 0.636 0.525128
Screensize -6.78973 7.14553 -0.950 0.342155
Multitouch 11.20542 12.62356 0.888 0.374861
nbrCores 14.58104 2.67044 5.460 5.53e-08 ***
Processorspeed 46.84652 9.54521 4.908 1.02e-06 ***
Memory -24.12829 6.02706 -4.003 6.54e-05 ***
nbrSims -9.23095 8.00187 -1.154 0.248842
CameraBack 3.10923 0.62724 4.957 7.94e-07 ***
CameraFront 10.69124 2.45340 4.358 1.40e-05 ***
Autofocus -20.51415 9.40548 -2.181 0.029326 *
Flitsertype 10.63140 7.10996 1.495 0.135043
5-GHzOndersteuning NA NA NA NA
GPS 47.68043 11.81778 4.035 5.73e-05 ***
....
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 102.3 on 1556 degrees of freedom
Multiple R-squared: 0.7766, Adjusted R-squared: 0.7613
F-statistic: 51.02 on 106 and 1556 DF, p-value: < 2.2e-16
</code></pre>
<p>Next, when we take a look at the log-linear regression model, the coefficient for the GPS variable is 2.249e-02, which means that the smartphone retail price increases by 2.27% (= e^0.02249 − 1) when GPS is included, while holding other variables in the model constant.</p>
<pre><code>lm3 <- lm(log(Price) ~ ., data = data_price2 )
summary(lm3)
Call:
lm(formula = log(Price) ~ ., data = data_price2)
Residuals:
Min 1Q Median 3Q Max
-2.3367 -0.1964 -0.0008 0.1896 3.1645
Coefficients: (38 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.268e+00 2.598e-01 12.575 < 2e-16 ***
Screensize 4.878e-02 2.644e-02 1.845 0.065255 .
Multitouch 2.155e-02 4.672e-02 0.461 0.644685
nbrCores 5.670e-02 9.883e-03 5.737 1.16e-08 ***
Processorspeed 7.306e-02 3.533e-02 2.068 0.038787 *
Memory 8.273e-03 2.231e-02 0.371 0.710761
nbrSims -3.488e-02 2.961e-02 -1.178 0.239022
CameraBack 9.779e-03 2.321e-03 4.213 2.67e-05 ***
CameraFront 5.348e-02 9.080e-03 5.890 4.73e-09 ***
Autofocus 1.061e-02 3.481e-02 0.305 0.760654
Flitsertype 1.080e-01 2.631e-02 4.105 4.26e-05 ***
5-GHzOndersteuning      NA         NA      NA       NA
GPS 2.249e-02 4.374e-02 0.514 0.607221
....
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.3785 on 1556 degrees of freedom
Multiple R-squared: 0.7974, Adjusted R-squared: 0.7835
F-statistic: 57.76 on 106 and 1556 DF, p-value: < 2.2e-16
</code></pre>
<p>The average price for a smartphone in my model is 232€. So, in the log-linear model 2.27% of 232€ is about 5.28€. How come this value is so different in comparison with the result obtained from the linear regression model?</p>
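<p>As a quick numeric sanity check of the percent-change interpretation (a hypothetical one-off computation, not part of the fitted models), the multiplicative effect of a dummy coefficient b in a log-linear model is e^b − 1:</p>

```python
import math

b = 2.249e-2               # log-price coefficient for GPS from the summary above
pct = math.exp(b) - 1      # multiplicative effect of the dummy on price
print(round(100 * pct, 2))         # about 2.27 (percent)
print(round(pct * 232, 2))         # about 5.28 (euro, at the mean price)
```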
|
<p>It's not just GPS, whose coefficient is "significant" in the linear price model but not in the log-price model. Many of your predictors change in apparent "significance" between the two models: screen size, memory, auto focus, flitsertype, too.</p>
<p>This is probably due to significant correlations among sets of your predictors, called multicollinearity. In that situation, exactly which predictor gets "credit" for an influence on outcome (in terms of a significant regression coefficient) depends strongly on peculiarities of the data sample at hand. Best guess is that the log transformation of price simply changed which variables among the multicollinear set happened to get that "credit." The distressingly large number of coefficients "not defined because of singularities" might even represent perfect correlations among some predictors.</p>
<p>Depending on what you're trying to accomplish, you might be better off using ridge regression, which better handles multicollinearity as it tends to treat correlated predictors together. But first look closely at the relations among your predictors, remove any that are perfectly correlated to other predictors, and think hard about how you want to deal with sets of highly correlated predictors.</p>
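<p>A toy illustration of the point about multicollinearity and ridge (synthetic data, not the smartphone set): with two nearly identical predictors, OLS splits the credit between them unstably, while ridge shares it and shrinks the coefficient vector.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)       # almost perfectly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)    # true coefficients (1, 1)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
lam = 1.0                                 # ridge penalty
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

print("OLS:  ", beta_ols)     # the split of credit between the twins is unstable
print("ridge:", beta_ridge)   # close to (1, 1): credit shared
```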
| 16
|
linear regression
|
linear regression vs linear mixed effect model coefficients
|
https://stats.stackexchange.com/questions/161703/linear-regression-vs-linear-mixed-effect-model-coefficients
|
<p>It is my understanding that linear regression models and linear mixed effect regression models will produce the same regression coefficients (i.e., fixed effects); however, linear regression models produce downwardly biased standard errors leading to inflated Type I error (Cohen, Cohen, Aiken, & West, 2003). Yet, I have a dataset where the linear regression and mixed model coefficients are orders of magnitude different and I do not understand why. The regressions have only one predictor and I estimate a random effect for just the intercept in the linear mixed effect regression model. Does anyone know the conditions under which the model coefficients will be discrepant?</p>
<p>As requested by a comment, here is my R code and output as well as the dataset attached. Notice the linear regression slope is twice the linear mixed effect model fixed slope and the intercepts have different signs! </p>
<pre><code>lm1 <- lm(Y ~ X, data = d); lm1$coefficients
(Intercept) X
-1.132507 1.184904
lmer1 <- lmer(Y ~ X + (1 | ID), data = d); lmer1@beta
[1] 1.6767616 0.6376439
ID      Y      X
 1.00   1.00   3.00
 1.00   2.00   4.00
 1.00   3.00   3.00
 2.00   5.00   6.00
 2.00   4.00   4.00
 2.00   6.00   6.00
 3.00   7.00   6.00
 3.00   8.00   8.00
 3.00   9.00   5.50
 4.00   2.00   4.00
 4.00   3.00   3.00
 4.00   4.00   5.50
 5.00   5.00   5.00
 5.00   5.00   7.00
 5.00   6.00   5.50
 6.00   7.00   7.00
 6.00   6.00   4.50
 6.00   8.00   6.00
 7.00   3.00   4.00
 7.00   4.00   3.00
 7.00   2.00   4.00
 8.00   1.00   2.50
 8.00   2.00   4.00
 8.00   1.00   3.00
 9.00   5.00   6.00
 9.00   6.00   6.00
 9.00   4.00   6.50
10.00   7.00   7.00
10.00   8.00   8.00
10.00   9.00   7.00
11.00   8.00   7.00
11.00   8.00   5.50
11.00   7.00   6.00
12.00   6.00   6.50
12.00   4.00   4.00
12.00   2.00   4.00
13.00   4.00   3.50
13.00   5.00   5.00
13.00   6.00   4.00
14.00   6.00   5.50
14.00   7.00   7.00
14.00   5.00   4.50
15.00   3.00   4.50
15.00   4.00   6.00
15.00   2.00   5.50
16.00   1.00   2.00
16.00   2.00   3.00
16.00   3.00   6.00
17.00   4.00   3.00
17.00   2.00   4.50
17.00   3.00   3.00
18.00   5.00   5.00
18.00   6.00   6.00
18.00   4.00   3.00
19.00   7.00   7.50
19.00   8.00   7.50
19.00   6.00   5.50
20.00   9.00   6.50
20.00   8.00   7.00
20.00   9.00   6.00
</code></pre>
|
<p>I don't know that I can give a rigorous theoretical explanation, but a picture may make things clearer:</p>
<p><a href="https://i.sstatic.net/rsFi2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rsFi2.png" alt="enter image description here"></a></p>
<ul>
<li>The blue line is the OLS fit, the gray line is the population-level prediction for the mixed model. The individual lines are predicted lines (all equal slopes, randomly varying intercepts) for each ID. </li>
<li>Since there is some correlation between the mean values of X and Y for each group, some of the variability that would go into the slope is instead taken out by the random intercept term.</li>
<li>The apparently large difference in the intercepts is partly caused by extrapolation (the data starts at X=2, the intercept refers to the expected value at X=0).</li>
</ul>
<hr>
<pre><code>d <- data.frame(ID=factor(rep(1:20,each=3)),
Y=c(1,2,3,5,4,6,7,8,9,2,3,4,5,5,6,7,6,
8,3,4,2,1,2,
1,5,6,4,7,8,9,8,8,7,6,4,
2,4,5,6,6,7,5,3,4,2,1,2,
3,4,2,3,5,6,4,7,8,6,9,8,9),
X=c(3,4,3,6,4,6,6,8,5.5,4,3,5.5,5,7,5.5,7,4.5,6,4,
3,4,2.5,4,3,6,6,6.5,7,8,7,7,5.5,6,6.5,4,4,3.5,
5,4,5.5,7,4.5,4.5,6,5.5,2,3,6,3,4.5,3,5,6,3,
7.5,7.5,5.5,6.5,7,6))
lm1 <- lm(Y ~ X, data = d)
library(lme4)
lmer1 <- lmer(Y ~ X + (1 | ID), data = d)
ff <- fixef(lmer1)
## get predictions
pp <- d
pp$Y <- predict(lmer1)
library(dplyr)
pp <- pp %>%
group_by(ID) %>%
filter(Y %in% range(Y))
library(ggplot2); theme_set(theme_bw())
ggplot(d,aes(X,Y,colour=ID))+
geom_point()+
scale_colour_discrete(guide=FALSE)+
geom_line(data=pp)+
scale_x_continuous(limits=c(0,8))+
geom_smooth(method="lm",aes(group=1),fullrange=TRUE)+
geom_abline(slope=ff["X"],intercept=ff["(Intercept)"],
colour="darkgray",lwd=1.5)
ggsave("CV161703.png")
</code></pre>
| 17
|
linear regression
|
Multidimensional linear regression (not multiple linear regression)
|
https://stats.stackexchange.com/questions/612513/multidimensional-linear-regression-not-multiple-linear-regression
|
<p>Let <span class="math-container">$p$</span> be a positive integer and suppose that each observation in my data set is a length-<span class="math-container">$p$</span> multivariate normal vector, and I have <span class="math-container">$n$</span> (an integer) observations of the length-<span class="math-container">$p$</span> multivariate normal vector. So
<span class="math-container">$$
\vec{Y} = \beta_0 + \beta_1 \vec{X}_{1} + \cdots + \beta_k \vec{X}_{k} + \vec{\epsilon},
$$</span>
with <span class="math-container">$\vec{\epsilon} \sim N_p(\vec{0}, \Sigma) $</span>, <span class="math-container">$\Sigma$</span> is a covariance matrix of an observation-vector, <span class="math-container">$\beta_i \in \mathbb{R}$</span> (for <span class="math-container">$i \in \{0,1,\cdots,k\}$</span>) and <span class="math-container">$X_i \in \mathbb{R}^p$</span>. I am in a situation where this model looks relevant to my problem, but I have never been taught how to generalize the usual regression model into one where each observation is itself a vector of size <span class="math-container">$p>1$</span>.</p>
<p>Is this called multivariate multiple regression? How can I find literature for it? If I look up multivariate or multidimensional linear regression I only get material on the multiple linear regression model (the case where <span class="math-container">$p=1$</span>).</p>
|
<p>Much confusion can come from the too-frequent lack of distinction between "multivariate" and "multiple" regression. Although one might argue that "multivariate" can describe any situation with multiple variables, it's best current practice to restrict "multivariate" to situations with multiple outcome variables. See Hidalgo, B and Goodman, M (2013) <a href="https://doi.org/10.2105/AJPH.2012.300897" rel="nofollow noreferrer">American Journal of Public Health 103: 39-40</a>, or <a href="https://stats.stackexchange.com/q/447455/28500">this page</a> or <a href="https://stats.stackexchange.com/q/2358/28500">this page</a>. Having more than one predictor variable is then "multiple" or "multivariable" regression. This ideal distinction, unfortunately, is too often neglected; at least once I have published "multivariate" when I should have said "multivariable."</p>
<p>For your application, a classic multivariate multiple regression model would seem to be OK. <a href="https://stats.stackexchange.com/q/11127/28500">This page</a> illustrates such a model. Fox and Weisberg have an <a href="https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendices/Appendix-Multivariate-Linear-Models.pdf" rel="nofollow noreferrer">online appendix</a> to their text that explains in detail. The point estimates end up the same as with separate regressions for each outcome, but the (co)variances are adjusted to take the correlations into account.</p>
<p>More generally, there are several ways to deal with correlated outcomes. Chapter 7 of Frank Harrell's <a href="https://hbiostat.org/rmsc/long.html" rel="nofollow noreferrer">Regression Modeling Strategies</a> provides a useful overview in a table. That chapter focuses on generalized least squares, which avoids the very strict no-missing-values requirement of classical multivariate multiple regression.</p>
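<p>The equality of point estimates mentioned above is easy to verify numerically. A hypothetical NumPy sketch (closed-form OLS fitted jointly over all outcomes vs. one outcome at a time):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p = 50, 3, 2                         # 50 cases, 3 predictors, 2 outcomes
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
B_true = rng.normal(size=(k + 1, p))
Y = X @ B_true + rng.normal(size=(n, p))

# Joint multivariate fit: minimizes ||XB - Y|| over all outcome columns at once
B_joint, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Separate univariate fits, one outcome column at a time
B_sep = np.column_stack([np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(p)])

print(np.allclose(B_joint, B_sep))         # identical coefficient estimates
```

<p>The multivariate machinery matters for the covariances and joint tests, not for the estimates themselves.</p>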
| 18
|
linear regression
|
Simple linear regression in multiple linear regression analysis
|
https://stats.stackexchange.com/questions/266596/simple-linear-regression-in-multiple-linear-regression-analysis
|
<p>I am doing a multiple linear regression analysis project, and my instructor told me that I shouldn't be fitting the simple linear regressions at all. Does that mean scatter plots and added variable plots and diagnostic plots do not matter for individual predictors? I know that I probably don't need to transform individual predictors, but I'm not too sure about the plots.</p>
| 19
|
|
linear regression
|
Can we solve multiple linear regression using simple linear regression solver?
|
https://stats.stackexchange.com/questions/195199/can-we-solve-multiple-linear-regression-using-simple-linear-regression-solver
|
<p>Suppose I have a blackbox function that solves simple linear regression. Can I use this function to solve "multiple" linear regression?
The blackbox computes the slope and intercept in a simple linear regression model.</p>
| 20
|
|
linear regression
|
Linear model vs. linear regression
|
https://stats.stackexchange.com/questions/129063/linear-model-vs-linear-regression
|
<p>I have a question that I find really confusing regarding linear modelling and linear regression. I have expectations regarding the way some dependent variables (DV) are going to evolve with an independent variable (IV).</p>
<p>In order to check for a relationship between IV and DV, on several participants, I just computed the linear model by calculating Y as follow:</p>
<p>Y = XB + E </p>
<p>Therefore I used the weight of my linear model as B and my DV as X. Finally I just calculated a weighted sum. Then I tested for an effect by using a one sample t test on the various Y.</p>
<p>Well I'm confused because I don't see the difference between doing that and computing a linear regression by ordinary least square and calculating the slopes.</p>
<p>With the two methods (the weighted sum of the predictors vs. the linear regression), I get different numerical values, but these values are perfectly correlated (r = 1).</p>
<p>If anyone can enlighten me about the theoretical difference between these two methods, thank you!</p>
|
<p>I don't know how you define a "linear model", but in general this term is used as a synonym for linear regression (e.g. on <a href="http://en.wikipedia.org/wiki/Linear_model" rel="nofollow">Wikipedia</a>). Also from your definition:</p>
<p>$$y = X\beta + \epsilon$$</p>
<p>it appears that this <em>is</em> linear regression. So you computed a regression twice, and that is why the results are equivalent. On the other hand, you say that the estimated values are different (while being highly correlated). There could be two reasons for that:</p>
<ul>
<li>Generally, if you compute regression, you include intercept in the model so it is:</li>
</ul>
<p>$$y = \beta_0 + X\beta_1 + \epsilon$$</p>
<p>so if you computed regression without the intercept and then on the second time with the intercept, the results could be a little bit different. </p>
<ul>
<li>You didn't describe the way how you estimated both models. If you used different algorithms for estimating both models, the results should be similar but there could be slight differences.</li>
</ul>
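<p>The first reason (the intercept) can be checked directly. A hypothetical NumPy sketch: fitting the same data with and without an intercept column gives different slopes whenever the true intercept is nonzero, even though the two sets of fitted values remain highly correlated.</p>

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(1, 5, size=40)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=40)   # nonzero true intercept

# Regression through the origin: a single coefficient on x
slope_no_int = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]

# Ordinary regression with an intercept column prepended
X = np.column_stack([np.ones_like(x), x])
intercept, slope_int = np.linalg.lstsq(X, y, rcond=None)[0]

print(slope_no_int, slope_int)   # the slopes differ
```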
| 21
|
linear regression
|
Hierarchical Linear Regression should always outperform Ordinary Linear Regression
|
https://stats.stackexchange.com/questions/337235/hierarchical-linear-regression-should-always-outperform-ordinary-linear-regressi
|
<p>I am building a hierarchical linear model with varying intercepts. It takes the form for each unit $i$ in group $j$:</p>
<p>$$y_{ij} = \alpha_j + \beta_1 x_{ij,1} + \beta_2 x_{ij,2} \quad (1) $$</p>
<p>I am developing this hierarchical linear model using a complete Bayesian Analysis using stan. In stan, I am using fairly non-informative priors. All variables -- for each $j$, $\alpha_j$, $\beta_1$, and $\beta_2$ -- are given normal distributions with hyper-parameters. Additionally the output $y$ follows $N(\hat{y}, \sigma_y)$ where $\hat{y} = y_{ij}$ from above.</p>
<p>These hyper-parameters are all given uniform distributions over $[0,\infty)$ with the exception of </p>
<p>$$\sigma_y \sim U(0, 100)$$ $$\sigma_a \sim U(0, 100)$$ $$\sigma_b \sim U(0, 100)$$</p>
<hr>
<p>By contrast, if I build a linear regression model with complete pooling, then the model takes the form</p>
<p>$$y_{ij} = \alpha + \beta_1 x_{ij,1} + \beta_2 x_{ij,2} \quad(2) $$</p>
<p><strong>Question:</strong> The non-informative choices of priors have the effect that the parameter space for each $\alpha_j$, $\beta_1$, and $\beta_2$ includes the coefficients obtained from the regression in (2). It should then follow that hierarchical linear regression performs at least as well as ordinary regression. Running this in py-stan, I find that the answer to this assertion is "no." Why?</p>
<p>Edit: By "perform as well as", I mean that when comparing the RMSE between predictions using (1) and (2), (2) always outperforms (1). In fact, the opposite is seen in the scatter plot below, which displays, for years $y \in [2000,2015]$, the RMSE of (1) and (2) trained on data from years $< y$ and then tested on data from year $y$.</p>
<p><a href="https://i.sstatic.net/bEuB3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bEuB3.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Aside:</strong></p>
<p>Here is a related example that shares the same spirit of the phenomenon I expect to observe in this example:</p>
<p>Suppose that a regression model of $k$ predictors trained on an output $y$ has $R^2 = G_k$. If we add one predictor $p$ to this model, making the model have a total of $k+1$ predictors, then let $R^2 = G_{k+1}$. Since the set of solution(s) $S_2$ to linear regression in the second instance (with predictor $p$) is a superset of the first regression model's solution space $S_1$, it follows that $G_{k+1} \geq G_{k}$.</p>
<p>That is, if $\beta_1, \beta_2$ are the coefficient vectors for the first and second regression problems, then</p>
<p>$$ G_{k} = 1 - \frac{\sum_{j=1}^{n} (\beta_1^Tx_j - y_j)^2}{\sum_{j=1}^{n} (\bar{y} - y_j)^2} \leq 1 - \frac{\sum_{j=1}^{n} (\beta_2^Tx_j - y_j)^2}{\sum_{j=1}^{n} (\bar{y} - y_j)^2} = G_{k+1} $$</p>
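<p>The nested-model inequality in the aside can be demonstrated numerically. A hypothetical NumPy sketch, adding a pure-noise predictor and checking that in-sample R² never decreases:</p>

```python
import numpy as np

def r2(X, y):
    """In-sample R^2 of an OLS fit (X includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(4)
n = 80
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                     # irrelevant extra predictor
y = x1 + rng.normal(size=n)

X_k  = np.column_stack([np.ones(n), x1])
X_k1 = np.column_stack([np.ones(n), x1, x2])
print(r2(X_k, y), r2(X_k1, y))              # second value is never smaller
```

<p>Note this guarantee is about in-sample fit only; it says nothing about out-of-sample RMSE, which is what the year-ahead comparison above measures.</p>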
| 22
|
|
linear regression
|
Kernelize Linear Regression
|
https://stats.stackexchange.com/questions/388403/kernelize-linear-regression
|
<p>We can kernelize Ridge regression as shown in these notes: <a href="https://www.ics.uci.edu/~welling/classnotes/papers_class/Kernel-Ridge.pdf" rel="nofollow noreferrer">https://www.ics.uci.edu/~welling/classnotes/papers_class/Kernel-Ridge.pdf</a>. </p>
<p>However would it be possible to find a vector <span class="math-container">$\boldsymbol\alpha$</span> such that we can express linear regression as <span class="math-container">$$f(\mathbf x)=\sum_{i=1}^N \alpha_i \kappa(\mathbf x,\mathbf x_i)$$</span>
where <span class="math-container">$\mathbf x\in \mathbb R^d$</span>, and <span class="math-container">$\kappa:\mathbb R^d\times \mathbb R^d\to \mathbb R$</span> is a positive semi-definite kernel (<em>i.e.</em> kernelize linear regression)?</p>
|
<p>I'm assuming that by "linear regression" you mean unregularized linear regression, i.e. ordinary least squares. In that case, then yes, sure: this is just ridge regression with <span class="math-container">$\lambda = 0$</span>. If the kernel matrix <span class="math-container">$K$</span> is invertible, then everything still works.</p>
<p>In fact, for many popular choices of kernel (such as the Gaussian RBF), the matrix <span class="math-container">$K$</span> is <em>guaranteed</em> to be invertible. But this is true only in exact arithmetic; in practice, the trailing eigenvalues of <span class="math-container">$K$</span> will be far too close to zero for computer arithmetic to handle them properly, and you'll get <em>extreme</em> numerical instability. You'll also be extremely close to severe <a href="https://en.wikipedia.org/wiki/Multicollinearity" rel="nofollow noreferrer">multicollinearity</a>, so that numerically you'll run into all the same problems there (which are often addressed by ridge regression).</p>
| 23
|
linear regression
|
linear regression after rotation
|
https://stats.stackexchange.com/questions/193723/linear-regression-after-rotation
|
<p>I have a set of 2-dimensional points [x,y], with the barycenter at (0,0), and I'm rotating it.</p>
<p>I'm wondering why the linear regression of this set of points does not rotate by the same angle.</p>
<p>Below is a sample python code :</p>
<pre><code>#creating a vector with a barycenter in 0,0
vecta=myData-barycentre(myData)
#rotating it by pi/4
vectb=rotate(pi/4,vecta)
#coef1 and 2 are the coefficient of the linear regression : [slope,intercept]
coef1=plot_points(vecta,"green")
coef2=plot_points(vectb,"blue")
print coef1,coef2
</code></pre>
<blockquote>
<blockquote>
<p>[ -3.16058764e-02 1.71389357e-14] [-0.06893819 0. ]</p>
</blockquote>
</blockquote>
<p>Intercept is 0,0 for both lines</p>
<pre><code>#printing the result (I should get pi/4)
print atan(coef1[0])-atan(coef2[0]),pi/4
</code></pre>
<blockquote>
<blockquote>
<p>0.0372339369372 0.785398163397</p>
</blockquote>
</blockquote>
<pre><code>#showing the graphs
show()
</code></pre>
<p><a href="https://i.sstatic.net/BtYhF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BtYhF.png" alt="enter image description here"></a></p>
<p>One can clearly see that the points have been rotated by pi/4 but the slope has not.</p>
<p>What am I missing here?</p>
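<p>The effect is reproducible without the plotting helpers. A self-contained sketch with hypothetical data in place of <code>myData</code> (recall that OLS minimizes vertical distances only, so the fitted slope is not equivariant under rotation of the point cloud):</p>

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(-5, 5, 60)
y = 0.1 * x + rng.normal(size=60)            # shallow, noisy cloud
pts = np.column_stack([x, y])
pts -= pts.mean(axis=0)                      # barycenter at (0, 0)

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rot = pts @ R.T                              # every point rotated by pi/4

s1 = np.polyfit(pts[:, 0], pts[:, 1], 1)[0]  # OLS slope before rotation
s2 = np.polyfit(rot[:, 0], rot[:, 1], 1)[0]  # OLS slope after rotation
print(np.arctan(s2) - np.arctan(s1), np.pi / 4)   # the angles do not match
```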
| 24
|
|
linear regression
|
Non Linear Regression -Regression Trees
|
https://stats.stackexchange.com/questions/175103/non-linear-regression-regression-trees
|
<p>For datasets of higher dimensions, how do I decide if a Linear model is sufficient to fit the data or if I have to use non linear models like regression trees to fit the data ? </p>
<p>NOTE: I did try both linear and non-linear models to fit the data and observed that the mean squared error is substantially reduced by replacing the linear regression model with a non-linear regression model (like M5 regression trees).
But I do not understand how to visualize linear relationships in higher dimensions. So if I fit my 5-dimensional dataset using a linear model, what threshold would suggest the need to adopt non-linear models like regression trees?</p>
<p>I have only recently begun learning statistics, so I apologize if this question is too elementary.</p>
|
<p>Welcome to our site. Of course, in regression problems like this our main goals are accuracy and interpretability. Why are we wary of going from linear models to more complex models? Because by adding more parameters we may be over-fitting to our data.</p>
<p>Arguably the best way of dealing with this is <strong>hold-out testing</strong> and <strong>cross-validation</strong>. If you add parameters and accuracy improves, but the accuracy on a held-out set is poor, then you've over-fit to your data.</p>
<p>There are also information-theoretic methods for comparing models of varying complexity, such as the <strong>Likelihood-Ratio-Test</strong> and <strong>AIC</strong>. One special case of model comparison is <strong>nested models</strong> where all of the resulting regressors of one model are also possible in the other, for instance comparing Linear Regression with some higher order Polynomial Regression. In these cases, the training set accuracy will <strong>always</strong> be as good or better for the more general model.</p>
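<p>The last point (training accuracy never gets worse as a nested model grows) is easy to see numerically. A hypothetical polynomial example in Python:</p>

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 40)
y = 1 + 2 * x + rng.normal(scale=0.3, size=40)   # the true model is degree 1

train_mse = {}
for deg in (1, 3, 6):
    coef = np.polyfit(x, y, deg)                 # nested models: each basis contains the last
    train_mse[deg] = np.mean((np.polyval(coef, x) - y) ** 2)
print(train_mse)   # training error is non-increasing in the degree
```

<p>Which is exactly why held-out error, not training error, should drive the choice.</p>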
| 25
|
linear regression
|
What does 'linear' word in multiple linear regression and linearity assumption in multiple linear regression mean?
|
https://stats.stackexchange.com/questions/621226/what-does-linear-word-in-multiple-linear-regression-and-linearity-assumption-i
|
<p>I am studying linear regression. Does the word "linear" in multiple linear regression refer to the linear relationship between the target variable and the regression coefficients b_0, b_1, b_2, ..., b_n? Also, multiple linear regression has a linearity assumption: does this assumption refer to the linear relationship between the dependent variable and each independent variable, which we confirm using scatter plots?</p>
<p>MLR equation that I am using:</p>
<p><span class="math-container">$\hat y = b_0 + b_1x_1 + b_2x_2 + \dots + b_n x_n$</span></p>
<p>Also, if this linearity assumption is not satisfied, should I then try polynomial regression? And is polynomial regression itself a kind of linear regression, where the independent variables are non-linearly related to the dependent variable but the model is still linear in the regression coefficients?</p>
<p>I know I have asked many questions at once, so please bear with me.</p>
| 26
|
|
linear regression
|
Best linear regression strategy
|
https://stats.stackexchange.com/questions/453277/best-linear-regression-strategy
|
<p>I have 11 variables (4 of them sociodemographic) that predict my dependent variable. I want to perform linear regression analysis and I have two options. One: exclude the sociodemographic variables from the regression, just describe the participants' sociodemographics in the results, and run a simple linear regression for each variable separately. Two: include the sociodemographics and perform a hierarchical linear regression (which would result in seven regression models). What do you think the best option would be?</p>
|
<p>I don't know why including sociodemographics in a hierarchical linear regression gives you seven regression models. My very strong preference would be to fit the hierarchical linear model with demographics. But an equally strong preference would be to have a theory before I even looked at the data about which variables were important, why, and in what direction they would have an effect. Otherwise you have an enormous model space and stand a very good chance of getting spurious but very good looking fits to your data. I find Richard Berk one of the most thought provoking authorities on this subject who talks to users of the method rather than methodological theoreticians (e.g. <a href="https://www.amazon.co.uk/Regression-Analysis-Constructive-Quantitative-Techniques/dp/0761929045" rel="nofollow noreferrer">https://www.amazon.co.uk/Regression-Analysis-Constructive-Quantitative-Techniques/dp/0761929045</a>) </p>
| 27
|
linear regression
|
Linear and non-linear regression analysis
|
https://stats.stackexchange.com/questions/295550/linear-and-non-linear-regression-analysis
|
<p>I'm currently reading Maths and Stats for Web Analytics and Conversion Optimisation by Himanshu Sharma and noticed the following regarding regression analysis:</p>
<p>"If there is no or weak linear relationship between two variables or in other words the correlation between the two variables is zero or weak then this relationship is not good enough to predict anything. Therefore there is no point in running regression analysis."</p>
<p>This strikes me as ignoring non-linear regression analysis. I could understand if the last sentence was "Therefore there is no point in running linear regression analysis" but the author excludes all forms of regression.</p>
<p>My question is, even if the R is low, if you chart the data and see a curved scatter plot, should you be looking to run non-linear regression analysis as opposed to scrapping analysis entirely?</p>
<p>It is implied that the R calculation is Pearson's.</p>
|
<p>The statement is at best misleading and at worst wrong and you don't need to go to nonlinear regression to prove it wrong. Here is the statement again:</p>
<blockquote>
<p>If there is no or weak linear relationship between two variables or in
other words the correlation between the two variables is zero or weak
then this relationship is not good enough to predict anything.
Therefore there is no point in running regression analysis.</p>
</blockquote>
<p>This ignores:</p>
<ul>
<li>Moderation effects</li>
<li>Mediation</li>
<li>Quadratic relationships (which are easily examined within linear regression)</li>
<li>The fact that finding a small effect is often interesting and scientifically important.</li>
</ul>
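<p>A minimal numeric illustration of the third point (hypothetical data): a perfect quadratic relationship can have a Pearson correlation of zero, yet it is captured exactly by a regression that is still linear in its coefficients.</p>

```python
import numpy as np

x = np.linspace(-3, 3, 61)        # symmetric around zero
y = x ** 2                        # exact quadratic relationship, no noise

r = np.corrcoef(x, y)[0, 1]       # Pearson correlation: essentially zero

coef = np.polyfit(x, y, 2)        # quadratic term, still "linear regression"
resid = y - np.polyval(coef, x)
r2 = 1 - resid.var() / y.var()
print(r, r2)                      # ~0.0 and ~1.0
```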
| 28
|
linear regression
|
Linear regression, independent variable stationarity
|
https://stats.stackexchange.com/questions/319604/linear-regression-independent-variable-stationarity
|
<p>If we have a classical linear regression model, and one of the regressors is a time series (e.g. GDP), is it necessary for that variable to be stationary? I do not think so, because variation in the values yields better results in linear regression, but I have encountered different opinions.</p>
|
<p>What you assume in a linear regression model is that the <em>error term</em> is a white noise process and, therefore, it must be stationary. There is no assumption that either the independent or dependent variables are stationary.</p>
<p>However, consider the following simple linear regression model for time series data:</p>
<p>$$Y_t = a + b X_t + \varepsilon_t$$</p>
<p>If $Y_t$ is stationary but $X_t$ is not, then if you rearrange the equation:</p>
<p>$$Y_t - \varepsilon_t = a + bX_t$$</p>
<p>Then, the left-hand side is stationary, but the right-hand side is not, so the model can't be correct. </p>
<p>If, instead, both variables are not stationary, then:</p>
<p>$$Y_t - bX_t = a + \varepsilon_t$$</p>
<p>The right-hand side is stationary, but the left-hand side may or may not be. If it's not, then the model is wrong. It's possible for it to be stationary, as in a cointegration model for example, but it need not be.</p>
<p>Violating the assumption about the stationarity of the error process can lead to all sorts of problems, like spurious regressions where what appears to be a significant coefficient is frequently really not at all significant.</p>
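<p>The spurious-regression point can be illustrated with a hypothetical simulation: regressing one random walk on an independent one yields far larger R² values, on average, than regressing independent stationary series on each other.</p>

```python
import numpy as np

def r2_xy(x, y):
    """In-sample R^2 of a simple regression of y on x with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(8)
walks, noises = [], []
for _ in range(100):
    a, b = rng.normal(size=(2, 300))
    walks.append(r2_xy(np.cumsum(a), np.cumsum(b)))   # two independent random walks
    noises.append(r2_xy(a, b))                        # two independent stationary series
print(np.mean(walks), np.mean(noises))                # walks show large spurious R^2
```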
| 29
|
linear regression
|
Time Series w/ Linear Regression
|
https://stats.stackexchange.com/questions/401771/time-series-w-linear-regression
|
<p>I have some time series data for prices that I'm trying to perform linear regression on. However, I feel that what I'm doing is incorrect and was hoping someone could point me in the right direction.</p>
<p><em><strong>Background</strong></em></p>
<p>The overall background of what I'm doing is taking sentiment analyzed from Twitter data and using that to capture trends in the Korean and American Bitcoin markets and see if Korean markets are more sensitive to trends and social media sentiment. </p>
<p><em><strong>Data</strong></em></p>
<p>My data looks like this:</p>
<pre><code>date mentions likes retweets sentiment Volume Close
2017-05-10 0.23 0.2 0.52 -0.24 0.9 0.12512
2017-05-11 -0.12 0.51 0.67 0.8 0.6 0.12353
2017-05-12 0.83 0.12 -0.12 0.23 -0.9 -0.35235
.
.
.
2019-01-10 0.123 0.27 0.87 0.12 0.52 0.87890
</code></pre>
<p>This is just an example DataFrame that I'm working with, but I believe it captures what I'm doing well enough. There are a total of 608 samples from May 10th, 2017 to January 10th, 2019.</p>
<p><code>Volume</code> and <code>Close</code> actually pertain to the price data. The other columns are for Tweets. I used to have several Tweets per day but I've combined all of them and scaled them to be between (-1, 1).</p>
<p>The target variable that I'm attempting to predict is <code>Close</code>.</p>
<p><em><strong>Methodology</strong></em></p>
<p>What I'm trying to do in this particular question is <em>predict values using past time series data with linear regression</em>. My first benchmark is the RMSE, and my plan was to use various models on the same data to compare how they perform in comparison to that benchmark (e.g. SVM, LSTM NN's, etc.)</p>
<p>If I were to write out the steps of my work in order, it would look something like this:</p>
<ol>
<li>Prepare data.</li>
<li>Predict test value data using RMSE.</li>
<li>Predict test value data using linear regression and compare.</li>
<li>Use other models.</li>
</ol>
<p>I've attempted to use linear regression from Python's <code>sklearn.linear_model.LinearRegression</code> library. When I initially ran it on the training data, <code>date</code> was a string type, so the program alerted me that it cannot work with string data. So I simply dropped the <code>date</code> column and worked with the other values in the training and test sets.</p>
<p>After dropping the label I inserted it back into the DataFrame after using <code>sklearn.linear_model.LinearRegression</code> to make predictions and got the following image:</p>
<p><em><strong>Results</strong></em></p>
<p><a href="https://i.sstatic.net/0VqHU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0VqHU.png" alt="enter image description here"></a></p>
<p>For comparison, the graph I obtained for RMSE is as follows:</p>
<p><a href="https://i.sstatic.net/az3Pl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/az3Pl.png" alt="enter image description here"></a></p>
<p>The blue and yellow lines together graph out the historical price of Bitcoin. The green lines are the predicted values that have been trained on the training data (blue line). A simple RMSE measure doesn't predict too well as expected, and linear regression also seems to perform very poorly. The poor performance of linear regression was expected, but I get the feeling that something is fundamentally wrong and I'm wondering if I'm understanding implementing linear regression for time series analysis correctly. The Python Scikit-Learn linear regression model uses a basic regression method without any extra functionality (e.g. moving average) as far as I'm aware.</p>
<p>The main concern that I'm feeling is if this is the correct way to implement linear regression with time series data. In my method I disregarded the <code>date</code> values when training the model, which is an essential part of time series analysis. Is it possible to use linear regression with time series in this case?</p>
<p><strong>Python Code</strong></p>
<p>In case anyone's wondering what code I used or if it may help. Code is in Python and uses Scikit-Learn, Pandas, and Matplotlib:</p>
<pre><code>train_label, test_label = train.pop('Close'), test.pop('Close')
linreg = LinearRegression()
linreg.fit(train_.drop('date', axis=1), train_label)
prediction = linreg.predict(test.drop('date', axis=1))
rmse = (np.sqrt(np.mean(np.array(test - prediction) ** 2)))
test['Predictions'] = 0
test['Predictions'] = prediction
train.insert(loc=11, column='Close', value=train_label)
test.insert(loc=11, column='Close', value=test_label)
plt.plot(train['Close'], label='Training Set')
plt.Figure()
plt.plot(test['Close'], label='Test Set')
plt.Figure()
plt.plot(test['Predictions'], label='Predictions')
plt.legend()
plt.grid(True)
plt.show()
</code></pre>
| 30
|
|
linear regression
|
Multivariate Weighted Linear Regression
|
https://stats.stackexchange.com/questions/52363/multivariate-weighted-linear-regression
|
<p>Very simple. I am looking for a package that does Multivariate Linear Regression with weights on the observations. Does anyone know of a package that does this? I am shocked that I have not been able to find any.</p>
<p><strong>NOTE:</strong> R does <em>NOT</em> do multivariate regression. The <code>lm()</code> help page specifically states: "If response is a matrix a linear model is fitted separately by least-squares to each column of the matrix. " This means independent regression models for each response variable. Thus <code>lm()</code> does <strong>NOT</strong> do multivariate linear regression. It merely does several univariate linear regressions for convenience.</p>
|
<p>Try package <a href="http://cran.r-project.org/web/packages/MRCE/MRCE.pdf" rel="nofollow">MRCE</a> in <a href="http://www.r-project.org/" rel="nofollow">R</a>. This is for "Multivariate regression with covariance estimation".</p>
| 31
|
linear regression
|
Linear Regression Model
|
https://stats.stackexchange.com/questions/185851/linear-regression-model
|
<p>Which of the following is NOT a linear regression model?</p>
<pre><code>A. y = w_0 + w_1 * x
B. y = w_0 + w_1 * (x^2)
C. y = w_0 + w_1 * log(x)
D. y = w_0 * w_1 + log(w_1) * x
</code></pre>
|
<p>When we say "linear regression" we mean linearity in <em>parameters</em>, not <em>variables</em>. Therefore, <code>A</code>, <code>B</code> and <code>C</code> are linear (the parameters $w_0$ and $w_1$ enter the equations linearly), while <strong><code>D</code></strong> is not (the parameters enter nonlinearly, through the product $w_0 w_1$ and the logarithm $\log(w_1)$). </p>
<p>See also <a href="https://en.wikipedia.org/wiki/Regression_analysis" rel="nofollow noreferrer">this</a> Wikipedia article, section "Linear regression".</p>
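<p>For instance, model <code>C</code> can be fitted by ordinary least squares after simply transforming the regressor, because it is linear in $w_0$ and $w_1$. A minimal NumPy sketch (the tooling and numbers are my own, for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(1.0, 10.0, size=200)
# Model C: y = w0 + w1 * log(x), with w0 = 2, w1 = 3, plus noise.
y = 2.0 + 3.0 * np.log(x) + rng.normal(scale=0.1, size=x.size)

# Linear in the parameters, so OLS on the transformed regressor works.
X = np.column_stack([np.ones_like(x), np.log(x)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # close to [2, 3]
```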
| 32
|
linear regression
|
Understanding the Assumptions of Linear Regression – Confused About Linearity!
|
https://stats.stackexchange.com/questions/662056/understanding-the-assumptions-of-linear-regression-confused-about-linearity
|
<p>I was recently in an interview and the guy asked me what is the assumptions behind the linear regression, where I mentioned that linear regression assumes a linear relationship between X and Y. The interviewer then gave me the equation:</p>
<p><span class="math-container">$y = \alpha_1 + \alpha_2 x + \alpha_3 x^2$</span></p>
<p>I said this isn't linear since it includes x^2, but the interviewer told me that it's still considered linear because linearity in regression refers to the relationship between
y and the parameters (alpha1, alpha2, alpha3), not necessarily x.</p>
<p>Honestly, I’m confused! I always thought linear regression required x to appear in a linear form. Can someone explain the assumptions of linear regression and clarify why the above equation is still considered linear?</p>
<p>Also, if you know any good resources for learning linear regression in depth, I'd really appreciate it!</p>
|
<p>The interviewer is correct. OLS regression assumes, among other things, that the regression is linear <em>in the parameters</em>. That means that the parameters (<span class="math-container">$\beta$</span>) cannot be raised to powers, or part of trig functions, or whatever. So</p>
<p><span class="math-container">$$Y = \beta_0 + \beta_1x_1 + \beta_2x_1^2 + \beta_3 \sin(x_1)$$</span></p>
<p>is fine. But</p>
<p><span class="math-container">$$Y = \beta_0 + \beta_1x_1 + x_1^{\beta_2} + \sin(\beta_3+x_1)$$</span></p>
<p>is not.</p>
<p>For the assumptions, any good book on introductory stats should do. One that I particularly like is <em>Statistics</em> by Freedman, Pisani, and Purves (you can get any edition).</p>
<p>If you want a somewhat more advanced book that focuses only on regression, then I recommend <em>Regression Modelling Strategies</em> by Frank Harrell.</p>
<p>Both these books will require substantial time and thought. But both will repay those efforts. Personally, I think the first book is worth studying, even if you think you are beyond it.</p>
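<p>To make the "fine" case concrete, the model <span class="math-container">$Y = \beta_0 + \beta_1x_1 + \beta_2x_1^2 + \beta_3 \sin(x_1)$</span> is an ordinary least-squares problem once the columns <span class="math-container">$x_1$</span>, <span class="math-container">$x_1^2$</span> and <span class="math-container">$\sin(x_1)$</span> are built. A NumPy sketch with invented parameter values:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 6.0, size=300)
true = np.array([1.0, -0.5, 0.25, 2.0])  # beta_0 .. beta_3

# The regressors are nonlinear in x, but the model is linear in beta.
X = np.column_stack([np.ones_like(x), x, x**2, np.sin(x)])
y = X @ true + rng.normal(scale=0.1, size=x.size)

# Ordinary least squares recovers all four parameters.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)
```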
| 33
|
linear regression
|
Linear regression correlation
|
https://stats.stackexchange.com/questions/268464/linear-regression-correlation
|
<p>In a linear regression with several variables, a variable has a positive regression coefficient if and only if its correlation with the response is positive. (TRUE OR FALSE)?</p>
|
<p>False - if there's enough positive correlation among the independent variables ($cor(X_i,X_j) > 0$), and they're all positively correlated with the dependent variable ($cor(Y,X_i) > 0$), you can have a situation where one variable has a positive $\beta$ and another has a negative one, especially if linear combinations of the $X$'s have explanatory power. Try regressing ten-year interest rates on 3-year, 5-year, and 15-year rates for a clear example of this.</p>
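<p>A small simulation makes this concrete (the construction below is my own, not from the interest-rate example): x2 is positively correlated with y marginally, yet gets a negative coefficient once x1 is in the model.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

x1 = rng.standard_normal(n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.81) * rng.standard_normal(n)  # cor(x1, x2) ~ 0.9
y = 3.0 * x1 - 1.0 * x2 + rng.standard_normal(n)

# Marginally, x2 is positively correlated with y ...
r_y_x2 = np.corrcoef(y, x2)[0, 1]

# ... but its multiple-regression coefficient is negative.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(r_y_x2, beta[2])  # positive correlation, negative coefficient
```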
| 34
|
linear regression
|
X on Y Linear regression
|
https://stats.stackexchange.com/questions/505423/x-on-y-linear-regression
|
<p>Basically, in a research project I am looking at the linear regression between my independent variable (Government Stringency Index) and my dependent variable (real GDP growth).</p>
<p>One area I investigate assumes that real GDP growth is measured more precisely; switching my variables in the linear regression (to an x-on-y regression) would then associate the error with my independent variable (which is presumably less accurate).</p>
<p>My question is:
After doing this, what statistical tests could I use to determine which is a better fit for my model?</p>
<p><a href="https://i.sstatic.net/8zljX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8zljX.png" alt="enter image description here" /></a></p>
<p>r and r^2 are of course the same. Visually, the best-fit line in the x-on-y regression appears to be worse.</p>
|
<p>For simple linear regression using Ordinary Least Squares, you are minimising the sum of squares of the vertical residuals, and the outlying point has much more influence on the red line than on the blue line. Flipping your chart round (below) so horizontal becomes vertical and vice versa makes this obvious, especially when you remember that regression lines pass through the mean of the data; the square of the vertical residual of the outlying point from the blue line would be huge in this second chart.</p>
<p>Since the outlying point is presumably Covid-19 related, it is fairly clear that in such extreme circumstances you do not in fact have a linear relationship between your two variables.</p>
<p>You might want to ask yourself whether (a) you want a linear model, (b) the outlying point is actually part of the relationship you are trying to model (removing it will affect both lines but will mean that your model will not extend to current circumstances) and (c) which variable is really the dependent variable (if you are trying to predict one from the other, that might give you a clue).</p>
<p><a href="https://i.sstatic.net/e7kCW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e7kCW.png" alt="flipped chart" /></a></p>
| 35
|
linear regression
|
Log-linear regression vs. logistic regression
|
https://stats.stackexchange.com/questions/86720/log-linear-regression-vs-logistic-regression
|
<p>Can anyone provide a clear list of differences between log-linear regression and logistic regression? I understand the former is a simple linear regression model but I am not clear on when each should be used.</p>
|
<p>The name is a bit of a misnomer. Log-linear models were traditionally used for the analysis of data in a contingency table format. While "count data" need not necessarily follow a Poisson distribution, the log-linear model is actually just a Poisson regression model. Hence the "log" name (Poisson regression models contain a "log" link function). </p>
<p>A "log transformed outcome variable" in a linear regression model is <em>not</em> a log-linear model, (neither is an exponentiated outcome variable, as "log-linear" would suggest). Both log-linear models and logistic regressions are examples of <em>generalized linear models</em>, in which the relationship between a <em>linear predictor</em> (such as log-odds or log-rates) is linear in the model variables. They are not "simple linear regression models" (or models using the usual $E[Y|X] = a + bX$ format).</p>
<p>Despite all that, it's possible to obtain equivalent inference on associations between categorical variables using logistic regression and Poisson regression. It's just that in the Poisson model, the outcome variables are treated like covariates. Interestingly, you can set up some models that borrow information across groups in a way much like a proportional odds model, but this is not well understood and rarely used.</p>
<p>Examples of obtaining equivalent inference in logistic and Poisson regression models using R are illustrated below: </p>
<pre class="lang-r prettyprint-override"><code>y <- c(0, 1, 0, 1)
x <- c(0, 0, 1, 1)
w <- c(10, 20, 30, 40)
## odds ratio for relationship between x and y from logistic regression
glm(y ~ x, family=binomial, weights=w)
## the odds ratio is the same interaction parameter between contingency table frequencies
glm(w ~ y * x, family=poisson)
</code></pre>
<p>Interestingly, a lack of association between $y$ and $x$ means the odds ratio is 1 in the logistic regression model and, likewise, the interaction term is 0 in the log-linear model. This gives you an idea of how we measure conditional independence in contingency table data.</p>
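<p>For the 2×2 example above, the common odds ratio can also be computed by hand from the weights; this is the quantity both models estimate (Python here just for the arithmetic):</p>

```python
import math

# Cell counts from the R example's weights w, laid out by (y, x):
#            x=0   x=1
#   y=0      10    30
#   y=1      20    40
w = {(0, 0): 10, (1, 0): 20, (0, 1): 30, (1, 1): 40}

# Sample odds ratio from the contingency table.
odds_ratio = (w[(1, 1)] * w[(0, 0)]) / (w[(1, 0)] * w[(0, 1)])

# The log of this odds ratio is what the logistic slope on x and the
# Poisson y:x interaction coefficient both estimate.
print(odds_ratio, math.log(odds_ratio))  # 0.666..., -0.405...
```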
| 36
|
linear regression
|
Discrepancy between multiple linear regression & simple linear regression results - Which one do I report?
|
https://stats.stackexchange.com/questions/562126/discrepancy-between-multiple-linear-regression-simple-linear-regression-result
|
<p>I have a dependent variable followed by 3 independent variables that I am trying to fit the best model for (using R). Examples of my models are below:</p>
<pre><code>#Multiple linear regression
mod <- lm(y ~ x1 + x2 + x3, data = data)
#Simple linear regressions
mod2 <- lm(y ~ x1, data = data)
mod3 <- lm(y ~ x2, data = data)
mod4 <- lm(y ~ x3, data = data)
</code></pre>
<p>The output of the multiple regression model shows x1 and x3 are both significant whereas x2 is not, but when each of the simple linear regression models are run, x2 is significant as well. Some of the outputs of each are listed below.</p>
<p><strong>My question is this: If the multiple linear regression model is significant, then should that one be used in a report/paper rather than the simple linear regression models? Even though one of the variables is not significant?</strong> I know that in ANOVA you're not supposed to fall back on the main-effects model when there is a significant interaction between main effects, but is there a similar rule in regression?</p>
<p>Also, it should be noted that the R2 and AIC values all favor the multiple regression model.</p>
<pre><code>#Multiple linear regression output (mod):
Call:
lm(formula = y ~ x1 + x2 + x3, data = data)
Residuals:
Min 1Q Median 3Q Max
-0.38347 -0.07256 0.01893 0.09067 0.25851
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.9149320 0.0175039 52.270 < 2e-16 ***
x1 -0.0008528 0.0001137 -7.503 2.16e-12 ***
x2 -0.0008818 0.0013214 -0.667 0.505
x3 -0.0024993 0.0005614 -4.452 1.43e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1142 on 195 degrees of freedom
Multiple R-squared: 0.3791, Adjusted R-squared: 0.3696
F-statistic: 39.69 on 3 and 195 DF, p-value: < 2.2e-16
</code></pre>
<pre><code>#Simple linear regression - mod2
Call:
lm(formula = y ~ x1, data = data)
Residuals:
Min 1Q Median 3Q Max
-0.33985 -0.09797 0.01622 0.11713 0.21119
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.8384434 0.0103638 80.901 < 2e-16 ***
x1 -0.0008201 0.0001287 -6.373 1.29e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1312 on 197 degrees of freedom
Multiple R-squared: 0.1709, Adjusted R-squared: 0.1667
F-statistic: 40.61 on 1 and 197 DF, p-value: 1.287e-09
</code></pre>
<pre><code>#Simple linear regression - mod3
Call:
lm(formula = y ~ x2, data = data)
Residuals:
Min 1Q Median 3Q Max
-0.44224 -0.08695 0.01411 0.11709 0.26569
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.8886699 0.0189938 46.79 < 2e-16 ***
x2 -0.0046871 0.0009664 -4.85 2.5e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1362 on 197 degrees of freedom
Multiple R-squared: 0.1067, Adjusted R-squared: 0.1021
F-statistic: 23.52 on 1 and 197 DF, p-value: 2.5e-06
</code></pre>
<pre><code>Simple linear regression - mod4
Call:
lm(formula = y ~ x3, data = data)
Residuals:
Min 1Q Median 3Q Max
-0.43780 -0.08383 0.02178 0.10805 0.21189
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.8755046 0.0131656 66.499 < 2e-16 ***
x3 -0.0027375 0.0003918 -6.987 4.23e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.129 on 197 degrees of freedom
Multiple R-squared: 0.1986, Adjusted R-squared: 0.1945
F-statistic: 48.82 on 1 and 197 DF, p-value: 4.229e-11
</code></pre>
|
<blockquote>
<p>If the multiple linear regression model is significant, then should that one be used in a report/paper rather than the simple linear regression models?</p>
</blockquote>
<p>Yes. With correlated predictor variables (as you seem to have) each correlated with outcome (as you show in your individual models), removing any of them from a regression model can lead to <a href="https://en.wikipedia.org/wiki/Omitted-variable_bias" rel="nofollow noreferrer">omitted-variable bias</a>. Even though <code>x2</code> doesn't have a "statistically significant" association with outcome in the multiple-regression model, keeping it in the model is probably the best way to get good estimates of the other coefficients.*</p>
<p>There's no problem with reporting a model like that. You might even go on to show that the apparent association of <code>x2</code> with outcome individually is spurious, perhaps explained by its correlations with the other 2 predictors.</p>
<hr />
<p>*Models to be used for prediction often benefit from including as many variables as possible without overfitting, as even "insignificant" variables can improve predictive ability.</p>
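<p>A small simulation of omitted-variable bias (my own construction, purely illustrative): <code>x2</code> has no direct effect on the outcome but is correlated with <code>x1</code>, which does. Marginally <code>x2</code> looks predictive; conditionally on <code>x1</code> it does not.</p>

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

x1 = rng.standard_normal(n)
x2 = 0.7 * x1 + rng.standard_normal(n)  # correlated with x1, no direct effect
y = 1.0 * x1 + rng.standard_normal(n)

def ols(X, y):
    # OLS with an intercept, via least squares.
    b, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)
    return b

# Simple regression: x2 picks up x1's effect through their correlation.
b_simple = ols(x2, y)[1]

# Multiple regression: with x1 included, x2's coefficient collapses to ~0.
b_multi = ols(np.column_stack([x1, x2]), y)[2]

print(b_simple, b_multi)
```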
| 37
|
linear regression
|
interpretation of Linear regression
|
https://stats.stackexchange.com/questions/422076/interpretation-of-linear-regression
|
<p>I was reading about linear regression on Wikipedia and came across the <a href="https://en.wikipedia.org/wiki/Mean_and_predicted_response#Predicted_response" rel="nofollow noreferrer">mean and predicted response</a>. I just wanted to clarify some things. So suppose we have a simple linear regression model: is the result for the response variable <span class="math-container">$y_i$</span> for a given explanatory variable <span class="math-container">$x_i$</span> interpreted as the mean result for that specific <span class="math-container">$x_i$</span>? </p>
<p>For example if the explanatory variable is temperature, and the response variable is # of ice cream sales, is <span class="math-container">$y_i$</span> interpreted as the number of sales that will occur at that specific temperature, or the average number of sales that will occur?</p>
|
<p>The predicted mean response at <span class="math-container">$x_i$</span> (the estimated conditional expectation of <span class="math-container">$y_i$</span>, <span class="math-container">$E(y_i|x=x_i)$</span>) would be of the form <span class="math-container">$\hat{\alpha} + \hat{\beta} x_i$</span>. This is sometimes denoted as <span class="math-container">$\hat{y}_i$</span>.</p>
<p>In the example, <span class="math-container">$\hat{y}_i$</span> is the mean/expected number of ice cream sales at temperature <span class="math-container">$x_i$</span>.</p>
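<p>A toy simulation (all numbers invented) shows the point: with many days observed at the same temperature, the fitted value at that temperature tracks the <em>average</em> sales, not any single day's sales.</p>

```python
import numpy as np

rng = np.random.default_rng(5)

# 400 days at each of three temperatures; sales = 10 + 2*temp + noise.
temps = np.repeat([20.0, 25.0, 30.0], 400)
sales = 10.0 + 2.0 * temps + rng.normal(scale=3.0, size=temps.size)

# Fit the simple linear regression (polyfit returns slope, intercept).
b1, b0 = np.polyfit(temps, sales, 1)

# The prediction at 25 degrees estimates the *average* sales at 25
# degrees, not the sales on any particular 25-degree day.
pred_25 = b0 + b1 * 25.0
group_mean_25 = sales[temps == 25.0].mean()
print(pred_25, group_mean_25)  # both near 60
```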
| 38
|
linear regression
|
regression tree vs linear regression
|
https://stats.stackexchange.com/questions/408857/regression-tree-vs-linear-regression
|
<p>I'm using one explanatory variable in a regression tree and in a linear regression. The tree finds a split (with the variance-reduction splitting rule), though R2 is pretty small (0.2). The model is confirmed on the validation data. On the other hand, the linear regression shows no relation (not even with a 2nd-order polynomial regression): the coefficients and R2 are almost 0. (Outliers are handled by truncation.) How can you explain this? Can it be that the tree finds/creates a "non-existing" pattern only because of the splitting rule? </p>
|
<p>Just because there is no <em>linear</em> relationship between the variables does not mean that the pattern is "non-existing". Here is a simple example (using R).</p>
<pre><code>set.seed(1)
x = runif(100, 0, 5)
y = ifelse(x<1, 0, ifelse(x<2, 1, ifelse(x<3,0, ifelse(x<4,1,0))))
plot(x,y, pch=20)
</code></pre>
<p><a href="https://i.sstatic.net/kZOry.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZOry.png" alt="Example data"></a></p>
<p>Because there is no <em>linear</em> relationship, fitting linear model shows a very poor fit. </p>
<pre><code>LM = lm(y~x)
summary(LM) ## Output simplified
Residuals:
Min 1Q Median 3Q Max
-0.5214 -0.5106 0.4829 0.4921 0.4963
Residual standard error: 0.5049 on 98 degrees of freedom
Multiple R-squared: 0.0001441, Adjusted R-squared: -0.01006
F-statistic: 0.01413 on 1 and 98 DF, p-value: 0.9056
</code></pre>
<p>Even with a quadratic term we get a poor fit. </p>
<pre><code>LM2 = lm(y~poly(x,2))
summary(LM2) ## Output simplified
Residuals:
Min 1Q Median 3Q Max
-0.7255 -0.3478 0.3062 0.4062 0.5495
Residual standard error: 0.4659 on 97 degrees of freedom
Multiple R-squared: 0.1576, Adjusted R-squared: 0.1402
F-statistic: 9.07 on 2 and 97 DF, p-value: 0.0002448
</code></pre>
<p>But a regression tree gives a perfect fit to the data.</p>
<pre><code>library(rpart)
RP = rpart(y ~ x)
summary(predict(RP) - y)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0 0 0 0 0 0
</code></pre>
<p>There is a very real relationship between x and y, just not a linear relationship.</p>
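<p>For reference, the same demonstration translated to Python with scikit-learn (my translation; the choice of library is an assumption):</p>

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
x = rng.uniform(0, 5, 300).reshape(-1, 1)
# Step-function target: no linear trend, but a strong pattern.
y = np.select([x[:, 0] < 1, x[:, 0] < 2, x[:, 0] < 3, x[:, 0] < 4],
              [0, 1, 0, 1], default=0)

lin = LinearRegression().fit(x, y)
tree = DecisionTreeRegressor(random_state=0).fit(x, y)

# R^2 on the training data: near 0 for the line, 1.0 for the tree.
print(lin.score(x, y), tree.score(x, y))
```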
| 39
|
linear regression
|
KNN-regression vs Linear regression
|
https://stats.stackexchange.com/questions/646366/knn-regression-vs-linear-regression
|
<p>Is there any assumption on the data, or any value of k, that makes kNN regression equivalent to linear regression?</p>
| 40
|
|
linear regression
|
How $R^2$ values in simple linear regression bound $R^2$ in multiple linear regression
|
https://stats.stackexchange.com/questions/268314/how-r2-values-in-simple-linear-regression-bound-r2-in-multiple-linear-regr
|
<p>A simple linear regression on x1 and y yields R^2 of 0.2</p>
<p>A simple linear regression on x2 and y yields R^2 of 0.1</p>
<p>What are the upper and lower bounds on R^2 if we do a multiple linear regression of y on x1 and x2?</p>
<p>My guess for the lower bound is 0.2 if x1 and x2 are perfectly correlated, but is the upper bound 0.3? Or a little bit less than 0.3?</p>
| 41
|
|
linear regression
|
Introduction to linear regression
|
https://stats.stackexchange.com/questions/254763/introduction-to-linear-regression
|
<p>As part of my work (I am a programmer), I need to learn some linear regression. I have a degree in pure mathematics, but not in statistics. Could anyone recommend a good introductory book on linear regression?</p>
<p>Thanks in advance!</p>
|
<p>Linear regression is one of the core topics in statistics and hence should get some coverage in any introductory statistics textbook.</p>
<p>Gelman and Hill's <em>Data Analysis Using Regression and Multilevel/Hierarchical Models</em> is a well-respected textbook on regression that includes an early chapter with an overview of more basic statistical concepts. That chapter may not be enough if you have no background in statistics at all, though.</p>
| 42
|
linear regression
|
linear regression
|
https://stats.stackexchange.com/questions/124208/linear-regression
|
<p>I am reading a paper and come across the following information:</p>
<pre><code> Predictor Dependent Variable R Square Beta P
A D .12 .35 <0.05
B D .16 .40 <0.05
C D .13 .36 <0.05
</code></pre>
<p>The authors use linear regression to compute the correlation between A and D, B and D, and C and D, and they claim there are significant positive correlations between each and every one of them. I am confused and do not understand how the authors can draw such a conclusion with the presented data. </p>
|
<p>The significance of the correlation between y and x is related to the significance of the coefficient in the regression of y on x. </p>
<p>Specifically, for the <a href="http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient#Testing_using_Student.27s_t-distribution" rel="nofollow">usual t-test</a> on correlation under normality, they have the same p-value (they're said to be <em>equivalent</em> tests).</p>
<p>The conclusion that the correlation positive is based on the sign of the regression coefficient - in simple regression they have the same sign.</p>
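<p>The equivalence is easy to verify numerically: scipy's correlation test and its simple-regression slope test return (numerically) identical p-values, and the sign of the correlation matches the sign of the slope (scipy is my choice of tool here):</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.standard_normal(60)
y = 0.4 * x + rng.standard_normal(60)

# t-test on the Pearson correlation ...
r, p_corr = stats.pearsonr(x, y)

# ... and t-test on the slope in the regression of y on x.
slope, intercept, r_reg, p_slope, se = stats.linregress(x, y)

# Equivalent tests: identical p-values, matching signs.
print(p_corr, p_slope)
```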
| 43
|
linear regression
|
What is the difference between 'regular' linear regression and deep learning linear regression?
|
https://stats.stackexchange.com/questions/253337/what-is-the-difference-between-regular-linear-regression-and-deep-learning-lin
|
<p>I want to know the difference between linear regression in a regular machine learning analysis and linear regression in "deep learning" setting. What algorithms are used for linear regression in deep learning setting.</p>
|
<p>Assuming that by deep learning you mean neural networks: a vanilla fully connected feedforward neural network with only <em>linear activation functions</em> performs linear regression, regardless of how many layers it has. One difference is that with a neural network one typically uses gradient descent, whereas with "normal" linear regression one uses the normal equation when possible (i.e., when the number of features isn't too large).</p>
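<p>This gradient-descent vs. normal-equation point can be checked numerically: both reach the same least-squares weights on a small problem. A minimal NumPy sketch with invented data:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = 1.5 + 0.8 * x + rng.normal(scale=0.2, size=200)
X = np.column_stack([np.ones_like(x), x])

# Normal equation (what "classic" linear regression uses).
w_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Batch gradient descent on mean squared error
# (what a linear neural network would typically do).
w_gd = np.zeros(2)
lr = 0.1
for _ in range(1000):
    grad = 2 / len(y) * X.T @ (X @ w_gd - y)
    w_gd -= lr * grad

print(w_ne, w_gd)  # essentially identical
```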
<p><a href="https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15381-s06/www/nn.pdf" rel="noreferrer">Example</a> of a fully connected feedforward neural network with no hidden layer and using a linear activation function (namely the identity activation function):</p>
<p><a href="https://i.sstatic.net/75VU8.png" rel="noreferrer"><img src="https://i.sstatic.net/75VU8.png" alt="enter image description here"></a></p>
<p>If you replace the activation function of the output layer with a sigmoid function, then the neural network performs logistic regression. If you replace the activation function of the output layer with a softmax function and add a few output units, then the neural network performs multiclass logistic regression:
<a href="https://stats.stackexchange.com/a/162548/12359">Difference between logistic regression and neural networks</a>. If you replace the cost function with the <a href="https://en.wikipedia.org/w/index.php?title=Hinge_loss&oldid=741695220" rel="noreferrer">hinge loss</a>, then the neural network is an <a href="https://stats.stackexchange.com/a/254658/12359">SVM</a> optimized in its primal form: <a href="http://cs231n.github.io/linear-classify/" rel="noreferrer">http://cs231n.github.io/linear-classify/</a>.</p>
<hr>
<p>Here is the example shown in the picture above programmed in TensorFlow:</p>
<pre><code>""" Linear Regression Example """
# https://github.com/tflearn/tflearn/blob/master/examples/basics/linear_regression.py
from __future__ import absolute_import, division, print_function
import tflearn
# Regression data
X = [3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,7.042,10.791,5.313,7.997,5.654,9.27,3.1]
Y = [1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,2.827,3.465,1.65,2.904,2.42,2.94,1.3]
# Linear Regression graph
input_ = tflearn.input_data(shape=[None])
linear = tflearn.single_unit(input_)
regression = tflearn.regression(linear, optimizer='sgd', loss='mean_square',
metric='R2', learning_rate=0.01)
m = tflearn.DNN(regression)
m.fit(X, Y, n_epoch=1000, show_metric=True, snapshot_epoch=False)
print("\nRegression result:")
print("Y = " + str(m.get_weights(linear.W)) +
"*X + " + str(m.get_weights(linear.b)))
print("\nTest prediction for x = 3.2, 3.3, 3.4:")
print(m.predict([3.2, 3.3, 3.4]))
# should output (close, not exact) y = [1.5315033197402954, 1.5585315227508545, 1.5855598449707031]
</code></pre>
<p><a href="http://briandolhansky.com/blog/artificial-neural-networks-linear-regression-part-1" rel="noreferrer">Here</a> is a code snippet that does not use any neural network libraries:</p>
<pre><code># From http://briandolhansky.com/blog/artificial-neural-networks-linear-regression-part-1
import matplotlib.pyplot as plt
import numpy as np
# Load the data and create the data matrices X and Y
# This creates a feature vector X with a column of ones (bias)
# and a column of car weights.
# The target vector Y is a column of MPG values for each car.
X_file = np.genfromtxt('mpg.csv', delimiter=',', skip_header=1)
N = np.shape(X_file)[0]
X = np.hstack((np.ones(N).reshape(N, 1), X_file[:, 4].reshape(N, 1)))
Y = X_file[:, 0]
# Standardize the input
X[:, 1] = (X[:, 1]-np.mean(X[:, 1]))/np.std(X[:, 1])
# There are two weights, the bias weight and the feature weight
w = np.array([0, 0])
# Start batch gradient descent, it will run for max_iter epochs and have a step
# size eta
max_iter = 100
eta = 1E-3
for t in range(0, max_iter):
# We need to iterate over each data point for one epoch
grad_t = np.array([0., 0.])
for i in range(0, N):
x_i = X[i, :]
y_i = Y[i]
# Dot product, computes h(x_i, w)
h = np.dot(w, x_i)-y_i
grad_t += 2*x_i*h
# Update the weights
w = w - eta*grad_t
print("Weights found:", w)
# Plot the data and best fit line
tt = np.linspace(np.min(X[:, 1]), np.max(X[:, 1]), 10)
bf_line = w[0]+w[1]*tt
plt.plot(X[:, 1], Y, 'kx', tt, bf_line, 'r-')
plt.xlabel('Weight (Normalized)')
plt.ylabel('MPG')
plt.title('ANN Regression on 1D MPG Data')
plt.savefig('mpg.png')
plt.show()
</code></pre>
<p>Data file <code>mpg.csv</code> (~50% abridged due to Stack Exchange answer size limitation):</p>
<pre><code>mpg (n),cylinders (n),displacement (n),horsepower (n),weight (n),acceleration (n),year (n),origin (n), name (s)
18.000000,8.000000,307.000000,130.000000,3504.000000,12.000000,70.000000,1.000000
15.000000,8.000000,350.000000,165.000000,3693.000000,11.500000,70.000000,1.000000
18.000000,8.000000,318.000000,150.000000,3436.000000,11.000000,70.000000,1.000000
16.000000,8.000000,304.000000,150.000000,3433.000000,12.000000,70.000000,1.000000
17.000000,8.000000,302.000000,140.000000,3449.000000,10.500000,70.000000,1.000000
15.000000,8.000000,429.000000,198.000000,4341.000000,10.000000,70.000000,1.000000
14.000000,8.000000,454.000000,220.000000,4354.000000,9.000000,70.000000,1.000000
14.000000,8.000000,440.000000,215.000000,4312.000000,8.500000,70.000000,1.000000
14.000000,8.000000,455.000000,225.000000,4425.000000,10.000000,70.000000,1.000000
15.000000,8.000000,390.000000,190.000000,3850.000000,8.500000,70.000000,1.000000
15.000000,8.000000,383.000000,170.000000,3563.000000,10.000000,70.000000,1.000000
14.000000,8.000000,340.000000,160.000000,3609.000000,8.000000,70.000000,1.000000
15.000000,8.000000,400.000000,150.000000,3761.000000,9.500000,70.000000,1.000000
14.000000,8.000000,455.000000,225.000000,3086.000000,10.000000,70.000000,1.000000
24.000000,4.000000,113.000000,95.000000,2372.000000,15.000000,70.000000,3.000000
22.000000,6.000000,198.000000,95.000000,2833.000000,15.500000,70.000000,1.000000
18.000000,6.000000,199.000000,97.000000,2774.000000,15.500000,70.000000,1.000000
21.000000,6.000000,200.000000,85.000000,2587.000000,16.000000,70.000000,1.000000
27.000000,4.000000,97.000000,88.000000,2130.000000,14.500000,70.000000,3.000000
26.000000,4.000000,97.000000,46.000000,1835.000000,20.500000,70.000000,2.000000
25.000000,4.000000,110.000000,87.000000,2672.000000,17.500000,70.000000,2.000000
24.000000,4.000000,107.000000,90.000000,2430.000000,14.500000,70.000000,2.000000
25.000000,4.000000,104.000000,95.000000,2375.000000,17.500000,70.000000,2.000000
26.000000,4.000000,121.000000,113.000000,2234.000000,12.500000,70.000000,2.000000
21.000000,6.000000,199.000000,90.000000,2648.000000,15.000000,70.000000,1.000000
10.000000,8.000000,360.000000,215.000000,4615.000000,14.000000,70.000000,1.000000
10.000000,8.000000,307.000000,200.000000,4376.000000,15.000000,70.000000,1.000000
11.000000,8.000000,318.000000,210.000000,4382.000000,13.500000,70.000000,1.000000
9.000000,8.000000,304.000000,193.000000,4732.000000,18.500000,70.000000,1.000000
27.000000,4.000000,97.000000,88.000000,2130.000000,14.500000,71.000000,3.000000
28.000000,4.000000,140.000000,90.000000,2264.000000,15.500000,71.000000,1.000000
25.000000,4.000000,113.000000,95.000000,2228.000000,14.000000,71.000000,3.000000
19.000000,6.000000,232.000000,100.000000,2634.000000,13.000000,71.000000,1.000000
16.000000,6.000000,225.000000,105.000000,3439.000000,15.500000,71.000000,1.000000
17.000000,6.000000,250.000000,100.000000,3329.000000,15.500000,71.000000,1.000000
19.000000,6.000000,250.000000,88.000000,3302.000000,15.500000,71.000000,1.000000
18.000000,6.000000,232.000000,100.000000,3288.000000,15.500000,71.000000,1.000000
14.000000,8.000000,350.000000,165.000000,4209.000000,12.000000,71.000000,1.000000
14.000000,8.000000,400.000000,175.000000,4464.000000,11.500000,71.000000,1.000000
14.000000,8.000000,351.000000,153.000000,4154.000000,13.500000,71.000000,1.000000
14.000000,8.000000,318.000000,150.000000,4096.000000,13.000000,71.000000,1.000000
12.000000,8.000000,383.000000,180.000000,4955.000000,11.500000,71.000000,1.000000
13.000000,8.000000,400.000000,170.000000,4746.000000,12.000000,71.000000,1.000000
13.000000,8.000000,400.000000,175.000000,5140.000000,12.000000,71.000000,1.000000
18.000000,6.000000,258.000000,110.000000,2962.000000,13.500000,71.000000,1.000000
22.000000,4.000000,140.000000,72.000000,2408.000000,19.000000,71.000000,1.000000
19.000000,6.000000,250.000000,100.000000,3282.000000,15.000000,71.000000,1.000000
18.000000,6.000000,250.000000,88.000000,3139.000000,14.500000,71.000000,1.000000
23.000000,4.000000,122.000000,86.000000,2220.000000,14.000000,71.000000,1.000000
28.000000,4.000000,116.000000,90.000000,2123.000000,14.000000,71.000000,2.000000
30.000000,4.000000,79.000000,70.000000,2074.000000,19.500000,71.000000,2.000000
30.000000,4.000000,88.000000,76.000000,2065.000000,14.500000,71.000000,2.000000
31.000000,4.000000,71.000000,65.000000,1773.000000,19.000000,71.000000,3.000000
35.000000,4.000000,72.000000,69.000000,1613.000000,18.000000,71.000000,3.000000
27.000000,4.000000,97.000000,60.000000,1834.000000,19.000000,71.000000,2.000000
26.000000,4.000000,91.000000,70.000000,1955.000000,20.500000,71.000000,1.000000
24.000000,4.000000,113.000000,95.000000,2278.000000,15.500000,72.000000,3.000000
25.000000,4.000000,97.500000,80.000000,2126.000000,17.000000,72.000000,1.000000
23.000000,4.000000,97.000000,54.000000,2254.000000,23.500000,72.000000,2.000000
20.000000,4.000000,140.000000,90.000000,2408.000000,19.500000,72.000000,1.000000
21.000000,4.000000,122.000000,86.000000,2226.000000,16.500000,72.000000,1.000000
13.000000,8.000000,350.000000,165.000000,4274.000000,12.000000,72.000000,1.000000
14.000000,8.000000,400.000000,175.000000,4385.000000,12.000000,72.000000,1.000000
15.000000,8.000000,318.000000,150.000000,4135.000000,13.500000,72.000000,1.000000
14.000000,8.000000,351.000000,153.000000,4129.000000,13.000000,72.000000,1.000000
17.000000,8.000000,304.000000,150.000000,3672.000000,11.500000,72.000000,1.000000
11.000000,8.000000,429.000000,208.000000,4633.000000,11.000000,72.000000,1.000000
13.000000,8.000000,350.000000,155.000000,4502.000000,13.500000,72.000000,1.000000
12.000000,8.000000,350.000000,160.000000,4456.000000,13.500000,72.000000,1.000000
13.000000,8.000000,400.000000,190.000000,4422.000000,12.500000,72.000000,1.000000
19.000000,3.000000,70.000000,97.000000,2330.000000,13.500000,72.000000,3.000000
15.000000,8.000000,304.000000,150.000000,3892.000000,12.500000,72.000000,1.000000
13.000000,8.000000,307.000000,130.000000,4098.000000,14.000000,72.000000,1.000000
13.000000,8.000000,302.000000,140.000000,4294.000000,16.000000,72.000000,1.000000
14.000000,8.000000,318.000000,150.000000,4077.000000,14.000000,72.000000,1.000000
18.000000,4.000000,121.000000,112.000000,2933.000000,14.500000,72.000000,2.000000
22.000000,4.000000,121.000000,76.000000,2511.000000,18.000000,72.000000,2.000000
21.000000,4.000000,120.000000,87.000000,2979.000000,19.500000,72.000000,2.000000
26.000000,4.000000,96.000000,69.000000,2189.000000,18.000000,72.000000,2.000000
22.000000,4.000000,122.000000,86.000000,2395.000000,16.000000,72.000000,1.000000
28.000000,4.000000,97.000000,92.000000,2288.000000,17.000000,72.000000,3.000000
23.000000,4.000000,120.000000,97.000000,2506.000000,14.500000,72.000000,3.000000
28.000000,4.000000,98.000000,80.000000,2164.000000,15.000000,72.000000,1.000000
27.000000,4.000000,97.000000,88.000000,2100.000000,16.500000,72.000000,3.000000
13.000000,8.000000,350.000000,175.000000,4100.000000,13.000000,73.000000,1.000000
14.000000,8.000000,304.000000,150.000000,3672.000000,11.500000,73.000000,1.000000
13.000000,8.000000,350.000000,145.000000,3988.000000,13.000000,73.000000,1.000000
14.000000,8.000000,302.000000,137.000000,4042.000000,14.500000,73.000000,1.000000
15.000000,8.000000,318.000000,150.000000,3777.000000,12.500000,73.000000,1.000000
12.000000,8.000000,429.000000,198.000000,4952.000000,11.500000,73.000000,1.000000
13.000000,8.000000,400.000000,150.000000,4464.000000,12.000000,73.000000,1.000000
13.000000,8.000000,351.000000,158.000000,4363.000000,13.000000,73.000000,1.000000
14.000000,8.000000,318.000000,150.000000,4237.000000,14.500000,73.000000,1.000000
13.000000,8.000000,440.000000,215.000000,4735.000000,11.000000,73.000000,1.000000
12.000000,8.000000,455.000000,225.000000,4951.000000,11.000000,73.000000,1.000000
13.000000,8.000000,360.000000,175.000000,3821.000000,11.000000,73.000000,1.000000
18.000000,6.000000,225.000000,105.000000,3121.000000,16.500000,73.000000,1.000000
16.000000,6.000000,250.000000,100.000000,3278.000000,18.000000,73.000000,1.000000
18.000000,6.000000,232.000000,100.000000,2945.000000,16.000000,73.000000,1.000000
18.000000,6.000000,250.000000,88.000000,3021.000000,16.500000,73.000000,1.000000
23.000000,6.000000,198.000000,95.000000,2904.000000,16.000000,73.000000,1.000000
26.000000,4.000000,97.000000,46.000000,1950.000000,21.000000,73.000000,2.000000
11.000000,8.000000,400.000000,150.000000,4997.000000,14.000000,73.000000,1.000000
12.000000,8.000000,400.000000,167.000000,4906.000000,12.500000,73.000000,1.000000
13.000000,8.000000,360.000000,170.000000,4654.000000,13.000000,73.000000,1.000000
12.000000,8.000000,350.000000,180.000000,4499.000000,12.500000,73.000000,1.000000
18.000000,6.000000,232.000000,100.000000,2789.000000,15.000000,73.000000,1.000000
20.000000,4.000000,97.000000,88.000000,2279.000000,19.000000,73.000000,3.000000
21.000000,4.000000,140.000000,72.000000,2401.000000,19.500000,73.000000,1.000000
22.000000,4.000000,108.000000,94.000000,2379.000000,16.500000,73.000000,3.000000
18.000000,3.000000,70.000000,90.000000,2124.000000,13.500000,73.000000,3.000000
19.000000,4.000000,122.000000,85.000000,2310.000000,18.500000,73.000000,1.000000
21.000000,6.000000,155.000000,107.000000,2472.000000,14.000000,73.000000,1.000000
26.000000,4.000000,98.000000,90.000000,2265.000000,15.500000,73.000000,2.000000
15.000000,8.000000,350.000000,145.000000,4082.000000,13.000000,73.000000,1.000000
16.000000,8.000000,400.000000,230.000000,4278.000000,9.500000,73.000000,1.000000
29.000000,4.000000,68.000000,49.000000,1867.000000,19.500000,73.000000,2.000000
24.000000,4.000000,116.000000,75.000000,2158.000000,15.500000,73.000000,2.000000
20.000000,4.000000,114.000000,91.000000,2582.000000,14.000000,73.000000,2.000000
19.000000,4.000000,121.000000,112.000000,2868.000000,15.500000,73.000000,2.000000
15.000000,8.000000,318.000000,150.000000,3399.000000,11.000000,73.000000,1.000000
24.000000,4.000000,121.000000,110.000000,2660.000000,14.000000,73.000000,2.000000
20.000000,6.000000,156.000000,122.000000,2807.000000,13.500000,73.000000,3.000000
11.000000,8.000000,350.000000,180.000000,3664.000000,11.000000,73.000000,1.000000
20.000000,6.000000,198.000000,95.000000,3102.000000,16.500000,74.000000,1.000000
19.000000,6.000000,232.000000,100.000000,2901.000000,16.000000,74.000000,1.000000
15.000000,6.000000,250.000000,100.000000,3336.000000,17.000000,74.000000,1.000000
31.000000,4.000000,79.000000,67.000000,1950.000000,19.000000,74.000000,3.000000
26.000000,4.000000,122.000000,80.000000,2451.000000,16.500000,74.000000,1.000000
32.000000,4.000000,71.000000,65.000000,1836.000000,21.000000,74.000000,3.000000
25.000000,4.000000,140.000000,75.000000,2542.000000,17.000000,74.000000,1.000000
16.000000,6.000000,250.000000,100.000000,3781.000000,17.000000,74.000000,1.000000
16.000000,6.000000,258.000000,110.000000,3632.000000,18.000000,74.000000,1.000000
18.000000,6.000000,225.000000,105.000000,3613.000000,16.500000,74.000000,1.000000
16.000000,8.000000,302.000000,140.000000,4141.000000,14.000000,74.000000,1.000000
13.000000,8.000000,350.000000,150.000000,4699.000000,14.500000,74.000000,1.000000
14.000000,8.000000,318.000000,150.000000,4457.000000,13.500000,74.000000,1.000000
14.000000,8.000000,302.000000,140.000000,4638.000000,16.000000,74.000000,1.000000
14.000000,8.000000,304.000000,150.000000,4257.000000,15.500000,74.000000,1.000000
29.000000,4.000000,98.000000,83.000000,2219.000000,16.500000,74.000000,2.000000
26.000000,4.000000,79.000000,67.000000,1963.000000,15.500000,74.000000,2.000000
26.000000,4.000000,97.000000,78.000000,2300.000000,14.500000,74.000000,2.000000
31.000000,4.000000,76.000000,52.000000,1649.000000,16.500000,74.000000,3.000000
32.000000,4.000000,83.000000,61.000000,2003.000000,19.000000,74.000000,3.000000
28.000000,4.000000,90.000000,75.000000,2125.000000,14.500000,74.000000,1.000000
24.000000,4.000000,90.000000,75.000000,2108.000000,15.500000,74.000000,2.000000
26.000000,4.000000,116.000000,75.000000,2246.000000,14.000000,74.000000,2.000000
24.000000,4.000000,120.000000,97.000000,2489.000000,15.000000,74.000000,3.000000
26.000000,4.000000,108.000000,93.000000,2391.000000,15.500000,74.000000,3.000000
31.000000,4.000000,79.000000,67.000000,2000.000000,16.000000,74.000000,2.000000
19.000000,6.000000,225.000000,95.000000,3264.000000,16.000000,75.000000,1.000000
18.000000,6.000000,250.000000,105.000000,3459.000000,16.000000,75.000000,1.000000
15.000000,6.000000,250.000000,72.000000,3432.000000,21.000000,75.000000,1.000000
15.000000,6.000000,250.000000,72.000000,3158.000000,19.500000,75.000000,1.000000
16.000000,8.000000,400.000000,170.000000,4668.000000,11.500000,75.000000,1.000000
15.000000,8.000000,350.000000,145.000000,4440.000000,14.000000,75.000000,1.000000
16.000000,8.000000,318.000000,150.000000,4498.000000,14.500000,75.000000,1.000000
14.000000,8.000000,351.000000,148.000000,4657.000000,13.500000,75.000000,1.000000
17.000000,6.000000,231.000000,110.000000,3907.000000,21.000000,75.000000,1.000000
16.000000,6.000000,250.000000,105.000000,3897.000000,18.500000,75.000000,1.000000
15.000000,6.000000,258.000000,110.000000,3730.000000,19.000000,75.000000,1.000000
18.000000,6.000000,225.000000,95.000000,3785.000000,19.000000,75.000000,1.000000
21.000000,6.000000,231.000000,110.000000,3039.000000,15.000000,75.000000,1.000000
20.000000,8.000000,262.000000,110.000000,3221.000000,13.500000,75.000000,1.000000
13.000000,8.000000,302.000000,129.000000,3169.000000,12.000000,75.000000,1.000000
29.000000,4.000000,97.000000,75.000000,2171.000000,16.000000,75.000000,3.000000
23.000000,4.000000,140.000000,83.000000,2639.000000,17.000000,75.000000,1.000000
20.000000,6.000000,232.000000,100.000000,2914.000000,16.000000,75.000000,1.000000
23.000000,4.000000,140.000000,78.000000,2592.000000,18.500000,75.000000,1.000000
24.000000,4.000000,134.000000,96.000000,2702.000000,13.500000,75.000000,3.000000
25.000000,4.000000,90.000000,71.000000,2223.000000,16.500000,75.000000,2.000000
24.000000,4.000000,119.000000,97.000000,2545.000000,17.000000,75.000000,3.000000
18.000000,6.000000,171.000000,97.000000,2984.000000,14.500000,75.000000,1.000000
29.000000,4.000000,90.000000,70.000000,1937.000000,14.000000,75.000000,2.000000
19.000000,6.000000,232.000000,90.000000,3211.000000,17.000000,75.000000,1.000000
23.000000,4.000000,115.000000,95.000000,2694.000000,15.000000,75.000000,2.000000
23.000000,4.000000,120.000000,88.000000,2957.000000,17.000000,75.000000,2.000000
22.000000,4.000000,121.000000,98.000000,2945.000000,14.500000,75.000000,2.000000
25.000000,4.000000,121.000000,115.000000,2671.000000,13.500000,75.000000,2.000000
33.000000,4.000000,91.000000,53.000000,1795.000000,17.500000,75.000000,3.000000
28.000000,4.000000,107.000000,86.000000,2464.000000,15.500000,76.000000,2.000000
25.000000,4.000000,116.000000,81.000000,2220.000000,16.900000,76.000000,2.000000
25.000000,4.000000,140.000000,92.000000,2572.000000,14.900000,76.000000,1.000000
26.000000,4.000000,98.000000,79.000000,2255.000000,17.700000,76.000000,1.000000
27.000000,4.000000,101.000000,83.000000,2202.000000,15.300000,76.000000,2.000000
17.500000,8.000000,305.000000,140.000000,4215.000000,13.000000,76.000000,1.000000
16.000000,8.000000,318.000000,150.000000,4190.000000,13.000000,76.000000,1.000000
15.500000,8.000000,304.000000,120.000000,3962.000000,13.900000,76.000000,1.000000
14.500000,8.000000,351.000000,152.000000,4215.000000,12.800000,76.000000,1.000000
22.000000,6.000000,225.000000,100.000000,3233.000000,15.400000,76.000000,1.000000
22.000000,6.000000,250.000000,105.000000,3353.000000,14.500000,76.000000,1.000000
24.000000,6.000000,200.000000,81.000000,3012.000000,17.600000,76.000000,1.000000
22.500000,6.000000,232.000000,90.000000,3085.000000,17.600000,76.000000,1.000000
29.000000,4.000000,85.000000,52.000000,2035.000000,22.200000,76.000000,1.000000
24.500000,4.000000,98.000000,60.000000,2164.000000,22.100000,76.000000,1.000000
29.000000,4.000000,90.000000,70.000000,1937.000000,14.200000,76.000000,2.000000
33.000000,4.000000,91.000000,53.000000,1795.000000,17.400000,76.000000,3.000000
20.000000,6.000000,225.000000,100.000000,3651.000000,17.700000,76.000000,1.000000
18.000000,6.000000,250.000000,78.000000,3574.000000,21.000000,76.000000,1.000000
18.500000,6.000000,250.000000,110.000000,3645.000000,16.200000,76.000000,1.000000
17.500000,6.000000,258.000000,95.000000,3193.000000,17.800000,76.000000,1.000000
29.500000,4.000000,97.000000,71.000000,1825.000000,12.200000,76.000000,2.000000
32.000000,4.000000,85.000000,70.000000,1990.000000,17.000000,76.000000,3.000000
28.000000,4.000000,97.000000,75.000000,2155.000000,16.400000,76.000000,3.000000
26.500000,4.000000,140.000000,72.000000,2565.000000,13.600000,76.000000,1.000000
20.000000,4.000000,130.000000,102.000000,3150.000000,15.700000,76.000000,2.000000
13.000000,8.000000,318.000000,150.000000,3940.000000,13.200000,76.000000,1.000000
19.000000,4.000000,120.000000,88.000000,3270.000000,21.900000,76.000000,2.000000
19.000000,6.000000,156.000000,108.000000,2930.000000,15.500000,76.000000,3.000000
16.500000,6.000000,168.000000,120.000000,3820.000000,16.700000,76.000000,2.000000
16.500000,8.000000,350.000000,180.000000,4380.000000,12.100000,76.000000,1.000000
13.000000,8.000000,350.000000,145.000000,4055.000000,12.000000,76.000000,1.000000
13.000000,8.000000,302.000000,130.000000,3870.000000,15.000000,76.000000,1.000000
13.000000,8.000000,318.000000,150.000000,3755.000000,14.000000,76.000000,1.000000
31.500000,4.000000,98.000000,68.000000,2045.000000,18.500000,77.000000,3.000000
30.000000,4.000000,111.000000,80.000000,2155.000000,14.800000,77.000000,1.000000
36.000000,4.000000,79.000000,58.000000,1825.000000,18.600000,77.000000,2.000000
25.500000,4.000000,122.000000,96.000000,2300.000000,15.500000,77.000000,1.000000
33.500000,4.000000,85.000000,70.000000,1945.000000,16.800000,77.000000,3.000000
17.500000,8.000000,305.000000,145.000000,3880.000000,12.500000,77.000000,1.000000
17.000000,8.000000,260.000000,110.000000,4060.000000,19.000000,77.000000,1.000000
15.500000,8.000000,318.000000,145.000000,4140.000000,13.700000,77.000000,1.000000
15.000000,8.000000,302.000000,130.000000,4295.000000,14.900000,77.000000,1.000000
17.500000,6.000000,250.000000,110.000000,3520.000000,16.400000,77.000000,1.000000
20.500000,6.000000,231.000000,105.000000,3425.000000,16.900000,77.000000,1.000000
19.000000,6.000000,225.000000,100.000000,3630.000000,17.700000,77.000000,1.000000
18.500000,6.000000,250.000000,98.000000,3525.000000,19.000000,77.000000,1.000000
16.000000,8.000000,400.000000,180.000000,4220.000000,11.100000,77.000000,1.000000
15.500000,8.000000,350.000000,170.000000,4165.000000,11.400000,77.000000,1.000000
15.500000,8.000000,400.000000,190.000000,4325.000000,12.200000,77.000000,1.000000
16.000000,8.000000,351.000000,149.000000,4335.000000,14.500000,77.000000,1.000000
29.000000,4.000000,97.000000,78.000000,1940.000000,14.500000,77.000000,2.000000
24.500000,4.000000,151.000000,88.000000,2740.000000,16.000000,77.000000,1.000000
26.000000,4.000000,97.000000,75.000000,2265.000000,18.200000,77.000000,3.000000
25.500000,4.000000,140.000000,89.000000,2755.000000,15.800000,77.000000,1.000000
30.500000,4.000000,98.000000,63.000000,2051.000000,17.000000,77.000000,1.000000
33.500000,4.000000,98.000000,83.000000,2075.000000,15.900000,77.000000,1.000000
30.000000,4.000000,97.000000,67.000000,1985.000000,16.400000,77.000000,3.000000
30.500000,4.000000,97.000000,78.000000,2190.000000,14.100000,77.000000,2.000000
22.000000,6.000000,146.000000,97.000000,2815.000000,14.500000,77.000000,3.000000
21.500000,4.000000,121.000000,110.000000,2600.000000,12.800000,77.000000,2.000000
21.500000,3.000000,80.000000,110.000000,2720.000000,13.500000,77.000000,3.000000
43.100000,4.000000,90.000000,48.000000,1985.000000,21.500000,78.000000,2.000000
36.100000,4.000000,98.000000,66.000000,1800.000000,14.400000,78.000000,1.000000
32.800000,4.000000,78.000000,52.000000,1985.000000,19.400000,78.000000,3.000000
39.400000,4.000000,85.000000,70.000000,2070.000000,18.600000,78.000000,3.000000
36.100000,4.000000,91.000000,60.000000,1800.000000,16.400000,78.000000,3.000000
19.900000,8.000000,260.000000,110.000000,3365.000000,15.500000,78.000000,1.000000
</code></pre>
| 44
|
linear regression
|
Deciding between a linear regression model or non-linear regression model
|
https://stats.stackexchange.com/questions/136564/deciding-between-a-linear-regression-model-or-non-linear-regression-model
|
<p>How should one decide between using a linear regression model or non-linear regression model?</p>
<p>My goal is to predict Y.</p>
<p>In case of simple $x$ and $y$ dataset I could easily decide which regression model should be used by plotting a scatter plot. </p>
<p>In case of multi-variant like $x_1,x_2,...x_n$ and $y$. How can I decide which regression model has to be used? That is, How will I decide about going with simple linear model or non linear models such as quadric, cubic etc.</p>
<p>Is there any technique or statistical approach or graphical plots to infer and decide which regression model has to be used? </p>
|
<p>This is a realm of statistics called model selection. A lot of research is done in this area and there's no definitive and easy answer.</p>
<p>Let's assume you have <span class="math-container">$X_1, X_2$</span>, and <span class="math-container">$X_3$</span> and you want to know if you should include an <span class="math-container">$X_3^2$</span> term in the model. In a situation like this your more parsimonious model is nested in your more complex model. In other words, the variables <span class="math-container">$X_1, X_2$</span>, and <span class="math-container">$X_3$</span> (parsimonious model) are a subset of the variables <span class="math-container">$X_1, X_2, X_3$</span>, and <span class="math-container">$X_3^2$</span> (complex model). In model building you have (at least) one of the following two main goals:</p>
<ol>
<li>Explain the data: you are trying to understand <em>how</em> some set of variables affects your response variable, or you are interested in how <span class="math-container">$X_1$</span> affects <span class="math-container">$Y$</span> while controlling for the effects of <span class="math-container">$X_2,...X_p$</span></li>
<li>Predict <span class="math-container">$Y$</span>: you want to accurately predict <span class="math-container">$Y$</span>, without caring about what or how many variables are in your model</li>
</ol>
<p>If your goal is number 1, then I recommend the Likelihood Ratio Test (LRT). The LRT is used when you have nested models and you want to know "are the data significantly more likely to come from the complex model than the parsimonious model?". This will give you insight into which model better explains the relationship between your data.</p>
<p>If your goal is number 2, then I recommend some sort of cross-validation (CV) technique (<span class="math-container">$k$</span>-fold CV, leave-one-out CV, test-training CV) depending on the size of your data. In summary, these methods build a model on a subset of your data and predict the results on the remaining data. Pick the model that does the best job predicting on the remaining data according to cross-validation.</p>
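<p>As a hedged sketch of the LRT for nested Gaussian linear models (the data and variable names below are invented for illustration), one can use the fact that twice the log-likelihood difference equals $n\log(\mathrm{RSS}_{small}/\mathrm{RSS}_{big})$ and compare it to a $\chi^2$ distribution with degrees of freedom equal to the number of extra parameters:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
# The true model really does contain an x3^2 term
y = 1.0 + x1 + 0.5 * x2 + 2.0 * x3 + 0.8 * x3**2 + rng.normal(size=n)

def rss(design, y):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return resid @ resid

ones = np.ones(n)
X_small = np.column_stack([ones, x1, x2, x3])        # parsimonious model
X_big = np.column_stack([ones, x1, x2, x3, x3**2])   # adds the x3^2 term

# For Gaussian errors: 2*(ll_big - ll_small) = n * log(RSS_small / RSS_big)
lr = n * np.log(rss(X_small, y) / rss(X_big, y))
p_value = stats.chi2.sf(lr, df=1)  # one extra parameter
print(lr, p_value)
```

<p>A small p-value here says the data are significantly more likely under the complex model, i.e. the quadratic term earns its place.</p>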
| 45
|
linear regression
|
Linear Regression Analysis
|
https://stats.stackexchange.com/questions/177371/linear-regression-analysis
|
<p>I am very new to linear regression analysis and I am trying to solve my first examples, most of the examples I have come across contained some tables and data where I could easily use the formulas I know and solve them. However, I have just come across an example that does not have much data and I have no idea where I should start and which formulas I should use to initiate.</p>
<p>we assume that the number of schoolchildren's close relationship has a linear association with the likelihood (0-100%) that a child becomes bullied in the classroom. We build a regression model where we predict the likelihood of becoming bullied with the number of friends. We found out that if a child has no friends, the likelihood of being bullied is 70%. We also know that the regression coefficient (beta) for the variable 'number of friends' is -10.</p>
<p>This question is asking me to write the <strong>regression equation</strong> and also <strong>predict</strong> the likelihood of being bullied if the child has 14 friends.</p>
<p>Shouldn't I simply use the following formula? But isn't something missing in the question?
ŷ = β0 + β1x</p>
|
<p>Here is a simple illustration of why linear regression does not work in this case, and of what logistic regression is.</p>
<p>First of all, note that your dependent variable $y$ (whether a child becomes bullied) is binary: it takes two outcomes, either yes (becomes bullied) or no (does not become bullied). Let us create a dummy variable to indicate whether an observation is a yes or a no:</p>
<p>$y=1$ if yes </p>
<p>$y=0$ if no</p>
<p>In the example we want to know what determines whether a child becomes bullied; our independent variable here is the number of friends $x$.</p>
<p>Suppose that we run the regression model:</p>
<p>$Yes(y=1) =\alpha +\beta{x_i} + error$ </p>
<p>Now suppose we obtained the following output:</p>
<p>$yes=-1+0.5{x_i}$</p>
<p>Since our dependent variable is binary, we want to know what makes it change from 0 to 1; in other words, what increases the likelihood of being bullied, $Pr(y=1)$. So our model could be
$Pr(y=1)=-1+0.5{x_i}$</p>
<p>Now, can you calculate the likelihood of being bullied for a child who has 25 friends? You get $-1 + 0.5 \times 25 = 11.5$, yet a probability must be bounded: $0\leq p \leq 1$.</p>
<p>When you get such a strange result, you have to find a function that maps the linear predictor into $[0,1]$ (for example, the logistic function $e^z/(1+e^z)$).</p>
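<p>A minimal numeric check of this point, using the illustrative coefficients from the answer above ($-1$ and $0.5$): the linear model yields a "probability" of 11.5 for a child with 25 friends, while a bounding transform such as the logistic function always stays inside $(0,1)$:</p>

```python
import math

a, b = -1.0, 0.5   # illustrative coefficients from the answer above
x = 25             # number of friends

linear_p = a + b * x                            # 11.5: not a valid probability
logistic_p = 1 / (1 + math.exp(-(a + b * x)))   # always strictly between 0 and 1

print(linear_p, logistic_p)
```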
| 46
|
linear regression
|
What are the differences between the linear regression and multiple linear regression?
|
https://stats.stackexchange.com/questions/83747/what-are-the-differences-between-the-linear-regression-and-multiple-linear-regre
|
<p>I'm interested to know: what is the difference between linear regression and multiple linear regression? Both of them seem the same to me.</p>
|
<p>By linear regression I assume that you mean simple linear regression. The difference is in the number of independent explanatory variables you use to model your dependent variable.</p>
<p>Simple linear regression</p>
<p>$Y=\beta X+\beta_0$</p>
<p>Multiple linear regression</p>
<p>$Y=\beta_1 X_1+\beta_2 X_2+...+ \beta_m X_m + \beta_0$</p>
<p>Where the $\beta$'s are the parameters to fit. In other words
<em>simple linear regression is just a special case of multiple linear regression</em>.</p>
<p>This is not the same as multivariate linear regression which has more than one dependent variable.</p>
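<p>A quick illustration (with synthetic data; nothing here comes from the question) that simple regression is just the multiple case with $m=1$: both are fit by least squares on a design matrix that differs only in its number of columns:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 2.0 + 3.0 * X1 - 1.0 * X2 + rng.normal(scale=0.1, size=n)

def fit(design, y):
    """Least-squares coefficients for a given design matrix."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

ones = np.ones(n)
simple = fit(np.column_stack([ones, X1]), Y)        # beta_0, beta
multiple = fit(np.column_stack([ones, X1, X2]), Y)  # beta_0, beta_1, beta_2

print(simple, multiple)
```

<p>Note that the simple fit's slope absorbs the omitted $X_2$ effect, which is exactly why the two models can give different coefficient estimates for the same variable.</p>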
| 47
|
linear regression
|
Logistic vs linear regression
|
https://stats.stackexchange.com/questions/44569/logistic-vs-linear-regression
|
<p>Let's say I run a linear regression model with a binary dependent variable. If I ran logistic regression on the same data would the results be comparable or exactly similar? By results I mean both the beta values and the value of dependent variable. If not why? Also what can I say about linear regression being a subset of Logistic regression or vice versa?</p>
|
<p>Neither method is a subset of the other. Suppose you have a response variable $Y$ and covariates $X$. In linear regression, you assume that $E[Y|X]$ is linear in the parameters $\beta$, whereas in logistic regression you assume the log-odds,</p>
<p>$$
\log\Bigg(\frac{P(Y=1|X)}{P(Y=0|X)}\Bigg)
$$</p>
<p>is linear in the parameters $\beta$. For each method you assume two different quantities are linear in $\beta$. The estimates of $\beta$ will likely be different and have different interpretations as well.</p>
<p>It may or may not make sense to use logistic regression for your data, but linear regression is almost certainly not a good way to analyze your (dichotomous) data.</p>
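<p>To see this numerically, here is a hedged sketch on simulated binary data (all values invented for illustration): OLS on the 0/1 response (a "linear probability model") and maximum-likelihood logistic regression recover noticeably different $\beta$'s, because different quantities are assumed linear in $\beta$:</p>

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))  # true log-odds are linear in x
y = rng.binomial(1, p_true)

X = np.column_stack([np.ones(n), x])

# Linear regression of the 0/1 response on x (linear probability model)
beta_lpm, *_ = np.linalg.lstsq(X, y, rcond=None)

# Logistic regression: minimize the negative Bernoulli log-likelihood
def negloglik(beta):
    z = X @ beta
    return np.sum(np.log1p(np.exp(z)) - y * z)

beta_logit = minimize(negloglik, np.zeros(2)).x
print(beta_lpm, beta_logit)
```

<p>The logistic slope lands near the true log-odds slope of 1.5, while the OLS slope is on the probability scale and is much smaller; they are not interchangeable.</p>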
| 48
|
linear regression
|
When to use Simple Linear Regression over Multiple Linear Regression
|
https://stats.stackexchange.com/questions/535464/when-to-use-simple-linear-regression-over-multiple-linear-regression
|
<p>I am fairly new to the world of statistics and approaching it as I learn more about machine learning. I have a fairly firm grasp on regression analysis so far but not necessarily on nuances and best practices of application.</p>
<p>For example; assume I have 5 predictor variables—a clear case for consideration of multiple regression as I understand it.</p>
<p>I'm curious as to any conditions in which it would be beneficial to draw primary conclusions based on simple linear regression modeling from these data vs. using multiple regression.</p>
<p>The one situation I can imagine is where all five of the explanatory variables are realized to have a high degree of collinearity and can be combined into a single feature.</p>
<p>The only other case I've been able to imagine is where, after initial analyses of correlation between predictors and the response variable, it's concluded that only a single predictor has any significant correlation such that a linear relationship exists between it and the response variable. In this case, however, it's really a conclusion that only one predictor variable is suited for inclusion anyway—kind of sidestepping the issue.</p>
<p>So the question is: under what conditions would one choose simple linear regression over multiple linear regression when multiple predictor variables are available for analysis. As a caveat, assume more than 1 exists where there exists a significant linear correlation between that variable and the response variable.</p>
|
<p>If you care about prediction, then you want the model that will maximize your out-of-sample predictive accuracy. The best way is to have a sense, in advance, of what variables will do that (e.g., all, some, or just one of your variables), and then fit that model. Often, people don't. In such a case, you can use a cross-validation scheme to select the model that will perform best (again, that model may use all, some, or just one of your variables). You can read some of our existing threads that discuss cross validation by searching on the <a href="/questions/tagged/cross-validation" class="post-tag" title="show questions tagged 'cross-validation'" rel="tag">cross-validation</a> tag; you may want to sort by votes and start reading from the best ranked threads.</p>
<p>If you have specific hypotheses to test, you should fit the model that corresponds to what you want to know. That model may use all, some, or only one of your variables. Once the model is fit, you can assess it (e.g., use plots to determine if the assumptions are met). If you are OK with the model, you can interpret the hypothesis tests of interest. If it turns out that some covariates are not significant, that's no problem. <em>You do not 'have to', nor should you, drop non-significant variables and refit the model.</em></p>
<p>Regarding multicollinearity, if you have <em>perfect</em> multicollinearity, you need to choose some variables to drop, or your software will choose for you. Perfect multicollinearity is rare, though, and if you don't have perfect multicollinearity, you don't have to drop any variables. If you care about prediction, multicollinearity may not pose much of a problem. If you care about testing a hypothesis, and the hypothesis of interest isn't part of the collinear variables, it won't matter. If it is part of the collinear variables, it's likely you won't have enough information to answer your question. Dropping variables may lead to a significant p-value, but that p-value won't be valid—it's still the case that you wouldn't have enough information to answer your question, it's just that p-hacking is an effective way to get significant p-values.</p>
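<p>A minimal cross-validation sketch (synthetic data; the variable roles are invented for illustration) comparing a one-variable candidate against the full model by 5-fold out-of-sample mean squared error:</p>

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 300
X = rng.normal(size=(n, 3))
# Only the first variable truly drives y in this simulation
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

# Compare candidate designs by 5-fold cross-validated MSE (lower is better)
candidates = {
    "x1 only": X[:, [0]],
    "all three": X,
}
scores = {
    name: -cross_val_score(LinearRegression(), design, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    for name, design in candidates.items()
}
print(scores)
```

<p>Whichever candidate achieves the lower cross-validated error is the one you would carry forward for prediction.</p>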
| 49
|
linear regression
|
Simple or multiple linear regression
|
https://stats.stackexchange.com/questions/547330/simple-or-multiple-linear-regression
|
<p>If you were testing the hypothesis that age would have an effect on your dv for men, but not for women, would you measure this by doing a multiple linear regression with sex and age as the predictors for your DV or by splitting the data into male/female scores and then doing a simple linear regression with age as your predictor for each?</p>
| 50
|
|
linear regression
|
Multiple Linear Regression coefficents
|
https://stats.stackexchange.com/questions/145949/multiple-linear-regression-coefficents
|
<p>I'm doing a linear regression in R. The values are like this:</p>
<pre><code>u <- c(1,2,3,4,5,6,7,8,9,10)
v <- c(21,22,23,24,25,26,27,28,29,30)
w <- c(41,42,43,44,45,46,47,48,49,50)
y <- c(128.2305,132.4040,140.1732,147.3236, 154.5410, 158.7206, 165.1761, 169.7121,178.9751,181.0309)
</code></pre>
<p>If I call linear regression function, it's returning a model, which is disregarding v and w.</p>
<pre><code>model <- lm(y~u+v+w)
Coefficients:
(Intercept) u v w
122.074 6.101 NA NA
summary(model)
</code></pre>
<p>Output: </p>
<pre><code>Call:
lm(formula = y ~ u + v + w)
Residuals:
Min 1Q Median 3Q Max
-2.05143 -0.92734 0.04845 0.73362 1.99357
Coefficients: (2 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 122.0743 1.0197 119.72 2.65e-14 ***
u 6.1008 0.1643 37.12 3.04e-10 ***
v NA NA NA NA
w NA NA NA NA
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.493 on 8 degrees of freedom
Multiple R-squared: 0.9942, Adjusted R-squared: 0.9935
F-statistic: 1378 on 1 and 8 DF, p-value: 3.04e-10
</code></pre>
<p>I tried to fit a linear model before with different values of y,u,v (with two predictor variables, w was absent), and there also, v was being assigned NA, and only u was getting coefficients. What's happening?</p>
|
<p>Two of your three regressors ($v$ and $w$, say) are linear combinations of the intercept column (the "1"-vector) and the third regressor ($u$, say), thus causing "perfect multicollinearity". For instance, the vector $v$ equals $u$ plus 20.</p>
<p>Technically speaking, the design matrix $X$ with rows
$$
X_{i,.} = (1, u_i, v_i, w_i)
$$
is not of full column rank, so the square matrix $(X'X)$ appearing in the least-squares solution $\hat \beta$
$$
\hat \beta =(X'X)^{-1}X'y
$$
cannot be inverted.</p>
<p>Fortunately, R is so smart that it still provides the correct solution by dropping two of the three linearly dependent regressors. So it uses the design matrix $\tilde X$ with rows</p>
<p>$$
\tilde X_i = (1, u_i)
$$
with the corresponding solution in your output.</p>
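<p>The rank deficiency is easy to verify numerically; here is a numpy sketch mirroring the same data as the R example above (not part of the original answer):</p>

```python
import numpy as np

u = np.arange(1, 11, dtype=float)     # 1, ..., 10
v = u + 20.0                          # 21, ..., 30 = 20*intercept + u
w = u + 40.0                          # 41, ..., 50 = 40*intercept + u
X = np.column_stack([np.ones(10), u, v, w])

# Four columns, but only two are linearly independent,
# so the square matrix X'X cannot be inverted.
rank = np.linalg.matrix_rank(X)       # rank == 2, not 4
```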
| 51
|
linear regression
|
Linear Regression Variable Selection
|
https://stats.stackexchange.com/questions/548572/linear-regression-variable-selection
|
<p>Hi I am running a simple single variable linear regression model where covid deaths per 100,000 are my dependent variable and my independent variable is % of population with iron deficiency. Does it make sense to regress these two variables together or should I be aligning my data and make iron deficiency per 100,000?</p>
|
<blockquote>
<p>or should I be aligning my data and make iron deficiency per 100,000?</p>
</blockquote>
<p>No, this is a bad idea. In that case you will have the response and the regressor <em>both</em> divided by the same variable - population - which will invoke bias due to mathematical coupling.</p>
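<p>To make the coupling concrete, here is a small simulation (hypothetical numbers, in Python/numpy rather than whatever software the asker uses): two quantities generated independently become clearly correlated once both are divided by the same population variable.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
deaths = rng.gamma(2.0, 50.0, n)       # generated independently of...
deficiency = rng.gamma(2.0, 30.0, n)   # ...this variable
population = rng.uniform(1e4, 1e6, n)  # shared denominator

# The raw variables are essentially uncorrelated.
r_raw = np.corrcoef(deaths, deficiency)[0, 1]

# Dividing both by the same population induces a spurious correlation.
r_ratio = np.corrcoef(deaths / population, deficiency / population)[0, 1]
# r_raw is near 0, while r_ratio is clearly positive
```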
| 52
|
linear regression
|
Hacking linear regression
|
https://stats.stackexchange.com/questions/491194/hacking-linear-regression
|
<p>Let's say I perform linear regression on some data that produces the following <span class="math-container">$R^2$</span>:</p>
<p><span class="math-container">$\text{RSS} = 1966815.13$</span></p>
<p><span class="math-container">$\text{TSS} = 2145213.91$</span></p>
<p><span class="math-container">$R^2 = 0.083$</span></p>
<p>Now let's say I bucket (take the average of successive equal size windows) the data and perform the exact same linear regression. This time the <span class="math-container">$R^2$</span> is:</p>
<p><span class="math-container">$\text{RSS} = 1187.56$</span></p>
<p><span class="math-container">$\text{TSS} = 4758.32$</span></p>
<p><span class="math-container">$R^2 = 0.75$</span></p>
<p><strong>What have I just done here?</strong></p>
<p>I clearly haven't improved the explanatory power of the model over the original data, yet I've hacked the <span class="math-container">$R^2$</span> by averaging. What's the bias / variance argument of why this occurs?</p>
<p>A graphical representation:</p>
<p><a href="https://i.sstatic.net/b4Pv5.png" rel="noreferrer"><img src="https://i.sstatic.net/b4Pv5.png" alt="enter image description here" /></a></p>
<p>Edit: I've reduced the total variance of the data, but the bias remains unaffected. <strong>When (if ever) is this a valid statistical approach?</strong></p>
|
<h2>You decreased the (total) variance in the data making it easier to explain.</h2>
<p>Thanks to the same <span class="math-container">$y$</span>-axis limits, it can be easily seen that the data of your first example is much more spread in this direction than in the <span class="math-container">$x$</span>-axis direction.
Your linear regression model does capture the slight trend of increasing <span class="math-container">$y$</span> with increasing <span class="math-container">$x$</span>, but it doesn't tell you anything about the variation along the <span class="math-container">$y$</span>-axis when differences in <span class="math-container">$x$</span> are small, i.e. around the regression line.
This is measured by residual sum of squares (<span class="math-container">$\text{RSS}$</span>) you included.</p>
<p>In other words, there is still a lot of "error", which hasn't been accounted for in this model.
Since <span class="math-container">$R^2$</span> represents the ratio between the explained variance and total variance, it remains small.</p>
<p>In your second set of data, most of the variation is explained by that same linear relationship between <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.
Only a small part of the variance is left unexplained.
This is reflected both in a smaller ratio <span class="math-container">$\frac{\text{RSS}}{\text{TSS}}$</span> and in a smaller value of the total sum of squares (<span class="math-container">$\text{TSS}$</span>).</p>
<p>To conclude, the same model performs much better in the second case, because that set of data is much easier to explain (with this type of model).</p>
<hr />
<p>You also asked when, if ever, this would be a good statistical approach. It depends on <em>what problem</em> you are approaching with this.</p>
<p>If you wanted to show that your model fits well to your data and only selected a subset of data for which it performs particularly well, that is an example of <a href="https://en.wikipedia.org/wiki/Cherry_picking" rel="nofollow noreferrer">cherry picking</a>, which is a deceitful practice when done intentionally.</p>
<p>On the other hand, if you consider the variation in the direction of <span class="math-container">$y$</span>-axis to be noise and you just want to give a succinct summary of your data, it might be acceptable to give it in some sort of average (like binning your data as you did).
However, the regression line also serves the purpose of illustrating the upward trend well without manipulating the data itself.
It also makes clear what is your data and what your model: the assumption of <span class="math-container">$y$</span> variance being noise (or error) is implicit here.</p>
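<p>A small simulation (a hypothetical numpy sketch, not the asker's data) reproduces the effect: binning removes most of the noise variance, so the same linear trend explains a much larger share of what is left.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = np.linspace(0, 10, n)
y = 2.0 * x + rng.normal(0, 15, n)   # real trend buried in heavy noise

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_raw = r_squared(x, y)

# Average successive windows of 50 points (20 buckets) and refit.
xb = x.reshape(20, -1).mean(axis=1)
yb = y.reshape(20, -1).mean(axis=1)
r2_binned = r_squared(xb, yb)
# r2_binned is far larger than r2_raw, yet no information was added
```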
| 53
|
linear regression
|
Linear regression cost function
|
https://stats.stackexchange.com/questions/561349/linear-regression-cost-function
|
<p>I'm looking at plain linear regression was wondering about the specifics of the cost function.</p>
<blockquote>
<p>The cost function associated with simple linear regression is given by:</p>
<p><span class="math-container">$$J(\theta) = \frac{1}{2n}\sum_{1=1}^n(y_i - \theta^tx_i)^2$$</span></p>
</blockquote>
<p>Where does the (<span class="math-container">$\tfrac{1}{2n}$</span>) term come from? Why not just <span class="math-container">$\tfrac{1}{n}$</span> so we achieve the average?</p>
|
<p><a href="https://en.wikipedia.org/wiki/Linear_regression" rel="nofollow noreferrer">Linear regression</a> minimizes squared error</p>
<p><span class="math-container">$$
J(\theta) = \sum_{i=1}^n (y_i - \theta^T x_i)^2
$$</span></p>
<p>You might want to put <span class="math-container">$1/n$</span> in front, so that its units don't depend on sample size <span class="math-container">$n$</span>. The <span class="math-container">$1/2$</span> <a href="https://www.mathsisfun.com/calculus/derivatives-rules.html" rel="nofollow noreferrer">comes from the derivative</a></p>
<p><span class="math-container">$$
\frac{d}{dx} \big(x^2\big) = 2x
$$</span></p>
<p>and with</p>
<p><span class="math-container">$$
\frac{d}{dx} \Big( \frac{1}{2} x^2 \Big) = x
$$</span></p>
<p>so putting <span class="math-container">$1/2$</span> in front makes writing the derivatives simpler because you don't need to add <span class="math-container">$2$</span> in front. <span class="math-container">$1/(2n)$</span> does both.</p>
<p>No matter which form you choose (sum of squared errors, <span class="math-container">$1/n$</span>, <span class="math-container">$1/2$</span>, <span class="math-container">$1/(2n)$</span>), they have the same minimizer, because multiplying a function by a positive constant does not change the location of its minimum, so they are equivalent.</p>
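<p>A quick numerical illustration (a hypothetical Python sketch, not from the question) of the last point: scaling the cost by any positive constant changes the minimum value but not the minimizer.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(size=100)
n = len(x)

# Closed-form minimizer of sum_i (y_i - theta * x_i)^2 (no intercept):
theta_hat = (x @ y) / (x @ x)

# Evaluate three scalings of the cost on a grid of theta values.
thetas = np.linspace(0.0, 6.0, 6001)
sse = ((y[:, None] - thetas[None, :] * x[:, None]) ** 2).sum(axis=0)

i_sse = np.argmin(sse)            # sum of squared errors
i_avg = np.argmin(sse / n)        # 1/n scaling
i_half = np.argmin(sse / (2 * n)) # 1/(2n) scaling
# all three argmins coincide, at the grid point nearest theta_hat
```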
| 54
|
linear regression
|
Linear regression for classification
|
https://stats.stackexchange.com/questions/129571/linear-regression-for-classification
|
<p>Suppose, I have a classification problem with 2 classes (0 and 1) and evaluation criteria is AUC. I used the following method: fit a linear regression and then pass its predictions through the logistic function.
As far as I understand, it is not equivalent to logistic regression, because estimates of coefficients will be different. And strangely, it works better than logistic regression for my problem.
Does this linear regression method have any theoretical justifications?
Have you seen it before?
Thanks a lot!</p>
|
<p>This is not outrageous.</p>
<p>The logistic regression aims to minimize log loss, <span class="math-container">$L(y,\hat y) = -\sum\bigg[y_i\log(\hat y_i) + (1 - y_i)\log(1 -\hat y_i)
\bigg]$</span>.</p>
<p>Whatever your OLS-based model does, the competing logistic regression is not trying to optimize your metric of interest, so I find it completely believable that an alternative estimation method for the coefficients gives superior performance in terms of AUC.</p>
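<p>One more point worth noting: AUC depends only on the ordering of the scores, so the logistic squashing step itself cannot change it; whatever AUC advantage the OLS-based model has comes from the coefficient estimates, not from the sigmoid. A minimal numpy sketch (hypothetical data) checks this:</p>

```python
import numpy as np

def auc(scores, labels):
    # Mann-Whitney form: P(score of a positive > score of a negative),
    # with ties counted as one half.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]
    return ((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / diffs.size

rng = np.random.default_rng(4)
labels = np.array([0] * 50 + [1] * 50)
scores = labels + rng.normal(0.0, 1.0, 100)   # informative raw scores

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
a_raw = auc(scores, labels)
a_sig = auc(sigmoid(scores), labels)
# a_raw == a_sig: a strictly increasing transform preserves the ranking
```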
| 55
|
linear regression
|
Coefficient for linear and non-linear regression
|
https://stats.stackexchange.com/questions/506491/coefficient-for-linear-and-non-linear-regression
|
<p>I have used a deep NN for performing regression analysis with multiple independent variables and then predicting one dependent variable.</p>
<p>To understand the quality of the regression I have used <span class="math-container">$R^2$</span>, but it is typically used for linear regression.</p>
<p>My question is: can I use the <span class="math-container">$R^2$</span> coefficient for determining the quality of such a regression? Please take into account that the problem I'm focusing on is non-linear. If not, which coefficient would be correct, instead of <span class="math-container">$R^2$</span>, in the case of non-linear regression?</p>
<p>Thank you in advance</p>
|
<p><span class="math-container">$R^2$</span> can be used. You can also check the loss functions used in regression settings, such as MSE (mean squared error), MAE (mean absolute error), etc.</p>
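<p>For completeness, all three metrics are easy to compute by hand; a small Python sketch with made-up predictions (hypothetical values, not the asker's data). Note that <span class="math-container">$R^2$</span> compares the model's squared error with that of predicting the mean, so it makes no linearity assumption about the model that produced the predictions:</p>

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.5, 10.0, 12.0])   # hypothetical targets
y_pred = np.array([2.8, 5.4, 7.0, 10.5, 11.6])   # hypothetical NN outputs

mse = np.mean((y_true - y_pred) ** 2)            # mean squared error
mae = np.mean(np.abs(y_true - y_pred))           # mean absolute error
r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
```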
| 56
|
linear regression
|
Homoscedasticity assumption in simple linear regression
|
https://stats.stackexchange.com/questions/336101/homoscedasticity-assumption-in-simple-linear-regression
|
<p>What does it mean for a distribution to be homoscedastic (i.e. $ σ(Y|X = x) = σ$) in the context of simple linear regression?</p>
<p>Why do we need this assumption in simple linear regression?</p>
<p>What will happen to the regression if a distribution is not homoscedastic?</p>
|
<p>When you perform a regression, you are making assumptions about the distributions of the random variables whose outcome you have observed. Those observations are your data.</p>
<p>Homoscedasticity means that the distribution you assume is generating the $Y$ value of your data points has the same variance no matter the value of $X$.</p>
<blockquote>
<p>Why do we need this assumption in simple linear regression?</p>
</blockquote>
<p>The way you fit a simple linear regression model is that you look for the parameters that make the data you observed as likely as possible. This is called maximum likelihood estimation. The common recipe for finding those parameters (via algebra) works under the assumption of homoscedasticity.</p>
<p><strong>Consider the following example:</strong></p>
<p>Three data points are given and simple linear regression yields the following regression line:</p>
<p><a href="https://i.sstatic.net/Flq8f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Flq8f.png" alt="enter image description here"></a> </p>
<p>Now, what if I told you that when $X$ takes the value $2$ the distribution of $Y$ has a very very small variance, same for the value $3$, while it has substantial variance given that $X$ takes the value $1$? In this case, assuming that the regression line is true, getting the data you got would be very unlikely, because the black dots are quite far from the line.</p>
<p>A regression line like this:</p>
<p><a href="https://i.sstatic.net/tM2mf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tM2mf.png" alt="enter image description here"></a></p>
<p>would give you a much greater likelihood.</p>
| 57
|
linear regression
|
Linear Regression feature transformation
|
https://stats.stackexchange.com/questions/342731/linear-regression-feature-transformation
|
<p>One way of making linear regression applicable more widely is to use basis
expansions, i.e., adding more features to the input set. Suppose that the
data is described by a p-tuple, $(x_1 , x_2 , . . . , x_p )$. Comment on the utility of
the following sets of features. Specifically describe the family of functions
that can be represented by a linear combination of these features.</p>
<p>$(a)\ (x_1, \dots, x_p, x_1^2, x_1x_2, x_1x_3, \dots, x_1x_p, x_2^2, x_2x_3, \dots, x_p^2)$<br>
$(b)\ (x_1^2, x_2^2, \dots, x_p^2)$</p>
<p>How to solve this type of questions? Any hint or idea.</p>
<p><strong>My Attempt :</strong>
I have studied linear regression from <a href="http://cs229.stanford.edu/notes/cs229-notes1.pdf" rel="nofollow noreferrer">Stanford notes</a>
Now according to this the equation of the predicted value of $y$ is given by a linear equation of the feature variables, but here in this question the feature variables given are not linear so will it be of the same form?</p>
<p>Further by family of function does it imply finding the equation of $y$ or it has some other meaning?
Here by $y$ I mean value being predicted using linear regression.</p>
|
<p>I hope I understood your question. The equation for linear regression (without the intercept) can be written as follows:
\begin{equation}
y_i=\beta_1x_{i,1}+\beta_2x_{i,2}+…+\beta_{p}x_{i,p}+\epsilon_i.
\end{equation}
For $(a)$ each original feature is kept and all squares and pairwise products are added, each with its own coefficient:
\begin{equation}
y_i=\beta_1x_{i,1}+\dots+\beta_{p}x_{i,p}+\beta_{1,1}x^2_{i,1}+\beta_{1,2}x_{i,1}x_{i,2}+\dots+\beta_{p,p}x^2_{i,p}+\epsilon_i.
\end{equation}
A linear combination of these features can therefore represent any quadratic polynomial in $(x_1,\dots,x_p)$ (up to a constant term, which an intercept would supply).
For $(b)$ each feature is replaced by its square, $x_i \mapsto x_i^2$, giving:
\begin{equation}
y_i=\beta_1x^2_{i,1}+\beta_2x^2_{i,2}+…+\beta_{p}x^2_{i,p}+\epsilon_i.
\end{equation}
where $i=1,2,...,t$ and $t$ is some finite integer.</p>
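<p>To see the "linear in the parameters" point concretely, here is a small numpy sketch (hypothetical data, not from the question): ordinary least squares on the expanded features $(1, x, x^2)$ recovers a quadratic function exactly, even though the fit is still a linear model in the coefficients.</p>

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-2, 2, 50)
y = 1.0 + 0.5 * x - 2.0 * x ** 2     # a quadratic, not a straight line

# Basis expansion: the model is still linear in the coefficients.
X = np.column_stack([np.ones_like(x), x, x ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
max_err = np.abs(X @ coef - y).max()
# coef recovers (1.0, 0.5, -2.0) and max_err is numerically zero
```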
| 58
|
linear regression
|
Plotting linear regression with factors
|
https://stats.stackexchange.com/questions/179498/plotting-linear-regression-with-factors
|
<p>I'm working on a project with R and I don't think I'm using the appropriate linear regression or plot, I've made both but they don't seem to match. The study is an ANOVA comparing $CO_2$ emissions per capita with 5 groups of income levels and a relevant linear regression. For the linear regression I want use $CO_2$ as the dependent variable and $GDP$ as the independent variable and the 5 $income$ levels as dummy variables.</p>
<p>Begin by ordering the variables and remove the intercept:</p>
<pre><code>income_factor = factor(Data01$income, levels=c("Low income",
"Lower middle income", "Upper middle income", "High income: OECD", "High
income: nonOECD"))
lm.r = lm(CO2 ~ income_factor -1, data=Data01)
</code></pre>
<p>Gives</p>
<pre><code>summary(lm.r)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
income_factorLow income 0.2318 0.6943 0.334 0.73902
income_factorLower middle income 1.7727 0.6355 2.789 0.00603 **
income_factorUpper middle income 4.7685 0.6271 7.604 4.12e-12 ***
income_factorHigh income: OECD 8.7926 0.7305 12.036 < 2e-16 ***
income_factorHigh income: nonOECD 19.4642 1.3667 14.242 < 2e-16 ***
</code></pre>
<p>So that we may write the linear regression in the form:</p>
<p>$$ CO_2 = \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4 + \beta_5 X_5 $$</p>
<p>Where $X_i$ is a dummy variable 1 at the level of income and 0 otherwise</p>
<p>For the corresponding plot I used:</p>
<pre><code> plot <- ggplot(data=Data01, aes(x=GDP, y=CO2, colour=factor(income)))
plot + stat_smooth(method=lm, fullrange=FALSE) + geom_point()
</code></pre>
<p>Which gives the graph</p>
<p><a href="https://i.sstatic.net/5Iw53.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Iw53.png" alt="CO2 ~ GDP"></a></p>
<p>But here is my confusion, it looks like there is the <em>lm</em> term in the plot, but I don't think it is using the same values taken from the previous linear regression. As Looking at summary from the linear regression, High income: OECD the estimate is 8.79, but the line for it is pretty much flat.</p>
<p>While I was typing this I realized that the graph has $GDP$ on the x-axis, but $GDP$ is not included in the linear regression. Would adding an interaction term $income\_factor \times GDP$ help?</p>
|
<p>When you use dummy variables, the coefficients don't represent slopes, they represent a constant number which is added to the estimate when the variable equals 1.</p>
<p>So your "High income: OECD" results from the linear regression are entirely consistent with the graph-- you can see on the graph that the High income: OECD line runs almost horizontally at about CO2 = 9, compared to your linear regression result of 8.7926.</p>
<p>If I understand ggplot correctly, it's plotting a separate regression for each income level. (A regression of CO2 levels on GDP.) So that's what you'd have to do if you want to get the same results as displayed on the graph.</p>
<p>As for your linear regression design, GDP will likely have some strange interactions with the income factors that will make the results difficult to interpret.</p>
<p>If the income factors are based on GDP per capita, GDP basically equals $$income factor \times population $$ Your results would be much clearer if you could run the regression with population instead of GDP. Then the interaction variables income_factor*population would make a lot of sense.</p>
| 59
|
linear regression
|
Linear Regression on non-stationary regressor
|
https://stats.stackexchange.com/questions/323866/linear-regression-on-non-stationary-regressor
|
<p>I am doing a linear regression and the regressors, like GDP, inflation, etc, (independent variables) are non-stationary.</p>
<p>What should I do if those regressors are not stationary? Should I 'make' it to stationary before regression?</p>
| 60
|
|
linear regression
|
Linear regression and multicollinearity
|
https://stats.stackexchange.com/questions/405805/linear-regression-and-multicollinearity
|
<p>There is a multiple linear regression model being created.</p>
<p>$Y = ax_1 + bx_2 + cx_3$</p>
<p>The following hypotheses are formed:</p>
<p>Variable $x$ does not impact $y$,</p>
<p>for all variables $x_1$, $x_2$, $x_3$ and so on.</p>
<p>We removed a variable, say $x_2$, from the regression model because of a high VIF value, but that means it is highly correlated with $x_1$ or $x_3$.</p>
<p>What would happen to the hypothesis for $x_2$? Is it rejected?</p>
<p>Please clarify if feasible.</p>
|
<p>Your hypotheses can only relate to one model. In your model with three predictors, you have three hypotheses. If you decide to remove one of the variables, the model changes, and you cannot compare hypotheses. In particular, there is no sensible hypothesis for <span class="math-container">$x_2$</span> anymore, but also, the hypotheses for <span class="math-container">$x_1$</span> and <span class="math-container">$x_3$</span> are no longer comparable to the "previous" ones. </p>
<p>This system of hypothesis testing does not answer the question "does <span class="math-container">$x_2$</span> influence <span class="math-container">$y$</span>" in general. It only answers it with respect to a model that has to be specified. </p>
<p>More precisely, "does <span class="math-container">$x$</span> influence <span class="math-container">$y$</span>" is more a question of causality, which you also cannot answer within this framework. </p>
| 61
|
linear regression
|
Difference between kernel linear regression and non-parametric regression
|
https://stats.stackexchange.com/questions/546954/difference-between-kernel-linear-regression-and-non-parametric-regression
|
<p>A quick question popped up in my mind while reading about <em>non-parametric</em> linear regression.</p>
<p>In linear regression, we model our response <span class="math-container">$\textbf{y} \sim \mathcal{N}(X\beta, \sigma^2I)$</span> so basically we try to estimate a linear function of the form</p>
<p><span class="math-container">$$f_\beta(\textbf{x}_i) = \textbf{x}_{i,1}\beta_1 + \dots + \textbf{x}_{i,p}\beta_p$$</span></p>
<p>while in non-parametric regression we allow more possibilities for the structure of <span class="math-container">$f$</span> and the response is modeled as</p>
<p><span class="math-container">$$\textbf{y} \sim \mathcal{N}(f(x), \sigma^2I)$$</span></p>
<p>with <span class="math-container">$f$</span> respecting some smoothness assumptions.</p>
<p>What it's not too clear to me is what is the main difference between kernel linear regression and non-parametric one. It is well known that the word <strong>linear</strong> in linear regression refers to the parameters, so one typically applies a non-linear feature transformation <span class="math-container">$\phi : \mathbb{R}^p \rightarrow \mathbb{R}^d$</span> to the <strong>features</strong> and then searches for some hyperplane fitting the data (brought in higher dimension by the map <span class="math-container">$\phi$</span>).</p>
|
<p>A <em>parametric model</em> has fixed number of parameters, in case of <a href="https://stats.stackexchange.com/questions/268638/what-exactly-is-the-difference-between-a-parametric-and-non-parametric-model"><em>non-parametric model</em></a>, the number of parameters grows with the size of the data. What follows, with a parametric model we need to make stronger assumptions about the distribution of the data, while in case of the non-parametric model, it is "learned from the data" to greater degree, but the practical differences may be blurry in some cases. That is why models such as Gaussian processes <a href="https://stats.stackexchange.com/questions/46588/why-are-gaussian-process-models-called-non-parametric">are considered</a> as non-parametric, no matter that they make distributional assumptions and have parameters.</p>
<p><a href="https://en.wikipedia.org/wiki/Kernel_regression" rel="nofollow noreferrer">Kernel regression</a> is one of the non-parametric regression models, so it cannot differ from non-parametric models. It is a model that uses kernels to approximate the expected value of the distribution of the data. Other non-parametric models may use different ways of achieving this, for example in case of <a href="https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm" rel="nofollow noreferrer"><span class="math-container">$k$</span>-NN regression</a> the predicted mean would be just an average of the <span class="math-container">$k$</span> closest neighbors of the datapoint.</p>
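<p>As a concrete example of the kernel approach, here is a minimal Nadaraya–Watson estimator in numpy (a sketch under simple assumptions, not any particular library's API): the prediction at a point is a kernel-weighted average of <em>all</em> training targets, so the "parameters" are the training points themselves and the model grows with the data.</p>

```python
import numpy as np

def nw_regression(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    d = x_query[:, None] - x_train[None, :]          # (n_query, n_train)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)                                        # noiseless sanity check
pred = nw_regression(x, y, np.array([np.pi / 2]), bandwidth=0.2)
# pred[0] is close to sin(pi/2) = 1
```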
| 62
|
linear regression
|
R: Anova and Linear Regression
|
https://stats.stackexchange.com/questions/76250/r-anova-and-linear-regression
|
<p>I am new to statistics and I am trying to understand the difference between ANOVA and linear regression. I am using R to explore this. I read various articles about why ANOVA and regression are different but still the same and how the can be visualised etc. I think I am pretty there but one bit is still missing.</p>
<p>I understand that ANOVA compares the variance within groups with the variance between groups to determine whether there is or is not a difference between any of the groups tested. (<a href="https://controls.engin.umich.edu/wiki/index.php/Factor_analysis_and_ANOVA" rel="noreferrer">https://controls.engin.umich.edu/wiki/index.php/Factor_analysis_and_ANOVA</a>)</p>
<p>For linear regression, I found a post in this forum which says that the same can be tested when we test whether b (slope) = 0.
(<a href="https://stats.stackexchange.com/questions/555/why-is-anova-taught-used-as-if-it-is-a-different-research-methodology-compared">Why is ANOVA taught / used as if it is a different research methodology compared to linear regression?</a>)</p>
<p>For more than two groups I found a website stating:</p>
<p>The null hypothesis is: <span class="math-container">$\text{H}_0: µ_1 = µ_2 = µ_3$</span></p>
<p>The linear regression model is: <span class="math-container">$y = b_0 + b_1X_1 + b_2X_2 + e$</span></p>
<p>The output of the linear regression is, however, then the intercept for one group and the difference to this intercept for the other two groups.
(<a href="http://www.real-statistics.com/multiple-regression/anova-using-regression/" rel="noreferrer">http://www.real-statistics.com/multiple-regression/anova-using-regression/</a>)</p>
<p>For me, this looks like that actually the intercepts are compared and not the slopes?</p>
<p>Another example where they compare intercepts rather than the slopes can be found here:
(<a href="http://www.theanalysisfactor.com/why-anova-and-linear-regression-are-the-same-analysis/" rel="noreferrer">http://www.theanalysisfactor.com/why-anova-and-linear-regression-are-the-same-analysis/</a>)</p>
<p>I am now struggling to understand what is actually compared in the linear regression? the slopes, the intercepts or both? </p>
|
<blockquote>
<p>this looks like that actually the intercepts are compared and not the slopes?</p>
</blockquote>
<p>Your confusion there relates to the fact that you must be very careful to be clear about which intercepts and slopes you mean (intercept of what? slope of what?).</p>
<p>The role of a coefficient of a 0-1 dummy in a regression can be thought of both as a slope <em>and</em> as a difference of intercepts, simply by changing how you think about the model.</p>
<p>Let's simplify things as far as possible, by considering a two-sample case.</p>
<p>We can still do one-way ANOVA with two samples but it turns out to essentially be the same as a two-tailed two sample t-test (the equal variance case).</p>
<p>Here's a diagram of the population situation:</p>
<p><img src="https://i.sstatic.net/nBxeF.png" alt="two group means as regression, population situation" /></p>
<p>If <span class="math-container">$\delta = \mu_2-\mu_1$</span>, then the population linear model is</p>
<p><span class="math-container">$y = \mu_1 + \delta x + e$</span></p>
<p>so that when <span class="math-container">$x=0$</span> (which is the case when we're in group 1), the mean of <span class="math-container">$y$</span> is <span class="math-container">$\mu_1 + \delta \times 0 = \mu_1$</span> and when <span class="math-container">$x=1$</span> (when we're in group 2), the mean of <span class="math-container">$y$</span> is <span class="math-container">$\mu_1 + \delta \times 1 = \mu_1 + \mu_2 - \mu_1 = \mu_2$</span>.</p>
<p>That is the coefficient of the slope (<span class="math-container">$\delta$</span> in this case) and the difference in means (and you might think of those means as intercepts) is the same quantity.</p>
<p><span class="math-container">$ $</span></p>
<p>To help with concreteness, here are two samples:</p>
<pre><code>Group1: 9.5 9.8 11.8
Group2: 11.0 13.4 12.5 13.9
</code></pre>
<p>How do they look?</p>
<p><img src="https://i.sstatic.net/6VYPQ.png" alt="sample plot" /></p>
<p>What does the test of difference in means look like?</p>
<p>As a t-test:</p>
<pre><code> Two Sample t-test
data: values by group
t = -5.0375, df = 5, p-value = 0.003976
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-4.530882 -1.469118
sample estimates:
mean in group g1 mean in group g2
9.9 12.9
</code></pre>
<p>As a regression:</p>
<pre><code>Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 9.9000 0.4502 21.991 3.61e-06 ***
groupg2 3.0000 0.5955 5.037 0.00398 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.7797 on 5 degrees of freedom
Multiple R-squared: 0.8354, Adjusted R-squared: 0.8025
F-statistic: 25.38 on 1 and 5 DF, p-value: 0.003976
</code></pre>
<p>We can see in the regression that the intercept term is the mean of group 1, and the groupg2 coefficient ('slope' coefficient) is the difference in group means. Meanwhile the p-value for the regression is the same as the p-value for the t-test (0.003976)</p>
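<p>The same identity is easy to verify numerically with the two samples above; a small numpy sketch (rather than R) of the dummy-variable regression:</p>

```python
import numpy as np

g1 = np.array([9.5, 9.8, 11.8])
g2 = np.array([11.0, 13.4, 12.5, 13.9])

y = np.concatenate([g1, g2])
dummy = np.concatenate([np.zeros(len(g1)), np.ones(len(g2))])  # 0/1 group code
X = np.column_stack([np.ones(len(y)), dummy])

intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]
# intercept equals the group-1 mean; the dummy ('slope') coefficient
# equals the difference of the two group means.
```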
| 63
|
linear regression
|
Scaling in linear regression
|
https://stats.stackexchange.com/questions/208341/scaling-in-linear-regression
|
<p>The text is from <a href="http://www.amazon.in/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370?tag=googinhydr18418-21&tag=googinkenshoo-21&ascsubtag=b2873fcd-dac7-4d50-8a4f-ef395872fda3" rel="nofollow">Intro to Statistical Learning</a>, page 380. Can anyone explain both ideas clearly, with an example if possible?</p>
<blockquote>
<p>1) In linear regression scaling has no effect. </p>
<p>2)In linear regression,multiplying a variable by a factor of c will simply lead to multiplication of the corresponding coefficient estimate by a factor of 1/c, and thus will have no substantive effect on the model obtained</p>
</blockquote>
|
<p>It does not matter for fitted values and residuals if we change the units of measurement of $X$. Consider transforming $X $ by some invertible $k\times k$ matrix $A$, $XA$ (e.g., change months of schooling to years and meters to centimeters when explaining wages).</p>
<p>This is seen as follows,
\begin{eqnarray*}
P_{XA}&:=&XA\bigl((XA)'XA\bigr)^{-1}(XA)'\\
&=&XA\bigl(A'X'XA\bigr)^{-1}A'X'\\
&=&XAA^{-1}(X'X)^{-1}(A')^{-1}A'X'\\
&=&P_{X}
\end{eqnarray*}</p>
<p>What about $\hat{\beta}$? Consider
\begin{eqnarray*}
\hat{\beta}^\circ&=&\bigl(\underbrace{A'X'}_{``X'"}\underbrace{XA}_{``X"}\bigr)^{-1}\underbrace{A'X'}_{``X'"}y\\
&=&A^{-1}(X'X)^{-1}(A')^{-1}A'X'y\\
&=&A^{-1}(X'X)^{-1}X'y\\
&=&A^{-1}\hat{\beta}
\end{eqnarray*}
That is, if
$$
A=\begin{pmatrix}
1/12&0\\
0&100
\end{pmatrix}\qquad\text{so that}\qquad A^{-1}=\begin{pmatrix}
12&0\\
0&1/100
\end{pmatrix}
$$
in the above example, the effect of a change in the regressors is, sensibly, adjusted accordingly.</p>
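<p>The algebra above is easy to confirm numerically; a hypothetical numpy sketch with a single regressor rescaled by $c=100$ (say, meters to centimeters):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

c = 100.0                                   # meters -> centimeters
Xs = np.column_stack([np.ones(n), c * x])
beta_s = np.linalg.lstsq(Xs, y, rcond=None)[0]

# The slope coefficient is divided by c; fitted values are unchanged.
slope_ok = np.isclose(beta_s[1], beta[1] / c)
fitted_ok = np.allclose(X @ beta, Xs @ beta_s)
```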
| 64
|
linear regression
|
Simple Linear Regression Question Confusion
|
https://stats.stackexchange.com/questions/545954/simple-linear-regression-question-confusion
|
<p><a href="https://i.sstatic.net/vLSB9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vLSB9.png" alt="enter image description here" /></a></p>
<p>By using <a href="https://www.socscistatistics.com/tests/regression/default.aspx" rel="nofollow noreferrer">this site</a> I found the two linear regression models that the question asked.</p>
<p>The equations came out to be:
<span class="math-container">$$US=60.495+18.550x$$</span>
<span class="math-container">$$China =-2.08+18.296x$$</span></p>
<p>A follow-up question asked me to find "What year will china take over the US in terms of internet users"? Therefore I equated the two equations and got <span class="math-container">$x \approx -246$</span>.</p>
<p>Since I 'indexed' <span class="math-container">$1998$</span> as <span class="math-container">$0$</span> and <span class="math-container">$2000$</span> as <span class="math-container">$2$</span> and so on. The answer is <span class="math-container">$1998 - 246 = 1752$</span></p>
<p>The year 1752 and internet does not really make a lot of sense. I totally get why this answer results from the linear models mentioned above. However, I am confused and I think I could be missing something that is resulting in a wrong answer.</p>
<p>I see how some curves will fit here nicely but <a href="https://media.cheggcdn.com/study/1d6/1d627bd8-4cbc-4873-ac7b-0aad42c32f78/image.png" rel="nofollow noreferrer">The Question</a> specifically asks for linear regression models.</p>
|
<p>I think those equations are wrong: equating them does indeed give <span class="math-container">$x\approx-246$</span>, so the problem lies in the fitted coefficients themselves.</p>
<p>I've copied the data from the chart into R:</p>
<pre class="lang-r prettyprint-override"><code>library(tibble)

df <- tibble(
year = seq(1998, 2008, by = 2),
usa = c(44.23, 83.05, 169.57, 190.43, 206.49, 225.69),
china = c(2.1, 22.54, 59.09, 94.94, 138.33, 200.5)
)
</code></pre>
<p>And fitted the two regression models:</p>
<pre class="lang-r prettyprint-override"><code>usa_m <- lm(usa ~ year, data = df)
china_m <- lm(china ~ year, data = df)
</code></pre>
<p>I get the following equations:
<span class="math-container">$$
\operatorname{USA}=-37001.834+18.550\operatorname{year}\\
\operatorname{China}=-39264.688+19.646\operatorname{year}
$$</span></p>
<p>If you equate these:
<span class="math-container">$$
-37001.834+18.550\operatorname{year}=-39264.688+19.646\operatorname{year}\\
-1.096\operatorname{year}=-2262.854\\
\operatorname{year}=\frac{2262.854}{1.096}\approx2065
$$</span></p>
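For anyone wanting to cross-check outside R, the same fit can be reproduced with numpy alone (same data as the R chunk above; note that <code>np.polyfit</code> returns the slope first):

```python
import numpy as np

year = np.arange(1998, 2009, 2)  # 1998, 2000, ..., 2008
usa = np.array([44.23, 83.05, 169.57, 190.43, 206.49, 225.69])
china = np.array([2.1, 22.54, 59.09, 94.94, 138.33, 200.5])

b_usa, a_usa = np.polyfit(year, usa, 1)
b_chn, a_chn = np.polyfit(year, china, 1)

# Solve a_usa + b_usa*t = a_chn + b_chn*t for the crossover year
crossover = (a_usa - a_chn) / (b_chn - b_usa)
print(round(b_usa, 3), round(b_chn, 3), round(crossover, 1))
# slopes ~18.550 and ~19.646; crossover ~2064, i.e. ~2065 with the rounded coefficients
```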
| 65
|
linear regression
|
Why is a linear regression not linear when you plot it?
|
https://stats.stackexchange.com/questions/586864/why-is-a-linear-regression-not-linear-when-you-plot-it
|
<p>I can't find a proper explanation for my question on <em><a href="https://stats.stackexchange.com/tour">Cross Validated</a></em>. The closest explanation was <a href="https://medium.com/@biswajit3071976/what-does-the-term-linear-in-linear-regression-mean-97ef717bed7b" rel="nofollow noreferrer">this one</a> from <a href="https://en.wikipedia.org/wiki/Medium_(website)" rel="nofollow noreferrer">Medium</a>, but still, I don't see the difference visually among the four cases in that explanation. So here we are.</p>
<p>I have this <code>df.head()</code> with two plots where:</p>
<ul>
<li><code>Y</code>: The column <code>Y</code></li>
<li><code>Y_predicted</code>: It's the output of the linear regression (see code below for details)</li>
<li><code>error</code>: <code>Y-Y_predicted</code></li>
</ul>
<p><a href="https://i.sstatic.net/Fy75H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fy75H.png" alt="Enter image description here" /></a></p>
<p>The code of the linear regression:</p>
<pre><code>model = LinearRegression().fit(X, y)
y_pred = pd.Series(
model.predict(X),
index=X.index,
name='Fitted')
error = (y-y_pred).rename('Error')
</code></pre>
<p>I have always been taught that linear regression is <strong>linear</strong>, and I don't see a linear prediction here. I just can't understand why. Why is it not linear if it's a linear regression?</p>
<p>I have been playing around with this linear regression and as I add more features, the more complex it becomes (in other words, more "curved" is the linear regression), but still it's not linear in the plot. I have been trying also to get the linear equation from this model from <code>sklearn.linear_model.LinearRegression</code>, but it seems that it's only possible to get the intercepts. And with only the intercepts I can't see how the equation changes as I change the features. So I have two questions:</p>
<ol>
<li>Why is this linear regression not linear?</li>
<li>What is the explanation, at least visually, for the differences among the four equations in the explanation linked? Specifically, the differences among the equations below. Are all of them linear?</li>
</ol>
<ul>
<li>(1) Y = a + bx</li>
<li>(2) Y = a+bx+cx^2</li>
<li>(3) Y = a+(b^2)X</li>
<li>(4) Y =a +(b^2)X+cx^2</li>
</ul>
<ol start="3">
<li>When I used one feature (<code>const</code>) in <code>X</code>, it was a straight line with <code>slope = 0</code>. When I used <code>const</code> and <code>trend</code> it was a straight line with a <code>slope!=0</code>. Then I added one column of the Fourier series, and the straight line changed to be a curved one. Does this mean that, if I plot it in 3D, it would still be a straight line? (I don't imagine how.)</li>
</ol>
|
<p>Linear regression is "linear" in the sense of modeling the data with a <a href="https://en.wikipedia.org/wiki/Linear_function" rel="noreferrer">linear function</a>, i.e.</p>
<p><span class="math-container">$$
f(x) = a + b x
$$</span></p>
<p>If you put a sinusoid in place of <span class="math-container">$x$</span>, after multiplying it by a number <span class="math-container">$b$</span> and adding a number <span class="math-container">$a$</span>, it will still be a sinusoid. If you create a new feature <span class="math-container">$x' = x^2$</span>, <span class="math-container">$a + bx'$</span> is still a linear function of the feature <span class="math-container">$x'$</span>. The same applies to Fourier transformations or any other transformations of the features. Linear regression is not about a straight line, but a linear function.</p>
<p>You are saying that on the plot of <span class="math-container">$x$</span> (time) against <span class="math-container">$y$</span> you would expect to see the “straight line” <span class="math-container">$a + bx$</span>, but in fact you have the</p>
<p><span class="math-container">$$
f(x) = a + bx + c \sin x + d \cos x
$$</span></p>
<p>(simplified), which is still linear <em>in the parameters</em> (substitute <span class="math-container">$x' = \sin x$</span>, etc., and the function is again linear), but it involves more than two dimensions and clearly is not a “straight line” in the <span class="math-container">$y$</span> vs <span class="math-container">$x$</span> plane.</p>
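A short simulated illustration of "linear in the parameters" (numpy only; the true coefficients below are made up for the demo): fitting <span class="math-container">$a + bx + c\sin x + d\cos x$</span> is ordinary least squares on transformed features, even though the fitted curve is anything but straight.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 20, 300)
# A curve that is linear in (a, b, c, d) but not a straight line in x
y = 1.5 + 0.3 * x + 2.0 * np.sin(x) + 0.5 * np.cos(x) + rng.normal(0, 0.2, x.size)

# Design matrix of transformed features -- still plain linear regression
F = np.column_stack([np.ones_like(x), x, np.sin(x), np.cos(x)])
coef, *_ = np.linalg.lstsq(F, y, rcond=None)
print(coef.round(2))  # recovers values close to [1.5, 0.3, 2.0, 0.5]
```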
| 66
|
linear regression
|
robust linear regression with interaction
|
https://stats.stackexchange.com/questions/641936/robust-linear-regression-with-interaction
|
<p>I am looking at how epigenetic age is associated with toxic element exposure in four different road buffer zones (1000m, 2000m, 3000m, and 4000m) using robust linear regression. I can run a robust linear regression without the interaction term, but could you please help me run it with the interaction term (Hg*buffer zone)? In the attempt below I do not get the 1000m buffer zone interaction with Hg, and I also do not get <em>P</em>-values. I want to use age, sex, and smoking status as covariates.</p>
<pre><code> > summary(rr.huber <- rlm(PCGrimAge_IEAA ~ Hg*buffer.zone + Sex +Age+ Smoking_Status, data = ph))
Call: rlm(formula = PCGrimAge_IEAA ~ Hg * buffer.zone +
Sex + Age + Smoking_Status, data = ph)
Residuals:
Min 1Q Median 3Q Max
-7.68190 -1.59376 -0.01048 1.58263 16.07211
Coefficients:
Value Std. Error t value
(Intercept) -0.4037 0.5335 -0.7567
Hg -0.2404 0.3345 -0.7187
buffer.zone2000m 0.4443 0.6275 0.7081
buffer.zone3000m -0.3850 0.3601 -1.0692
buffer.zone4000m -0.6141 0.5109 -1.2020
Sex -1.9878 0.1239 -16.0398
Age 0.0011 0.0066 0.1720
Smoking_Status 2.7252 0.0922 29.5725
Hg:buffer.zone2000m -0.6463 0.8488 -0.7614
Hg:buffer.zone3000m 0.2163 0.3691 0.5860
Hg:buffer.zone4000m 0.5614 0.5351 1.0491
Residual standard error: 2.352 on 1801 degrees of freedom
(1 observation deleted due to missingness)
> library(emmeans)
> EMM <- emmeans(rr.huber, ~ Hg * buffer.zone)
> EMM
Hg buffer.zone emmean SE df asymp.LCL asymp.UCL
0.876 1000m 0.0311 0.1602 NA -0.283 0.3452
0.876 2000m -0.0911 0.2949 NA -0.669 0.4868
0.876 3000m -0.1644 0.0774 NA -0.316 -0.0127
0.876 4000m -0.0909 0.1572 NA -0.399 0.2173
Results are averaged over the levels of: Sex
Confidence level used: 0.95
</code></pre>
<p>Some elements have a huge variation between the 1st and 3rd quartiles, or even zero values; do you think this dataset needs some cleaning before running the robust linear regression, or can the regression model handle this?</p>
<pre><code>> summary(x)
As Cd Al Ba
Min. : 0.00001 Min. :0.00001 Min. :0.00001 Min. : 0.00
1st Qu.: 3.80000 1st Qu.:0.23381 1st Qu.:1.56500 1st Qu.: 83.12
Median : 6.20000 Median :0.28333 Median :1.97000 Median :105.00
Mean : 6.37249 Mean :0.31966 Mean :2.18731 Mean :109.46
3rd Qu.: 8.12222 3rd Qu.:0.36368 3rd Qu.:2.54100 3rd Qu.:130.33
Max. :35.10000 Max. :2.46500 Max. :5.97333 Max. :446.67
</code></pre>
|
<p>Let's run through the model parameters and <code>emmeans</code>. Your model itself will predict the mean epigenetic age given a set of predictor variables. I'm not so sure a normal approximation is necessarily best for such outcome, but let's side-step that for now.</p>
<p>For every categorical predictor you will have a reference level (R chooses the first level of a factor by default), and then additional parameters that model the <em>difference</em> from that reference level. Any model will work like this because if you were to include a separate parameter for every category the model would be overparametrized, and an infinite number of solutions would exist. The one exception here is the overall intercept that can be replaced by the reference level of any one categorical predictor; this will make all predictors in that variable model the mean in each category rather than a reference & differences. This causes a slight change in interpretation of those parameters but is otherwise equivalent, and only works for one variable.</p>
<p>Otherwise, the reference category for a categorical predictor is absorbed into the overall intercept or into the main effect for an interaction. This means that the <code>buffer.zone1000m</code> mean is modeled by <code>(Intercept)</code>, and the way the mean changes by <code>Hg</code> <em>for that category</em> through the <code>Hg</code> effect. The other <code>buffer.zone</code> parameters model differences in means versus the intercept, and the interactions model differences in means by <code>Hg</code> for those categories. The intercept does not only model <code>buffer.zone1000m</code> though, it also includes all other references (e.g. for <code>Sex</code>) and the zero value for any numerical predictors.</p>
<p>Specifically, this means that a subject having <code>buffer.zone1000m</code> and all other predictors at zero (reference sex, smoking status, age and Hg of zero) will have a predicted epigenetic age of <span class="math-container">$-0.4037$</span>. This usually isn't too meaningful in itself, I consider it very unlikely that there's anyone in your dataset with age zero -- unless you centered this predictor, but it looks like you didn't. For every unit increase of <code>Hg</code> such subject is predicted to decrease their epigenetic age by <span class="math-container">$-0.2404$</span>, so if they had for example <code>Hg = 10</code> their predicted epigenetic age would be <span class="math-container">$-0.4037 + 10\times -0.2404=-2.8077$</span>. Would the very same subject have been in <code>buffer.zone2000m</code> instead you would have to account for two additional parameters: the one modelling the difference between these two categories (the <code>buffer.zone2000m</code> effect) and its interaction with <code>Hg</code>. Their predicted epigenetic age would become, with parameters in order of the printout, <span class="math-container">$-0.4037 + 10\times -0.2404+0.4443+10\times -0.6463=-8.8264$</span></p>
<p>The <code>emmeans</code> output shows the result of exactly such calculations. You've asked for <code>buffer.zone</code> and <code>Hg</code> only, so it keeps other covariates constant at some default value. <code>Hg</code> is fixed at what is presumably its mean, <span class="math-container">$0.876$</span>. Calculation of the specific predicted means would require knowledge of the values that sex, age, and smoking status were fixed at (the bottom line suggests that <code>Sex</code> was held at <span class="math-container">$0.5$</span> or possibly the observed proportion in the data), but we can still recover the differences between the categories: with everything else fixed, going from <code>buffer.zone1000m</code> to <code>buffer.zone2000m</code> the difference is <span class="math-container">$-0.0911 - 0.0311=-0.1222$</span>, which is equal within rounding precision to the parameters of <code>buffer.zone2000m + 0.876 * Hg:buffer.zone2000m</code> or <span class="math-container">$0.4443+0.876\times -0.6463=-0.1219$</span>. In this way you can use the model parameters to predict the epigenetic age of any combination of predictor values. Calculations of standard errors of such estimates are a bit more involved, so that's where the pre-packaged code very much comes in handy.</p>
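The hand calculations above can be scripted; this is just arithmetic on the printed coefficients (the model itself was fit in R — Python is used here purely for illustration), so the numbers match the worked examples:

```python
# Coefficients copied from the rlm printout above
b = {
    "(Intercept)": -0.4037,
    "Hg": -0.2404,
    "buffer.zone2000m": 0.4443,
    "Hg:buffer.zone2000m": -0.6463,
}

def predict(hg, zone2000=False):
    """Predicted epigenetic age with all other covariates at their reference/zero values."""
    out = b["(Intercept)"] + hg * b["Hg"]
    if zone2000:
        out += b["buffer.zone2000m"] + hg * b["Hg:buffer.zone2000m"]
    return out

print(round(predict(10), 4))                 # -2.8077
print(round(predict(10, zone2000=True), 4))  # -8.8264

# Difference between the 2000m and 1000m zones at the mean Hg of 0.876
diff_2000_vs_1000 = b["buffer.zone2000m"] + 0.876 * b["Hg:buffer.zone2000m"]
print(round(diff_2000_vs_1000, 4))           # -0.1219
```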
<p><strong>Edit</strong>: I realize I missed your request for <em>P</em>-values. <a href="https://stat.ethz.ch/pipermail/r-help/2006-July/108659.html" rel="nofollow noreferrer">This R-help answer</a> might be informative in that regard (summarizing: with robust regression you cannot rely on the same distributional assumptions as for the standard linear model). You could use the reported means and standard errors to construct Wald-type statistics, with all the (possibly wrong!) assumptions that those entail. <code>emmeans</code> has already produced the confidence interval in that way.</p>
| 67
|
linear regression
|
Poisson regression VS log-linear regression VS linear regression with log transformation
|
https://stats.stackexchange.com/questions/554641/poisson-regression-vs-log-linear-regression-vs-linear-regression-with-log-transf
|
<p>Could someone explain the differences among the three? It looks to me like the functional form is the same, so they're doing the same thing, but the assumed distribution of Y differs between 1 and 3. And I think 1 and 2 are exactly the same thing.</p>
<ol>
<li>Log Transformations on Y in a Linear Model</li>
<li>Log-linear regression</li>
<li>Poisson regression</li>
</ol>
| 68
|
|
linear regression
|
Gauss-Markov assumptions for Multiple linear regression
|
https://stats.stackexchange.com/questions/493299/gauss-markov-assumptions-for-multiple-linear-regression
|
<p>Are the Gauss-Markov assumptions the same for simple linear regression and multiple linear regression? I can't seem to find the answer to this, and my literature seems to suggest that they have different formulations.</p>
<p>(Literature: An introduction to Econometrics - James H. Stock, Mark W. Watson.)</p>
<p>These are the Gauss-Markov assumptions used in the Simple linear regression chapter:</p>
<p><a href="https://i.sstatic.net/V4DoD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V4DoD.png" alt="Gauss-Markov assumptions for Linear Regression?" /></a></p>
<p>According to My book, these below here are the Gauss Markov assumptions for Multiple Linear Regression, and you can note that the second assumption is written in matrix form.</p>
<p><a href="https://i.sstatic.net/jj2IA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jj2IA.png" alt="Gauss-Markov Assumptions according " /></a></p>
|
<p><strong>YES</strong></p>
<p>Simple linear regression is a special case of multiple linear regression that only has one feature (<span class="math-container">$x$</span> variable). Consequently, any theorem that applies to multiple linear regression must apply to simple linear regression, so, yes, the Gauss-Markov assumptions are the same.</p>
<p>It then becomes an issue of how to translate the multiple linear regression notation into simple linear regression. The answer is that your model matrix just has a column of <span class="math-container">$1$</span>s for the intercept and then one column for your lone feature. Depending on how you want to express the assumptions, you might want to write the matrix as <span class="math-container">$n\times 2$</span>, or you might just want to say that you have feature <span class="math-container">$X_1$</span> and that’s all.</p>
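To make the translation concrete, here is a minimal numpy sketch (simulated data) showing that the multiple-regression formula <span class="math-container">$\hat\beta=(X'X)^{-1}X'y$</span>, applied to an <span class="math-container">$n\times 2$</span> model matrix of ones and the lone feature, reproduces the textbook simple-regression slope and intercept:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(size=50)

# Simple regression as a special case: model matrix is a column of 1s plus x
X = np.column_stack([np.ones_like(x), x])
beta_matrix = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y

# Textbook simple-regression formulas
slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
intercept = y.mean() - slope * x.mean()

print(np.allclose(beta_matrix, [intercept, slope]))  # True
```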
| 69
|
linear regression
|
Using PCA vs Linear Regression
|
https://stats.stackexchange.com/questions/410516/using-pca-vs-linear-regression
|
<p>I'm looking to analyze data from a study; previous similar studies have used either PCA or hierarchical linear regression. I've used both PCA and linear regression before. From my understanding, PCA breaks the data down into principal components and is useful for learning which factors may be strong indicators of the dependent variable, while linear regression can be used to assess correlation.</p>
<p>How should I be approaching this? If I'm simply wanting to find out what correlates the strongest with my studies dependent variable what would be the best option? Can I use both PCA and then hierarchical linear regression?</p>
|
<p>PCA does not involve a dependent variable: all the variables are treated the same. It is primarily a dimension reduction method.</p>
<p>Factor analysis also doesn't involve a dependent variable, but its goal is somewhat different: It is to uncover latent factors.</p>
<p>Some people use either the components or the factors (or a subset of them) as independent variables in a later regression. This can be useful if you have a lot of IVs: If you want to reduce the number while losing as little variance as possible, that's PCA. If you think these IVs represent some factors, that's FA.</p>
<p>If you think there are factors, then it may be best to use FA; but if you are just trying to reduce the number of variables, then there is no guarantee that the components will relate well to the DV. Another method is partial least squares. That does include the DV. </p>
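A sketch of the "components as later IVs" idea (principal components regression) with plain numpy — simulated data, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 5
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))  # correlated IVs
y = X[:, 0] - X[:, 1] + rng.normal(size=n)

# PCA via SVD of the centered predictors -- note y plays no role here
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T          # component scores, mutually uncorrelated

# Keep the first k components and regress y on them
k = 2
Z = np.column_stack([np.ones(n), scores[:, :k]])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(beta.round(2))
```

As the answer notes, there is no guarantee the retained components are the ones that predict y well — that gap is precisely the motivation for partial least squares.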
| 70
|
linear regression
|
Correlated regressors in linear regression
|
https://stats.stackexchange.com/questions/358606/correlated-regressors-in-linear-regression
|
<p>I have a sample of 412 young subjects, measured twice in an interval between 20 days and 3 years.
I am interested in how two external factor (lets say sunlight and ice-cream) relates to growth. Subjects were exposed to sunlight and ice-cream somewhat randomly, and I have calculated the cumulative exposure (CE) as the sum of sunny days over the time between the measures (CE_sun), and the sum of ice-cream over the test period (CE_ice-cream)
So, for example, subject A was tested on June 1 and June 30, and those 30 days were all sunny days with lots of ice-cream, so Subject A has a CE_sun of 30 (June 1 sun, June 2 sun, ...) and a CE_ice-cream of 30 (ice cream every day). Subject B was tested on November 1 and November 30, but since November was gloomy, she only had a CE_sun of 5 and a CE_ice-cream of 0.</p>
<p>So in my linear regression, I have a factor for subject age, a factor for time between measures, and factors for CE_sun and CE_ice-cream. I want to test whether CE_sun and CE_ice-cream affect growth.
The problem is that the time between tests is highly correlated with CE_sun (r=0.97) and with CE_ice-cream (r=0.898). CE_sun also correlates with CE_ice-cream (r=0.78). So in the linear regression, I am not sure that CE_sun and CE_ice-cream are estimated well.
To convince myself, an idea was to define the 100 subjects where "time between tests" and CE_sun and CE_ice-cream are <strong>least</strong> correlated, and run the linear regression in this subset. In this subset, the correlation between time and CE_sun is R = -0.10, time and CE_ice-cream is r=0.70, and between CE_sun and CE_ice-cream r= -0.77.</p>
<p>Is this approach incorrect?</p>
|
<p>In short, yes. You're basically cherry-picking data to find an effect.</p>
<p>If you have significantly correlated predictors, you should consider removing some of them.</p>
| 71
|
linear regression
|
Is log-linear regression a generalized linear model?
|
https://stats.stackexchange.com/questions/330412/is-log-linear-regression-a-generalized-linear-model
|
<p>Does log-linear regression fall into the class of generalized linear models? Here I'm defining "log-linear regression" as the model $\log(y) = x'\beta + \eta$ where $\eta \sim N(0, \sigma^2)$.</p>
<p>Thanks.</p>
|
<p>Normally, loglinear models <em>for contingency tables</em> are considered as generalized linear models (Fox 2016). They are sometimes called Poisson regression for contingency tables (Bilder & Loughlin 2015). In the case of Poisson regression, we have a response random variable $Y$, and $p \geq 1$ explanatory variables, $x_1,\dots,x_p$, and for observations $i=1,\ldots,n$, we assume that </p>
<p>$$Y_i \sim Po(\mu_i)$$</p>
<p>where </p>
<p>$$\mu_i = \exp(\beta_0+\beta_1x_{i1}+\ldots+\beta_px_{ip}).$$</p>
<p>So, the generalized linear model has a Poisson random component, linear predictor (systematic component), and link function (<em>log-link</em>):</p>
<p>$$\log(\mu)=\beta_0+\beta_1x_1+\ldots+\beta_px_p.$$</p>
<p>Looks similar to your equation. However, first, you seem to transform the outcome, $y$, directly. So, it looks like you follow what Agresti (2015:6) calls transformed-data approach (i.e., $E[g(y_i)] = \beta_0+\beta_ix_{i1}$ instead of $g[E(y_i)]= \beta_0+\beta_ix_{i1}$, $g$ is the link function). And second, (I think) you specify error distribution, $\eta \sim N(0, \sigma^2)$. As you can see in this <a href="https://stats.stackexchange.com/a/212433/109647">answer</a>, in GLMs, "You don't specify the "error" distribution, you specify the conditional distribution of the response." The exception is latent variable approach.</p>
<p>To answer your question: yes, log-linear regression falls into the class of generalized linear models, but your model looks like a linear regression model with a log-transformed outcome. </p>
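A quick numeric illustration of the distinction drawn above (simulated skewed data; numpy only): modeling <span class="math-container">$g[E(y)]$</span> (the GLM log link) is not the same as modeling <span class="math-container">$E[g(y)]$</span> (the transformed-data approach), and Jensen's inequality quantifies the gap. The lognormal outcome is an assumption made purely for this demo.

```python
import numpy as np

rng = np.random.default_rng(4)
# A skewed positive outcome; lognormal chosen purely for illustration
y = rng.lognormal(mean=3.0, sigma=0.7, size=100_000)

log_of_mean = np.log(y.mean())     # what a log-link GLM targets: g(E[Y])
mean_of_log = np.log(y).mean()     # what log-transformed regression targets: E[g(Y)]

# Jensen's inequality: for a lognormal the gap is about sigma^2 / 2
print(round(log_of_mean - mean_of_log, 3))  # ~0.245
```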
<hr>
<p>Agresti, Alan. 2015. <em>Foundations of Linear and Generalized Linear Models</em>. Hoboken, NJ: Wiley.</p>
<p>Bilder, Christopher R. and Thomas M. Loughlin. 2015. <em>Analysis of Categorical Data with R</em>. Boca Raton, London and New York: CRC Press.</p>
<p>Fox, John. 2016. <em>Applied Regression Analysis and Generalized Linear Models</em>. 3rd ed. Los Angeles: Sage Publications.</p>
| 72
|
linear regression
|
Linear regression parameters question
|
https://stats.stackexchange.com/questions/60094/linear-regression-parameters-question
|
<p>Are the slope and intercept of a simple linear regression model always normally distributed? </p>
<p>Is there ever a difference between the distribution of the estimated slope and intercept and the actual ones? </p>
<p>I have only just begun learning about the subject but I am still not clear on the details. </p>
<p>A final question: is the least squares method the same as linear regression in that it gives information like the $R^2$? Thanks!</p>
|
<blockquote>
<p>Are the slope and intercept of a simple linear regression model always normally distributed?</p>
</blockquote>
<p>No. If the data ($y$'s) are (conditionally) normal and the other assumptions hold, they will be, and you can get asymptotic normality under some conditions, but generally, no.</p>
<blockquote>
<p>Is there ever a difference between the distribution of the estimated slope and intercept and the actual ones? </p>
</blockquote>
<p>Are you asking about bias? Yes, you can get bias in a variety of ways, such as errors in the $x$'s.</p>
<blockquote>
<p>is the least squares method the same as linear regression in that it gives information like the $R^2$? </p>
</blockquote>
<p>Least squares is the usual method (overwhelmingly so) for fitting linear regression, but you can have linear fits that don't use the usual ordinary least squares; it's still generally called 'regression'.</p>
<blockquote>
<p>So the error term is what generates in some sense the distribution, is that correct? My thought was that when e is normally distributed then b0 and b1 must also be</p>
</blockquote>
<p>The least squares estimates of the parameters are linear combinations of the observations. The distribution of the error (combined with the other assumptions) impacts the distribution through that.</p>
<p>So in multiple regression $\hat \beta = (X^\top X)^{-1} X^\top y = Ay$, say, for $A$ a $p\times n$ matrix.</p>
<p>Since linear combinations of multivariate normals are themselves normal, the parameter estimates are normal when the errors are.</p>
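A small simulation (illustrative parameter values) that matches the algebra above: since <span class="math-container">$\hat\beta = Ay$</span> is a linear map of the observations, normal errors give exactly normal estimates, with the slope's standard deviation equal to <span class="math-container">$\sigma\sqrt{[(X'X)^{-1}]_{22}}$</span>.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 30, 5000
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])
A = np.linalg.solve(X.T @ X, X.T)        # beta_hat = A y

true_beta = np.array([1.0, 2.0])
slopes = np.empty(reps)
for i in range(reps):
    y = X @ true_beta + rng.normal(0, 1, n)  # normal errors, sigma = 1
    slopes[i] = (A @ y)[1]

sd_theory = np.sqrt(np.linalg.inv(X.T @ X)[1, 1])  # sigma * sqrt([(X'X)^{-1}]_{22})
print(round(slopes.mean(), 2), round(slopes.std(), 2), round(sd_theory, 2))
```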
| 73
|
linear regression
|
Linear regression Vs Logistic regression
|
https://stats.stackexchange.com/questions/201299/linear-regression-vs-logistic-regression
|
<p>I have a time series dataset where:</p>
<p>X (independent variable) is time, denoted 1, 2, 3, 4, 5, 6, ..., 1000, etc.
Y (dependent variable) is on a percentage scale: 99%, 98.7%, 96%, 91%, etc. This is a continuous data set.</p>
<p>I have 1000 such data points. The first 700 data points used as training set and rest 300 is used for testing.</p>
<p>I tried to use simple linear regression but when predicting sometimes the prediction is more than 100%. And the case is even worse when I calculated the confidence interval and prediction interval.</p>
<p>So I tried to use logistic regression, as there is a boundary (from 0% to 100%). But logistic regression can take only binary data. I am confused about how to appropriately convert my existing time series data so that I can try logistic regression on it.</p>
|
<p>You're correct that logistic regression is only for binary response data, which is not applicable here. What you may be wanting to do is simply apply the <a href="https://en.wikipedia.org/wiki/Logit" rel="nofollow">logit</a> transform to the response data (i.e. the $Y$ values) and then use linear regression on the transformed data. Then apply the inverse logit transform to predictions to put them back on the original scale.</p>
<p>However, if you are trying to forecast a time series, simple linear regression may not be the best approach. It may be better to fit a time series model to the data (after applying the logit transform to the response) and use that as a basis for prediction.</p>
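A minimal sketch of the suggested transform-and-back approach (simulated percentage data — the drift parameters below are made up): fit a line on the logit scale, then map predictions back through the inverse logit so they can never leave (0, 1).

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(6)
t = np.arange(1, 101, dtype=float)
# Percentage-type response drifting down from ~99%, strictly inside (0, 1)
y = inv_logit(4.5 - 0.03 * t + rng.normal(0, 0.1, t.size))

# Straight-line fit on the logit scale ...
slope, intercept = np.polyfit(t, logit(y), 1)

# ... then back-transform forecasts: guaranteed to stay in (0, 1)
t_new = np.arange(101, 401, dtype=float)
forecast = inv_logit(intercept + slope * t_new)
print(forecast.min() > 0 and forecast.max() < 1)  # True
```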
| 74
|
linear regression
|
Linear regression multicollinearity
|
https://stats.stackexchange.com/questions/146694/linear-regression-multicollinearity
|
<p>I ran a linear regression with posttest scores as the DV and pretest scores and group as IVs.
The collinearity statistics show a tolerance of .998 for both Pretest and Group (VIF 1.002).
Is this one of the situations where a collinearity violation can be ignored?</p>
|
<p>I think you are misinterpreting what tolerance means. Much like in real life, in statistics you <em>want</em> high tolerance. Tolerance is $1-R^2$ where $R^2$ is the squared correlation of the two variables being compared (in your case pretest scores and group assignment). Thus, .998 means there is <em>almost no multicollinearity</em>. It means that your variables have a correlation ($r$) of about .04, which is rather low. Also for future reference, if you didn't know, $VIF=\frac{1}{tolerance}=\frac{1}{1-R^2}$.</p>
<p>Lastly, you ask about times when you can ignore multicollinearity. You can do this always if you aren't interpreting your coefficients and only care about having a high $R^2$ of your model (this is rare unless you are merely making a prediction model). Other than that it depends on the situation, and what you mean by high. Any multicollinearity changes how you interpret your coefficients. So make sure you understand that coefficients are marginal effects and interpret them thusly. Very high multicollinearity essentially means two variables are effectively measuring the same thing. You can ignore it in cases like when the collinearity is between an $X$ variable and $X^2$ in a quadratic model. <a href="http://statisticalhorizons.com/multicollinearity" rel="nofollow">Here</a> are a few others.</p>
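Plugging the question's numbers into those formulas (plain arithmetic, shown in Python for convenience):

```python
# Tolerance of .998 reported in the question
tolerance = 0.998
r2 = 1 - tolerance        # squared correlation between the two predictors
r = r2 ** 0.5
vif = 1 / tolerance       # VIF = 1 / tolerance = 1 / (1 - R^2)
print(round(r, 3), round(vif, 3))  # 0.045 1.002
```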
| 75
|
linear regression
|
Linear Regression with Bootstrapping
|
https://stats.stackexchange.com/questions/341880/linear-regression-with-bootstrapping
|
<p>I wish to run a linear regression model, with a dependent variable Y and several explanatory variables.</p>
<p>The distribution of Y looks like this:</p>
<p><a href="https://i.sstatic.net/g6JZZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g6JZZ.png" alt="enter image description here"></a> </p>
<p>Clearly not normally distributed. The sample size is about 40 observations.</p>
<p>In this problem I wish to use SPSS, and I didn't find an option for robust regression. This leads me to bootstrapping.</p>
<p>Can I use linear regression with bootstrapping when I have skewed data like this, with an outlier? Do you have other suggestions for modeling this kind of data?</p>
<p>Any tips will be most appreciated ! Thank you !</p>
|
<ol>
<li><p>Linear regression does <strong>not</strong> require a normally distributed dependent variable; only the error term should be normally distributed (and that matters mainly in small samples).</p></li>
<li><p>A bootstrap with outliers means that your estimates of the sampling distributions are going to be bimodal. That is fine if you think the outliers are real, but you again rely a lot on those special observations and they better be right.</p></li>
</ol>
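For reference, here is what a case (pairs) bootstrap of a regression slope looks like in code — a sketch with simulated skewed data and one injected outlier, loosely mimicking the question's setting (the question itself used SPSS; numpy is used here for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 40
x = rng.normal(size=n)
y = 2 + 0.5 * x + rng.exponential(1, n)  # skewed errors
y[0] += 15                                # one outlier

def fit_slope(x_, y_):
    return np.polyfit(x_, y_, 1)[0]

slope_full = fit_slope(x, y)

# Case bootstrap: resample (x, y) pairs with replacement, refit, collect slopes
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = fit_slope(x[idx], y[idx])

ci = np.percentile(boot, [2.5, 97.5])     # percentile interval
print(round(slope_full, 2), ci.round(2))
```

Plotting a histogram of `boot` is worth doing here: with an influential outlier the bootstrap distribution can look bimodal, exactly as point 2 above warns.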
| 76
|
linear regression
|
Likelihood distribution for linear regression
|
https://stats.stackexchange.com/questions/328305/likelihood-distribution-for-linear-regression
|
<p>I am reading <a href="https://www.crcpress.com/Statistical-Rethinking-A-Bayesian-Course-with-Examples-in-R-and-Stan/McElreath/p/book/9781482253443" rel="nofollow noreferrer">Statistical Rethinking (Section 4.2)</a>.</p>
<p>When defining the components of a model description the author says:</p>
<blockquote>
<p>... we define a likelihood distribution that defines the plausibility of individual observations. <strong>In linear regression, this distribution is always Gaussian.</strong></p>
</blockquote>
<p>Why is the likelihood distribution always Gaussian for linear regression?</p>
<h3>Edit:</h3>
<p>After re-reading the chapter introduction, the author states:</p>
<blockquote>
<p>This chapter introduces linear regression as a Bayesian procedure. Under a probability interpretation, which is necessary for Bayesian work, linear regression uses a Gaussian (normal) distribution to describe our golem’s uncertainty about some measurement of interest. This type of model is simple, flexible, and commonplace.</p>
</blockquote>
<p>which specifies the context for the use of the Gaussian distribution in this case.</p>
|
<p><a href="https://i.sstatic.net/ybpuA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ybpuA.png" alt="enter image description here"></a></p>
<p>This image gives a good probabilistic interpretation of linear regression. We want the mean of the residual (a fancy word for error) at every point on our line to be zero. We also want a good majority of the points to be close to the mean (our predicted value), which means we don't want the variance to be too high. When we maximize the likelihood for each point with respect to the normal distribution, we can see that the maximum likelihood would occur if all points were on the linear predictor (the mean). If points fall further away from the mean, <strong>in either direction</strong>, we penalize the likelihood in proportion to how far away they are. </p>
<p>With this in mind, we can see why a normal distribution would be pretty convenient. Firstly, we often don't want to penalize negative errors more than positive errors, so a symmetric distribution is called for. Secondly, Normal distributions occur very frequently in datasets, so it is a really natural choice. </p>
<p>Lastly, the wording of the textbook isn't exactly accurate, because you could technically use a different distribution if you wanted to. Most notably, the t-distribution is sometimes used instead of the normal in special circumstances. </p>
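To see the Gaussian-likelihood framing and least squares agree numerically, here is a small sketch (simulated data, made-up true values): scanning candidate slopes, the Gaussian log-likelihood peaks at the OLS estimate.

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.normal(size=100)
y = 3 + 2 * x + rng.normal(0, 1, 100)

b1, b0 = np.polyfit(x, y, 1)   # OLS slope and intercept

def gaussian_loglik(slope, sigma=1.0):
    resid = y - (b0 + slope * x)
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - resid**2 / (2 * sigma**2))

# Scan slopes around the OLS value: the likelihood is maximized at OLS
grid = np.linspace(b1 - 1, b1 + 1, 2001)
ll = np.array([gaussian_loglik(g) for g in grid])
best = grid[ll.argmax()]
print(round(best - b1, 6))  # essentially zero: maximum likelihood = least squares
```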
| 77
|
linear regression
|
Using several linear regression
|
https://stats.stackexchange.com/questions/117838/using-several-linear-regression
|
<p>I am currently trying to build an algorithm to predict a continuous output (Y) from a list of predictors (X). My first idea was to use a simple linear regression to see how it performs. The distribution of the residual errors is not normal.</p>
<p>I have a lot of data, and I was wondering if I can take advantage of this by splitting my training dataset into different datasets where the relationship between X and Y behaves differently. I would then train different linear regressions that perform better on these subsets. My question is: does this bring a significant improvement, and how do I split my dataset optimally? </p>
<p>NB: the reason why I want to stick to simple linear regression is that I want to be able to make predictions very quickly.</p>
<p>Thanks in advance</p>
|
<p>If the distribution of the residuals are not normal, then you might want to consider other methods since the predictions and confidence intervals are likely to be misleading. Ease-of-computation doesn't seem like a good enough reason.</p>
<p>In terms of clock cycles, it's more expensive to create models than it is to make predictions from them. I would imagine that you'd create the model(s) relatively infrequently (but use them a lot), and so the model creation speed might be decoupled from your process.</p>
<p>To illustrate this, I created five toy models based on Hadley's fueleconomy dataset:</p>
<p><img src="https://i.sstatic.net/kAVh7.png" alt="model creation vs model prediction timings"></p>
<p>A couple of things stood out to me:</p>
<ol>
<li>the models took <em>much</em> longer to create than they did to make predictions.</li>
<li>if speed is important, it's worth looking at the <code>gputools</code> R package. On my workstation, the GPU optimized linear regression (<code>gpuLm</code>) was about <a href="https://gist.github.com/alexwoolford/18a6f7fe3ad47055bb63" rel="nofollow noreferrer">100x faster</a> to create, and 10x faster to predict, than the standard R <code>lm</code>.</li>
</ol>
| 78
|
linear regression
|
Logistic vs linear regression difference
|
https://stats.stackexchange.com/questions/311591/logistic-vs-linear-regression-difference
|
<p>Can someone please differentiate logistic regression vs linear regression? I know that logistic regression is discrete (1, 0) and linear regression is continuous. Could you provide two examples that set the two apart? I'm just really confused about when to use which.</p>
|
<p>You answered the question by yourself: linear regression is used for predicting continuous variables and logistic regression is used to predict binary variables. Here are some examples:</p>
<ul>
<li>Predict the price for a house in US dollar (a positive number): linear regression</li>
<li>Predict if a certain person can afford a certain home (can or cannot afford): logistic regression</li>
<li>Predict a person's height in cm: linear regression</li>
<li>Predict if a person has growth disorder (has disease or not): logistic regression</li>
<li>Predict how much a person will spend per month: linear regression</li>
<li>Predict if there are fraud transactions in a month (has fraud or no fraud): logistic regression</li>
</ul>
<p>You can come up with more examples yourself.</p>
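As a toy illustration of the split (fabricated data and thresholds; `scikit-learn` used for convenience), the same feature can feed either model depending on whether the target is continuous or binary:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
income = rng.uniform(20, 120, 300).reshape(-1, 1)  # hypothetical feature

# Continuous target (e.g. monthly spending): linear regression.
spending = 0.3 * income.ravel() + rng.normal(0.0, 3.0, 300)
lin = LinearRegression().fit(income, spending)
print(lin.predict([[80.0]]))        # a real-valued prediction

# Binary target (e.g. can afford a home, 1/0): logistic regression.
affords = (income.ravel() + rng.normal(0.0, 10.0, 300) > 70).astype(int)
log = LogisticRegression(max_iter=1000).fit(income, affords)
print(log.predict_proba([[80.0]]))  # class probabilities summing to 1
```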
| 79
|
linear regression
|
Linear regression vs. Pearson's
|
https://stats.stackexchange.com/questions/397083/linear-regression-vs-pearsons
|
<p>I understand that linear regression is finding the "best fitting line" and Pearson's r is measuring correlation between two variables, but I can't visualize this difference.</p>
<p>I had a project where I was finding if certain brain cancers were correlated to age, or sex for example, and I was advised to use linear regression for this, but from the definition above, in my head it sounds like Pearson's r was what I was looking for?</p>
<p>Can someone clarify this difference?</p>
|
<p>Check out <a href="https://stats.stackexchange.com/questions/2125/whats-the-difference-between-correlation-and-simple-linear-regression">this previous post</a> to understand the differences/similarities between the two and how they are related.</p>
<p>I would assume the person advising you was implying that you should look at multiple predictors in the same model (e.g., regression) rather than look at each one separately (e.g., bivariate correlations).</p>
| 80
|
linear regression
|
Alternative to linear regression
|
https://stats.stackexchange.com/questions/223004/alternative-to-linear-regression
|
<p>I need to run hundreds of linear regression models, with the same set of independent variables, but with varying dependent variables. I have checked normality for a few dozens. Some are normally distributed and some are not. </p>
<p>My intention, for practical reasons, is to write a macro that will run this automatically and store the P-Values of the last model (I will use stepwise or similar methods), and the association between the predicting variables and the predicted variables. My question is, since I can't use linear regression for all models, can I simply use robust regression for all models, without checking for normality? Maybe loess regression? </p>
|
<p>There are a lot of misunderstandings here, mostly pointed out in comments. So I will make a summary here.</p>
<ol>
<li>You should not use stepwise methods in any form; they lead to invalid inferences. There are many questions on this site about that; here is a good one: <a href="https://stats.stackexchange.com/questions/20836/algorithms-for-automatic-model-selection">Algorithms for automatic model selection</a>, which has good answers explaining why it is a bad idea. </li>
<li>If you have many variables and need some model reduction, consider lasso or ridge regression instead. Look at <a href="https://stats.stackexchange.com/questions/93181/ridge-lasso-and-elastic-net">Ridge, lasso and elastic net</a></li>
<li>Linear regression does not assume that the response variable has a normal (or any other) distribution. It is the error term that should be normal (if you want to use the usual normal-based inference), and that can be checked by plotting the distribution of the <em>residuals</em>, not the response. See <a href="https://stats.stackexchange.com/questions/337879/why-do-we-use-residuals-to-test-the-assumptions-on-errors-in-regression">Why do we use residuals to test the assumptions on errors in regression?</a> and <a href="https://stats.stackexchange.com/questions/204088/does-the-assumption-of-normal-errors-imply-that-y-is-also-normal/204098#204098">Does the assumption of Normal errors imply that Y is also Normal?</a> </li>
</ol>
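Point 3 can be seen in a short simulation (illustrative values only): the response is skewed because the predictor is, yet the residuals are approximately normal, so a normality check belongs on the residuals.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.exponential(1.0, 500)             # skewed predictor
y = 2.0 * x + rng.normal(0.0, 1.0, 500)   # skewed response, normal errors

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# The response inherits the predictor's skew; the residuals are roughly
# symmetric, which is what the model actually assumes.
print(stats.skew(y), stats.skew(residuals))
```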
| 81
|
linear regression
|
Loss function of linear regression
|
https://stats.stackexchange.com/questions/380099/loss-function-of-linear-regression
|
<p>How do we decide whether mean absolute error or mean square error is better for linear regression? Are there other loss functions that are commonly used for linear regression?</p>
|
<p>Put simply: it matters what error metric matters most to you. I have not personally seen a useful application of absolute error loss. In 99% of cases, people use squared error loss. Regression, by definition, is about modeling trend lines that approximate a mean response over a range of predictors. </p>
<p>If the CLT applies, that mean response tends to a normal distribution. Minimizing squared error loss gives the MLE of the mean for normally distributed data. This means that asymptotically, you will get correct 95% CIs and p-values for statistical tests about the model parameters.</p>
<p>Even the small sample performance of squared error loss is surprising and favorable, based on my experience. Robust error estimation by bootstrap is usually not profoundly different, or when it is, it seems to be a result of small sample sizes in which both methods perform poorly ("No Free Lunch" theorem for statistics). </p>
<p>In a somewhat pathological alternate example, if the errors are double exponential, minimizing absolute error gives the MLE of the mean. There are some strange sorts of counterexamples to the CLT where the asymptotic distribution of the test statistic tends toward exponential (Huzurbazar), and thus with a mixture could be double exponential.</p>
<p>A concluding remark is one of efficiency. In the case of mean estimation, the central limit theorem gives <span class="math-container">$\sqrt{n} \left( \hat{\theta}_{L_2} - \theta \right) \rightarrow_d$</span> a non-degenerate distribution, so the sample mean is root-n consistent. As @whuber points out in the comments below, the absolute error loss yields the median as the optimal measure of central tendency in a univariate estimation case. Like the mean, the median <em>also</em> tends to a normal distribution as <span class="math-container">$n \rightarrow \infty$</span> at the root-n rate, but with a larger asymptotic variance when the errors are truly normal (by a factor of <span class="math-container">$\pi/2$</span>). The mean therefore concentrates around what it is estimating faster, and thus provides more powerful tests.</p>
<p>Hence, if a solution is to be offered on strictly statistical terms, I would prefer squared error loss not just for its predominant usage, but for its theoretically sound probability model for the residuals given a correct mean model specification.</p>
| 82
|
linear regression
|
Linear regression analysis assumptions not met
|
https://stats.stackexchange.com/questions/463082/linear-regression-analysis-assumptions-not-met
|
<p>I want to demonstrate a possible association between a dichotomous independent variable and a continuous dependent variable. Therefore, I wanted to use a linear regression analysis. However, the dependent variable is not normally distributed, while normality is an assumption of linear regression analysis. The other assumptions are met. How can I solve this problem or which other test can I use for this?</p>
|
<p>You may transform the variable in several ways, in order to reduce skewness. For instance, you may take the log, square- or cube root of the variable. Which transformation will yield a most normal-like distribution depends on the nature of your data.</p>
<p>This is a useful article that explains some different approaches:
<a href="https://medium.com/@TheDataGyan/day-8-data-transformation-skewness-normalization-and-much-more-4c144d370e55" rel="nofollow noreferrer">https://medium.com/@TheDataGyan/day-8-data-transformation-skewness-normalization-and-much-more-4c144d370e55</a></p>
| 83
|
linear regression
|
Neural Network vs Linear regression
|
https://stats.stackexchange.com/questions/582202/neural-network-vs-linear-regression
|
<p>We know that the neural network will perform like a linear regression if there is only one hidden unit. So, the NN method should perform at least as well as a linear regression method. I have built a tidymodel model using the following line of code:</p>
<pre><code>Data_nnet_mod <- mlp(hidden_units = tune(), penalty = tune(), epochs = tune()) %>%
set_engine("nnet") %>%
set_mode("regression")
</code></pre>
<p>and have tuned it using</p>
<pre><code>Data_nnet_fit <-
Data_nnet_wflow %>%
tune_grid(val_set,
grid = 25,
control = control_grid(save_pred = TRUE),
metrics = metric_set(rmse))
</code></pre>
<p>It turns out that the linear regression output has a smaller RMSE than the NN method.</p>
<p>I wonder why the best RMSE that the NN method produces is larger than that of the regression method. Theoretically speaking, should the NN method not perform at least as well as the regression method?</p>
|
<p>A neural network with one hidden unit and linear activation <em>is</em> linear regression. There may be differences though, for example, neural networks are usually trained with variants of gradient descent, while linear regression with <a href="https://en.wikipedia.org/wiki/Ordinary_least_squares" rel="nofollow noreferrer">ordinary least squares</a>, so you have no guarantees that they end up with the same results. There also may be implementation details that differ. If you use regularization, other activation, loss, etc those would be different models so again you have no guarantees of finding the same solution, or an equally good one. Unless both models are exactly the same, you don't really have guarantees of same performance. Because all of the above reasons, linear regression may outperform neural networks for regression problems, or <a href="https://arxiv.org/abs/1904.01983" rel="nofollow noreferrer">logistic regression can for classification</a>.</p>
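As a hedged sketch of the point about optimizers, here is plain gradient descent on mean squared error (effectively what a single linear unit does) converging to the ordinary least squares solution on simulated data, given enough iterations and a suitable learning rate:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.c_[np.ones(200), rng.normal(size=(200, 2))]  # intercept + 2 features
beta_true = np.array([1.0, 2.0, -3.0])
y = X @ beta_true + rng.normal(0.0, 0.5, 200)

# Closed-form OLS solution.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Plain gradient descent on mean squared error.
beta_gd = np.zeros(3)
lr = 0.1
for _ in range(2000):
    grad = 2.0 * X.T @ (X @ beta_gd - y) / len(y)
    beta_gd -= lr * grad

# After convergence the two parameter vectors should agree closely.
print(np.max(np.abs(beta_ols - beta_gd)))
```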
| 84
|
linear regression
|
Correlation vs simple linear regression
|
https://stats.stackexchange.com/questions/473112/correlation-vs-simple-linear-regression
|
<p>Both correlation and linear regression explain the linearity in data but to get a high correlation coefficient the data must be linear with a slope close to 1. In some cases you can have linear data that can be fit on a regression line with a slope less than one, in which case the correlation coefficient will be low.
My question is, should not we consider linear regression rather than correlation?</p>
|
<p>Your conjecture that the correlation is only one for slope one is wrong, as you can easily test with data on a line with slope 0.5:</p>
<pre><code>cor(c(2,4,6), c(1,2,3))
</code></pre>
<p>This returns 1 because the correlation is 1 whenever all data points lie exactly on a line with slope greater than zero.</p>
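The same check in Python, mirroring the R snippet above:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = 0.5 * x  # exact line with slope 0.5
print(np.corrcoef(x, y)[0, 1])  # ~1: correlation ignores the magnitude of a positive slope
```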
| 85
|
linear regression
|
Efficient online linear regression
|
https://stats.stackexchange.com/questions/6920/efficient-online-linear-regression
|
<p>I'm analysing some data where I would like to perform ordinary linear regression, however this is not possible as I am dealing with an on-line setting with a continuous stream of input data (which will quickly get too large for memory) and need to update parameter estimates while this is being consumed. i.e. I cannot just load it all into memory and perform linear regression on the entire data set.</p>
<p>I'm assuming a simple linear multivariate regression model, i.e.</p>
<p>$$\mathbf y = \mathbf A\mathbf x + \mathbf b + \mathbf e$$</p>
<p>What's the best algorithm for creating a continuously updating estimate of the linear regression parameters $\mathbf A$ and $\mathbf b$? </p>
<p>Ideally:</p>
<ul>
<li>I'd like an algorithm that is at most $\mathcal O(N\cdot M)$ in space and time complexity per update, where $N$ is the dimensionality of the independent variable ($\mathbf x$) and $M$ is the dimensionality of the dependent variable ($\mathbf y$).</li>
<li>I'd like to be able to specify some parameter to determine how much the parameters are updated by each new sample, e.g. 0.000001 would mean that the next sample would provide one millionth of the parameter estimate. This would give some kind of exponential decay for the effect of samples in the distant past.</li>
</ul>
|
<p>Maindonald describes a sequential method based on <a href="http://en.wikipedia.org/wiki/Givens_rotation" rel="noreferrer">Givens rotations</a>. (A Givens rotation is an orthogonal transformation of two vectors that zeros out a given entry in one of the vectors.) At the previous step you have decomposed the <a href="http://en.wikipedia.org/wiki/Design_matrix" rel="noreferrer">design matrix</a> <span class="math-container">$\mathbf{X}$</span> into a triangular matrix <span class="math-container">$\mathbf{T}$</span> via an orthogonal transformation <span class="math-container">$\mathbf{Q}$</span> so that <span class="math-container">$\mathbf{Q}\mathbf{X} = (\mathbf{T}, \mathbf{0})'$</span>. (It's fast and easy to get the regression results from a triangular matrix.) Upon adjoining a new row <span class="math-container">$v$</span> below <span class="math-container">$\mathbf{X}$</span>, you effectively extend <span class="math-container">$(\mathbf{T}, \mathbf{0})'$</span> by a nonzero row, too, say <span class="math-container">$t$</span>. The task is to zero out this row while keeping the entries in the position of <span class="math-container">$\mathbf{T}$</span> triangular. A sequence of Givens rotations does this: the rotation with the first row of <span class="math-container">$\mathbf{T}$</span> zeros the first element of <span class="math-container">$t$</span>; then the rotation with the second row of <span class="math-container">$\mathbf{T}$</span> zeros the second element, and so on. The effect is to premultiply <span class="math-container">$\mathbf{Q}$</span> by a series of rotations, which does not change its orthogonality.</p>
<p>When the design matrix has <span class="math-container">$p+1$</span> columns (which is the case when regressing on <span class="math-container">$p$</span> variables plus a constant), the number of rotations needed does not exceed <span class="math-container">$p+1$</span> and each rotation changes two <span class="math-container">$p+1$</span>-vectors. The storage needed for <span class="math-container">$\mathbf{T}$</span> is <span class="math-container">$O((p+1)^2)$</span>. Thus this algorithm has a computational cost of <span class="math-container">$O((p+1)^2)$</span> in both time and space.</p>
<p>A similar approach lets you determine the effect on regression of deleting a row. Maindonald gives formulas; so do <a href="http://books.google.com/books?id=GECBEUJVNe0C&printsec=frontcover&dq=Belsley,+Kuh,+%26+Welsch&source=bl&ots=b6k5lVb7E3&sig=vOuq7Ehg3OrO01CepQ8p-DZ1Bww&hl=en&ei=m7hNTeyfNsSAlAeY3eXcDw&sa=X&oi=book_result&ct=result&resnum=5&ved=0CDMQ6AEwBA#v=onepage&q&f=false" rel="noreferrer">Belsley, Kuh, & Welsh</a>. Thus, if you are looking for a moving window for regression, you can retain data for the window within a circular buffer, adjoining the new datum and dropping the old one with each update. This doubles the update time and requires additional <span class="math-container">$O(k (p+1))$</span> storage for a window of width <span class="math-container">$k$</span>. It appears that <span class="math-container">$1/k$</span> would be the analog of the influence parameter.</p>
<p>For exponential decay, I think (speculatively) that you could adapt this approach to weighted least squares, giving each new value a weight greater than 1. There shouldn't be any need to maintain a buffer of previous values or delete any old data.</p>
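To illustrate the exponential-decay idea, here is a sketch of recursive least squares with a forgetting factor — a related online method, not the Givens-rotation scheme above — which also costs O((p+1)^2) time and space per update and stores no raw data:

```python
import numpy as np

def make_rls(n_features, forgetting=0.999, delta=1000.0):
    # Recursive least squares with forgetting factor `forgetting`:
    # older samples decay geometrically, analogous to the influence
    # parameter discussed above. Illustrative assumptions throughout.
    P = np.eye(n_features) * delta   # running inverse "covariance"
    w = np.zeros(n_features)

    def update(x, y):
        nonlocal P, w
        Px = P @ x
        k = Px / (forgetting + x @ Px)       # gain vector
        w = w + k * (y - x @ w)              # correct by the prediction error
        P = (P - np.outer(k, Px)) / forgetting
        return w

    return update

# Usage: stream (x, y) pairs one at a time.
rng = np.random.default_rng(5)
true_w = np.array([0.5, -1.0, 2.0])
step = make_rls(3)
for _ in range(5000):
    x = rng.normal(size=3)
    w = step(x, x @ true_w + rng.normal(0.0, 0.1))
print(w)  # should track true_w
```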
<h3>References</h3>
<p>J. H. Maindonald, <em>Statistical Computation.</em> J. Wiley & Sons, 1984. Chapter 4.</p>
<p>D. A. Belsley, E. Kuh, R. E. Welsch, <em>Regression Diagnostics: Identifying Influential Data and Sources of Collinearity.</em> J. Wiley & Sons, 1980.</p>
| 86
|
linear regression
|
Linear Regression coefficients through ANN
|
https://stats.stackexchange.com/questions/442294/linear-regression-coefficients-through-ann
|
<p>I am struggling to get ANN to estimate constant and coefficients of a linear regression problem. Unfortunately my results are way off from the expected. Kindy take a look at the reproducible code below.</p>
<pre><code>import matplotlib.pyplot as plt
import statsmodels.api as sm
from keras.models import Sequential
from keras.layers import Dense
from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X, y = make_regression(n_samples=500, n_features=2, n_targets=1, bias=10.0, noise=15, shuffle=True, random_state=4)
mm = MinMaxScaler()
X = mm.fit_transform(X)
Xc = sm.add_constant(X)
X_train, X_test, y_train, y_test = train_test_split(Xc, y, test_size=0.33, random_state=42)

## Let's evaluate linear regression results
reg = sm.OLS(y_train, X_train)
reg = reg.fit()
y_train_pred = reg.predict(X_train)
y_test_pred = reg.predict(X_test)
print(r2_score(y_train, y_train_pred))
print(r2_score(y_test, y_test_pred))
reg.summary()
#### coef std err t P>|t| [0.025 0.975]
#const -216.2056 3.672 -58.876 0.000 -223.429 -208.982
#x1 174.5213 4.913 35.520 0.000 164.856 184.186
#x2 272.8751 4.779 57.094 0.000 263.473 282.277

# Let's build an ANN and evaluate results
X_train = X_train[:, 1:]  ## constant column dropped: the Dense layer has its own bias
X_test = X_test[:, 1:]
model = Sequential()
model.add(Dense(1, input_dim=2, kernel_initializer='normal', activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train, y_train, epochs=500, validation_split=0.2, verbose=0)

plt.scatter(y_train, y_train, color='black')
plt.plot(y_train, model.predict(X_train), color='blue', linewidth=3)
plt.show()

for layer in model.layers:
    weights = layer.get_weights()
    print(weights)
## [array([[0.9757047], [0.839206 ]], array([0.8604008]]
</code></pre>
<p>The weights are very different from regression coefficients. Where am I going wrong? </p>
|
<p>Although more sophisticated optimizers do well in general for solving highly non-convex problems, using vanilla gradient descent may very well suffice for a convex one such as this (i.e. linear regression):</p>
<pre><code>model.compile(loss='mean_squared_error', optimizer='sgd')
</code></pre>
<p>A small touch on your plotting:</p>
<pre><code>plt.plot(y_train, model.predict(X_train), 'x', color='blue', linewidth=3)
</code></pre>
<p>which gives the following fit (using 1.5K epochs):</p>
<p><a href="https://i.sstatic.net/u15yB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u15yB.png" alt="enter image description here"></a></p>
| 87
|
linear regression
|
correlation and simple linear regression
|
https://stats.stackexchange.com/questions/532147/correlation-and-simple-linear-regression
|
<p>What does this sentence mean: "The correlation squared (r2 or R2) has special meaning in simple linear regression. It represents the proportion of variation in Y explained by X"?</p>
| 88
|
|
linear regression
|
Linear regression and arithmetic mean
|
https://stats.stackexchange.com/questions/43209/linear-regression-and-arithmetic-mean
|
<p>I do experiments with a certain parameter x. The result is y. I assume y is linearly related to x.</p>
<p>Suppose I can do 1000 experiments, which method will give me a better estimation of the linear relation?</p>
<ul>
<li>Select 1000 different values of x, get a single y for each x, and do linear regression?</li>
<li>Select 100 different values of x, run 10 experiments for each x, average the y values for each x, and then do linear regression on the 100 averages?</li>
<li>Select 100 different values of x, run 10 experiments for each x, and do linear regression without averaging first?</li>
</ul>
<p>What if I am not sure that the relation is linear?</p>
|
<p>I assume the question refers to the error on the parameter estimates. To assess the linear relationship between two variables x and y we use linear regression to estimate the two parameters intercept and slope. </p>
<p>It is easy to demonstrate that the last two options are identical, because during linear regression we minimize the sum of squared residuals which in this particular case amounts to the same as averaging all y at a particular x value.</p>
<p>However, there is a slight difference between the first option and the last two. We have the same number of data points or measurements, but in the first case we sample y at more x locations; in the second case we sample at any particular x value more often.</p>
<p>The following R code simulates the first and second scenario.</p>
<pre><code>for (i in 1:1e3) {
#1000 different values of x, get a single y for each x
x<-runif(1e3);
noise<-rnorm(length(x),sd=0.1);
y<-x+noise;
p1<-rbind(p1,as.array(lm(y~x)$coeff));
#100 different values of x, run 10 experiments for each x
x_rep<-rep(runif(1e2),times=1e1);
noise<-rnorm(length(x_rep),sd=0.1);
y_rep<-x_rep+noise;
p2<-rbind(p2,as.array(lm(y_rep~x_rep)$coeff));
};
# differences between standard deviations of intercepts and slopes
apply(p1,2,sd)[[1]]-apply(p2,2,sd)[[1]]
apply(p1,2,sd)[[2]]-apply(p2,2,sd)[[2]]
</code></pre>
<p>The for loop repeats the two scenarios many times, running linear regression each time. At the end we calculate the standard deviation of intercept and slope across repeats. The first scenario might be slightly better as its standard deviation seems to be consistently smaller.</p>
| 89
|
linear regression
|
Linear regression - minimum sample size
|
https://stats.stackexchange.com/questions/448977/linear-regression-minimum-sample-size
|
<p>I would like to perform a simple linear regression on data that shows a clear linear relationship.</p>
<p>How to determine the minimum sample size for a simple linear regression analysis?</p>
<p>My sample size is small, so even if the linear relationship is evident, I don't know how to determine if the sample size is adequate or not.</p>
<p>Thank you very much!</p>
|
<p>Well, I suppose the <em>minimum</em> is 2. But it really depends on what the goal is. If all you want to do is hint at a linear relationship, you won't need many. If your goal is to perform a test of hypothesis that the coefficient for the slope of your line has a particular sign, you can do a sample size calculation to obtain a pre-specified statistical power.</p>
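One way to make that sample-size calculation concrete is a small simulation; the effect size, noise level, and design below are illustrative assumptions, not values from the question:

```python
import numpy as np
from scipy import stats

def slope_power(n, slope=1.0, sigma=1.0, alpha=0.05, n_sim=2000, seed=0):
    # Simulated power to detect a nonzero slope with n points:
    # the fraction of simulated datasets where the slope test rejects.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.uniform(0.0, 1.0, n)
        y = slope * x + rng.normal(0.0, sigma, n)
        hits += stats.linregress(x, y).pvalue < alpha
    return hits / n_sim

# Power grows with n; choose the smallest n meeting your target (e.g. 0.8).
print(slope_power(20), slope_power(100))
```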
| 90
|
linear regression
|
Linear Regression and Quantile Regression
|
https://stats.stackexchange.com/questions/564642/linear-regression-and-quantile-regression
|
<p>Linear regression using the method of least squares estimates the conditional mean of the response variable across values of the predictor variables.</p>
<p>Quantile regression estimates a conditional quantile of the response variable across values of the predictor variables.</p>
<p>The least squares method minimises the sum of squared residuals, and quantile regression minimises a loss function, <span class="math-container">$\rho(\tau)$</span>, that depends on the quantile of interest.</p>
<p>I think I understand the methods.</p>
<p>My problem: I have a model for the failure times of a material, <span class="math-container">$\log(T) \sim N(\mu(x),\sigma)$</span>, where <span class="math-container">$\mu(x) = \beta_0 + \beta_1 x_1 +\beta_2 x_2 + \dots + \beta_n x_n$</span> is the mean, <span class="math-container">$x_1, \dots, x_n$</span> are covariates, and <span class="math-container">$\sigma$</span> is the standard deviation.</p>
<p>I usually approach a problem from a Bayesian perspective (well, I call myself a Bayesian but I often implement a model using Stan and uninformative priors so I am not sure if I am classed as a Bayesian).</p>
<p>I specified the likelihood and prior distributions and obtained estimates for the model parameters (conditional on the covariates). This worked fine. I am able to obtain any quantile of interest for the (log) failure times from the posterior distribution.</p>
<p>I was happy with this analysis until I started to overthink the problem.</p>
<p>"When you changed from linear regression to quantile regression you had to change the loss function. You have performed a Bayesian analysis like usual, you must adjust for quantile regression".</p>
<p>I think my above thoughts are incorrect. If least squares estimates provided parameter estimates for the log failure times, one could then obtain any estimate of interest. However, least squares estimates the conditional mean <span class="math-container">$E[\log(T)]$</span>, and not <span class="math-container">$\log(T)$</span> itself. Therefore, I have to change the loss function of interest to <span class="math-container">$\rho(\tau)$</span>, if I want the <span class="math-container">$\tau$</span> quantile of <span class="math-container">$\log(T)$</span>. Another loss function would be required if I want the <span class="math-container">$\tau^*$</span> quantile. This is because these methods estimate quantities of interest only, and not the random variable of interest.</p>
<p>The Bayesian approach (I am not saying only a Bayesian approach can do this. ML estimates with bootstrap or something would also provide estimates with uncertainty for <span class="math-container">$\log(T)$</span>. I am comparing a Bayesian approach to least squares and quantile regression.) estimates the distribution of <span class="math-container">$\log(T)$</span> and not of <span class="math-container">$E[\log(T)]$</span>, and hence I do not need to adjust anything, like when going from linear regression to quantile regression.</p>
<p>Can anyone please confirm my understanding. I think I was just overthinking for a moment because I had to adjust the loss function when moving from linear regression to quantile regression and thought "a standard Bayesian approach must need to be adjusted when you're interested in quantile regression". I hope this makes sense.</p>
|
<p>What you have seems to be a fairly standard log-normal survival/reliability model of continuous-time failure data. You presumably didn't model quantiles directly, but rather the entire function describing <span class="math-container">$\log (T)$</span> as a function of covariates and time. The way you did this was with likelihood-based methods. That doesn't seem to be what is usually considered "<a href="https://en.wikipedia.org/wiki/Quantile_regression" rel="nofollow noreferrer">quantile regression</a>." Rather, it's a Bayesian survival analysis.</p>
<p>If a cumulative distribution of failure times conditional on covariates is <span class="math-container">$F(t)$</span>, then the corresponding distribution of survival times is <span class="math-container">$S(t)=1-F(t)$</span>. All you are doing to get quantiles of survival times is sampling from your posterior estimate of <span class="math-container">$F(t)$</span>.</p>
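A sketch of that sampling step, using fabricated stand-ins for posterior draws of the log-normal parameters (a real analysis would take these from the fitted Stan model):

```python
import numpy as np
from scipy import stats

# With posterior draws of (mu, sigma) for log(T) ~ N(mu, sigma),
# any failure-time quantile follows directly.
rng = np.random.default_rng(6)
mu_draws = rng.normal(2.0, 0.05, 4000)            # fabricated posterior draws
sigma_draws = np.abs(rng.normal(0.5, 0.02, 4000))  # fabricated posterior draws

tau = 0.1  # the 10th-percentile failure time
q_draws = np.exp(mu_draws + sigma_draws * stats.norm.ppf(tau))
print(np.percentile(q_draws, [2.5, 50.0, 97.5]))  # credible interval for the quantile
```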
<p>One caution: did you observe failure times for all samples of the material, or did some samples not fail at all? In the latter case, make sure that your model incorporated the contribution of such samples with right-censored survival times to the likelihood. Otherwise your estimates are likely to be biased.</p>
| 91
|
linear regression
|
Is classification using linear regression called logistic regression or linear disriminant analysis?
|
https://stats.stackexchange.com/questions/523340/is-classification-using-linear-regression-called-logistic-regression-or-linear-d
|
<p>I have heard people describe logistic regression as linear regression except as it is deployed for classification. But I have heard the exact same comment about LDA (linear discriminant analysis). Out of logistic regression and LDA, which is closer to what happens in linear regression?</p>
|
<p>They are both close, but in different ways</p>
<ul>
<li>If you run ordinary least-squares regression with a binary class variable as the outcome (label) variable, you get exactly the 2-class case of linear discriminant analysis. So LDA (in the 2-class case) is linear regression run on a classification problem. It's conceptually different from linear regression in that the original derivation of LDA uses assumptions about the distribution of the predictor (feature) variables, which regression does not.</li>
<li>Logistic regression is a natural generalisation of linear regression to binary data, in which you model the mean of the outcome variable (which is the probability that it is 1 vs 0) using a linear combination of predictors, but with a 'link' function in between so that the probability stays between 0 and 1. Like linear regression, it's a special case of the generalised linear model, and wasn't derived based on assumptions about the distributions of predictor variables.</li>
</ul>
<p>So, LDA is <em>computationally</em> just linear regression as applied to a classification problem, but it's quite different as a model; logistic regression is closer as a model, but less similar computationally.</p>
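A quick numerical check of the computational equivalence in the first bullet, using simulated two-class data with scikit-learn estimators as stand-ins; the prediction is proportional, not identical, coefficient vectors:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal(1.5, 1.0, (100, 2))])
y = np.r_[np.zeros(100), np.ones(100)]

lda = LinearDiscriminantAnalysis().fit(X, y)
ols = LinearRegression().fit(X, y)

# In the two-class case the OLS coefficient vector on 0/1 labels is
# proportional to the LDA discriminant direction.
ratio = lda.coef_.ravel() / ols.coef_.ravel()
print(ratio)  # roughly constant across features
```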
| 92
|
linear regression
|
Support Vector Regression vs. Linear Regression
|
https://stats.stackexchange.com/questions/633091/support-vector-regression-vs-linear-regression
|
<p>I am new to ML and I am learning the different algorithms one can use to perform regression. Keep in mind that I have a strong mathematical background, but I am new in the ML field.</p>
<p>So I understand the math behind Support Vector Regression and behind Linear Regression. Now I just want to understand when is it best to use each. I would very much appreciate if you could give some real-life examples on when one works better than the other.</p>
|
<p>Contrary to popular belief (including beliefs implicit in <a href="https://stats.stackexchange.com/a/633112/247274">another answer</a>), linear regressions can handle extremely complicated relationships between variables, including curves and interactions. In that regard, a linear regression should be able to handle (basically) anything that a seemingly more sophisticated model like a support vector regression can do. To bring some mathematical rigor to this argument, consider the Stone-Weierstrass theorem.</p>
<p>A possible drawback to the linear model is that these relationships need to be specified for the linear regression to fit them. If you don't tell the linear model to consider an interaction between variables, for instance, it will not.</p>
<p>For better or for worse, complex machine learning models like support vector regressions (random forests, neural networks, etc.) go and figure out these complex relationships on their own. That sounds extremely powerful, and there is a sense in which it absolutely is. However, it also becomes easy to fit to coincidences in the data and find apparent patterns that aren't really there, because the model is so flexible and able to chase what it thinks are patterns.</p>
<p>Linear models will be most useful when you have some sense of what relationships matter. If you know to anticipate some nonlinearity, you can use polynomials or, perhaps better yet, splines to allow for curvature. If you anticipate interactions between variables, you can include them. If you anticipate interactions between nonlinear transformations, you can interact polynomials and splines, too. It is also possible to throw a huge amount of flexibility at the model, such as in the Earth/MARS method, possibly penalizing with some regularization. In terms of philosophy, however, this is not so different from the more sophisticated machine learning methods like support vector machines.</p>
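<p>As a minimal NumPy sketch of the Stone-Weierstrass point (the code and simulated data are my own, not from this answer): "linear" regression is linear in the <em>coefficients</em>, so a polynomial basis expansion lets ordinary least squares fit a clearly curved relationship.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 300))
y = np.sin(x) + rng.normal(0, 0.1, x.size)   # a nonlinear signal plus noise

# Polynomial basis expansion: the model is still linear in its coefficients
degree = 5
X = np.vander(x, degree + 1)                  # columns x^5, x^4, ..., x, 1
coef = np.linalg.lstsq(X, y, rcond=None)[0]
yhat = X @ coef

r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))  # close to 1: OLS captures the sine curve
```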
<p>Support vector regressions will be most useful when you do not know what relationships exist between the variables, just that the given variables should be predictive of the outcome (perhaps humans can look at those variables and reliably make accurate predictions, such as speech recognition where speakers of a particular language typically understand each other, based only on the audio signal). Since you are making the model not only figure out the relationship but which relationships to figure out, data requirements increase in order to keep from fitting to coincidences in the training data that cannot be counted on to exist when the model is deployed.</p>
<p>Vanderbilt's Frank Harrell, also a Cross Validated <a href="https://stats.stackexchange.com/users/4253/frank-harrell">member</a>, has at least two related blog posts worth reading, even if they aren't about support vector machines in particular. This answer reiterates some of his arguments.</p>
<p><a href="https://www.fharrell.com/post/stat-ml/" rel="noreferrer">Road Map for Choosing Between Statistical Modeling and Machine Learning</a></p>
<p><a href="https://www.fharrell.com/talk/mlhealth/" rel="noreferrer">Musings on Statistical Models vs. Machine Learning in Health Research</a></p>
| 93
|
linear regression
|
Exponentially weighted moving linear regression
|
https://stats.stackexchange.com/questions/9931/exponentially-weighted-moving-linear-regression
|
<p>I have a problem where I need to calculate linear regression as samples come in. Is there a formula that I can use to get the exponentially weighted moving linear regression? Not sure if that's what you would call it though.</p>
|
<p>Sounds like what you want to do is a two-stage model. First transform your data into exponentially smoothed form using a specified smoothing factor, and then input the transformed data into your linear regression formula.</p>
<p><a href="http://www.jstor.org/pss/2627674" rel="noreferrer">http://www.jstor.org/pss/2627674</a> </p>
<p><a href="http://en.wikipedia.org/wiki/Exponential_smoothing" rel="noreferrer">http://en.wikipedia.org/wiki/Exponential_smoothing</a></p>
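<p>For an online version computed as samples arrive, the standard formulation is recursive least squares with a forgetting factor $\lambda$, which is exactly an exponentially weighted moving regression. A hedged NumPy sketch (neither link contains code; the class name and simulated data are illustrative):</p>

```python
import numpy as np

class EWRegression:
    """Recursive least squares with exponential forgetting factor lam < 1."""
    def __init__(self, n_features, lam=0.99):
        self.lam = lam
        self.theta = np.zeros(n_features)        # coefficient estimates
        self.P = np.eye(n_features) * 1e6        # large "prior" covariance

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        k = self.P @ x / (self.lam + x @ self.P @ x)        # gain vector
        self.theta = self.theta + k * (y - x @ self.theta)  # correct estimate
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        return self.theta

# Stream samples from y = 2 + 3x with a little noise
rng = np.random.default_rng(0)
model = EWRegression(2, lam=0.98)
for _ in range(500):
    xi = rng.uniform(-1, 1)
    yi = 2 + 3 * xi + rng.normal(0, 0.05)
    model.update([1.0, xi], yi)          # [intercept, slope] regressors
print(np.round(model.theta, 2))          # close to [2, 3]
```

<p>Each update costs $O(p^2)$ for $p$ features and no history needs to be stored; older observations are down-weighted by a factor of $\lambda$ at every step.</p>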
| 94
|
linear regression
|
Linear Regression Prediction vs Extrapolation Prediction
|
https://stats.stackexchange.com/questions/208259/linear-regression-prediction-vs-extrapolation-prediction
|
<p>Suppose we observe $x$ and $y$ and we want to predict at $x=5$. A naive way would be to take each observation and compute $5/(x/y)$ or similarly $5*(y/x)$ and then take the overall mean. This is basically rescaling each observation to the unit scale and then extrapolating to 5.</p>
<p>A more sophisticated approach is to perform the linear regression and then predict at $x=5$.</p>
<p>Is there reason to believe the linear regression approach is more accurate compared to the first method? I believe the first approach is very sample dependent.</p>
<p>Here is an example:</p>
<pre><code>library(ggplot2)
set.seed(123)
nobs=1000
x=runif(nobs,0.1,100)
y=abs(x*.05+rnorm(nobs,0,1))
a2=data.frame(x,y)
ggplot(a2,aes(x=x,y=y))+geom_point()+geom_smooth()+geom_smooth(method='lm')
### Linear Regression Prediction
fit=lm(y~x)
print(predict(fit,newdata=data.frame(x=100),interval='confidence'))
# fit lwr upr
# 1 4.844763 4.729187 4.960339
### Naive Prediction
print(mean(100/(a2$x/a2$y)))
# 8.49
</code></pre>
|
<p>Yes, there is. We are modeling using $y = \theta x$. Consider a simple example:
$(1,2), (3,3)$.</p>
<p>Using naive, we are taking the average of $\frac{y}{x}$ for all data points as $\theta$ (see below for mathematical explanation). We have: $$\theta_{naive} = (1/2)(2+1) = 1.5$$</p>
<p>Using LR, we have $$ \theta_{LR} = 1.1$$</p>
<p>The training SSE with naive is $(2-1.5)^2 + (3-4.5)^2 = 2.5$ while with LR it is $(2-1.1)^2 + (3- 3.3)^2 = 0.9$.</p>
<p>Why is this? We can compare how the two $\theta$ differ mathematically</p>
<p>The naive predictor:</p>
<p>$$\hat{y} = \frac{1}{n} \sum^n_{i=1} \frac{y_i}{x_i}x = (\sum^n_{i=1} \frac{1}{n} \frac{y_i}{x_i})x$$</p>
<p>$$ \theta_{naive} = \sum^n_{i=1} \frac{1}{n} \frac{y_i}{x_i} $$</p>
<p>Linear regression:</p>
<p>$$ \theta_{LR} = (X^T X)^{-1} X^T Y= \frac{\sum^n_{i=1} x_i y_i}{ \sum^n_{i=1} x_i^2 } = \frac{\sum^n_{i=1} x_i^2 \frac{y_i}{x_i}}{ \sum^n_{i=1} x_i^2 } = \sum^n_{i=1} \frac{x_i^2}{ \sum^n_{j=1} x_j^2} \frac{y_i}{x_i} $$</p>
<p>So, they differ in how much they weigh each $\frac{y}{x}$. Naive weighs all $\frac{y}{x}$ equally (using $\frac{1}{n}$), whereas linear regression weighs it relative to $x$ (using $\frac{x_i^2}{ \sum^n_{j=1} x_j^2}$). It makes sense that we should weigh the slopes of points with larger $x$ values more: consider if we had (1,1) and (1,0). $\theta = 0.5$ minimizes error. If we had (1,1) and (10000,0), $\theta$ should decrease, not stay the same, despite the slopes staying the same.</p>
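<p>The two-point example above can be checked numerically (a small Python sketch; not part of the original answer):</p>

```python
import numpy as np

x = np.array([1.0, 3.0])
y = np.array([2.0, 3.0])

theta_naive = np.mean(y / x)    # equal-weighted average of slopes: 1.5
theta_lr = (x @ y) / (x @ x)    # through-origin least squares: 11/10 = 1.1

def sse(theta):
    return np.sum((y - theta * x) ** 2)

print(theta_naive, theta_lr)              # 1.5 1.1
print(sse(theta_naive), sse(theta_lr))    # 2.5 0.9 (approximately)
```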
| 95
|
linear regression
|
Linear Factor Model vs. Linear Regression Model
|
https://stats.stackexchange.com/questions/111231/linear-factor-model-vs-linear-regression-model
|
<p>I've been reading some literature that discusses 'linear factor models' which appear to describe the general equation often used in OLS regression. When people refer to a 'linear regression model' are they essentially just referring to a linear factor model? Where does the term linear factor model fit in in statistics and why, if at all, is it necessary to make the distinction?</p>
|
<p>Until now, I've only heard of linear regression models (LRM) as opposed to linear factor models (LFM). It looks like these are interchangeable terms, though different uses of the word 'factor' can be misleading here.</p>
<p>Here are two links calling the same generic form of the model by different names:</p>
<p>Factor: <a href="http://web.stanford.edu/~wfsharpe/mia/fac/mia_fac2.htm" rel="nofollow">http://web.stanford.edu/~wfsharpe/mia/fac/mia_fac2.htm</a></p>
<p>Regression: <a href="http://en.wikipedia.org/wiki/Linear_model" rel="nofollow">http://en.wikipedia.org/wiki/Linear_model</a></p>
<p>From briefly searching the web, it looks like 'factor models' are used to describe economic and financial areas. Factor, here, means an independent variable in the model. An additional factor means another column (or row) in the model's matrix representation. Factors can be considered continuous, numeric variables since the Stanford article lists macro-economic variables and returns on portfolios as examples of factors down the page.</p>
<p>On the other hand, programming in the statistics language R, a factor variable or factor (also known as a categorical variable) can only assume a limited set of different values, like the months of the year or simply the set {1,2}.</p>
<p>Finally, as of this writing, both meanings of 'factor' are currently used in academic papers.</p>
| 96
|
linear regression
|
Linear Regression to detect between a linear and non-linear trend
|
https://stats.stackexchange.com/questions/194236/linear-regression-to-detect-between-a-linear-and-non-linear-trend
|
<p>I have measured the area of spread of a number of plants through time. I'm interested in trying to ascertain whether a linear or a non-linear relationship (e.g. quadratic) best represents the increase in the sqrt of the area occupied by these plants through time. </p>
<p>My first feeling was that I could use a linear regression for this, whereby I fit a linear regression model containing a squared term to each plant's growth through time i.e. </p>
<p>y = a + b.x + c.x^2 (where x equals time)</p>
<p>and compare this to </p>
<p>y = a + b.x</p>
<p>via standard linear regression simplification to see whether a non-linear trend explains significantly more of the variation than just a linear trend. </p>
<p>However, I'm aware that linear regression probably shouldn't be used for time series data.</p>
<p>It is possible to do it in this way? </p>
<p>Thanks </p>
|
<p>Consider your linear model
$$
y_t = a + bt + e_t
$$
where $e_t$ is the error term at time $t$.</p>
<p>If you take the difference
$$
y_t - y_{t-1} = a + bt + e_t - a - b(t-1) - e_{t-1} = b + e_t - e_{t-1},
$$
you get an integrated moving average (ARIMA(0,1,1)) model. You can write it as </p>
<p>$$
\Delta y_t = b + e_t - \theta e_{t-1}
$$
where in this particular case $\theta=1$. With ARIMA (Autoregressive Integrated Moving Average) models you could investigate the more general ARIMA(p,1,q);</p>
<p>$$
\Delta y_t = b + \sum_{i=1}^p \phi_i \Delta y_{t-i} + e_t + \sum_{j=1}^q \theta_j e_{t-j}$$ </p>
<p>of which your linear model is a special case.</p>
<p>You can work without differencing $y_t$ but the series must be stationary. You can check out the <a href="https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average" rel="nofollow">Wikipedia entry</a> on ARIMA for more details.</p>
<p>For model selection you can use an information criterion such as AIC, BIC, or one of their variants (generally speaking, a lower AIC/BIC implies a better predictive model). <code>R</code> has particularly good functions for estimating and selecting among ARIMA models; see the <code>Arima</code> and <code>auto.arima</code> functions and documentation. However, you have to be particularly careful about comparing models that difference/transform $y_t$ with those that do not (AIC and BIC are not appropriate for that comparison). Using mean-squared prediction error or something similar would be more appropriate. </p>
<p>Non-linear time series can get hairy, there are methods out there but they tend to be harder to implement and interpret. One thing that practitioners often do is model $\ln[y_t]$ with an ARIMA, this implies non-linear exponential growth/decay in $y_t$. </p>
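<p>The ARIMA(0,1,1) claim is easy to verify by simulation: differencing a deterministic linear trend plus white noise gives a series with mean $b$ and the lag-1 autocorrelation $-\theta/(1+\theta^2) = -0.5$ of an MA(1) with $\theta = 1$. A NumPy sketch (the code is illustrative, not from the original answer):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 20000, 1.0, 0.3
e = rng.normal(0.0, 1.0, n)
y = a + b * np.arange(n) + e           # linear trend plus white noise

d = np.diff(y)                         # equals b + e_t - e_{t-1}: an MA(1)
rho1 = np.corrcoef(d[:-1], d[1:])[0, 1]

print(round(d.mean(), 2))   # ~0.3, the slope b
print(round(rho1, 2))       # ~-0.5, lag-1 autocorr of MA(1) with theta = 1
```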
| 97
|
linear regression
|
Is the equation is Linear Regression?
|
https://stats.stackexchange.com/questions/397287/is-the-equation-is-linear-regression
|
<p>Employee Salary = 3000 + x(Employee Age)^2,
is this a Linear Regression?</p>
|
<p>First, for this to be a regression, there must be parameters! I will assume 3000 and the coefficient x are the parameters in this case.</p>
<p>It is linear regression if you consider your "employee age squared" as a variable, so, strictly speaking, it is a linear regression only after a transformation.</p>
<p>In general, there are many ways to transform apparently non-linear models into linear ones by means of transformations such as exponentials, logs, roots, and powers.</p>
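<p>A quick numerical illustration (Python/NumPy and the simulated data are assumptions for the sketch): after the transform $z = \text{age}^2$, ordinary least squares is linear in its parameters and recovers both of them.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(20, 60, 200)
salary = 3000 + 1.5 * age**2 + rng.normal(0, 50, age.size)  # true model

# Linear in the parameters once we regress on z = age^2
Z = np.column_stack([np.ones_like(age), age**2])
intercept, coef = np.linalg.lstsq(Z, salary, rcond=None)[0]

print(round(intercept), round(coef, 2))  # near 3000 and 1.5
```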
| 98
|
linear regression
|
Linear regression forecast underestimation
|
https://stats.stackexchange.com/questions/27700/linear-regression-forecast-underestimation
|
<p>I have the following multiple linear regression model:</p>
<pre><code>Call:
lm(formula = Y ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8,
data = my.model, na.action = na.omit)
Residuals:
Min 1Q Median 3Q Max
-43.836 -1.507 0.010 1.485 46.231
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.0244927 0.0245157 -0.999 0.318
X1 -0.3484619 0.0134383 -25.931 <2e-16 ***
X2 0.1195273 0.0106940 11.177 <2e-16 ***
X3 0.1224587 0.0108849 11.250 <2e-16 ***
X4 -0.0010173 0.0028247 -0.360 0.719
X5 0.5496942 0.0156319 35.165 <2e-16 ***
X6 -0.2287941 0.0145018 -15.777 <2e-16 ***
X7 -0.2315801 0.0146361 -15.823 <2e-16 ***
X8 0.0005465 0.0003595 1.520 0.128
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.936 on 35849 degrees of freedom
(12534 observations deleted due to missingness)
Multiple R-squared: 0.05968, Adjusted R-squared: 0.05947
F-statistic: 284.4 on 8 and 35849 DF, p-value: < 2.2e-16
</code></pre>
<p>The model is affected by multicollinearity but my question is about the forecast, so this shouldn't be an issue.</p>
<p>I checked the absolute values of my model forecast and compared against the actual Y absolute values. The average of the absolute predicted values is significantly lower than the absolute observed values mean:</p>
<pre><code>> lm1.predict = predict(lm1, mydata)
> mean(abs(lm1.predict))
[1] 0.3294776
> mean(abs(mydata$Y))
[1] 1.206954
</code></pre>
<p>Does this mean that the linear regression variables I am using tend to underestimate the outcomes? Can any other conclusion be derived from this simple comparison?</p>
<p><strong>EDIT</strong></p>
<p>Another way to look at this is to calculate the absolute difference between each observation and the corresponding prediction:</p>
<pre><code>> mean(abs(mydata$Y - lm1.predict))
[1] 1.208378
</code></pre>
<p>These are the diagnostic from the regression:</p>
<p><img src="https://i.sstatic.net/Y7zmf.jpg" alt="enter image description here"></p>
|
<p>The variance of predictions is always going to be less than the variance of the observations. The predictions are estimates of the means of the distributions conditional on the predictors. So, assuming the mean of the data is not too far from zero, you are comparing the dispersion of the means with the dispersion of the observations. </p>
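<p>This is straightforward to demonstrate: for OLS with an intercept, the fitted values and residuals are orthogonal in-sample, so $\operatorname{Var}(\hat y) = R^2 \operatorname{Var}(y) \le \operatorname{Var}(y)$. A NumPy sketch with simulated data (an assumption; the OP's data are not available):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(0, 2, n)     # weak signal, lots of noise (low R^2)

X = np.column_stack([np.ones(n), x])
yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]

r2 = 1 - np.var(y - yhat) / np.var(y)
print(np.var(yhat) < np.var(y))                           # True
print(round(np.var(yhat) / np.var(y), 3), round(r2, 3))   # the ratio equals R^2
```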
| 99