Question: <p>Is it normal to get values for response that are $&gt; 1$ even though in logistic regression the response has meaning only in the range $[0,1]$?</p> <p>Does one then have to truncate all $&gt; 1$ values to mean just $1.0$?</p> <hr> <p>The reason for asking is that</p> <p>I got something that looked like it might have response $&gt; 1$ using a skin variable (1 means skin that's more prone to get sunburn and 0 means normal skin), a trt variable (1 signifies that the patient took beta carotene and 0 that the patient didn't) and then a cancer variable that signifies whether the patient got skin cancer during the study. So $cancer \text{ ~ } trt+skin$ produces $$cancer=0.1331410+0.5570949⋅trt+0.627890⋅skin$$ (these coefficients are after taking <code>invlogit()</code> of the coefficients returned by <code>glm()</code>)<br> which is $&gt;1$ if both variables $==1$. </p> <p>However, when I checked whether any such rows actually exist (where $skin=trt=1$), none do.</p> Answer: <p>You are not getting probabilities because you are confusing the order of operations between adding the coefficients and taking the inverse logit.</p> <p>Explanation: You have a logistic model given by <span class="math-container">$$logit(Pr(cancer=1|trt,skin,\beta)) = \beta_0 + \beta_1 * trt + \beta_2 * skin$$</span></p> <p>You are trying to calculate a probability (I denote this quantity by <span class="math-container">$Pr^*$</span>, since it's not a probability) by doing the following:</p> <p><span class="math-container">$$Pr^*(cancer=1|trt,skin,\beta) = logit^{-1}(\beta_0) + logit^{-1}(\beta_1 * trt) + logit^{-1}(\beta_2 * skin)$$</span></p> <p>When it should be this:</p> <p><span class="math-container">$$Pr(cancer=1|trt,skin,\beta) = logit^{-1}(\beta_0 + \beta_1 * trt + \beta_2 * skin)$$</span></p> <p>Note that, in general <span class="math-container">$$logit^{-1}(A+B) \neq logit^{-1}(A) + logit^{-1}(B)$$</span></p> <p>We would say that a non-linear transformation on a sum of 
two variables is not equal to the sum of the transformed variables, except in special, trivial cases.</p> <p>This would be easier to see if you used a log-linear model instead of a logistic model: <span class="math-container">$$exp(A+B)\neq exp(A) + exp(B)$$</span> Since we know that <span class="math-container">$$exp(A+B)= exp(A)*exp(B)$$</span></p> <p>In other words, to get a probability, you first find the log-odds at trt=skin=1 as</p> <p><span class="math-container">$logit(Pr(cancer=1|trt,skin,\beta)) = -1.873468 + 0.229380*trt + 0.5231755*skin$</span> <span class="math-container">$logit(Pr(cancer=1|trt=1,skin=1,\beta)) = -1.873468 + 0.229380*1 + 0.5231755*1$</span> <span class="math-container">$logit(Pr(cancer=1|trt=1,skin=1,\beta)) = -1.120912$</span></p> <p>And then transform: <span class="math-container">$$Pr(cancer=1|trt=1,skin=1,\beta) = logit^{-1}(-1.120912) = 0.2458422$$</span></p>
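The arithmetic in this answer is easy to check numerically. The original exchange uses R's <code>invlogit()</code>; the sketch below does the same thing in Python with a hand-rolled inverse logit (standard library only), using the coefficients reported in the answer:

```python
import math

def inv_logit(x):
    """Inverse logit: map log-odds to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Coefficients on the log-odds scale, as reported in the answer
b0, b_trt, b_skin = -1.873468, 0.229380, 0.5231755

# Wrong order of operations: inverse-logit each coefficient, then add.
# The result can exceed 1, which is exactly what the asker observed.
wrong = inv_logit(b0) + inv_logit(b_trt) + inv_logit(b_skin)

# Correct order: add on the log-odds scale first, then inverse-logit.
right = inv_logit(b0 + b_trt + b_skin)

print(round(wrong, 4))  # 1.3181 -- not a probability
print(round(right, 4))  # 0.2458 -- matches the answer
```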
https://stats.stackexchange.com/questions/241159/getting-1-responses-in-logistic-regression
Question: <p>I am currently learning about the assumptions of logistic regression and am having a hard time wrapping my head around <em>why</em> independence of observations is necessary for this test. Any guidance would be appreciated.</p> Answer:
https://stats.stackexchange.com/questions/488362/why-is-independence-of-observations-an-assumption-in-logistic-regression
Question: <p>I'm trying to run a logistic regression in R to determine what independent variables may determine if a sea turtle becomes entangled in a fishing net or not. My independent variables vary significantly from each other in both scale and class, e.g. Mesh size (7mm-1500mm), Twine diameter (0.33-4mm), Colour (red, blue, green etc.), Construction (Multi or Mono). Must I first convert all independent variables to a similar scale to run a glm command? If so, how do I standardise factors such as Colour and Construction? Also, is it necessary to produce a testing dataset and a training dataset? I see some people do this and others incorporate the entire model in the <code>glm</code> command.</p> Answer:
https://stats.stackexchange.com/questions/235467/logistic-regression-and-different-independent-variable-classes-what-to-do
Question: <p>I am running some logistic regressions in R. I need some help with interpreting coefficients. </p> <p>So, if my DV is 1 = yes and 0 = no, and I have five groups (a, b, c, d, e) and I make a the reference group (dummy coding), and the coefficient for b is significant and positive:</p> <ul> <li>does this mean the odds of b saying yes compared to the odds of a saying yes is greater? </li> <li>Or is there a greater odds of saying yes for group a compared to b?</li> </ul> Answer: <p>Denote the coefficient for $b$ by $\beta$ and the model's intercept by $\alpha$. We let group $a$ be the reference level. </p> <p>Logistic regression fits the model:</p> <p>$logit(\pi(x)) = \alpha + \beta_1 x_1 + ... + \beta_p x_p$</p> <p>Where $logit(\pi(x)) = log(\frac{\pi(x)}{1-\pi(x)}) = log(odds(\pi(x)))$</p> <p>Our reference group $a$ has the fit:</p> <p>$logit(\pi(a)) = \alpha$</p> <p>Hence, the odds of success for an individual from group $a$ is given by</p> <p>$odds(\pi(a)) = e^\alpha$ </p> <p>While our group $b$ with coefficient $\beta$ has the fit:</p> <p>$logit(\pi(b)) =\alpha + \beta$</p> <p>and so in the same way as above, we find the odds for group $b$ by taking the exponential of both sides:</p> <p>$odds(\pi(b)) = e^{\alpha + \beta} = e^\alpha \times e^\beta = odds(\pi(a))\times e^\beta$</p> <p>Hence, under your model, the odds for group $b$ are $e^\beta$ times the odds for group $a$. It follows that if $\beta$ is positive, then $e^\beta&gt;1$ and so the odds for group $b$ will be larger than group $a$. On the other hand, if $\beta$ is negative, then $e^\beta&lt;1$ and so the odds of group $b$ will be smaller than that of group $a$.</p>
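The $e^\beta$ relationship in this answer is easy to verify numerically. A small Python sketch, with made-up values for $\alpha$ and $\beta$ (the actual fitted values would come from the asker's model):

```python
import math

alpha = -0.85  # hypothetical intercept: log-odds of "yes" for reference group a
beta = 0.70    # hypothetical positive, significant coefficient for group b

odds_a = math.exp(alpha)
odds_b = math.exp(alpha + beta)

# The ratio of the two odds is exp(beta), whatever the intercept happens to be
print(round(odds_b / odds_a, 4))  # equals exp(0.70), about 2.0138
print(odds_b > odds_a)            # True: positive beta means larger odds for b
```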
https://stats.stackexchange.com/questions/235640/help-with-interpreting-coefficients-in-logistic-regression
Question: <p>I am working on a project for a faculty member who wants to know if placement in Developmental Reading (IV1: Dev RDG &amp; NonDev RDG) and/or placement in Developmental English (IV2: Dev ENG &amp; NonDev ENG) affects the success rate (DV: Successful &amp; Unsuccessful) in a course (HIS101 for example).</p> <p>I used a Logistic Regression to try and analyze this result. When I look at my results, however, something seems wrong. Would a different analysis be more appropriate? My main hang-up is that I see significant differences throughout my results but my crosstabs do not seem to agree.</p> Answer: <p>Your dependent variable is categorical, so yes, Logistic Regression is most likely appropriate here. Your independent variables are all categorical, but you can still apply a logistic regression. </p> <p>Your model looks something like this: </p> <p>$log(\frac{p}{1-p})= \beta_0 + \beta_1X_1 + \beta_2X_2$</p> <p>where </p> <p>$p$ is the probability of success in HIS101</p> <p>$\frac{p}{1-p}$ is the odds of success in HIS101</p> <p>$log(\frac{p}{1-p})$ is the log-odds of success in HIS101</p> <p>$X_1$ is your IV2: Placement in Developmental English. If the child is placed in Developmental English, then $X_1$ = 1; if not, then $X_1$ = 0</p> <p>$X_2$ is your IV1: Placement in Developmental Reading. If the child is placed in Developmental Reading, then $X_2$ = 1; if not, then $X_2$ = 0</p> <p>$\beta_1$ will tell you "How much do the log-odds of success in HIS101 change if the student is placed in developmental vs. non-developmental English?"</p> <p>$\beta_2$ will tell you "How much do the log-odds of success in HIS101 change if the student is placed in developmental vs. non-developmental Reading?"</p> <p>If there are other variables that could confound your results, like demographic data on the child (e.g. age, gender, race) or economic data (e.g. family income, zip code), then including them would probably help your analysis. 
A regression parameter $\beta_i$ tells you how much the log-odds of success changes for every unit change of $X_i$ <em>holding all the other variables constant</em>. So if a student is placed in Developmental English, $X_1 = 1$ and the log-odds of HIS101 success changes by $\beta_1$ <em>for any given combination of Developmental Reading placement, age, race group, income level, and whatever else you put in the model</em>. The assumption in the regression model is that all of these variables independently affect the probability of success. If you're looking at the impact of DevEng and DevRead on HIS101 success, you will have a stronger statistical argument of causation if you include more relevant variables in your model to control for their effects. </p> <p>See also: <a href="https://stats.stackexchange.com/questions/65818/regression-with-only-categorical-variables">Regression with only categorical variables</a></p>
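To make the interpretation concrete, the sketch below evaluates a model of this structure (intercept plus two dummy indicators) at all four placement combinations. The coefficient values are invented purely for illustration:

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical coefficients: intercept, Dev English effect, Dev Reading effect
b0, b1, b2 = 0.5, -0.8, -0.4

for x1 in (0, 1):      # x1 = 1 if placed in Developmental English
    for x2 in (0, 1):  # x2 = 1 if placed in Developmental Reading
        log_odds = b0 + b1 * x1 + b2 * x2
        print(f"x1={x1} x2={x2} P(success)={inv_logit(log_odds):.3f}")
```

Each dummy shifts the log-odds by its coefficient while the other indicators are held constant, which is exactly the "holding all the other variables constant" reading above.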
https://stats.stackexchange.com/questions/237355/is-logistic-regression-the-right-analyses-when-a-study-has-1-categorical-dv-and
Question: <p>I am running a logit model trying to predict purchases on a dataset including change variables, i.e. I have a dataset of this kind:</p> <pre><code>          webvisits.month1 webvisits.month2 webvisits.month3 Purchase
contract1               34               21               22        0
contract2               11                2                2        1
contract3                9               22               17        1
contractn                5               44               42        0
</code></pre> <p>The model is not performing well at all; would it be a good idea to try and normalize my variables? Would that affect the outcome? If this is the case, should I normalize them by month (considering the values by variable: webvisits.month1, webvisits.month2 and so on) or rather by contract (e.g. considering the distribution contract1 [34,21,22,0], contract2 [11,2,2,1] and so on)? Thanks, hope this makes sense.</p> Answer: <p>What you need to improve your model is not normalisation but extra features which could affect the target, e.g. features that capture the change across months in the independent variables (webvisits.month2 - webvisits.month1) or the average and maximum of the 3 months, to capture increasing and decreasing trends. Also, web visits alone might not be good predictors; you might need to include other information in the model, like what the user did during the web visit. Hope this helps!</p>
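The feature engineering the answer suggests can be sketched in a few lines. The counts mirror the question's table; the feature names are made up for illustration:

```python
# Raw monthly web-visit counts per contract, as in the question's table
visits = {
    "contract1": [34, 21, 22],
    "contract2": [11, 2, 2],
    "contract3": [9, 22, 17],
    "contractn": [5, 44, 42],
}

def trend_features(counts):
    """Derive change/trend features from raw monthly visit counts."""
    return {
        "change_m1_m2": counts[1] - counts[0],  # month-over-month change
        "change_m2_m3": counts[2] - counts[1],
        "mean_visits": sum(counts) / len(counts),
        "max_visits": max(counts),
    }

print(trend_features(visits["contract1"]))
```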
https://stats.stackexchange.com/questions/237559/normalization-of-change-variables-in-logistic-regression
Question: <p><strong>Preface</strong></p> <p>I've looked at <a href="https://stats.stackexchange.com/questions/11800/how-should-we-convert-sports-results-data-to-perform-a-valid-logistical-regressi">How should we convert sports results data to perform a valid logistical regression?</a> and <a href="https://stats.stackexchange.com/questions/26910/how-to-simulate-head-to-head-competition-based-on-winning-percentages">How to simulate head to head competition based on winning percentages?</a> but I didn't fully understand them.</p> <p><strong>Question</strong></p> <p>We have 5 rows about head to head competition between elderly people playing bridge. We have competitor_home's age, competitor_visitor's age and the outcome (1 if home wins and 0 if visitor wins).</p> <p>(trying to simulate a table below).</p> <pre><code>home_age visitor_age outcome
      72          68       1
      75          63       1
      78          74       1
      79          77       1
      71          71       1
</code></pre> <p>The question is how I would create a logistic regression model that would be able to predict who the winner is depending on home and visitor age when the outcome is always 1. My idea is to duplicate the table and swap visitor_age and home_age so that we get 5 outcomes with zero. Is that a valid approach? Like below.</p> <pre><code>home_age visitor_age outcome
      72          68       1
      75          63       1
      78          74       1
      79          77       1
      71          71       1
      68          72       0
      63          75       0
      74          78       0
      77          79       0
      71          71       0
</code></pre> <p>The variance is still the same.</p> Answer: <p>I can suggest to you 3 ideas:</p> <ul> <li>From your input data, you can create a new training set with only one feature: the age of the team. It is almost the same idea that you suggest in your question. 
Your training set will look like:</li> </ul> <blockquote> <pre><code>age outcome
 72       1
 75       1
 78       1
 79       1
 71       1
 68       0
 63       0
 74       0
 77       0
 71       0
</code></pre> </blockquote> <ul> <li>Maybe you prefer to use the difference of age between the two teams, so you can create this training set:</li> </ul> <blockquote> <pre><code>diff outcome
   4       1
  -4       0
  12       1
 -12       0
   4       1
  -4       0
   2       1
  -2       0
   0       1
   0       0
</code></pre> </blockquote> <ul> <li>The last idea is to use the <a href="https://en.wikipedia.org/wiki/Bradley%E2%80%93Terry_model" rel="nofollow noreferrer">Bradley-Terry model</a>, which is built to solve this kind of problem. Here is the documentation of the R package <a href="https://cran.r-project.org/web/packages/BradleyTerry2/vignettes/BradleyTerry.pdf" rel="nofollow noreferrer">BradleyTerry2</a>.</li> </ul> <p>However, I hope you have more than 5 rows as input data to build a relevant model. Also, I think it's totally useless to keep the "home/visitor" information because, at bridge, this information has no importance.</p>
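The answer's second idea (age difference, with mirrored rows) can be generated mechanically from the original five rows:

```python
# Head-to-head rows from the question: (home_age, visitor_age), home always won
games = [(72, 68), (75, 63), (78, 74), (79, 77), (71, 71)]

rows = []
for home, visitor in games:
    rows.append((home - visitor, 1))  # original orientation: outcome 1
    rows.append((visitor - home, 0))  # mirrored orientation: outcome 0

for diff, outcome in rows:
    print(diff, outcome)  # reproduces the "diff / outcome" table in the answer
```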
https://stats.stackexchange.com/questions/253515/how-to-model-a-logistic-regression-with-head-to-head-data
Question: <p>I have a variable $Y$=Control=$C$ and three variables:</p> <ul> <li>Fraud := $F$</li> <li>Error := $E$</li> <li>Waste := $W$</li> </ul> <p>all numerical variables. I am studying the effect of control methods on each of $F,E,W$, as well as on the combination of the three. </p> <p>To study the three variables simultaneously and efficiently, I am modeling $\frac{P(A)}{1-P(A)}$, where $A$ = the event where $F=E=W=1$, so that $A = A_F \cup A_E \cup A_W$, where $A_F$ is the event that $F=1$, etc. </p> <p>For this last part, to study the effects of control on "general efficiency", I am trying to do a logistic regression where I combine all measures of efficiency $F,E,W$ into a single variable, say $X$, which I define as $X=\frac{1}{3}(F+E+W)$, and then I regress $C$ against $X$. Is there a standard way of doing this? I am not sure that $F,E,W$ are pairwise independent. If I have the cutoff values, say $F=F_0, E=E_0, W=W_0$ (meaning I decide that $F=1$ if $F&gt;F_0$, $E=1$ if $E&gt;E_0$, $W=1$ if $W&gt;W_0$), is the cutoff value of $X$ the average of the three cutoff values?</p> <p>Are there any other considerations for this case? Should I address the issue of the pairwise independence of $F,E,W$?</p> Answer:
https://stats.stackexchange.com/questions/255969/1-1-mapping-in-logistic-regression
Question: <p>Could someone help me? <a href="http://sites.stat.psu.edu/~jiali/course/stat597e/notes2/logit.pdf" rel="nofollow noreferrer">http://sites.stat.psu.edu/~jiali/course/stat597e/notes2/logit.pdf</a> (page 4) What exactly are $\beta_{10}$ and $\beta_{20}$? How are they defined? </p> <p>I don't understand this; it's stated that $\beta=(\beta_{10}, \beta_1)^{T}$</p> Answer: <p>That is a multinomial logit model. The outcome has $K$ categories, one of which is the reference, so you are modeling $K-1$ odds. $\beta_{10}$ is the constant for the first odds, $\beta_{20}$ is the constant for the second odds, etc.</p>
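The "$K-1$ odds, each with its own constant" structure can be sketched numerically. The constants below are hypothetical, just to show how the per-category linear predictors turn into probabilities:

```python
import math

# Multinomial logit with K categories and category K as the reference: each
# non-reference category k has its own linear predictor with its own constant
# beta_k0. Probabilities follow by normalising, with the reference category's
# linear predictor fixed at 0.
def multinomial_probs(linear_predictors):
    """linear_predictors: values for categories 1..K-1; the reference gets 0."""
    exps = [math.exp(v) for v in linear_predictors] + [1.0]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical constants beta_10 = 0.5 and beta_20 = -0.2, evaluated at x = 0
probs = multinomial_probs([0.5, -0.2])
print([round(p, 3) for p in probs])  # three probabilities summing to 1
```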
https://stats.stackexchange.com/questions/258271/question-about-logistic-regression-formula
Question: <p>H0: There is no effect of treatment (Road vs control) on rat occupancy<br> H1: Road has an effect on rat occupancy</p> <pre><code>Mod1 &lt;- glmer(Rat.Present ~ Treatment * Set.distance + (1|Site/Trap.Night),
              data = df.sub1, family = binomial)
summary(Mod1)

Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: binomial ( logit )
Formula: Rat.Present ~ Treatment * Set.distance + (1 | Site/Trap.Night)
   Data: df.sub1

     AIC      BIC   logLik deviance df.resid
  1481.7   1589.3   -722.9   1445.7     2889

Scaled residuals:
    Min      1Q  Median      3Q     Max
-0.5000 -0.2926 -0.2491 -0.2168  5.8191

Random effects:
 Groups          Name        Variance Std.Dev.
 Trap.Night:Site (Intercept) 0.14220  0.3771
 Site            (Intercept) 0.03856  0.1964
Number of obs: 2907, groups: Trap.Night:Site, 42; Site, 7

Fixed effects:
                               Estimate Std. Error z value Pr(&gt;|z|)
(Intercept)                    -2.55908    0.33450  -7.650    2e-14 ***
TreatmentRoad                   0.19659    0.42991   0.457   0.6475
TreatmentRoad:Set.distance30   -0.36999    0.57663  -0.642   0.5211
TreatmentRoad:Set.distance60    0.61724    0.65301   0.945   0.3445
TreatmentRoad:Set.distance90    0.09708    0.59444   0.163   0.8703
TreatmentRoad:Set.distance120  -0.45507    0.59755  -0.762   0.4463
TreatmentRoad:Set.distance150   0.19878    0.59059   0.337   0.7364
TreatmentRoad:Set.distance180   0.83808    0.56708   1.478   0.1394
TreatmentRoad:Set.distance210   0.06726    0.54020   0.125   0.9009
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
</code></pre> <p><code>Rat.Present</code> is binary (Yes/No). <code>Treatment</code> is a categorical predictor variable of <code>road</code> or <code>control</code>. <code>Set.distance</code> is a categorical predictor variable of the distance <code>0</code>, <code>30</code>, <code>60</code>, <code>90</code> $\dots$ <code>210</code> m at both treatments.</p> <p>My random effects are the sites I sampled. At each site I sampled 6 nights, which is why <code>Trap.Night</code> is nested in <code>Site</code>.</p> <p>Question 2. 
How do I obtain the effect size, since this is binomial data? I have 8 sites (3 controls; 5 roads), N=72 per site, at a significance level of 0.05.</p> Answer:
https://stats.stackexchange.com/questions/263315/how-to-perform-a-power-analysis-for-the-following-binomial-glmm
Question: <p>I conducted a binary logistic regression analysis (the DV is measured as yes/no). Among my IVs, one IV is about partnership, measured dichotomously (yes/no), and another IV is population density, measured as (low=1, average=2, high (reference group)=3).</p> <p>The output shows that:</p> <ul> <li>For partnership: the $\beta$ value of partnership(1) is .795, and it is significant. </li> <li>For population density: the $\beta$ values are: low(1) = -.976, average = -.641</li> </ul> <p>I need your help on how to interpret the beta values, especially the negative signs.</p> Answer:
https://stats.stackexchange.com/questions/263583/how-to-interpret-beta-in-logistic-regression
Question: <p>I made different models. In the first, I took a dependent variable and four independent variables. In the second model, I took a different dependent variable and similar independent variables; likewise, I made four models. But when I ran binary logistic regression, I found similar p values in all models despite the different dependent variables. Could this happen, or am I making a mistake? I code the dependent variable as 0 and 1 in all models, and the independent variables in all models are the same: BMI, WHR, age and % body fat; then the p values in binary logistic regression come out similar. I am confused here. The dependent variables are also linked with each other, but in the original test results they have different values. </p> Answer: <p>"Similar" p values can certainly happen, especially if the dependent variables are related to each other. </p> <p>However, without seeing your code it's not possible to say for sure what you did or whether it was a mistake. </p> <p>E.g. suppose one DV was "Voted for McCain" and another was "Voted for Romney" and another was "Voted for Obama in 2008". Those would give very similar p values. </p>
https://stats.stackexchange.com/questions/200822/binary-logistic-regression-p-values-problem
Question: <p>I have some data that - in its raw form - represents grouped binomial data. The Y vector is the probability of an event. Using logistic regression I get one set of parameter coefficients. </p> <p>Turning the data into a longer form - Bernoulli format (i.e. the Y vector is 1 or 0) - I use logistic regression again and get a different set of parameter coefficients from before. Why is this? </p> Answer:
https://stats.stackexchange.com/questions/200840/binomial-glm-vs-bernoulli-glm
Question: <p>I have conducted a survey where all my questions are asked in a dichotomous manner (Yes/No).</p> <p>Eg IV:"Are you a smoker?", "Are you obese", "Is your gender male/Female" etc. DV: "Have you ever had a stroke?"</p> <p>Therefore both my dependent variable and independent variables are all dichotomous(Binary= measured in 0s and 1s).</p> <p><strong>My question is, is it appropriate to run a regression to determine the independent variables that drives the dependent variable given the fact that every single one of my variables (both dependent and independent) are dichotomous in nature?</strong></p> <p>If so, what kind of regression is the most appropriate? (Logistic regression?) and is there anything I should do to make the regression model more accurate?</p> <p>I have rudimentary understanding of statistics and regression modelling and would be so grateful if someone would point me in the right direction.</p> Answer: <p>In this case, you are relating binary properties of a person (answers to questions) to binary outcome (stroke/no stroke). A good place to start is to formulate this as a <a href="https://en.wikipedia.org/wiki/Logistic_regression" rel="nofollow">logistic regression</a> problem, since it will constrain your dependent variable to be between 0 and 1. The result can be interpreted as the probability that the person will have a stroke <em>given</em> their answers to the survey. (Assumes we code "Yes=1, No=0").</p> <p>Of course, you will need to (a) ensure your sample was representative of the group you intend to use it on (or of the general population being studied) and (b) <a href="https://en.wikipedia.org/wiki/Cross-validation_%28statistics%29" rel="nofollow">cross-validate</a> your data to see how robust your findings are.</p>
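Nothing about all-binary predictors changes the mechanics of logistic regression. As a minimal illustration (invented toy data, not the asker's survey), the sketch below fits a logistic model to 0/1 answers by plain gradient ascent on the log-likelihood, standard library only:

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy rows: (smoker, obese) answers and a 0/1 stroke outcome -- invented data
X = [(1, 0), (1, 1), (0, 0), (0, 1), (1, 1), (0, 0)]
y = [1, 1, 0, 0, 1, 0]

# Maximise the log-likelihood by simple gradient ascent
b = [0.0, 0.0, 0.0]  # intercept, smoker coefficient, obese coefficient
for _ in range(2000):
    for (x1, x2), yi in zip(X, y):
        err = yi - inv_logit(b[0] + b[1] * x1 + b[2] * x2)
        b[0] += 0.1 * err
        b[1] += 0.1 * err * x1
        b[2] += 0.1 * err * x2

p_smoker = inv_logit(b[0] + b[1])  # predicted probability: smoker, not obese
p_neither = inv_logit(b[0])        # predicted probability: neither answer "yes"
print(p_smoker > p_neither)        # True on this toy data
```

In practice one would use a fitted routine (R's `glm` with `family = binomial`, or an equivalent library) rather than hand-rolled updates; the point is only that dichotomous inputs pose no special problem.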
https://stats.stackexchange.com/questions/215490/can-i-run-a-regression-when-both-independent-and-dependent-variables-are-all-dic
Question: <p>One of my friends was asked the following question in an interview:</p> <p>There are 35000 independent variables and 7 million observations over those variables. There is a binary response variable. There is a success rate of 1%. What would be your approach here?</p> Answer:
https://stats.stackexchange.com/questions/223368/approaching-a-regression-problem-with-many-independent-variables-and-binary-resp
Question: <p>I am running a logistic regression in order to determine the error rate of an outcome given some covariates. Two of my covariates are indicator flags for the location. When I include an intercept, one of the location flags is dropped, which I understand. What I do not understand is that my $R^2$ also drops from around 0.82 to around 0.06. My parameter estimates do not change at all apart from the remaining location flag, and my intercept takes the value of the location flag that was removed.</p> <p>Essentially,</p> <p>$$ logit(P(Y_i = 1)) = \mathbf{\beta X} + \gamma_1i_1 + \gamma_2i_2 $$ has an $R^2$ of around 0.82, while $$ logit(P(Y_i = 1)) = \beta_0 + \mathbf{\beta X} + \gamma_1i_1 $$ has an $R^2$ of around 0.06.</p> Answer: <p>Keep in mind there is no real $R^2$ for logistic regression. There may be a variety of pseudo-$R^2$s, but their mileage can vary.</p> <p>For your first model, the baseline model for the pseudo-$R^2$ is logit=0, i.e., prob $Y_i=1$ is 0.5. For nearly any data, this is an awful model, so no wonder that adding anything shows a big improvement.</p> <p>For your second model, the baseline is logit = const, not necessarily 0, so prob $Y_i=1$ is constant but not necessarily 0.5.</p> <p>If your actual proportion of ones is say 0.1, with $X_i$ having a moderate amount of explanatory power, then your second model will show a modest $R^2$, while the first model only works to the extent that it uses $\bar X$ as a crutch in place of the intercept to move the predicted probabilities towards 0.1.</p>
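For reference, one common pseudo-R² (McFadden's) compares the fitted model's log-likelihood to that of a baseline model, which is exactly where the two models in the question diverge. The log-likelihood values below are invented to show the arithmetic:

```python
# McFadden's pseudo-R^2 = 1 - (LL of fitted model / LL of baseline model).
# An intercept-only baseline already matches the overall event rate; a
# "logit = 0" baseline (probability fixed at 0.5) usually does not.
ll_fitted = -100.0     # hypothetical fitted-model log-likelihood
ll_intercept = -110.0  # hypothetical intercept-only baseline
ll_zero = -550.0       # hypothetical logit=0 baseline: far worse

r2_vs_intercept = 1.0 - ll_fitted / ll_intercept
r2_vs_zero = 1.0 - ll_fitted / ll_zero

print(round(r2_vs_intercept, 3))  # modest: 0.091
print(round(r2_vs_zero, 3))       # inflated: 0.818
```

The same fitted model looks either modest or spectacular depending purely on the baseline, mirroring the 0.06 vs 0.82 contrast in the question.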
https://stats.stackexchange.com/questions/149074/why-does-the-inclusion-of-an-intercept-in-my-logistic-regression-cause-my-r2
Question: <hr> <p>Could someone point me toward a specific method to model data that consists of two groups of observations having the same dependent variable and sharing some explanatory variables, BUT also having explanatory variables that are defined for one group and not for another? A situation like this:</p> <hr> <ul> <li>binary dependent var: y </li> <li>shared explanatory variables for all groups: x1 and x2 </li> <li>if group=1, then x3 and x4 are among explanatory variables</li> <li>if group=2, then x5 and x6 are among explanatory variables</li> </ul> <hr> <p>My first reaction was to interact the group variable with the non-shared variables, but I don't think that's a good idea.</p> <p>I appreciate any hints and clues.</p> Answer:
https://stats.stackexchange.com/questions/149510/logistic-regression-when-data-consists-of-shared-and-non-shared-variables
Question: <p>My question is regarding the LR cost function from Andrew Ng's ML course (<a href="http://feature-space.com/en/document50.pdf" rel="nofollow noreferrer">http://feature-space.com/en/document50.pdf</a> , page 5)</p> <p>$cost= \frac{1}{m}[ -y \times \log(\psi) - (1-y) \times \log(\kappa) ]$</p> <p>The vector y holds values for the digits (1-10), so if we plug these values in the cost function then the cost function takes ambiguous values. For instance if y=5, then the cost function will contain both terms: </p> <p>$cost (y=5) = \frac{1}{m} [ -5 \times \log(\psi) -(1-5) \times \log(\kappa) ]$</p> <p>As per <strong>Andrew's</strong> lecture I remember him saying that only one of the log terms would remain inside the cost function: if classified correctly, the $\log(\psi)$ term remains, else the $\log(\kappa)$ term remains.</p> <p>Please help me see where I'm getting it wrong.</p> Answer: <p>$y$ always takes on values of 1 or 0, as you noted. For the multi-class problem, you're going to solve for the "one vs. all" case. You'll need to transform your $y$ vector into a vector of 1's and 0's depending on the class you are minimizing for. So for the number 5, you'll solve for $P(y=5)$ vs. $P(y \ne 5)$. You repeat that for all your digits. You come up with 10 different $h_\theta$'s, i.e. $h^{(1)}_\theta$, $h^{(2)}_\theta$, ...</p> <p>Also, the following is from your link at the bottom of page 8:</p> <blockquote> <p>When training the classifier for class k ∈ {1, ..., K}, you will want a m-dimensional vector of labels y, where yj ∈ 0, 1 indicates whether the j-th training instance belongs to class k (yj = 1), or if it belongs to a different class (yj = 0).</p> </blockquote>
https://stats.stackexchange.com/questions/156284/logistic-regression-cost-function-intuition
Question: <p>I am building a model that predicts a proportion: $y_i \sim f(x_{1,i}, x_{2,i},\dots, x_{n,i})$, where $y_i \in [0,1]$.</p> <p>One thing I find is that 40% of the observations have $y_i=0$. For the remaining 60%, if I plot $logit^{-1}(y_i)$, it looks like a nice bell curve. </p> <p>My question here is whether I should really build two models. The first model is a logistic regression that predicts if $y_i=0$ and the second model predicts $y_i$ when $y_i&gt;0$.</p> <p>To put it together, if the logistic regression gives an estimate $P(y_i&gt;0|x_i)$ that my $y_i$ is not 0, and my second regression gives me $E[y_i|x_i, y_i&gt;0]$, then $E[y_i|x_i]= E[y_i|x_i,y_i&gt;0] \, P(y_i&gt;0|x_i)$.</p> <p>Does this sound acceptable? Is there any other/better way to handle the bi-modal distribution of $y$?</p> Answer:
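The combination step in the question - a two-part (hurdle-style) prediction - is just a product of the two models' outputs for each observation. A sketch with hypothetical fitted values standing in for one observation:

```python
# Two-part prediction: E[y|x] = P(y > 0 | x) * E[y | x, y > 0].
# Both inputs below are hypothetical fitted values for a single observation.
p_nonzero = 0.6            # from the logistic model: P(y_i > 0 | x_i)
mean_given_nonzero = 0.45  # from the second model: E[y_i | x_i, y_i > 0]

expected_y = p_nonzero * mean_given_nonzero
print(round(expected_y, 2))  # 0.27
```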
https://stats.stackexchange.com/questions/161038/proportion-predictive-model-with-bi-modal-distribution
Question: <p>I have the following problem.</p> <p>Three hospitals of similar structure have very different mortality rates for one particular disease. I would like to analyse whether location, as a factor, has an influence on mortality after adjusting for age, gender, urgencies etc.</p> <p>My plan is to try logistic regression for this. If a location shows a significant OR, this would mean that the parameters that we cannot measure now (say, the real qualification of the staff) should be investigated further.</p> <p>Does this make sense?</p> <p>I have some experience with R including several Coursera courses. I have no formal computer science qualification however and I hesitate to just use the "gun" I have on my computer without some advice.</p> Answer: <p>I think that could make sense - if I understand what you'd want to do, you'd have an indicator for whether or not the person passed away, and that would be the target of your model. If the hospitals are of similar structure (similar services, resources, etc.) then you could just include a dummy indicator in the regression for each hospital. If the coefficient on that indicator ends up being significant, it would mean that there is something about that hospital that you're not capturing in your model that may be contributing to the deaths.</p> <p>As an additional note, if you wanted to model across many dissimilar hospitals, for which you would then be including information about each hospital in the regression, you'd want to investigate using a multilevel (aka hierarchical) model.</p>
https://stats.stackexchange.com/questions/168037/general-question-on-the-analysis-design
Question: <p>My research concerns the language of Alzheimer's patients. As the disease progresses, their language becomes more concrete and less abstract - they seem to 'lose' their abstract vocabulary more quickly. Tracking that change over the course of the disease might have clinical benefits. </p> <p>I have identified a number of factors that measure (to an accuracy of about 85%) the relative concreteness of nouns, within an SPSS binary logistic regression (BLR) model, comprising a constant and four independent variables. The BLR model produces a 'score' for each individual noun: low or negative for abstract nouns, higher and positive for concrete nouns. The objective is not simply to classify the nouns as abstract or concrete, but rather to rank them along a gradient. </p> <p>To obtain a 'concreteness rating' for a text, I simply calculate the mean of the scores of all the nouns in the text. Although this has given good results in testing, it has been suggested that this is not a legitimate application of BLR (my knowledge of which has been gleaned from YouTube videos). </p> <p>So - is there a fundamental flaw in my method? And if so, what might be an alternative? </p> <p>Any help and advice would be very gratefully received. </p> <p>Kevin </p> Answer: <p>It doesn't seem fundamentally flawed to me. For this to work, you need</p> <ol> <li>A training set of nouns that are coded "abstract" or "concrete".</li> <li>A BLR model that relates concreteness to your independent variables.</li> <li>Test this model on a test set of nouns whose concreteness you know.</li> </ol> <p>What the BLR does is estimate the logit (log odds) of concreteness for each noun. So although each noun is either concrete or abstract, the score can be any real number. This gives you your gradient. Alternatively, you could use the associated probability, which has to lie on [0,1], but I think the log odds would perform better.</p> <p>As far as it goes, your approach seems to make sense. 
The real test is whether the scores that you get for pieces of text reflect the truth of the matter ... do "concrete" texts seem concrete when read through by a discerning English speaker?</p> <p>Statistically speaking, this isn't really about binary logistic regression. You are using logistic regression to build a score. Whether this makes sense or not depends on whether the score you get is sufficiently subtle to do what you want.</p> <p>For example, taking the average of the word scores balances concrete against abstract nouns. Think "Zen and the Art of Motorcycle Maintenance" --- how would you rate a title like that? You might want to rate abstract nouns higher than concrete ones, for example ... since everyone can use concrete nouns.</p> <p>Anyway, I would play around with a bunch of weighting options and test them against normal people and people with Alzheimer's to see what works.</p> <p>Note: I am assuming here that you are using logistic regression because it is easier to calculate the independent variables than the concreteness or otherwise of English nouns. However, if you have a way of knowing the concreteness, based on some dictionary someone has compiled, or by hand coding, then you are better off basing your score on the binary variable concrete/abstract than on your estimate. The regression model is trying to predict concreteness. It can never do better than actually knowing concreteness. An English prof might actually know where such a word list would be. That's not really a question for this site, however.</p>
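The averaging step under discussion is simple enough to sketch. The per-noun scores below are invented; in the real application they would be the log-odds produced by the fitted BLR model:

```python
# Hypothetical per-noun log-odds scores: positive = concrete, negative = abstract
noun_scores = {"table": 2.1, "dog": 2.6, "idea": -1.8, "freedom": -2.3}

def concreteness_rating(nouns):
    """Rate a text as the mean of its nouns' scores, as the asker does."""
    scores = [noun_scores[n] for n in nouns]
    return sum(scores) / len(scores)

print(concreteness_rating(["table", "dog"]))     # 2.35: reads as concrete
print(concreteness_rating(["idea", "freedom"]))  # -2.05: reads as abstract
```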
https://stats.stackexchange.com/questions/169522/gradient-scores-from-binary-logistic-regression
Question: <p>I have two questions I hope you can help me with.</p> <p>I am doing a stepwise logistic regression.</p> <ol> <li>I have a variable that includes information other variables already include. For example, "price_missing" ($1$ means the price is missing) and "price" ($0$ means the price is present). Would it be normal practice to drop these variables before doing a regression? It seems to make the model worse.</li> <li>Some of the data is skewed. Would it make sense to do a transformation of the variable before doing a stepwise logit regression?</li> </ol> Answer: <p>Agreed with @gung on the stepwise LR. Here are my personal thoughts:</p> <ol> <li><p>Difficult to answer based on the information you provided. From the statistical point of view, you may consider <a href="https://stats.stackexchange.com/search?q=collinearity%20">the collinearity problem</a>. But in the building phase of the model, we should not consider only the statistical/scientific point of view; the independent variables should also be chosen based on experience and on the meaning of the variable in reality. Consequently, following the same strategy of selecting variables, two different people with different backgrounds may end up choosing different models, and it is rarely obvious which one is better.</p></li> <li><p>Before transforming the variable, let's first look at the relationship between the independent variable you want to transform and the dependent variable; the goal is to see what kind of relationship exists between them. Based on that, you can better judge whether it's necessary to transform the variable. If it is, it also helps you choose the transformation function.</p></li> </ol>
https://stats.stackexchange.com/questions/172572/stepwise-logistic-regression-drop-variables-transform-variables
Question: <p>I am trying to develop a model for prediction of retention. The problem is that retention is very rare - approx. 0.2%. So far I have been using logistic regression, without much success however. For example, in the interval of predicted probability above 70% I am getting 4 true retention clients and 157 wrongly predicted retentions.</p> <p>I have spent quite a lot of time deciding what covariates to choose and what transformation to apply. I don't think I can improve that. </p> <p>The question could also be formulated as follows. <strong>How to use (or what to use instead of) logistic regression when the response vector "Y" contains very few 1's (0.2% in my case)?</strong></p> <p>If you think you should know what data I have, tell me and I will provide you with this information. But I don't think it's important. Anyway, I have enough data. About 1.5 mil rows, so about 3000 of 1's.</p> Answer: <p>You have a quite high lift in the scored data. So your predictions are not necessarily "bad". What are the classification statistics (e.g. AUC) in the test data set? </p>
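As a quick sanity check on the "quite high lift" claim, here is the arithmetic in Python, using only the counts given in the question (about 3000 events in 1.5M rows, and 4 true retentions among 161 flagged clients):

```python
# Lift of the top-scored bucket relative to the base retention rate,
# using the counts from the question.
base_rate = 3000 / 1_500_000        # about 0.2% retention overall
bucket_rate = 4 / (4 + 157)         # 4 true among 161 flagged clients
lift = bucket_rate / base_rate
print(round(lift, 1))  # 12.4
```

So the flagged group retains at roughly 12 times the base rate, which is why the answer pushes back on calling the predictions "bad".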
https://stats.stackexchange.com/questions/175684/logistical-regression-very-few-1s-in-response-vector-y
Question: <p>For a (binary) logistic regression, I have two IV's in my model. The first IV has three categories (one person, two persons, three or more persons). The second variable is binary (communication exists vs. does not exist). For the first category, the second IV has no meaning, but for the second and third categories it does.</p> <p>Question is, how can I include all cases in one regression? The DV is always the same.</p> Answer: <p><strong>This will happen naturally, with no intervention on your part.</strong></p> <p>Consider, for instance, <a href="http://www.ats.ucla.edu/stat/mult_pkg/faq/general/dummy.htm">dummy coding</a>. This system uses vectors of zeros and ones to indicate the categorical variables in a way that allows straightforward interpretation of the coefficients. A variable with $k$ categories is represented by $k-1$ terms (along with an "intercept"). A standard way to describe this uses vector notation.</p> <ul> <li><p>The "base" contribution to the response is the intercept $\beta_0$. The corresponding vector is $(1,0,\ldots,0)$ with $k$ components.</p></li> <li><p>The contribution of the <em>second</em> category <em>relative to the first</em> is $\beta_1$, whence the contribution of the second category is $\beta_0 + \beta_1$. The corresponding vector is $(1,1,0,\ldots,0)$.</p></li> <li><p>$\cdots$</p></li> <li><p>The contribution of category $k$ relative to the first is $\beta_{k-1}$, whence the contribution of category $k$ is $\beta_0 + \beta_{k-1}$. The corresponding vector is $(1,0,\ldots,0,1)$.</p></li> </ul> <p>Thus, each vector has an initial $1$ (for the intercept). The vectors for all categories but the base have a single additional $1$. Each observation, as given by its vector $\mathbf{x}$, contributes</p> <p>$$\mathbf{x} \cdot (\beta_0, \beta_1, \ldots, \beta_{k-1})$$</p> <p>to the response. 
These dot products give the values $\beta_0, \beta_0+\beta_1, \ldots, \beta_0 + \beta_{k-1}$ mentioned in the bulleted list above.</p> <p>The same system is used when more than one categorical variable is included among the regressors, <em>but they all share the same intercept.</em> In other words, the "base" case is the one where all categorical variables have their base values.</p> <p><strong>The principal advantage of this coding system</strong>--besides being automatic in just about any statistical computing platform--is that the coefficients have simple natural interpretations. To evaluate whether the existence of communication is significant, for instance, you would examine the coefficient associated with $x_2$ ($\beta_3$ in this example) and test whether it differs significantly from zero. This test is usually automatically conducted by software and shown in its summary output.</p> <hr> <p><strong>The question provides a good example.</strong> The following table (automatically created by <code>R</code>) shows all six possible combinations of a three-category regressor $x_1$, with values "1", "2", and "3+", and a two-category regressor $x_2$ with values "No" and "Yes".</p> <pre><code>x1  x2   Intercept  x1=2  x1=3+  x2=Yes  Coefficient
1   No   1          0     0      0       b0
2   No   1          1     0      0       b0 + b1
3+  No   1          0     1      0       b0 + b2
1   Yes  1          0     0      1       b0 + b3  -- there won't be any rows like this
2   Yes  1          1     0      1       b0 + b1 + b3
3+  Yes  1          0     1      1       b0 + b2 + b3
</code></pre> <p>The left two columns show the combined values of $x_1$ and $x_2$. The remaining four columns correspond to (a) an intercept common to both variables, (b) $3-1=2$ components for the effects of $x_1$ relative to the base, and (c) $2-1=1$ components for the effects of $x_2$ relative to the base (that is, the difference between having communications and not). We may call their coefficients $\beta_0, \beta_1, \beta_2, \beta_3$, in order from left to right. 
The dot product, showing the contribution of each row to the response, is summarized in the rightmost column (in which <code>b0</code> stands for $\beta_0$, <em>etc</em>).</p> <p><strong>When certain combinations are not possible,</strong> such as <code>x1=1</code> and <code>x2=Yes</code> (represented in the fourth row), <strong>they simply will not appear in the dataset.</strong> Because of this, some might argue that the interpretation of $\beta_3$ should change subtly. Whereas before it would have been understood as the difference between communications and no communications, now it is understood as that difference <em>for the cases where communications make sense.</em></p> <p>Here is an example of software output (for a logistic regression) using this coding:</p> <pre><code>Coefficients:
            Estimate Std. Error t value Pr(&gt;|t|)
(Intercept)  0.65625    0.07841   8.369 3.09e-14 ***
x1.2        -0.33594    0.10373  -3.238  0.00147 **
x1.3+       -0.50781    0.10373  -4.895 2.43e-06 ***
x2Yes        0.04687    0.07841   0.598  0.55085
</code></pre> <p>The four lines correspond to the four similarly-labeled columns in the table. In this case, the software has performed a t-test for <code>x2Yes</code>, which is $\beta_3$, and obtained a p-value of $0.55085$. This would not be considered significant by anyone. The conclusion would be that although there is some evidence that communications increases the chance of a response (as evidenced by the positive estimate $\hat\beta_3 = 0.04687$), it is not significant in this dataset.</p>
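The dot-product logic in the table can be reproduced in a few lines. This sketch (in Python rather than R) builds the dummy vectors for the five observable combinations and evaluates their contributions using the coefficient estimates from the example output:

```python
# Dummy vectors (intercept, x1=2, x1=3+, x2=Yes) for each observable
# combination of the two categorical regressors.
rows = {
    ("1",  "No"):  [1, 0, 0, 0],
    ("2",  "No"):  [1, 1, 0, 0],
    ("3+", "No"):  [1, 0, 1, 0],
    ("2",  "Yes"): [1, 1, 0, 1],
    ("3+", "Yes"): [1, 0, 1, 1],
}

# Coefficient estimates copied from the example output above
b = [0.65625, -0.33594, -0.50781, 0.04687]

def contribution(x, beta):
    """Linear contribution of one covariate pattern: the dot product x . beta."""
    return sum(xi * bi for xi, bi in zip(x, beta))

for combo, x in rows.items():
    print(combo, round(contribution(x, b), 5))
```

Each printed value is the corresponding `b0 + ...` sum from the table's rightmost column.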
https://stats.stackexchange.com/questions/176190/regression-variable-has-no-meaning-for-one-category
Question: <p>Is there a measure in logistic regression that maybe penalizes you for having too many independent variables like in multiple regression with the adjusted R squared?</p> <p>That is, does having too many independent variables in a logistic regression hurt the model?</p> <p>What about dummy variables? Can you have too many of those to the point of unpredictability? </p> Answer: <p>For the typical low signal:noise ratio we see in most problems, a common rule of thumb is that you need about 15 times as many events and 15 times as many non-events as there are parameters that you entertain putting into the model. The rationale for that "rule" is that it results in a model performance metric that is likely to be as good or as bad in new data as it appears to be in the training data. But you need 96 observations just to estimate the intercept so that the overall predicted risk is within a $\pm 0.1$ margin of error of the true risk with 0.95 confidence.</p>
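The 96-observation figure can be checked with the standard normal-approximation sample-size formula for a proportion, n = (z/margin)^2 * p(1-p), evaluated at the worst case p = 0.5 where the binomial variance is largest:

```python
# Where the "96 observations" figure comes from: a +/-0.1 margin of error
# with 0.95 confidence for a proportion, at the worst case p = 0.5 where
# the binomial variance p(1-p) is largest (0.25).
z, margin = 1.96, 0.1
n = (z / margin) ** 2 * 0.25
print(round(n))  # 96
```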
https://stats.stackexchange.com/questions/79366/maximum-number-of-independent-variables-in-logistic-regression
Question: <p>My understanding of Logistic Regression is that it is actually a classifier, hence used for predicting either a categorical outcome (ie. binary or an outcome with specific labels) as opposed to a continuous outcome. I would have expected that predicting a stock price would be a continuous outcome, so I don't understand how a stock price can actually be a classification. Can someone please enlighten me?</p> <p><a href="https://ieeexplore.ieee.org/document/5260596" rel="nofollow noreferrer">An example of research paper using Logistic Regression to predict a stock price</a>.</p> Answer: <p>Instead of predicting how much the stock gains or loses, the models are predicting the <strong>sign</strong> of the gain or loss, i.e. a binary outcome.</p>
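That is, the continuous price series is reduced to a binary label before modeling. A toy sketch of the label construction (the paper's actual feature setup may differ):

```python
# Convert a price series into the binary outcome a classifier models:
# 1 if the stock went up from one period to the next, 0 otherwise.
prices = [100.0, 101.5, 101.0, 103.2, 102.8]
labels = [1 if b > a else 0 for a, b in zip(prices, prices[1:])]
print(labels)  # [1, 0, 1, 0]
```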
https://stats.stackexchange.com/questions/178757/why-is-logistic-regression-mentioned-by-many-sources-as-useful-in-predicting-sto
Question: <p>Suppose we have $n$ observations. For example, consider $n$ people who each have their blood pressure ($x_1$), pulse ($x_2$), and blood glucose ($x_3$) levels measured. So there are $3$ explanatory variables measured for each person. The outcome variable is presence or absence of obesity ($Y$). In this case, does logistic regression assume that the data are distributed as $\text{Bernoulli}(p_i)$? For example, for the first person, we measure $x_1,x_2,x_3$ and compute $p_1$ (the probability of observing this)?</p> Answer: <p>Yes: the model is <span class="math-container">$\operatorname{logit} p_i = \beta_0 +\beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{3i}$</span>.</p> <p>That's true for bog-standard logistic regression anyway - the term is sometimes used where there's an extra parameter for dispersion, or for an estimating equation approach for which the Bernoulli model isn't assumed.</p> <p>Re your comment: <span class="math-container">$\sum_{i=1}^{m_j} Y_{ij}$</span> has a binomial distribution <span class="math-container">$\operatorname{Bin}(m_j,p_j)$</span> for groups of <span class="math-container">$m_j$</span> people (from the original <span class="math-container">$n$</span>) who have the same covariate pattern—the same blood pressure, pulse rate &amp; glucose levels—&amp; therefore the same probability <span class="math-container">$p_j$</span> of obesity. If no-one has the same covariate pattern, then there are <span class="math-container">$n$</span> groups, each with <span class="math-container">$m_j=1$</span>, i.e. <span class="math-container">$n$</span> different Bernoulli distributions. 
To be clear, for each <em>individual</em> person <span class="math-container">$Y_i\sim\operatorname{Bin}(1,p_i)\equiv\operatorname{Bern}(p_i)$</span>, &amp; as @Frank says, there's no real need to consider people grouped together by covariate pattern, though it's sometimes useful for diagnostics.</p> <p>To be really clear, if your model says this:–</p> <blockquote> <p>Tom: 90 mmHg, 80 /min, 6 mmol/l =&gt; 60% chance of obesity</p> <p>Dick: 90 mmHg, 80 /min, 6 mmol/l =&gt; 60%</p> <p>Harry: 60 mmHg, 60 /min, 5 mmol/l =&gt; 20%</p> </blockquote> <p>you can write this:–</p> <p><span class="math-container">$$Y_{\mathrm{Tom}}+Y_{\mathrm{Dick}}\sim \operatorname{Bin}(2,60\%)$$</span> <span class="math-container">$$Y_{\mathrm{Harry}}\sim \operatorname{Bin}(1,20\%)\equiv\operatorname{Bern}(20\%)$$</span></p> <p>or this:–</p> <p><span class="math-container">$$Y_{\mathrm{Tom}}\sim \operatorname{Bin}(1,60\%)\equiv\operatorname{Bern}(60\%)$$</span> <span class="math-container">$$Y_{\mathrm{Dick}}\sim \operatorname{Bin}(1,60\%)\equiv\operatorname{Bern}(60\%)$$</span> <span class="math-container">$$Y_{\mathrm{Harry}}\sim \operatorname{Bin}(1,20\%)\equiv\operatorname{Bern}(20\%)$$</span></p> <p>Note that <span class="math-container">$Y_{\mathrm{Tom}}+Y_{\mathrm{Dick}}+Y_{\mathrm{Harry}}$</span> is not binomially distributed because there's not a common probability for each person.</p>
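The binomial-vs-Bernoulli bookkeeping can be verified numerically. This sketch enumerates the joint outcomes for Tom and Dick (both with p = 60%) and checks that the distribution of their sum matches Bin(2, 0.6) term by term:

```python
from itertools import product
from math import comb

# Enumerate the joint outcomes for Tom and Dick, both Bernoulli(0.6),
# and tally the distribution of their sum.
p = 0.6
pmf = {k: 0.0 for k in range(3)}
for tom, dick in product([0, 1], repeat=2):
    prob = (p if tom else 1 - p) * (p if dick else 1 - p)
    pmf[tom + dick] += prob

# The tallied pmf matches Binomial(2, 0.6) at every support point.
for k in range(3):
    assert abs(pmf[k] - comb(2, k) * p**k * (1 - p)**(2 - k)) < 1e-12
print({k: round(v, 2) for k, v in pmf.items()})  # {0: 0.16, 1: 0.48, 2: 0.36}
```

Adding Harry's Bernoulli(0.2) outcome to the sum would break this correspondence, since the three probabilities are no longer equal.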
https://stats.stackexchange.com/questions/64603/distribution-in-logistic-regression
Question: <p>I am wondering what are the advantages/disadvantages of breaking down a logistic regression into multiple steps, when they are available.</p> <p>Let me explain what I mean by <em>multiple steps</em>: Think of it like the customer journey: A cold lead (<code>A</code>) becomes a prospect (<code>B</code>) who then becomes a customer (<code>C</code>).</p> <p><code>A -&gt; B -&gt; C</code></p> <p>I'm interested in predicting the conversion from <code>A</code> to <code>C</code>, which can be done with a logistic regression.</p> <p>I wonder if I could also do two logistic regressions, first from <code>A</code> to <code>B</code>, then from <code>B</code> to <code>C</code>, and multiply the predictions. </p> <p><strong>What are the differences between the two approaches?</strong></p> <p>Things to consider:</p> <ul> <li>What if the conversion rate from <code>A</code> to <code>B</code> is small? (Then the sample size for the 2nd model is small as well)</li> <li>Where does most of the signal come from? Maybe my explanatory variables explain most of <code>A</code> to <code>B</code> but nothing of <code>B</code> to <code>C</code>, or the other way around.</li> </ul> Answer: <p>That sounds like a sequential logit to me. You can compute a "total" effect of explanatory variables on the final outcome and decompose that into a weighted sum of the effect of that explanatory variable on each step/transition. See: <a href="http://dx.doi.org/10.1177/0049124115591014" rel="nofollow noreferrer">http://dx.doi.org/10.1177/0049124115591014</a></p>
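The "multiply the predictions" step from the question looks like this in a sketch. The linear predictors below are invented numbers standing in for the output of two separately fitted stage models, not real estimates:

```python
import math

def inv_logit(z):
    """Map a linear predictor to a probability."""
    return 1 / (1 + math.exp(-z))

# Invented linear predictors for one lead, standing in for the output of
# two separately fitted stage models (A->B and B->C).
eta_ab = 0.3
eta_bc = -0.8

p_ab = inv_logit(eta_ab)     # P(B | A)
p_bc = inv_logit(eta_bc)     # P(C | B)
p_ac = p_ab * p_bc           # P(C | A) under the sequential decomposition
print(round(p_ac, 3))  # 0.178
```

Note this decomposition assumes the second model is fitted only on leads that reached stage B, which is exactly why a low A-to-B conversion rate shrinks the sample for that stage.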
https://stats.stackexchange.com/questions/289309/advantages-of-breaking-down-a-logistic-regression-in-multiple-steps
Question: <p>I want to calculate the crude and adjusted odds ratios for exposure to occupational risk factors such as aluminum and fossil fuels in my case control study. My cases are 180 demented patients and I have 370 controls. Which type of logistic regression model should I use? When I adjust for age and education the odds ratios get bigger, isn't that wrong?</p> Answer: <p>The type of model you should use depends on the way the dependent variable (DV) is measured. It appears that your DV is dichotomous (demented/controls) which would indicate "regular" logistic regression.</p> <p>It is not necessarily wrong that the odds ratios (ORs) increase when you control for demographics. The change in the ORs when adding control variables depends on the relationships among the variables. They can go up, go down or stay almost the same.</p>
https://stats.stackexchange.com/questions/384808/conditional-logistic-regression-for-calculation-odds-ratios
Question: <p>Apologies for the rudimentary question. I'm taking on a project at work that's a bit out of my wheelhouse and I want to bounce my ideas off of those more experienced than myself.</p> <p>We use Salesforce.com at the software company where I work, and I want to identify which lead behaviors (whitepaper downloads, demo views, webinar attendances, etc.) are predictive of those leads turning into qualified sales opportunities. The idea is that we can use this data to create a model, on which we'll base a scoring model going forward. I've identified binary logistic regression, using stepwise selection, as the best choice, based on my research.</p> <p>Essentially, my thinking is that the dependent variable (opportunity status) is binary (Opportunity = 0, Not an Opportunity = 1), which would indicate that logistic regression would be the best approach. Also, I'm not sure which behaviors and data points will ultimately be predictive of the lead becoming an opportunity, so stepwise selection seems like a good approach. </p> <p>Can anyone think of a more appropriate analysis technique, or am I on the right track?</p> Answer: <p>If the outcome variable $Y$ is truly all-or-nothing, like falling off a cliff, then binary logistic model is likely to be appropriate. But stepwise variable selection is an invalid method.</p>
https://stats.stackexchange.com/questions/67094/is-binary-logistic-regression-the-right-choice
Question: <p>Is it acceptable to run a logistic regression on a yes/no DV and include a predictor variable that is a count of the number of times something happened previously, but none of the cases has a zero count? It seems to me you would be testing whether more than 1 event matters, but not whether the overall number matters compared to no events.</p> <p>Thanks.</p> <p>To give some context, it's a study to see how interaction with government officials affects future use of government services. If the participants were limited to those who had at least one interaction, could you then use a regression to identify the potential effect of each additional interaction? It would seem you would get a better design and more robust results if you had some "zero" participants too. Thoughts?</p> <p>Thanks for the thoughtful responses. It's not my study; I'm assessing someone else's. It just struck me as odd to talk about the effect of something without including the comparison to people who had no experience with X. It may not be the number of times something happened that creates the effect, but rather that it happened at all. You may detect an intensification of the effect with more of X, but without zero, do you know whether it would hold if there were no X? And, yes, there are many other issues, as it's apparent the number of previous X is definitely not random. Thanks again.</p> Answer: <p>This should be fine; I am not sure I understand why you think what you say in your last sentence. If no one has no events, then you can't say anything about people with no events, but that doesn't invalidate the rest of the model. </p>
https://stats.stackexchange.com/questions/70511/assessing-role-of-a-count-variable-in-regression-do-you-need-a-zero
Question: <p>This question was motivated, but is separate from, the question I posted here: <a href="https://stats.stackexchange.com/questions/94026/how-can-i-improve-the-predictive-power-of-this-logistic-regression-model">How can I improve the predictive power of this logistic regression model?</a>.</p> <p>In that case the 'cancer' outcome was occurring with ~92% probability. It was commented to me that "these variables don't discriminate your data very well. Since most people have cancer in this data set you can do just as well at predicting whether they have cancer by just saying they all have it." In this instance the predictor variables were poorly chosen and it may not have mattered much what proportion of people had cancer.</p> <p>Thinking more generally, at what point does the preponderance of one outcome become sufficiently great that logistic regression becomes a poor choice? Are there any rules of thumb to guide judgement in this area?</p> Answer: <p>There's an excellent answer to this exact question <a href="http://www.statisticalhorizons.com/logistic-regression-for-rare-events" rel="nofollow noreferrer">here</a>, based on King &amp; Zeng (2001) (<a href="http://gking.harvard.edu/files/gking/files/0s.pdf" rel="nofollow noreferrer">pdf</a>).</p> <p>The gist, from that article:</p> <blockquote> <p>The problem is that maximum likelihood estimation of the logistic model is well-known to suffer from small-sample bias. And the degree of bias is strongly dependent on the number of cases in the less frequent of the two categories. So even with a sample size of 100,000, if there are only 20 events in the sample, you may have substantial bias.</p> </blockquote>
https://stats.stackexchange.com/questions/94060/why-does-preponderance-of-a-single-outcome-render-binary-logistic-regression-ine
Question: <p>I have data where the dependent variable is discrete and lies between 20 and 40 (possible values are 20, 20.5, 21, 21.5, ..., 39, 39.5, 40). The variable measures some results from a game which can be between 20 (lowest achievable value) and 40 (highest). After some hours of research on the web, I could not find a regression model that ideally fits those characteristics of the described dependent variable. Given what I found, maybe a multinomial logistic regression fits best, although my dependent variable is not nominal. I would be very thankful if you could propose a regression model that, in your view, best fits my problem.</p> <p>Thanks a lot!!! :)</p> Answer: <p>As long as the range of achieved scores isn't too narrow, you might treat your variable as effectively continuous, but with bounds. The bounds will impact linearity (a relationship can't just blast through a bound, so it must have a curve or bend) and constant variance assumptions (as the mean approaches a bound more closely, the variance will tend to decrease).</p> <p>So you wouldn't use linear regression if there are scores that closely approach either of the bounds, but you might use nonlinear regression or GLMs of appropriate kinds, for example.</p> <p>I see two reasonable possibilities that might work for such data - beta regression and quasi-binomial GLMs. Either might be fitted using a logistic model.</p> <p>If you want to deal with it in a slightly more "correct" fashion, you might write a model where there's an underlying continuous variable, but where you only observe it in bins (and where the recorded value is the bin-center, say). You could then write a likelihood for the observed data and try to fit the model that way; this would still require accounting for the boundary issues with mean and variance I mentioned before. (Such an approach suggests considering something like an EM algorithm as a possibility.)</p>
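If you go the beta-regression route, the bounded score first has to be mapped into the open unit interval. A common way to do this (an assumption here; the answer doesn't prescribe a specific transform) is the Smithson &amp; Verkuilen squeeze, which keeps values off the exact 0/1 boundary:

```python
# Map a score bounded on [20, 40] onto (0, 1) for a beta-regression style
# analysis. The squeeze (y*(n-1) + 0.5)/n is the Smithson & Verkuilen
# adjustment; n would normally be the sample size (100 is a placeholder).
def to_unit_interval(score, lo=20.0, hi=40.0, n=100):
    y = (score - lo) / (hi - lo)          # now in [0, 1]
    return (y * (n - 1) + 0.5) / n        # now strictly inside (0, 1)

print(round(to_unit_interval(20), 4))  # 0.005
print(round(to_unit_interval(40), 4))  # 0.995
```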
https://stats.stackexchange.com/questions/97593/suitable-regression-model-for-limited-discrete-dependent-variable
Question: <p>My dataset contains samples of the following variables: <br></p> <ul> <li>$X_0$: the state of the system at time 0 (a continuous scalar)<br></li> <li>$X_1$: the state of the system at time 1 (a continuous scalar)<br></li> <li>$Y$: some binary variable describing the system at time 1 <br></li> <li>$\boldsymbol{C}$: a bunch of covariates</li> </ul> <p>I would like to investigate whether the system "auto-regulates" by investigating whether larger $X_0$ tends to lead to larger $Y$ which, in turn, leads to smaller $X_1$. If this is the case, then, through the variable $Y$, the state of the system tends to be regulated. The way I have been looking at it for now is through a logistic regression: $$ \text{logit } Y = \beta_0 + \beta_{x,0}X_0+\beta_{x,1}X_1+\boldsymbol{\beta_c}\boldsymbol{C} $$ I realized that whenever $\beta_{x,0}$ is positive and significantly different from zero, $\beta_{x,1}$ is negative and significantly different from zero (Wald test). Regardless of the details of the estimation (accounting for multi-collinearity, etc.), I am wondering whether the logistic regression is the right way to approach such a problem and, if not, what other types of model could I investigate.</p> Answer:
https://stats.stackexchange.com/questions/100191/logistic-regression-for-self-regulation
Question: <p>I'm interested in modeling the probability of successfully arriving at a spawning site for an individual $i$, given two impediments that are conditioned on one another. I know whether an individual made it past hurdle 1 ($y_{1,i}$) and hurdle 2 ($y_{2,i}$), and I have several other measurements that I want to use to model said probabilities. I'm interested in the effect of these variables on the two (assumed conditionally independent) hurdle probabilities. $$ Pr(y_{2,i}=1,y_{1,i}=1|X) = Pr(y_{2,i}=1|y_{1,i}=1,X)Pr(y_{1,i}=1|X) $$ Is it appropriate to use two separate, independent logistic regressions to model this system (assuming that the processes describing the two hurdles are different)?</p> Answer:
https://stats.stackexchange.com/questions/100916/modeling-conditionally-independent-observations-using-logistic-regressions
Question: <p>I am analysing a set of data where I try to predict an outcome (level of women's nutrition knowledge; whether it is High or Low) by using certain covariates (demographic characteristics of the sample). I have already done a Chi-square analysis and now I am progressing to binary logistic regression.</p> <p>To avoid an overly complicated presentation of the results through inclusion of a large number of non-significant variables, demographic factors found in the Chi-square test to be significantly associated (p&lt;0.05) with women's knowledge were entered into the logistic regression analysis. These factors are: Prior Pregnancies, Planned pregnancy, Education level, Household income, First language and Having a health and/or nutrition related qualification.</p> <ol> <li>Dependent variable = Total score of nutrition knowledge of pregnant women, which is coded as: a. Low: 0 b. High: 1</li> </ol> <p>2. Independent variables: <strong>a. Prior Pregnancies</strong> has 3 levels, coded as: Two and more: 0, None: 1, One: 2<br> <strong>b. Planned pregnancy</strong> has 2 levels, coded as: No: 0, Yes: 1<br> <strong>c. Education level</strong> has 4 levels, coded as: Some high school or less: 0, High school completed: 1, TAFE: 2, Tertiary education: 3<br> <strong>d. Household income</strong> has 3 levels, coded as: &lt;$25000/yr: 0, $25000-50000/yr: 1, &gt;$50000/yr: 2<br> <strong>e. First language</strong> has 2 levels, coded as: No: 0, Yes: 1<br> <strong>f. Having a health and/or nutrition related qualification</strong> has 2 levels, coded as: No: 0, Yes: 1</p> <p>(As seen above, the levels of the independent categorical variables were coded 0 for the lowest interest and 1 for the greatest interest in the case of a dichotomous variable, and so forth for the other variables.)</p> <p><strong>My Questions:</strong> 1- Which category, the first (the one with the lowest value) or the last, should I designate as the reference category? 
</p> <p>2- In the case of a variable like age group, which has 4 classes, the first class has the lowest value but the last class does not have the highest value; for example, the age groups are as follows: Under 20 yrs (has the lowest score in the nutrition knowledge), 20-29 yrs, 30-39 yrs (has the highest score in the nutrition knowledge) and 40 yrs and above (the score of knowledge decreased). How can I choose the category with the highest value to be the reference category when I run the binary logistic regression analysis, given that it is not the first or the last category?</p> <p>3- Regarding binary logistic regression, which method is better: enter, or one of the forward or backward elimination methods? What is the difference between them? On what basis should I choose the method?</p> Answer: <p>You have engaged in dichotomania. Categorizing age, education, knowledge, and other continuous or ordinal variables will result in a host of problems. What is the rawest form of your variables?</p> <p>Neither forwards selection nor backward elimination work as advertised, and you did not provide any motivation for the use of variable selection. It doesn't solve any problem for you and creates new problems such as meaningless $P$-values and confidence intervals. There is nothing wrong with having "insignificant" variables in a model.</p> <p>What do you mean by "progressing to binary logistic regression"? Did the $\chi^2$ analysis inform model specification? This would be even worse than forward variable selection.</p>
https://stats.stackexchange.com/questions/114151/in-regard-binary-logistic-regression-which-method-is-better-enter-or-one-of-th
Question: <p>Is the requirement of a monotonic sigmoidal relation between p and X'B in logistic regression equivalent to logit[p] having a linear relation with X'B? X is the vector of independent variables and B is a vector of estimates.</p> Answer:
https://stats.stackexchange.com/questions/127348/is-monotonic-sigmoidal-relation-between-p-and-xb-in-logistic-regression-equival
Question: <p>I am working on data for a logistic regression. I used the enter method to deal with the variables. Is that enough, or do I have to use forward and backward selection as well? Are there any references or reports supporting using the enter method alone?</p> Answer: <p>You certainly should <em>not</em> use forward and backward. Using the enter method alone is enough if you have strong hypotheses about which variables belong in the model. Some will say that you should drop variables that are not significant but I disagree. </p> <ol> <li><p>If you have a strong hypothesis that a variable should be related to the dependent variable, then finding that it is <em>not</em> is important. </p></li> <li><p>Effect sizes are what's interesting</p></li> <li><p>A variable may be important as a control variable</p></li> <li><p>By doing exactly what you set out to do (test one hypothesis) you remove some potential problems.</p></li> </ol>
https://stats.stackexchange.com/questions/129909/using-enter-method-to-deal-with-variables-in-logistic-regression
Question: <p>I am doing a logistic regression. My outcome is categorical (yes/no): pain after surgery. The predictors I wish to model include the type of anaesthesia, among others. The problem is that the types I wish to include are general anaesthetic, plain spinal anaesthetic, spinal anaesthetic with morphine and finally spinal anaesthetic with diamorphine. Now, the 3 spinal anaesthetic categories are not mutually exclusive, unlike general anaesthetic versus spinal anaesthetic alone. Is this appropriate for logistic regression? Or should I be using separate predictors, i.e. GA vs spinal, and then spinal vs spinal morphine vs spinal diamorphine? Any advice (in plain language please) would be gratefully received. </p> Answer: <p>One option is to make the categories:</p> <p>General anesthetic</p> <ol> <li>Spinal anesthetic with neither morphine nor diamorphine</li> <li>SA with morphine only</li> <li>SA with diamorphine only</li> <li>SA with both</li> </ol> <p>Then they are mutually exclusive. </p>
https://stats.stackexchange.com/questions/135301/logistic-regression-non-exclusive-predictors
Question: <p>I used the multinomial logistic regression to predict the percentages of students who voted 'acceptable', 'uncertain', and 'unacceptable' to natural ventilation use in three observed classrooms during cool and hot seasons. </p> <p>The sets of significant IVs of the cool and hot season cases were different; for example, the effect of window size (i.e. small, medium, large) on the acceptance of natural ventilation was significant for cool season case but not for hot season case.</p> <p>Question 1: how can I compare the percentages of 'acceptable' votes of the two seasons? Can I look at the differences in the predicted percentages between two seasons directly?</p> <p>Question 2: how can I compare the percentages of 'acceptable' votes of students in the rooms with different window size?</p> <p>Thank you. Please forgive me for my bad English as I am not a native English speaker.</p> Answer:
https://stats.stackexchange.com/questions/34990/can-we-simply-compare-the-predicted-percentages-of-the-outcome-between-studies
Question: <p>What I am looking to do is test for a correlation between an activity (in this case nesting) and cumulative rainfall from the previous two weeks. For example, say one individual nested on DayX where the total rainfall from the previous 2 weeks is 4cm and another individual nested on DayY with 8cm of prior rain, and there are days where it had rained with no nesting events. An example of the data is such:</p> <pre><code>Day  2wk_rain  number_nested  number_available
  1      2           0              9
  2      2           0              9
  3      4           0              9
  4      3           1              9
  5      6           1              9
  6      2           2              9
  :      :           :              :
 15      8           9              9
</code></pre> <p>Analyzing this with cumulative rainfall throughout the period seemed straightforward, but when I decided to look further at a more localized temporal scale it threw me for a loop. My main objective is to find out whether rainfall can trigger nesting behavior within a few days, hence scaling it down.</p> <p>I still think I need to stick with logistic regression due to the nature of the response variable, but how I go about analyzing it and presenting it in an understandable way seems to elude me. </p> Answer: <p>I think you want either A) a survival analysis with time-varying covariates. The dependent variable is then "time to nesting" and the covariate is "amount of rain" or B) a survival analysis where the dependent variable is "rainfall to nesting". Which one would depend on whether time also is of interest (I'm guessing it is, but it's your field). Cox proportional hazards would probably be a good choice of survival models. </p> <p>I would start, though, with some graphs. If you don't have a great many birds, you could graph each one's behavior. If you do have many (more than, say, about 30) then the standard plots from survival analysis would be good. </p> <p>What software are you using? </p>
https://stats.stackexchange.com/questions/45491/logistic-regression-for-abiotic-influences-on-behavior
Question: <p>I'm trying to find variables predicting a disease by first using logistic regression for each variable on the disease and then entering the significant variables into a multiple logistic regression model. However, one of the variables in the multivariate model is a clinical score, which contains some of the variables adjusted for (among others bmi&lt;18). My question is: does it make sense to have both in the multivariate model? And if yes, what is the interpretation of an insignificant clinical score in the multivariate model? Is it insignificant because I controlled for some of its variables?</p> Answer: <p>Two comments:</p> <p>First, I would explore other methods of variable selection. Looking at a set of unadjusted regressions and choosing the "significant" variables to then include in a final model is not an ideal approach. Your options are plentiful - search around here for topics on model selection to get you started. There are experts on this topic floating around, so perhaps some of them will add more here. </p> <p>Second, think about the interpretation of the regression coefficient in a multiple regression. An increase in your clinical score is associated with X change in Y, holding covariates constant. Theoretically, if your clinical score increases, and you hold some component of that score constant, then the estimated association must be due to the other aspects of the clinical scale. </p> <p>What to do really depends on what your goals are. Are you more interested in predicting things, or making any causal inferences? Can you discard the clinical scale entirely and focus on its components, or are you stuck using it?</p>
https://stats.stackexchange.com/questions/47314/variable-entered-in-logistic-regression-model-is-part-of-another-variable-entere
Question: <p>I got the equation for logistic regression, and I am comfortable with the result. Let's say the model for logit(p), i.e. ln(p/q) with q = 1-p, is something like<br> $$\text{logit}(p) = b+a_1X_1 + a_2X_2 + a_3X_3$$ For example --> <code>$b = 10 , a_1 = 0.5 , a_2 = 0.6 , a_3 =0.7$</code></p> <p>So my equation is <code>$\text{logit}(p) = 10 + 0.5X_1 + 0.6X_2 + 0.7X_3$</code></p> <p>Let's say I want my probability cut-off rate in the system to be 40% ($p = 0.4$). </p> <p>How can I use this in the live system? </p> <p>I'm thinking of giving the developer this logit equation to use in the live system, and blocking accounts that fall below the success-rate threshold? </p> Answer: <p>Here is how I would approach it:</p> <p>Do the logistic regression in R, make the data into an R object. The estimated regression is then an object, and prediction can be done using it via the <code>predict()</code> function. Put this data object and regression model object into an R package. Then the developer programming the live system need only load this package!</p>
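Independent of the R-package route in the answer, the cut-off itself is simple arithmetic that any live system can implement. A hedged sketch in Python, using the coefficient values from the question (note that a probability cut-off of 0.4 corresponds to a single fixed cut-off on the logit scale):

```python
import math

def invlogit(z):
    """Inverse logit: maps log-odds back to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Coefficients from the question (illustrative values).
b, a1, a2, a3 = 10.0, 0.5, 0.6, 0.7

def predicted_probability(x1, x2, x3):
    return invlogit(b + a1 * x1 + a2 * x2 + a3 * x3)

# A p = 0.4 cut-off on the probability scale is equivalent to one fixed
# cut-off on the logit scale, which is cheaper to evaluate in production:
cutoff_logit = math.log(0.4 / (1 - 0.4))  # log-odds of 0.4, about -0.405

def block_account(x1, x2, x3):
    score = b + a1 * x1 + a2 * x2 + a3 * x3
    return score < cutoff_logit  # block if below the threshold

print(round(predicted_probability(0, 0, 0), 5))  # 0.99995
```

Comparing the raw linear score against `cutoff_logit` avoids computing the exponential on every request.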
https://stats.stackexchange.com/questions/48054/how-can-i-implement-logistic-regression-in-live-decision-system
Question: <p>In market research I'm building a logistic regression model to estimate the likelihood that clients may change banks. The proportion of events is roughly 10% in my sample. From university I remember that a proportion of events that is too small introduces bias into the estimate. Or is it the standard error that gets biased? (Question 1)</p> <p>As a rule of thumb, what is an appropriate proportion of events? (Question 2) Please give a straight and rough answer, rather than a technical explanation supported by various references.</p> Answer: <p>The proportion of events is not the issue. It's the overall sample size and the number of explanatory variables. A smaller proportion requires a larger sample size. And more parameters (explanatory variables) means that you need a bigger sample size. When comparing two proportions, you would want each sample to be such that np > 10, for each sample (as a rule of thumb). This lets you have confidence in the normal approximation that you will be using for the test.</p> <p>Logistic regression estimates binomial probabilities, so it is like the simple case of estimating a proportion - only more so, because you have more parameters and messier test statistics. </p> <p>A general rule of thumb is 30 observations per parameter when estimating a complex model. I would up that to 100 observations per parameter in your case, using the np>10 rule. Note, that that would be a bare minimum.</p> <p>RE: bias. Your software is going to do a maximum likelihood estimate of the parameters, which will work regardless of sample size. The estimates will be biased, but plausible. The problem will be the significance tests. They are based on a chi-squared approximation to the likelihood (-2 Log Lik, but you don't want technicalities), and convergence is slow with the binomial. That means that your estimates may be trustworthy but your ability to assess their significance will not, until you have a healthy sample size. 
(Technical point: it's not the bias of your estimates that will be the problem but the significance tests).</p> <p>But my guess is that you should have plenty of data if you are using customer records from a bank. How hard would it be to get a few thousand records? </p> <p>If you have access to a lot of data, you can use part of it to fit the model and part of it to test the model. If you like what you see, you don't need to worry about the convergence of the test statistics - since you won't be using them to evaluate the fit of your model. </p>
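The rules of thumb above (np > 10 per parameter, i.e. roughly 100 observations per parameter at a 10% event rate) can be turned into a quick back-of-envelope check. A sketch in Python; this is illustrative, not a formal power analysis:

```python
import math

# Back-of-envelope sample-size check for logistic regression, based on
# the rules of thumb in the answer above. Purely illustrative.
def min_sample_size(event_proportion, n_parameters, events_per_parameter=10):
    """Smallest n whose expected event count covers the rule of thumb."""
    needed_events = events_per_parameter * n_parameters
    # round() guards against floating-point drift before taking the ceiling
    return math.ceil(round(needed_events / event_proportion, 9))

# 10% event rate, 5 explanatory variables -> 50 events needed
# -> 500 observations, i.e. 100 observations per parameter.
print(min_sample_size(0.10, 5))  # 500
```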
https://stats.stackexchange.com/questions/49931/what-is-an-acceptable-proportion-of-events-in-logistic-regression
Question: <p>I am not really sure how batch gradient descent behaves in logistic regression.</p> <p>As we iterate, $L(W)$ gets bigger and bigger; it may jump across the maximum point so that $L(W)$ starts going down. How do I know this has happened without computing $L(W)$, knowing only the old $w$ vector and the updated $w$ vector?</p> <p>If I use regularized logistic regression, will the weights become smaller and smaller, or follow some other pattern?</p> Answer: <p>Regularization is designed to combat overfitting, but not aid in gradient descent convergence.</p> <p>Suppose you are minimizing a function $J$ parameterized by a vector $\theta$, where each element of $\theta$ is identified by $\theta_j$ (i.e. you minimize $J(\theta)$).</p> <p>Then the basic idea in batch gradient descent is to iterate until convergence by computing a new value of $\theta$ from the previous one in the following way. Update each $\theta_j$ simultaneously with the formula</p> <p>$\theta_j := \theta_j - \alpha\frac{\partial }{\partial \theta_j}J(\theta)$</p> <p>That $\alpha$ term is called the learning rate. It's arbitrary, and if it's very small then the algorithm will converge slowly, which will make the algorithm take a long time; but if it's too large, then what can happen is exactly what you are experiencing. In this case $\theta$ will be updated in the right direction, but will go too far and jump past the minimum, or it can even climb out and increase.</p> <p>The remedy is to simply decrease $\alpha$ until it doesn't happen. A sufficiently small learning rate guarantees that $J(\theta)$ will decrease on every iteration. The trick is to determine what value of $\alpha$ is a good one that allows fast convergence but avoids non-convergence. </p> <p>A useful approach is to plot $J(\theta)$ while the algorithm is running to observe how it decreases. Start with a small value (e.g.
0.01), increase it if convergence appears slow, or decrease it still further in the case of non-convergence. </p>
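To make the update rule and the role of α concrete, here is a minimal batch gradient descent for logistic regression on made-up data (a Python sketch; with a sufficiently small α the loss decreases on every iteration, as described above):

```python
import math

# Minimal batch gradient descent for logistic regression (illustrative).
# Minimizes the negative log-likelihood J(theta); a small enough learning
# rate alpha makes J decrease on every iteration.
X = [(1.0, 0.5), (1.0, 2.0), (1.0, -1.0), (1.0, 3.0)]  # first column = intercept
y = [0, 1, 0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(theta):
    """Negative log-likelihood J(theta) over the whole batch."""
    J = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(sum(t * x for t, x in zip(theta, xi)))
        J -= yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return J

def gradient_step(theta, alpha):
    """One simultaneous update of every theta_j."""
    grad = [0.0] * len(theta)
    for xi, yi in zip(X, y):
        err = sigmoid(sum(t * x for t, x in zip(theta, xi))) - yi
        for j, xij in enumerate(xi):
            grad[j] += err * xij
    return [t - alpha * g for t, g in zip(theta, grad)]

theta, alpha = [0.0, 0.0], 0.1
losses = [loss(theta)]
for _ in range(200):
    theta = gradient_step(theta, alpha)
    losses.append(loss(theta))
# With this alpha, the recorded losses decrease monotonically; a much
# larger alpha would make them oscillate or grow, as in the question.
```

Recording `losses` is exactly the "plot J(θ) while the algorithm is running" diagnostic from the answer.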
https://stats.stackexchange.com/questions/55992/convergence-of-batch-gradient-descent-in-logistic-regression
Question: <p>Forgive me for a potential dupe, as I don't know the correct terminology for searching for an existing question. Also please add tag "trends" or similar, as I don't have the reputation to create new tags.</p> <p>I have market data like so:</p> <pre><code>X   Y   S
10  20  0
20  30  1
20  25  0
15  10  0
...
</code></pre> <p>Where X and Y are certain variables used to calculate a quote to a customer, and S is whether the customer took the offer (0 = no, 1 = yes).</p> <p>I would now like to calculate some kind of a 2-dimensional trend for X, Y and S, and to produce a function f(X, Y) = s, where s is the probability (0..1) of the customer accepting the offer for given X and Y. We can assume the "trend" is a plane, and not some funky 3D surface.</p> <p>So:</p> <ol> <li><p>How do I determine f and</p></li> <li><p>if it's not something easily done in Excel, what is the proper terminology to look for when searching for a programming language library for this purpose?</p></li> </ol> Answer: <p>The standard approach would be to form a logistic regression expression. The log-odds of S=1 is modeled as a regression function of X and Y. Since Excel is pretty much never the right answer for anything, you should pick different modeling software, R being a free, complete, and accurate alternative to Excel.</p> <pre><code>reg.mdl &lt;- glm( S ~ X + Y, data=dfrm, family="binomial")
</code></pre> <p>(It appears that Excel errors are at the bottom of a recent academic controversy: <a href="http://www.nextnewdeal.net/rortybomb/researchers-finally-replicated-reinhart-rogoff-and-there-are-serious-problems" rel="nofollow">http://www.nextnewdeal.net/rortybomb/researchers-finally-replicated-reinhart-rogoff-and-there-are-serious-problems</a> )</p>
https://stats.stackexchange.com/questions/56325/calculating-trend-for-3-dimensional-data
Question: <p>If I have 10 variables (Q,W,E,R,T,Y,U,I,P,A) and I want Q to be my response variable and the other 9 to be my predictor variables, do I write it in R like this? </p> <p><code>EXAMPLE&lt;-glm(Q~W+E+R+T+Y+U+I+P+A,family=binomial)</code></p> <p>Furthermore, what if Q is binary (goes from 1 to 0) and all the other 9 variables are categorically numbered? Is it still the same or do I need to write it differently in R?</p> Answer: <p>If those are the only variables in the data frame (I presume you have the ten variables in a data frame? If not, do it!), and that data frame is named <code>foo</code>, then the following is a simpler way to specify the model:</p> <pre><code>mod &lt;- glm(Q ~ ., data = foo, family = binomial)
</code></pre> <p>The <code>.</code> means <em>all variables not already specified in the model</em>.</p> <p><code>?glm</code> tells us what is acceptable for the response. This can be a numeric variable <code>0</code>, <code>1</code>, a factor with two or more levels (the first level is failure or <code>0</code>, the other levels are success or <code>1</code>), or a two column matrix with the first column being the successes and the second the failures.</p> <p>If the predictor variables are numeric but should be factors, you should convert them first to factors. First look at the output from</p> <pre><code>str(foo)
</code></pre> <p>and check whether the data types for the covariates are factor or not. Here is a worked example to follow using some dummy data in <code>foo</code>: </p> <pre><code>foo &lt;- data.frame(A = sample(10, 5), B = sample(10, 5), C = sample(10, 5))
</code></pre> <p><code>A</code> is the response, <code>B</code> and <code>C</code> are the covariates that should be factors. The conversion can be done as follows</p> <pre><code>foo &lt;- data.frame(A = sample(10, 5), B = sample(10, 5), C = sample(10, 5))
want &lt;- names(foo) != "A"
foo[want] &lt;- lapply(foo[, want], as.factor)

&gt; str(foo)
'data.frame':   5 obs. of  3 variables:
 $ A: int  4 1 5 3 10
 $ B: Factor w/ 5 levels "2","5","6","7",..: 1 3 5 2 4
 $ C: Factor w/ 5 levels "1","2","6","8",..: 3 2 1 5 4
</code></pre>
https://stats.stackexchange.com/questions/56559/fit-a-logistic-regression-code-in-r
Question: <p>I ran annual logistic regressions on time-series data. The most important independent variable has coefficients that are significant in a lot of years; that's a relief. But the "controlling variables" have non-significant coefficients. I'm far from an expert in stats.</p> <p>My sample is very small compared to the literature that made that test, because I'm analysing a sub-industry that has very few companies in it. But I can't change my sample.</p> <p>In the literature, the authors that made this precise analysis on bigger samples found significant coefficients for controlling variables too, a fact they use to state that: "the effect of the main variable is therefore a separate effect from those of the controlling variables".</p> <p>As I can't say that, should I test the correlation between these variables and the main one, and eliminate some of them that present a very high correlation with it every year, stating that these are redundant variables? If so, which level is considered high?
0.75, 0.8?</p> <p>I'm planning to say:</p> <ul> <li>Main variable is significant, but not the controlling variables</li> <li>There are two potential reasons: 1) My yearly samples are too small 2) The effect of the main variable is the same as the effect of some of these controlling variables</li> <li>To try to rule out 2), I tested correlation and found that bla bla bla (if low correlations, probably the samples are just too small; if high, I eliminate some variables and re-run the regressions)</li> </ul> <hr> <p>Following Peter's answer (thank you!), I think I should add some details about what I'm testing in my study:</p> <p><strong><em>Is the propensity to distribute dividends (1 or 0) dependent on the firm's life-cycle stage (using a ratio)?</em></strong></p> <p>Controlling for profitability, asset growth rate, size (market cap.), and lagged dividends (1 or 0).</p> <p>In every year, the coefficient for lagged dividend is highly significant. And for some of the years (sufficiently for my student thesis), the coefficient of the firm's life-cycle stage is significant. The signs of all the other controlling variables are as predicted (most of the time), but always far from significant. I'm hoping that doesn't make my model irrelevant. My pseudo-R-squared values are pretty high. (By the way, does this pseudo-R-squared take into account the variables that are non-significant, or is it the value of the model including only variables that have a significant coefficient?)</p> Answer: <p>There are reasons to include control variables even if they are not significant. E.g.</p> <p>1) Including them may affect the parameter on the main independent variable (to my mind, this is the true meaning of a "control" variable).</p> <p>2) Finding a small effect may be important, if others have found a large one. The idea that the null hypothesis always has to be "something" = 0 is not right.
Maybe you want to test if the covariates are the same as in some other work?</p> <p>3) They may be so standard in the field that not including them would lead to lots of skepticism about the model. </p>
https://stats.stackexchange.com/questions/59107/logistic-regression-controlling-variables-not-significant-what-should-i-conclu
Question: <p>So I have a large dataset, and I was wondering what the best way to conduct statistical analysis of it is. I'm very green in terms of statistical methods, but I learn quickly. Basically, each item has a couple attributes, and each attribute has several possibilities. Each item has their specific attribute set-up in terms of booleans for each possibility of each attribute (i.e. 1 means it has that possible version of that attribute, 0 means it doesn't). For some of the attributes, each item can only have one possible value, and for some it can have multiple. I'm using R through RStudio to conduct my analysis. Any help would be appreciated!</p> <p>Basically, I have a bunch of error messages generated by different customers. Each error has different attributes, which include server downtime, country of origin, the type of change, which specific item(s) caused the problem, and a couple others. I'm mostly trying to look for which applications/types of changes prompt downtime.</p> Answer: <p>If you are trying to predict downtime from attributes of messages, it sounds like you want some form of regression.</p> <p>If downtime is a binary variable (yes/no) then you probably want logistic regression.</p> <p>If downtime is continuous (e.g. in minutes or seconds) you probably want "regular" (ordinary least squares) regression.</p> <p>If downtime is in some other format, please tell us.</p> <p>Both these forms of regression (and others) are available in the <code>R</code> function <code>glm</code>. </p>
https://stats.stackexchange.com/questions/63006/statistical-analysis-on-mostly-boolean-values
Question: <p>Let $T(y_i,\pmb x_i)$ be a regression estimator (of the scalar $y_i$ unto the $p$-vector $\pmb x_i$). When $T$ is the usual LS estimator and $\nu\in\mathbb{R}^p$, we have that:</p> <p>$$T(y_i+\pmb x_i'\pmb\nu,\pmb x_i)=T(y_i,\pmb x_i)+\pmb\nu$$</p> <p>This property is called regression equivariance and plays much the same role, in the linear regression context, as translation equivariance does in the context of multivariate estimators.</p> <p>I was wondering whether there is a similar property (equivariance to a form of translation) for logistic regression.</p> Answer: <blockquote> <p>In linear regression there exist two other types of equivariance: one about adding a linear function to the response (‘regression equivariance’) and one about multiplying the response by a constant factor (‘y-scale equivariance’), but these obviously do not apply to logistic regression.</p> </blockquote> <p>Page 6 of the following document:</p> <p><a href="http://www.stoch.uni-bayreuth.de/de/CHRISTMANN/Christmann_files/ChristmannRousseeuw_wemel.pdf" rel="nofollow">http://www.stoch.uni-bayreuth.de/de/CHRISTMANN/Christmann_files/ChristmannRousseeuw_wemel.pdf</a></p> <p>The paper studies a different type of equivariance (page 6 as well):</p> <blockquote> <p>A property shared by all logistic regression estimators is $x$-affine equivariance.</p> </blockquote>
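The regression-equivariance identity for the least-squares estimator is easy to verify numerically. A sketch in Python with arbitrary data (this illustrates the property stated in the question, not the content of the cited paper):

```python
import numpy as np

# Numeric check of regression equivariance for the least-squares
# estimator: T(y + X @ nu, X) = T(y, X) + nu. The data are arbitrary.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
nu = np.array([1.0, -2.0, 0.5])

def ols(y, X):
    # Least-squares coefficients via the normal equations.
    return np.linalg.solve(X.T @ X, X.T @ y)

lhs = ols(y + X @ nu, X)   # estimator applied to the shifted response
rhs = ols(y, X) + nu       # original estimate, shifted by nu
print(np.allclose(lhs, rhs))  # True: the LS estimator is regression equivariant
```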
https://stats.stackexchange.com/questions/63195/counterpart-to-regression-equivariance-in-logistic-regression
Question: <p>Suppose we have a dataset with various variables <span class="math-container">$\{X_1, X_2, ...\}$</span> with unknown distributions, and a binary response variable <span class="math-container">$K$</span> that is a direct function of one of the <span class="math-container">$X_i$</span>'s. Let us say an indicator function. Suppose further that we want to estimate the probability that <span class="math-container">$K = 1$</span> and that we want the following two constraints to be fulfilled:</p> <ol> <li>The sum of <span class="math-container">$P(K = 1)$</span> for the entire dataset equals the number of elements with <span class="math-container">$K = 1$</span>;</li> <li>The sum of <span class="math-container">$1/P(K = 1)$</span> for the subset with <span class="math-container">$K = 1$</span> equals the size of the dataset.</li> </ol> <p>1 is fulfilled automatically by logistic regression, so that seems like a good candidate. But what about 2? Is there anything one can introduce to a logistic regression routine that would make it fulfill 2 as well? Or must one look elsewhere for appropriate methods?</p> Answer:
https://stats.stackexchange.com/questions/543993/estimate-probabilities-of-independent-events-given-constraints
Question: <p>I am trying to train a Gradient Boosting model on a '%-target variable', i.e. one having values in the interval [0,1]. The bad thing about this particular case is that the target variable is very narrowly distributed around the value 0.99. It is not constant, there are different values, it is just that they all lie very close to 0.99. Running a usual Gradient Boosting gave me a constant model, i.e. all the trees were degenerate, reduced to one single node predicting just one number.</p> <p><strong>Question: How to force Gradient Boosting (or, more generally, any regression model) to become non-degenerate when the target variable values lie very very close to each other?</strong></p> <p>Conceptually the model is right (it has a small 'absolute' error) so this is what I tried/thought about so far:</p> <ol> <li>Apply a preprocessing step that 'pulls apart' different values of the target function</li> <li>Just multiply the cost function by a constant</li> </ol> <p><strong>On 1.:</strong></p> <p>Surprisingly, 1. did not really work. I tried to linearly scale the target variable so that the minimum of all the values (0.989) becomes 0 and the maximal value (0.999) becomes 1. I also tried a 'smoother' version of that where I rescaled all the boundaries of different quantiles, etc. However, I cannot see any obvious mistake in the code, but the model does not perform well. Even worse: when I test this on another target variable where there already exists a working model, the performance totally drops when I apply this preprocessing step (and of course, I rescale the prediction in the end :-)) while without this step, the model performance is fine.</p> <p><strong>Question (2): Aren't trees supposed to work with splits and so on?
It should not make much of a difference in a regression task whether I use the original target variable or a scaled version of it, right?</strong></p> <p>Maybe it is due to the fact that we use such a weird loss function (cross entropy) for logistic regression?</p> <p><strong>On 2.:</strong></p> <p><strong>Question (3): Does anybody know how to easily scale the cost function in xgboost?</strong></p> <p>Is that even a valid approach? Doesn't this only mean that the gradients will grow with that constant?</p> Answer: <p>In case other people may be interested: I think I figured out how to deal with such a situation.</p> <p><strong>The way it worked for me:</strong> Just use a translation (i.e. add a constant value <span class="math-container">$c=(0.5-\text{mean of target variable})$</span> to the target column) instead of a rescaling. That step effectively moves the mean of the data to 0.5. The reason why that works is that the squashing function (the sigmoid function) has the highest slope at that point, i.e. predictions with very little differences will cause big changes in the loss function. That is actually what I wanted to do with the scaling: I wanted the model to focus more on 'small differences'.</p> <p><strong>Why does scaling hurt so much?</strong> Still no answer... I am unable to explain...</p> <p><strong>Even if scaling did not hurt so much, why didn't it work?</strong> No explanation still... I guess that the scaling interferes too much with the sigmoidal function somewhat...</p> <p>Hope this helps and please leave a comment or an answer I can accept if you can answer the other questions...</p>
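The "highest slope at 0.5" argument in the answer can be made concrete: the sigmoid's derivative is σ'(z) = σ(z)(1 − σ(z)), which peaks at z = 0, i.e. where the predicted value is 0.5. A small Python sketch:

```python
import math

# Why centering the target near 0.5 can help (per the answer above): the
# sigmoid's derivative sigma'(z) = sigma(z) * (1 - sigma(z)) is largest
# at z = 0, i.e. where the predicted value is 0.5, so small differences
# in the raw score produce the largest changes in the prediction there.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_slope(z):
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid_slope(0.0))  # 0.25, the maximum (prediction = 0.5)
print(sigmoid_slope(4.6))  # tiny slope out near a prediction of ~0.99
```

Near a prediction of 0.99 the slope is roughly 40 times smaller than at 0.5, which is consistent with the model being insensitive to small target differences there.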
https://stats.stackexchange.com/questions/425541/regression-with-small-target-variable-interval
Question: <p>I am currently self-studying statistics and I'm confused about the null model in binary logistic regression. I understand that the null model is used to be compared with the model you designed, but what exactly is the null model? Just ln(x)=y?</p> Answer: <p>The full model is $$\ln \frac {\pi}{1-\pi}=\beta_0 +\beta_1 x_1 +\beta_2 x_2+\ldots$$ where $x_i$ is the $i$<sup>th</sup> predictor, $\beta_i$ its coefficient, &amp; $$\pi=\Pr(Y=1)$$ where $Y$ is the response (coded 1 for "success" &amp; 0 for "failure")</p> <p>The null model, as @Michael says, contains just the intercept: $$\ln \frac {\pi}{1-\pi}=\beta_0$$ So the intercept is the log-odds of "success", estimated without reference to any predictors.</p>
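As a concrete illustration of the answer: the intercept-only (null) model's fitted value has a closed form, the log-odds of the overall success proportion. A Python sketch with made-up counts:

```python
import math

# For the intercept-only (null) logistic model, the fitted beta_0 is
# just the log-odds of the observed success proportion -- no predictors
# involved. Toy data: 30 successes out of 100 trials (made-up numbers).
successes, n = 30, 100
p_hat = successes / n
beta0_hat = math.log(p_hat / (1 - p_hat))  # fitted intercept (log-odds)
print(round(beta0_hat, 4))

# Applying the inverse logit to the intercept recovers the raw proportion:
recovered = 1.0 / (1.0 + math.exp(-beta0_hat))
print(round(recovered, 4))  # 0.3
```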
https://stats.stackexchange.com/questions/82940/is-the-null-model-for-binary-logistic-regression-just-the-natural-log-function
Question: <p>I understand that binary logistic regression is applied to binary classification problems where the dependent variable <span class="math-container">$Y$</span> has only two possible outcomes. The independent variables are <span class="math-container">$x$</span>. The result of logistic regression is assigning a probability <span class="math-container">$p$</span> to one of the two outcomes and a probability <span class="math-container">$1-p$</span> to the other possible outcome.</p> <p>I am confused about how the linear combination of the independent variables <span class="math-container">$w_1 x_1 +w_2 x_2 +w_3 x_3 $</span>, <span class="math-container">$\log \frac {p} {1-p}$</span>, the probability <span class="math-container">$p$</span>, and the logistic function <span class="math-container">$\frac {1}{1+e^{-x}}$</span> are connected to each other.</p> <p>Can someone help me logically understand how these concepts go together so I can finally appreciate how logistic regression works?</p> <p>Thank you!</p> Answer: <p>This is the logistic regression model, where the log-odds are posited to change as a linear function of some predictors.</p> <p><span class="math-container">$$ \log\bigg( \dfrac{p}{1-p} \bigg) = X\beta $$</span></p> <p><span class="math-container">$X\beta$</span> is the linear combination. You denote it as <span class="math-container">$w_1 x_1 +w_2 x_2 +w_3 x_3 $</span>.
A more traditional way to write it would use <span class="math-container">$\beta$</span> as the symbol for coefficients and would involve an intercept, so more like: <span class="math-container">$$X\beta = \beta_0 +\beta_1 x_1 + \beta_2x_2+\beta_3x_3$$</span></p> <p>In order to solve for <span class="math-container">$p$</span>, we must do some algebra.</p> <p><span class="math-container">$$ \log\bigg( \dfrac{p}{1-p} \bigg) = X\beta\implies\\ \dfrac{p}{1-p} = \exp(X\beta)\implies\\ p = (1 - p) \exp(X\beta)\implies\\ p = \exp(X\beta) - p \exp(X\beta)\implies\\ p+p\exp(X\beta) = \exp(X\beta)\implies\\ p(1 + \exp(X\beta)) = \exp(X\beta)\implies\\ p = \dfrac{\exp(X\beta)}{1 + \exp(X\beta)}\implies\\ p = \bigg( \dfrac{1 + \exp(X\beta)}{\exp(X\beta)} \bigg)^{-1}\implies\\ p =\bigg( \dfrac{1}{\exp(X\beta)} + 1 \bigg)^{-1}\implies\\ p =\bigg( \exp(-X\beta) + 1 \bigg)^{-1}\implies\\ p = \dfrac{1}{1 + \exp(-X\beta)} $$</span></p>
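The algebra above can be checked numerically: applying p = 1/(1 + exp(−Xβ)) to the log-odds recovers p exactly. A Python sketch with made-up coefficients:

```python
import math

# Numeric round trip for the algebra above: the inverse logit
# p = 1 / (1 + exp(-z)) undoes the logit z = log(p / (1 - p)).
def logit(p):
    return math.log(p / (1 - p))

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

for p in (0.1, 0.25, 0.5, 0.9):
    assert abs(inv_logit(logit(p)) - p) < 1e-12

# With (illustrative, made-up) coefficients beta and inputs x, the
# predicted probability comes from the linear predictor X @ beta:
beta = (-1.0, 0.8, 0.3)  # intercept, beta_1, beta_2
x = (1.0, 2.0, -1.0)     # 1 for the intercept, then x_1, x_2
z = sum(b * xi for b, xi in zip(beta, x))  # -1 + 1.6 - 0.3 = 0.3
print(round(inv_logit(z), 4))  # 0.5744
```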
https://stats.stackexchange.com/questions/556135/understanding-binary-logistic-regression-as-a-linear-model
Question: <p>I am reading through the book <em>Practical Statistics for Data Scientists</em> and I am on a section covering logistic regression. In this section the book covers how the coefficients to the logistic regression function are on the log-odds scale. As an example, there is some R output that specifies (among others) a coefficient called payment-to-income-ratio that is 0.07974.</p> <p>The author gives an example regarding a change in X and what the means that I cannot follow. It says:<br /> <code>For example, the effect of increasing the payment-to-income ratio from, say, 5 to 6 increases the odds of the loan defaulting by factor of exp(0.08244) ~ 1.09</code></p> <p>Where did .08244 come from? Why is it not <em>exp(0.07974)</em> since that is the coefficient value and the increase is by 1 unit? I am sure I am missing something very obvious. . .</p> Answer: <p>I just looked at two editions of the book on line (Chapter 5, &quot;Classification&quot;; section &quot;Logistic Regression,&quot; subsection &quot;Logistic Regression and the GLM&quot;). There is a discrepancy between the <a href="https://www.oreilly.com/library/view/practical-statistics-for/9781491952955/" rel="nofollow noreferrer">first</a> and <a href="https://www.oreilly.com/library/view/practical-statistics-for/9781492072935/" rel="nofollow noreferrer">second</a> editions that leads to questions about quality control in publication.</p> <p>The <code>logistic_model</code> in question in the first edition apparently wasn't coded according to the claim (in both editions):</p> <blockquote> <p>The response is outcome, which takes a 0 if the loan is paid off and 1 if the loan defaults,</p> </blockquote> <p>as in the first edition things that clearly should have been associated with lower odds of default like <code>borrower_score</code> had positive coefficients, erroneously implying greater odds of default. 
In the first edition, even the direction stated in the explanatory text thus differed from that indicated by the model coefficient.</p> <p>That pervasive problem was fixed in the second edition (evidently the version referenced by the OP), with all coefficients having signs reversed from the first-edition values. I suspect that the regression coefficient reported for <code>payment_inc_ratio</code> in the second edition, 0.07974, is correct and that the statement in the quote included in the OP is just another error from the first edition that wasn't caught in the second edition.</p> <p>I suppose I could be missing something, but I don't see what. You might correspond with the authors.</p>
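For reference, the arithmetic behind the disputed sentence is just exponentiating the coefficient: a one-unit increase in the predictor multiplies the odds by exp(coefficient). A Python sketch using the two values quoted in the question:

```python
import math

# The arithmetic behind the interpretation: for a one-unit increase in a
# predictor, the odds multiply by exp(coefficient). The two values below
# are taken from the question (book output vs. book text).
coef_reported = 0.07974  # coefficient printed in the book's R output
coef_in_text = 0.08244   # value used in the book's explanatory sentence

print(round(math.exp(coef_reported), 4))  # 1.083  (odds ratio from the output)
print(round(math.exp(coef_in_text), 4))   # 1.0859 (odds ratio from the text, ~1.09)
```

The two odds ratios are close but not equal, which is consistent with the answer's conclusion that the sentence carries a stale number rather than a different calculation.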
https://stats.stackexchange.com/questions/559610/help-interpreting-coefficients-to-logistic-regression
Question: <p>I'm interpreting the coefficients of a regression with all categorical variables, and all but one make sense, in that I was expecting the association/direction from my descriptive statistics. I know that the regression controls for other variables, but one coefficient makes no sense; in other words, it is the opposite of what I was expecting. It's social science, so the model itself is weak (it can't actually predict using just these variables), but I'm interpreting the coefficients anyway as per my supervisor's advice. I was expecting multicollinearity, but the correlations are not showing anything out of the ordinary. My sample size is enormous (25,000). Any advice greatly appreciated. </p> Answer:
https://stats.stackexchange.com/questions/275124/unexpected-logistic-regression-coefficients-opposite-to-chi-square-cross-tabs
Question: <p>I have a dataset with more than 15 independent variables trying to explain a binary outcome. The results seemed dubious and the confidence interval profiling failed by providing lower bounds of the confint larger than the upper bounds; I found the variable creating this mess, which is a four-category variable. Furthermore, I understand that it is stupid to use var==2 as a reference category since there are no "Yes" outcomes for that value, hence giving trouble for all the other values. I could relevel and just have problems with that level giving a huge SE, or maybe I should collapse levels two and three. However, I would prefer not to, given the actual interpretation of the variable. Any ideas how I can sneak around this?</p> <pre><code>&gt; with(w, table(outcome, var))
outcome    2    3    4    5 &lt;NA&gt;  Sum
  No      35  226  281  463    0 1005
  Yes      0   18   36  268    0  322
  &lt;NA&gt;     0    0    0    0    0    0
  Sum     35  244  317  731    0 1327

&gt; glm(outcome ~ var, family = binomial, data = w) -&gt; l
&gt; summary(l)

Call:
glm(formula = outcome ~ var, family = binomial, data = w)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-0.95571  -0.95571  -0.49101  -0.00036   2.28333  

Coefficients:
            Estimate Std. Error z value Pr(&gt;|z|)
(Intercept)   -16.57     405.60  -0.041    0.967
var3           14.04     405.60   0.035    0.972
var4           14.51     405.60   0.036    0.971
var5           16.02     405.60   0.039    0.968

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1470.6  on 1326  degrees of freedom
Residual deviance: 1313.6  on 1323  degrees of freedom
AIC: 1321.6

Number of Fisher Scoring iterations: 15

&gt; confint(l)
Waiting for profiling to be done...
                  2.5 %     97.5 %
(Intercept) -140.643404   2.508492
var3         306.741916 188.855827  ###
var4          -4.058638 141.355062
var5          -3.034361 140.211069
There were 40 warnings (use warnings() to see them)
</code></pre> Answer:
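The table in the question already shows why the fit misbehaves: the var == 2 column has 35 "No" and 0 "Yes", so the per-category log-odds is log(0/35), which is undefined (quasi-complete separation), and maximum likelihood pushes the intercept toward minus infinity with huge standard errors. A small Python illustration using the counts from the question:

```python
import math

# Per-category log-odds from the cross-tabulation in the question.
# With zero "Yes" outcomes in category 2, the log-odds is log(0/35),
# i.e. -infinity: the MLE for that category's dummy does not exist,
# which is why the intercept and SEs blow up (quasi-complete separation).
counts = {2: (0, 35), 3: (18, 226), 4: (36, 281), 5: (268, 463)}  # (yes, no)

for level, (yes, no) in counts.items():
    if yes == 0 or no == 0:
        print(level, "log-odds undefined (separation)")
    else:
        print(level, round(math.log(yes / no), 3))
```

Common remedies (beyond collapsing levels) include penalized likelihood such as Firth's bias-reduced logistic regression, though whether that suits the interpretation here is a judgment call.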
https://stats.stackexchange.com/questions/275216/logistic-regression-with-zero-event-in-one-category
Question: <p>I have 52,840 survey responses covering 2012-2015. I've produced 14 different small area estimates for survey variables like obesity, binge drinking, smoking, etc. These estimates were created using a generalized linear mixed model approach. </p> <p>I'd like to see whether or not there are overlapping areas with high smoking predicted prevalence and high (insert variable) prevalence. I'm going to be using the 1st and 2nd highest quintiles to define an area of high smoking prevalence. However I'd like to determine which handful of other variables to model. I have an idea based on peer literature review but want to show why I chose these. </p> <p>Can I model smoking prevalence based on the other prevalences via logistic regression? Is there too much error from the previous modeling iteration, even if I'm treating the small area prevalence as "true"? Is it better to model on the unweighted survey responses that initially went into the prevalence small area estimation? </p> <p>Thanks</p> Answer:
https://stats.stackexchange.com/questions/278316/can-i-model-one-prevalence-on-another
Question: <p>Will the estimates and odds ratios change for an independent variable if it is by itself vs if there are other independent variables? I would think that it would change since thinking of it as an equation $y=x$ (one variable) is different than $y=x+z+q$ (3 variables).</p> Answer: <p>The estimate of the effect of $x$ will certainly change if $z$ or $q$ (or both) have an effect on $y$ net of $x$. It will change even if $z$ and $q$ are orthogonal to $x$ as long as $z$ and $q$ explain any portion of $y$. This happens because adding new variables changes the scale in which the entire model is expressed. The logistic model given by:</p> <p>(1)$$ \ln\bigg(\frac{p_i}{1-p_i}\bigg) = \beta_{0} + \beta_{1}x_{1i} $$</p> <p>is expressed in the latent variable formulation as:</p> <p>(2)$$ y^* =\alpha_{0} + \alpha_{1}x_{1i} + \sigma \varepsilon $$</p> <p>The total variance in the model is made up of explained (modelled) and unexplained (residual/error) variance. The logistic model forces the errors to have a fixed variance of 3.29 (and a logistic distribution). Hence, any change in the amount of explained variance will force the total variance of $y^*$ to change, causing its scale to change, because the variance of the errors is fixed and cannot change. This affects the size of the coefficients because they now explain change in $y^*$ on a different scale. The scaling factor is given by $\sigma$ in (2). The alphas in (2) are related to the betas in (1) by:</p> <p>(3)$$ \beta_{j} = \frac{\alpha_{j}}{\sigma}\;\;j=1,...,J. $$</p> <p>Adding more covariates that explain any portion of $y$ will reduce the amount of unexplained heterogeneity and will consequently change the other coefficients in the model. You can refer to <a href="https://stats.stackexchange.com/a/71696/29707">my answer here</a> or to the following literature:</p> <p><strong>References</strong></p> <ul> <li>Allison, P. D. (1999).
<a href="http://smr.sagepub.com/content/28/2/186.abstract" rel="nofollow noreferrer">Comparing Logit and Probit Coefficients Across Groups.</a> Sociological Methods &amp; Research, 28(2), 186–208.</li> <li>Mood, C. (2010). <a href="http://esr.oxfordjournals.org/content/26/1/67" rel="nofollow noreferrer">Logistic Regression: Why We Cannot Do What We Think We Can Do, and What We Can Do About It.</a> European Sociological Review, 26(1), 67–82.</li> </ul>
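A quick numeric sketch of the rescaling in (3) (in Python; the alpha and sigma values below are invented purely for illustration, not taken from any fitted model):

```python
import math

# The latent-variable logistic model fixes the error variance at pi^2 / 3:
logistic_error_variance = math.pi ** 2 / 3
print(round(logistic_error_variance, 2))  # 3.29

# Per (3), beta_j = alpha_j / sigma: the more unexplained variance
# (larger sigma), the smaller the logit-scale coefficient. The numbers
# below are hypothetical, chosen solely to illustrate the rescaling.
alpha = 1.2
sigma_before = 2.0   # hypothetical scale before adding covariates
sigma_after = 1.5    # hypothetical (smaller) scale after adding covariates

print(round(alpha / sigma_before, 3))  # 0.6 -- same alpha on the latent scale...
print(round(alpha / sigma_after, 3))   # 0.8 -- ...yields a larger logit-scale beta
```

The same latent-scale effect thus shows up as a larger logit-scale coefficient once covariates soak up unexplained variance, which is the answer's central point.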
https://stats.stackexchange.com/questions/278367/logistic-regression-with-multiple-independent-variables-vs-one-independent-varia
Question: <p>I found these expressions for the probability of an outcome $y$ given variables $x$ and parameter $W$. $\theta$ is the logistic function.</p> <p>$p(y \mid x,W) = Bernoulli(y \mid \theta(W^\intercal X))$ </p> <p>adapted from [<a href="https://mitpress.mit.edu/books/machine-learning-0" rel="nofollow noreferrer">1</a>]</p> <p>$p(y \mid x,W) = \theta(y W^\intercal X) $ [<a href="https://mitpress.mit.edu/books/machine-learning-0" rel="nofollow noreferrer">1</a>]</p> <p>adapted from [2]</p> <p>I presume both are correct. How can I interpret the first one, where the argument of the Bernoulli distribution has a conditional?</p> <p>[2] youtube/qSTHZvN8hzs?t=44m1s</p> Answer: <p>The first equation cannot be correct. The left hand side is a number, and the right hand side is a distribution (so it does not type check). The correct way to write what the first equation is getting at is</p> <p>$$ y \mid X, W \sim Bernoulli(p = \theta(W^\intercal X)) $$</p> <p>where $\sim$ is pronounced "is distributed as".</p> <p>The second equation is correct as written*.</p> <p>* Assuming that $\theta$ is the logistic function.</p>
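A small Python check of the answer's point (the value of z, standing in for $W^\intercal X$, is arbitrary): the Bernoulli coding with $y \in \{0,1\}$ and the compact coding with $y \in \{-1,+1\}$ assign the same probabilities, because $1 - \theta(z) = \theta(-z)$.

```python
import math

def logistic(z):
    """Inverse logit: theta(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

z = 0.7  # stand-in for W^T X; the value is arbitrary

# Bernoulli coding, y in {0, 1}: success probability p = theta(z)
p = logistic(z)
prob_y1 = p         # P(y = 1 | x, W)
prob_y0 = 1.0 - p   # P(y = 0 | x, W)

# Compact coding, y in {-1, +1}: P(y | x, W) = theta(y * z).
# The two agree because 1 - theta(z) = theta(-z):
assert abs(prob_y1 - logistic(+1 * z)) < 1e-12
assert abs(prob_y0 - logistic(-1 * z)) < 1e-12
```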
https://stats.stackexchange.com/questions/279245/probability-notation-in-logistic-regression
Question: <p>Imagine I have objects with 5 different properties which are either present (1) or not (0). Further, I have some other variables that I expect to predict the presence of a property.</p> <p>Focusing on a single property out of the five, I could use a logistic regression to infer the influence of my variables on the properties presence. This, however, would give me five different models and I'd need to assume that the properties are independent of each other.</p> <p>Is there an elegant way to combine all five attributes in a single model? Probably using some hierarchical model? For the implementation I use <code>rstan</code>, but some theoretical idea where to start would be helpful.</p> Answer: <p>What you are describing is a <strong>multivariate logistic regression</strong>, NOT a multiple logistic regression. Note that by convention:</p> <ol> <li>multivariate implies >1 dependent/target variable </li> <li>multiple implies >1 independent/predictor variable and only 1 dependent/target variable</li> </ol> <p>This important difference is frequently confused, so be careful when you read papers using the terms. </p> <p>Also note that if the 5 properties (i.e. dependent variables) are uncorrelated, you will not benefit from a multivariate analysis; you might as well use 5 separate logistic regressions in this case.</p>
https://stats.stackexchange.com/questions/278900/logistic-regression-with-multiple-dependent-variables-in-a-single-model
Question: <p>I'm working on a customer churn model. Currently I have a variable for increased returns (1/0). After I run the model and convert the coefficient to an odds ratio, then convert that to a probability, I wind up with 70%. (Churn = 1, Not Churn = 0)</p> <p>My question is: can I use the inverse of this probability to say that a customer with decreased or flat returns has a 30% probability to "not churn"? If I actually flip this relationship in the data and feed it to the model, it starts to generate a lot more false positives, and accuracy falls drastically.</p> <p>Any help would be extremely appreciated. Thanks!</p> Answer: <p>If $$P(churn = 1 \mid increasedReturns = 1) = 0.70$$ then $$P(churn = 0 \mid increasedReturns = 1) = 0.30$$ </p> <p>70% is the probability of churn <em>given</em> that the customer has increased returns.</p> <p>So, 30% is the probability of "no churn" given that the customer has increased returns.</p> <p>To find the probability of churn for a customer with decreased or flat returns, you need to calculate the probability of churn when increasedReturns = 0. $$P(churn = 1 \mid increasedReturns = 0)$$</p>
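To make the distinction concrete, here is a Python sketch with made-up coefficients (b0 and b1 are hypothetical, not from the asker's model), chosen so the fitted churn probability given increased returns is about 70%:

```python
import math

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for illustration only:
b0, b1 = -0.3, 1.15   # logit(P(churn = 1)) = b0 + b1 * increased_returns

p_churn_given_increase = inv_logit(b0 + b1 * 1)        # ~0.70
p_nochurn_given_increase = 1 - p_churn_given_increase  # ~0.30 (complement, SAME condition)

# A customer with flat/decreased returns is a DIFFERENT condition:
# evaluate the model at increased_returns = 0 instead of taking 1 - p.
p_churn_given_flat = inv_logit(b0 + b1 * 0)            # ~0.43
```

Note that the 30% and the ~43% answer different questions, which is why flipping the coding in the data gives different results.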
https://stats.stackexchange.com/questions/299791/logistic-regression-question-about-inverse-of-a-features-probability
Question: <p>How do I build a regression model with a continuous response variable bounded from 0 to 1? </p> <p>I think it is not logistic regression, since I am not predicting a binary response variable, right?</p> <p>Sorry for any duplication; I tried to search but did not find anything. (I feel this question must have been asked many times.)</p> Answer:
https://stats.stackexchange.com/questions/305056/how-to-build-a-model-with-a-continuous-response-variable-bounded-from-0-to-1
Question: <p>I want to develop a logistic regression model. There are 1000 cases in the dataset and there are only 180 'Yes'. Therefore, the proportion is 18%. I was told that I should have at least 500 Yes in the dataset in order to build a good logistic regression model. How can I handle this problem? Do I need to have at least 500 Yes? </p> Answer:
https://stats.stackexchange.com/questions/323802/sample-size-and-no-of-events-in-logistics-regression-model
Question: <p>Question 1: I have 6 variables, where 2 binary predictor variables have a much higher odds ratio than the other variables. One variable has an 8.40 odds ratio and the second has a 3.16 odds ratio. The other variables have odds ratios between 1.42 and 1.54. It seems like the variable with the 1.42 odds ratio would be a much more reasonable result to get, because a 1 unit increase in that variable would give a 42% higher probability of success. An 8.40 odds ratio would on the other hand give something like a 740% higher probability of success. Does that make sense, or is there something wrong with my regression? </p> <p>Question 2: The other variables with an odds ratio between 1.42 and 1.54 have an insignificant p-value (0.10 and 0.25 at a 0.05 significance level). Does this mean that I can't interpret those variables' odds ratios, or is that still an option? </p> <p>Screen shot of the analysis: <a href="http://prntscr.com/i959r7" rel="nofollow noreferrer">http://prntscr.com/i959r7</a></p> Answer:
https://stats.stackexchange.com/questions/326452/high-odds-ratio-and-insignificant-p-value-in-multiple-logistic-regression
Question: <p>I am conducting a meta-analysis and I am extracting Pearson's correlation coefficient (r) from studies in order to meta-analyse them. Some studies have not used correlations, so I am having to calculate r from the statistics they report. One study has reported a logistic regression; is it meaningful to take the square root of $R^2$ to get r, or should I convert the odds ratio to Cohen's d and then Cohen's d to Pearson's r?</p> Answer: <p>As you can probably tell from Whuber's comment, the short answer is no.</p> <p>The relation between $R^2$ and Pearson's correlation coefficient only exists for linear regression, and can then only be used the other way around (from Pearson's $r$ to $R^2$), because you wouldn't know whether you'd have to pick the positive or negative root and hence whether it is an upward or downward sloping line. Unless of course you have the coefficients of the regression themselves.</p> <p>At a more fundamental level there is no such thing as "the $R^2$" of a logistic regression. The reason is that whereas in linear regression we use ordinary least squares to estimate the equation of interest, in logistic regression we use maximum likelihood. </p> <p>Least squares estimation generates a sum of squares of the model and a sum of squares of residuals, which have strong relations with the variance and covariance of your variables. These relationships in turn generate the relation between Pearson's $r$ and the $R^2$.</p> <p>Maximum likelihood does not generate sums of squares, it generates a likelihood. This likelihood is often used to generate a number that looks like and works a bit like an $R^2$, but it is something fundamentally different. They are officially called Pseudo $R^2$'s. In fact, there is not a single one, but multiple possibilities to calculate a Pseudo $R^2$ for logistic regression. Some examples include McFadden's $R^2$, Nagelkerke's $R^2$ or the Count $R^2$.
For an overview of how these are calculated, see <a href="https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/" rel="nofollow noreferrer">here</a>. It usually involves comparing the likelihood of your full model with that of a simple model that has no predictors, or, as in the case of the count $R^2$, the number of correct predictions.</p> <p>I am no expert in meta-analysis, but before you turn to your other proposed method (from odds-ratio to Cohen's d to r), I'd like to warn you that I am not so sure whether that really has the desired effect. A logistic model is something completely different than a linear model as you can tell from the graph below showing a logistic, probit and linear model (modeling the probability of unemployment depending on age in the Netherlands). The conversion from logit to linear via Cohen's D, to me, seems a conversion too far.</p> <p><a href="https://i.sstatic.net/xIVyK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xIVyK.jpg" alt="Linear, logistic and probit model Probability of Unemployment versus Age in the Netherlands. Data: European Value Survey 2008"></a> Linear, logistic and probit model Probability of Unemployment versus Age in the Netherlands. Data: European Value Survey 2008.</p>
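For concreteness, the McFadden version mentioned in the answer is computed directly from two log-likelihoods; a minimal Python sketch (the log-likelihood values are invented for illustration):

```python
# McFadden's pseudo R^2 compares the fitted model's log-likelihood with
# that of an intercept-only (null) model:  R^2 = 1 - ll_model / ll_null.
# The values below are hypothetical, for illustration only.
ll_model = -120.0   # log-likelihood of the full model
ll_null = -300.0    # log-likelihood of the intercept-only model

mcfadden_r2 = 1 - ll_model / ll_null
print(round(mcfadden_r2, 3))  # 0.6
```

Unlike a linear-regression $R^2$, this quantity has no square-root relationship to any correlation coefficient, which is the crux of the answer.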
https://stats.stackexchange.com/questions/328874/can-i-squareroot-r2-to-get-r-in-a-logistic-regression
Question: <p>My survey data contains 10 different questions all recoded into 'Correct' (1) and 'Incorrect'. I have 2 IVs which are also categorical. I need to find out whether each treatment condition affects the answer to the questions. In order to do this in SPSS I have 2 options:</p> <ol> <li><p>Run binomial logistic regression with each question as my DV and the two categorical IVs. However, do I run binomial logistic regression for each question separately? Does SPSS allow to run this with multiple dependent variables?</p></li> <li><p>Run multinomial logistic regression. In this case, I created a variable called 'correct_answers' which indicate the number of correct answers given by each participant. So, in order to run multinomial logistic regression, I would put 'Correct_answers' as my DV (reference as last by default) and Factors (since categorical) or IVs would contain each question and the two previous IVs. Will this work or am I doing something wrong? </p></li> </ol> <p>Thank you.</p> Answer: <p>Questions about how to code are off topic here, but your question has a statistics component as well.</p> <p>Multinomial logistic will not be right here, as far as I can tell. If you are interested in the number of correct answers then that would be the DV and you would use a count regression model such as Poisson or negative binomial regression with your two IVs.</p> <p>Whether to run multiple logistic regressions with each DV or one regression with all of them depends on whether you are concerned about the relationships among the DVs. If you are, then you would need multivariate logistic regression. I don't know if SPSS can do this. Be careful in searching as some people use this term to mean one DV and several IVs (which is not what you want). These models are tricky, it's much simpler to run separate models for each DV. </p>
https://stats.stackexchange.com/questions/334024/running-logistic-regression-in-spss
Question: <p>I want to know if, <em>depending on country</em>, an individual's responses to 6 different questions, which predict an outcome variable scored as yes/no, will differ. E.g. someone from country X may score higher on the 6 questions, which in turn predicts whether they answered yes/no to my outcome variable. I'm confused about how to analyze this, because if I include country as another IV it won't tell me whether being from a particular country influenced the way respondents answered the 6 questions, which in turn affected their outcome response - it will simply tell me whether country as another IV, along with the 6 I already have, predicts the outcome.</p> <p>Could I use the selection variable box in SPSS and run a log. reg. for each country separately? I.e. I would report the logistic model results for each country individually by selecting those cases for country X and again doing this for country Y.</p> Answer: <p>If you have multiple people from each country and believe that people from one country have some similarity to each other (which seems reasonable) then you will violate the assumption of independent errors and regular regression is not appropriate.</p> <p>What you propose in your second paragraph is called stratification and it is one reasonable thing to do, but you will get separate results for each country with no statistical analysis of the differences.</p> <p>Another possibility is to use a multilevel model (MLM). Since your DV is binary, you will need a nonlinear MLM.</p>
https://stats.stackexchange.com/questions/331779/how-to-use-logistic-regression-for-this-scenario
Question: <p>My current understanding is that logistic regression can be used for 2 tasks:</p> <p>1) binary classification, and 2) computing a probability between 0 and 1 for data generated by a Bernoulli process.</p> <p>I also know there's more than one way to solve a logistic regression problem: one being the Bayesian way, one being MLE, and the other having to do with generalized linear models and link functions. </p> <p>In what circumstances is Bayesian logistic regression preferred? </p> Answer:
https://stats.stackexchange.com/questions/336895/when-is-bayesian-logistic-regression-mcmc-preferable-to-glm-logistic-regressio
Question: <p>I'm a medical student and for a research project, I'm trying to predict the success of a medical procedure. An independent variable of interest is the amount of prior experience the doctor has performing the procedure. This effect is probably non-linear. In other words, you learn more the first time you try something than the 100th time. I would like to quantify this effect, but regular logistic models assume a linear effect of a continuous variable (as far as my textbooks explain). </p> <p>Is there a method that allows for non-linear logistic regression?</p> Answer:
https://stats.stackexchange.com/questions/338396/non-linear-logistic-regression-one-predictor-with-non-linear-effect
Question: <p>For my study I used a model which had 6 independent factors that predicted a binary outcome variable (yes/no). The general assumption is that a higher score should predict a yes response. </p> <p>I tested this model in two different countries to understand what predicts the outcome behaviour in each. Should I first report the regression model which contains data from both samples, to test my hypothesis regarding the basic assumption of the predictors influencing the outcome?</p> <p>Then should I report the 2 separate models for each country? Or should I not report the first one and simply report the 2 separate sample models? Also, a final question: with regards to APA format, how do I report the results of a logistic regression in a write-up?</p> Answer: <p>To decide if you need to report one, two, or all three models, you should run yet one more model. In this new model, include a new dummy variable for the country. Include the interaction terms with the dummy variable. If any of the interaction terms with the dummy variable or the main effect for the dummy variable is statistically significant, then this indicates that you have different models for each group. In this case, you can simply report the two separate models. If they are n.s., then you are justified in collapsing the two countries into one model.</p> <p>Note, you can also run an omnibus model comparison test and assess if there is anything gained by adding the dummy variable and interactions.</p> <p>As for APA format, the best strategy is to Google the title of a common APA journal (say, Journal of Educational Psychology) and "logistic regression". Then you can scan a few articles to see how those authors reported their results.</p>
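A tiny Python sketch of the design-matrix expansion this answer describes (all names and values are illustrative only): the country dummy enters as a main effect, and each predictor is multiplied by the dummy to form the interaction terms whose significance is then tested.

```python
# Expand one respondent's predictor values with a country dummy d and
# the interaction columns d * x_j. Significance of the d and d*x_j terms
# is what indicates country-specific models.
def add_country_interactions(row_predictors, country_dummy):
    """row_predictors: list of predictor values for one respondent."""
    interactions = [country_dummy * x for x in row_predictors]
    return row_predictors + [country_dummy] + interactions

print(add_country_interactions([1.0, 2.0, 3.0], 1))  # [1.0, 2.0, 3.0, 1, 1.0, 2.0, 3.0]
print(add_country_interactions([1.0, 2.0, 3.0], 0))  # [1.0, 2.0, 3.0, 0, 0.0, 0.0, 0.0]
```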
https://stats.stackexchange.com/questions/341938/reporting-binary-regression-models
Question: <p>I carried out an ordered logistic regression, but the sample sizes were not equal: one group is much larger than the others (490, compared to 224 and 219). The result for this group was non-significant; could this be the result of the larger sample size? If not, are there other negatives to having such big differences in cohort sizes, in terms of validity or reliability etc.?</p> Answer: <p>With a big enough sample size, even tiny differences would be significant. Your sample sizes seem fine, as long as the model is not very complex.</p> <p>As to why your results are not significant, it could be that your model is simply not very strong - look at the effect sizes. </p>
https://stats.stackexchange.com/questions/349063/ordered-logistic-regression-unequal-sample-sizes
Question: <p>Whenever I am building the first model in logistic regression, it is throwing the error shown below. My code is:</p> <pre><code>mo2 &lt;- glm(train3$Medal ~ ., data = train3[, -15], family = "binomial") Error in `contrasts&lt;-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : contrasts can be applied only to factors with 2 or more levels </code></pre> <p>That is what it is showing, please help me with it.</p> Answer: <p>The error message indicates that you include in your model categorical variables (i.e., <code>factor</code>s in R) that only have one category/level. You could exclude those with the following code:</p> <pre><code>keep &lt;- function (x) { if (is.factor(x) || is.character(x)) { length(unique(x[!is.na(x)])) &gt; 1 } else TRUE } train3. &lt;- train3[sapply(train3, keep)] </code></pre> <p>Then you will need to use <code>train3.</code> in the call to <code>glm()</code>, i.e.,</p> <pre><code>mo2 &lt;- glm(Medal ~ ., data = train3., family = binomial()) </code></pre> <p>I have not excluded the 15th column in <code>train3.</code> as you did in your original code; you will need to do it yourself.</p>
https://stats.stackexchange.com/questions/368739/whenever-i-am-building-the-first-model-in-logistic-regression-there-is-an-error
Question: <p>Can you offer any assistance on clarifying the meaning of the following content - specifically the section on "Then a link function must be used to reverse the logarithm transformation, exponentiating the modeled value." This analysis is logistic regression and analyzed in SPSS. Also, how are beta coefficients best defined?</p> <p>The probability is modeled by taking the log of the odds,</p> <p>ln(p/(1-p)) = B0 + B1X1</p> <p>The model results in estimated beta coefficients B0 and B1. Then a link function must be used to reverse the logarithm transformation, exponentiating the modeled value. The result provides the model-predicted probability of dropout (or transfer) and the change in model-predicted probability if the person received counseling or a PTSD diagnosis. </p> Answer: <p>In your original model you have <span class="math-container">$$ \ln [p / (1-p)] = B_0 + B_1X_1 $$</span> so if there is a unit change in <span class="math-container">$X_1$</span> then it will change the left hand side by <span class="math-container">$B_1$</span> units. This is hard to interpret. So if you exponentiate <span class="math-container">$B_1$</span> you will get <span class="math-container">$\exp(B_1)$</span> which is the amount that <span class="math-container">$p/(1-p)$</span> is multiplied by. Since <span class="math-container">$p/(1-p)$</span> is usually called the odds, this results in <span class="math-container">$\exp(B_1)$</span> being called the odds ratio. So now we know that a unit change in the predictor variable multiplies the odds by the quantity we calculated. I assume from the rest of what you quote that <span class="math-container">$X_1$</span> only has two values, so this odds ratio is the change in odds for one category compared to the other.</p>
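As a numeric illustration of the exponentiation step (the coefficients below are made up, not from the asker's SPSS output):

```python
import math

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients, invented for illustration only:
b0, b1 = -1.0, 0.8

odds_ratio = math.exp(b1)      # the odds multiplier for a unit change in X1

p_x0 = inv_logit(b0)           # model-predicted probability when X1 = 0
p_x1 = inv_logit(b0 + b1)      # model-predicted probability when X1 = 1

# The odds p/(1-p) at X1 = 1 are exactly exp(b1) times the odds at X1 = 0:
odds_x0 = p_x0 / (1 - p_x0)
odds_x1 = p_x1 / (1 - p_x1)
assert abs(odds_x1 / odds_x0 - odds_ratio) < 1e-9
```

The inverse-logit call is the "link function reversal" the quoted text refers to: it maps the modeled log-odds back to a probability.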
https://stats.stackexchange.com/questions/373980/logistic-regression-what-does-this-mean-then-a-link-function-must-be-used-to-re
Question: <p>I am working with logistic regression in R by means of glm. I have fitted a logistic (0-1) regression model with seven predictor variables. I obtain a model where the variables have high p-values (>0.1) (not significant) but McFadden's R^2 is high (0.6).</p> <p>McFadden's R^2 is analogous to Pearson's r^2 in linear regression (I am not totally sure). Therefore, how is it possible to obtain a high correlation with no statistically significant variables?</p> Answer: <p>There are several possible reasons which could be responsible for the scenario you describe.</p> <p><strong>1. Collinearity among some or all of your predictor variables</strong></p> <p>Did you check if some of your predictor variables are engaged in collinearity? That might be one possible explanation. You can use the vif() function in the car package to check for collinearity. If you find any predictor variables which have high VIF (variance inflation factor) values - say larger than 5 - you may need to exclude some of those from your model and see if that resolves your collinearity. </p> <p><strong>2. Too many predictor variables in your model relative to the number of events</strong></p> <p>How many observations do you have in your model and how many events? (An event corresponds to Y = 1, where Y is your response variable.) There are rules of thumb for how many events you should have per predictor variable included in your model (e.g., 10 events per variable), which you can use to determine if your model includes too many predictor variables relative to the available number of events. If it does, you will need to include fewer predictor variables in your model. </p> <p><strong>3. Too few observations in your model</strong></p> <p>Perhaps the predictor variables included in your model have significant effects on the log odds of success (i.e., achieving a value of 1 for Y), but you just don't have enough observations in your model to detect these effects.
To see if this might be the case, look at the confidence intervals for each predictor in relation to zero and see (i) how wide the confidence intervals are and (ii) how far the centers of the confidence intervals are from 0. (Use the confint() function to extract the 95% confidence intervals from your model and the coef() function to extract the centers of these intervals.) If the centers of the intervals are not too close to zero and the intervals are wide, that would suggest you need more data in your study to be able to detect the effects of interest. Recall that R works on the log odds scale by default. </p> <p>There may be other reasons as well - perhaps others on this forum can highlight them.</p>
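Point 1 can be made concrete in the simplest case: with exactly two predictors, each one's VIF is 1/(1 − r²), where r is their Pearson correlation. A short Python illustration (using the rule-of-thumb cutoff of 5 mentioned above):

```python
# With exactly two predictors, each one's variance inflation factor is
# VIF = 1 / (1 - r^2), where r is their Pearson correlation.
def vif_two_predictors(r):
    return 1.0 / (1.0 - r ** 2)

for r in (0.0, 0.5, 0.9, 0.99):
    print(r, round(vif_two_predictors(r), 2))
# r = 0.9 already gives VIF ~ 5.26, past the rule-of-thumb cutoff of 5;
# r = 0.99 gives VIF ~ 50 -- coefficients become very imprecise.
```

Strongly collinear predictors thus inflate coefficient standard errors, which is exactly how a model can fit well overall while no individual variable looks significant.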
https://stats.stackexchange.com/questions/387069/logistic-regression-with-high-correlation-but-no-significative-variables
Question: <p>Normally when we do logistic regression, we would have a dataset something like:</p> <pre><code> X1 X2 Y 1: A 3 0 2: A 4 0 3: A 3 0 4: B 4 1 </code></pre> <p>(4 observations)</p> <p>However, for some reasons, I only have the aggregated version:</p> <pre><code> X1 X2 count Y_count 1: A 3 2 0 2: A 4 1 0 3: B 4 1 1 </code></pre> <p>(i.e. a summary table. e.g. 2 records with <code>X1 = A and X2 = 3</code> but the number of <code>Y = 1</code> is 0)</p> <p>Now, I understand that I can simply replicate and do the logistic regression normal way; but my question is whether I can simply use the summary table to do logistic regression (potentially in R). Any implications?</p> <p>There're other posts of similar questions: <a href="https://stats.stackexchange.com/questions/242175/difference-between-logistic-regression-and-linear-regression-on-aggregated-datas">here</a>. Also I'm aware that "weights in logistic regression differs from ... weights in linear regression) (<a href="https://christophm.github.io/interpretable-ml-book/logistic.html" rel="nofollow noreferrer">ref</a>)</p> Answer:
https://stats.stackexchange.com/questions/388304/logistic-regression-on-aggregated-counts
Question: <p>I have a dataset (unfortunately I cannot disclose any part of it) which has a binary response variable. For each independent variable, I calculate the log odds of the positive cases given each value of the IV and plot them to check linearity, i.e., the x-axis is the IV and the y-axis is the <span class="math-container">$logodds(DV=1|IV)$</span>. I find that at least 50% of my variables (which includes interaction effects) are not linear in log odds with the DV. How does my model converge then? Is the linearity assumption not a very strong one, or can any model converge but simply not be trusted in such cases? If any additional information is required, just let me know and I will try my best to provide it and clarify my question even further.</p> <p>As a side question, I am wondering if my approach of plotting log odds against each IV to check linearity is correct, because everywhere else people just advise using the Box-Tidwell approach.</p> Answer: <p>Maximum likelihood will give you the "best" parameters <em>given the model</em>, but "best" does not necessarily mean "good enough". Especially when your model is not very appropriate, the best given the bad model can be quite poor. However, that does not necessarily preclude the model from converging. </p>
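On the side question: the asker's plotting approach is a standard empirical-logit diagnostic. A rough Python sketch (the bin count and the 0.5 continuity correction are arbitrary choices, not a prescription):

```python
import math
from collections import defaultdict

def empirical_log_odds(xs, ys, n_bins=5):
    """Bin a continuous IV and compute log(p / (1 - p)) of the binary DV per bin.

    A rough diagnostic: if the points (bin midpoint, log-odds) look roughly
    linear, the linearity-in-the-logit assumption is plausible.
    """
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0   # guard against all-equal xs
    bins = defaultdict(lambda: [0, 0])  # bin index -> [events, total]
    for x, y in zip(xs, ys):
        b = min(int((x - lo) / width), n_bins - 1)
        bins[b][0] += y
        bins[b][1] += 1
    out = []
    for b in sorted(bins):
        events, total = bins[b]
        # add 0.5 to each cell so empty categories don't produce log(0)
        p = (events + 0.5) / (total + 1.0)
        out.append((lo + (b + 0.5) * width, math.log(p / (1 - p))))
    return out
```

If the resulting points deviate clearly from a straight line, that supports the non-linearity the asker observed; Box-Tidwell is a formal test of the same assumption.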
https://stats.stackexchange.com/questions/388369/how-does-a-logistic-regression-model-converge-if-most-variables-are-not-linear-w
Question: <p>I am conducting analysis to assess agreement between self-report and lab data on adherence to a certain drug intervention. I know that medication adherence in the population of interest can be influenced by variables such as age, sex, socioeconomic status, etc.. </p> <p>I need to conduct logistic regression to determine to what extent these variables explain the level of agreement between self-report data and lab data. What type of regression should I use to achieve this?</p> Answer: <p>Assuming that by "type of logistic regression" you mean binary, ordinal or multinomial, it depends on the nature of your dependent variable.</p> <p>If agreement for each person is a dichotomy - agree vs. not - then you want binary logistic. If agreement is ordinal - e.g. agree completely, agree somewhat, did not agree at all (or it could be a variation of this) - then ordinal logistic is a good start.</p> <p>If agreement is a count (days per week) then you don't want logistic at all, but some sort of count regression. Poisson or negative binomial would be starting places.</p> <p>And if agreement is measured some other way, then please clarify by editing your question. </p>
https://stats.stackexchange.com/questions/390514/what-type-of-logistic-regression-should-i-use
Question: <p>I have a regression with a log-transformed independent variable, and I would like to know the proper way to explain its effect on my binary dependent variable.</p> <p>For example, say the equation is: </p> <p>(binary_variable)i = b0 + -0.03(log_variable)i</p> <p>Does a 1% increase in log_variable mean a 0.03% (or 0.0003%?) reduction in binary_variable?</p> <p>Thanks</p> Answer:
https://stats.stackexchange.com/questions/398932/interpreting-coefficient-of-a-logarithmic-coefficient-in-a-logistic-regression
Question: <p>I have been thinking about this for a while.</p> <p>I have a panel dataset with two time periods. My outcome variable is personal income. Since this study was conducted in a low-income country and the entire set of respondents is women, I have a lot of zero and near-zero values.</p> <p>I initially changed the zero values into 1 and used the logged form. However, I read that this is not a good practice.</p> <p>Furthermore, many of those who had zero or near-zero values in the first time period had a decent-sized personal income value in the second survey. This leads to a very large percentage change between the two periods (think, from USD 1 to USD 1,000).</p> <p>This latter problem also arises when I use asinh instead of log.</p> <p>I know that if I use the level value instead of the log, then asymptotically it will take a normal distribution. However, the sample size right now is around 3,000, and I think just using levels would lead to inference problems.</p> <p>How do you tackle this problem? Did you come across anything like this in your case?</p> Answer:
https://stats.stackexchange.com/questions/403077/dealing-with-logged-outcome-variable-in-a-regression-with-zero-values
Question: <p>I have a retrospective cohort study with matching (1:3) done. The response variable is charity care which is a continuous variable and the primary independent variable is hospital ranking which is a binary variable. Most statistical books suggest conditional logistic regression model to account for matching when the response variable is binary. However, given that I have a continuous response variable and matching was done, I'm having challenges getting a suitable regression model that accounts for matching. I can't convert the response variable to a binary variable (e.g. using the median) as it would have little or no practical meaning. I thought of converting it to tertiles; however, SAS (which I use) does not currently support conditional logistic regression with polytomous response variable. I will greatly appreciate suggestions on how to address this challenge. Thank you.</p> Answer:
https://stats.stackexchange.com/questions/404664/regression-model-for-matched-retrospective-cohort-study-with-continuous-response
Question: <p>I try to predict whether households use a certain service (<strong>TRUE</strong> or <strong>FALSE</strong>) based on various variables, using logistic (LASSO) regression.</p> <p>Among many others, I have the variables <strong>percentage man</strong> and <strong>percentage woman</strong>, which have a -.85 Pearson's correlation coefficient with each other. However, when I run the logistic regression they have beta-coefficients of 3.34 and 3.16 respectively, which puts them both in the top 40 most predictive variables among the 150 variables I use.</p> <p>How can they both be positive predictors for the label when they are so negatively correlated with each other?</p> <p>EDIT: some extra info that might be of interest: <strong>percentage man</strong> correlates with the outcome variable with a Pearson's correlation coefficient of 0.041, and <strong>percentage woman</strong> with -0.045.</p> Answer:
https://stats.stackexchange.com/questions/411332/why-can-negatively-correlated-variables-have-similar-beta-coeficients-in-logisti
Question: <p>I am looking for guidance on how to conduct a logistic regression with three categorical dependent variables. My two independent variables are dichotomous and experimentally manipulated. Of the three dependent variables, two are dichotomous and one has four categories. None of the categories in the IVs or DVs are ordered.</p> <p>I'm in the process of exploring a 5-way logit (loglinear) approach following Wuensch's description on his teaching website. I did a screening run using hierarchical loglinear analysis (ignoring the fact that some variables are DVs), followed by a loglinear analysis of a model that includes only effects involving the DVs (i.e., IV-only effects not included from the outset). The rationale Wuensch gives is that our random assignment of respondents to IVs ensures the effects of the IV-only terms are zero or near-zero, which makes logical sense and is what I found.</p> <p>Not sure this is the correct way, but it's all I've found to date. I would really appreciate comments on this approach, especially around the issue of excluding effects that are IV-only.</p> <p>Thanks!</p> Answer:
https://stats.stackexchange.com/questions/415360/guidance-on-how-to-conduct-a-logistic-regression-with-three-categorical-dependen
Question: <p>I'm trying to interpret the results of R's confint function, but I cannot understand them. This is a logistic regression about breast cancer. How do I interpret the confint output? What do the 2.5% and 97.5% mean?</p> <pre><code>logit1 = glm(goodmodel, data=train, family=binomial(link = logit))
summary(logit1)
confint(logit1)

summary(logit1)

Call:
glm(formula = goodmodel, family = binomial(link = logit), data = train)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-1.76595  -0.00368  -0.00011   0.00000   2.57451  

Coefficients:
                           Estimate Std. Error z value Pr(&gt;|z|)  
(Intercept)              -58.106512  37.736869  -1.540   0.1236  
radius_mean               -4.791077   1.933346  -2.478   0.0132 *
texture_mean               0.543094   0.222894   2.437   0.0148 *
compactness_mean         -93.504749  52.018859  -1.798   0.0723 .
`concave points_mean`    212.367971  86.957163   2.442   0.0146 *
radius_se                 18.268981   7.771317   2.351   0.0187 *
smoothness_se            394.202770 347.004692   1.136   0.2560  
concavity_se             -78.413761  64.179173  -1.222   0.2218  
`concave points_se`      -26.914245 469.938101  -0.057   0.9543  
radius_worst               5.978607   4.897600   1.221   0.2222  
area_worst                -0.003889   0.045800  -0.085   0.9323  
concavity_worst           27.647216  14.104588   1.960   0.0500 *
symmetry_worst            25.702777  15.984453   1.608   0.1078  
fractal_dimension_worst   10.690937  79.071866   0.135   0.8924  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 533.015  on 397  degrees of freedom
Residual deviance:  26.608  on 384  degrees of freedom
AIC: 54.608

Number of Fisher Scoring iterations: 12

&gt; confint(logit1)
Waiting for profiling to be done...
                                 2.5 %        97.5 %
(Intercept)             -1.482910e+02    7.95727142
radius_mean             -9.791086e+00   -1.66857433
texture_mean             1.961861e-01    1.13920334
compactness_mean        -2.299043e+02   -5.16464149
`concave points_mean`    6.027894e+01  432.99676482
radius_se                6.704669e+00   39.25468272
smoothness_se           -1.961348e+02 1307.22045246
concavity_se            -3.122203e+02   25.69408799
`concave points_se`     -1.038278e+03  891.07772306
radius_worst            -2.977840e+00   17.32103028
area_worst              -9.549641e-02    0.09462597
concavity_worst          6.661555e+00   64.03373503
symmetry_worst           1.562622e+00   63.43243116
fractal_dimension_worst -1.619176e+02  168.07512057
There were 50 or more warnings (use warnings() to see the first 50)
</code></pre> Answer:
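As a side note on where such intervals come from: a rough Wald-type interval can be computed directly from the Estimate and Std. Error columns in the summary output, while R's confint() profiles the likelihood, which is why its interval for radius_mean is not symmetric around the estimate. A sketch using the printed values:

```python
import math

# Estimate and standard error for radius_mean from the summary table:
est, se = -4.791077, 1.933346

# A Wald-type 95% interval is symmetric around the estimate:
wald = (est - 1.96 * se, est + 1.96 * se)
print(wald)  # roughly (-8.58, -1.00)

# confint() instead profiles the likelihood, which gave (-9.79, -1.67) here;
# with a nearly separated fit like this one, the two can differ noticeably.
```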
https://stats.stackexchange.com/questions/417708/having-trouble-interpreting-confint-function-using-r-logistic-regression
Question: <p>I am setting up a logistic regression using the following (simplified) form:</p> <p>Logit(Y) = Constant + A*x + B*z</p> <p>The real-life scenario is that I am trying to understand the probability of a sales prospect converting from a phone call and x = time_from_prospect_upload_to_call and z = channel of prospect (dummy).</p> <p>I include x in the model because I want to hold the time_to_call constant for calculating the other coefficients, but if I want to get the predicted Y for new prospects, is it OK to not include x in the linear equation?</p> Answer:
https://stats.stackexchange.com/questions/419000/logistic-regression-do-not-include-variable-used-in-regression-in-linear-equati
Question: <ol> <li><p>I have been told that Nagelkerke's pseudo-R2 should not be used for a binary logistic regression model, but instead R2 as a measure of goodness of fit. So, how can I apply R2 if I am not using linear regression?</p></li> <li><p>How can I compare 2 or 3 binary logistic regression models? Is there any software?</p></li> </ol> <p>Thanks!</p> Answer:
https://stats.stackexchange.com/questions/426854/about-binary-logistic-regression
Question: <p>I have been working on a logistic regression model to predict 'yeses' in a yes/no classification problem. The objective of my problem is not necessarily to predict the outcome, but rather to get a better understanding of my variables and how they influence the outcome.</p> <p>For example, I want to say that feature X is 2.2 times more likely to achieve 'yes' than my reference level, and feature Y is 2.5 times less likely to achieve 'yes', etc.</p> <p>I did my model, received the output, and I know how to read the coefficients, but I want to know if there are any tests I can do to confirm the confidence of the coefficients.</p> <p>I did a few things already:</p> <ol> <li>Checked the p-values in the summary (focused on the ones &lt;.05).</li> <li>Ran the model predictor by predictor to look at the change in prediction, BIC, AIC, AUC, confusion matrix, ROC curve, etc.</li> <li>Tried a chi-squared test of independence on my independent variables (since they are all categorical), but I'm getting weird results, with R saying the results may not be accurate, and a p-value &lt;.05. The contingency tables do show some very low frequencies, so I'm wondering if that has anything to do with it.</li> <li>Did a partial dependence plot, which seems to show the opposite direction of my coefficient, but that may just be a software problem on my part.</li> <li>The original data is unbalanced, so I 'up-sampled' it in R.</li> <li>Accuracy is about 70%, AUC is about 79%.</li> </ol> <p>Besides these things, is there anything else that I can do? Basically, if I say that a feature is 2.5x more likely to vote 'yes', I want to make sure that is more or less correct.</p> <p>Right now, I feel like I'm just taking the output at face value and it's a bit uncomfortable.</p> Answer:
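One further check worth knowing: each claimed odds ratio comes with a confidence interval that can be computed from the coefficient's standard error. A sketch with made-up numbers (substitute the estimate and standard error from your own summary() output):

```python
import math

# Hypothetical coefficient and standard error for one dummy-coded predictor:
b, se = 0.79, 0.25

odds_ratio = math.exp(b)
ci_low = math.exp(b - 1.96 * se)
ci_high = math.exp(b + 1.96 * se)

print(odds_ratio)       # ~2.2: "2.2 times the odds of 'yes' vs the reference level"
print(ci_low, ci_high)  # ~(1.35, 3.60): a wide interval means the 2.2 is imprecise
```

If the interval is wide, the "2.2x" statement is directionally supported but its magnitude is uncertain.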
https://stats.stackexchange.com/questions/436585/how-to-have-better-confidence-in-my-logistic-regression-model
Question: <p>I am learning the tool of regression. In the text, I was introduced to the example of measuring the diameter of different spheres of the same material many times and estimating the volume with a formula. In this case, the variable X is the diameter and Y is the resulting volume. It is easy to understand. But I was given some data in marketing with X referring to the sale price and Y to the customers' response (Y=1 means a good product, Y=0 means a bad product). The data were collected by questionnaire from many customers buying the product in different places, so at different prices. However, for customers buying the product at the same price, the response could vary. We observe that for constant X, some percentage of responses have Y=1 and some percentage have Y=0. It is confusing: in this case, can we fit the X, Y data with a line or curve? In the diameter-measuring example, we measure different objects with different diameters, so all (X, Y) pairs are different, which lays out a clear point set for fitting. But in the marketing example, many data pairs (X, Y) refer to the same point; say I have (X=3, Y=1) and (X=3, Y=0) at the same time. What does this mean in the context of regression?</p> Answer:
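The repeated (X, Y) pairs are exactly what makes a probability estimable at each X: the regression curve targets P(Y = 1 | X), not the individual 0/1 responses. A small sketch with hypothetical data:

```python
from collections import defaultdict

# Hypothetical (price, response) pairs; several customers saw the same price:
data = [(3, 1), (3, 0), (3, 1), (5, 0), (5, 0), (5, 1)]

# The quantity being fitted at each price x is P(Y = 1 | X = x),
# estimated here by the fraction of Y = 1 among customers at that price:
counts = defaultdict(lambda: [0, 0])  # x -> [number of ones, total]
for x, y in data:
    counts[x][0] += y
    counts[x][1] += 1

props = {x: ones / total for x, (ones, total) in sorted(counts.items())}
print(props)  # e.g. two thirds at price 3, one third at price 5
```

A logistic regression then fits a smooth curve through these conditional proportions rather than through the raw 0/1 points.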
https://stats.stackexchange.com/questions/458028/the-basic-idea-of-regression-with-multiple-data
Question: <p>I have run a logistic regression model on a target variable and get a list of probabilities like [0.50, 0.30, 0.20, 0.10, 0.05, 0.05, 0.01].</p> <p>For the target situation, I know that there are always going to be 3 positive results. I’m looking at a soccer league and comparing some stats (goals, last year rank, etc) to predict likelihood of top 3 in a given year. Each previous subgroup/year will have 3 positive results. However, since the probabilities calculated are independent of each other, the predicted values for a next year’s group will not sum to 3. Since I know the group of teams for this year, I would want the sum of the predicted values to equal my target of 3.</p> <p>I'm wondering if there is a way to properly scale/adjust these probabilities so that they add up to 3, obviously without any going over 1 (as they would if I simply scaled by sum). I am currently using scaled up/down probabilities (summing to 1) based on their proportion to the group and then simulating results based on these probabilities, but I'm not sure if this is mathematically sound.</p> <p>I'm open to other approaches and would love some guidance. Thanks!</p> Answer:
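One defensible way to impose the sum-to-3 constraint is to shift every fitted probability by a single common constant on the log-odds scale, which keeps each value strictly below 1 (unlike proportional scaling). This is a sketch of that idea, not a claim that it is the uniquely correct calibration for your league model:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

def rescale_to_total(probs, target, tol=1e-12):
    """Shift all probabilities by one constant on the log-odds scale
    (found by bisection) so they sum to `target`. Because invlogit is
    bounded by 1, no adjusted probability can exceed 1."""
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sum(invlogit(logit(p) + mid) for p in probs) < target:
            lo = mid
        else:
            hi = mid
    return [invlogit(logit(p) + (lo + hi) / 2.0) for p in probs]

probs = [0.50, 0.30, 0.20, 0.10, 0.05, 0.05, 0.01]
adjusted = rescale_to_total(probs, 3.0)
print(sum(adjusted))        # 3.0 to numerical tolerance
print(max(adjusted) < 1.0)  # True: nothing pushed past 1
```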
https://stats.stackexchange.com/questions/458913/normalizing-logistic-regression-probabilities-to-fixed-number
Question: <p>I have fitted the logistic model that has coefficient of age and level of income. The dataset has values for age 18-60 so my thinking is that since we cannot set age to 0, interpreting the intercept will not make sense. Am I thinking right? </p> Answer: <p>Exactly.</p> <p>Interpreting the intercept in <em>any</em> model only makes sense if a setting of zero for all predictors (and the reference level for factor predictors) makes sense. And setting age to zero for a model for income obviously doesn't.</p> <p>(I don't know whether discretizing income, which is a continuous variable, and using a logistic model makes a lot of sense, either.)</p>
https://stats.stackexchange.com/questions/464571/interpreting-the-logistic-model-intercept
Question: <p>I have a model with one predictor and 1 control variable. The dependent variable is binary, either 0 or 1. </p> <p>But the intercept is around 2.5? How is this possible? I thought the logistic regression would limit the function between 0 and 1?</p> Answer: <p>The logit scale goes from minus infinity to infinity. So there is nothing anomalous with it. A logit value approaching plus infinity back transforms to a probability approaching 1. And a logit value approaching minus infinity corresponds to a probability approaching zero. A logit value of zero corresponds to a probability of 0.5</p>
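A quick numeric check of the point made in the answer, using the intercept from the question:

```python
import math

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

# The intercept (like all logistic regression coefficients) lives on the
# unbounded log-odds scale; only the inverse-logit transform lands in (0, 1):
p_at_intercept = invlogit(2.5)
print(p_at_intercept)  # ~0.924: the fitted probability when both predictors are 0
print(invlogit(0.0))   # 0.5: a log-odds of zero is a 50/50 probability
```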
https://stats.stackexchange.com/questions/467542/how-can-the-intercept-of-a-logistic-regression-be-more-than-1
Question: <p>I have few exposure variables (from a survey N= 1241) two of which are 1) dichotomous response of the question: "Did you have enough water in past 30 days?" and 2) dichotomous response of "Have you spent 2 or more days without water in past 30 days?". I want to run logistic regression to see if they are related with disease outcome in the families. Now, if I combine these two variable into one: "yes"= had both enough water and did not spend days without water and "no"= either one or both of the above questions were "yes", am I doing anything wrong? Is it statistically okay to do so? Some other variables are water quality test result, access to toilet, hand washing practices etc. I don't have income data. </p> Answer:
https://stats.stackexchange.com/questions/271065/combining-exposure-variable
Question: <p>I ran a logistic regression with categorical variables. The estimates and odds ratios are: Marital_Status: estimate .6605, odds ratio 3.747; Professional_Suffix: estimate .5342, odds ratio 2.911.</p> <p>I understand that the odds ratio says: "The odds of the dependent variable happening are 3.747 times higher if someone is married than if someone is single"<br> and "The odds of the dependent variable happening are 2.911 times higher if someone has a professional suffix than if they don't".</p> <p>Question: Is there a way to say "If someone is married AND they have a professional suffix then the odds of the dependent variable happening will be ___"? Would it be Y(1) = intercept + .6605 + .5342? Or is that unnecessary to do? Should results only be looked at holding all other independent variables constant?</p> Answer: <p>The interpretation you suggest is in fact the one that is expected. Interaction effects and the effects of the constituent predictors need to be interpreted jointly, and one computes marginal effects for this.</p> <p>See e.g. Buis M. 2010. "Stata tip 87: Interpretation of interactions in nonlinear models" <em>The Stata Journal</em>, 10(2) or <a href="https://cran.r-project.org/web/packages/margins/vignettes/Introduction.html" rel="nofollow noreferrer">the equivalent in R</a>.</p> <p>You might also want to read: Brambor, T., Clark, W. R., and Golder, M. (2006). Understanding interaction models: Improving empirical analyses. <em>Political Analysis</em>, 14(1):63–82.</p>
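A sketch of the order of operations for combining the two effects, using the estimates from the question and a hypothetical intercept of -1 (the real intercept must come from the fitted model):

```python
import math

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

b_married, b_suffix = 0.6605, 0.5342  # estimates from the question
intercept = -1.0                      # hypothetical; use your model's intercept

# Sum on the log-odds scale first, then transform to a probability:
p_both = invlogit(intercept + b_married + b_suffix)
print(p_both)  # fitted probability for married AND professional suffix

# On the odds scale, effects combine multiplicatively:
# exp(b1 + b2) == exp(b1) * exp(b2)
combined_or = math.exp(b_married + b_suffix)
print(combined_or)  # combined odds ratio vs. single with no suffix
```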
https://stats.stackexchange.com/questions/283026/understanding-multiple-logistic-regression-interactions
Question: <p>I have two categorical variables and I want to run a binary logistic regression. I am stuck about checking the multicollinearity between the two and how to incorporate them.</p> Answer: <p>The predictors, let's call them var1 and var2, will be turned into dummy variables by the software performing your logistic regression. For example, if var1 and var2 were binary (i.e. values of 0 or 1), the model would look like: <span class="math-container">$logit(p)=log(\frac{p}{1-p})=\beta0+\beta1\times var1 +\beta2\times var2$</span></p> <p>However, if var2 had 3 categories -- low, medium, high for example -- then the model would look like: <span class="math-container">$logit(p)=log(\frac{p}{1-p})=\beta0+\beta1\times var1 +\beta2\times var2Medium + \beta3\times var2High$</span></p> <p>In this case, low is the 'reference category' for variable var2. When var2 is low, the dummy variable <code>var2Medium</code> is 0 and the dummy variable <code>var2High</code> is 0. If var2 is medium, then <code>var2Medium</code> is 1 and <code>var2High</code> is 0. If var2 is high, then <code>var2Medium</code> is 0 and <code>var2High</code> is 1. If there are X categories in your categorical predictor variable, you will need X-1 dummy variables to model it.</p> <p>The use of dummy variables to model categorical predictors in a regression model is crucial to all regression models, not just logistic regression. It is generally covered in beginner regression classes. <a href="https://www.youtube.com/watch?v=fTfMdCQJz4s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=fTfMdCQJz4s</a></p> <p>To check for multicollinearity, you can check the variance inflation factor of each variable. If the VIF is &gt; 2.5, you probably have some collinearity to sort out. However, for dummy variables (i.e. categorical variables with 3 or more categories, as demonstrated above), the VIF will be high and is nothing to worry about. 
Read this for more information: <a href="https://statisticalhorizons.com/multicollinearity" rel="nofollow noreferrer">https://statisticalhorizons.com/multicollinearity</a></p>
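The dummy-coding scheme described in the answer can be sketched in a few lines (variable and level names follow the answer's example; real software such as R's model.matrix does this automatically):

```python
# Dummy coding a 3-level categorical predictor: the reference level ('low')
# gets no column; 'medium' and 'high' each get a 0/1 indicator.
LEVELS = ('low', 'medium', 'high')
REFERENCE = 'low'

def dummies(value):
    return {'var2' + lvl.capitalize(): int(value == lvl)
            for lvl in LEVELS if lvl != REFERENCE}

print(dummies('low'))     # both indicators 0 (the reference category)
print(dummies('medium'))  # var2Medium = 1, var2High = 0
print(dummies('high'))    # var2Medium = 0, var2High = 1
```

With X categories you get X-1 columns, exactly as the answer states.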
https://stats.stackexchange.com/questions/513131/how-to-incorporate-two-categorical-variables-in-a-logistic-regression
Question: <p>I have done binary logistic regression for a dichotomous outcome and used 5 predictors (3 continuous and 2 dichotomous); one of the dichotomous predictors gave a very large OR and 95% CI (108.28, CI = 6.64-1764.6, $p &lt; .001$). Is such a big number okay to report, or is something wrong? Sample size was 55 cases.</p> <p>By the way, the continuous independent variables were linearly related to the logit of the dependent variable. I tested this with the Box-Tidwell procedure, adding the interaction terms between the existing continuous independent variables and their natural log transformations to the logistic regression, and they were nonsignificant.</p> Answer:
https://stats.stackexchange.com/questions/81439/the-or-and-95-ci-for-logistic-regression-were-very-high
Question: <p>Suppose I have data from coin-flip experiments done in different conditions X. I want to estimate P(X), the probability of getting heads.</p> <p>What I would normally do is try logistic regression, where I assume $P(X) = \frac{1}{1+e^{\beta (X-X_0)}}$, but in this case I know for sure that the probability is not monotonic in X. In fact, I'm specifically looking for the value of X which maximizes P(X).</p> <p>Is there a standard way of doing this, or do I have to make up my own model, such as $P(X) = \frac{1}{1+e^{\beta (X-X_0)^2}}$ ?</p> Answer:
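The quadratic-in-X logit the question proposes is a workable hand-rolled option: with beta > 0 the probability peaks at X = X_0. A sketch with entirely hypothetical parameter values:

```python
import math

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Quadratic logit: p(x) = invlogit(c - beta * (x - x0)^2).
# With beta > 0 this is maximized at x = x0. All values here are made up.
beta, x0, c = 0.5, 2.0, 1.0

def p(x):
    return invlogit(c - beta * (x - x0) ** 2)

grid = [i / 10.0 for i in range(0, 41)]  # x from 0.0 to 4.0
x_best = max(grid, key=p)
print(x_best, p(x_best))  # peak at x0 = 2.0, where p = invlogit(c)
```

In practice one would include x and x^2 as two predictors in an ordinary logistic regression, which is equivalent after expanding the square.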
https://stats.stackexchange.com/questions/154833/choice-of-discrete-non-monotonic-response-model
Question: <p>I am using logistic regression (PROC LOGISTIC) and for both of my two models, the Hosmer and Lemeshow test is significant. I also computed the AUC:</p> <p>AUC(model 1) = 0.583 and AUC(model 2) = 0.604.</p> <p>How can I choose one of them?</p> Answer: <p>Model 2 has the higher area under the ROC curve, so it appears to be slightly better.</p>
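For reference, AUC here is the area under the ROC (receiver operating characteristic) curve, which equals the probability that a randomly chosen positive case is ranked above a randomly chosen negative one. A small sketch with hypothetical scores:

```python
# Rank-based AUC (equivalent to the Mann-Whitney U statistic):
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count positive-over-negative wins, with half credit for ties:
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities and true outcomes:
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,    0,   0]
print(auc(scores, labels))  # 11/12, i.e. about 0.917
```

Note that 0.583 vs 0.604 are both barely above the 0.5 of a random ranking, so "slightly better" is doing a lot of work here.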
https://stats.stackexchange.com/questions/271025/choose-a-model-when-the-hosmer-and-lemeshow-test-is-significant
Question: <p>Hey, I want to build a model which predicts the probability of bankruptcy. One of my independent variables is categorical and takes only two values: 1 or 0. How do I decide if I should create two separate models because of this variable? Which tests should I use?</p> Answer: <p>It is totally fine to have a categorical variable in your regression model. For example, consider studying the effect of education on wages. We might write a model like:</p> <p><span class="math-container">$$wage_i = \beta_0 + \beta_1\times educ_i + \beta_2\times male_i$$</span></p> <p>Where both education and male might be categorical variables in this specification and wage might be continuous. One thing to note is that when programming this you may need to use some kind of language-specific <strong>categorical encoding</strong> for these variables. For example in R (assuming male is binary and education has multiple levels):</p> <pre><code> lm(wage~factor(educ)+male) </code></pre> <p>Since you are interested in predicting a probability, it is useful to note this holds for linear probability models, logits, or probits. To answer your last question, this alone is not enough reason to use two models; in fact, the variation in this binary variable might even be useful in explaining the outcome variable.</p>
https://stats.stackexchange.com/questions/524622/regression-analysis-two-models-instead-of-one
Question: <p>I see a lot of examples of linear regression like this:</p> <p>y = a1*x1 + a2*x2 + a3*x3 + a4*x4 + (a3*a5)*x5 + (a4*a5)*x6.</p> <p>But I would like to write something similar for a logistic regression. I am not interested in being mathematically precise, because the message I want to convey is simply:</p> <p>the outcome (is a function of) predictor-a, predictor-b, etc.</p> <p>For example,</p> <p>Diabetes Mellitus (is a function of) age, sex, HbA1c, hypertension, ischemic heart disease, chronic kidney disease, socio-economic status.</p> <p>Is there a symbol typically employed in these situations for the relationship between the outcome (the thing you are predicting) and the predictor variables?</p> <p>Thanks</p> Answer: <p>Logistic regression is a specific part of generalised linear models (GLMs); you can find more here: <a href="https://en.wikipedia.org/wiki/Generalized_linear_model" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Generalized_linear_model</a>.</p> <p>As for logistic regression, let's say that you have y as a response which follows Bernoulli(p); then</p> <p>log(p/(1-p)) = X_{i1}*beta_{1} + .... + X_{in}*beta_{n}</p> <p>where you should estimate the betas, and X_{i1} could be the age of person 1, and so on for the other variables.</p> <p>Hope this is helpful!</p>
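The answer's equation can be turned into a one-line prediction rule; here is a sketch with made-up coefficients (not from any fitted model):

```python
import math

# log(p/(1-p)) = x1*beta1 + ... + xn*betan  =>  p = 1 / (1 + exp(-x.beta))
def predict_prob(x, beta):
    xb = sum(xi * bi for xi, bi in zip(x, beta))
    return 1.0 / (1.0 + math.exp(-xb))

# Hypothetical person and coefficients (intercept term plus age):
x = [1.0, 40.0]      # [intercept term, age]
beta = [-3.0, 0.05]  # made-up estimates
p = predict_prob(x, beta)
print(p)  # log-odds = -3 + 2 = -1, so p is about 0.27
```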
https://stats.stackexchange.com/questions/526540/how-can-i-express-a-logistic-regression-equation
Question: <p>Basically, how do you convert a one unit change in <span class="math-container">$x_1$</span> to a <span class="math-container">$Z\%$</span> change in <span class="math-container">$Y$</span>?</p> Answer: <p>You can't (not without more information). The point here is that the logit / logistic is not a <a href="https://en.wikipedia.org/wiki/Affine_transformation" rel="nofollow noreferrer">linear transformation</a>. Therefore, you cannot get a constant correspondence between a starting percentage and a subsequent percentage even though you use the same log odds ratio to move from the one to the other each time. Here are a few demonstrative numbers for <span class="math-container">$1$</span>-unit changes in <span class="math-container">$X$</span> with a log odds ratio of <span class="math-container">$1$</span> (thus, the log odds will simply increase by <span class="math-container">$1$</span>):<br> <span class="math-container">\begin{array}{c} \text{starting %} &amp;\text{starting lo} &amp; &amp;\text{subsequent lo} &amp;\text{subsequent %} &amp; &amp;\text{% difference} \\ \hline 0.20 &amp;-1.37\quad &amp;\Rightarrow &amp;-0.37\quad &amp;0.40 &amp; &amp;0.20 \\ 0.50 &amp;0\quad\ &amp;\Rightarrow &amp;1.0\ \ &amp;0.73 &amp; &amp;0.23 \\ 0.90 &amp;2.20 &amp;\Rightarrow &amp;3.20 &amp;0.96 &amp; &amp;0.06 \end{array}</span></p> <p>Alternatively, if you simply want to compare two groups (which could be coded <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, for 'control' and 'treatment'), you would need to know the base rate in the control group. </p>
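The non-linearity shown in the answer's table is easy to verify; this recomputes the three rows (a log odds ratio of +1 applied to three different starting probabilities):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Adding the same log odds ratio (+1) moves different starting
# probabilities by different amounts: the mapping is non-linear.
rows = []
for start in (0.20, 0.50, 0.90):
    after = invlogit(logit(start) + 1.0)
    rows.append((start, round(after, 2), round(after - start, 2)))
    print(rows[-1])  # (start, subsequent probability, difference)
```

The differences (0.20, 0.23, 0.06) match the table: there is no single marginal effect, only one per starting point.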
https://stats.stackexchange.com/questions/423167/how-do-you-convert-a-log-odds-ratio-into-a-marginal-effect