43,801
How does the variance measure the information about the data?
Variance measures the amount of information in the data set. How? Information is a slippery concept, so it pays to be a little concrete. So specialize to the case of a regression model. You have some variables $x_1, x_2, \dotsc, x_p$, say, which you want to use to predict or explain $Y$. Let's say now that all the obs...
43,802
How does the variance measure the information about the data?
One way to think about the amount of information in a dataset is to see how spread out the data points are. For example, if you had a dataset of five identical body weights x = [120, 120, 120, 120, 120], there is very little information here as there is no spread or variability at all - knowing the weight of one person...
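The zero-spread example above can be checked numerically. This is a minimal Python sketch; the weight values are the hypothetical ones from the answer:

```python
from statistics import pvariance

# Five identical body weights: no spread, so the (population) variance is zero.
identical = [120, 120, 120, 120, 120]

# A more varied (made-up) sample of weights carries more "information" in this sense.
varied = [105, 120, 135, 150, 180]

print(pvariance(identical))  # 0
print(pvariance(varied))
```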
43,803
How does the variance measure the information about the data?
This is not about variance but rather about PCA and principal components. PCA searches for orthogonal principal components (a linear combination of variables) which maximize explained variability (or equivalently minimize the squared distance from the points to the line; think of it as $R^2$ - the higher it is the more...
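A minimal sketch of the PCA idea, assuming two correlated synthetic features. The eigenvalues of the 2x2 covariance matrix give the variance "explained" by each principal component (stdlib only, so the eigenvalues are computed from the closed form for a symmetric 2x2 matrix):

```python
import math, random

random.seed(0)

# Two correlated features: x2 is mostly a rescaled copy of x1 plus noise,
# so most of the variability lies along a single direction.
n = 1000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [2 * a + random.gauss(0, 0.3) for a in x1]

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

a, b, c = cov(x1, x1), cov(x1, x2), cov(x2, x2)

# Eigenvalues of the covariance matrix [[a, b], [b, c]]:
d = math.sqrt((a - c) ** 2 + 4 * b ** 2)
lam1, lam2 = (a + c + d) / 2, (a + c - d) / 2

# Share of the total variance "explained" by the first principal component:
ratio = lam1 / (lam1 + lam2)
print(round(ratio, 3))
```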
43,804
How does the variance measure the information about the data?
Say you have a dataset about persons with two features: weight and number of eyes. Which of these two features do you think is more informative for describing persons, i.e. given a data point, which feature makes it easier for you to identify the person it represents? Here, given enough data, weight will have a much gr...
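A numeric sketch of the weight-vs-eyes comparison; the specific records are made up for illustration:

```python
from statistics import pvariance

# Hypothetical records: (weight in kg, number of eyes) for six persons.
people = [(62, 2), (85, 2), (71, 2), (94, 2), (58, 2), (103, 2)]

weights = [w for w, _ in people]
eyes = [e for _, e in people]

# "Number of eyes" is constant here, so it cannot distinguish persons;
# weight varies a lot and is therefore far more informative in this sense.
print(pvariance(weights), pvariance(eyes))
```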
43,805
How does the variance measure the information about the data?
Others have already answered, conveying some concepts perfectly, but the major misunderstanding remains to be clarified: variance IS NOT information; it is not even vaguely linked to information. In some applications, such as PCA, it is a parameter that allows you to evaluate how much information you can extract from your data with tha...
43,806
What does a subscript on a probability represent?
Maybe this helps: $P$ is the distribution of random variable $x$ given the value of random variable $y$. And this distribution has parameters $\theta$. By varying the parameters, you get different distributions. For example, probability distribution over a random variable $x$ with uniform distribution on support $[a,b...
43,807
What does a subscript on a probability represent?
Abhinav Gupta gave a nice example (+1). The general answer is that you can use the subscript to carry descriptive information about the distribution. For example, the definition of independence can be written as $$ P_{X,Y}(x,y) = P_X(x) \, P_Y(y) $$ Or you could define mixture distribution as $$ P_X(x) =\sum_k \pi_k P_...
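The same subscript convention appears in other standard identities; a sketch following the notation of the answers (the uniform example is a hypothetical instance of a parameter subscript):

```latex
% The subscript names whose distribution is meant, e.g. in marginalization:
P_X(x) = \sum_y P_{X,Y}(x, y)

% A subscript can also carry parameters, as in a uniform distribution on [a, b]:
P_\theta(x) = \frac{1}{b-a}, \quad x \in [a, b], \qquad \theta = (a, b)
```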
43,808
Non-parametric alternative to simple t-test
Let's look at one variable at a time. As I understand it you have $n_1 =60$ observations from Population 1 which is distributed $\mathsf{Norm}(\mu_1, \sigma_1)$ and $n_2 =60$ observations from Population 2 which is distributed $\mathsf{Norm}(\mu_2, \sigma_2).$ You want to test $H_0: \mu_1 = \mu_2$ against $H_a: \mu_1 ...
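As a sketch of the usual non-parametric alternative, here is a self-contained Wilcoxon rank-sum (Mann-Whitney) test with the normal approximation; the two samples are hypothetical stand-ins for the $n_1 = n_2 = 60$ setting:

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney) test, normal approximation.

    Returns (U, p). Ties get average ranks; no tie correction in the variance,
    which is adequate as a sketch for mostly untied data.
    """
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for a tied block
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg
        i = j + 1
    m, n = len(x), len(y)
    r1 = sum(ranks[:m])                # rank sum of the first sample
    u = r1 - m * (m + 1) / 2
    mu, sd = m * n / 2, math.sqrt(m * n * (m + n + 1) / 12)
    z = (u - mu) / sd
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return u, p

# Two clearly shifted samples of 60 observations each (made up):
g1 = [i / 10 for i in range(60)]
g2 = [i / 10 + 10 for i in range(60)]
u_stat, p_val = rank_sum_test(g1, g2)
print(u_stat, p_val)
```

In practice one would reach for `scipy.stats.mannwhitneyu`, which also handles tie corrections and exact small-sample p-values.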
43,809
Non-parametric alternative to simple t-test
The t-test does not assume normality of the dependent variable; it assumes normality conditional on the predictor. (See this thread: Where does the misconception that Y must be normally distributed come from?). A simple way to condition on your grouping variable is to look at a histogram of the dependent variable, spli...
43,810
Non-parametric alternative to simple t-test
One thing to keep in mind: outside of some contexts in physics, no process in nature will generate purely normally distributed data (or data with any particular nicely behaved distribution). What does this mean in practice? It means that if you possessed an omnipotent test for normality, the test would reject 100% of...
43,811
What is the expected absolute difference between sample and population mean?
This is an addendum to @Aksakal's answer. As he points out, we need to find the value of $E[|Y|]$ where $Y \sim \mathcal N(0,\sigma^2/n)$. This can be done very straightforwardly via the law of the unco...
43,812
What is the expected absolute difference between sample and population mean?
The sample mean is going to be normal since the underlying distribution is normal. The distribution of a sample mean is $\mathcal{N}(\mu,\sigma^2/n)$. It's easy to compute the expectation of the absolute deviation then: $$\bar x-\mu\sim\mathcal{N}(0,\sigma^2/n)$$ All you need is the expectation of absolute value of a n...
43,813
What is the expected absolute difference between sample and population mean?
Consider a normal random variable $Y$ with mean $\mu$ and variance $\tau^2$, and let $Z=\frac{Y-\mu}{\tau}$ (so $Z$ is standard normal). $$\:\:E(|Z|)=2\int_0^\infty z\cdot \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}} dz$$ $\quad$ Let $u=\frac{z^2}{2}$, so $du=z \,dz$. $$\qquad=\frac{2}{\sqrt{2\pi}}\int_0^\infty e^{-u} du$$...
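A Monte Carlo sanity check of the resulting formula $E|\bar x - \mu| = \sigma\sqrt{2/(\pi n)}$; the values of $\mu$, $\sigma$, and $n$ below are arbitrary choices for illustration:

```python
import math, random

random.seed(1)

# Check E|xbar - mu| = sigma * sqrt(2 / (pi * n)) for normal samples.
mu, sigma, n, reps = 10.0, 3.0, 4, 50000

total = 0.0
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    total += abs(xbar - mu)

empirical = total / reps
theoretical = sigma * math.sqrt(2 / (math.pi * n))
print(round(empirical, 4), round(theoretical, 4))
```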
43,814
OLS - Why coefficient Beta has Normal Distribution but not t-Distribution
The result shown is correct. Indeed, in general, for any true error variance $\sigma^2$ and under the usual linear model assumptions, $$ \hat\beta \sim N_p(\beta, \sigma^2(X^\top X)^{-1}), $$ where $\beta = (\beta_1,\ldots,\beta_{p})$. On the other hand, for a single component of $\hat\beta$, say $\hat\beta_r$, we have...
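A simulation sketch of this result for a single-predictor model, where the general formula reduces to $\operatorname{Var}(\hat\beta_1) = \sigma^2 / \sum_i (x_i - \bar x)^2$; all parameter values below are arbitrary:

```python
import math, random

random.seed(2)

# Simulate the sampling distribution of the OLS slope with known sigma.
beta0, beta1, sigma = 1.0, 2.0, 1.5
x = [i / 10 for i in range(30)]
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)

slopes = []
for _ in range(10000):
    y = [beta0 + beta1 * xi + random.gauss(0, sigma) for xi in x]
    ybar = sum(y) / len(y)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    slopes.append(b1)

mean_b1 = sum(slopes) / len(slopes)
sd_b1 = math.sqrt(sum((b - mean_b1) ** 2 for b in slopes) / (len(slopes) - 1))
theoretical_sd = sigma / math.sqrt(sxx)
print(round(mean_b1, 3), round(sd_b1, 3), round(theoretical_sd, 3))
```

The empirical mean and standard deviation of the simulated slopes should match $\beta_1$ and $\sigma/\sqrt{S_{xx}}$; replacing the known $\sigma$ by $\hat\sigma$ is what turns the pivot into a t-distributed statistic.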
43,815
OLS - Why coefficient Beta has Normal Distribution but not t-Distribution
$\hat\beta_3 \sim N(\beta_3, 0.022 \sigma^2)$ is the distribution of the estimate $\hat\beta_3$ conditional on the values of $\beta_3$ and $\sigma$. In inference, you often do not know $\beta_3$ and $\sigma$ and compute a statistic $\frac{\hat\beta_3}{\hat{\sigma}}$. That is the statistic which is t-distributed (if the...
43,816
OLS - Why coefficient Beta has Normal Distribution but not t-Distribution
In OLS, we typically assume the following for the error terms $u$: A(1): $E(u) = 0$; A(2): $u \sim N(0, \sigma^2 I)$. This leads to the distribution $\widehat{\beta} \sim N(\beta, \operatorname{Var}(\widehat{\beta}))$ in small samples if assumption A(2) holds true, and even approximately in large samples if A(2) i...
43,817
Can we remove significant variables in a regression?
To just address the actual question: Significance means that there is evidence that the variable has a nonzero contribution given all other variables in the model. This means that correlation is not a valid reason to remove a significant variable, because its significance means that its contribution can not be accounte...
43,818
Can we remove significant variables in a regression?
Please do not choose variables for a regression model based on p-values. Also please do not choose variables so that you achieve results that "meet expectations". If you include variables in your model to begin with, this was presumably because they were identified as possible confounders or as variables that were not ...
43,819
Can we remove significant variables in a regression?
And this is causing the sign of the coefficient of other variables to go opposite way (not as per expectation or theory). Basically (i.e. this is an oversimplification), this means that the effect of the second variable is negative, once the effect of the first is controlled for. For example, suppose you're doing a re...
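A sign flip of exactly this kind can be reproduced with synthetic data. The sketch below uses the Frisch-Waugh idea (regress residuals on residuals) to get the partial coefficient; all data-generating choices are hypothetical:

```python
import random

random.seed(3)

def slope(u, v):
    """OLS slope of v on u (with intercept)."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    sxx = sum((a - mu) ** 2 for a in u)
    sxy = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return sxy / sxx

def residuals(u, v):
    """Residuals of regressing v on u (with intercept)."""
    b = slope(u, v)
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    a0 = mv - b * mu
    return [bi - (a0 + b * ai) for ai, bi in zip(u, v)]

n = 2000
x1 = [random.gauss(0, 2) for _ in range(n)]
x2 = [a + random.gauss(0, 1) for a in x1]           # x2 strongly tracks x1
y = [2 * a - 1 * b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

# Marginal slope of y on x2 is positive (x2 acts as a proxy for x1)...
marginal = slope(x2, y)
# ...but controlling for x1 (Frisch-Waugh: residuals on residuals)
# recovers the true negative coefficient of x2.
partial = slope(residuals(x1, x2), residuals(x1, y))
print(round(marginal, 2), round(partial, 2))
```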
43,820
Fitting known equation to data
Fitting data to an equation without free parameters First, let's clarify what it means "to see how well my data fits to the equation" when you have a fixed equation as in your original question. You have data on $x$ and $y$, and your equation: $$y=(3.5-(x/10))(x/25)^{5/2}$$ has no free parameters. Here's a plot; data p...
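A sketch of assessing fit against the fixed equation. The data here are synthetic, generated from the curve plus noise, since the original data are not shown:

```python
import random

random.seed(4)

def f(x):
    # The fixed equation from the answer, with no free parameters:
    return (3.5 - x / 10) * (x / 25) ** 2.5

# Hypothetical data: the curve plus a little noise.
xs = [5 + i for i in range(26)]           # x from 5 to 30
ys = [f(x) + random.gauss(0, 0.05) for x in xs]

# "How well does the data fit?" -- compare against the fixed curve directly:
resid = [y - f(x) for x, y in zip(xs, ys)]
ss_res = sum(r ** 2 for r in resid)
ybar = sum(ys) / len(ys)
ss_tot = sum((y - ybar) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot                  # R-squared relative to the fixed equation
rmse = (ss_res / len(xs)) ** 0.5
print(round(r2, 3), round(rmse, 3))
```

Because there are no free parameters, this $R^2$ can be negative for badly fitting data, which is itself informative.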
43,821
What does the term "Estimation error" mean?
A common decomposition of the error incurred when forming a predictive model is into three pieces. 1) Bayes Error: Even the best predictor will sometimes be wrong. Imagine predicting height based on gender. If you had the best predictor available you would still incur error because height does not depend solely on ...
43,822
What does the term "Estimation error" mean?
Found this in a research paper; hope it helps.
43,823
What does the term "Estimation error" mean?
Let $F$ be a family of functions, $f^\prime$ the best function given training dataset $D_n$, and $R(f)$ a function that gives the estimated loss of a given function $f$. $R^*$ is the minimum statistical risk (true risk) over all functions (including but not limited to those in $F$). Expected Risk - Minimum Statist...
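With this notation, the standard decomposition of the excess risk can be written as follows (a sketch following the answer's symbols):

```latex
% Excess risk of the learned f' splits into estimation + approximation error:
R(f') - R^*
  = \underbrace{\Big(R(f') - \inf_{f \in F} R(f)\Big)}_{\text{estimation error}}
  + \underbrace{\Big(\inf_{f \in F} R(f) - R^*\Big)}_{\text{approximation error}}
```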
43,824
What does the term "Estimation error" mean?
The intuition is this: imagine you have to listen to what someone's saying and transcribe it. If you're sitting in a quiet room with a person, it's much easier to do than in a night club where music is blasting. The person is saying the same thing in the same voice, but it's harder to catch what he's saying because of t...
43,825
Is winning a soccer match independent of previous wins/losses?
It is often the case in sports analytics that people ask questions about more ethereal concepts like momentum, clutch, or home-field advantage. At the surface it would sound silly to say that these things don't exist. However, whether or not they exist is a separate question from whether or not we can meaningfully us...
43,826
Is winning a soccer match independent of previous wins/losses?
I think most people will agree that successive outcomes of soccer matches (of the same team?!) are not independent of each other. Clearly there are factors, such as injured players, making matches that are close in time dependent. The exact nature of these invisible ties is nearly impossible to state correctly and in ...
43,827
Is winning a soccer match independent of previous wins/losses?
I am a beginner in sports analyses but perhaps a quick empirical example: There is a package "vcd" which contains all soccer games in the German Bundesliga from 1963 to 2008. We can use this dataset to have a look whether we see some (preliminary) evidence for a correlation between the performance across three consecut...
43,828
Is winning a soccer match independent of previous wins\losses?
I found an article related to this. There the runs test and the chi squared goodness of fit test is used to test if the number of winning streaks is in line with the theoretical expectation under independence. Google: Winning Streaks in Sports and the Misperception of Momentum.
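The runs test the article applies is easy to sketch. For a binary sequence with $n_1$ ones and $n_2$ zeros ($n = n_1 + n_2$), the number of runs $R$ has $E[R] = 2n_1 n_2/n + 1$ and $\operatorname{Var}[R] = 2n_1 n_2(2n_1 n_2 - n)/(n^2(n-1))$ under independence; a minimal implementation (my own sketch, not the article's code):

```python
import math

def runs_test(seq):
    """Wald-Wolfowitz runs test: returns (number of runs, z-statistic)."""
    n1 = sum(1 for s in seq if s == 1)
    n2 = len(seq) - n1
    n = n1 + n2
    runs = 1 + sum(1 for i in range(1, n) if seq[i] != seq[i - 1])
    exp_r = 2.0 * n1 * n2 / n + 1.0
    var_r = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1))
    return runs, (runs - exp_r) / math.sqrt(var_r)
```

A large positive z means more alternation than expected; a large negative z means fewer runs, i.e. longer streaks than independence would predict.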
43,829
Random number generation using t-distribution or laplace distribution
Here's how to do this in Matlab using TINV from the Statistics Toolbox: %# choose the degrees of freedom df = 4; %# note you can also choose an array of df's if necessary %# create a vector of 100,000 uniformly distributed random variables uni = rand(100000,1); %# look up the corresponding t-values out = tinv(uni,df);...
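The same idea can be sketched outside Matlab. Python's standard library has no $t$ inverse CDF, but the classical construction $Z/\sqrt{V/\text{df}}$ (standard normal over a scaled chi-square) draws from the same distribution using only stdlib functions; this is a sketch of an alternative route, not the tinv approach itself:

```python
import math
import random

def t_variate(df, rng=random):
    """Draw one Student-t variate as Z / sqrt(V/df),
    with Z ~ N(0,1) and V ~ chi-square(df), i.e. Gamma(df/2, scale 2)."""
    z = rng.gauss(0.0, 1.0)
    v = rng.gammavariate(df / 2.0, 2.0)
    return z / math.sqrt(v / df)

random.seed(0)
sample = [t_variate(4) for _ in range(20000)]
```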
43,830
Random number generation using t-distribution or laplace distribution
Easy answer: use R and get n random draws from a $t$-distribution with df degrees of freedom via rt(n, df). If you don't use R, perhaps you can say which language you use, and others may be able to tell you precisely what to do. If you don't use R or another language with a built-in random number generator for the $t$-distribut...
43,831
Random number generation using t-distribution or laplace distribution
By looking at the Wikipedia article, I've written a function to generate random variables from the Laplace distribution. Here it is: function x = laplacernd(mu,b,sz) %LAPLACERND Generate Laplacian random variables % % x = LAPLACERND(mu,b,sz) generates random variables from a Laplace % distribution having parameter...
43,832
Random number generation using t-distribution or laplace distribution
The best (fastest to run, not fastest to code;) free solution I have found in Matlab was to wrap R's MATHLIB_STANDALONE c library with a mex function. This gives you access to R's t-distribution PRNG. One advantage of this approach is that you also can use the same trick to get variates from a non-central t distributio...
43,833
Random number generation using t-distribution or laplace distribution
You can use the same approach that was described in response to your question about generating random numbers from a t-distribution. First generate uniformly distributed random numbers from (0,1) and then apply the inverse cumulative distribution function of the Laplace distribution, which is given in the Wikipedia ar...
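As a sketch of that recipe (function names here are my own), the Laplace inverse CDF has a closed form, so the inverse transform method needs only a uniform draw and a logarithm:

```python
import math
import random

def laplace_inv_cdf(u, mu=0.0, b=1.0):
    """Inverse CDF of the Laplace(mu, b) distribution, for 0 < u < 1."""
    if u < 0.5:
        return mu + b * math.log(2.0 * u)
    return mu - b * math.log(2.0 * (1.0 - u))

def laplace_variate(mu=0.0, b=1.0, rng=random):
    """One Laplace draw via the inverse transform method."""
    return laplace_inv_cdf(rng.random(), mu, b)
```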
43,834
Correct formula for MSE
Assuming that the slide is talking about linear regression with one input variable, i.e. $$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$$, the correct formula for MSE is: $$ \operatorname{MSE} = \frac{1}{n-2} \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2 \ . $$ To reiterate, for the specific case of a linear model with only one in...
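As a quick check of the formula on made-up data, here is a pure-Python simple-regression fit that divides the residual sum of squares by $n-2$ (two estimated parameters):

```python
def simple_ols_mse(xs, ys):
    """Fit y = b0 + b1*x by least squares and return the MSE
    with the n-2 denominator."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    return sse / (n - 2)
```

For perfectly linear data the MSE is 0, as it should be, and for noisy data the $n-2$ denominator makes the estimate unbiased for the error variance.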
43,835
Correct formula for MSE
Both are correct. As said by blooraven (+1), this is the same kind of correction as in the unbiased estimator for sample variance. The second formula, used with linear regression, corrects for the number of degrees of freedom. Notice that the second formula would not make sense in every context. Some models can be use...
43,836
Impose a condition on neural network
A dirt-simple solution is to add a regularization term, so your loss function is $\text{loss} + \lambda \text{ReLU} (i_3 - O)$. This adds a penalty whenever your inequality is violated, so the model will tend to respect the constraint. While this solution is inexact, it will be more challenging to solve this exactly be...
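A scalar sketch of this penalty (the weight `lam` is a hypothetical hyperparameter you would tune): the hinge term is zero whenever the constraint $O \geq i_3$ holds, and grows linearly with the violation otherwise.

```python
def relu(x):
    return max(0.0, x)

def penalized_loss(base_loss, i3, output, lam=10.0):
    """Base loss plus a hinge penalty that activates only when
    the constraint output >= i3 is violated."""
    return base_loss + lam * relu(i3 - output)
```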
43,837
Impose a condition on neural network
Could you just let the output be unconstrained, and then postprocess by doing something like $O + i3$? You can even put this directly into your loss function.
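One hedged variant of this idea (my own sketch, not from the thread): if the constraint is $O \geq i_3$, let the network emit an unconstrained raw value and add a softplus of it to $i_3$, so the constraint holds by construction rather than by penalty.

```python
import math

def softplus(x):
    # numerically stable log(1 + exp(x))
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def constrained_output(raw, i3):
    """Map an unconstrained network output 'raw' to a value >= i3."""
    return i3 + softplus(raw)
```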
43,838
Impose a condition on neural network
After spending a good amount of time, I finally found how to implement the solution in the keras/tensorflow library, with regard to the previous useful answers to my question. First, if we want to implement a custom keras loss function with some parameters that also accesses the inputs, we have to define: def custom_loss(alpha): def ...
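Stripped of the Keras machinery, the factory pattern used here is an ordinary Python closure: the outer function bakes the parameter in, the inner function is what the framework calls. The inner formula below is purely illustrative, not the loss from the thread.

```python
def make_custom_loss(alpha):
    """Factory returning a loss function with alpha captured by closure,
    mirroring the pattern used for parameterized Keras losses."""
    def custom_loss(y_true, y_pred):
        err = (y_true - y_pred) ** 2
        return err + alpha * abs(y_true - y_pred)
    return custom_loss

loss_fn = make_custom_loss(alpha=0.5)
```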
43,839
Why does $\mu > 0$ (or even $\mu > \epsilon$) "seem easier" to substantiate than $\mu \neq 0$?
There are some misconceptions in your question that I need to clear up before getting to the answer. The null hypothesis $H_0$ in a statistical test is always the claim you want to argue against. The alternative hypothesis $H_1$ is the claim you hope to be true. The null and the alternative need to be mutually exclusi...
43,840
Why does $\mu > 0$ (or even $\mu > \epsilon$) "seem easier" to substantiate than $\mu \neq 0$?
the data may strongly support $\mu > 0$, but does not constitute sufficient evidence for $\mu \neq 0$ I don't know if this reasoning helps or if it is 100% correct... You accept to be wrong $\alpha$-percent of the times in hypothetical repeats of the test; for example, you accept to be wrong 5% of the times in rejecti...
43,841
Why does $\mu > 0$ (or even $\mu > \epsilon$) "seem easier" to substantiate than $\mu \neq 0$?
Hypothesis testing: why $\mu > 0$ (or even $\mu > \epsilon$) "seems easier" to substantiate than $\mu \neq 0$? It seems easier because the one-sided t-test and two-sided t-test have different sensitivity for different values. The two-sided t-test has sensitivity split for both positive and negative values. The one-si...
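The sensitivity difference can be made concrete with a small power calculation for a z-test (a simplified stand-in for the t-test; `delta` is the true standardized effect):

```python
from statistics import NormalDist

def z_test_power(delta, alpha=0.05, two_sided=False):
    """Power of a z-test of H0: mu = 0 when the true mean,
    divided by the standard error, equals delta."""
    nd = NormalDist()
    if two_sided:
        z = nd.inv_cdf(1 - alpha / 2)
        return (1 - nd.cdf(z - delta)) + nd.cdf(-z - delta)
    z = nd.inv_cdf(1 - alpha)
    return 1 - nd.cdf(z - delta)
```

For any positive effect, the one-sided test has higher power than the two-sided one at the same $\alpha$, which is exactly why $\mu > 0$ "seems easier" to establish.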
43,842
CLT may fail under this condition?
The central limit theorem applies to infinite sequences of random variables $X_1,X_2,X_3,...$ rather than finite vectors of random variables. This is evident in the fact that we take the limit $n \rightarrow \infty$ in these theorems. So what this means is that these problems deal implicitly with an infinite populati...
43,843
CLT may fail under this condition?
If you're going to run an asymptotic argument with $n$ close to $N$, you need sequences of finite populations and samples. For each $m=1,2,3,\dots$ suppose you have a population of size $N_m$ and a sample of size $n_m$, with $N_m\geq n_m$ and $n_m\to\infty$. We'll need some assumptions about the populations; we can g...
43,844
CLT may fail under this condition?
No, it does not, since Gaussian distribution converges to Dirac Delta for very small s.d. $\epsilon$, $\delta(x)=\lim\limits_{\epsilon\to 0^+}\frac{e^{-\frac{1}{2}\frac{x^2}{\epsilon^2}}}{\epsilon\sqrt{2\pi}}$. By CLT, for i.i.d. r.v.s $X_1,\ldots, X_n$, with population mean $\mu$ and finite variance $\sigma^2$, $\bar{...
43,845
CLT may fail under this condition?
My question is: if the sample size goes closer or even equal to the population size, wouldn't the average values cram so closely near the mean, that violate the "bell shape" distribution? Indeed, if you sample without repetition from a population, then for $n$ closer to the population size you get a distribution that ...
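The shrinking spread can be made explicit with the finite population correction: under sampling without replacement, the variance of the sample mean is $\frac{\sigma^2}{n}\cdot\frac{N-n}{N-1}$, which is exactly 0 when $n = N$. A one-line sketch:

```python
def var_of_mean_without_replacement(sigma2, n, N):
    """Variance of the sample mean when drawing n of N units
    without replacement (finite population correction)."""
    return (sigma2 / n) * (N - n) / (N - 1)
```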
43,846
CLT may fail under this condition?
Random variables are generally modelled as coming from an infinite population. Even when we're sampling from a finite set of instances, we can still view the population as being infinite. For instance, if you have an urn with 10 red balls and 30 green balls, you can consider your random variable to be "the color of a r...
43,847
What's the advantage of importance sampling? [closed]
Importance resampling is not for plotting the PDF. It is for sampling that PDF. Sometimes, even if you know the formula of the target PDF, it's hard to sample from it. For example, a typical method is the inverse transform method, where you need to be able to analytically calculate the inverse of the CDF. A typical example...
43,848
What's the advantage of importance sampling? [closed]
Importance sampling is a Monte Carlo integration method that can be used to estimate the expected value of a function of a random variable. The method is useful in cases where the PDF is known, but the expected value of interest is unknown (and cannot be computed analytically from the PDF). In these cases, the method...
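A minimal sketch of this use case (function and parameter names are mine): estimating the small tail probability $P(X > 3)$ for a standard normal by sampling from a proposal centred at the threshold and reweighting by the density ratio, which puts most draws where the rare event happens.

```python
import math
import random

def normal_pdf(x, mu=0.0, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def tail_prob_importance(threshold=3.0, n=50000, seed=1):
    """Importance-sampling estimate of P(X > threshold), X ~ N(0,1),
    using the proposal N(threshold, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(threshold, 1.0)
        if y > threshold:
            # weight = target density / proposal density
            total += normal_pdf(y) / normal_pdf(y, mu=threshold)
    return total / n
```

The true value is about 0.00135; a naive Monte Carlo estimate of the same quantity would need far more draws for comparable accuracy.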
43,849
What's the advantage of importance sampling? [closed]
The code is actually producing samples by sampling importance resampling (Rubin 1987), that is by drawing points from the original iid sample $(y_1,\ldots,y_n)\sim\mathcal U(0,1)$ x_samples = rand(N,1); according to the distribution $$\mathbb P(X=y_i)=\omega_i\big/\sum_{j=1}^n \omega_j$$ where the $\omega_j$'s are th...
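The sampling importance resampling scheme described here can be sketched in a few stdlib lines (my own sketch of the general recipe, not the original Matlab code): draw uniforms, weight each by the target density, then resample with those weights.

```python
import random

def sir_sample(target_pdf, n_draws=5000, n_keep=2000, seed=2):
    """Sampling importance resampling on (0, 1): draw uniforms,
    weight by the (possibly unnormalized) target density,
    then resample with probability proportional to the weights."""
    rng = random.Random(seed)
    ys = [rng.random() for _ in range(n_draws)]
    ws = [target_pdf(y) for y in ys]
    return rng.choices(ys, weights=ws, k=n_keep)

# Target: Beta(2, 2) density on (0, 1), known only up to its constant
sample = sir_sample(lambda x: x * (1.0 - x))
```

Note the normalizing constant of the target cancels in the resampling weights, which is what makes the method practical.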
43,850
Unexpected residuals plot of mixed linear model using lmer (lme4 package) in R
Your residual structure is totally expected with this model specification and an indication of an ill-specified model. What you basically are trying to do is to fit a straight line through points that can only take values of 0 and 1 on the $y$-axis. Let's look at a simple example with arbitrarily generated variables: #--...
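A tiny simulation with made-up data shows why: fitting a straight line to a 0/1 outcome forces the residuals onto exactly two parallel bands, one for $y=0$ and one for $y=1$, since the residual is always $y - \hat y$ with $y \in \{0, 1\}$.

```python
def fit_line(xs, ys):
    """Least-squares intercept and slope."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b1 * xbar, b1

# Binary outcome forced through a linear fit
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
ys = [0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0]
b0, b1 = fit_line(xs, ys)
resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
```

Because $r_i + b_1 x_i = y_i - b_0$ takes only the two values $-b_0$ and $1-b_0$, a residual-vs-x plot shows two straight descending bands, which is the pattern the question describes.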
43,851
In linear regression, is the $R^2$ value enough to assess whether the relationship between the independent and dependent variable is linear?
If you look at Anscombe's quartet you can see examples of linear with noise, linear with outliers and non-linear sets of data with the same $r^2$, means and variances. This image is from the Wikipedia article
43,852
In linear regression, is the $R^2$ value enough to assess whether the relationship between the independent and dependent variable is linear?
Usually not. The model $$y_i = \beta + \varepsilon_i,$$ $\varepsilon \sim \text{iid}$, $\mathbb{E}[\varepsilon]=0$ for the relation between $(y_i)$ and $(x_i)$ is perfectly linear, yet has an $r^2$ of zero. For other examples of what $r^2$ does not say about linearity, see the illustrations in my reply at Is $R^2$ use...
43,853
In linear regression, is the $R^2$ value enough to assess whether the relationship between the independent and dependent variable is linear?
In addition to the above answers, a commonly used (in econometrics) test for general regression nonlinearity is Ramsey's RESET test. Suppose you ran your main regression and obtained residuals $\hat\epsilon_i$ and fitted values $\hat y_i$ in it. Then RESET test is the test of the overall significance in an auxiliary re...
43,854
What is the best tool for customer segmentation?
I'm afraid you are mistaking software programs and statistical algorithms for thinking, judging beings. No tool can give you the Good, the Bad, and the Ugly. You'll have to exercise your own judgment along the way! What you need is not so much a tool but well-thought-out criteria for classifying each customer. Then...
43,855
What is the best tool for customer segmentation?
Survival analysis of LTV (lifetime value) is a good place to start. It's pretty basic, but it gets the job done. But there is a lot of business intelligence work that you could do with what you have. If you have response rates to advertisements and such it could also provide you with a good way to look at effectiveness...
43,856
What is the best tool for customer segmentation?
I would suggest that, with your limited data (and perhaps limited experience with clustering), you simply create an RFM coding and separate it into the three bins you desire. Otherwise, cluster analysis on the data is a basic method for customer segmentation based on transactional variables (of course your dates have to become...
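An RFM (recency, frequency, monetary) coding is easy to compute straight from a transaction log. A minimal sketch in Python (the transactions, reference date, and rank-sum scoring rule are all made up for illustration, not part of the original answer):

```python
from datetime import date

# hypothetical transaction log: (customer, purchase date, amount)
txns = [
    ("a", date(2023, 1, 5), 20.0), ("a", date(2023, 6, 1), 35.0),
    ("b", date(2022, 3, 2), 15.0),
    ("c", date(2023, 6, 20), 80.0), ("c", date(2023, 6, 25), 60.0),
    ("c", date(2023, 5, 1), 40.0),
]
today = date(2023, 7, 1)

# recency / frequency / monetary per customer
summary = {}
for cust, d, amt in txns:
    last, freq, mon = summary.get(cust, (d, 0, 0.0))
    summary[cust] = (max(last, d), freq + 1, mon + amt)
summary = {c: {"recency_days": (today - last).days, "frequency": f, "monetary": m}
           for c, (last, f, m) in summary.items()}

# rank each dimension (1 = best) and sum the ranks; lower total = better bin
def rank(metric, best_low):
    order = sorted(summary, key=lambda c: summary[c][metric], reverse=not best_low)
    return {c: i + 1 for i, c in enumerate(order)}

total = {c: rank("recency_days", True)[c] + rank("frequency", False)[c]
            + rank("monetary", False)[c] for c in summary}
best_to_worst = sorted(total, key=total.get)
print(best_to_worst)
```

Cutting the rank totals into terciles then gives the three bins; on a real customer base you would bin each dimension by its empirical quantiles instead of simple ranks.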
43,857
What is the best tool for customer segmentation?
Generally I would agree with rolando2. However, if you are interested in unsupervised categorization, there are methods that exist that can provide you with unlabeled groups of your data. One such method is latent Dirichlet allocation (LDA), which has been used for automatic topic discovery. K-Means might be a better fit for y...
43,858
What is the best tool for customer segmentation?
One way to approach this is to build a probability model of the customer data. If you have some understanding of the customer level behavior, you can model this and make predictions of who are your most valuable customers. For example, you could assume that customers make purchases at a constant rate until they 'die.'...
43,859
What is the best tool for customer segmentation?
You can look at the problem as one with multiple objectives. Let's say a good customer is one who: Spends a high average amount per purchase (Brings in money) Makes many purchases (Shows trust) Makes purchases over a long duration of time (Shows loyalty) The corresponding objectives are therefore: Maximize $Average...
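With several objectives and no agreed weighting, one option is to report the Pareto-efficient customers: those not beaten on every objective by someone else. A minimal sketch in Python (the customer tuples are made-up values for the three objectives above):

```python
# (avg_purchase, n_purchases, days_active) per customer -- hypothetical data
customers = {
    "a": (50.0, 10, 400),
    "b": (80.0, 3, 120),
    "c": (45.0, 12, 500),
    "d": (40.0, 2, 60),
}

def dominates(u, v):
    """u dominates v if it is >= on every objective and > on at least one."""
    return all(x >= y for x, y in zip(u, v)) and any(x > y for x, y in zip(u, v))

# keep customers that no other customer dominates
pareto = [c for c, u in customers.items()
          if not any(dominates(v, u) for c2, v in customers.items() if c2 != c)]
print(sorted(pareto))
```

Here "d" is dominated by "a" on all three objectives and drops out, while "a", "b", and "c" each trade off one objective against another, so all three sit on the Pareto front.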
43,860
What is the best tool for customer segmentation?
If you want to use a probabilistic approach, which has already been mentioned by aaronjg, have a look at the R package CLVTools (https://cran.r-project.org/web/packages/CLVTools/index.html). As an output, you basically get an estimate for every customer in terms of his/her future value to a business. Based on this variab...
43,861
Exponentially weighted moving linear regression
Sounds like what you want to do is a two-stage model. First transform your data into exponentially smoothed form using a specified smoothing factor, and then input the transformed data into your linear regression formula. http://www.jstor.org/pss/2627674 http://en.wikipedia.org/wiki/Exponential_smoothing
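A minimal sketch of the two-stage idea in Python (the smoothing factor, the toy series, and the simple-regression helper are illustrative choices, not taken from the linked paper):

```python
def exp_smooth(xs, alpha):
    """Stage 1: exponentially smooth the series."""
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def ols_line(x, y):
    """Stage 2: ordinary least squares on the smoothed values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return my - beta * mx, beta

t = list(range(20))
y = [3 + 2 * ti + ((-1) ** ti) * 1.5 for ti in t]   # trend plus alternating noise
alpha_hat, beta_hat = ols_line(t, exp_smooth(y, alpha=0.3))
print(alpha_hat, beta_hat)
```

The smoothing damps the alternating noise before the regression; note it also lags the trend, so the intercept (and, over a short transient, the slope) is shifted relative to a regression on the raw series.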
43,862
Exponentially weighted moving linear regression
Sure, just add a weights= argument to lm() (in case of R):

R> x <- 1:10    ## mean of this is 5.5
R> lm(x ~ 1)    ## regression on constant computes mean

Call:
lm(formula = x ~ 1)

Coefficients:
(Intercept)
        5.5

R> lm(x ~ 1, weights=0.9^(seq(10,1,by=-1)))

Call:
lm(formula = x ~ 1, weights = 0.9^(seq(10, 1...
43,863
Exponentially weighted moving linear regression
If you are looking for an equation of the form $$y=\alpha_n + \beta_n x$$ after $n$ pieces of data have come in, and you are using an exponential factor $k \ge 1$ then you could use $$\beta_n = \frac{\left(\sum_{i=1}^n k^i\right) \left(\sum_{i=1}^n k^i X_i Y_i\right) - \left(\sum_{i=1}^n k^i X_i\right) \left(\sum_...
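The same estimator can be written with generic weights $w_i = k^i$ and checked numerically. A minimal sketch in Python (an illustrative implementation of these weighted sums, not code from the answer):

```python
def ew_linfit(xs, ys, k):
    """Closed-form weighted least squares with weights k^i, i = 1..n.

    For k > 1 the more recent points (higher i) receive more weight.
    """
    w = [k ** i for i in range(1, len(xs) + 1)]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, xs))
    Sy = sum(wi * yi for wi, yi in zip(w, ys))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    beta = (S * Sxy - Sx * Sy) / (S * Sxx - Sx ** 2)
    alpha = (Sy - beta * Sx) / S        # intercept through the weighted means
    return alpha, beta

# sanity check: on exactly linear data the weights cannot change the answer
xs = list(range(10))
ys = [1 + 2 * x for x in xs]
alpha, beta = ew_linfit(xs, ys, k=1.1)
print(alpha, beta)
```

Maintaining the five running sums incrementally (each multiplied by $k$ before adding the new term) turns this into an O(1)-per-observation online estimator.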
43,864
Exponentially weighted moving linear regression
Yes you can. The method you are looking for is called the exponentially weighted least squares method. It is a variation on the recursive least squares method: \begin{align} \hat{\Theta}(k+1)&=\hat{\Theta}(k)+K[z(k+1)-x^T(k+1)\hat{\Theta}(k)] \\ K(k+1) &= D(k)x(k+1)\left[\lambda+x^T(k+1)D(k)x(k+1)\right]^{-1} \\ D(k+1) &=\frac{1}{\lambda}\bigg(D(k)-D(k)x(k+1)\bigg...
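For a model with an intercept and one regressor, the recursion can be sketched with plain 2×2 arithmetic. A minimal sketch in Python (the initial values, the forgetting factor λ, and the noiseless test series are illustrative; D(k) plays the role of the scaled inverse information matrix):

```python
def rls_step(theta, D, x, z, lam):
    """One step of recursive least squares with forgetting factor lam."""
    Dx = [D[0][0] * x[0] + D[0][1] * x[1],
          D[1][0] * x[0] + D[1][1] * x[1]]
    denom = lam + x[0] * Dx[0] + x[1] * Dx[1]
    K = [Dx[0] / denom, Dx[1] / denom]              # gain vector
    err = z - (x[0] * theta[0] + x[1] * theta[1])   # one-step prediction error
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # D <- (D - K x^T D) / lam; x^T D equals Dx here because D stays symmetric
    D = [[(D[i][j] - K[i] * Dx[j]) / lam for j in range(2)] for i in range(2)]
    return theta, D

theta, D = [0.0, 0.0], [[1e6, 0.0], [0.0, 1e6]]     # vague initial guess
for t in range(50):
    xt = [1.0, float(t)]
    theta, D = rls_step(theta, D, xt, 1.0 + 2.0 * t, lam=0.95)   # y = 1 + 2x
print(theta)
```

On noiseless data from y = 1 + 2x the estimate converges to (1, 2); with λ < 1 older observations are discounted geometrically, which is what makes the fit "moving".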
43,865
Exponentially weighted moving linear regression
I'm not sure of the actual relationship of this to exponentially weighted moving linear regression, but a simple online formula for estimating an exponentially-weighted slope and offset is called Holt-Winters double exponential smoothing. From the Wikipedia page: Given a time series $x_0 ... x_t$, and smoothing parame...
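The two recursions are easy to state in code. A minimal sketch in Python (the initialization s₀ = x₀, b₀ = x₁ − x₀ is one common convention; α, β, and the toy series are illustrative):

```python
def holt(xs, alpha, beta):
    """Holt's double exponential smoothing: level s and trend (slope) b."""
    s, b = xs[0], xs[1] - xs[0]
    levels, trends = [s], [b]
    for x in xs[1:]:
        s_prev = s
        s = alpha * x + (1 - alpha) * (s + b)     # update level
        b = beta * (s - s_prev) + (1 - beta) * b  # update trend estimate
        levels.append(s)
        trends.append(b)
    return levels, trends

xs = [3 + 2 * t for t in range(30)]               # exact linear trend, slope 2
levels, trends = holt(xs, alpha=0.5, beta=0.5)
print(levels[-1], trends[-1])                     # tracks the line: level = x, trend = 2
```

On an exactly linear series the level tracks the data with zero lag and the trend equals the true slope, which is the sense in which the trend term plays the role of an exponentially weighted slope.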
43,866
Exponentially weighted moving linear regression
If you form the Transfer Function Model y(t)=W(B)*X(t)+[THETA(B)/PHI(B)]*a(t), the operator [THETA(B)/PHI(B)] is the "smoothing component". For example, if PHI(B)=1.0 and THETA(B)=1-.5B this would imply a set of weights of .5,.25,.125,... . In this way you could provide the answer to optimizing the "weighted moving line...
43,867
Exponentially weighted moving linear regression
First time here, first time posting, probably incorrect, but bear with me. So the classical linear regression calculation is as follows: \begin{align} y=\alpha + \beta \cdot x \\\\ \alpha=\frac{\left(\sum_{i}^N Y_i\right)\left(\sum_{i}^N X_i^2\right) - \left(\sum_{i}^N X_i\right)\left(\sum_{i}^N X_i\cdot Y_i\right)}{N\...
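Those sums can be checked numerically. A minimal sketch in Python (a direct translation of the textbook closed-form formulas, with an exact-fit sanity check):

```python
def ols_closed_form(X, Y):
    """Classical closed-form simple linear regression (intercept, slope)."""
    N = len(X)
    Sx, Sy = sum(X), sum(Y)
    Sxx = sum(x * x for x in X)
    Sxy = sum(x * y for x, y in zip(X, Y))
    denom = N * Sxx - Sx ** 2
    alpha = (Sy * Sxx - Sx * Sxy) / denom     # intercept
    beta = (N * Sxy - Sx * Sy) / denom        # slope
    return alpha, beta

X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [4 + 3 * x for x in X]                    # exact line y = 4 + 3x
print(ols_closed_form(X, Y))                  # → (4.0, 3.0)
```

Replacing each plain sum with an exponentially weighted sum (weights k^i, plus a weighted N = Σk^i) turns this into the weighted estimator discussed in the other answers.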
43,868
Why not use the same distribution for the prior in Bayesian statistics?
For example, the binomial distribution has the Beta distribution as the prior. They are very similar except for a normalization constant due to the Beta distribution. Why not use another binomial distribution (which can be also uninformative) as the prior? Actually, it's a great counterexample. The beta-binomial model...
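The beta-binomial conjugacy is easy to verify numerically: multiply a Beta(a, b) prior by the binomial likelihood on a grid and compare with the closed-form Beta(a+k, b+n−k) posterior. A minimal sketch in Python (a = 2, b = 2, k = 7, n = 10 are illustrative numbers):

```python
import math

def beta_pdf(x, a, b):
    """Beta density via log-gamma, to avoid overflow."""
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_norm)

a, b, k, n = 2.0, 2.0, 7, 10
grid = [i / 100 for i in range(1, 100)]

# prior times binomial likelihood, renormalized on the grid
unnorm = [beta_pdf(p, a, b) * math.comb(n, k) * p ** k * (1 - p) ** (n - k)
          for p in grid]
posterior = [u / (sum(unnorm) * 0.01) for u in unnorm]

# closed-form conjugate posterior Beta(a + k, b + n - k), renormalized the same way
closed = [beta_pdf(p, a + k, b + n - k) for p in grid]
closed = [c / (sum(closed) * 0.01) for c in closed]

print(max(abs(u - c) for u, c in zip(posterior, closed)))  # ~ 0: same curve
```

The agreement is exact up to floating point: the prior-times-likelihood product is proportional to the Beta(a+k, b+n−k) density, which is precisely what conjugacy means.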
43,869
Why not use the same distribution for the prior in Bayesian statistics?
The nature of your question itself suggests a conceptual misunderstanding. When we consider a binomial PMF, e.g. $$X \sim \operatorname{Binomial}(n, p)$$ with $$\Pr[X = x] = \binom{n}{x} p^x (1-p)^{n-x}, \quad x \in \{0, 1, \ldots, n\},$$ the support of this random variable is $X \in \{0, 1, 2, \ldots, n\}$. This repr...
43,870
Why not use the same distribution for the prior in Bayesian statistics?
While I can't speak for textbook authors, I can think of two reasons why they might choose to do this. Both come from the fact that a conjugate prior will lead to a posterior distribution of a known distribution family. (1) Simplicity: A well-written textbook should distill the content to the key points, and an exampl...
43,871
What does standard deviation mean in this case?
Interpretation of the Mean When we say that the average value spent on meals was 50 USD - it means that if we take the total amount spent on fast foods and equally divide the sum among all the people who made the purchase - each person would get 50 USD. However, this number hides a lot of information. We can get an ave...
43,872
What does standard deviation mean in this case?
I'm surprised no one answered with the rule of thumb regarding the standard deviation in data with normal distributions: the 68-95-99.7 rule. (See the Wikipedia article.) If you have normally-distributed data, about 68% of observations fall within the range of the mean ± the standard deviation. And about 95% of obser...
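For the meal example (mean 50 USD, standard deviation 7 USD) the rule can be checked with the standard library. A minimal sketch in Python, assuming the spending really is normally distributed:

```python
from statistics import NormalDist

spend = NormalDist(mu=50, sigma=7)       # assumed normal model of meal spending
for s in (1, 2, 3):
    lo, hi = 50 - s * 7, 50 + s * 7
    frac = spend.cdf(hi) - spend.cdf(lo)  # probability of falling in mean +/- s sd
    print(f"within {s} sd ({lo}..{hi} USD): {frac:.4f}")
```

Under normality, about 68% of customers spend between 43 and 57 USD, about 95% between 36 and 64, and about 99.7% between 29 and 71; for a skewed spending distribution these fractions would differ.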
43,873
What does standard deviation mean in this case?
Standard deviation refers to the distribution of the data (or the distribution from which the data were drawn). Standard error refers to estimating a parameter. It is not quite right to say that people tend to spend between 43 and 57, but that is closer to the correct interpretation. Some of the confusion comes from th...
43,874
What does standard deviation mean in this case?
Suppose that someone was collecting samples and he was trying to estimate the average amount of money people spend on purchasing fast food meals. The calculated mean was USD 50 and the standard deviation was USD 7. This doesn't tell us enough information. But that is ok, we can tease out the likely answer. So, our pr...
43,875
Why is the intercept changing in a logistic regression when all predictors are standardized?
As Noah says, but just with formulas ... Consider logistic regression $$ \Pr(Y=1) = \frac{\exp(\beta_0 + \mathbf x^\top\beta)}{1+ \exp(\beta_0 + \mathbf x^\top\beta)}$$ and then of course $$ \Pr(Y=0) = 1- \Pr(Y=1)=1 - \frac{\exp(\beta_0 + \mathbf x^\top\beta)}{1+ \exp(\beta_0 + \mathbf x^\top\beta)} = \frac{1}{1+\exp(\b...
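The reparameterization identity can be confirmed numerically: fit the same logistic regression on raw and standardized $x$ and check that $\beta_0^* = \beta_0 + \beta_1\mu$ and $\beta_1^* = \beta_1\sigma$, so the fitted probabilities coincide while the intercept moves. A minimal sketch in Python (a tiny Newton-Raphson fitter on simulated data; not production code):

```python
import math, random

def fit_logit(X, y, iters=25):
    """Newton-Raphson for a 2-parameter logistic regression."""
    w = [0.0, 0.0]
    for _ in range(iters):
        g = [0.0, 0.0]
        H = [[0.0, 0.0], [0.0, 0.0]]
        for xi, yi in zip(X, y):
            eta = w[0] * xi[0] + w[1] * xi[1]
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, eta))))
            for r in range(2):
                g[r] += (yi - p) * xi[r]                  # score
                for c in range(2):
                    H[r][c] += p * (1 - p) * xi[r] * xi[c]  # observed information
        det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
        w = [w[0] + (H[1][1] * g[0] - H[0][1] * g[1]) / det,
             w[1] + (H[0][0] * g[1] - H[1][0] * g[0]) / det]
    return w

random.seed(1)
x = [random.gauss(10, 2) for _ in range(200)]
y = [1 if random.random() < 1 / (1 + math.exp(5 - 0.5 * xi)) else 0 for xi in x]

raw = fit_logit([[1.0, xi] for xi in x], y)          # intercept = log odds at x = 0
mu = sum(x) / len(x)
sd = (sum((xi - mu) ** 2 for xi in x) / len(x)) ** 0.5
std = fit_logit([[1.0, (xi - mu) / sd] for xi in x], y)  # intercept = log odds at x = mu
print(raw, std)
```

The standardized intercept equals raw[0] + raw[1]*mu and the standardized slope equals raw[1]*sd, so the model is unchanged; only the point at which the intercept is evaluated has moved from x = 0 to x = μ.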
43,876
Why is the intercept changing in a logistic regression when all predictors are standardized?
Welcome to CV. You have misunderstood the interpretation of the intercept. The intercept is the log odds (not the odds ratio) of the outcome when all the predictors are at 0 (not the marginal log odds, as you described). When the predictors are standardized, this corresponds to when all the raw predictors are at their ...
43,877
Why is the intercept changing in a logistic regression when all predictors are standardized?
An alternative explanation is that the marginal odds are incorporated into your fitted values. The ML gradient equations (set to 0) are equal to the following constraints: $$\sum_i p_i = \sum_i y_i$$ $$\sum_i x_{1i}p_i = \sum_i x_{1i}y_i$$ ... $$\sum_i x_{ki}p_i = \sum_i x_{ki}y_i$$ where $p_i$ is the fitted probability, $...
43,878
Why is the empirical cumulative distribution of 1:1000 a straight line?
The cumulative distribution function of a random variable $X$ has nothing to do with summing the random variable. It is the probability that $X$ will take a value less than or equal to $x$. And of course, the probability that a value randomly sampled from your vector $(1, \dots, 1000)$ is less than or equal to 200 is...
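A minimal sketch in Python, evaluating the ECDF of the vector 1:1000 at a few points:

```python
data = list(range(1, 1001))          # the vector 1, 2, ..., 1000

def ecdf(sample, x):
    """Proportion of sample values less than or equal to x."""
    return sum(v <= x for v in sample) / len(sample)

print(ecdf(data, 200))   # → 0.2
print(ecdf(data, 500))   # → 0.5
# on 1..1000 the ECDF is F(x) = x/1000, hence the straight line in the plot
```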
43,879
Why is the empirical cumulative distribution of 1:1000 a straight line?
The empirical cumulative distribution function is a cumulative sum of frequencies of observed $x_i$'s divided by the total sample size. Your data is a vector of values from $1$ to $1000$, where each of the values appears exactly once. This means that your "variable" follows a discrete uniform distribution, whose density is flat and whose CDF therefore increases linearly....
43,880
Why is the empirical cumulative distribution of 1:1000 a straight line?
You can think about it mechanically, too. The ECDF $\hat F$ evaluated at $x$ is the proportion of observations with value $x$ or below. Since you have exactly 1,000 observations $\{y_i\}_{i=1}^{1000}$, the difference between $\hat F(y_i)$ and $\hat F(y_{i+1})$ is always 0.001 for any $1 \le i < 1000$. Moreover, your sa...
43,881
Why is the empirical cumulative distribution of 1:1000 a straight line?
The empirical distribution function of a sample $Y_1, ..., Y_n$ is defined as $$ \widehat{F}(x) = \frac{1}{n} \sum_{i=1}^{n} \mathcal{I} \{ Y_i \leq x \} $$ In your data set, $Y_i = i$. So, $ \widehat{F}(x) = x/n$, for $x = 1, 2, ..., 1000$. Plotted the way you did, this looks like a linear function of $x$.
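A quick numerical check of the identity $\widehat{F}(x) = x/n$ (a Python sketch; the question's R vector 1:1000 becomes np.arange(1, 1001) here):

```python
import numpy as np

y = np.arange(1, 1001)                          # the sample 1, 2, ..., 1000
ecdf = np.array([(y <= x).mean() for x in y])   # F_hat evaluated at each sample point

# Each value appears exactly once, so F_hat(x) = x/1000: a straight line
assert np.allclose(ecdf, y / 1000)
```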
43,882
What is the optimal $k$ for the $k$ nearest neighbour classifier on the Iris dataset?
Let's say you want to use Accuracy (or % correct) to evaluate "optimal," and you have time to look at 25 values for k. The following R code will answer your question using 15 repeats of 10-fold cross-validation. It will also take a long time to run. library(caret) model <- train( Species~., data=iris, me...
43,883
Can a binomial distribution have negative x values?
Poisson, binomial, Bernoulli, negative binomial, etc. are just model distributions, that is, distributions that are analytically tractable and/or can be derived under rather simple assumptions. One could thus reformulate the question as: Are there known model discrete distributions with a support containing negative n...
43,884
Can a binomial distribution have negative x values?
The binomial distribution is a distribution for a sum of Bernoulli trials: the sum of zeros and ones is non-negative. But it is not true that all discrete distributions are non-negative. For a trivial example, say that $X$ follows a Poisson distribution and you create another variable $Y = -X$. $Y$ would be discrete an...
43,885
Can a binomial distribution have negative x values?
A less intuitive but important example for a discrete distribution with negative support would be the distribution of (unit or dollar) sales for a particular product at a given point of sale, conditional on various predictors like the day of the week, time of the year, promotions etc. This is discrete because most prod...
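A small simulation of that idea: treat net sales as units sold minus units returned, each Poisson. The rates and the sampler below are illustrative assumptions, not from the answer:

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Draw one Poisson variate via Knuth's method (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

# net sales = sold - returned; both are counts, so the difference can be negative
net = [poisson(3.0) - poisson(0.5) for _ in range(10_000)]
assert min(net) < 0     # a discrete distribution with negative support
```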
43,886
Coefficient of 0.001 with p < 0.005 [duplicate]
A simple thought experiment: suppose your predictor was a length, originally expressed in millimetres. If you express it instead in kilometres and fit the model again, you have not really changed anything meaningful about the relationship, but your coefficient will grow by several orders of magnitude. You can also get ...
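To see the unit effect numerically, here is a toy least-squares fit; the data, the true slope, and the mm-to-km factor of $10^6$ are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x_mm = rng.normal(500, 100, size=200)          # lengths in millimetres (toy data)
y = 0.002 * x_mm + rng.normal(0, 1, size=200)

x_km = x_mm / 1e6                              # same lengths, now in kilometres
b_mm = np.polyfit(x_mm, y, 1)[0]
b_km = np.polyfit(x_km, y, 1)[0]

# identical relationship, but the coefficient changes by exactly the unit factor
assert np.isclose(b_km, b_mm * 1e6)
```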
43,887
Coefficient of 0.001 with p < 0.005 [duplicate]
This is a known phenomenon, as p-values depend both on the effect size and the sample size. As you get many observations, you get convincing evidence that a tiny effect is real. In other words, that coefficient probably isn’t zero; you have a very unusual result for a situation where the coefficient is zero. However...
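For a simple-regression slope, $\operatorname{se}(\hat\beta)\approx s/(s_x\sqrt{n})$, so for a fixed effect size the t-statistic grows like $\sqrt{n}$. A back-of-the-envelope sketch (the residual and predictor standard deviations below are invented):

```python
import math

beta = 0.001      # tiny but real slope
resid_sd = 1.0    # assumed residual standard deviation
x_sd = 10.0       # assumed predictor standard deviation

for n in (100, 10_000, 1_000_000):
    se = resid_sd / (x_sd * math.sqrt(n))
    t = beta / se
    print(f"n={n:>9,}  t={t:.2f}")   # t grows like sqrt(n): 0.10, 1.00, 10.00
```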
43,888
In ML, once we remove a feature, can we safely assume that feature will not be important again?
No, you cannot safely assume that. The reason is that conditional independence does not imply independence and vice versa (wiki). Moreover, the forward-selection-style approach you follow suffers from a fundamental problem: model selection criteria like that usually rely on p-values/t-statistics/... To be based on the ...
43,889
In ML, once we remove a feature, can we safely assume that feature will not be important again?
You seem to be assuming that the models work in an additive fashion, so adding a feature to the model just "adds" some stuff related to this feature alone and does not influence the rest of the model, and the same with removing the feature. That is not the case. If machine learning models worked like this, then to build a model w...
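A classic toy illustration of this point is XOR (not from the answer above): each feature alone carries no signal, yet the pair determines the target exactly, so a feature discarded by a one-at-a-time screen can become essential later:

```python
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
y  = [0, 1, 1, 0]   # y = x1 XOR x2

# best rule using x1 alone (predict x1 or its complement): only 2/4 correct
acc_x1 = max(sum(a == t for a, t in zip(x1, y)),
             sum(1 - a == t for a, t in zip(x1, y))) / 4

# both features together predict perfectly
acc_pair = sum((a ^ b) == t for a, b, t in zip(x1, x2, y)) / 4

assert acc_x1 == 0.5 and acc_pair == 1.0
```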
43,890
What's the event space of a single coin toss?
The event $\{H,T\}$ is that the result of the flip is either $H$ or $T$; this has probability $1$. The event $\emptyset = \{\,\}$ is that the result of the flip is neither $H$ nor $T$; this has probability $0$. So there is no problem; $\mathscr{F}= \{\emptyset,\{H\},\{T\},\{H,T\}\}$, as you might expect.
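The event space here is just the power set of $\Omega = \{H, T\}$; a short Python sketch that enumerates it:

```python
from itertools import chain, combinations

omega = ("H", "T")
sigma_algebra = [set(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))]

# the four events: {}, {H}, {T}, {H, T}
assert sigma_algebra == [set(), {"H"}, {"T"}, {"H", "T"}]
```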
43,891
Probability that fewer than 24 people logging into the site will make a purchase [closed]
I agree with @whuber that none of the answers is exactly correct. However, if (oblivious to a continuity correction) you take the key probability to be $P(X < 24),$ round excessively, and do a normal approximation to binomial using printed tables, you get $0.8264.$ First, $\mu = np = 200(.1) = 20$ and $\sigma = \sqrt{...
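Both numbers are easy to check directly; a pure-Python sketch of the exact binomial sum and the normal approximation (without a continuity correction, matching the answer's calculation):

```python
from math import comb, erf, sqrt

n, p = 200, 0.1
mu, sd = n * p, sqrt(n * p * (1 - p))    # 20 and sqrt(18)

# exact P(X < 24) = P(X <= 23)
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(24))

def Phi(z):                              # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

approx = Phi((24 - mu) / sd)             # no continuity correction, ~0.827
assert 0.82 < approx < 0.83
assert 0.7 < exact < 0.9
```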
43,892
Probability that fewer than 24 people logging into the site will make a purchase [closed]
None of the answers is correct. Evidently the answer ought to be substantially greater than $1/2=0.5$ because the mean of this Binomial distribution is $20=200\times 0.1$ and the event "less than 24" includes all values less than the mean and a substantial number greater than it. This alone indicates (A) is the intend...
43,893
Negative relationship but regression analytics gives positive correlation coefficient
The correlation coefficient is $r$. $R^2$ is the square of $r$, and it is of course always positive, regardless of the sign of $r$. Taking the square root gives that $r= \pm 0.8489$, and since the relationship is negative, you can conclude that $r = -0.8489$.
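Numerically (the $R^2$ below is back-derived from the answer's $r = \pm 0.8489$, so treat it as illustrative):

```python
import math

r_squared = 0.7206             # reported R^2
r = -math.sqrt(r_squared)      # negative slope, so take the negative root
assert abs(r + 0.8489) < 1e-3
```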
43,894
Negative relationship but regression analytics gives positive correlation coefficient
For additional context, $R^2$ is known as the coefficient of determination (often also called Pearson's R-squared) https://en.wikipedia.org/wiki/Coefficient_of_determination $R^2$ is a common measure of goodness of fit - it tells you something about how well your models predicts the test data. R in this interpretation ...
43,895
Negative relationship but regression analytics gives positive correlation coefficient
The other answers are correct, but I just wanted to add more detail in case you are interested in what is meant by these numbers. Suppose you were to draw a horizontal line on your graph which represented the average value of y, i.e. the average of the mortality rates in your data. For your example, 9.02 is approximat...
43,896
Why this simple mixed model fail to converge?
This is, in all likelihood, not a warning that you need to worry about. As you can see, the parameter estimates are the same in both cases. The version of lmer in lmertest apparently has a more conservative check for convergence than the current lme4 version. The problem in lmertest::lmer is caused by the variables be...
43,897
Why this simple mixed model fail to converge?
I can’t speak to the calculation issue, but six data points is not very much at all, much less if you want to fit random effects — of which you only have 2 and only 3 examples of each. The not-statistically-significant result makes a lot of sense here: you have a tiny amount of data that hints at something, but not eno...
43,898
Why do we use the natural exponential in logistic regression?
Because base $e$ is convenient, and the choice of base doesn't matter, since you can freely rescale your coefficient estimates. Would using a functional form of $\frac{a^\mathbf{x\cdot b}}{1 + a^\mathbf{x\cdot b} }$ change your explanatory power? No. Explanation: I gave basically the same answer here for the softmax function. Observe that $ ...
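The identity $a^{xb} = e^{xb\ln a}$ means a change of base is absorbed into the coefficients; a quick numerical check with arbitrary values:

```python
import math

a, b, x = 2.0, 0.7, 1.3        # arbitrary base, coefficient, predictor

p_base_a = a ** (x * b) / (1 + a ** (x * b))

b_rescaled = b * math.log(a)   # same model, coefficient absorbs the base change
p_base_e = math.exp(x * b_rescaled) / (1 + math.exp(x * b_rescaled))

assert abs(p_base_a - p_base_e) < 1e-12
```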
43,899
Why do we use the natural exponential in logistic regression?
In binary regression, one can use any cdf to relate the probability $\mathbb{P}(Y=1|\mathbf{x})$ and $\mathbf{x}$ in a generalised linear way $$\mathbb{P}(Y=1|\mathbf{x})=\Phi(\mathbf{x}^\text{T}\beta)$$ as in: the logistic cdf, $\Phi(t)=1/\{1+1/e^t\}$; the probit (Normal) cdf, $\Phi(t)=\int_{-\infty}^t \varphi(x)\,\text{d}x$; the log-...
43,900
Why do we use the natural exponential in logistic regression?
For a Bernoulli likelihood, the variance is a function of the mean such that: $$\text{var}(Y) = E(Y)(1-E(Y))$$ It turns out that a sigmoid function, also called the "inverse link" function (for a logistic regression): $S(x) = \frac{\exp(x)}{1+\exp(x)}$ has the property that: $$\frac{\partial}{\partial x} S(X) = S(X)(1-...
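That derivative identity is easy to verify numerically with a central difference:

```python
import math

def S(x):
    return math.exp(x) / (1 + math.exp(x))   # the logistic sigmoid

x, h = 0.37, 1e-6
numeric = (S(x + h) - S(x - h)) / (2 * h)    # central-difference derivative
analytic = S(x) * (1 - S(x))

assert abs(numeric - analytic) < 1e-8
```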