Multi-armed bandit vs AB testing
The Difference I'm going to assume you are interested in this topic from a website design perspective. In an A/B test, you choose layout A half the time and layout B the other half. You record how much revenue is collected under each layout for $n$ visitors. Then you do a statistical test to determine if layout A or l...
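To make the contrast concrete, here is a minimal R sketch comparing a fixed 50/50 A/B split against an epsilon-greedy bandit on two simulated layouts (the conversion rates 0.05 and 0.07 and the 10% exploration rate are made-up values for illustration):

set.seed(1)
p <- c(A = 0.05, B = 0.07)   # hypothetical conversion rates
n <- 10000

# A/B test: split traffic 50/50 for the whole horizon
ab_arm    <- sample(1:2, n, replace = TRUE)
ab_reward <- rbinom(n, 1, p[ab_arm])

# Epsilon-greedy bandit: explore 10% of the time, otherwise play the best arm so far
eps <- 0.1
succ <- c(0, 0); trials <- c(0, 0); bandit_reward <- numeric(n)
for (t in 1:n) {
  est <- ifelse(trials > 0, succ / trials, Inf)   # try untried arms first
  arm <- if (runif(1) < eps) sample(1:2, 1) else which.max(est)
  r <- rbinom(1, 1, p[arm])
  succ[arm] <- succ[arm] + r; trials[arm] <- trials[arm] + 1
  bandit_reward[t] <- r
}

mean(ab_reward)      # ~ (0.05 + 0.07) / 2: the A/B split pays for exploration
mean(bandit_reward)  # typically closer to 0.07: the bandit earns while it learns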
Multi-armed bandit vs AB testing
Assuming we are talking about traffic to a website, AB testing refers to the method whereby traffic is split 50/50 between two different pages (or options, images, or whatever you are studying). So exactly half of your users would see, say, Page A and the other half would see Page B. You then use the results (e.g. purch...
ELI5: The Logic Behind Coefficient Estimation in OLS Regression
Suppose you have a model of the form: $$X \beta= Y$$ where $X$ is an ordinary 2-D matrix, for ease of visualisation. Now, if the matrix $X$ is square and invertible, then getting $\beta$ is trivial: $$\beta= X^{-1}Y$$ And that would be the end of it. If this is not the case, to get $\beta$ you'll have to find a way to "ap...
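A quick numeric check of the idea in R (simulated data; the normal-equations solve is the standard least-squares "approximate inverse" this answer is alluding to):

set.seed(42)
n <- 100
X <- cbind(1, rnorm(n), rnorm(n))           # design matrix with an intercept column
beta_true <- c(1, 2, -0.5)
y <- drop(X %*% beta_true) + rnorm(n)

beta_hat <- solve(t(X) %*% X, t(X) %*% y)   # normal equations: X'X beta = X'y
cbind(beta_hat, coef(lm(y ~ X - 1)))        # matches lm() (X already carries the intercept)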
ELI5: The Logic Behind Coefficient Estimation in OLS Regression
If you look at sources such as Wikipedia, there are some good explanations for where this comes from. Here are some core ideas: OLS is aiming to minimize the error $||y-X\beta||$. The norm of a vector is minimized when its derivative is perpendicular to the vector. (Since you asked for ELI5, I won't go into a rigorou...
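For completeness, here is the one-line derivation those core ideas point at (standard calculus, same notation): $$ \nabla_\beta \, \|y - X\beta\|^2 = -2X^\top(y - X\beta) = 0 \quad\Longrightarrow\quad X^\top X \hat\beta = X^\top y, $$ so the residual $y - X\hat\beta$ is perpendicular to every column of $X$, which is exactly the normal equations.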
Understanding the shifted log-normal distribution
By definition, a random variable $X$ has a shifted log-normal distribution with shift $\theta$ if $\log(X + \theta) \sim N(\mu,\sigma)$. In the more usual notation, that would correspond to a lognormal with shift $-\theta$. However, if $X + \theta \sim \text{logN}(\mu,\sigma)$, then $X$ also has a log-normal distribution X ~log...
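A short R simulation of that definition (the values $\theta = 3$, $\mu = 0$, $\sigma = 0.5$ are arbitrary choices):

set.seed(7)
theta <- 3; mu <- 0; sigma <- 0.5
x <- exp(rnorm(1e5, mu, sigma)) - theta          # so that X + theta is lognormal

qqnorm(log(x + theta)); qqline(log(x + theta))   # log(X + theta) is N(mu, sigma^2)
range(x)                                         # the support starts near -theta, not at 0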
In linear regression, if the random error is N(0,$\sigma^2$) does this mean Y~N($\alpha + \beta X$, $\sigma^2$)
As is pointed out in this related question, the normality of the error term in a linear regression is not sufficient to ensure the marginal normality of the response variable. The latter is also affected by the distribution of the explanatory variable, which is not assumed to be normal in a regression analysis. Under ...
In linear regression, if the random error is N(0,$\sigma^2$) does this mean Y~N($\alpha + \beta X$, $\sigma^2$)
The answer is a most definitive "no." Marginal normality of $\epsilon$ does not imply that the conditional distributions of $Y$ are normal. See here for a counterexample: https://stats.stackexchange.com/a/486951/102879
In linear regression, if the random error is N(0,$\sigma^2$) does this mean Y~N($\alpha + \beta X$, $\sigma^2$)
The distribution at a fixed value of x is normal. Y is not normal. Just look at the histogram of the response. It will not look like a normal distribution. But if you look at the distribution at a fixed x, then it will look normal.
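An R sketch of exactly that comparison (the exponential predictor is a deliberately non-normal choice):

set.seed(1)
n <- 1e5
x <- rexp(n)                  # non-normal predictor
y <- 2 + 3 * x + rnorm(n)     # errors are N(0, 1)

hist(y, breaks = 100)                     # marginal distribution: skewed, not normal
hist(y[abs(x - 1) < 0.01], breaks = 30)   # conditional on x near 1: looks normal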
In linear regression, if the random error is N(0,$\sigma^2$) does this mean Y~N($\alpha + \beta X$, $\sigma^2$)
Yes: if $\varepsilon \sim N(0, \sigma^2)$ and $Y=\alpha+\beta x+ \varepsilon$, then we can say that $Y \sim N(\alpha+\beta x,\sigma^2)$. This follows from the result that if a random variable $X \sim N(\mu, \sigma^2)$ then $X+a \sim N(\mu+a, \sigma^2)$; for example, if $X\sim N(0, 3^2)$ then $X+2 \sim N(2, 3^2)$.
Is this actually an example of selection bias?
If it's true that only women with a hip fracture were selected, then there is no association between hip fracture and anything in the selected population. This would amount to saying something like "among women with hip fractures, there is an association between having a hip fracture and having lung cancer." Clearly, t...
Is this actually an example of selection bias?
My question is, if only women with hip fractures were surveyed, doesn't that mean we are conditioning on hip fractures? If so, there should be a square around hip fracture, the backdoor path is actually blocked, and there is no selection bias. Notice that even if the DAG were only $A \rightarrow Y \rightarrow C$...
Can the covariance matrix in a Gaussian Process be non-symmetric?
Can the covariance matrix in a Gaussian Process be non-symmetric? Every valid covariance matrix is a real symmetric non-negative definite matrix. This holds regardless of the underlying distribution. So no, it can't be non-symmetric. If the lecturers are making an argument for using some non-symmetric matrix (e.g.,...
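A quick empirical check in R (any sample covariance matrix will do):

set.seed(1)
Z <- matrix(rnorm(200 * 5), 200, 5)
S <- cov(Z)                  # sample covariance matrix

isSymmetric(S)               # TRUE: covariance matrices are always symmetric
min(eigen(S)$values)         # >= 0 up to rounding: non-negative definite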
Can the covariance matrix in a Gaussian Process be non-symmetric?
Answering below in order to be able to post a screenshot in response to a comment; this is not an answer to the question.
Self-Study Plan Help (no undergrad math or stats experience)
I see various areas you should have a look into: Basics of probability Here you should understand the most common continuous probability distributions (e.g. normal distribution, t-distribution) and the most common discrete distributions (e.g. binomial distribution and geometric distribution). You should also understa...
Self-Study Plan Help (no undergrad math or stats experience)
Since you mentioned Bayesian statistics, let me recommend Data Analysis: A Bayesian Tutorial by Sivia and Skilling to you. I am reading it myself at the moment and find it fantastic. The book really helps in understanding the big picture of (Bayesian) probability theory. Also it finally links together all the divergen...
Redundant variables in linear regression
Not necessarily. It is instructive to understand why not. The issue is whether some linear combination of the variables is linearly correlated with the response. Sometimes a set of explanatory variables can be extremely closely correlated, but removing any single one of those variables significantly reduces the quali...
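Here is a minimal R construction of that situation (the near-collinear pair and the coefficient 100 are chosen just to make the effect stark): two predictors correlated above 0.99 whose difference drives the response, so dropping either one destroys the fit.

set.seed(2)
n  <- 200
z  <- rnorm(n)
x1 <- z + rnorm(n, sd = 0.01)               # x1 and x2 are almost identical
x2 <- z + rnorm(n, sd = 0.01)
y  <- 100 * (x1 - x2) + rnorm(n, sd = 0.1)  # response driven by their difference

cor(x1, x2)                                 # > 0.99
summary(lm(y ~ x1 + x2))$r.squared          # ~ 1: together they recover x1 - x2
summary(lm(y ~ x1))$r.squared               # ~ 0: removing either variable ruins the fit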
Why is the risk function defined to be the expectation of loss function?
In my understanding, the expected value of a random variable is not necessarily a good description of it. This depends on what you mean by "description". The expectation has a number of interpretations, all of which might or might not be "good" for you. In frequentist terms, it is the long-run average of a data-genera...
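Since the risk is just the expectation of the loss over repeated sampling, it can be approximated by simulation. A small R sketch comparing the mean and the median as estimators of a normal mean under squared-error loss (my choice of example):

set.seed(3)
mu <- 0; n <- 25; reps <- 1e4
loss_mean <- loss_median <- numeric(reps)
for (r in 1:reps) {
  x <- rnorm(n, mu)
  loss_mean[r]   <- (mean(x)   - mu)^2   # squared-error loss of the sample mean
  loss_median[r] <- (median(x) - mu)^2   # squared-error loss of the sample median
}
mean(loss_mean)    # ~ 1/n = 0.04: estimated risk of the mean
mean(loss_median)  # ~ pi/(2n) ~ 0.063: the median has higher risk at the normal model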
Why is the risk function defined to be the expectation of loss function?
My intuition about the topic: In a statistical parametric setting, within the bounds of Decision Theory, we'd like to estimate, say, $\theta \in \Theta$ in the best way possible, by choosing a function of the sample data (a statistic) before we see the data. Let 'best' be measured by a loss function $l: t \times \theta \to ...
How to build a predictive model when more levels of a categorical predictor are possible than appear in the training data
Categorical features that can't be fully enumerated are failure-prone The challenge that you've discovered is a natural consequence of how you've organized your research project: your model has no generalizable information about new file paths or new names of .exe files. This theme is very common -- suppose you're tr...
Time series analysis textbooks for mathematicians
I recommend Time Series: Theory and Methods by Brockwell & Davis. They have a strong focus on ARIMA models and related topics (stationarity, autocorrelation etc.), and they are as rigorous as you would expect from a book that appeared in the Springer Series in Statistics. They include exercises for self-study - without...
Time series analysis textbooks for mathematicians
I learned Time Series from Time Series Analysis and Its Applications by R.H. Shumway and D.S. Stoffer. The textbook has its own website, where you can also find accompanying R packages, as well as both a stripped-down version of the textbook and the actual, statistically and mathematically rigorous version. If you have access to S...
How do we know $X'X$ is nonsingular in OLS?
It's a property of the $\text{rank}$ operator when it's used on real matrices $\mathbf{A}$: $$ \text{rank}(\mathbf{A}) = \text{rank}(\mathbf{A}') = \text{rank}(\mathbf{A}'\mathbf{A}) = \text{rank}(\mathbf{A}\mathbf{A}'). $$ In your case, the data matrix $\mathbf{X} \in \mathbb{R}^{n \times p}$ is usually tall and skinn...
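A numeric illustration of that rank identity in R:

set.seed(4)
X <- matrix(rnorm(50 * 3), 50, 3)        # tall-and-skinny, full column rank
c(qr(X)$rank, qr(crossprod(X))$rank)     # both 3: rank(X) = rank(X'X)

X2 <- cbind(X, X[, 1] + X[, 2])          # add an exactly collinear column
c(qr(X2)$rank, qr(crossprod(X2))$rank)   # both 3 < 4 columns: X'X is now singular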
Distribution of number of objects in Simple Random Sampling with Replacement (SRSWR)
The probability that $K = k$ is given by $$ p(k) = \frac{\binom{m}{k} f(n,k)}{m^n} $$ where $f(n,k)$ is the number of sequences consisting of only the integers $i = 1, \ldots, k$ of length $n$ in which each $i$ occurs at least once. To see this, note that we want the probability that exactly $k$ unique entries appear ...
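The formula is easy to check by simulation. A minimal R sketch, computing $f(n,k)$ by the usual inclusion-exclusion count of surjections (the sizes m = 6, n = 10 are arbitrary):

f <- function(n, k) sum((-1)^(0:k) * choose(k, 0:k) * (k - 0:k)^n)   # surjection count
p <- function(k, n, m) choose(m, k) * f(n, k) / m^n                  # the pmf above

m <- 6; n <- 10
exact <- sapply(1:m, p, n = n, m = m)

set.seed(5)
sim <- replicate(1e5, length(unique(sample(m, n, replace = TRUE))))
rbind(exact, empirical = tabulate(sim, nbins = m) / 1e5)             # the two rows agree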
Distribution of number of objects in Simple Random Sampling with Replacement (SRSWR)
Having now done some more study on this problem I have found that this is actually a distribution that has received quite a lot of attention in the mathematical literature. The general problem is called the "classical occupancy problem" and the resulting distribution of the number of occupied groups is called the "occ...
Random process not so random after all (deterministic)
Suppose that after $T$ iterations, your process should end up at a predefined value $m$. You can first simulate a process $f_t$ with whatever characteristics you want and then modify it as follows: $$ f_t \mapsto f_t + \frac{t}{T}(m-f_T) $$ Note that this requires that we know the end value $f_T$ of the unconstrained p...
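In R, that correction is one line on top of whatever unconstrained simulation you start from (here a plain Gaussian random walk, pinned to an arbitrary m = 5):

set.seed(6)
n_t <- 100; m <- 5
f   <- cumsum(rnorm(n_t))                 # unconstrained random walk f_1, ..., f_T
t   <- 1:n_t
f_pinned <- f + (t / n_t) * (m - f[n_t])  # the linear correction; ends exactly at m

plot(t, f_pinned, type = "l"); abline(h = m, lty = 2)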
Random process not so random after all (deterministic)
What you are asking for is called a Brownian bridge; there is an answer elsewhere on CV that asks how to convert a Brownian bridge into a Brownian "excursion": Simulating a Brownian Excursion using a Brownian Bridge? That question uses the rbridge() function from the e1071 package: library(e1071) set.seed(101) r <-...
UMVU estimator for non-linear transformation of a parameter
Your final answer is not quite right. The conclusion due to the sample mean $\bar X$ being only sufficient for $\mu$ also looks faulty. Recall that $T(X_1,X_2,\cdots,X_n)=\sum_{i=1}^n X_i$ is a complete sufficient statistic for $\mu$. It is easy to see this if you work with the exponential family setup. Now we know th...
UMVU estimator for non-linear transformation of a parameter
A more general perspective on the question is that most non-linear transforms of parameters $\theta$ associated with an unbiased estimator cannot be unbiasedly estimated. There are indeed many instances in the literature about the impossibility to find an unbiased estimator: "A Class of Parameter Functions for Wh...
What's the value of $\text{cov}(x, x^TAx)$, when $x$ follows a normal distribution
Writing $$z=x-\mu,$$ we see that $z \sim \mathcal{N}(0,\Sigma).$ Using the bilinearity of the covariance operator repeatedly, make the substitution $x=z+\mu$ and (mindlessly) compute $$\eqalign{ \operatorname{Cov}(x, x^\prime A x) &= \operatorname{Cov}(z+\mu,\ (z+\mu)^\prime A (z+\mu))\\ &= \operatorname{Cov}(z,\ z^\p...
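That expansion ends at the known closed form $\operatorname{Cov}(x, x^\prime A x) = \Sigma(A + A^\prime)\mu$ (the terms involving third central moments of the centered Gaussian vanish), which is easy to verify by Monte Carlo in R (the particular $\mu$, $\Sigma$, and $A$ below are arbitrary):

library(MASS)   # for mvrnorm
set.seed(8)
mu <- c(1, -2); Sigma <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
A  <- matrix(c(1, 2, 0, 3), 2, 2)    # A need not be symmetric

x <- mvrnorm(2e5, mu, Sigma)
q <- rowSums((x %*% A) * x)          # the quadratic form x'Ax, row by row

cov(x, q)                            # Monte Carlo estimate
Sigma %*% (A + t(A)) %*% mu          # closed form Sigma (A + A') mu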
How do I transform a non-linear relationship to make it linear?
In this problem you have an explicit functional relationship between the two variables: $$y = \text{sgn}(x) (10^{4|x|}-1).$$ You can obtain a linear relationship between transformed variables by using: $$\text{sgn}(y) \log_{10}(1+|y|) = 4x.$$
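A two-line check of that transformation in R:

x <- seq(-1, 1, by = 0.01)
y <- sign(x) * (10^(4 * abs(x)) - 1)
plot(x, sign(y) * log10(1 + abs(y)))   # an exact straight line with slope 4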
Interpreting logistic regression results when explanatory variable has multiple levels
The interpretation for categorical variables with more than 2 levels is very similar to the binary case you mention; for a $k$-level categorical variable, you will have $k-1$ regression coefficients each of which compare the odds of the outcome to the reference group. For the example you state, ethnicity (Caucasian, A...
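A small R illustration with simulated data (the effect size 0.5 for the Hispanic group and the sample size are made up): the fitted model has $k-1 = 2$ coefficients, each an odds ratio against the chosen reference level.

set.seed(9)
eth <- factor(sample(c("African American", "Caucasian", "Hispanic"),
                     500, replace = TRUE))
eth <- relevel(eth, ref = "Caucasian")                 # pick the reference group
y <- rbinom(500, 1, plogis(-1 + 0.5 * (eth == "Hispanic")))

fit <- glm(y ~ eth, family = binomial)
exp(coef(fit))   # odds ratios of each group vs the Caucasian reference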
Interpreting logistic regression results when explanatory variable has multiple levels
For a categorical variable that is not nominal, a logistic regression will output coefficients for a one-hot encoded version of it. Therefore the logic remains the same: there will be a coefficient for "Caucasian" / "Not caucasian", another for "Hispanic" / "Not hispanic" and so on. The encoding makes it impossible to h...
Interpreting logistic regression results when explanatory variable has multiple levels
This is a good question, I've had it myself. Note that if you only have one categorical variable, then the intercept term corresponds to the reference category. If you have more than one categorical variable in your model then it becomes tricky. One way is to rerun the model with different reference levels (clunky). A be...
When is weighted average of $F_1$ scores $\simeq$ accuracy in classification?
Assessing the difference between a support-weighted mean $F1$ and accuracy Class $A$'s $F1$ Using the classification outcomes $a$, $b$, $c$, $d$ as laid out in the confusion matrix above, the function for Class $A$'s $F1$ can be defined as: $$ F_{1;A} = \frac{2a}{(a+b)+(a+c)} $$ Class $B$'s $F1$ Similarly, the functio...
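Those definitions make the comparison a few lines of R (the counts are made up; rows are truth, columns are predictions). Note that when the two off-diagonal counts are equal, the support-weighted F1 reduces exactly to accuracy:

a <- 40; b <- 10   # truth A: a predicted A, b predicted B
c <- 5;  d <- 45   # truth B: c predicted A, d predicted B
n <- a + b + c + d

f1_A <- 2 * a / ((a + b) + (a + c))
f1_B <- 2 * d / ((c + d) + (b + d))
weighted_f1 <- ((a + b) * f1_A + (c + d) * f1_B) / n
accuracy    <- (a + d) / n
c(weighted_f1 = weighted_f1, accuracy = accuracy)   # close, and equal when b = c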
What's the inverse of the finite polynomial $\phi_p$ in an $\ ARMA(p,q)$ model?
The law you are looking for is the infinite geometric sum: $$\sum_{t=0}^\infty r^t = \frac{1}{1-r} = (1-r)^{-1} \quad \quad \text{for }|r|<1.$$ This law shows that the inverse of a polynomial of degree one (an affine function) can be written as an infinite degree polynomial. To apply this to the inversion of an autore...
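Base R's ARMAtoMA() carries out exactly this inversion numerically; for an AR(1), the $\psi$-weights of the MA($\infty$) representation are the geometric series coefficients:

phi <- 0.5
ARMAtoMA(ar = phi, lag.max = 8)   # psi-weights of (1 - phi B)^{-1}
phi^(1:8)                         # the geometric series: they coincide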
Should I give more weight to goodness of fit or to conceptual approach? Example
There are a bunch of issues here. You can't compare likelihoods / deviance / AIC between models with continuous vs. count data; see, e.g., Can WAIC be used to compare Bayesian linear regression models with different likelihoods?. Moreover, do you have discrete k/n or continuous proportions? In either case, applying an l...
Distribution of "p-value-like" quantities under null hypothesis
Let $f$ be the density of $X$. You are concerned about the distribution of "d-values" $$d = P( f(X) < f(x_{obs}))$$ when $x_{obs}$ is drawn from the distribution of $X$. Let's construct another random variable by transforming $X$: $Y = f(X)$, and let $y_{obs} = f(x_{obs})$. Then in fact you're looking at the distrib...
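This is quick to see by simulation. An R sketch assuming a standard normal null, where $f(X) < f(x_{obs})$ reduces to $|X| > |x_{obs}|$:

set.seed(10)
x_obs <- rnorm(1e4)            # draws of x_obs from the null
d <- 2 * pnorm(-abs(x_obs))    # d = P(f(X) < f(x_obs)) = P(|X| > |x_obs|)
hist(d, breaks = 20)           # approximately Uniform(0, 1)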
Distribution of "p-value-like" quantities under null hypothesis
Let $f$ be the density of $X$. You are concerned about the distribution of ''d-values'' $$d = P( f(X) < f(x_{obs}))$$ when $x_{obs}$ is drawn in the distribution of $X$. Let's construct an other rand
Distribution of "p-value-like" quantities under null hypothesis Let $f$ be the density of $X$. You are concerned about the distribution of ''d-values'' $$d = P( f(X) < f(x_{obs}))$$ when $x_{obs}$ is drawn in the distribution of $X$. Let's construct an other random variable by transforming $X$ : $Y = f(X)$, and let $y...
Distribution of "p-value-like" quantities under null hypothesis Let $f$ be the density of $X$. You are concerned about the distribution of ''d-values'' $$d = P( f(X) < f(x_{obs}))$$ when $x_{obs}$ is drawn in the distribution of $X$. Let's construct an other rand
46,538
Distribution of "p-value-like" quantities under null hypothesis
Short answer: The statistic you are referring to may just be the p-value (depending on the ordering of evidence for the null vs alternative). Don't assume that p-values are for the area that is "extreme" in the sense of having the highest magnitude values (i.e., the tail area). Longer answer: Every hypothesis test invo...
Distribution of "p-value-like" quantities under null hypothesis
Short answer: The statistic you are referring to may just be the p-value (depending on the ordering of evidence for the null vs alternative. Don't assume that p-values are for the area that is "extre
Distribution of "p-value-like" quantities under null hypothesis Short answer: The statistic you are referring to may just be the p-value (depending on the ordering of evidence for the null vs alternative. Don't assume that p-values are for the area that is "extreme" in the sense of having the highest magnitude values ...
Distribution of "p-value-like" quantities under null hypothesis Short answer: The statistic you are referring to may just be the p-value (depending on the ordering of evidence for the null vs alternative. Don't assume that p-values are for the area that is "extre
46,539
Featurization before or after dataset splitting
That comment is correct: we have to do "feature extraction" from our training data only. Let's consider one of the most common data-transformation procedures, centring. We get an "expected value" $\hat{\mu}_{x_j}$ for our feature $x_j$ and then we subtract that from the values of $x_j$; nothing magical. A central ques...
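A minimal R sketch of centring/scaling done the leak-free way (the split and data are made up):

set.seed(11)
x <- rnorm(100, mean = 50, sd = 10)
train <- x[1:70]; test <- x[71:100]

mu_hat <- mean(train); sd_hat <- sd(train)   # statistics from the training data ONLY
train_c <- (train - mu_hat) / sd_hat
test_c  <- (test  - mu_hat) / sd_hat         # test data reuses the training statistics

mean(test_c)   # not exactly 0, and that is the point: nothing leaked from the test set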
Choosing the "Correct" Seed for Reproducible Research/Results
The seed choice should not affect the ultimate result; otherwise you have an issue. Then why do we need a seed at all? The reason is mainly for debugging and troubleshooting. What do I call an ultimate result? Suppose that you're analyzing a drug's efficacy, and using Monte Carlo simulation to come up with some kind of...
Choosing the "Correct" Seed for Reproducible Research/Results
The seed choice should not affect the ultimate result, otherwise you have an issue. Then why do we need a seed at all? The reason is mainly for debugging and trouble shooting. What do I call an ultim
Choosing the "Correct" Seed for Reproducible Research/Results The seed choice should not affect the ultimate result, otherwise you have an issue. Then why do we need a seed at all? The reason is mainly for debugging and trouble shooting. What do I call an ultimate result? Suppose that you're analyzing a drug efficacy,...
Choosing the "Correct" Seed for Reproducible Research/Results The seed choice should not affect the ultimate result, otherwise you have an issue. Then why do we need a seed at all? The reason is mainly for debugging and trouble shooting. What do I call an ultim
46,541
Choosing the "Correct" Seed for Reproducible Research/Results
You certainly can use "seed optimization" to be deceptive about performance. For example, suppose I'm comparing two estimators that just give back pure noise. Fifty percent of my seeds will say estimator A is better than B when using cross validation, so all I need to do is make sure I pick a good seed and then go publ...
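That fifty-percent figure is easy to reproduce in R (squared noise means stand in for cross-validated errors here):

a_wins <- sapply(1:200, function(s) {
  set.seed(s)
  err_A <- mean(rnorm(20)^2)   # "performance" of estimator A: pure noise
  err_B <- mean(rnorm(20)^2)   # "performance" of estimator B: pure noise
  err_A < err_B
})
mean(a_wins)   # ~ 0.5: half of all seeds would let you report that A beats B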
Choosing the "Correct" Seed for Reproducible Research/Results
You certainly can use "seed optimization" to be deceptive about performance. For example, suppose I'm comparing two estimators that just give back pure noise. Fifty percent of my seeds will say estima
Choosing the "Correct" Seed for Reproducible Research/Results You certainly can use "seed optimization" to be deceptive about performance. For example, suppose I'm comparing two estimators that just give back pure noise. Fifty percent of my seeds will say estimator A is better than B when using cross validation, so all...
Choosing the "Correct" Seed for Reproducible Research/Results You certainly can use "seed optimization" to be deceptive about performance. For example, suppose I'm comparing two estimators that just give back pure noise. Fifty percent of my seeds will say estima
46,542
Understanding weak learner splitting criterion in gradient boosting decision tree (lightgbm) paper
Having obtained some help from the authors, I can now write down how I understand it. Somebody jump in if there is disagreement. Say we have some differentiable loss function $L(y,H(x))$, where $H(x)$ is our tree ensemble at some iteration. Let $g_i$ be the gradient of our loss function at some entry corresponding ...
Understanding weak learner splitting criterion in gradient boosting decision tree (lightgbm) paper
Section 3 in the LightGBM paper is valid for the MSE loss function, for which hessians reduce to 1. In that case the formula from Definition 3.1 coincides with formulas (6) and (7) from the XGBoost paper (where they are derived in an understandable way). Also, you can find in the LightGBM code (goss.hpp, line 110) that...
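For reference, a sketch of what that reduction looks like, following the XGBoost formulas with all hessians $h_i = 1$ and no regularization ($\lambda = 0$); here $g_i$ are the gradients and $n_L$, $n_R$ the instance counts in the left and right child: $$ w^*_{\text{leaf}} = -\frac{\sum_{i \in \text{leaf}} g_i}{n_{\text{leaf}}}, \qquad \text{Gain} = \frac{1}{2}\left[ \frac{\big(\sum_{i \in L} g_i\big)^2}{n_L} + \frac{\big(\sum_{i \in R} g_i\big)^2}{n_R} - \frac{\big(\sum_{i \in L \cup R} g_i\big)^2}{n_L + n_R} \right]. $$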
Statistics can't be a function of a parameter - but isn't the sample a function of the parameter?
Let $T=T(X)=T(X_1,X_2, \dotsc, X_n)$ be a statistic, and assume we have some statistical model for the random variable $X$ (the data), say that $X$ is distributed according to the distribution $f(x;\theta)$; $f$ is then a model function (often a density or probability mass function) which is ...
Statistics can't be a function of a parameter - but isn't the sample a function of the parameter?
The confusion here stems from conflating a random variable with its distribution. To be clear about the issue, a random variable is not a function of the model parameters, but its distribution is. Taking things back to their foundations, you have some probability space that consists of a sample space $\Omega$, a class...
Statistics can't be a function of a parameter - but isn't the sample a function of the parameter?
While the other answers (so far) are quite to the point and valid, I would like to add another direction to the discussion that relates to both fiducial inference (Fisher's pet theory) and a form of sampling called "perfect sampling" (or "sampling from the past"). Since a random variable is a measurable function from...
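In R, that measurable-function view is literally how simulation works: the inverse-CDF representation $X = F_\theta^{-1}(U)$ writes the sample as a function of the parameter and of parameter-free randomness (a normal location model chosen as the example):

set.seed(12)
theta <- 2
u <- runif(1e4)                 # the randomness: does not involve theta
x <- qnorm(u, mean = theta)     # X = F_theta^{-1}(U), a function of (theta, U)
mean(x)                         # ~ theta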
Batch Normalization decreasing model accuracy
Try putting your Batch Normalization layer AFTER activation. Because what it's doing right now is effectively killing off half of your gradient on each layer - you normalize to 0 mean, which means only half of your ReLUs are firing, and you get vanishing gradient.
Batch Normalization decreasing model accuracy
Try putting your Batch Normalization layer AFTER activation. Because what it's doing right now is effectively killing off half of your gradient on each layer - you normalize to 0 mean, which means onl
Batch Normalization decreasing model accuracy Try putting your Batch Normalization layer AFTER activation. Because what it's doing right now is effectively killing off half of your gradient on each layer - you normalize to 0 mean, which means only half of your ReLUs are firing, and you get vanishing gradient.
Batch Normalization decreasing model accuracy Try putting your Batch Normalization layer AFTER activation. Because what it's doing right now is effectively killing off half of your gradient on each layer - you normalize to 0 mean, which means onl
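A minimal R sketch of the mechanism described above (the numbers are made up, and a real batch-norm layer can learn a shift that mitigates this): centering pre-activations at zero before a ReLU switches off roughly half the units.
set.seed(1)
pre_act <- rnorm(10000, mean = 2)   # pre-activations with a positive mean
normalized <- scale(pre_act)        # zero mean, unit variance, as batch norm does before the ReLU
mean(pmax(pre_act, 0) == 0)         # few units are off without normalization
mean(pmax(normalized, 0) == 0)      # roughly half the units are off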
46,548
Easy post-hoc tests when meta-analyzing with the `metafor` package in r
You could use the contrMat() function. Something like this should work: summary(glht(fit, linfct=cbind(contrMat(rep(1,7), type="Tukey"))), test=adjusted("none")) You might want to consider an adjustment for multiple testing though. See help(summary.glht) for some options.
Easy post-hoc tests when meta-analyzing with the `metafor` package in r
You could use the contrMat() function. Something like this should work: summary(glht(fit, linfct=cbind(contrMat(rep(1,7), type="Tukey"))), test=adjusted("none")) You might want to consider an adjustm
Easy post-hoc tests when meta-analyzing with the `metafor` package in r You could use the contrMat() function. Something like this should work: summary(glht(fit, linfct=cbind(contrMat(rep(1,7), type="Tukey"))), test=adjusted("none")) You might want to consider an adjustment for multiple testing though. See help(summar...
Easy post-hoc tests when meta-analyzing with the `metafor` package in r You could use the contrMat() function. Something like this should work: summary(glht(fit, linfct=cbind(contrMat(rep(1,7), type="Tukey"))), test=adjusted("none")) You might want to consider an adjustm
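A hedged end-to-end sketch of the suggestion above, assuming a meta-regression on a 7-level factor fitted without an intercept (the data frame dat and the factor grp are hypothetical; glht() works here because rma objects supply coef() and vcov() methods):
library(metafor)
library(multcomp)
set.seed(1)
dat <- data.frame(yi = rnorm(21, rep((1:7)/10, each = 3), 0.2),
                  vi = 0.04, grp = factor(rep(1:7, each = 3)))
fit <- rma(yi, vi, mods = ~ grp - 1, data = dat)
summary(glht(fit, linfct = cbind(contrMat(rep(1, 7), type = "Tukey"))),
        test = adjusted("none"))   # all pairwise comparisons of the 7 group estimates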
46,549
Why can minimizing $\|w\|$ be solved by minimizing $\frac{\|w\|^2}{2}$?
Notice that $\frac{1}{x}$ is a decreasing function over the positive domain and $\frac{x^2}{2}$ is an increasing function over the non-negative domain. If $g$ is a decreasing function (a function where, as the input increases, the output decreases), then maximizing $f_1(x)$ is equivalent to minimizing $g(f_1(x))$. Here $f_1...
Why can minimizing $\|w\|$ be solved by minimizing $\frac{\|w\|^2}{2}$?
Notice that $\frac{1}{x}$ is a decreasing function over the positive domain and $\frac{x^2}{2}$ is an increasing function over the non-negative domain. If $g$ is a decreasing function (a function wher
Why can minimizing $\|w\|$ be solved by minimizing $\frac{\|w\|^2}{2}$? Notice that $\frac{1}{x}$ is a decreasing function over the positive domain and $\frac{x^2}{2}$ is an increasing function over the non-negative domain. If $g$ is a decreasing function (a function where, as the input increases, the output decreases)....
Why can minimizing $\|w\|$ be solved by minimizing $\frac{\|w\|^2}{2}$? Notice that $\frac{1}{x}$ is a decreasing function over the positive domain and $\frac{x^2}{2}$ is an increasing function over the non-negative domain. If $g$ is a decreasing function (a function wher
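A quick numerical check of the monotone-transform argument (the toy objective |w - 3| is hypothetical): applying the increasing map t -> t^2/2 does not move the minimizer.
f <- function(w) abs(w - 3)
optimize(f, c(-10, 10))$minimum                        # ~3
optimize(function(w) f(w)^2 / 2, c(-10, 10))$minimum   # also ~3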
46,550
Seed in a grid search
The premise that the same random seed will lead two randomized algorithms to have more similar performance is extremely dubious (except perhaps for the most similar and specially structured of algorithms over the smallest of samples). An analogy Using a Monte-Carlo simulation, let's say you're trying to estimate a casi...
Seed in a grid search
The premise that the same random seed will lead two randomized algorithms to have more similar performance is extremely dubious (except perhaps for the most similar and specially structured of algorit
Seed in a grid search The premise that the same random seed will lead two randomized algorithms to have more similar performance is extremely dubious (except perhaps for the most similar and specially structured of algorithms over the smallest of samples). An analogy Using a Monte-Carlo simulation, let's say you're try...
Seed in a grid search The premise that the same random seed will lead two randomized algorithms to have more similar performance is extremely dubious (except perhaps for the most similar and specially structured of algorit
46,551
Seed in a grid search
That is an ongoing research topic (hyperparameter optimization). A very popular technique following the idea you formulate in your question is random search. Once you see it, the idea is quite simple, and it is shown to work well in practice. Consider your search space with a finite maximum. Take the 5% interval around t...
Seed in a grid search
That is an ongoing research topic (hyperparameter optimization). A very popular technique following the idea you formulate in your question is random search. Once you see it, the idea is quite simple,
Seed in a grid search That is an ongoing research topic (hyperparameter optimization). A very popular technique following the idea you formulate in your question is random search. Once you see it, the idea is quite simple, and it is shown to work well in practice. Consider your search space with a finite maximum. Take th...
Seed in a grid search That is an ongoing research topic (hyperparameter optimization). A very popular technique following the idea you formulate in your question is random search. Once you see it, the idea is quite simple,
46,552
Seed in a grid search
It seems straightforward, that you ONLY want to test the parameters, and the less variance, the better Well, it isn't that straightforward. @MatthewGunn already explained that it typically won't help as you apparently thought. In general, if you encounter variance there are two quite opposite strategies of dealing w...
Seed in a grid search
It seems straightforward, that you ONLY want to test the parameters, and the less variance, the better Well, it isn't that straightforward. @MatthewGunn already explained that it typically won't help
Seed in a grid search It seems straightforward, that you ONLY want to test the parameters, and the less variance, the better Well, it isn't that straightforward. @MatthewGunn already explained that it typically won't help as you apparently thought. In general, if you encounter variance there are two quite opposite s...
Seed in a grid search It seems straightforward, that you ONLY want to test the parameters, and the less variance, the better Well, it isn't that straightforward. @MatthewGunn already explained that it typically won't help
46,553
Seed in a grid search
It seems that there is no scientific consensus on it. I think that Monte Carlo analogy by @Matthew is not perfect. E.g. if you had a neural network and you did a grid search of learning rate and momentum, then using the same random seed would lead to the same initialization which seems a good idea, so I agree with you...
Seed in a grid search
It seems that there is no scientific consensus on it. I think that Monte Carlo analogy by @Matthew is not perfect. E.g. if you had a neural network and you did a grid search of learning rate and mome
Seed in a grid search It seems that there is no scientific consensus on it. I think that Monte Carlo analogy by @Matthew is not perfect. E.g. if you had a neural network and you did a grid search of learning rate and momentum, then using the same random seed would lead to the same initialization which seems a good ide...
Seed in a grid search It seems that there is no scientific consensus on it. I think that Monte Carlo analogy by @Matthew is not perfect. E.g. if you had a neural network and you did a grid search of learning rate and mome
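A toy R illustration of the variance issue discussed in these answers (all numbers are invented): each hyperparameter setting has a true score plus seed-dependent noise, and a single run per setting can rank the settings incorrectly, while averaging over several seeds is far more reliable.
set.seed(42)
true_score <- c(A = 0.70, B = 0.72, C = 0.74)
one_run  <- true_score + rnorm(3, sd = 0.05)                         # one seed per setting
many_run <- true_score + rowMeans(matrix(rnorm(30, sd = 0.05), 3))   # 10 seeds per setting
names(which.max(one_run))    # may pick the wrong setting
names(which.max(many_run))   # much more likely to recover "C"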
46,554
Gaussian Process smooths in mgcv: choosing between spherical and exponential covariance functions
The gp smooth type is only discussed in the second edition of Simon's book as it was added to the mgcv package long after the first edition went to press. The main difference to consider is that the spherical covariance function is not entirely smooth; there is a discontinuity which can pass through to the resultant smoother. The ...
Gaussian Process smooths in mgcv: choosing between spherical and exponential covariance functions
The gp smooth type is only discussed in the second edition of Simon's book as it was added to the mgcv package long after the first edition went to press. The main difference to consider is that the spherical cov
Gaussian Process smooths in mgcv: choosing between spherical and exponential covariance functions The gp smooth type is only discussed in the second edition of Simon's book as it was added to the mgcv package long after the first edition went to press. The main difference to consider is that the spherical covariance function is no...
Gaussian Process smooths in mgcv: choosing between spherical and exponential covariance functions The gp smooth type is only discussed in the second edition of Simon's book as it was added to the mgcv package long after the first edition went to press. The main difference to consider is that the spherical cov
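A sketch of comparing the two covariance functions in mgcv, assuming my recollection of the m coding (m[1] = 1 spherical, 2 power exponential, 3 Matern with kappa = 1.5; m[2] is the range parameter; check ?smooth.construct.gp.smooth.spec):
library(mgcv)
set.seed(1)
x <- seq(0, 1, length.out = 200)
y <- sin(6 * x) + rnorm(200, sd = 0.3)
fit_sph <- gam(y ~ s(x, bs = "gp", m = c(1, 0.3)))   # spherical, range 0.3
fit_mat <- gam(y ~ s(x, bs = "gp", m = c(3, 0.3)))   # Matern, kappa = 1.5
AIC(fit_sph, fit_mat)                                 # one way to compare the fits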
46,555
Is non-integer power of a kernel still a kernel?
You have exactly defined the class of infinitely divisible kernels, i.e., a kernel $k(x, y)$ such that $k(x, y)^p$ is a kernel for any $p > 0$. Not all kernels are infinitely divisible. Many of the kernels you know and love are infinitely divisible.
Is non-integer power of a kernel still a kernel?
You have exactly defined the class of infinitely divisible kernels, i.e., a kernel $k(x, y)$ such that $k(x, y)^p$ is a kernel for any $p > 0$. Not all kernels are infinitely divisible. Many of the
Is non-integer power of a kernel still a kernel? You have exactly defined the class of infinitely divisible kernels, i.e., a kernel $k(x, y)$ such that $k(x, y)^p$ is a kernel for any $p > 0$. Not all kernels are infinitely divisible. Many of the kernels you know and love are infinitely divisible.
Is non-integer power of a kernel still a kernel? You have exactly defined the class of infinitely divisible kernels, i.e., a kernel $k(x, y)$ such that $k(x, y)^p$ is a kernel for any $p > 0$. Not all kernels are infinitely divisible. Many of the
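A small numerical illustration of both halves of this answer (the 3x3 Gram matrix is a constructed example): the Gaussian RBF kernel is infinitely divisible, since K^p = exp(-p d^2) is again an RBF kernel for any p > 0, while an arbitrary valid Gram matrix can lose positive semi-definiteness under a fractional elementwise power.
set.seed(1)
x <- matrix(rnorm(20), 10, 2)
K <- exp(-as.matrix(dist(x))^2)   # RBF Gram matrix
min(eigen(K^0.37)$values)         # still >= 0: an RBF kernel with rescaled bandwidth
A <- rbind(c(1, 0.7, 0), c(0.7, 1, 0.7), c(0, 0.7, 1))
min(eigen(A)$values)              # ~0.01: A is PSD, hence a valid Gram matrix
min(eigen(A^0.5)$values)          # ~ -0.18: the elementwise square root is not PSD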
46,556
What should we do when changing SGD optimizer to Adam optimizer?
In my experience, changing optimizers is not a simple matter of swapping one for the other. Instead, changing optimizers also interacts with several other configuration choices in the neural network. The optimizer interacts with the initialization scheme, so this might need to be changed. The learning rate may need to...
What should we do when changing SGD optimizer to Adam optimizer?
In my experience, changing optimizers is not a simple matter of swapping one for the other. Instead, changing optimizers also interacts with several other configuration choices in the neural network.
What should we do when changing SGD optimizer to Adam optimizer? In my experience, changing optimizers is not a simple matter of swapping one for the other. Instead, changing optimizers also interacts with several other configuration choices in the neural network. The optimizer interacts with the initialization scheme...
What should we do when changing SGD optimizer to Adam optimizer? In my experience, changing optimizers is not a simple matter of swapping one for the other. Instead, changing optimizers also interacts with several other configuration choices in the neural network.
46,557
Find marginal distribution of $K$-variate Dirichlet
The marginal distribution of $x_j$ is, $$ p(x_j) = \frac{1}{B({\bf a})} \int_0^{1 - x_j} \int_0^{1 - x_j - x_1} \cdots \int_0^{1 - \sum_{k =1}^{K-2} x_k} \prod_{p=1}^{K-1} x_p^{a_p - 1} \left( 1 - \sum_{l=1}^{K-1} x_l \right)^{a_K - 1} d x_{K-1} d x_{K-2} \dots d x_1, $$ where $\bf a$ is the vector of all $a_j$ values,...
Find marginal distribution of $K$-variate Dirichlet
The marginal distribution of $x_j$ is, $$ p(x_j) = \frac{1}{B({\bf a})} \int_0^{1 - x_j} \int_0^{1 - x_j - x_1} \cdots \int_0^{1 - \sum_{k =1}^{K-2} x_k} \prod_{p=1}^{K-1} x_p^{a_p - 1} \left( 1 - \su
Find marginal distribution of $K$-variate Dirichlet The marginal distribution of $x_j$ is, $$ p(x_j) = \frac{1}{B({\bf a})} \int_0^{1 - x_j} \int_0^{1 - x_j - x_1} \cdots \int_0^{1 - \sum_{k =1}^{K-2} x_k} \prod_{p=1}^{K-1} x_p^{a_p - 1} \left( 1 - \sum_{l=1}^{K-1} x_l \right)^{a_K - 1} d x_{K-1} d x_{K-2} \dots d x_1,...
Find marginal distribution of $K$-variate Dirichlet The marginal distribution of $x_j$ is, $$ p(x_j) = \frac{1}{B({\bf a})} \int_0^{1 - x_j} \int_0^{1 - x_j - x_1} \cdots \int_0^{1 - \sum_{k =1}^{K-2} x_k} \prod_{p=1}^{K-1} x_p^{a_p - 1} \left( 1 - \su
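A Monte-Carlo check of where the integration leads, using the standard fact that a Dirichlet can be simulated from independent gammas (the known closed form is that the marginal of $x_j$ is Beta$(a_j, \sum_k a_k - a_j)$):
set.seed(1)
a <- c(2, 3, 4)
g <- matrix(rgamma(3e4 * 3, shape = rep(a, each = 3e4)), ncol = 3)
xs <- g / rowSums(g)                                  # Dirichlet(a) draws
hist(xs[, 1], breaks = 50, freq = FALSE)
curve(dbeta(x, a[1], sum(a) - a[1]), add = TRUE, col = "red")  # matches the histogram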
46,558
Poisson process: how long until we observe two events separated by at least a specified amount of time?
This answer requires solving three subproblems: How many bus arrivals $N$ do I expect to suffer through before experiencing a time between bus arrivals $> T_{min}$? How long is the expected wait between buses $\mathbb{E}T_s$ given that $T_s \leq T_{min}$? How long is the expected wait between two consecutive buses $\m...
Poisson process: how long until we observe two events separated by at least a specified amount of ti
This answer requires solving three subproblems: How many bus arrivals $N$ do I expect to suffer through before experiencing a time between bus arrivals $> T_{min}$? How long is the expected wait betw
Poisson process: how long until we observe two events separated by at least a specified amount of time? This answer requires solving three subproblems: How many bus arrivals $N$ do I expect to suffer through before experiencing a time between bus arrivals $> T_{min}$? How long is the expected wait between buses $\math...
Poisson process: how long until we observe two events separated by at least a specified amount of ti This answer requires solving three subproblems: How many bus arrivals $N$ do I expect to suffer through before experiencing a time between bus arrivals $> T_{min}$? How long is the expected wait betw
46,559
Poisson process: how long until we observe two events separated by at least a specified amount of time?
Think first about the number of events you expect to observe, think next about the amount of time it will take to observe those events. Letting $N$ be the number of events before an inter-arrival time of at least $T_{min}$, see that $N$ is geometric with parameter $p=P(T_i>T_{min})$. Now let $T$ be the amount of time t...
Poisson process: how long until we observe two events separated by at least a specified amount of ti
Think first about the number of events you expect to observe, think next about the amount of time it will take to observe those events. Letting $N$ be the number of events before an inter-arrival time
Poisson process: how long until we observe two events separated by at least a specified amount of time? Think first about the number of events you expect to observe, think next about the amount of time it will take to observe those events. Letting $N$ be the number of events before an inter-arrival time of at least $T_...
Poisson process: how long until we observe two events separated by at least a specified amount of ti Think first about the number of events you expect to observe, think next about the amount of time it will take to observe those events. Letting $N$ be the number of events before an inter-arrival time
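A simulation sketch tying both answers together (rate and threshold are arbitrary): with $p = e^{-\lambda T_{min}}$ the expected number of short gaps is $(1-p)/p$, and combining the pieces gives an expected total time of $e^{\lambda T_{min}}/\lambda$ until the arrival that ends the first long gap, under my reading of the question.
set.seed(1)
lambda <- 1; Tmin <- 2
sim <- replicate(1e4, {
  total <- 0; n <- 0
  repeat {
    gap <- rexp(1, lambda)
    if (gap > Tmin) break
    total <- total + gap; n <- n + 1
  }
  c(N = n, T = total + gap)
})
rowMeans(sim)                                            # empirical E[N], E[T]
c(exp(lambda * Tmin) - 1, exp(lambda * Tmin) / lambda)   # theory: 6.39, 7.39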
46,560
KNN and K-folding in R
To use 5-fold cross validation in caret, you can set the "train control" as follows: trControl <- trainControl(method = "cv", number = 5) Then you can evaluate the accuracy of the KNN classifier with different values of k by cross validation using fit <- train(Species ~ ., met...
KNN and K-folding in R
To use 5-fold cross validation in caret, you can set the "train control" as follows: trControl <- trainControl(method = "cv", number = 5) Then you can evaluate the accurac
KNN and K-folding in R To use 5-fold cross validation in caret, you can set the "train control" as follows: trControl <- trainControl(method = "cv", number = 5) Then you can evaluate the accuracy of the KNN classifier with different values of k by cross validation using fit <- train(Specie...
KNN and K-folding in R To use 5-fold cross validation in caret, you can set the "train control" as follows: trControl <- trainControl(method = "cv", number = 5) Then you can evaluate the accurac
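For completeness, a runnable version of the snippet above (iris is used as stand-in data, and the tuning grid k = 1:10 is my own choice):
library(caret)
set.seed(1)
trControl <- trainControl(method = "cv", number = 5)
fit <- train(Species ~ ., data = iris, method = "knn",
             tuneGrid = expand.grid(k = 1:10),
             trControl = trControl)
fit$results[, c("k", "Accuracy")]   # cross-validated accuracy for each k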
46,561
In deep learning, what is the difference between "disentangled representation" and "distributed representation"
Let's say the following vectors are respectively representations for a ball: [1,0,0,0] and a car: [0,1,0,0] In this representation a single neuron learns the meaning of a ball or a car without having to rely on other neurons. This is a disentangled representation, which is meant to facilitate the understanding of art...
In deep learning, what is the difference between "disentangled representation" and "distributed repr
Let's say the following vectors are respectively representations for a ball: [1,0,0,0] and a car: [0,1,0,0] In this representation a single neuron learns the meaning of a ball or a car without havin
In deep learning, what is the difference between "disentangled representation" and "distributed representation" Let's say the following vectors are respectively representations for a ball: [1,0,0,0] and a car: [0,1,0,0] In this representation a single neuron learns the meaning of a ball or a car without having to rel...
In deep learning, what is the difference between "disentangled representation" and "distributed repr Let's say the following vectors are respectively representations for a ball: [1,0,0,0] and a car: [0,1,0,0] In this representation a single neuron learns the meaning of a ball or a car without havin
46,562
Forecasting daily time series with many zeros
(This answer is based on experience with the business side of sales forecasting, more so than on rigorous statistical/mathematical knowledge) Looking at your data, it makes more sense to forecast it at a weekly level than at a daily level. At a daily level it is too sparse, but at a weekly level you would have a more...
Forecasting daily time series with many zeros
(This answer is based on experience with the business side of sales forecasting, more so than on rigorous statistical/mathematical knowledge) Looking at your data, it makes more sense to forecast it
Forecasting daily time series with many zeros (This answer is based on experience with the business side of sales forecasting, more so than on rigorous statistical/mathematical knowledge) Looking at your data, it makes more sense to forecast it at a weekly level than at a daily level. At a daily level it is too spars...
Forecasting daily time series with many zeros (This answer is based on experience with the business side of sales forecasting, more so than on rigorous statistical/mathematical knowledge) Looking at your data, it makes more sense to forecast it
46,563
Forecasting daily time series with many zeros
Croston's method is definitely an appropriate choice for this case. Its basic idea is to estimate non-zero demand and inter-demand interval separately. But note that its output is actually "demand rate", not actual demand units (e.g. a forecast of 0.1 means a demand of 1 unit over 10 periods). The exact timing of the d...
Forecasting daily time series with many zeros
Croston's method is definitely an appropriate choice for this case. Its basic idea is to estimate non-zero demand and inter-demand interval separately. But note that its output is actually "demand rat
Forecasting daily time series with many zeros Croston's method is definitely an appropriate choice for this case. Its basic idea is to estimate non-zero demand and inter-demand interval separately. But note that its output is actually "demand rate", not actual demand units (e.g. a forecast of 0.1 means a demand of 1 un...
Forecasting daily time series with many zeros Croston's method is definitely an appropriate choice for this case. Its basic idea is to estimate non-zero demand and inter-demand interval separately. But note that its output is actually "demand rat
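A minimal sketch of Croston's method with the forecast package (the zero-heavy series is simulated): note how the output is a flat demand rate per period, as described above, rather than unit forecasts for particular days.
library(forecast)
set.seed(1)
y <- rpois(120, 0.15) * sample(1:3, 120, replace = TRUE)  # intermittent demand
fc <- croston(y, h = 14)
fc$mean   # a constant demand *rate*, e.g. 0.1 means roughly 1 unit per 10 periods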
46,564
Implicit Regularization in SGD on linear model
As noted by Leo, the other answer is technically incorrect since it assumes that $x^T x$ is invertible, but this is incompatible with the underdetermined problem where $d > n$, where $x^T x$ is not invertible but $xx^T$ is. A correct derivation still, however, follows the same approach outlined by elliotp. In general, ...
Implicit Regularization in SGD on linear model
As noted by Leo, the other answer is technically incorrect since it assumes that $x^T x$ is invertible, but this is incompatible with the underdetermined problem where $d > n$, where $x^T x$ is not in
Implicit Regularization in SGD on linear model As noted by Leo, the other answer is technically incorrect since it assumes that $x^T x$ is invertible, but this is incompatible with the underdetermined problem where $d > n$, where $x^T x$ is not invertible but $xx^T$ is. A correct derivation still, however, follows the ...
Implicit Regularization in SGD on linear model As noted by Leo, the other answer is technically incorrect since it assumes that $x^T x$ is invertible, but this is incompatible with the underdetermined problem where $d > n$, where $x^T x$ is not in
46,565
Implicit Regularization in SGD on linear model
The minimum $\ell_2$ norm solution can be found by solving the constrained optimization problem: $\underset{w}{\min} \Vert w \Vert_2^2~~s.t.~~y=Xw $ This can be written as an unconstrained convex optimization using the method of Lagrange multipliers at the limit $\lambda \rightarrow \infty$: $\underset{w}{\min}{\left(\...
Implicit Regularization in SGD on linear model
The minimum $\ell_2$ norm solution can be found by solving the constrained optimization problem: $\underset{w}{\min} \Vert w \Vert_2^2~~s.t.~~y=Xw $ This can be written as an unconstrained convex opti
Implicit Regularization in SGD on linear model The minimum $\ell_2$ norm solution can be found by solving the constrained optimization problem: $\underset{w}{\min} \Vert w \Vert_2^2~~s.t.~~y=Xw $ This can be written as an unconstrained convex optimization using the method of Lagrange multipliers at the limit $\lambda \...
Implicit Regularization in SGD on linear model The minimum $\ell_2$ norm solution can be found by solving the constrained optimization problem: $\underset{w}{\min} \Vert w \Vert_2^2~~s.t.~~y=Xw $ This can be written as an unconstrained convex opti
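A numerical check of the implicit-regularization claim (the sizes n = 5 < d = 20 and the step size are arbitrary): gradient descent on the least-squares loss, started at zero, converges to the minimum-norm interpolant $X^T (X X^T)^{-1} y$.
set.seed(1)
n <- 5; d <- 20
X <- matrix(rnorm(n * d), n, d); y <- rnorm(n)
w <- rep(0, d)
for (i in 1:20000) w <- w - 0.01 * crossprod(X, X %*% w - y)  # full-batch GD
w_min_norm <- crossprod(X, solve(tcrossprod(X), y))
max(abs(w - w_min_norm))   # ~0: GD from zero lands on the min-norm solution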
46,566
What are the leverage values for Ridge regression?
Ridge regression can be calculated via ordinary least squares (OLS) calculated with the data matrix $X$ extended with some surrogate data, taken as corresponding to the surrogate observations $Y_0=0$. Write the model, extended with the surrogate data, as $$ \begin{pmatrix} Y \\ Y_0=0\end{pmatrix} = \begin{pmatrix} X\...
What are the leverage values for Ridge regression?
Ridge regression can be calculated via ordinary least squares (OLS) calculated with the data matrix $X$ extended with some surrogate data, taken as corresponding to the surrogate observations $Y_0=0$.
What are the leverage values for Ridge regression? Ridge regression can be calculated via ordinary least squares (OLS) calculated with the data matrix $X$ extended with some surrogate data, taken as corresponding to the surrogate observations $Y_0=0$. Write the model, extended with the surrogate data, as $$ \begin{pm...
What are the leverage values for Ridge regression? Ridge regression can be calculated via ordinary least squares (OLS) calculated with the data matrix $X$ extended with some surrogate data, taken as corresponding to the surrogate observations $Y_0=0$.
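A quick verification of the augmentation trick (lambda and the data are arbitrary): the leverages from the ridge hat matrix $X(X^TX+\lambda I)^{-1}X^T$ coincide with the first $n$ leverages of OLS on the augmented data.
set.seed(1)
n <- 30; p <- 3; lambda <- 2
X <- matrix(rnorm(n * p), n, p)
H_ridge <- X %*% solve(crossprod(X) + lambda * diag(p), t(X))
X_aug <- rbind(X, sqrt(lambda) * diag(p))          # p surrogate rows with Y0 = 0
H_aug <- X_aug %*% solve(crossprod(X_aug), t(X_aug))
max(abs(diag(H_ridge) - diag(H_aug)[1:n]))         # ~0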
46,567
Gibbs sampling an Ising model with 0s and 1s
The Ising model is one of the simplest examples of distributions with intractable normalising constant: the exact definition of the pmf is $$\pi(x) \propto \exp\left\{-\beta \sum_{i=1}^{19} |x_{i+1}-x_i| \right\}\qquad x\in\{0,1\}^{20}$$meaning that $\pi(x)$ is equal to $$\dfrac{\exp\left\{-\beta \sum_{i=1}^{19} |x_{i+...
Gibbs sampling an Ising model with 0s and 1s
The Ising model is one of the simplest examples of distributions with intractable normalising constant: the exact definition of the pmf is $$\pi(x) \propto \exp\left\{-\beta \sum_{i=1}^{19} |x_{i+1}-x
Gibbs sampling an Ising model with 0s and 1s The Ising model is one of the simplest examples of distributions with intractable normalising constant: the exact definition of the pmf is $$\pi(x) \propto \exp\left\{-\beta \sum_{i=1}^{19} |x_{i+1}-x_i| \right\}\qquad x\in\{0,1\}^{20}$$meaning that $\pi(x)$ is equal to $$\d...
Gibbs sampling an Ising model with 0s and 1s The Ising model is one of the simplest examples of distributions with intractable normalising constant: the exact definition of the pmf is $$\pi(x) \propto \exp\left\{-\beta \sum_{i=1}^{19} |x_{i+1}-x
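A bare-bones Gibbs sampler for this 20-site chain (a sketch; beta = 1 is arbitrary): each site is redrawn from its full conditional, which involves only its one or two neighbours, so the intractable normalising constant never appears.
set.seed(1)
beta <- 1; n <- 20
x <- rbinom(n, 1, 0.5)
for (sweep in 1:5000) {
  for (i in 1:n) {
    nb <- c(if (i > 1) x[i - 1], if (i < n) x[i + 1])
    w <- sapply(0:1, function(v) exp(-beta * sum(abs(v - nb))))  # unnormalised conditional weights
    x[i] <- sample(0:1, 1, prob = w)
  }
}
x   # one (approximate) draw from the chain after many sweeps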
46,568
Estimation of quantile regression by hand
(A little bit more of a long comment than an answer, but I'm missing the reputation to comment) First, your calculation of the loss appears to be correct (this is R code): y <- c(5, 4, 5, 4, 7) x <- c(1, 2, 3, 4, 5) a <- 0.217092 b <- 1.594303 tau <- 0.75 f <- function(par, y, x, tau) { sum((tau - (y <= par[1] + par[...
Estimation of quantile regression by hand
(A little bit more of a long comment than an answer, but I'm missing the reputation to comment) First, your calculation of the loss appears to be correct (this is R code): y <- c(5, 4, 5, 4, 7) x <- c(1,
Estimation of quantile regression by hand (A little bit more of a long comment than an answer, but I'm missing the reputation to comment) First, your calculation of the loss appears to be correct (this is R code): y <- c(5, 4, 5, 4, 7) x <- c(1, 2, 3, 4, 5) a <- 0.217092 b <- 1.594303 tau <- 0.75 f <- function(par, y, x, ...
Estimation of quantile regression by hand (A little bit more of a long comment than an answer, but I'm missing the reputation to comment) First, your calculation of the loss appears to be correct (this is R code): y <- c(5, 4, 5, 4, 7) x <- c(1,
46,569
Estimation of quantile regression by hand
Solution using Matlab, building on the answer by @BayerSe (thanks my friend, I always respect people who share their knowledge with others); I have solved this problem in Matlab. Define the objective function f=@(a) sum((q-(y<=a(1)+a(2)*x)).*(y-a(1)-a(2)*x)), make an initial guess of $\alpha$ and $\beta$ and $q$: a_b = [0.1,0.2];...
Estimation of quantile regression by hand
Solution using Matlab, building on the answer by @BayerSe (thanks my friend, I always respect people who share their knowledge with others); I have solved this problem in Matlab. Define the objective function
Estimation of quantile regression by hand Solution using Matlab, building on the answer by @BayerSe (thanks my friend, I always respect people who share their knowledge with others); I have solved this problem in Matlab. Define the objective function f=@(a) sum((q-(y<=a(1)+a(2)*x)).*(y-a(1)-a(2)*x)), make an initial guess of $\al...
Estimation of quantile regression by hand Solution using Matlab, building on the answer by @BayerSe (thanks my friend, I always respect people who share their knowledge with others); I have solved this problem in Matlab. Define the objective function
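An R cross-check of the same calculation (a sketch using the toy data above; the optim() result may be slightly off since the check loss is piecewise linear, while quantreg::rq solves the exact linear program):
library(quantreg)
y <- c(5, 4, 5, 4, 7); x <- c(1, 2, 3, 4, 5); tau <- 0.75
f <- function(par) sum((tau - (y <= par[1] + par[2] * x)) * (y - par[1] - par[2] * x))
optim(c(0, 1), f)$par         # numerical minimiser of the check loss
coef(rq(y ~ x, tau = 0.75))   # exact solution for comparison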
46,570
z score on Wilcoxon signed ranks test?
Given the number of elements in your samples (and the number of ties in them), you can transform one (the test stat, say in your case the $V$ stat though as I show below, this will also work for the $W$ stat) to the other (the corresponding z score) fairly easily. The formulas for the W and V stats are widely availab...
z score on Wilcoxon signed ranks test?
Given the number of elements in your samples (and the number of ties in them), you can transform one (the test stat, say in your case the $V$ stat though as I show below, this will also work for the $
z score on Wilcoxon signed ranks test? Given the number of elements in your samples (and the number of ties in them), you can transform one (the test stat, say in your case the $V$ stat though as I show below, this will also work for the $W$ stat) to the other (the corresponding z score) fairly easily. The formulas f...
z score on Wilcoxon signed ranks test? Given the number of elements in your samples (and the number of ties in them), you can transform one (the test stat, say in your case the $V$ stat though as I show below, this will also work for the $
46,571
z score on Wilcoxon signed ranks test?
See Section 7.2 of BBR which discusses a simple, accurate $z$ test statistic for the Wilcoxon signed-rank test. $z$ equals the sum of signed ranks divided by the square root of the sum of their squares. This handles ties well also.
z score on Wilcoxon signed ranks test?
See Section 7.2 of BBR which discusses a simple, accurate $z$ test statistic for the Wilcoxon signed-rank test. $z$ equals the sum of signed ranks divided by the square root of the sum of their squar
z score on Wilcoxon signed ranks test? See Section 7.2 of BBR which discusses a simple, accurate $z$ test statistic for the Wilcoxon signed-rank test. $z$ equals the sum of signed ranks divided by the square root of the sum of their squares. This handles ties well also.
z score on Wilcoxon signed ranks test? See Section 7.2 of BBR which discusses a simple, accurate $z$ test statistic for the Wilcoxon signed-rank test. $z$ equals the sum of signed ranks divided by the square root of the sum of their squar
46,572
z score on Wilcoxon signed ranks test?
No, they are not interchangeable. The $V$ in Wilcoxon signed test from R wilcox test is "the sum of ranks assigned to the differences with positive sign". Let me show you by using ZeaMays data. First we do the Wilcoxon signed test using the wilcox.test function. install.packages("HistData") library(HistData) data(ZeaMay...
z score on Wilcoxon signed ranks test?
No, they are not interchangeable. The $V$ in Wilcoxon signed test from R wilcox test is "the sum of ranks assigned to the differences with positive sign". Let me show you by using ZeaMays data. First
z score on Wilcoxon signed ranks test? No, they are not interchangeable. The $V$ in Wilcoxon signed test from R wilcox test is "the sum of ranks assigned to the differences with positive sign". Let me show you by using ZeaMays data. First we do the Wilcoxon signed test using the wilcox.test function. install.packages("His...
z score on Wilcoxon signed ranks test? No, they are not interchangeable. The $V$ in Wilcoxon signed test from R wilcox test is "the sum of ranks assigned to the differences with positive sign". Let me show you by using ZeaMays data. First
46,573
z score on Wilcoxon signed ranks test?
I do the following to obtain the Z-score when doing a Wilcoxon signed rank test. test<-wilcox.test(mydata$x, mydata$y, paired=TRUE, exact=TRUE) print(test) # get the results of the Wilcoxon signed rank test Zstat<-qnorm(test$p.value/2) # obtain the Z-score abs(Zstat)/sqrt(20)
z score on Wilcoxon signed ranks test?
I do the following to obtain the Z-score when doing a Wilcoxon signed rank test. test<-wilcox.test(mydata$x, mydata$y, paired=TRUE, exact=TRUE) print(test) # get the results of the Wilcoxon signed r
z score on Wilcoxon signed ranks test? I do the following to obtain the Z-score when doing a Wilcoxon signed rank test. test<-wilcox.test(mydata$x, mydata$y, paired=TRUE, exact=TRUE) print(test) # get the results of the Wilcoxon signed rank test Zstat<-qnorm(test$p.value/2) # obtain the Z-score abs(Zstat)/sqrt(20)
z score on Wilcoxon signed ranks test? I do the following to obtain the Z-score when doing a Wilcoxon signed rank test. test<-wilcox.test(mydata$x, mydata$y, paired=TRUE, exact=TRUE) print(test) # get the results of the Wilcoxon signed r
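A sketch pulling the approaches above together on made-up paired data: the BBR-style statistic from the signed ranks and the z recovered from the wilcox.test p-value come out with similar magnitude (signs depend on conventions).
set.seed(1)
x <- rnorm(20); y <- x + rnorm(20, 0.5)
d <- y - x
sr <- rank(abs(d)) * sign(d)           # signed ranks
z_bbr <- sum(sr) / sqrt(sum(sr^2))     # sum of signed ranks / sqrt(sum of their squares)
z_p <- qnorm(wilcox.test(x, y, paired = TRUE, exact = FALSE)$p.value / 2)
c(z_bbr, z_p)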
46,574
Bayesian inference on the correlation parameter of a bivariate normal
Since $$L(y_1,\ldots,y_n|\rho)\propto(1-\rho^2)^{-\frac{n}{2}}\exp\bigg\{-\dfrac{\sum_{i=1}^{n}\left(\tilde{y}_{i1}^2 - 2\rho\tilde{y}_{i1}\tilde{y}_{i2}+\tilde{y}_{i2}^2\right)}{2(1-\rho^2)}\bigg \}$$ is a function of $\rho$ of the form $$(1-\rho^2)^{-\alpha}\exp\bigg\{-\dfrac{\beta}{1-\rho^2}-\dfrac{\gamma\rho}{1-\rho^2}\bigg \}\...
Bayesian inference on the correlation parameter of a bivariate normal
Since $$L(y_1,\ldots,y_n|\rho)\propto(1-\rho^2)^{-\frac{n}{2}}\exp\bigg\{-\dfrac{\sum_{i=1}^{n}\left(\tilde{y}_{i1}^2 - 2\rho\tilde{y}_{i1}\tilde{y}_{i2}+\tilde{y}_{i2}^2\right)}{2(1-\rho^2)}\bigg \}$$ is a functi
Bayesian inference on the correlation parameter of a bivariate normal Since $$L(y_1,\ldots,y_n|\rho)\propto(1-\rho^2)^{-\frac{n}{2}}\exp\bigg\{-\dfrac{\sum_{i=1}^{n}\left(\tilde{y}_{i1}^2 - 2\rho\tilde{y}_{i1}\tilde{y}_{i2}+\tilde{y}_{i2}^2\right)}{2(1-\rho^2)}\bigg \}$$ is a function of $\rho$ of the form $$(1-\rho^2)^{-\alpha}\ex...
Bayesian inference on the correlation parameter of a bivariate normal Since $$L(y_1,\ldots,y_n|\rho)\propto(1-\rho^2)^{-\frac{n}{2}}\exp\bigg\{-\dfrac{\sum_{i=1}^{n}\left(\tilde{y}_{i1}^2 - 2\rho\tilde{y}_{i1}\tilde{y}_{i2}+\tilde{y}_{i2}^2\right)}{2(1-\rho^2)}\bigg \}$$ is a functi
46,575
Bayesian inference on the correlation parameter of a bivariate normal
It seems that a Laplace approximation works quite well. Below I define the log-likelihood and its gradient. Note that I change the variable so that the support is on the real line for better Laplace approximation performance. I use a logit transformation, i.e., $\rho = \dfrac{2}{e^{-x}+1}-1$. likfcn <- function(x, a, b...
Bayesian inference on the correlation parameter of a bivariate normal
It seems that a Laplace approximation works quite well. Below I define the log-likelihood and its gradient. Note that I change the variable so that the support is on the real line for better Laplace a
Bayesian inference on the correlation parameter of a bivariate normal It seems that a Laplace approximation works quite well. Below I define the log-likelihood and its gradient. Note that I change the variable so that the support is on the real line for better Laplace approximation performance. I use a logit transforma...
Bayesian inference on the correlation parameter of a bivariate normal It seems that a Laplace approximation works quite well. Below I define the log-likelihood and its gradient. Note that I change the variable so that the support is on the real line for better Laplace a
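A hedged random-walk Metropolis sketch for this posterior, assuming standardised data and a flat prior on (-1, 1) (the data here are simulated with true rho = 0.6):
set.seed(1)
n <- 50; rho_true <- 0.6
y1 <- rnorm(n); y2 <- rho_true * y1 + sqrt(1 - rho_true^2) * rnorm(n)
loglik <- function(r)
  -n / 2 * log(1 - r^2) - sum(y1^2 - 2 * r * y1 * y2 + y2^2) / (2 * (1 - r^2))
rho <- numeric(5000)   # chain starts at 0
for (t in 2:5000) {
  prop <- rho[t - 1] + rnorm(1, 0, 0.1)
  accept <- abs(prop) < 1 && log(runif(1)) < loglik(prop) - loglik(rho[t - 1])
  rho[t] <- if (accept) prop else rho[t - 1]
}
mean(rho[-(1:1000)])   # posterior mean after burn-in, close to 0.6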
46,576
What is the point of introducing the concept of estimable function?
If you want to test $H_0: c^T\beta=0$ vs $H_1: c^T\beta\neq 0$, you will want to be able to estimate $c^T\beta$. Or, for example, if you need a prediction interval for $x_\text{new}\beta$, it would really help a lot if it's actually possible to estimate $x_\text{new}\beta$ ... (here we have $c=x_\text{new}^T$). When y...
What is the point of introducing the concept of estimable function?
If you want to test $H_0: c^T\beta=0$ vs $H_1: c^T\beta\neq 0$, you will want to be able to estimate $c^T\beta$. Or, for example, if you need a prediction interval for $x_\text{new}\beta$, it would r
What is the point of introducing the concept of estimable function? If you want to test $H_0: c^T\beta=0$ vs $H_1: c^T\beta\neq 0$, you will want to be able to estimate $c^T\beta$. Or, for example, if you need a prediction interval for $x_\text{new}\beta$, it would really help a lot if it's actually possible to estima...
What is the point of introducing the concept of estimable function? If you want to test $H_0: c^T\beta=0$ vs $H_1: c^T\beta\neq 0$, you will want to be able to estimate $c^T\beta$. Or, for example, if you need a prediction interval for $x_\text{new}\beta$, it would r
46,577
What is the point of introducing the concept of estimable function?
Let me give some perspective from linear algebra. In linear model $ y = X\beta +\epsilon$, $E(a'y) = a'X\beta$ so the definition actually says that $c'\beta$ is estimable if and only if $c\in C(X')$ where $C(X')$ is the row space of $X$. So if $X$ is full rank then any $c'\beta$ would be estimable. But what if $X$ is n...
What is the point of introducing the concept of estimable function?
Let me give some perspective from linear algebra. In linear model $ y = X\beta +\epsilon$, $E(a'y) = a'X\beta$ so the definition actually says that $c'\beta$ is estimable if and only if $c\in C(X')$ w
What is the point of introducing the concept of estimable function? Let me give some perspective from linear algebra. In linear model $ y = X\beta +\epsilon$, $E(a'y) = a'X\beta$ so the definition actually says that $c'\beta$ is estimable if and only if $c\in C(X')$ where $C(X')$ is the row space of $X$. So if $X$ is f...
What is the point of introducing the concept of estimable function? Let me give some perspective from linear algebra. In linear model $ y = X\beta +\epsilon$, $E(a'y) = a'X\beta$ so the definition actually says that $c'\beta$ is estimable if and only if $c\in C(X')$ w
46,578
What is the point of introducing the concept of estimable function?
In the general situation, we have a model that is parametrised by $\theta \in \Theta$. We are interested in estimating $\theta$, or estimating some function $g$ of $\theta$. We say that $g(\theta)$ is estimable if an unbiased estimator of $g(\theta)$ exists. That is, if there exists a statistic $T(Y)$ (a function from ...
What is the point of introducing the concept of estimable function?
In the general situation, we have a model that is parametrised by $\theta \in \Theta$. We are interested in estimating $\theta$, or estimating some function $g$ of $\theta$. We say that $g(\theta)$ is
What is the point of introducing the concept of estimable function? In the general situation, we have a model that is parametrised by $\theta \in \Theta$. We are interested in estimating $\theta$, or estimating some function $g$ of $\theta$. We say that $g(\theta)$ is estimable if an unbiased estimator of $g(\theta)$ e...
What is the point of introducing the concept of estimable function? In the general situation, we have a model that is parametrised by $\theta \in \Theta$. We are interested in estimating $\theta$, or estimating some function $g$ of $\theta$. We say that $g(\theta)$ is
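A toy illustration of all three answers (a two-group one-way layout with an overparameterised design matrix): the individual coefficients change across least-squares solutions, but the estimable combination mu + alpha_1 does not.
library(MASS)
X <- cbind(1, c(1, 1, 0, 0), c(0, 0, 1, 1))    # columns mu, alpha1, alpha2; rank 2
y <- c(3, 5, 7, 9)
b1 <- ginv(X) %*% y                            # one least-squares solution
b2 <- b1 + c(1, -1, -1)                        # another: X %*% c(1,-1,-1) = 0
c(b1[1], b2[1])                                # mu alone differs: not estimable
c(sum(c(1, 1, 0) * b1), sum(c(1, 1, 0) * b2))  # mu + alpha1 agrees: estimable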
46,579
Cox Regression when survival doesn't go to 0?
There is quite some literature on survival analysis under population heterogeneity, but like you notice yourself, I rarely see such models being used - or even considered - in applied research. I'll give some brief intuition, and hopefully others can add mathematically-heavier explanations if that's what you're looking...
Cox Regression when survival doesn't go to 0?
There is quite some literature on survival analysis under population heterogeneity, but like you notice yourself, I rarely see such models being used - or even considered - in applied research. I'll g
Cox Regression when survival doesn't go to 0? There is quite some literature on survival analysis under population heterogeneity, but like you notice yourself, I rarely see such models being used - or even considered - in applied research. I'll give some brief intuition, and hopefully others can add mathematically-heav...
Cox Regression when survival doesn't go to 0? There is quite some literature on survival analysis under population heterogeneity, but like you notice yourself, I rarely see such models being used - or even considered - in applied research. I'll g
46,580
Cox Regression when survival doesn't go to 0?
It may be perfectly fine to apply the Cox model in the situation you describe. The Cox model makes no assumptions about the baseline hazard $h_0(t)$ other than that it is non-negative for all $t$. So the baseline hazard may well go to zero fast enough to make the cumulative hazard $\int_0^t h_0(u) du$ go to a finite valu...
Cox Regression when survival doesn't go to 0?
It may be perfectly fine to apply the Cox model in the situation you describe. The Cox model makes no assumptions about the baseline hazard $h_0(t)$ other than that it is non-negative for all $t$. So th
Cox Regression when survival doesn't go to 0? It may be perfectly fine to apply the Cox model in the situation you describe. The Cox model makes no assumptions about the baseline hazard $h_0(t)$ other than that it is non-negative for all $t$. So the baseline hazard may well go to zero fast enough to make the cumulative h...
Cox Regression when survival doesn't go to 0? It may be perfectly fine to apply the Cox model in the situation you describe. The Cox model makes no assumptions about the baseline hazard $h_0(t)$ other than that it is non-negative for all $t$. So th
46,581
Polynomial approximations of nonlinearities in neural networks
The problem you're having is due to the asymptotic behavior of the remainder between the Taylor approximation of a function, and the function itself. If $f$ is at least $k$ times differentiable and you approximate it with a $k$'th order polynomial, $$ P_k(x) = f(a) + f'(a) (x - a) + \frac{1}{2} f''(a) (x-a)^2 + \dots ...
Polynomial approximations of nonlinearities in neural networks
The problem you're having is due to the asymptotic behavior of the remainder between the Taylor approximation of a function, and the function itself. If $f$ is at least $k$ times differentiable and y
Polynomial approximations of nonlinearities in neural networks The problem you're having is due to the asymptotic behavior of the remainder between the Taylor approximation of a function, and the function itself. If $f$ is at least $k$ times differentiable and you approximate it with a $k$'th order polynomial, $$ P_k(...
Polynomial approximations of nonlinearities in neural networks The problem you're having is due to the asymptotic behavior of the remainder between the Taylor approximation of a function, and the function itself. If $f$ is at least $k$ times differentiable and y
46,582
Polynomial approximations of nonlinearities in neural networks
So the issue is that Taylor series are not always the best finite approximation for a function. With the smooth relu example, you can take advantage of a few tricks to make the approximation better. As an example, $\ln(1+e^{x})=\ln(e^x(e^{-x}+1))=x+\ln(1+e^{-x})$. This form is advantageous when $x$ is large, because th...
Polynomial approximations of nonlinearities in neural networks
So the issue is that Taylor series are not always the best finite approximation for a function. With the smooth relu example, you can take advantage of a few tricks to make the approximation better. A
Polynomial approximations of nonlinearities in neural networks So the issue is that Taylor series are not always the best finite approximation for a function. With the smooth relu example, you can take advantage of a few tricks to make the approximation better. As an example, $\ln(1+e^{x})=\ln(e^x(e^{-x}+1))=x+\ln(1+e^...
Polynomial approximations of nonlinearities in neural networks So the issue is that Taylor series are not always the best finite approximation for a function. With the smooth relu example, you can take advantage of a few tricks to make the approximation better. A
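A numerical illustration of both answers (the expansion point 0 and the test points are arbitrary): the 4th-order Taylor polynomial of softplus at 0 deteriorates badly as |x| grows, while the exact rewrite x + log(1 + exp(-x)) stays accurate (and numerically stable) for large positive x.
softplus <- function(x) log(1 + exp(x))
taylor4 <- function(x) log(2) + x / 2 + x^2 / 8 - x^4 / 192  # Taylor expansion at 0
x <- c(0.5, 2, 5, 10)
rbind(exact = x + log(1 + exp(-x)),   # stable form for large positive x
      direct = softplus(x),
      taylor = taylor4(x))            # diverges as |x| grows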
46,583
Logistic regression and inclusion of independent and/or correlated variables
@NULL is right that this is a general question that isn't specific to your use case. Let me supplement his answer a little. What you really need to do in any given situation is think very hard about what you want to do and why. You have a situation where you want to build a model with response $Y$, but you believe ...
Logistic regression and inclusion of independent and/or correlated variables
@NULL is right that this is a general question that isn't specific to your use case. Let me supplement his answer a little. What you really need to do in any given situation is think very hard abou
Logistic regression and inclusion of independent and/or correlated variables @NULL is right that this is a general question that isn't specific to your use case. Let me supplement his answer a little. What you really need to do in any given situation is think very hard about what you want to do and why. You have a ...
Logistic regression and inclusion of independent and/or correlated variables @NULL is right that this is a general question that isn't specific to your use case. Let me supplement his answer a little. What you really need to do in any given situation is think very hard abou
46,584
Logistic regression and inclusion of independent and/or correlated variables
What you are asking are some fundamental questions about regression analysis that are not just about your specific use case. Hence, I recommend reading more on regression analysis or taking an online course such as Statistical Learning, taught by those who actually proposed some of the regularization methods I'm disc...
Logistic regression and inclusion of independent and/or correlated variables
What you are asking are some fundamental questions about regression analysis that are not just about your specific use case. Hence, I recommend reading more on regression analysis or taking an onlin
Logistic regression and inclusion of independent and/or correlated variables What you are asking are some fundamental questions about regression analysis that are not just about your specific use case. Hence, I recommend reading more on regression analysis or taking an online course such as Statistical Learning, taug...
Logistic regression and inclusion of independent and/or correlated variables What you are asking are some fundamental questions about regression analysis that are not just about your specific use case. Hence, I recommend reading more on regression analysis or taking an onlin
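A short sketch of the penalised route mentioned above (the data are simulated; in glmnet, alpha = 1 gives the lasso and alpha = 0 gives ridge):
library(glmnet)
set.seed(1)
X <- matrix(rnorm(200 * 10), 200, 10)
y <- rbinom(200, 1, plogis(X[, 1] - X[, 2]))
cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 1)
coef(cvfit, s = "lambda.1se")   # weak or redundant predictors are shrunk to zero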
46,585
What are the consequences of including unnecessary random effects?
Barr, Levy, Scheepers & Tily (2013) present an argument and simulations for why you should (by default) use the maximal random effects structure justified by your design. The crux of the argument is that the maximal model will generalize better. The paper also provides an argument for why it is anti-conservative to use...
What are the consequences of including unnecessary random effects?
Barr, Levy, Scheepers & Tily (2013) present an argument and simulations for why you should (by default) use the maximal random effects structure justified by your design. The crux of the argument is t
What are the consequences of including unnecessary random effects? Barr, Levy, Scheepers & Tily (2013) present an argument and simulations for why you should (by default) use the maximal random effects structure justified by your design. The crux of the argument is that the maximal model will generalize better. The pap...
What are the consequences of including unnecessary random effects? Barr, Levy, Scheepers & Tily (2013) present an argument and simulations for why you should (by default) use the maximal random effects structure justified by your design. The crux of the argument is t
46,586
Formula for number of weights in neural network
The reason you're confused is the fact that function nnetar creates an autoregressive neural network and not a standard neural network. This means that the input layer nodes of the network are: the exogenous regressors that you pass with xreg, the autoregressive variables that nnetar creates, the bias term. Runni...
Formula for number of weights in neural network
The reason you're confused is the fact that function nnetar creates an autoregressive neural network and not a standard neural network. This means that the input layer nodes of the network are: the
Formula for number of weights in neural network The reason you're confused is the fact that function nnetar creates an autoregressive neural network and not a standard neural network. This means that the input layer nodes of the network are: the exogenous regressors that you pass with xreg, the autoregressive variab...
Formula for number of weights in neural network The reason you're confused is the fact that function nnetar creates an autoregressive neural network and not a standard neural network. This means that the input layer nodes of the network are: the
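A back-of-envelope version of the count for a single-hidden-layer net, assuming the usual nnet-style counting with p lagged inputs, one exogenous regressor and k hidden nodes (the specific numbers are hypothetical):
p <- 8; k <- 4; n_xreg <- 1
(p + n_xreg + 1) * k + (k + 1)   # (inputs + bias) * hidden, plus (hidden + bias) for the output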
46,587
What is this chart of before and after data called?
They are called Sankey Diagrams. There are a few different options to create these in R. Your example could have been created with straight lines using parallel sets. There are a few other R packages though that can make more complicated diagrams, see this SO Q/A for some examples.
What is this chart of before and after data called?
They are called Sankey Diagrams. There are a few different options to create these in R. Your example could have been created with straight lines using parallel sets. There are a few other R packages
What is this chart of before and after data called? They are called Sankey Diagrams. There are a few different options to create these in R. Your example could have been created with straight lines using parallel sets. There are a few other R packages though that can make more complicated diagrams, see this SO Q/A for...
What is this chart of before and after data called? They are called Sankey Diagrams. There are a few different options to create these in R. Your example could have been created with straight lines using parallel sets. There are a few other R packages
46,588
How to improve rare event binary classification performance?
Casting this as a classification problem was a major misstep. This is inherently a "tendency estimation", i.e., probability estimation problem. That is what logistic regression is all about. And you've chosen improper accuracy scores - scores that are optimized by choosing the wrong features and giving them the wron...
How to improve rare event binary classification performance?
Casting this as a classification problem was a major misstep. This is inherently a "tendency estimation", i.e., probability estimation problem. That is what logistic regression is all about. And yo
How to improve rare event binary classification performance? Casting this as a classification problem was a major misstep. This is inherently a "tendency estimation", i.e., probability estimation problem. That is what logistic regression is all about. And you've chosen improper accuracy scores - scores that are opti...
How to improve rare event binary classification performance? Casting this as a classification problem was a major misstep. This is inherently a "tendency estimation", i.e., probability estimation problem. That is what logistic regression is all about. And yo
46,589
How to improve rare event binary classification performance?
In addition to Frank Harrell's important point about classification versus prediction, you might need to consider that you don't have the information needed to judge the probability of admission. AUC is the one measure in your list that isn't subject to arbitrary choices of cutoffs for classification, and it is very cl...
How to improve rare event binary classification performance?
In addition to Frank Harrell's important point about classification versus prediction, you might need to consider that you don't have the information needed to judge the probability of admission. AUC
How to improve rare event binary classification performance? In addition to Frank Harrell's important point about classification versus prediction, you might need to consider that you don't have the information needed to judge the probability of admission. AUC is the one measure in your list that isn't subject to arbit...
How to improve rare event binary classification performance? In addition to Frank Harrell's important point about classification versus prediction, you might need to consider that you don't have the information needed to judge the probability of admission. AUC
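A tiny illustration of why proper scores matter here (a simulated low event rate): classification accuracy rewards the useless "predict nobody is admitted" rule, while the Brier score and log-loss evaluate the probabilities themselves.
set.seed(1)
p_hat <- runif(1000, 0, 0.2)                 # a (toy) probability model
y <- rbinom(1000, 1, p_hat)
mean(y == 0)                                 # "accuracy" of always predicting 0
mean((p_hat - y)^2)                          # Brier score of the probabilities
-mean(y * log(p_hat) + (1 - y) * log(1 - p_hat))   # log-loss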
46,590
Variance of a sample covariance for normal variables
The OP is interested in Var(sample covariances) in a bivariate Normal world. You know the solution for: Var(sample variances) (main diagonal), so ... All that is needed is the solution for: Var(sample covariance) Then there is no need for any matrix notation whatsoever, and if I understand correctly, the question re...
Variance of a sample covariance for normal variables
The OP is interested in Var(sample covariances) in a bivariate Normal world. You know the solution for: Var(sample variances) (main diagonal), so ... All that is needed is the solution for: Var(samp
Variance of a sample covariance for normal variables The OP is interested in Var(sample covariances) in a bivariate Normal world. You know the solution for: Var(sample variances) (main diagonal), so ... All that is needed is the solution for: Var(sample covariance) Then there is no need for any matrix notation whats...
Variance of a sample covariance for normal variables The OP is interested in Var(sample covariances) in a bivariate Normal world. You know the solution for: Var(sample variances) (main diagonal), so ... All that is needed is the solution for: Var(samp
46,591
Variance of a sample covariance for normal variables
After following the suggestion from Mark Stone I looked up Wishart distribution and estimation of covariance matrices and here's a quick summary. For a random, normally distributed $p$ element vector with a covariance matrix $\Sigma$ the quantity: $$ \sum_{i=1}^n \mathbf{X}_i\mathbf{X}_i^T \sim W_p(\Sigma, n-1) $$ wher...
Variance of a sample covariance for normal variables
After following the suggestion from Mark Stone I looked up Wishart distribution and estimation of covariance matrices and here's a quick summary. For a random, normally distributed $p$ element vector
Variance of a sample covariance for normal variables After following the suggestion from Mark Stone I looked up Wishart distribution and estimation of covariance matrices and here's a quick summary. For a random, normally distributed $p$ element vector with a covariance matrix $\Sigma$ the quantity: $$ \sum_{i=1}^n \ma...
Variance of a sample covariance for normal variables After following the suggestion from Mark Stone I looked up Wishart distribution and estimation of covariance matrices and here's a quick summary. For a random, normally distributed $p$ element vector
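A Monte-Carlo check of the Wishart-based result for the off-diagonal term, under my reading of it: $\mathrm{Var}(s_{xy}) = (\sigma_{xx}\sigma_{yy} + \sigma_{xy}^2)/(n-1)$ for bivariate normal data.
library(MASS)
set.seed(1)
Sigma <- matrix(c(2, 0.8, 0.8, 1), 2); n <- 25
s12 <- replicate(2e4, cov(mvrnorm(n, c(0, 0), Sigma))[1, 2])
var(s12)                                                # empirical
(Sigma[1, 1] * Sigma[2, 2] + Sigma[1, 2]^2) / (n - 1)   # theoretical: 0.11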
46,592
Distribution of sum of two independent normals conditional on one of them
Given: $X$ and $Y$ are independent standard Normals with pdf's $\phi(.)$ and cdf's $\Phi(.)$. Since $X$ and $Y$ are independent, the joint pdf of $\big((X \; \big|\;X<c), \; Y\big)$ is $f(x,y) = {\large\frac{\phi(x)}{\Phi(c)}} \phi(y)$: where Erf[.] denotes the error function. Part 1: The pdf of $Z = X+Y \; | \; X<c$...
Distribution of sum of two independent normals conditional on one of them
Given: $X$ and $Y$ are independent standard Normals with pdf's $\phi(.)$ and cdf's $\Phi(.)$. Since $X$ and $Y$ are independent, the joint pdf of $\big((X \; \big|\;X<c), \; Y\big)$ is $f(x,y) = {\la
Distribution of sum of two independent normals conditional on one of them Given: $X$ and $Y$ are independent standard Normals with pdf's $\phi(.)$ and cdf's $\Phi(.)$. Since $X$ and $Y$ are independent, the joint pdf of $\big((X \; \big|\;X<c), \; Y\big)$ is $f(x,y) = {\large\frac{\phi(x)}{\Phi(c)}} \phi(y)$: where E...
Distribution of sum of two independent normals conditional on one of them Given: $X$ and $Y$ are independent standard Normals with pdf's $\phi(.)$ and cdf's $\Phi(.)$. Since $X$ and $Y$ are independent, the joint pdf of $\big((X \; \big|\;X<c), \; Y\big)$ is $f(x,y) = {\la
46,593
Distribution of sum of two independent normals conditional on one of them
Sorry for not delivering the details, but $$ \int_{-\infty}^c \phi(x) \; \Phi(x-\sqrt{2} c) \, dx = 2T(c, \sqrt{2}-1) $$ where $T$ is the Owen $T$-function. This function is available in Mathematica/Wolfram and in the R package OwenQ. library(OwenQ) pr <- function(c){ 2*OwenT(c, sqrt(2)-1) / pnorm(c) } curve(Vectoriz...
46,594
Are D-separation and Conditional independence equivalent?
D-separation is not equivalent to conditional independence. The d-separation of $X$ and $Y$ given $Z$ implies the following conditional independence: $$P(X,Y|Z) = P(X|Z)P(Y|Z).$$ However, d-separation is a concept that applies specifically to graphical models. You can talk about conditional independence in any context...
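To see the graphical side concretely, here is a small sketch (my addition; it assumes the bnlearn package, whose model2network() builds a DAG and dsep() tests d-separation):

library(bnlearn)
chain    <- model2network("[A][B|A][C|B]")    # A -> B -> C
collider <- model2network("[A][B][C|A:B]")    # A -> C <- B
dsep(chain, "A", "C", "B")      # TRUE: conditioning on B blocks the chain
dsep(collider, "A", "B")        # TRUE: the collider C blocks the path marginally
dsep(collider, "A", "B", "C")   # FALSE: conditioning on the collider opens it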
46,595
Why is empirical risk minimization prone to overfitting?
It's a pretty general question; I'll try to lay out the main ideas in a simple manner. There are a lot of good resources you can use for further reading; one I can recommend is Shai Shalev-Shwartz's "Understanding Machine Learning", which focuses on the theoretical foundations of machine learning. Put very s...
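A minimal illustration of the gap between empirical and true risk (my sketch, base R): a near-interpolating polynomial drives the training loss to almost zero while the error on fresh data from the same distribution stays large.

set.seed(1)
n  <- 15
x  <- runif(n);    y  <- sin(2*pi*x) + rnorm(n, sd = 0.3)    # curve + noise
xt <- runif(1000); yt <- sin(2*pi*xt) + rnorm(1000, sd = 0.3)
fit <- lm(y ~ poly(x, 12))                 # very flexible ERM fit
mean(residuals(fit)^2)                     # empirical risk: near 0
mean((yt - predict(fit, newdata = data.frame(x = xt)))^2)  # true risk: large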
46,596
Why is empirical risk minimization prone to overfitting?
If we know the true distribution, memorizing it won't lead to overfitting. In the example in the answer above, suppose the true distribution is the black curve plus some noise; then we would know that the empirical loss at those overfit points differs from their expected loss, $E_{sample}[L(x,y,\theta)] \neq E_{data}[L(x,y,\theta)]$ ...
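As a tiny numeric illustration of that gap (my addition, under squared-error loss with Gaussian noise of known sd):

f <- function(x) sin(2*pi*x)       # the known "black curve"
sigma <- 0.3
set.seed(4)
x0 <- 0.3; y0 <- f(x0) + rnorm(1, sd = sigma)
(y0 - y0)^2                # empirical loss of memorizing y0: exactly 0
sigma^2 + (y0 - f(x0))^2   # expected loss E[(Y_new - y0)^2] at x0: > sigma^2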
46,597
Why is empirical risk minimization prone to overfitting?
There are already some good answers here explaining the general point; I'll just add two more points. Overfitting of the empirical risk is especially prominent with a small training set: when the data don't contain enough information to learn the underlying pattern, more regularization is needed to fill in ...
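To make the small-sample point concrete, a rough sketch (my addition; it assumes the glmnet package, with alpha = 0 giving a ridge penalty):

library(glmnet)
set.seed(2)
n <- 20; p <- 50
X <- matrix(rnorm(n*p), n, p)
beta <- c(rep(1, 5), rep(0, p - 5))
y <- X %*% beta + rnorm(n)
Xt <- matrix(rnorm(2000*p), 2000, p); yt <- Xt %*% beta + rnorm(2000)
erm   <- glmnet(X, y, alpha = 0, lambda = 1e-6)   # essentially unregularized
ridge <- glmnet(X, y, alpha = 0, lambda = 1)      # heavily shrunk
mean((yt - predict(erm,   newx = Xt))^2)   # large test error
mean((yt - predict(ridge, newx = Xt))^2)   # noticeably smaller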
46,598
What is the difference between PCA and PLS-DA?
Quick answer, which I will expand in a few days: PLS-DA is a supervised method in which you supply the information about each sample's group. PCA, on the other hand, is an unsupervised method, which means that you are just projecting the data into, let's say, 2D space in a way that is good for observing how the samples are clustering by ...
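A rough sketch of the contrast (my addition; PCA via base R's prcomp, and PLS-DA written as PLS regression on a class-indicator matrix via the pls package):

library(pls)
X <- scale(as.matrix(iris[, 1:4]))
pc <- prcomp(X)                                  # PCA: labels never used
Y  <- model.matrix(~ Species - 1, data = iris)   # one column per group
pd <- plsr(Y ~ X, ncomp = 2)                     # PLS-DA: labels drive the projection
plot(pc$x[, 1:2],       col = iris$Species, main = "PCA scores")
plot(scores(pd)[, 1:2], col = iris$Species, main = "PLS-DA scores")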
46,599
What is the difference between PCA and PLS-DA?
PCA is used for clustering, whereas PLS-DA is used for classification; in other words, PCA shows the similarities among the samples, while PLS-DA shows the discrimination between the groups.
46,600
Loss function for Logistic Regression
You got off on the wrong track, as detailed here. Just because you have a binary $Y$ does not mean that you should be interested in classification. You are really interested in a probability model, so logistic regression is a good choice. Get the nomenclature right or you will confuse everyone. To the main point, t...
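To illustrate the probability-model view (my sketch, not from the answer): fit by maximum likelihood and judge the fit with the log loss, i.e. the negative mean Bernoulli log-likelihood, rather than with an accuracy-style classification rule:

set.seed(3)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 1.2*x))
fit  <- glm(y ~ x, family = binomial)        # maximizes the Bernoulli likelihood
phat <- fitted(fit)
-mean(y*log(phat) + (1 - y)*log(1 - phat))   # log loss of the fitted probabilities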