T-test paradox: can adding a single point very far from the null value change the outcome from significant to nonsignificant?
Why is this a paradox? You are describing a typical situation which we encounter daily: your hypothesis is rejected, then you add one more observation and it's not rejected anymore. I think the reason why it looks like a paradox of sorts is purely psychological. It's called "framing bias" in behavioral economics. Let's re-frame it. Is it possible that a larger sample does not reject the same hypothesis that a smaller sample does? I'm sure you'd say "Sure! Why not?". Now, take a smaller sample and start adding observations from the larger sample to it. At some point the hypothesis will stop being rejected. At this point it was exactly one observation that changed the outcome. And this is what many of us face quite often, especially when building models on quarterly or monthly economic data. One data point may flip the outcome of the test. That's one reason I ask my modelers to conduct a robustness check by moving the sample boundaries by a couple of periods and observing whether the results still hold.

UPDATE Here's the "proof"; it's as rigorous as a physicist would bother to produce for himself. You have a sample $x_1, x_2$ with $x_2=x_1+\delta$, where $0<\delta\ll 1$. The mean and the dispersion are $\bar x_2=x_1+\delta/2$ and $s_2=\delta/2$. You tested a hypothesis and rejected it because $\frac{\bar x_2-H_0}{s_2}>c>0$, where $c$ is a critical value corresponding to your significance level. The expanded form is $$\frac{2x_1+\delta-2H_0}{\delta}>c>0$$ Now, you add a third observation to the sample, such that $x_3>\bar x_2$.

The new mean is $$\bar x_3=\frac{2x_1+\delta+x_3}{3}$$ and the dispersion is $$s_3= \frac{\sqrt 2}{3}\sqrt{\delta^2 + \delta (x_1 - x_3) + (x_1 - x_3)^2}$$ Let's test the same hypothesis: $$\frac{\bar x_3-H_0}{s_3}=\frac{\frac{2x_1+\delta+x_3}{3}-H_0}{\frac{\sqrt 2}{3}\sqrt{\delta^2 + \delta (x_1 - x_3) + (x_1 - x_3)^2}}$$ $$=\frac{2x_1+\delta+x_3-3H_0}{\sqrt 2\sqrt{\delta^2 + \delta (x_1 - x_3) + (x_1 - x_3)^2}}$$ $$\lim_{\delta\to 0}\frac{\bar x_3-H_0}{s_3}=\frac{2x_1+x_3-3H_0}{\sqrt 2\sqrt{ (x_1 - x_3)^2}} =\frac{2x_1+x_3-3H_0}{\sqrt 2\,(x_3 - x_1)}$$ Let's do a trick here: $$=\frac{x_3-x_1+3x_1-3H_0}{\sqrt 2\,(x_3 - x_1)} =\frac{1}{\sqrt 2}\left(1+3\,\frac{x_1-H_0}{x_3 - x_1}\right)$$ If you pull $x_3$ far to the right so that $x_3-x_1\gg x_1-H_0$, then you get $$\lim_{\substack{\delta\to 0\\ x_3\to\infty}}\frac{\bar x_3-H_0}{s_3} =\frac{1}{\sqrt 2}\approx 0.71$$ Notice how you could make your test statistic arbitrarily large by picking a small $\delta$ in the original sample: $$\lim_{\delta\to 0}\frac{\bar x_2-H_0}{s_2}=\infty$$ This simply demonstrates the point that @whuber emphasized in his comment: the test statistic is determined by a combination of inputs, including the original sample mean and variance, the additional observation, the critical value of the test, and the value $H_0$. You have a bunch of inputs with which you can easily construct an example that would reproduce your "paradox". However, I come back to my point about "framing bias": by wording your question in such a way that all the focus is on the new observation, you made it sound as if there was only a little input that flips the situation upside down, while in reality there are all these other inputs that I just mentioned.
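A quick numerical check of this limit (a Python sketch; the statistic here is the $(\bar x-H_0)/s$ ratio with the population standard deviation, as in the derivation, not the full $t$-statistic, and the sample values are made up):

```python
import statistics

def z_stat(sample, h0=0.0):
    # Statistic from the derivation: (mean - H0) / dispersion, where the
    # dispersion is the population standard deviation (for a two-point
    # sample {x1, x1 + delta} this gives s_2 = delta / 2).
    return (statistics.fmean(sample) - h0) / statistics.pstdev(sample)

x1, delta = 1.0, 1e-6

# Two tight points far from H0 = 0: the statistic is enormous.
print(z_stat([x1, x1 + delta]))

# Add a third point far to the right: the statistic collapses toward
# 1/sqrt(2) ~ 0.71, below any usual critical value.
print(z_stat([x1, x1 + delta, 1e9]))
```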
Use of Bayesian hierarchical model
To the best of my knowledge, there is no opposition between Bayesian models (BM) and hierarchical Bayesian models (HBM) (see e.g. Relation between Bayesian analysis and Bayesian hierarchical analysis?) and the fact is that, analytically, HBMs are BMs. Hierarchical models simply allow you to design more involved prior structures that are more likely to represent e.g. interactions between the variables of your model, and thus to provide better-suited inference. So you should use a hierarchical model as soon as hyperparameters appear naturally in the modeling of your problem. A simple example is when you need to account for individual-level and group-level variation: $$ y_{ij} \sim N(\mu_j,\sigma^2_j) \mbox{, (individual level variation)} $$ $$ \mu_j \sim Gamma(k_{\mu},\theta_{\mu}) \mbox{, (group level variation)} $$ with $k_{\mu}$ and $\theta_{\mu}$ (and $\sigma^2_j$ if unknown) assigned well-chosen priors.
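As a minimal illustration, here is a forward simulation from this two-level structure (a Python sketch; the hyperparameter values, number of groups, and the common unit $\sigma$ are invented for the example):

```python
import random

random.seed(0)

# Hypothetical fixed hyperparameters; in a full Bayesian analysis,
# k_mu and theta_mu (and sigma, if unknown) would themselves get priors.
k_mu, theta_mu, sigma = 2.0, 3.0, 1.0

data = {}
for j in range(4):                                    # four groups
    mu_j = random.gammavariate(k_mu, theta_mu)        # group-level draw
    data[j] = [random.gauss(mu_j, sigma) for _ in range(5)]  # individuals
```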
Use of Bayesian hierarchical model
In my opinion, there are two different aspects to your question: when should I use a hierarchical model? when should I perform a Bayesian analysis? When should I use a hierarchical model? An advantage to using hierarchical models is their flexibility in modeling the continuum from all groups have the same parameters to all groups have completely different parameters. For example, the normal hierarchical model (with a known variance of 1 for simplicity) is $$ y_{ij} \stackrel{ind}{\sim} N(\theta_j, 1), \quad \theta_j \stackrel{ind}{\sim} N(\mu,\sigma^2) $$ for groups $j=1,\ldots,J$ and individuals $i=1,\ldots,n_j$ in each group. If the means of each group are actually similar (or identical), then $\sigma^2$ will be estimated to be small and the resulting inference for the individual $\theta_j$ will be almost the same as if you had just assumed a common mean $\theta$ for all groups. In contrast, if the groups have very different means, then $\sigma^2$ will be large and the resulting inference for the individual $\theta_j$ will be almost the same as if you didn't have the hierarchical model at all. Thus you didn't have to choose between a model with a common mean for all groups and a completely independent mean for each group; the hierarchical model allowed the data to tell you where you fell along that continuum. An additional advantage of hierarchical models occurs when the number of observations per group varies widely. In these situations, the groups with smaller numbers of observations will have improved inference about their group parameters by borrowing information, via the hierarchical model, about the group-specific parameters. When should I perform a Bayesian analysis? Once you have decided to use a hierarchical model for your data, there is still the question of how you will estimate parameters and account for their uncertainty. While there are other options, many people will opt for a Bayesian analysis because of the computational tools, e.g. 
Markov chain Monte Carlo, and the propagation of uncertainty, e.g. uncertainty in $\mu$ and $\sigma^2$ gets propagated to uncertainty about the $\theta_j$ group means. In order to perform a Bayesian analysis, you need a prior for unmodeled parameters, e.g. $\mu$ and $\sigma^2$ in the example, and, if there are enough groups and enough observations, you can generally be non-informative about these parameters.
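The continuum described above shows up directly in the conditional posterior mean of $\theta_j$ given $\mu$ and $\sigma^2$: a precision-weighted average of the group average and the common mean. A Python sketch (the numeric values are illustrative, not from any dataset):

```python
def shrunk_mean(ybar_j, n_j, mu, sigma2):
    # Posterior mean of theta_j given mu and sigma^2 in the normal
    # hierarchical model with known observation variance 1: a
    # precision-weighted average of the group mean ybar_j and mu.
    prior_prec = 1.0 / sigma2   # precision of the N(mu, sigma^2) prior
    data_prec = float(n_j)      # precision of ybar_j (variance 1/n_j)
    return (prior_prec * mu + data_prec * ybar_j) / (prior_prec + data_prec)

# Small sigma^2: the estimate is pulled toward the common mean mu = 0.
pooled = shrunk_mean(ybar_j=5.0, n_j=10, mu=0.0, sigma2=0.01)

# Large sigma^2: the estimate stays near the group's own average.
unpooled = shrunk_mean(ybar_j=5.0, n_j=10, mu=0.0, sigma2=100.0)
```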
What are neurons in neural networks / how do they work?
You are correct in your overall view of the subject. The neuron is nothing more than a set of inputs, a set of weights, and an activation function. The neuron translates these inputs into a single output, which can then be picked up as input for another layer of neurons later on. While details can vary between neural networks, the function $f(x_1, x_2, \ldots , x_n)$ is often just a weighted sum: $$ f(x_1, x_2, \ldots , x_n) = w_1\cdot x_1 + w_2\cdot x_2 + \ldots + w_n\cdot x_n $$ Each neuron has a weight vector $w = (w_1, w_2, \ldots, w_n)$, where $n$ is the number of inputs to that neuron. These inputs can be either the 'raw' input features — say temperature, precipitation, and wind speed for a weather model — or the output of neurons from an earlier layer. The weights for each neuron are tuned during the training stage such that the final network output is biased toward some value (usually 1) for signal, and another (usually -1 or 0) for background. Non-linear behavior in a neural network is accomplished by use of an activation function (often a sigmoid function) to which the output of $f$ is passed and modified. This allows neural networks to describe more complicated systems while still combining inputs in a simple fashion. I found the Deep Learning in a Nutshell series by Tim Dettmers to be a very approachable introduction to the subject of machine learning in general.
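The weighted sum followed by a sigmoid fits in a few lines (a Python sketch; the weights and the weather-style input values are made up for illustration):

```python
import math

def neuron(inputs, weights, bias=0.0):
    # Weighted sum of the inputs, passed through a sigmoid activation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Made-up inputs (e.g. temperature, precipitation, wind speed) and weights:
out = neuron([0.5, -1.2, 0.3], [0.8, 0.4, -0.5])
```

The output lies in (0, 1) and can feed the next layer's neurons.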
What are neurons in neural networks / how do they work?
Within an artificial neural network, a neuron is a mathematical function that models the functioning of a biological neuron. Typically, a neuron computes the weighted sum of its inputs, and this sum is passed through a nonlinear function, often called an activation function, such as the sigmoid. I attach an image from an AI course that illustrates it (note that in this particular case the weighted sum also contains a bias term). The output of the neuron can then be sent as input to the neurons of another layer, which could repeat the same computation (weighted sum of the input and transformation with an activation function). Note that this computation corresponds to multiplying a vector of input/activation states with a matrix of weights (and passing the resulting vector through the activation function). If you are interested in neural networks, not only from the engineering point of view but also in how these artificial neurons resemble real biological neurons, a classic book which could still be interesting is Parallel Distributed Processing by Rumelhart and McClelland.
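The matrix-of-weights view can be sketched as follows (Python, with made-up numbers; each row of W holds one neuron's weights):

```python
import math

def layer(W, x, b):
    # One layer: weight matrix times activation vector, plus bias,
    # then the sigmoid activation applied elementwise.
    z = [sum(w * xi for w, xi in zip(row, x)) + bi
         for row, bi in zip(W, b)]
    return [1.0 / (1.0 + math.exp(-zi)) for zi in z]

# Two inputs feeding three neurons (illustrative weights and biases):
W = [[0.1, -0.2], [0.4, 0.3], [-0.5, 0.2]]
a = layer(W, [1.0, 0.5], [0.0, -0.1, 0.2])
```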
What are neurons in neural networks / how do they work?
The simplest versions are fairly simple, especially the regression case. If $y$ is your outcome and $X$ is your data, the model is $$ y= Z'\beta $$ $$ Z=\sigma(X'\alpha) $$ $$ \sigma(x) = 1/(1+e^{-x}) $$ $\alpha$ and $\beta$ are parameters, and are chosen by an optimizer. Fixing the number of $Z$'s is the same as fixing the number of columns of $\alpha$.
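A direct transcription of this model (a Python sketch; $\alpha$ is stored as a list of its columns, one per $Z$, and the parameter values are arbitrary rather than chosen by an optimizer):

```python
import math

def sigma(v):
    # The logistic activation sigma(x) = 1 / (1 + e^{-x}).
    return 1.0 / (1.0 + math.exp(-v))

def predict(x, alpha_cols, beta):
    # Z = sigma(X'alpha): each column of alpha produces one Z.
    Z = [sigma(sum(a * xi for a, xi in zip(col, x))) for col in alpha_cols]
    # y = Z'beta.
    return sum(b * z for b, z in zip(beta, Z))

# Two inputs, three Z's (three columns of alpha), arbitrary parameters:
alpha_cols = [[0.5, -1.0], [1.5, 0.2], [-0.3, 0.8]]
beta = [1.0, -2.0, 0.5]
y = predict([1.0, 2.0], alpha_cols, beta)
```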
What are neurons in neural networks / how do they work?
In Software Engineering Artificial Neural Networks, neurons are "containers" of mathematical functions, typically drawn as circles in graphical representations of Artificial Neural Networks (see picture below). One or more neurons form a layer -- a set of layers typically arranged in a vertical line in Artificial Neural Network representations. In more complex hardware systems, each computer, or each cluster of computers, can be seen as a single neuron in graphical representations. From the Software Engineering point of view: neurons can belong to input layers (red circles below), hidden layers (blue circles) or output layers (green circles). In a simple one-way Artificial Neural Network (also called a Feed-Forward Artificial Neural Network): Input layer neurons receive the input information (usually numeric representations of text, image, audio and other types of data), process it through a mathematical function (activation function) and "send" an output to the next layer's neurons based on conditions. On the way to the other layer's neurons, that data is multiplied by preset weights (placed on the graphical lines linking one neuron to the others). Hidden layer neurons receive inputs from the input layer or from the previous hidden layer, pass them through new functions and send the result to the next layer's neurons. Again, the data here is typically multiplied by weights on the way. Output layer neurons receive inputs from previous layers, process them through new functions and output the expected results. The results could be simple binary classifications (0 or 1, yes or no, black or white, dog or not dog), multiple-choice classifications (e.g. cat, dog or wolf), numeric predictions, matrices and so on. Depending on the type of Artificial Neural Network, this output could be used as a final result, or as an input to a new loop over the same or another neural net. 
The mathematical function contained in a neuron can vary depending on the type of Artificial Neural Network -- they could be simple regression functions, non-linear sigmoid functions and so on.
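The input, hidden, and output stages described above can be sketched as one forward pass (Python; the layer sizes, weights, and biases are invented for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers):
    # Each layer is a list of (weights, bias) pairs, one per neuron.
    # A layer's outputs become the inputs of the next layer,
    # multiplied by that layer's preset weights along the way.
    for layer in layers:
        x = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in layer]
    return x

# Toy feed-forward net: 2 inputs, 3 hidden neurons, 1 output neuron.
hidden = [([0.1, -0.2], 0.0), ([0.4, 0.3], -0.1), ([-0.5, 0.2], 0.2)]
output = [([1.0, -1.0, 0.5], 0.0)]
result = forward([1.0, 0.5], [hidden, output])
```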
Kernel matrix is not positive definite
Most research in kernel methods focuses on Mercer kernels, which have two properties: (1) the function is symmetric: $K(x,y)=K(y,x)$ and (2) the function is positive semi-definite (p.s.d.). The Gaussian covariance function is certainly p.s.d., but I can't recall if it is also p.d. -- perhaps you've mistakenly omitted the "semi-" from your mental definition? Alternatively, this is potentially a numerical issue. I'm not sure how you've concluded that the result is not p.s.d., but, for example, spectral decomposition algorithms will sometimes commit errors due to finite-precision arithmetic, so some of the smallest eigenvalues will be slightly negative (on the order of whatever the error is in your numerical software). These minute errors can be safely ignored in most practical applications. Adding a positive number to the diagonal, larger than that numerical error, can suppress this. If two feature vectors are identical, or nearly so, this can also cause numerical issues. Testing if an arbitrary covariance function is a valid kernel relies on Mercer's theorem, which is a bit involved for this forum. I'd recommend referring to his research.
Kernel matrix is not positive definite
I believe I know that example. I worked it in R and got the same problem with singularity. You might try adding some white noise to the model. The squared exponential kernel is very smooth, so if your data points are close together, the covariance matrix goes singular. Recall that at short distances, differentiable functions are approximately linear and the columns of the matrix will be collinear. To add noise, add a constant to $K(i,i)$, the diagonal element.
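The effect and its fix can be reproduced without any library (a Python sketch; the squared-exponential kernel, the near-duplicate inputs, and the jitter size are all illustrative). With two inputs only 1e-9 apart, the off-diagonal kernel value rounds to exactly 1 in double precision, a plain Cholesky factorization hits a zero pivot, and a small constant on the diagonal restores positive definiteness:

```python
import math

def rbf(a, b, ell=1.0):
    # Squared-exponential (Gaussian) kernel.
    return math.exp(-((a - b) ** 2) / (2.0 * ell ** 2))

def cholesky(K):
    # Plain Cholesky; raises when the matrix is not positive definite.
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = K[i][i] - s
                if d <= 0.0:
                    raise ValueError("matrix is not positive definite")
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (K[i][j] - s) / L[j][j]
    return L

xs = [0.0, 1e-9, 1.0]          # two nearly identical inputs
K = [[rbf(a, b) for b in xs] for a in xs]

# cholesky(K) raises: the first two rows are numerically identical.
# Adding a small "nugget" to each diagonal element K(i,i) fixes it:
jitter = 1e-6
K_j = [[K[i][j] + (jitter if i == j else 0.0) for j in range(len(xs))]
       for i in range(len(xs))]
L = cholesky(K_j)
```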
$X_i, X_j$ independent when $i≠j$, but $X_1, X_2, X_3$ dependent?
Here is an example of this, attributed to S. Bernstein. Let $X_1, X_2, X_3$ have the joint pmf $$p\left(x_1, x_2, x_3 \right) =\begin{cases} \frac{1}{4} & \left(x_1, x_2, x_3 \right) \in \left\{ (1,0,0), (0,1,0), (0,0,1), (1,1,1) \right\} \\ 0 & \text{otherwise} \end{cases}$$ Then by summing out the third variable it is easy to see that the joint pmf of $X_i$ and $X_j$, $i\neq j$ is $$p_{ij} (x_i, x_j)= \begin{cases} \frac{1}{4} & (x_i, x_j) \in \left\{ (0,0), (1,0), (0,1), (1,1) \right\} \\ 0 & \text{otherwise} \end{cases} $$ Finally, the marginal pmf of $X_i$ is $$p_i(x_i) = \begin{cases} \frac{1}{2} & x_i= 0, 1 \\ 0 & \text{otherwise} \end{cases}$$ Now, note that for $i\neq j$ $$p_{ij} (x_i, x_j) =p_i (x_i) p_j (x_j)$$ and thus $X_i$ and $X_j$ are independent. However $$p(x_1, x_2, x_3) \neq p_1 (x_1) p_2 (x_2) p_3 (x_3)$$ and so $X_1, X_2, X_3$ are not independent. Thus pairwise independence does not imply mutual independence. The latter is a stronger condition and it's usually the one we use with random samples.
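Since the support is finite, the claim can be verified by direct enumeration; here is a Python sketch using exact arithmetic (the helper `marginal` is my own illustrative function, not from the answer):

```python
from itertools import product
from fractions import Fraction

# Bernstein's example: joint pmf uniform on these four points
support = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
p = {pt: Fraction(1, 4) for pt in support}

def marginal(dims):
    # Sum out all coordinates except those in `dims`
    m = {}
    for pt, pr in p.items():
        key = tuple(pt[d] for d in dims)
        m[key] = m.get(key, Fraction(0)) + pr
    return m

# Pairwise independence: p_ij(x_i, x_j) = p_i(x_i) p_j(x_j) for every pair
for i, j in [(0, 1), (0, 2), (1, 2)]:
    pij, pi, pj = marginal([i, j]), marginal([i]), marginal([j])
    for xi, xj in product([0, 1], repeat=2):
        assert pij.get((xi, xj), Fraction(0)) == pi[(xi,)] * pj[(xj,)]

# But not mutual independence: p(1,1,1) = 1/4, not 1/8
assert p[(1, 1, 1)] != marginal([0])[(1,)] * marginal([1])[(1,)] * marginal([2])[(1,)]
print("pairwise independent, but not mutually independent")
```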
$X_i, X_j$ independent when $i≠j$, but $X_1, X_2, X_3$ dependent?
$X$, $Y$ independent Bernoulli$(\frac 12)$ and $Z= X+Y-2XY$ is an example of three random variables that are pairwise independent but not mutually independent. It is easy to show that $Z$ is also Bernoulli$(\frac 12)$ and that $(X,Z)$ and $(Y,Z)$ are pairs of independent random variables, and of course, $(X,Y)$ is a pair of independent random variables by assumption. (If you feel too lazy to carry this out for yourself, note that the answer by @JohnK essentially uses $X_1=X, X_2=Y, X_3 = 1-Z$). Thus, $X,Y,Z$ are said to be pairwise independent random variables. However, for $X,Y,Z$ to be called mutually independent random variables, their joint probability mass function must factor into the product of the individual (marginal) probability mass functions, that is, if $X, Y, Z$ take on values in the sets $\{x_i\}, \{y_j\}, \{z_k\}$ respectively, then $X,Y,Z$ are said to be mutually independent random variables if for all choices of $x_i, y_j, z_k$, $$P\{X=x_i, Y=y_j, Z = z_k\} = P\{X=x_i\}P\{Y = y_j\}P\{Z = z_k\}.$$ In the example above, it is easy to verify that $$P\{X=1,Y=1,Z=1\} = 0 \neq \frac 18 = P\{X=1\}P\{Y=1\}P\{Z=1\}$$ and so $X,Y,Z$ cannot be called mutually independent random variables. Lest you think that it is necessary to use discrete random variables to have examples such as the one above, consider three standard normal random variables $X,Y,Z$ whose joint probability density function $f_{X,Y,Z}(x,y,z)$ is not $\phi(x)\phi(y)\phi(z)$ where $\phi(\cdot)$ is the standard normal density, but rather $$f_{X,Y,Z}(x,y,z) = \begin{cases} 2\phi(x)\phi(y)\phi(z) & ~~~~\text{if}~ x \geq 0, y\geq 0, z \geq 0,\\ & \text{or if}~ x < 0, y < 0, z \geq 0,\\ & \text{or if}~ x < 0, y\geq 0, z < 0,\\ & \text{or if}~ x \geq 0, y< 0, z < 0,\\ 0 & \text{otherwise.} \end{cases}\tag{1}$$ Note that $X$, $Y$, and $Z$ are not a set of three jointly normal random variables but as will be described below, any two of these is indeed a pair of independent normal random variables. 
We can calculate the joint density of any pair of the random variables, (say $X$ and $Z$) by integrating out the joint density with respect to the unwanted variable, that is, $$f_{X,Z}(x,z) = \int_{-\infty}^\infty f_{X,Y,Z}(x,y,z)\,\mathrm dy. \tag{2}$$ If $x \geq 0, z \geq 0$ or if $x < 0, z < 0$, then $f_{X,Y,Z}(x,y,z) = \begin{cases} 2\phi(x)\phi(y)\phi(z), & y \geq 0,\\ 0, & y < 0,\end{cases}$ and so $(2)$ reduces to $$f_{X,Z}(x,z) = \phi(x)\phi(z)\int_{0}^\infty 2\phi(y)\,\mathrm dy = \phi(x)\phi(z). \tag{3}$$ If $x \geq 0, z < 0$ or if $x < 0, z \geq 0$, then $f_{X,Y,Z}(x,y,z) = \begin{cases} 2\phi(x)\phi(y)\phi(z), & y < 0,\\ 0, & y \geq 0,\end{cases}$ and so $(2)$ reduces to $$f_{X,Z}(x,z) = \phi(x)\phi(z)\int_{-\infty}^0 2\phi(y)\,\mathrm dy = \phi(x)\phi(z). \tag{4}$$ In short, $(3)$ and $(4)$ show that $f_{X,Z}(x,z) = \phi(x)\phi(z)$ for all $x, z \in (-\infty,\infty)$ and so $X$ and $Z$ are (pairwise) independent standard normal random variables. Similar calculations (left as an exercise for the bemused reader) show that $X$ and $Y$ are (pairwise) independent standard normal random variables, and $Y$ and $Z$ also are (pairwise) independent standard normal random variables. But $X,Y,Z$ are not mutually independent normal random variables. Nor are the three of them together a set of jointly normal random variables. Indeed, their joint density $f_{X,Y,Z}(x,y,z)$ does not equal the product $\phi(x)\phi(y)\phi(z)$ of their marginal densities for any choice of $x, y, z \in (-\infty,\infty)$
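The discrete construction at the start of this answer can be checked exhaustively: $Z = X+Y-2XY$ is just the XOR of the two fair bits. A Python sketch (the `prob` helper is illustrative):

```python
from itertools import product

# Z = X + Y - 2XY is the XOR of two fair Bernoulli(1/2) bits;
# each (x, y) pair occurs with probability 1/4
outcomes = [(x, y, x + y - 2 * x * y) for x, y in product([0, 1], repeat=2)]

def prob(pred):
    return sum(1 for o in outcomes if pred(o)) / 4

# Every pair among (X, Y, Z) is independent
for a, b in [(0, 1), (0, 2), (1, 2)]:
    for va, vb in product([0, 1], repeat=2):
        joint = prob(lambda o: o[a] == va and o[b] == vb)
        assert joint == prob(lambda o: o[a] == va) * prob(lambda o: o[b] == vb)

# But not mutually independent: P(X=1, Y=1, Z=1) = 0, not 1/8
assert prob(lambda o: o == (1, 1, 1)) == 0
print("pairwise independent, but P(X=1, Y=1, Z=1) = 0")
```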
$X_i, X_j$ independent when $i≠j$, but $X_1, X_2, X_3$ dependent?
One that's perhaps easier to think about comes from a chessboard. Pick a point uniformly on the chessboard and consider
$X_1$: row number (1-8) modulo 2
$X_2$: column number (1-8) modulo 2
$X_3$: colour, 0 for white, 1 for black.
It's easy to see that any pair of these is independent: rows are independent of columns; row is independent of colour. But if you know $X_1$ and $X_2$, you know $X_3$ exactly.
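This too can be checked by brute force over the 64 squares. A Python sketch (labelling colour as $(\text{row}+\text{column}) \bmod 2$ is one convention; the helper names are illustrative):

```python
from itertools import product

# Uniform point on an 8x8 chessboard
cells = list(product(range(1, 9), range(1, 9)))

def x1(r, c): return r % 2            # row parity
def x2(r, c): return c % 2            # column parity
def x3(r, c): return (r + c) % 2      # colour, as 0/1

def prob(pred):
    return sum(pred(r, c) for r, c in cells) / len(cells)

feats = [x1, x2, x3]
# Any pair of the three features is independent
for a, b in [(0, 1), (0, 2), (1, 2)]:
    for va, vb in product([0, 1], repeat=2):
        joint = prob(lambda r, c: feats[a](r, c) == va and feats[b](r, c) == vb)
        assert joint == prob(lambda r, c: feats[a](r, c) == va) * \
                        prob(lambda r, c: feats[b](r, c) == vb)

# But colour is fully determined by row and column parity
assert all(x3(r, c) == (x1(r, c) + x2(r, c)) % 2 for r, c in cells)
print("pairwise independent, yet X3 = (X1 + X2) mod 2")
```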
What to do if residuals are not normally distributed?
You should not remove outliers just because they make the distribution of the residuals non-normal. You may examine the case that has that high residual and see if there are problems with it (the easiest would be if it is a data entry error), but you must justify your deletion on substantive grounds. Assuming there is no good reason to remove that observation, you can run the regression with and without it and see if there are any large differences in the parameter estimates; if not, you can leave it in and note that removing it made little difference. If it makes a big difference, then you could try robust regression, which deals with outliers, or quantile regression, which makes no assumptions about the distribution of the residuals. I am a fan of quantile regression, which I think is very underutilized.
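The "fit with and without it" check is easy to script. A Python/NumPy sketch with synthetic data (the numbers and the single injected outlier are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.5, 30)

def ols_slope(x, y):
    # Ordinary least squares fit of y on an intercept and x
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

b_clean = ols_slope(x, y)

# Append one high-leverage outlier at the right edge and refit
x_out = np.append(x, 10.0)
y_out = np.append(y, 50.0)
b_out = ols_slope(x_out, y_out)

print(f"slope without outlier: {b_clean:.3f}, with outlier: {b_out:.3f}")
```

If the two estimates barely differ, report that and leave the point in; if they differ a lot, that is the cue to reach for robust or quantile regression.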
What to do if residuals are not normally distributed?
@PeterFlom has made some good points here. I agree with his three points and his plan of action. Let me clear up one remaining issue: You are correct to note that only the residuals need to be normally distributed. However, @dsaxton is also right that in the real world, no data (including residuals) are ever perfectly normal. Thus what you really need are residuals that are 'normal enough'. If the population distribution of errors is very close to normal (which is implied by your qq-plot once the outlier is accounted for), then the central limit theorem implies that the sampling distribution of your betas will converge to the normal as $N$ increases. So although your data are still nearly significant even with the outlier excluded, I think you will be fine following @PeterFlom's advice. You may be interested in reading this excellent CV thread: Is normality testing 'essentially useless'?
What to do if residuals are not normally distributed?
As this is an optional assumption, just ignore the normality of residuals and go ahead. In more sensitive cases, you can remove the outliers, or replace them with the mean or another justified value, to improve the fit.
Is centering a valid solution for multicollinearity?
When the model is additive and linear, centering has nothing to do with collinearity. Centering can only help when there are multiple terms per variable such as square or interaction terms. Even then, centering only helps in a way that doesn't matter to us, because centering does not impact the pooled multiple degree of freedom tests that are most relevant when there are multiple connected variables present in the model. For example, if a model contains $X$ and $X^2$, the most relevant test is the 2 d.f. test of association, which is completely unaffected by centering $X$. The next most relevant test is that of the effect of $X^2$ which again is completely unaffected by centering.
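The invariance is easy to verify numerically: centering $X$ before forming $X$ and $X^2$ leaves the column space of the design matrix unchanged, so the fitted values, and hence any pooled F test against a reduced model, are identical. A Python/NumPy sketch (synthetic data; the `rss` helper is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 1.0 + 0.3 * x + 0.1 * x**2 + rng.normal(0, 1, 50)

def rss(X, y):
    # Residual sum of squares of the least-squares fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

xc = x - x.mean()
X_raw = np.column_stack([np.ones_like(x), x, x**2])     # 1, x, x^2
X_cen = np.column_stack([np.ones_like(x), xc, xc**2])   # 1, x-m, (x-m)^2

# Same column space -> same fit -> identical 2-d.f. F test of (x, x^2)
print("RSS raw:     ", rss(X_raw, y))
print("RSS centered:", rss(X_cen, y))
```

(Algebraically, $(x-m)^2 = x^2 - 2mx + m^2$, so each centered column is a linear combination of the raw columns.)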
Is centering a valid solution for multicollinearity?
(An easy way to find out is to try it and check for multicollinearity using the same methods you had used to discover the multicollinearity the first time ;-) No, unfortunately, centering $x_1$ and $x_2$ will not help you. When you have multicollinearity with just two variables, you have a (very strong) pairwise correlation between those two variables. Consider this example in R:

library(MASS)
set.seed(1)
X = mvrnorm(100, mu=c(30,30), Sigma=rbind(c(100, 97),
                                          c( 97, 100)))
x1 = X[,1]
x2 = X[,2]
cor(x1, x2)  # [1] 0.9698819

Centering is just a linear transformation, so it will not change anything about the shapes of the distributions or the relationship between them. Instead, it just slides them in one direction or the other. To see this, let's try it with our data:

x1c = x1 - mean(x1)
x2c = x2 - mean(x2)
cor(x1c, x2c)  # [1] 0.9698819

The correlation is exactly the same. A scatterplot of the centered variables looks exactly the same as the original, except that it is now centered on $(0, 0)$. To learn more about these topics, it may help you to read these CV threads: Centering: When should you center your data & when should you standardize? Multicollinearity: Is there an intuitive explanation why multicollinearity is a problem in linear regression? Remedies: Dealing with correlated regressors
Is centering a valid solution for multicollinearity?
When you ask if centering is a valid solution to the problem of multicollinearity, then I think it is helpful to discuss what the problem actually is. I say this because there is great disagreement about whether or not multicollinearity is "a problem" that needs a statistical solution. Many people, including many very well-established people, have very strong opinions on multicollinearity, which go as far as mocking people who consider it a problem. The very best example is Goldberger, who compared testing for multicollinearity with testing for "small sample size", which is obviously nonsense. Very good expositions can be found in Dave Giles' blog. See here and here for the Goldberger example. Let me define what I understand under multicollinearity: one or more of your explanatory variables are correlated to some degree. What is the problem with that? Well, it can be shown that the variance of your estimator increases. Is this a problem that needs a solution? Well, from a meta-perspective, it is a desirable property. If your variables do not contain much independent information, then the variance of your estimator should reflect this. From a researcher's perspective, it is however often a problem because publication bias forces us to put stars into tables, and a high variance of the estimator implies low power, which is detrimental to finding significant effects if effects are small or noisy. If this is the problem, then what you are looking for are ways to increase precision. But stop right here! Note: if you do find effects, you can stop considering multicollinearity a problem. Apparently, even if the independent information in your variables is limited, i.e. they are correlated, you are still able to detect the effects that you are looking for. So the "problem" has no consequence for you. Now to your question: does subtracting means from your data "solve collinearity"?
One answer has already been given: the collinearity of said variables is not changed by subtracting constants. You can see this by asking yourself: does the covariance between the variables change? Well, since the covariance is defined as $Cov(x_i,x_j) = E[(x_i-E[x_i])(x_j-E[x_j])]$, or their sample analogues if you wish, then you see that adding or subtracting constants does not matter. Hence, centering has no effect on the collinearity of your explanatory variables. Does centering improve your precision? In this case, we need to look at the variance-covariance matrix of your estimator and compare the two. The problem is that it is difficult to compare: in the non-centered case, when an intercept is included in the model, you have a matrix with one more dimension (note here that I assume that you would skip the constant in the regression with centered variables). However, since there is no intercept anymore, the dependency of your other estimates on the estimate of your intercept is clearly removed (i.e. if you define the problem of collinearity as "(strong) dependence between regressors, as measured by the off-diagonal elements of the variance-covariance matrix", then the answer is more complicated than a simple "no"). In any case, it might be that the standard errors of your estimates appear lower, which means that the precision could have been improved by centering (it might be interesting to simulate this to test it). Having said that, if you do a statistical test, you will need to adjust the degrees of freedom correctly, and then the apparent increase in precision will most likely be lost (I would be surprised if not). If centering does not improve your precision in meaningful ways, what helps? You could consider merging highly correlated variables into one factor (if this makes sense in your application). Outlier removal also tends to help, as does GLM estimation etc. (even though this is less widely applied nowadays).
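The covariance argument takes one line of code to confirm: shifting each variable by any constant leaves the sample covariance untouched. A Python/NumPy sketch (the shift constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10, 2, 200)
z = 0.8 * x + rng.normal(0, 1, 200)   # two correlated regressors

# Subtracting any constants (e.g. the means) leaves Cov(x, z) unchanged
a, b = 3.7, -12.0
cov_raw = np.cov(x, z)[0, 1]
cov_shifted = np.cov(x - a, z - b)[0, 1]
cov_centered = np.cov(x - x.mean(), z - z.mean())[0, 1]
print(cov_raw, cov_shifted, cov_centered)
```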
Is a p-value a sample statistic, or a population parameter, or neither?
A p-value is a random variable, so it's not a population parameter. You could certainly argue that it's a statistic: A statistic (singular) is a single measure of some attribute of a sample (e.g., its arithmetic mean value). It is calculated by applying a function (statistical algorithm) to the values of the items of the sample, which are known together as a set of data.
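Because it is computed from the sample, the p-value has a sampling distribution of its own; under $H_0$ (with a continuous test statistic) it is Uniform(0,1). A quick simulation sketch in standard-library Python, using a one-sample z test with known $\sigma$ (the function name and simulation sizes are illustrative):

```python
import math
import random

random.seed(0)

def one_sample_z_p(sample, mu0=0.0, sigma=1.0):
    # The p-value is itself a function of the sample -- a statistic
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Under H0 the p-value is a random variable, approximately Uniform(0,1)
pvals = [one_sample_z_p([random.gauss(0, 1) for _ in range(20)])
         for _ in range(2000)]
frac_below = sum(p < 0.05 for p in pvals) / len(pvals)
print(f"fraction of p-values below 0.05 under H0: {frac_below:.3f}")
```

The fraction below 0.05 comes out close to 0.05, as the uniform distribution under the null predicts.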
Is a p-value a sample statistic, or a population parameter, or neither?
A $p$-value is the probability of observing a test statistic value as or more extreme than the test statistic computed from one's data, if the null hypothesis is true. You can therefore interpret the $p$-value as a measure of how extreme your test statistic is under H$_{0}$ and the probability distribution attached to H$_{0}$. The $p$-value is therefore a statistic: a function of one's data and one's choice of H$_{0}$.
Is a p-value a sample statistic, or a population parameter, or neither?
If the test statistic can be called a statistic, then so must the $p$-value: the test statistic is a function of the data under the assumption that the null hypothesis is true. The $p$-value is simply a probability associated with that test statistic.
Visualising the variance
If the main concern is "that our customers gets very conscious of the actual number. Even though we try to tell them that a high/low number isn't necessarily bad", then I think you should formally address it by plotting the confidence intervals. Variance is a bad choice because its unit is the square of whatever you're measuring in, so the numbers are much bigger and can be potentially very misleading. Standard deviation is a better approach, but it does not answer your customers' concern, because from the SD alone one cannot tell if the point estimates are really different from the reference mean. Some kind of plot modified from a forest plot would be a better candidate. It's compact and easy to integrate with text fields (where you can show the summary statistics). And what's more, it answers your client's question head on: if they are worried that 3.5 is so much lower than 4.6, then show them that statistically they are not different. (Or maybe your clients are right.) And somewhat contrary to what you propose to do (eliminating numeric output altogether), I'd perhaps try to enrich the graph so that it shows more data. Devices like a panel histogram or violin plot (see below) allow you to show the distribution of the actual data, which perhaps will give a strong visual cue that the data do spread and it's not about just one point. Also, I'd recommend evaluating your score distribution for skewness or other deviations from the normal distribution, and seeing if augmenting with some non-parametric plot like a box plot would be a good idea. Side comment: I feel that your trimming criterion is very rigid, but I wouldn't question your familiarity with the scale. Anyhow, if such a trimming scheme is being used, I feel you're also obligated to report how many of the people are trimmed, because the variation you're using to convince them that things are not that different can be potentially altered by how you define the trimming threshold. It'd be awkward if they found out later.
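The interval endpoints to plot can be computed directly from the sample, mean plus or minus a critical value times the standard error. A minimal sketch with hypothetical scores on a 1-6 scale, using the normal approximation (1.96) rather than a t critical value for simplicity:

```python
import math

def approx_95ci(scores):
    # 95% CI for the mean, normal approximation (prefer a t critical value for small n)
    n = len(scores)
    mean = sum(scores) / n
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)   # sample variance
    half = 1.96 * math.sqrt(var / n)                       # half-width of the interval
    return mean - half, mean + half

# hypothetical customer satisfaction scores on a 1-6 scale
scores = [4, 5, 3, 4, 6, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4, 6, 3, 4, 5]
lo, hi = approx_95ci(scores)
print(round(lo, 2), round(hi, 2))  # draw these as the interval around the point estimate
```

If 4.6 (the reference) falls inside the plotted interval, the visual already makes the "not really different" point for you.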
Visualising the variance
The question can be reduced to "How do I show one value of interest against a reference distribution?". The former, showing a value of interest, is the simple part; any dramatic marking at that point on the graph will do. So it will be useful to show different ways of displaying the reference distribution. We need not know exactly what that reference distribution is to give pertinent advice. One of the most usual ways to show a distribution is to plot its probability density function or its cumulative distribution function (most often referred to as PDF and CDF respectively). The plot below shows a reference distribution that is normally distributed with a mean of 40 and a standard deviation of 15. A value of interest, at 80, is superimposed as an unmistakable large red dot. The grey line shows the estimate of the CDF in the left plot and of the PDF in the right plot. This type of graph also works for less well defined reference distributions: for example, you could plot the smoothed kernel density estimate of the PDF (or CDF) based on the prior reference values and superimpose the current value of interest just the same. From these plots one can estimate the probability of getting a value above or below the current value of interest. With the CDF it is read directly off the chart; with the PDF one has to estimate it as the area to the left or right of the value of interest. Another alternative (which Penguin shows) is to reflect the PDF and show its area as a violin plot. This provides some more visual girth for the area in the tail of the distribution. Here the value of interest is marked by a black horizontal line, and the area above the value is colored red. Another popular alternative for showing distributions is box plots (or error bar charts). 
The error bars in the left chart cover the middle 80% of the reference distribution, and the box plot on the right plots the interquartile range within the grey bar; points outside the whiskers are typically considered to be outliers by a robust criterion. These are potentially susceptible to the rote worrying you noticed though - everything is fine if within the bars and the sky is falling if it is outside. Depending on how well the reference distribution is estimated, you could plot letter values beyond the interquartile range, or plot a continuous density strip to show the reference distribution. Below is an example of a continuous gradient, where the darker grey symbolizes a higher PDF for the reference distribution. (See 40 years of boxplots by Wickham & Stryjewski.)
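The tail probability being read off those charts can also be computed directly for the example's reference distribution, Normal(40, 15), with the value of interest at 80. A small sketch using the closed-form normal CDF (illustrative; a kernel density estimate would replace this for an empirical reference distribution):

```python
import math

def normal_cdf(x, mu, sigma):
    # cumulative probability of the reference Normal(mu, sigma) at x
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# reference distribution from the example: mean 40, sd 15; value of interest 80
p_below = normal_cdf(80, 40, 15)
print(round(p_below, 4))        # nearly all reference values fall below 80
print(round(1 - p_below, 4))    # the small upper-tail area to shade on the PDF
```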
Visualising the variance
As I understand from his comments, Christian wants to add an icon-like representation of the variance to an existing plot. We don't know what kind of plot yet. For a dotplot, the moment-of-inertia representation of the variance is possibly a solution. Taking the standard deviation of the sample as the horizontal radius is a good option, and one can choose three colors for a "low-medium-high" scale.
Visualising the variance
The square root of the variance is on the same scale as your data; it is known as the standard deviation. It is common practice to normalize values to multiples of the standard deviation, such that $+3\sigma$ is considered an unusually high value, whereas $-3\sigma$ is considered unusually low. This is known as "standardization", or the $z$-score.
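The standardization described above is a one-liner; a tiny sketch with hypothetical numbers:

```python
def z_score(x, mean, sd):
    # number of standard deviations x lies from the mean
    return (x - mean) / sd

# a score of 85 against a reference mean of 70 with sd 5
z = z_score(85, 70, 5)
print(z)  # 3.0: unusually high by the +/-3 sigma convention
```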
Visualising the variance
I would suggest using bar plots with pairs of bars, the target group next to the reference group, and on top of each bar plotting error bars (I's) centered around the top of the bar with a total length of $2\sigma$, where $\sigma$ is the standard deviation. See the example from Wikipedia's Error Bar article:
Can't find p-values in the output from lmer() in the lme4 package in R [duplicate]
Use the pvals.fnc() function; the pMCMC value here works like a p-value, which should be less than 0.05 to reject the null hypothesis.
Can't find p-values in the output from lmer() in the lme4 package in R [duplicate]
The author of the lme4 package (which provides lmer) made a conscious choice not to produce p-values for the fixed effects. Some packages do, but he feels that they are doing simplistic calculations that are misleading. (Many statisticians feel that there's a p-value obsession that causes confusion in and of itself, but that's a separate matter.) He addresses the question in this post. I believe the summary paragraph is: "Most of the research on tests for the fixed-effects specification in a mixed model begin with the assumption that these statistics will have an F distribution with a known numerator degrees of freedom and the only purpose of the research is to decide how to obtain an approximate denominator degrees of freedom. I don't agree." I don't understand the issue well enough to paraphrase it.
Can't find p-values in the output from lmer() in the lme4 package in R [duplicate]
Install the coda and languageR packages and run pvals.fnc as follows to get p-values: Model.pval <- pvals.fnc(Model, nsim = n, withMCMC = TRUE) Note that this will not work for level 3 or above in nested random-effects models.
How to correlate ordinal and nominal variables in SPSS?
You should have a look at multiple correspondence analysis. This is a technique to uncover patterns and structure in categorical data; it is an example of what some people call "French data analysis". In SPSS, you can use the CORRESPONDENCE command. If you prefer the menu, it is available via Analyze > Data Reduction > Correspondence Analysis. However, before doing that, start with cross-tabulations between the variables. In SPSS the command is called CROSSTABS, or click Analyze > Descriptive Statistics > Crosstabs.
How to correlate ordinal and nominal variables in SPSS?
You might want to look at the AUTORECODE command (Transform > Automatic Recode) if you are reading a lot of string data that needs to be converted to numeric. Parametric and nonparametric correlations are available from the Analyze > Correlate menu for a first look. There are tools available as extensions for color coding significant and/or large correlations. There is also a user-posted tool for generating a graphical representation of a correlation table that you can find in the Graphics forum in the SPSS Community website.
How to correlate ordinal and nominal variables in SPSS?
Try Categorical Regression (Optimal Scaling). Nominal variables don't have a scale. How far is 'divorced' from 'married'? The question does not make sense unless you have another measure to help put the nominal variable's levels in order and at distances from each other. Ordinal variables don't have a scale either. How far is 'fair' from 'good'? There is order but no distance in an ordinal ranking. You can put them on a scale with respect to some other, dependent, variable. So there is no correlation with ordinal or nominal variables, because correlation is a measure of association between scale variables. However, the optimal scaling procedure creates a scale for nominal (and ordinal) variables, based on the variable levels' association with a dependent variable. This syntax will produce a correlation matrix between a scale dependent variable and nominal independent variables. GET FILE='C:\Program Files\IBM\SPSS\Statistics\22\Samples\English\car_sales.sav'. DATASET NAME DataSet1 WINDOW=FRONT. DATASET ACTIVATE DataSet1. CATREG VARIABLES=sales manufact model type /ANALYSIS=sales(LEVEL=SPORD,DEGREE=2,INKNOT=2) WITH manufact(LEVEL=NOMI) model(LEVEL=NOMI) type(LEVEL=NOMI) /DISCRETIZATION=sales(RANKING) manufact(RANKING) model(RANKING) type(RANKING) /PRINT=CORR QUANT(manufact model type) /PLOT=TRANS(manufact model type)(20). Notice that I also included the quantifications and plots for the transformed variables. You cannot make sense of the correlation coefficients unless you can also make sense of the new scales created for the nominal (or ordinal) variables. CATREG is a very powerful and rich feature of SPSS. See also: Case Study, and the doctoral thesis by the creator of the SPSS implementation. Another option to find the relationship between ordinal and nominal variables is to use Decision Trees. You will not get a correlation coefficient, but the algorithm will group nominal variables and split ordinal variables based on association with another variable. 
Using the CRT method and selecting Variable Importance (Output > Statistics), you can generate a ranking of each independent (predictor) variable's association with the dependent (target) variable. The importance is a measure of association, like correlation. If you are only interested in one factor level (e.g. [Marital status] = 'Married'), use dummy coding for a new variable so that Married = 1 if Marital status = 'Married', else 0. With the dummy variable you are creating two groups: Married and everything else. You can use the dummy variable as a scale variable because the two groups are on a scale, one unit apart.
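The dummy-coding step described above can be sketched outside SPSS as well; a tiny illustration with hypothetical data (Python shown, but any tool works):

```python
# dummy-code a nominal variable: 1 for the level of interest, 0 otherwise
marital_status = ["Married", "Single", "Divorced", "Married", "Widowed", "Married"]
married = [1 if s == "Married" else 0 for s in marital_status]
print(married)  # [1, 0, 0, 1, 0, 1] -- two groups, one unit apart, usable as a scale variable
```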
How to correlate ordinal and nominal variables in SPSS?
Use Transform > Automatic Recode to make two numeric variables that carry the information of your two string variables. Run a frequency table of the new variables, and make sure the string attributes are correct. E.g. check for misspellings (commute vs communte), plural/singular confusion (cars vs car), and grammatical differences (drive vs driving). Tidy them up by aggregating them, or each of these variants will be treated as its own level. A Likert scale with 5 levels can be safely treated as an ordinal variable, and the other two variables generated from the string variables are probably nominal variables. To test the associations: Ordinal vs. ordinal, you may consider Spearman's correlation coefficient (Analyze > Correlate > Bivariate; you'd need to check the box "Spearman" in order to get the statistic). Nominal vs. nominal, probably a chi-square test (Analyze > Descriptive Statistics > Crosstabs; put the variables into row and column, then click Statistics and check Chi-square). Nominal vs. ordinal, you may consider Kruskal-Wallis (Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples; put the Likert variable into the Test Variable List and the nominal variable into the Grouping Variable). "Now, I want to correlate these variables between them in order to find meaningful pattern. How do I do this in SPSS?" Be careful with the intention of finding a meaningful pattern. If you just run the test and make up a reason for anything that appears to be sensible, you're just being toyed with by the statistics. Instead, I'd suggest you draft some questions and have some hypotheses on how they should correlate/associate before you even touch the data. If you are just trying to explore potential relationships, then treat it strictly as a hypothesis-generating activity, and statistically test the association using some other data. "Moreover I would like to test the values of some variables against the whole number of entries." Sorry, I don't understand what this means.
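The nominal-vs-nominal chi-square test that SPSS runs behind the Crosstabs dialog reduces to comparing observed cell counts against the counts expected from the row and column margins. A hand-rolled sketch with a hypothetical 2x2 crosstab (the statistic only; SPSS also supplies the p-value from the chi-square distribution):

```python
def chi_square_stat(table):
    # Pearson chi-square statistic for a contingency table (list of rows)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# hypothetical 2x2 crosstab of two nominal variables
table = [[20, 30],
         [30, 20]]
print(round(chi_square_stat(table), 2))  # prints 4.0
```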
How to correlate ordinal and nominal variables in SPSS?
A correlation of nominal (e.g. client yes or no) and ordinal (e.g. 5-point Likert scale on satisfaction) variables can be had using chi-square analysis. The 2 x 5 table (which a researcher might want to reduce to a 2 x 2 table by bucketing categories) lets you test whether a significant relationship exists (chi-square test statistic), while SPSS also supplies a measure of the strength of the relationship via the phi (or Cramér's V) coefficients. Note these are directionless, as nominal variables have no direction.
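To make this concrete, here is a sketch of a chi-square test plus Cramér's V on a hypothetical 2 x 5 table (all counts invented for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical 2 x 5 table: client (yes/no) x 5-point satisfaction score
table = np.array([[10, 15, 20, 30, 25],
                  [25, 20, 15, 10,  5]])

chi2, p, dof, expected = chi2_contingency(table)

# Cramér's V: strength of association, from 0 (none) to 1 (perfect)
n = table.sum()
k = min(table.shape) - 1
cramers_v = np.sqrt(chi2 / (n * k))
```

For a 2 x c table, Cramér's V reduces to the phi coefficient, which is why SPSS reports both.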
Patient distance metrics
You asked a difficult question, but I'm a little bit surprised that the various clues that were suggested to you received so little attention. I upvoted all of them because I think they are basically useful responses, though in their actual form they call for further bibliographic work. Disclaimer: I never had to deal with such a problem, but I regularly have to present statistical results that may differ from physicians' a priori beliefs, and I learn a lot from unraveling their lines of reasoning. Also, I have some background in teaching human decision/knowledge from the perspective of Artificial Intelligence and Cognitive Science, and I think what you asked is not so far from how experts actually decide that two objects are similar or not, based on their attributes and a common understanding of their relationships. From your question, I noticed two interesting assertions. The first one relates to how an expert assesses the similarity or difference between two sets of measurements: I don't particularly care if there is some relation between attribute X and Y. What I care about is if a doctor thinks there is a relation between X and Y. The second one, How can I predict what they think the similarity is? Do they look at certain attributes? looks like it is somewhat subsumed by the former, but it seems more closely related to finding the most salient attributes that allow one to draw a clear separation between the objects of interest. To the first question, I would answer: well, if there is no characteristic or objective relationship between any two subjects, what would be the rationale for making up a hypothetical one? Rather, I think the question should be: if I only have limited resources (knowledge, time, data) to take a decision, how do I optimize my choice?
To the second question, my answer is: although it seems to partly contradict your former assertion (if there is no relationship at all, it implies that the available attributes are not discriminative, or are useless), I think that most of the time it is a combination of attributes that makes sense, and not only how a given individual scores on a single attribute. Let me dwell on these two points. Human beings have a limited or bounded rationality, and can take a decision (often the right one) without examining all possible solutions. There is also a close connection with abductive reasoning. It is well known that there is some variability between individual judgments, and even between judgments from the same expert on two occasions. This is what we are interested in in reliability studies. But you want to know how these experts elaborate their judgments. There is a huge number of papers about that in cognitive psychology, especially on the fact that relative judgments are easier and more reliable than absolute ones. Doctors' decisions are interesting in this respect because they are able to take a "good" decision with a limited amount of information, but at the same time they benefit from an ever-growing internal knowledge base from which they can draw expected relationships (extrapolation). In other words, they have a built-in inference (assumed to be hypothetico-deductive) machinery and accumulate positive evidence or counterfactuals from their experience or practice. Reproducing this inferential ability and the use of declarative knowledge was the aim of several expert or production-rule systems in the 70s, the most famous one being MYCIN, and more generally of Artificial Intelligence as early as 1946 (can we reproduce on an artificial system the intelligent behavior observed in man?).
Automatic treatment of speech, problem solving, and visual shape recognition are still active projects nowadays, and they all have to do with identifying salient features and their relationships to make an appropriate decision (i.e., how far should two patterns be to be judged as the emanation of two distinct generating processes?). In sum, our doctors are able to draw an optimal inference from a limited amount of data, compensating for noise that arises simply as a byproduct of individual variability (at the level of the patients). Thus, there is a clear connection with statistics and probability theory, and the question is what conscious or subconscious methodology helps doctors form their judgments. Semantic networks (SNs), belief networks, and decision trees are all relevant to the question you asked. The paper you cited is about using an ontology as a basis for formal judgments, but it is no more than an extension of SNs, and many projects were initiated in this direction (I can think of the Gene Ontology for genomic studies, but many others exist in different domains). Now, look at the hierarchical classification of diagnostic categories roughly taken from Dunn 1989, p. 25, and then take a look at the ICD classification; I think it is not too far from this schematic classification. Mental disorders are organized into distinct categories, some of them being closer to one another. What renders them similar is the closeness of their expression (phenotype) in any patient, and the fact that they share some similarities in their somatic/psychological etiology. Assessing whether two doctors would make the same diagnosis is a typical example of an inter-rater agreement study, where two psychiatrists are asked to place each of several patients in mutually exclusive categories.
The hierarchical structure should be reflected in the disagreement between each doctor; that is, they may not agree on the finer distinctions between diagnostic classes (the leaves), but if they were to disagree between insomnia and schizophrenia, well, it would be a little bit disconcerting... How these two doctors decide which class a given patient belongs to is no more than a clustering problem: how likely are two individuals, given a set of observed values on different attributes, to be similar enough that I decide they share the same class membership? Now, some attributes are more influential than others, and this is exactly what is reflected in the weight attributed to a given attribute in Latent Class Analysis (which can be thought of as a probabilistic extension of clustering methods like k-means), or the variable importance in Random Forests. We need to put things into boxes, because at first sight it's simpler. The problem is that things often overlap to some extent, so we need to consider different levels of categorization. In fact, cluster analysis is at the heart of the actual DSM categories, and many papers actually turn around assigning one patient to a specific syndromic category, based on the profile of his responses to a battery of neuropsychological assessments. This merely looks like a subtyping approach; each time, we seek to refine a preliminary well-established diagnostic category, by adding exception rules or an additional relevant symptom or impairment. A related topic is decision trees, which are by far the statistical techniques best understood by physicians. Most of the time, they describe a nested series of boolean assertions (Do you have a sore throat? If yes, do you have a temperature? etc.; but look at an example of a public influenza diagnostic tree) according to which we can form a decision regarding patients' proximity (i.e. how similar patients are wrt.
attributes considered for building the tree -- the closer they are, the more likely they are to end up in the same leaf). Association rules and the C4.5 algorithm rely on much the same idea. On a related topic, there's the patient rule-induction method (PRIM). Now clearly, we must make a distinction between all those methods, which make efficient use of a large body of data and incorporate bagging or boosting to compensate for model fragility or overfitting issues, and doctors, who cannot process huge amounts of data in an automatic and algorithmic manner. But for a small to moderate number of descriptors, I think they perform quite well after all. The yes-or-no approach is not a panacea, though. In behavioral genetics and psychiatry, it is commonly argued that the classification approach is probably not the best way to go, and that common diseases (learning disorders, depression, personality disorders, etc.) reflect a continuum rather than classes of opposite valence. Nobody's perfect! In conclusion, I think doctors actually hold a kind of internalized inference engine that allows them to assign patients to distinctive classes characterized by a weighted combination of the available evidence; in other words, they are able to organize their knowledge in an efficient manner, and these internal representations and the relationships they share may be augmented through experience. Case-based reasoning probably comes into play at some point too. All of this may be subject to (a) revision with newly available data (we are not simply acting as definitive binary classifiers, and are able to incorporate new data into our decision making), and (b) subjective biases arising from past experience or wrong self-made association rules. However, doctors are prone to errors, as is every decision system...
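The clustering view above needs a distance over mixed patient attributes before any of these methods can run. One standard choice (not named in the text, but a common convention for mixed continuous/nominal data) is Gower's coefficient; here is a minimal sketch on two hypothetical patients, with invented attribute ranges:

```python
import numpy as np

def gower_distance(a, b, is_nominal, ranges):
    """Gower distance between two patients with mixed attributes.

    a, b       : 1-d arrays of attribute values
    is_nominal : boolean mask, True where the attribute is nominal
    ranges     : observed range of each continuous attribute (for scaling)
    """
    d = np.empty(len(a))
    for j, nominal in enumerate(is_nominal):
        if nominal:
            d[j] = 0.0 if a[j] == b[j] else 1.0   # simple matching
        else:
            d[j] = abs(a[j] - b[j]) / ranges[j]   # range-scaled difference
    return d.mean()

# hypothetical patients: [age, pulse, gender, diagnosis_code]
p1 = np.array([35.0, 72.0, 0, 2])
p2 = np.array([40.0, 80.0, 0, 3])
mask = np.array([False, False, True, True])
rng = np.array([60.0, 100.0, 1.0, 1.0])   # ranges of the continuous attributes

print(gower_distance(p1, p2, mask, rng))
```

Each attribute contributes a value in [0, 1], so the weighting scheme a doctor implicitly applies could be made explicit by replacing the plain mean with a weighted one.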
All statistical techniques reflecting these steps -- decision trees, bagging/boosting, cluster analysis, latent class analysis -- seem relevant to your questions, although they may be hard to instantiate in a single decision rule. Here are a couple of references that might be helpful as a first start on how doctors make their decisions:
A clinical decision support system for clinicians for determining appropriate radiologic imaging examination
Grzymala-Busse, JW. Selected Algorithms of Machine Learning from Examples. Fundamenta Informaticae 18 (1993), 193-207
Santiago Medina, L, Kuntz, KM, and Pomeroy, S. Children With Headache Suspected of Having a Brain Tumor: A Cost-Effectiveness Analysis of Diagnostic Strategies. Pediatrics 108 (2001), 255-263
Building Better Algorithms for the Diagnosis of Nontraumatic Headache
Jenkins, J, Shields, M, Patterson, C, and Kee, F. Decision making in asthma exacerbation: a clinical judgement analysis. Arch Dis Child 92 (2007), 672-677
Croskerry, P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med 9(11) (2002), 1184-1204
Cahan, A, Gilon, D, Manor, O, and Paltiel. Probabilistic reasoning and clinical decision-making: do doctors overestimate diagnostic probabilities? QJM 96(10) (2003), 763-769
Wegwarth, O, Gaissmaier, W, and Gigerenzer, G. Smart strategies for doctors and doctors-in-training: heuristics in medicine. Medical Education 43 (2009), 721-728
Patient distance metrics
I might be misunderstanding your goals here, but to me it sounds like a multi-dimensional scaling (MDS) problem. I've never used MDS myself, but my sense is that it should allow you to derive a global measure of similarity as well as dimensional measures of similarity. My memory is that it is able to handle both continuous items (e.g. pulse rate) and nominal items (e.g. Gender) which seems like it would be an important consideration for what you are trying to do.
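For a sense of what MDS does: the classical (Torgerson) variant recovers coordinates directly from a pairwise dissimilarity matrix via an eigendecomposition. A minimal numpy sketch, with made-up dissimilarities between four patients:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points in k dimensions from a
    symmetric matrix of pairwise dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # keep the top-k components
    L = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * L

# hypothetical pairwise dissimilarities between four patients
# (two obvious pairs, separated by a larger gap)
D = np.array([[0.0, 1.0, 4.0, 4.1],
              [1.0, 0.0, 4.2, 4.0],
              [4.0, 4.2, 0.0, 1.1],
              [4.1, 4.0, 1.1, 0.0]])
X = classical_mds(D, k=2)
```

The embedded coordinates X preserve the structure of D approximately, so close patients stay close and the global layout can be inspected visually.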
Patient distance metrics
The whole field of Cluster Analysis is relevant to your concept of multi-variable statistical distance. The linked book on the subject is very short and pretty good.
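As a pointer in code: a minimal agglomerative-clustering sketch with scipy (the data, linkage method, and cut-off below are arbitrary illustrative choices, not recommendations):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# hypothetical patient attribute vectors (already on comparable scales)
X = np.array([[1.0, 1.1],
              [0.9, 1.0],
              [5.0, 5.2],
              [5.1, 4.9]])

Z = linkage(X, method='average')                  # agglomerative clustering
labels = fcluster(Z, t=2, criterion='maxclust')   # cut the tree into 2 clusters
```

The linkage matrix Z encodes the full merge hierarchy, so the same fit can be cut at different levels to explore coarser or finer groupings.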
Patient distance metrics
The simple idea is to run PCA and base the distance on the first few components (though I don't like this technique because of the assumptions it makes). The complex idea is to use machine learning; the resulting distances will expose the classifier structure, so they will be about as good as the classification accuracy. The simplest approach here is just the random forest object proximity (Breiman's example), but you can also use a kernel justified by an SVM; see for instance Winters-Hilt & Merat 2007.
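The PCA idea in the first sentence can be sketched as follows (toy data, and the choice of k components is an arbitrary assumption here):

```python
import numpy as np

def pca_distance(X, i, j, k=2):
    """Euclidean distance between rows i and j of X after projecting the
    centered data onto the first k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T          # coordinates in the principal subspace
    return np.linalg.norm(scores[i] - scores[j])

# hypothetical patient records with three attributes
X = np.array([[1.0, 2.0, 0.1],
              [1.1, 1.9, 0.0],
              [5.0, 6.0, 0.2],
              [5.1, 6.1, 0.1]])
```

Dropping the trailing components discards low-variance directions, which is exactly the assumption the answer is wary of: variance and relevance need not coincide.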
Patient distance metrics
There is a subfield called Distance Metric Learning. One such method is Information Theoretic Metric Learning (ITML).
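ITML and related methods learn a Mahalanobis matrix M from similarity/dissimilarity constraints; the sketch below only shows the distance family being searched over, not the ITML optimization itself:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Distance in the family that metric-learning methods such as ITML
    optimize over: d_M(x, y) = sqrt((x - y)^T M (x - y)), with M a
    positive semi-definite matrix (learned from constraints in ITML)."""
    d = x - y
    return np.sqrt(d @ M @ d)

# with M = identity this reduces to the plain Euclidean distance
x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])
print(mahalanobis(x, y, np.eye(2)))   # 5.0
```

Learning M amounts to re-weighting and rotating the attribute space so that constrained pairs come out close or far as required.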
How would YOU compute IMDB movie rating?
First, define the theoretical construct of interest. There are many ways that a rating can be defined: What is the theoretical target population? The entire world, English speakers, people who visit IMDB, people who have seen the movie in question? What is the target time frame? Is it the rating of the movie now, or averaged over its release time? Is it a democratic rating or an expert rating? Some people are more knowledgeable about the worth of movies. Some people are better able to differentiate a good from a bad movie. Some people are more consistent in their ratings over time. Should ratings from people who are "better" at rating movies be given more worth? This relates to a philosophical question of aesthetics and the meaning of intersubjective goodness. Assuming you could get honest ratings from the entire target population over the entire time frame, weighted or not by expertise, what is the mapping between these ratings and the composite rating? This could be the arithmetic mean. Alternatively, there are many other ways of combining individual ratings. For example, you could use an interpolated median. Some alternatives would have minimal effect on the rank order of films, but would have a major influence on the absolute value of the rating. Is the number of people interested in the movie relevant to the rating? Second, use all the available information to estimate the theoretical construct. This is where the issues discussed by others would be important: the role of demographic adjustments would depend on your definition of the target population, and a weight for trust could be incorporated.
Many indicators could be used: the number of previous ratings (more ratings would suggest someone who is more engaged in the site); the degree to which previous ratings are consistent with other raters, or at least a subset of raters (greater consistency would suggest thoughtful and honest responding); the degree to which responses are distributed over an extended period of time (this would suggest that the person is less likely to be attempting to game the system); degree of engagement with the site in general, e.g., accessing the site, contributing to discussion boards (more engagement, more trust); and, as mentioned by @csgillespie, you could weight more recent votes more heavily if you wanted to estimate current attitudes to the film. You could also weight for expertise in ratings. This would be correlated with trust ratings, but there is a difference. Third, validate and monitor the estimation process using external trusted data sources.
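As one concrete example of a mapping from individual ratings to a composite, here is the Bayesian ("shrunk") average of the kind IMDB has publicly described for its Top 250 list; the prior weight m and the global mean C are tuning choices, and the numbers below are made up:

```python
def weighted_rating(R, v, m, C):
    """Bayesian / shrunk average: a film's mean rating R from v votes is
    pulled toward the global mean C; m controls how many votes are needed
    before the film's own mean dominates the composite."""
    return (v / (v + m)) * R + (m / (v + m)) * C

# hypothetical film: averaging 9.0 from only 50 votes, with a global
# mean of 6.8 and a prior weight of m = 500 votes
print(weighted_rating(9.0, 50, 500, 6.8))   # 7.0
```

This directly addresses the "number of people interested" question above: a film with few votes is shrunk heavily toward the global mean, and only accumulates an extreme composite rating as evidence (votes) accumulates.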
38,341
How would YOU compute IMDB movie rating?
Partial answer. See the help page entitled: The vote average for film "X" should be Y! Why are you displaying another rating? In short, IMDb uses: a complex voter weighting system to make sure that the final rating is representative of the general voting population and not subject to over influence from individuals who are not regular participants in the poll. Also note that: In order to avoid leaving the scheme open to abuse, [IMDb does not] disclose the exact methods used.
38,342
How would YOU compute IMDB movie rating?
What's wrong with my score? Why is it not ideal (because IMDB didn't use it)? If the score was only for your use, then nothing is wrong with your calculation. However, IMDB tries to make it difficult for people to obviously influence the final score. If you had to compute it, how would you have done it? What factors would you consider? Here are a few factors that you could consider (but will be unable to check):

- The final score may be weighted according to how many votes have been cast.
- Votes may be weighted by a time variable. For example, votes cast last year are less important than votes cast today.
- Votes cast by users who have voted for other movies have more weight, i.e. a reputation coefficient.
- Perhaps they incorporate data from other sites.
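A sketch of the time-weighting idea (the half-life decay and function name are my own illustration, not IMDb's disclosed method):

```python
def time_decayed_rating(ratings, ages_in_days, half_life_days=365.0):
    # Each vote's weight halves every `half_life_days`, so a vote cast
    # today counts twice as much as one cast a half-life ago.
    weights = [0.5 ** (age / half_life_days) for age in ages_in_days]
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)
```

With the default one-year half-life, `time_decayed_rating([8, 4], [0, 365])` gives (8·1 + 4·0.5)/1.5 ≈ 6.67, pulling the score toward the fresher vote.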
38,343
How can I determine accuracy of past probability calculations?
What you're looking for are called Scoring Rules, which are ways of evaluating probabilistic forecasts. They were invented in the 1950s by weather forecasters, and there has been a bit of work on them in the statistics community, but I don't know of any books on the topic. One thing you could do would be to bin the forecasts by probability range (e.g.: 0-5%, 5%-10%, etc.) and look at how many predicted events in that range occurred (if there are 40 events in the 0-5% range, and 20 occur, then you might have problems). If the events are independent, then you could compare these numbers to a binomial distribution.
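The binning check might be sketched like this (the function name and bin layout are my own; a fuller check would compare each bin's count of occurrences against a Binomial(n, p̄) distribution, where p̄ is the bin's mean forecast):

```python
def calibration_table(forecasts, outcomes, n_bins=20):
    """Bin forecasts by predicted probability and count how many came true.

    Bin i covers probabilities [i/n_bins, (i+1)/n_bins); with n_bins=20
    that gives the 0-5%, 5-10%, ... ranges mentioned above.  Returns
    {bin_index: (n_forecasts, n_occurred)}.
    """
    table = {}
    for p, occurred in zip(forecasts, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        n, k = table.get(i, (0, 0))
        table[i] = (n + 1, k + bool(occurred))
    return table
```

If the 0-5% bin holds 40 forecasts and 20 of them occurred, that observed 50% rate is wildly out of line with a count drawn from Binomial(40, ~0.025), flagging miscalibration.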
38,344
How can I determine accuracy of past probability calculations?
In their classic book on the Federalist papers, Mosteller and Wallace argue for a log penalty function: you penalize yourself $-\log(p)$ when you predict an event with probability $p$ and it occurs; the penalty for it not occurring equals $-\log(1-p)$. Thus, the penalty is high when whatever happens is unexpected according to your prediction. Their argument in favor of this function rests on a simple natural criterion: "the penalty function should encourage the prediction of the correct probabilities if they are known." Assuming the total penalty is summed over all predictions and there will be three or more of them, M&W claim that the log penalty function is the only one (up to affine transformation) for which the "expected penalty is minimized over all predictions" by the correct probabilities. Following this, then, a good test for you to use is to track your accumulated log penalties. If, after a long time (or by means of some independent oracle), you obtain accurate estimates of what the probabilities actually were, you can compare your penalty with the minimum possible one. The average of that difference measures your long-run predictive performance (the lower the better). This is an excellent way to compare two or more competing predictors, too.
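A small simulation illustrating the accumulated-penalty comparison (the two forecasters are made up for the example): an honest forecaster quoting the true probabilities should accumulate a lower total log penalty than a systematically biased one.

```python
import math
import random

def log_penalty(p, occurred):
    # Penalize -log(p) if the event occurred, -log(1 - p) if it did not.
    return -math.log(p) if occurred else -math.log(1 - p)

random.seed(0)
true_probs = [random.uniform(0.05, 0.95) for _ in range(5000)]
outcomes = [random.random() < q for q in true_probs]

# Honest forecaster quotes the true probability; the biased one
# systematically overstates it by 0.2 (capped at 0.99).
honest = sum(log_penalty(q, o) for q, o in zip(true_probs, outcomes))
biased = sum(log_penalty(min(q + 0.2, 0.99), o)
             for q, o in zip(true_probs, outcomes))
print(honest, biased)
```

By M&W's criterion, quoting the correct probabilities minimizes the expected total penalty, so over many events the honest total should come out below the biased one.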
38,345
Standard Error of Noise Variance in Least Squares
The other answer here shows you the variance of the estimator of the error variance under the assumption that the errors are normally distributed, which is the specified form in the Gaussian regression model. I would counsel against using this method, since it is often the case that the actual data we use in a regression model does not conform to this assumption. In particular, it is not unusual for the residuals in regression data to exhibit leptokurtosis (or sometimes platykurtosis), contrary to the model assumptions. Many aspects of regression analysis are robust to the normality assumption, but this aspect of the model is not robust --- it is a case where the asserted formula hinges on the Gaussian model assumption rather than being determined by actual analysis of the data used in the model. In general, this is to be avoided if we want to use robust methods. Suppose we are willing to generalise the model slightly by no longer assuming a normal distribution for the error term (we will still assume that the errors are IID with zero mean and fixed variance). In the generalised case we may consider error variables with kurtosis $\kappa$ (but still assuming a simple linear regression), in which case the variance of $\hat{\sigma}^2$ is actually given by: $$\mathbb{V}(\hat{\sigma}^2) = \bigg( \kappa - \frac{n-p-4}{n-p-2} \bigg) \frac{\sigma^4}{n-p-1}.$$ In the special case of a mesokurtic error distribution (e.g., the normal distribution) we have $\kappa = 3$ and so the variance formula reduces to the more familiar form: $$\mathbb{V}(\hat{\sigma}^2) = \frac{2 \sigma^4}{n-p-1}.$$ Now, it is possible to estimate the kurtosis of the error distribution from the residuals in the regression model, so in principle it is possible to estimate the variance of the estimator for the error variance in a way that does not hinge on the assumption of a mesokurtic error distribution. 
For example, if you have an estimator $\hat{\kappa}$ for the kurtosis of the error distribution (e.g., from a kurtosis estimator using the residuals) then you could estimate the standard error of the estimator for the error variance as: $$\hat{\text{se}}_n = \sqrt{\hat{\mathbb{V}}(\hat{\sigma}^2)} = \sqrt{ \hat{\kappa} - \frac{n-p-4}{n-p-2}} \times \frac{\hat{\sigma}^2}{\sqrt{n-p-1}}.$$ Assuming you use a consistent estimator $\hat{\kappa}$, this latter estimator will be a consistent estimator of the true standard deviation of the estimator of the error variance. For more details on the moments of these types of sampling quantities you might find it useful to consult O'Neill (2014) (that paper deals with standard sampling quantities outside of regression, but its results can easily be adapted to the regression setting).
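As a concrete sketch of this recipe (the helper below is hypothetical; it estimates $\kappa$ from the raw moments of the residuals, assuming they are approximately mean-zero, as OLS residuals with an intercept are):

```python
import math

def error_variance_se(residuals, p):
    """Kurtosis-adjusted standard error of the error-variance estimator,
    per the formula above, with n observations and p predictors
    (so n - p - 1 residual degrees of freedom)."""
    n = len(residuals)
    dof = n - p - 1
    sigma2_hat = sum(r * r for r in residuals) / dof  # unbiased variance estimate
    m2 = sum(r * r for r in residuals) / n            # second moment
    m4 = sum(r ** 4 for r in residuals) / n           # fourth moment
    kappa_hat = m4 / (m2 * m2)                        # raw (non-excess) kurtosis
    return (math.sqrt(kappa_hat - (n - p - 4) / (n - p - 2))
            * sigma2_hat / math.sqrt(dof))
```

For mesokurtic residuals ($\hat{\kappa} \approx 3$) this collapses to the familiar $\sqrt{2}\,\hat{\sigma}^2/\sqrt{n-p-1}$.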
38,346
Standard Error of Noise Variance in Least Squares
$\DeclareMathOperator{\Var}{Var}$ Suppose the design matrix is $N \times (p + 1)$. I am assuming you want to calculate the variance of $\hat{\sigma}^2 = \frac{1}{N - p - 1}\epsilon'(I - H)\epsilon$, which is an unbiased estimator of $\sigma^2$, where $I$ is the order $N$ identity matrix and $H = X(X'X)^{-1}X'$. In this thread, it has been shown that \begin{align} (N - p - 1)\hat{\sigma}^2/\sigma^2 \sim \chi_{N - p - 1}^2, \end{align} from which, together with the fact that the variance of a $\chi_k^2$ r.v. is $2k$, it follows that \begin{align} \Var(\hat{\sigma}^2) = \frac{\sigma^4}{(N - p - 1)^2} \times 2(N - p - 1) = \frac{2\sigma^4}{N - p - 1}. \end{align} Estimating the $\sigma^4$ in the numerator by $\hat{\sigma}^4$, it follows that the standard error of $\hat{\sigma}^2$ is $\frac{\sqrt{2}\hat{\sigma}^2}{\sqrt{N - p - 1}}$.
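A quick Monte Carlo sanity check of the $\Var(\hat{\sigma}^2) = 2\sigma^4/(N - p - 1)$ result, using a hand-rolled simple regression (the simulation setup is my own illustration):

```python
import random

random.seed(0)
N, p, sigma = 30, 1, 2.0                # one predictor plus an intercept
xs = [i / N for i in range(N)]

def sigma2_hat():
    # Simulate y = 1 + 2x + N(0, sigma^2) noise, fit OLS by hand, and
    # return the unbiased error-variance estimate RSS / (N - p - 1).
    ys = [1 + 2 * x + random.gauss(0, sigma) for x in xs]
    xbar, ybar = sum(xs) / N, sum(ys) / N
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    rss = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))
    return rss / (N - p - 1)

draws = [sigma2_hat() for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(var, 2 * sigma ** 4 / (N - p - 1))  # empirical vs theoretical variance
```

The two printed numbers should agree closely; here the theoretical value is $2 \cdot 16 / 28 = 32/28 \approx 1.14$.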
38,347
How do you determine if there is a significant relationship between two variables with several factors affecting it, using R?
I presume that you want to know whether the linear relation between $x$ and $y$ is different depending on those additional variables, which I also presume to not be confounders. Thus, you don't want to control for them but you rather want to learn their interaction with the treatment $x$. If that is all true, use as formula for 'lm': $$ y \sim x \;+\; \mbox{stimulus}:x \;+\; \mbox{colony}:x \;+\; \mbox{lighting}:x $$ and the p-values of the coefficients of the interaction terms, given by the model fitted by lm, will then tell you whether those interactions are significant. To make it more complicated, you could also think about higher-order interactions like $x$:stimulus:colony. Edit: And I agree with @dipetkov's point below that, at least in general in those situations, we should also include an offset for the interactions. This was an omission on my side.
38,348
How do you determine if there is a significant relationship between two variables with several factors affecting it, using R?
I disagree with @frank's advice to include interactions (with $x$) but no main effects for the stimulus, lighting and colony variables. But see How do you deal with "nested" variables in a regression model? for an important exception to this rule. Moreover, a scatterplot of the data reveals that the two colonies are observed under different conditions. The difference is most pronounced when stimulus = "Cold" as there is complete separation in the $x$ values. (This may indicate that $x$ is not really a "treatment" assigned randomly to units as @frank interprets it.) This pattern suggests to interact colony with all the other variables, which leads — visually at least — to a better model fit. Update: The OP has provided additional context. The experimental design is still unclear but it seems the units were randomized to one of four conditions (stimulus×lighting), given time to train under the assigned condition (x), and then tested "officially" (y). The numerical variables x and y are number of successfully completed tasks in a fixed amount of time (everyone got the same amount of time). The number of attempted but unsuccessful tasks is ignored though it may be important: what if stimulus and lighting affect the number of trials but not the success rate of a trial? Also unexplained is the concept of a colony. The number of successes x during training under Cold stimulus is markedly different for colonies 1 and 2. On its own, the data cannot explain why this happened and therefore the data cannot point out which is the most scientifically justified model either. Instead the OP should explain/examine the difference between the two colonies. If these two colonies were sampled from a larger population, it would help to do followup experiments to investigate the variability between colonies under "Cold" stimulus. If these two colonies are the only ones of interest, it would help to sample more units from each colony to study the variability within each colony. 
To highlight the importance of colony and following comments by @SextusEmpiricus, let's compare two model fits: the full model y ~ x * stimulus * lighting * colony (figure above) and the restricted model y ~ x * stimulus * lighting (figure below). The full model fits better statistically (in terms of an F-test). As it fits a regression line for each stimulus×lighting×colony combination, the full model interpolates "unusual" points well. I've highlighted one such point in each panel without testing formally that these points are outliers or high-leverage. The restricted model fits a line for the four stimulus×lighting combinations (four panels) and — within each panel — it uses the same line for the two colonies. Qualitatively, the fit is not bad and I can come up with a nice story that training is effective (the regression lines have positive slopes) under all conditions except for "dark and cold". Whether this story is meaningful depends on what the colonies actually are. Here is R code to reproduce the figures and play with different models for this data.

```r
library("broom")
library("tidyverse")

# Cast `colony` and `ID` as categorical (rather than numeric) variables.
df <- as_tibble(df) %>%
  mutate(
    colony = as.character(colony)
  )

outliers <- c(1, 6, 15, 40)

extract_model_formula <- function(model) {
  Reduce(paste, deparse(model$call$formula))
}

plot_model_fit <- function(model, title = "") {
  model %>%
    augment(
      newdata = df
    ) %>%
    ggplot(
      aes(x, .fitted)
    ) +
    geom_line(
      aes(color = colony)
    ) +
    geom_text(
      aes(x, y, color = colony, label = colony),
      data = df,
      inherit.aes = FALSE
    ) +
    geom_label(
      aes(x, y, color = colony, label = colony),
      data = df %>% filter(ID %in% outliers),
      inherit.aes = FALSE
    ) +
    facet_grid(
      stimulus ~ lighting
    ) +
    theme(
      plot.title = element_text(size = 12),
      legend.position = "none"
    ) +
    ggtitle(
      title,
      extract_model_formula(model)
    )
}

m0 <- lm(
  y ~ x + x:stimulus + x:lighting + x:colony,
  # y ~ x + x:(stimulus + lighting + colony),
  data = df
)
plot_model_fit(
  m0,
  "Interactions without main effects is often not a great choice."
)

m1 <- lm(
  y ~ x * stimulus * lighting * colony,
  data = df
)
plot_model_fit(
  m1,
  "For each variable interacted with x, include its main effect as well."
)

m2 <- lm(
  y ~ x * stimulus * lighting,
  data = df
)
plot_model_fit(
  m2,
  "What is a colony and should it be included in the model? This is a domain knowledge question."
)

anova(m1, m2)
```
How do you determine if there is a significant relationship between two variables with several facto
I disagree with @frank's advice to include interactions (with $x$) but no main effects for the stimulus, lighting and colony variables. But see How do you deal with "nested" variables in a regression
How do you determine if there is a significant relationship between two variables with several factors affecting it, using R? I disagree with @frank's advice to include interactions (with $x$) but no main effects for the stimulus, lighting and colony variables. But see How do you deal with "nested" variables in a regression model? for an important exception to this rule. Moreover, a scatterplot of the data reveals that the two colonies are observed under different conditions. The difference is most pronounced when stimulus = "Cold" as there is complete separation in the $x$ values. (This may indicate that $x$ is not really a "treatment" assigned randomly to units as @frank interprets it.) This pattern suggests to interact colony with all the other variables, which leads — visually at least — to a better model fit. Update: The OP has provided additional context. The experimental design is still unclear but it seems the units were randomized to one of four conditions (stimulus×lighting), given time to train under the assigned condition (x), and then tested "officially" (y). The numerical variables x and y are number of successfully completed tasks in a fixed amount of time (everyone got the same amount of time). The number of attempted but unsuccessful tasks is ignored though it may be important: what if stimulus and lighting affect the number of trials but not the success rate of a trial? Also unexplained is the concept of a colony. The number of successes x during training under Cold stimulus is markedly different for colonies 1 and 2. On its own, the data cannot explain why this happened and therefore the data cannot point out which is the most scientifically justified model either. Instead the OP should explain/examine the difference between the two colonies. If these two colonies were sampled from a larger population, it would help to do followup experiments to investigate the variability between colonies under "Cold" stimulus. 
If these two colonies are the only ones of interest, it would help to sample more units from each colony to study the variability within each colony. To highlight the importance of colony and following comments by @SextusEmpiricus, let's compare two model fits: the full model y ~ x * stimulus * lighting * colony (figure above) and the restricted model y ~ x * stimulus * lighting (figure below). The full model fits better statistically (in terms of an F-test). As it fits a regression line for each stimulus×lighting×colony combination, the full model interpolates well "unusual" points. I've highlighted one such point in each panel without testing formally that these points are outliers or high-leverage. The restricted model fits a line for the four stimulus×lighting combinations (four panels) and — within each panel — it uses the same line for the two colonies. Qualitatively, the fit is not bad and I can come up with a nice story that training is effective (the regression lines have positive slopes) under all conditions except for "dark and cold". Whether this story is meaningful depends on what the colonies actually are. Here is R code to reproduce the figures and play with different models for this data. library("broom") library("tidyverse") # Cast `colony` and `ID` as categorical (rather than numeric) variables. 
df <- as_tibble(df) %>% mutate( colony = as.character(colony) ) outliers <- c(1, 6, 15, 40) extract_model_formula <- function(model) { Reduce(paste, deparse(model$call$formula)) } plot_model_fit <- function(model, title = "") { model %>% augment( newdata = df ) %>% ggplot( aes(x, .fitted) ) + geom_line( aes(color = colony) ) + geom_text( aes(x, y, color = colony, label = colony ), data = df, inherit.aes = FALSE ) + geom_label( aes(x, y, color = colony, label = colony ), data = df %>% filter(ID %in% outliers), inherit.aes = FALSE ) + facet_grid( stimulus ~ lighting ) + theme( plot.title = element_text( size = 12 ), legend.position = "none" ) + ggtitle( title, extract_model_formula(model) ) } m0 <- lm( y ~ x + x:stimulus + x:lighting + x:colony, # y ~ x + x:(stimulus + lighting + colony), data = df ) plot_model_fit( m0, "Interactions without main effects is often not a great choice." ) m1 <- lm( y ~ x * stimulus * lighting * colony, data = df ) plot_model_fit( m1, "For each variable interacted with x, include its main effect as well." ) m2 <- lm( y ~ x * stimulus * lighting, data = df ) plot_model_fit( m2, "What is a colony and should it be included in the model? This is a domain knowledge question." ) anova(m1, m2) ```
38,349
How do you determine if there is a significant relationship between two variables with several factors affecting it, using R?
We assume that the question is how to determine whether stimulus is statistically significant in the presence of the other variables. First let us try a mixed model with colony as a random effect. Unfortunately, using the data in the question, this results in a singular model, which can also be seen from the zero random effects. This is suggestive of overfitting.

```
library(lme4)

fm1 <- lmer(y ~ x + stimulus + lighting + (1|colony), df)
## boundary (singular) fit: see help('isSingular')

ranef(fm1)
## $colony
##   (Intercept)
## 1           0
## 2           0
```

Thus we try a fixed effects model. We can try interactions among all variables:

```
fm2 <- lm(y ~ x * stimulus * lighting * colony, df)
summary(fm2)
```

giving:

```
Call:
lm(formula = y ~ x * stimulus * lighting * colony, data = df)

Residuals:
    Min      1Q  Median      3Q     Max 
-17.742  -3.199   0.721   4.297  14.172 

Coefficients:
                                     Estimate Std. Error t value Pr(>|t|)  
(Intercept)                           73.1768   101.9931   0.717   0.4800  
x                                     -1.0131     1.2157  -0.833   0.4128  
stimulusHeat                        -172.2299   111.4630  -1.545   0.1354  
lightinglight                        -45.0352   117.2821  -0.384   0.7044  
colony                               -46.2449    53.7372  -0.861   0.3980  
x:stimulusHeat                         3.0552     1.3429   2.275   0.0321 *
x:lightinglight                        1.1976     1.3931   0.860   0.3985  
stimulusHeat:lightinglight           132.2606   131.0734   1.009   0.3230  
x:colony                               1.1097     0.6769   1.639   0.1142  
stimulusHeat:colony                  117.2380    61.3083   1.912   0.0678 .
lightinglight:colony                  46.8192    64.6041   0.725   0.4756  
x:stimulusHeat:lightinglight          -2.1497     1.5701  -1.369   0.1836  
x:stimulusHeat:colony                 -2.1382     0.7744  -2.761   0.0109 *
x:lightinglight:colony                -1.1066     0.8541  -1.296   0.2074  
stimulusHeat:lightinglight:colony    -87.6013    74.7980  -1.171   0.2530  
x:stimulusHeat:lightinglight:colony    1.5915     0.9806   1.623   0.1176  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 8.473 on 24 degrees of freedom
Multiple R-squared:  0.767,  Adjusted R-squared:  0.6214 
F-statistic: 5.268 on 15 and 24 DF,  p-value: 0.0001652
```

Looking at which coefficients are significant (marked to the right of the p values), it seems that we can simplify the model by omitting lighting. Comparing that to the same model without stimulus, we have a highly significant difference between the models with and without stimulus (p = 1.839e-05).

```
fm3 <- lm(y ~ x * stimulus * colony, df)
fm4 <- lm(y ~ x * colony, df)
anova(fm4, fm3)
## Analysis of Variance Table
##
## Model 1: y ~ x * colony
## Model 2: y ~ x * stimulus * colony
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1     36 6053.3                                  
## 2     32 2651.9  4    3401.4 10.261 1.839e-05 ***
## ---
## Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```

Here is a plot of x (horizontal axis) vs y (vertical axis) with separate panels for the colonies. The colors represent the stimulus.

```
library(lattice)
xyplot(y ~ x | factor(colony), df, groups = stimulus,
       type = c("p", "r"), auto.key = TRUE)
```
38,350
How can I fit distribution for data which "almost fits"?
The overwhelming majority of datasets do not perfectly fit any parameterised class of distributions used in probability theory.$^\dagger$ Those classes of distributions represent an infinitesimally small sliver of the set of all distributions and so it is rare that a dataset comes from one of these classes. As a result, you often get situations like this one where a dataset is close to a known parameterised distribution, but does not quite fit it. As a secondary matter, it is well known that if you do a classical hypothesis test, even a tiny deviation from the null hypothesis (of a perfect distribution fit) will manifest in the p-value going to zero as the sample size grows to infinity. Consequently, when you test a large dataset against a parameterised class of distributions, you will almost always get a rejection of the null hypothesis if you have sufficient data. Now, most likely your dataset follows an almost-beta distribution that is not equivalent to any standard parameterised class used in probability theory. You can get a reasonable estimate of the true distribution using a KDE (e.g., with a beta kernel) or some other non-parametric estimator. Alternatively, since your observations involve multiple variables, you might get a better fit to a parameterised class of distributions for the conditional distribution arising from regression analysis (looking at the distribution of one variable conditional on one or more others). Answers to your specific questions are below. Is there anything I could or I should do about my data before estimation? Is window smoothing a valid alternative? Don't change your data to try to get a desired conformity with a hypothesised class of distributions --- instead, change your inference to conform to the evidence in your data. Should I give up trying to fit it into beta? Is there a way to tell something like "this is beta, but with error margins"? What could I say in the paper to support my decision? What do you mean by "give up"? 
You tested this hypothesis and it was rejected with strong evidence --- end of test. If you don't "give up" the null hypothesis after testing and strongly rejecting it, what was the test for? As to what you can say here, you can say that this distribution is close to a beta distribution, but with some deviation in the upper tails. If you want to quantify this you could use a measure of distance between distributions, e.g., between your empirical distribution and the closest "fitted" beta distribution (see e.g., Chung et al 1989). If you do this then I think you will find that there is a fairly small distance between your distribution and the class of beta distributions. If my answer isn't in any of the alternatives above, what should I be reading right now to advance? I recommend you read about non-parametric inference, and bear in mind the general rule that parametric models for continuous random variables are approximations to true datasets at best. $^\dagger$ There is an exception to this when dealing with discrete data with finite support. For data with finite support, the categorical distribution actually does cover every possible distribution. However, for continuous variables, parameterised classes are much smaller relative to the space of all possible distributions.
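To make the "distance between distributions" idea concrete, here is a small sketch in Python (used purely for illustration, since the question is language-agnostic; the sample and the exponential reference distribution are made up, standing in for the almost-beta situation). It computes the Kolmogorov–Smirnov distance between a sample's empirical CDF and a hypothesized CDF; for a near-but-not-exact fit this distance is small but nonzero, even though a formal test on a large sample would still reject.

```python
import math
import random

def ks_distance(sample, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF of `sample`
    and a hypothesized continuous CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # The empirical CDF jumps from i/n to (i+1)/n at each order statistic.
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d

random.seed(1)
# A slightly "contaminated" Exp(1) sample: close to, but not exactly,
# the hypothesized distribution.
sample = [random.expovariate(1.0) * (1.1 if random.random() < 0.1 else 1.0)
          for _ in range(5000)]
d = ks_distance(sample, lambda x: 1.0 - math.exp(-x))
print(round(d, 3))  # small but nonzero
```

The same function applied to your data with a fitted beta CDF would quantify the "this is beta, but with error margins" statement.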
38,351
How can I fit distribution for data which "almost fits"?
The answer by @Ben, which you've already accepted, is great. I'll just add that a Beta distribution has bounded support (it assumes an upper maximum), whereas you're dealing with distances, which don't naturally lend themselves to such an assumption. Moreover, the QQ plot indicates a potential uncertainty in the tail of the fitted distribution. Therefore, I recommend also trying to fit the following to your datasets: a Gamma distribution (perhaps constrained with shape parameter $k = \alpha = 2$), a Weibull distribution (perhaps constrained with shape parameter $k = 2$, a case which is equivalent to a Rayleigh distribution). https://en.wikipedia.org/wiki/Gamma_distribution https://en.wikipedia.org/wiki/Weibull_distribution https://en.wikipedia.org/wiki/Rayleigh_distribution (equivalent to Weibull with shape parameter $k = 2$)
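The Rayleigh special case is particularly easy to try, because its maximum-likelihood scale estimate has the closed form $\hat\sigma = \sqrt{\sum_i x_i^2 / (2n)}$. A Python sketch for illustration (the data here are simulated from a Rayleigh with a made-up $\sigma = 2.5$, not the questioner's distances):

```python
import math
import random

def fit_rayleigh(xs):
    """Maximum-likelihood estimate of the Rayleigh scale parameter:
    sigma_hat = sqrt(sum(x_i^2) / (2n))."""
    return math.sqrt(sum(x * x for x in xs) / (2 * len(xs)))

random.seed(0)
sigma = 2.5
# Inverse-CDF sampling: if U ~ Uniform(0, 1), then
# sigma * sqrt(-2 * ln(1 - U)) is Rayleigh(sigma) distributed.
xs = [sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))
      for _ in range(20000)]
print(round(fit_rayleigh(xs), 2))  # close to the true sigma of 2.5
```

Comparing the fitted Rayleigh (or Gamma/Weibull) against the data with a QQ plot, as in the question, would show whether the tail behaves better than under the beta fit.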
38,352
Is a boxplot useful, when it doesn't even look like a box?
Visualisations need to be chosen based on the properties of the data and the message you are trying to convey. Clearly, boxplots do not communicate the distribution of this data well. Given that you have just 40 entries in each group and most values are identical, you might consider using a table, a dotplot, or translucent histograms with appropriately chosen bin width.
38,353
Is a boxplot useful, when it doesn't even look like a box?
The boxplot doesn't visualize your data effectively. (For a discussion of the advantages and disadvantages of boxplots see How should we do boxplots with small samples? .) Other types of graphs (except a barplot as suggested by @Bernhard) would have trouble with your data because there is almost no variability within loading strategy; this leads to overplotting. But since there are so few distinct values I would consider making a table instead of a graph. For example, a table with columns Loading Strategy, Target (%) and #Vehicles. Or %Vehicles: since there are 40 vehicles for each strategy, we can meaningfully compare the counts divided by 40. Rows within a table can be effectively grouped together with color highlights as well.
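The suggested table is straightforward to build from the raw values. A Python sketch for illustration (the strategy names and counts are made up, standing in for the question's 40 vehicles per loading strategy):

```python
from collections import Counter

# Hypothetical data: most vehicles hit 100% of target, a few fall short.
results = {
    "strategy A": [100] * 35 + [99] * 3 + [98] * 2,
    "strategy B": [100] * 38 + [97] * 2,
}

# Tabulate distinct target values and their counts per strategy.
print(f"{'Loading Strategy':<18}{'Target (%)':>11}{'#Vehicles':>11}")
for strategy, targets in results.items():
    for target, count in sorted(Counter(targets).items(), reverse=True):
        print(f"{strategy:<18}{target:>11}{count:>11}")
```

Because each strategy has exactly 40 vehicles, dividing the counts by 40 turns the last column into the %Vehicles variant mentioned above.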
38,354
Data Imbalance: what would be an ideal number(ratio) of newly added class's data?
All else being equal, more data is always better. So #3 is clearly the best option. Imbalanced data is not really a problem, and sacrificing more data for balance is throwing away free information (as Stephan Kolassa notes, the cost of data collection could be a concern - I am ignoring that for now). See the following questions for more detailed discussion about this common misconception: Are unbalanced datasets problematic, and (how) does oversampling (purport to) help? When is unbalanced data really a problem in Machine Learning? Does an unbalanced sample matter when doing logistic regression? What is the root cause of the class imbalance problem? This would be a more difficult choice if instead of [1000, 1000, 1000], option #3 was something like [10, 1000, 1000]. In that case, it is arguable whether you would learn enough about that one class from 10 samples to make the additional benefit of 1000 samples from the other 2 classes worth it - so [200, 200, 200] or [100, 1000, 1000] might be better options.
38,355
Data Imbalance: what would be an ideal number(ratio) of newly added class's data?
More data per group is always better than less data, regardless of the sample sizes of the other groups. The imbalance "problem" means that if you can collect only 1000 data points, it's usually better to have 500:500 than 100:900; but 100:900 will still be better than 100:100, simply because there is more information in the data. The balance itself doesn't matter. So the marginal value of a data point is lower if you already have many data points from that class, but its value is never negative. Some models and measures do have problems with unbalanced data, but that's just a modeling and competence issue; others are completely fine. There are many threads about this already on this site, but you are still better off collecting more data than less.
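The claim that an extra data point's value is never negative, and does not depend on the other classes, can be checked with a quick simulation (a Python sketch with made-up Gaussian classes): the error in estimating a class's mean depends only on that class's own sample size, so more samples always help.

```python
import random
import statistics

random.seed(42)

def mean_abs_error(n, mu=0.3, reps=2000):
    """Average |estimate - mu| when a class mean is estimated from n draws.
    Only this class's own n appears; the other classes' sizes are irrelevant."""
    errs = []
    for _ in range(reps):
        xs = [random.gauss(mu, 1.0) for _ in range(n)]
        errs.append(abs(statistics.fmean(xs) - mu))
    return statistics.fmean(errs)

# Error shrinks monotonically as this class gets more data.
for n in (100, 200, 1000):
    print(n, round(mean_abs_error(n), 3))
```

So going from 100:900 to 100:100 by discarding data improves nothing about the minority class and strictly worsens the majority-class estimates.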
38,356
Data Imbalance: what would be an ideal number(ratio) of newly added class's data?
You have a trade-off between wanting the data to be balanced and preferring more data. As you said, it always depends on the data. Moreover, some metrics are more robust towards imbalance than others. Without any further information, I would choose option 1 or 2. You could also try to augment your dataset by oversampling, or use metrics that are more robust towards imbalance.
38,357
Data Imbalance: what would be an ideal number(ratio) of newly added class's data?
Presumably there is some analysis or model training you intend to do with this data. Depending on what that is, there may be an a priori way to know how many samples you need (power analysis). Even if there isn't a closed form solution, you could use simulation to get some idea of how much data is needed. You could generate some plausible looking classes and see how your method performs with 50, 100, 200, etc samples.
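The simulation idea can be sketched in a few lines of Python (an illustrative power calculation: a two-sample z-test under assumed unit-variance normal classes with a hypothetical effect size of 0.5 SD, not the questioner's actual data or analysis):

```python
import random
import statistics

def simulated_power(n_per_group, effect=0.5, reps=1000, z_crit=1.96):
    """Monte Carlo power of a two-sample z-test (normal approximation)
    for detecting a mean shift of `effect` SDs at roughly alpha = 0.05."""
    hits = 0
    for _ in range(reps):
        a = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [random.gauss(effect, 1.0) for _ in range(n_per_group)]
        se = (statistics.variance(a) / n_per_group
              + statistics.variance(b) / n_per_group) ** 0.5
        z = abs(statistics.fmean(b) - statistics.fmean(a)) / se
        hits += z > z_crit
    return hits / reps

random.seed(7)
for n in (50, 100, 200):
    print(n, simulated_power(n))  # power grows with the sample size
```

Replacing the z-test with whatever model you actually intend to fit turns this into an a-priori estimate of how many samples the new class needs.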
38,358
When we roll a fair die, why is 6 the expected number of rolls after which we get our first 3?
There are two forms of the geometric distribution. The one we use here counts the number $X$ of Bernoulli trials until the first Success occurs, where the Success probability is $p,$ so that the PDF is $$f_X(x) = p(1-p)^{x-1},$$ for $x = 1, 2, 3, \dots$ and $$E(X) = \mu_X = 1/p.$$ [The alternative version counts the number of Failures before the first Success.] It is not trivial to show that $$E(X) = \sum_{x=1}^\infty xf_X(x) =\sum_{x=1}^\infty xp(1-p)^{x-1} = 1/p.$$ [The Wikipedia article linked above shows a formal derivation for the alternative version. A slight modification works for our version.] However, the terms of the sum decrease markedly as $x$ increases, so one does not need to sum a huge number of terms to get a good approximation. For example, let $p = 1/6.$

```
p = 1/6; x = 1:100; f = p*(1-p)^(x-1)
mu = sum(x*f)
mu
[1] 6
```

In your problem about rolling a fair die, the probability of getting a 3 on any one roll is $p = 1/6,$ so the expected number of rolls of the die until a 3 occurs is $6.$

Notes: (1) One way to show that $\mu_X = 1/p$ is to use moment generating functions. The proof in the Wikipedia article uses an analogous differentiation method. (2) The geometric distribution has the memoryless property: $P(X > m+n \mid X > m) = P(X > n),$ for positive integers $m, n.$ So if the first 5 rolls produce no 3, the average number of additional rolls until we get a 3 is also $6.$ (3) An approximate simulation of a million waiting times for the first 3 shows that the average wait is about $6.$ [Extremely rare waits longer than 100 trials are ignored.]

```
set.seed(2021)
w = replicate(10^6, match(3, sample(1:6, 100, rep=T)))
mean(w)
[1] 6.003519
```
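Both the truncated-sum approximation of $E(X) = 1/p$ and the memoryless property $P(X > m+n \mid X > m) = P(X > n)$ can also be checked numerically. A short Python sketch for illustration (the 500-term cutoff is an arbitrary choice; it is harmless because the geometric tail beyond it is astronomically small):

```python
import math

p = 1 / 6

# E[X] = sum over x of x * p * (1-p)^(x-1); a few hundred terms
# already recover 1/p = 6 to machine precision.
mu = sum(x * p * (1 - p) ** (x - 1) for x in range(1, 500))
print(round(mu, 6))  # 6.0

# Memorylessness: P(X > k) = (1-p)^k for "trials until first success",
# so P(X > m+n | X > m) = (1-p)^(m+n) / (1-p)^m = (1-p)^n = P(X > n).
tail = lambda k: (1 - p) ** k
m, n = 5, 4
print(math.isclose(tail(m + n) / tail(m), tail(n)))  # True
```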
38,359
When we roll a fair die, why is 6 the expected number of rolls after which we get our first 3?
This answer consists of two parts. The first part develops a basic insight about long sequences of repetitions of an experiment. This insight is conveyed by a simple diagram of the experimental results. The second part quickly answers the question by applying this insight. The insight Consider any probabilistic event, such as a 3 appearing in one roll of a die ("experiment $A$"). Its "probability" is intended to reflect the proportion of times this event occurs in very long sequences of the experiment. One way to compute this proportion is to run a slightly different experiment, "experiment $B.$" The second version repeats experiment $A$ up until the moment a prescribed outcome, such as 3, appears. Let's refer to this outcome as $\omega.$ Let $N$ count how many iterations of experiment $A$ are needed until $\omega$ occurs. When we repeat experiment $B$ we observe a sequence of realizations of such random variables: $N_1$ for the first occurrence, $N_2$ for the second, and so on. This figure shows a schematic timeline in which the repetitions of experiment $A$ are plotted left to right. Each occurrence of $\omega$ is noted. The $N_i,$ by definition, count how many trials of experiment $A$ were needed to produce each successive $\omega.$ Evidently, $\omega$ occurs $n$ times out of $N_1+\cdots + N_n$ repetitions of experiment $A.$ Because experiment $A$ (rolling a die) is assumed to behave the same way each time and to have independent outcomes, the $N_i$ have identical distributions and are independent, too. 
Let's use them to estimate how often $\omega$ appears in a long sequence of runs of experiment $A.$ Pick a large number $n$ of iterations of experiment $B,$ with outcomes $N_1, N_2,\ldots, N_n.$ This implies that $\omega$ occurred in exactly $n$ out of $N_1+N_2+\cdots+N_n$ iterations of experiment $A.$ The proportion estimates the chance of $\omega$ in experiment $A:$ $$\Pr(\omega) \approx \frac{n}{N_1+N_2+\cdots + N_n} = \frac{1}{\frac{1}{n}\sum_{i=1}^n N_i}.$$ (The second equality arises from the algebra of fractions: numerator and denominator were both divided by $n.$) In the denominator appears an approximation to the expected value of experiment $B.$ As a matter of notation, let $N(\omega)$ refer to the generic outcome of experiment $B,$ so that we may express this fact as $$E[N(\omega)] \approx \frac{1}{n}\sum_{i=1}^n N_i.$$ Weak laws of large numbers guarantee these approximations become arbitrarily good as $n$ increases. Putting the results together, we see that $$\Pr(\omega) = \frac{1}{E[N(\omega)]}.$$ The application A die is fair when all its outcomes are equally likely. With a six-sided die then, the sum of all six chances must be $1$ (axiomatically), implying each chance is $1/6.$ In the denominator of the foregoing result we can read off the expected time to roll any given face: it is $6,$ QED.
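The relation $\Pr(\omega) = 1/E[N(\omega)]$ is easy to check by running "experiment $B$" many times; here is a small Python sketch (my own, not part of the answer), with $\omega$ being a roll of 3.

```python
import random

random.seed(0)

# Repeat "experiment B": roll a fair die until a 3 appears, recording N_i each time
n = 100_000
waits = []
for _ in range(n):
    rolls = 1
    while random.randint(1, 6) != 3:
        rolls += 1
    waits.append(rolls)

# Pr(omega) is estimated by n / (N_1 + ... + N_n), the reciprocal of the mean wait
p_hat = n / sum(waits)
mean_wait = sum(waits) / n
print(p_hat, mean_wait)  # about 1/6 and about 6
```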
38,360
When we roll a fair die, why is 6 the expected number of rolls after which we get our first 3?
For a nonnegative integer-valued discrete random variable $X$, it is a standard result (see e.g. here on stats.SE) that $$E[X] = \sum_{n=0}^\infty P(X > n).$$ When $X$ is a geometric random variable with parameter $p$ that takes on values $1, 2, 3, \ldots$ (which is the case here since $X$ is the number of the trial on which $3$ occurs for the first time), we have that \begin{align} E[X] &= \sum_{n=0}^\infty P(X > n)\\ &= 1 + (1-p) + (1-p)^2 + (1-p)^3 + \cdots\\ &= \frac{1}{1 - (1-p)}\\ &= \frac 1p \end{align} without the need for simulations as in BruceET's answer, or taking derivatives and worrying about interchanging the order of differentiation and summation, etc., as in the Wikipedia article referenced in BruceET's answer. Intuitively, if $3$ has probability $\frac 16$ of occurring, relative frequency notions say that over a long run of $N$ trials, $3$ should occur on roughly $\frac N6$ trials, and so the average spacing between successive occurrences of $3$ should be $6$ trials (five non-3 rolls followed by a $3$), that is, on average, every sixth trial results in a $3$.
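The tail-sum identity is easy to verify numerically, since $P(X > n) = (1-p)^n$ for this geometric variable. A quick Python check (my sketch, not from the answer):

```python
p = 1 / 6

# Tail-sum identity: E[X] = sum_{n>=0} P(X > n), with P(X > n) = (1-p)^n
tail_sum = sum((1 - p) ** n for n in range(2000))

# Direct definition: E[X] = sum_{x>=1} x * p * (1-p)^(x-1)
direct = sum(x * p * (1 - p) ** (x - 1) for x in range(1, 2000))

print(tail_sum, direct)  # both equal 1/p = 6 to machine precision
```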
38,361
What is the history of $p < 0.05$ or 95% confidence?
See this historical article by Stigler (2008) in Chance, about Fisher's influence (as you suggest). Much of early significance testing used the standard normal distribution. As cut-off values get smaller than $-2.0$ there is rapidly diminishing tail probability. So if one wants a relatively small tail probability without insisting on $z$-values too far from $0,$ it seems that cut-off points around $\pm 2$ give a reasonable tradeoff between more extreme z values and smaller probabilities. If one wants "round" numbers for the sum of two tail probabilities, such as $0.01, 0.02, 0.03,$ $0.04, 0.05, 0.06,$ etc., then something near $0.05=5\%$ seems reasonable. p = seq(.01,.1,by=.01); z = qnorm(p) plot(z, p, ylim=c(0,.1))
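The tradeoff described above can be seen numerically: the two-sided 5% cut-off sits almost exactly at $z = 2$. A quick check using only the Python standard library (my sketch, equivalent to the R qnorm call above):

```python
from statistics import NormalDist

# Two-sided 5% level: the critical z is the 0.975 quantile of the standard normal
z_crit = NormalDist().inv_cdf(0.975)

# One-sided tail probability beyond z = 2
tail_beyond_2 = 1 - NormalDist().cdf(2.0)

print(z_crit, tail_beyond_2)  # about 1.96 and about 0.0228
```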
38,362
What is the history of $p < 0.05$ or 95% confidence?
Fisher suggested the 0.05 level indirectly. He mentioned that two standard deviations is an easy rule for significance, and the 0.05 level is what approximately corresponds to it. From Fisher's 1925 'Statistical methods for research workers' If, therefore, we know the standard deviation of a population, we can calculate the standard deviation of the mean of a random sample of any size, and so test whether or not it differs significantly from any fixed value. If the difference is many times greater than the standard error, it is certainly significant, and it is a convenient convention to take twice the standard error as the limit of significance ; this is roughly equivalent to the corresponding limit $P=.05$, already used for the $\chi^2$ distribution. He mentions as well that this level is already used. This refers to Pearson's chi squared test. In the same book he writes about the construction of a table for the values of the $\chi^2$ distribution we have not reprinted Elderton's table, but have given a new table (Table III. p. 98) in a form which experience has shown to be more convenient. Instead of giving the values of $P$ corresponding to an arbitrary series of values of $\chi^2$, we have given the values of $\chi^2$ corresponding to specially selected values of $P$. We have thus been able in a compact form to cover those parts of the distributions which have hitherto not been available, namely, the values of $\chi^2$ less than unity, which frequently occur for small values of $n$, and the values exceeding $30$, which for larger values of $n$ become of importance. ... In preparing this table we have borne in mind that in practice we do not want to know the exact value of $P$ for any observed $\chi^2$, but, in the first place, whether or not the observed value is open to suspicion. If $P$ is between $.1$ and $.9$ there is certainly no reason to suspect the hypothesis tested. 
If it is below $.02$ it is strongly indicated that the hypothesis fails to account for the whole of the facts. We shall not often be astray if we draw a conventional line at $.05$, and consider that higher values of $\chi^2$ indicate a real discrepancy. So the .05 level stems from two types of convenience. It relates to the 68-95-99.7 rule and the 2 sigma value. And it relates to the lack of computers in the old days and the need to find values for distributions from tables. To make these tables easier, Fisher thought it would be better to give $\chi^2$ as a function of $p$ instead of the other way around. So convenient levels needed to be chosen to construct those new tables.
38,363
What is the history of $p < 0.05$ or 95% confidence?
As I remember it, it was indeed Fisher who threw 0.05 out there as a suggestion, and it has been taken as law in many circles since then. I don't have the book at hand, but I read the passage once, and it can probably be found through a quick Google search. I am not that familiar with decision theory, so I can't definitively say that Fisher is the only reason for this.
38,364
Flipping Coins : Probability of Sequences vs Probability of Individuals
By default, str_count does not count overlapping occurrences of the specified pattern. The substring 1111 can overlap with itself substantially, whereas the substring 1110 cannot overlap with itself. Consequently, your calculation for the first substring is substantially biased --- you are substantially undercounting the number of times this pattern actually occurs in your simulation. Try this alternative method instead:

#Flip the coin many times
set.seed(1)
n <- 10^8
FLIPS <- sample(c(0,1), size = n, replace = TRUE)

#Count the proportion of occurrences of 1-1-1-1
PATTERN.1111 <- FLIPS[1:(n-3)]*FLIPS[2:(n-2)]*FLIPS[3:(n-1)]*FLIPS[4:n]
sum(PATTERN.1111)/n
[1] 0.06246614

#Count the proportion of occurrences of 1-1-1-0
PATTERN.1110 <- FLIPS[1:(n-3)]*FLIPS[2:(n-2)]*FLIPS[3:(n-1)]*(1-FLIPS[4:n])
sum(PATTERN.1110)/n
[1] 0.0624983

With this alternative simulation (which counts overlapping occurrences of the patterns) you get proportions for the two outcomes that are roughly the same. If the coin flips are in fact independent and "fair" then each player has the same probability of winning the wager. Mathematically, the true probability of any run of four outcomes is $1/2^4 = 0.0625$, so that is what the above simulations are effectively estimating; the remaining small disparity in the simulation is due to random error.
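The same pitfall exists in most string libraries. A tiny Python illustration (Python's str.count, like stringr::str_count, counts only non-overlapping matches):

```python
# Overlapping vs non-overlapping counts of a self-overlapping pattern
s = "111111"

# str.count finds non-overlapping matches only: just one here
non_overlapping = s.count("1111")

# Counting with overlaps: slide a window one position at a time
overlapping = sum(s[i:i + 4] == "1111" for i in range(len(s) - 3))

print(non_overlapping, overlapping)  # 1 vs 3
```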
38,365
Flipping Coins : Probability of Sequences vs Probability of Individuals
EDIT: The reason you are getting a different percentage for HHHH and HHHT is that you are calculating the instances of 1111 and 1110 in a very long string; you are not breaking these into blocks of 4. In a very long string it is more likely for you to have 3 ones in a row than it is to have 4 ones in a row. Since you aren't checking the groupings of 4 to make sure the flips are all in a single test, you will end up with more 1110 than you will 1111. The correct way to code the problem is to group the coin flips into groups of 4. The following should be pretty easy to follow but is a bit slow.

#load library
library(stringr)

#define number of flips
n <- 100000

# Pre-assign a length of n to a data.frame
df <- data.frame(flip = character(n))
for(i in 1:n){
  df$flip[i] <- paste(sample(c(0,1), replace = TRUE, size = 4), collapse = "")
}

100*sum(df$flip == "1111")/n
# 6.259
100*sum(df$flip == "1110")/n
# 6.193

Original math-based answer (missing code): This is a common misinterpretation of statistics. Great example to learn from. Your question was: after 3 coin flips, if I bet on the outcome of a 4th flip, what is the probability of the 4th flip? The 4th flip is now independent of the first 3 flips. There is no mechanism out there that grabs the coin and changes the probability of that 4th flip. The 4th flip will have a 50% chance of being heads and a 50% chance of being tails. Now, the question you are answering is: what is the probability a coin will be heads 4 times in a row? This is an entirely different question. The new question is asking what the probability is that you will get 4 heads in a row, and this is a dependent question because not only does the 4th flip have to be heads, it depends on the first 3 having also been heads first. Then you have 16 possible combinations in 4 coin flips and only 1 possible way for it to come up with 4 heads (1/16 = 6.25%).
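The block-of-4 approach carries over directly to Python; this sketch (mine, not part of the answer) groups flips into independent blocks and recovers roughly 6.25% for both patterns.

```python
import random

random.seed(1)
n = 100_000

# Group flips into independent blocks of 4, as recommended above
counts = {"1111": 0, "1110": 0}
for _ in range(n):
    block = "".join(random.choice("01") for _ in range(4))
    if block in counts:
        counts[block] += 1

p_1111 = counts["1111"] / n
p_1110 = counts["1110"] / n
print(p_1111, p_1110)  # both near 1/16 = 0.0625
```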
38,366
Flipping Coins : Probability of Sequences vs Probability of Individuals
Ah my friend, you are making a very simple mistake. In your simulation, you are computing the proportion of times a person could flip 4 heads in a row. But that is not what you have wagered. You enter the bet having seen the three heads and have wagered only on the result of the next flip. Because each flip is independent, and the coin assumed fair, the probability of a heads is the same as a tails and hence the odds are even! It would have been different had you made the wager at the beginning of the four flips. In such a case, we could just compute the binomial density. We would see that 4 heads in a row (conditioned on making only four flips) is very small and so you would have the better odds, again assuming the coin is fair. But having already seen the 3 flips and then wagering is akin to just betting on a coin flip.
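The distinction between the two wagers can be checked by brute-force enumeration of the 16 equally likely sequences; a small Python sketch (my own illustration):

```python
from itertools import product

# Enumerate all 16 equally likely sequences of 4 fair flips (H = 1, T = 0)
seqs = list(product([0, 1], repeat=4))

# Betting at the start: probability of four heads in a row
p_hhhh = sum(s == (1, 1, 1, 1) for s in seqs) / len(seqs)

# Betting after seeing HHH: condition on the first three flips being heads
given_hhh = [s for s in seqs if s[:3] == (1, 1, 1)]
p_fourth_head = sum(s[3] == 1 for s in given_hhh) / len(given_hhh)

print(p_hhhh, p_fourth_head)  # 0.0625 vs 0.5
```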
38,367
Underestimation of standard error
This is what's going on

> summary(row_std)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
0.05212 0.69485 0.91762 0.94109 1.15915 2.56883 
> summary(row_std^2)
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
0.002717 0.482820 0.842034 1.001564 1.343628 6.598870 

Because $s^2$ is unbiased for $\sigma^2$, it is not possible for $s$ to be unbiased for $\sigma$. To a first approximation $$E[f(Z)]=f(E[Z])+f''(E[Z])\dfrac{\mathrm{var}(Z)}{2}$$ where, taking $Z=s^2$ and $f(z)=\sqrt z$, the second derivative is $f''(\sigma^2)=-\frac{1}{4}\sigma^{-3}$ and the variance is $\mathrm{var}(s^2)=\frac{\sigma^4}{n}\left(2+\frac{2}{n-1}\right)=\frac{2\sigma^4}{n-1}$ (for a Gaussian), giving a first correction of $-0.0625$ here ($n=5,$ $\sigma=1$). This slightly overcorrects. Also, it depends on the unknown $\sigma^2$ and kurtosis, so it wouldn't be estimated all that well from five observations if we weren't pretending we knew the values. More importantly, if the data are Gaussian, the distribution of $s^2$ is already taken into account in computing the confidence interval

> in_interval<-function(theta,hat,se,tcrit){ (hat-tcrit*se <= theta) & (hat+tcrit*se>=theta)}
> meanhat<-rowMeans(mat)
> table(in_interval(0,meanhat, row_se,abs(qt(.025,4))))

FALSE  TRUE 
 4783 95217
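For Gaussian data the downward bias of $s$ is known exactly: $E[s] = c_4\,\sigma$ with $c_4(n)=\sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$, about $0.940$ for $n=5$, matching the simulated mean of 0.94109 above. A quick Python check (my sketch, not from the answer):

```python
import math
import random

random.seed(0)
n, reps = 5, 100_000

# Sample standard deviation (divisor n-1) of one Gaussian sample
def sample_sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Monte Carlo estimate of E[s] for samples of size 5 from N(0, 1)
mean_s = sum(sample_sd([random.gauss(0, 1) for _ in range(n)])
             for _ in range(reps)) / reps

# Exact expectation: E[s] = c4 * sigma
c4 = math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)
print(mean_s, c4)  # both about 0.940
```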
38,368
Underestimation of standard error
By Jensen’s inequality, the sample standard deviation is an underestimate (in expectation) of the true standard deviation, since the square root is concave and $S^2$ is unbiased for the second central moment.
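A two-point toy example makes the Jensen gap concrete (my own illustration, not part of the answer): if $S^2$ takes the values $0.5$ and $1.5$ with equal probability, it is unbiased for $\sigma^2=1$, yet $E[S]<1$.

```python
import math

# Toy two-point distribution for S^2: 0.5 or 1.5 with equal probability
e_s2 = 0.5 * 0.5 + 0.5 * 1.5                         # E[S^2] = 1, unbiased
e_s = 0.5 * math.sqrt(0.5) + 0.5 * math.sqrt(1.5)    # E[S] < sqrt(E[S^2]) = 1

print(e_s2, e_s)  # 1.0 vs about 0.966
```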
38,369
Absolute Value of Uniform
No, because given $X,X'$, you have the values for $Y,Y'$. So, the probability is either $1$ or $0$, depending on $x,x'$: $$P(Y\leq Y'|X=x,X'=x')=\mathbb I(|x|\leq |x'|)$$
38,370
Absolute Value of Uniform
No, that is false. Once you condition on $X$ and $X'$ the event $Y \leqslant Y'$ is deterministic. Specifically, you have: $$\mathbb{P}(Y \leqslant Y'|X=x, X'=x') = \mathbb{I}(|x| \leqslant |x'|).$$
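A small numerical illustration (a Python sketch I'm adding, taking $X, X' \sim U(-1,1)$ as an assumed example): conditionally the event is just the indicator above, while marginally $\mathbb{P}(Y \leqslant Y') = 1/2$ by symmetry of the iid draws.

```python
import numpy as np

def p_cond(x, xp):
    """P(Y <= Y' | X = x, X' = xp) is deterministic: an indicator."""
    return float(abs(x) <= abs(xp))

print(p_cond(0.3, -0.7))  # 1.0
print(p_cond(-0.9, 0.2))  # 0.0

# Marginally, over iid draws, P(Y <= Y') = 1/2
rng = np.random.default_rng(1)
x, xp = rng.uniform(-1, 1, 100_000), rng.uniform(-1, 1, 100_000)
print(round(float(np.mean(np.abs(x) <= np.abs(xp))), 2))  # ~0.5
```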
38,371
Why is my design matrix rank deficient? (modelling seasonal data with a cyclic spline)
cyclicSpline already contains the constant vector in its span, so if you additionally add an intercept the design matrix will be rank deficient. Matrix::rankMatrix(cyclicSpline) shows that cyclicSpline is full rank by itself, and fitting lm(y ~ cyclicSpline - 1) will fix the issue. To confirm that this really is the case, we can compute the hat matrix explicitly as

U <- svd(cyclicSpline)$u
H <- U %*% t(U)

and then check that H %*% rep(1, n) is within numerical rounding of rep(1, n) (i.e. H acts as the identity on the vector of all 1s, so that vector lies entirely within the column space of cyclicSpline).
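The same check can be sketched numerically (a Python stand-in I'm adding, using an arbitrary row-stochastic matrix in place of mgcv's cyclic basis): when the columns of a design matrix sum rowwise to 1, adding an intercept column leaves the rank unchanged, and the hat matrix reproduces the constant vector exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Stand-in for a cyclic spline basis: 3 columns whose rows sum to 1
B = rng.uniform(size=(n, 3))
B /= B.sum(axis=1, keepdims=True)

ones = np.ones(n)
print(np.linalg.matrix_rank(B))                           # 3: full rank alone
print(np.linalg.matrix_rank(np.column_stack([ones, B])))  # still 3: intercept redundant

# Hat matrix via SVD, as in the answer: H = U U^T
U = np.linalg.svd(B, full_matrices=False)[0]
H = U @ U.T
print(bool(np.allclose(H @ ones, ones)))  # True: 1s lie in the column space of B
```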
38,372
Why is my design matrix rank deficient? (modelling seasonal data with a cyclic spline)
As jld writes, your splines contain the constant 1 in their span, so you should fit a model without an intercept. As a matter of fact, the splines sum rowwise to 1: rowSums(cyclicSpline) gives you a constant vector of 1s. Here is a plot of x against its spline transform, which shows very nicely how the splines add to 1:

plot(x, cyclicSpline[,1], pch=19)
points(x, cyclicSpline[,2], pch=19, col=2)
points(x, cyclicSpline[,3], pch=19, col=3)
38,373
Why is my design matrix rank deficient? (modelling seasonal data with a cyclic spline)
I'm sure this lack of identification is due to your model also including a constant (intercept). mgcv certainly applies identifiability constraints to the cyclic splines it creates, but that happens in higher-level functions: if you want those constraints applied you should go through the smooth.construct.xxxx functions, while cSplineDes() is a lower-level function that just creates the basis. Remove the intercept and things will start working:

> # Example data
> set.seed(1234)
> n <- 1000
> x <- runif(n, 0, 2 * pi) # some random time in the year (2 * pi being day 365)
> y <- sin(x) + rnorm(n)
>
> # Generate cyclic B-spline bases using mgcv
> k <- 4
> knots <- seq(0, 2 * pi, length.out = k)
> cyclicSpline <- mgcv::cSplineDes(x, knots = knots)
> m1 <- lm(y ~ cyclicSpline)
> m2 <- lm(y ~ cyclicSpline - 1) ## remove intercept
> AIC(m1, m2)
   df      AIC
m1  4 2742.641
m2  4 2742.641
38,374
Why do bootstrapping standard errors and 95% confidence intervals change each time I re-conduct the analysis? [duplicate]
Bootstrapping involves resampling your data randomly. Thus, each time you bootstrap, a different (re)sample will be drawn. Therefore, the results of different bootstrap runs will be different. If these differences are large, then you should be suspicious that your bootstrap may not be working well. If the differences are trivial, they are no problem. You may want to set the seed value of your random number generator in order to make your bootstrap exactly replicable.
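To see the fluctuation directly, here is a small Python sketch (my own illustration, not from the answer): repeated bootstrap runs on the same data give slightly different percentile interval endpoints.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=10, scale=2, size=40)   # one fixed dataset

def boot_ci(data, rng, B=2000, alpha=0.05):
    """Percentile bootstrap CI for the mean."""
    idx = rng.integers(0, len(data), size=(B, len(data)))
    means = data[idx].mean(axis=1)
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Two runs with different random streams: similar but not identical
ci1 = boot_ci(data, np.random.default_rng(100))
ci2 = boot_ci(data, np.random.default_rng(200))
print(ci1, ci2)   # endpoints differ by a small amount between runs
```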
38,375
Why do bootstrapping standard errors and 95% confidence intervals change each time I re-conduct the analysis? [duplicate]
This is totally normal, and it is why we set a random seed (to get the same randomization each time) via set.seed in R or np.random.seed in Python. The way the bootstrap works is to take many random samples, with replacement, of your data, so there should be small fluctuations in your calculated values as those random samples vary.
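A minimal reproducibility demo in Python (my addition; seeded NumPy Generator objects play the role of np.random.seed here): the same seed reproduces the same bootstrap resamples exactly, while a different seed does not.

```python
import numpy as np

data = np.array([2.1, 3.5, 1.9, 4.2, 2.8, 3.1, 2.4, 3.9])

def boot_means(data, seed, B=1000):
    """Bootstrap distribution of the mean, with a fixed seed."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(data), size=(B, len(data)))
    return data[idx].mean(axis=1)

a = boot_means(data, seed=42)
b = boot_means(data, seed=42)   # same seed: identical results
c = boot_means(data, seed=43)   # different seed: different resamples

print(bool(np.array_equal(a, b)))  # True
print(bool(np.array_equal(a, c)))  # False
```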
38,376
What metric can I use to calculate the distance between labels?
It really depends on what kind of words you are referring to. There are two distances that I wish to talk about:

Edit distance. If you wish to capture how different two sequences are, you can use the Levenshtein distance or the Damerau-Levenshtein distance. Mathematically, for words $A$ and $B$, the Levenshtein distance is the least number of moves/operations needed to transform word $A$ into word $B$. This is what you might be looking for when your definition of a word is a sequence of letters.

Context similarity. For words we can also talk about the contextual meaning of each word. If two words are related or have similar meanings, then we expect this measure to be small. This can be implemented with word2vec: we train the model in an unsupervised manner, obtain a vectorized representation of each word, and measure the distance by comparing the two vectors. The most popular way of measuring this distance is cosine similarity.

The two distances do not correlate with each other. For example, for deed and deer the edit distance is small (in fact it equals 1), but the similarity distance will be big since those words are not related.

Edit: since the asker explained his specific case, you can consider using the Earth Mover's/Wasserstein distance. This is my idea of how you might approach it. Suppose you wish to impose an ordering on the letters such that $a < b < c < d < e$, and you have 3 letters in your word. For the word $abc$, let $t_1=0, t_2=1, t_3=2$ and $w_1=a, w_2=b, w_3=c$. Also let $T$ be a key-value mapping between letters and some arbitrary values that reflects your ordering.

\begin{equation} f(x) = \begin{cases} T(w_i), & t_i \leq x < t_{i+1}, \\ 0, & \text{otherwise}\\ \end{cases} \end{equation}

Forgive my poor use of notation, but the idea is (if we let $T(a)=0.8$ and $T(b)=1.5$, for example) that you have a function with value $f(x)=0.8$ for $x\in[0,1]$ and value $f(x)=1.5$ for $x\in[1,2]$. Now you can see this as an unnormalized distribution, and you could calculate the Earth Mover's/Wasserstein distance. This is just a rough idea and might not necessarily make sense. Here is a useful link.
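The edit-distance part can be made concrete with a short dynamic-programming implementation in Python (my own sketch of the standard Wagner-Fischer algorithm, not from the answer):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(levenshtein("deed", "deer"))       # 1, as in the example above
print(levenshtein("kitten", "sitting"))  # 3
```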
38,377
What metric can I use to calculate the distance between labels?
You can still use the Hamming distance for 5 letters. Another metric that you can use is the Levenshtein distance: the minimum number of single-character edits required to change one word into the other. If there is some meaning to the order of your letters, such that for example the distance between a and c is larger than the distance between a and b, then you can use a metric such as the Euclidean distance.
38,378
What metric can I use to calculate the distance between labels?
It sounds to me like you may be looking for a simple sum of absolute differences (also called L1 distance). Assuming that we can extend your ordinal relation $a < b < c < d < e$ into a metric (e.g. $b - a = c - b = 1$, $e - a = 4$, etc.), then you can let the difference between two words of the same length be the sum of absolute differences of symbols in the same position. For example, $ abcd - edcb = |a-e| + |b-d| + |c-c| + |d-b| = 4 + 2 + 0 + 2 = 8$. In the case of an alphabet of two symbols, this is identical to the Hamming distance, and like the Hamming distance it doesn't have any idea of context or transposition — $aaea$ and $aeaa$ are separated by a distance of 8, just as much as $aaaa$ and $cccc$.
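This is easy to implement; a small Python sketch (my addition) reproduces the worked examples, mapping letters to integers by their alphabetical order:

```python
def l1_distance(a: str, b: str) -> int:
    """Sum of absolute positionwise differences between two
    equal-length words, with letters ordered a < b < c < ..."""
    assert len(a) == len(b), "words must have equal length"
    return sum(abs(ord(x) - ord(y)) for x, y in zip(a, b))

print(l1_distance("abcd", "edcb"))  # 4 + 2 + 0 + 2 = 8
print(l1_distance("aaea", "aeaa"))  # 8
print(l1_distance("aaaa", "cccc"))  # 8
```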
38,379
What metric can I use to calculate the distance between labels?
Usually when people talk about word similarity, they refer to something like Yohanes Alfredo's answer. In your case, you want to take into account the sort order of the characters, and it might be that hobbs has the answer you need. Do you want to find distance according to the overall sort order of the word? In other words, do you want $$d(aaaa, aaad)<d(aaaa, daaa)$$ because words that start with the same letter are closer to each other in the dictionary than words that differ in later letters? If so, then you're better off calculating a value for each word using the following algorithm, which takes into account the position of the letters in the word.

value1 = 0
for i in 1 to length:
    value1 = value1 + (alphabetSize ^ i) * letters1[length - i]

value2 = 0
for i in 1 to length:
    value2 = value2 + (alphabetSize ^ i) * letters2[length - i]

distance = abs(value1 - value2)

What this is doing is treating each word as a number written in base alphabetSize. I apologize for writing this in pseudocode; using mathematical notation with a capital sigma would be clearer, but I don't know my way around the typography.
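The pseudocode above translates directly to Python (my translation, keeping the original indexing, with letters assumed to be mapped to digits 0..alphabetSize-1):

```python
def word_value(word: str, alphabet: str = "abcde") -> int:
    """Treat a word as a number in base len(alphabet), as in the
    pseudocode: the letter i positions from the end gets weight
    alphabetSize ** i."""
    size = len(alphabet)
    digits = [alphabet.index(ch) for ch in word]
    length = len(digits)
    value = 0
    for i in range(1, length + 1):
        value += (size ** i) * digits[length - i]
    return value

def distance(w1: str, w2: str) -> int:
    return abs(word_value(w1) - word_value(w2))

# Words sharing a leading letter are closer in dictionary order:
print(distance("aaaa", "aaad"))  # 15
print(distance("aaaa", "daaa"))  # 1875
```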
38,380
Calculate accelerated bootstrap interval in R
First, a warning: the bootstrap (as with most statistical methods) is unlikely to be reliable with such a small sample size. I would exercise caution if $n=6$ is a standard sample size in your case.

Let's simulate some data:

set.seed(42)
n <- 30   #Sample size
x <- round(runif(n, 0, 100))

Let's refer to your index as $\theta$ and the estimator you provide for it as $\hat\theta$, which can be computed as follows.

theta_hat <- var(x)/mean(x)^2 - 1/mean(x)

For this simulated data, I get $\hat\theta = 0.2104$ and (by cranking $n$ wayyyy up) we have (roughly) $\theta = 0.32$.

Obtain the Bootstrap distribution

The Bootstrap algorithm is fairly straightforward to code up on your own.

B <- 10000   #number of bootstrap resamples
theta_boot <- rep(NA, B)
for(i in 1:B){
   #Select a bootstrap sample
   xnew <- sample(x, length(x), replace=TRUE)
   #Estimate index
   theta_boot[i] <- var(xnew)/mean(xnew)^2 - 1/mean(xnew)
}
#Plot bootstrap distribution
hist(theta_boot, breaks=30, xlab='theta', main='Bootstrap distribution')
abline(v=0.32, lwd=2, col='orange')

The resulting distribution looks like this, where the vertical line represents the "true" value of the index $\theta$.

Confidence intervals using the (percentile) Bootstrap

At this point, getting a confidence interval is very straightforward. Suppose you want a $95\%$ CI (i.e. $\alpha = 0.05$). You are looking for the points $L$ and $U$ such that $2.5\%$ of the Bootstrap samples are below $L$ and above $U$. Mathematically, this is equivalent to setting $$L = \hat F^{-1}(\alpha/2) \quad\quad\quad U = \hat F^{-1}(1-\alpha/2),$$ where $\hat F$ is the "Bootstrap CDF". In R, this can be done simply by typing

alpha <- 0.05
quantile(theta_boot, c(alpha/2, 1-alpha/2))

For this data, we get a $95\%$ CI of $(0.101, 0.355)$.

The Accelerated Bootstrap

Although the method of the previous section is a straightforward and natural way to obtain endpoints for a confidence interval, there are several alternatives which have been shown to perform better in a variety of settings. The accelerated Bootstrap is one such method. The endpoints of the CI in this approach are found by considering the function $$g(u) = \hat F^{-1}\left(\Phi\left(z_0 + \frac{z_0 + z_u}{1-a(z_0+z_u)}\right) \right)$$ and setting $L = g(\alpha/2)$ and $U=g(1-\alpha/2)$. There are a lot of new terms in this function, which I will now describe:

$\Phi(z)$ represents the standard normal CDF.
$z_0 = \Phi^{-1}(\hat F(\hat\theta)).$
$z_u = \Phi^{-1}(u).$
$a$ is an "acceleration constant".

Estimation of the acceleration constant is the last remaining "challenge" and will be discussed in the next section. For now, let's fix the value $a=0.046$. The accelerated Bootstrap CI can now be computed in R as follows.

#Desired quantiles
u <- c(alpha/2, 1-alpha/2)

#Compute constants
z0 <- qnorm(mean(theta_boot <= theta_hat))
zu <- qnorm(u)
a <- 0.046

#Adjusted quantiles
u_adjusted <- pnorm(z0 + (z0+zu)/(1-a*(z0+zu)))

#Accelerated Bootstrap CI
quantile(theta_boot, u_adjusted)

This gives a new $95\%$ CI of $(0.114, 0.383)$, which has effectively "shifted" the CI bounds in the direction of the true value for $\theta$. (Side note: when $a=0$, the accelerated Bootstrap is known as the bias-corrected Bootstrap.) The following figure shows the Bootstrap distribution again, with vertical lines representing the confidence intervals for each case.

Estimating the acceleration constant

The acceleration constant can (in some cases) be calculated theoretically from the data by assuming a particular distribution for the data. Otherwise, a non-parametric approach can be used.

Efron (1987) shows that for univariate sampling distributions, the acceleration constant is reasonably well approximated by $$\hat a = \frac{1}{6}\frac{\sum_{i=1}^n I_i^3}{\left(\sum_{i=1}^nI_i^2\right)^{3/2}}$$ where $I_i$ denotes the influence of point $x_i$ on the estimation of $\theta$. Efron proposes approximating $I_i$ using the infinitesimal jackknife, but others have demonstrated that the finite-sample jackknife is often sufficient. Thus, each $I_i$ can be approximated by $$I_i = (n-1)[\hat\theta - \hat\theta_{-i}]$$ where $\hat\theta_{-i}$ represents an estimate of $\theta$ (your index) after removing the $i^{th}$ data point.

I <- rep(NA, n)
for(i in 1:n){
   #Remove ith data point
   xnew <- x[-i]
   #Estimate theta
   theta_jack <- var(xnew)/mean(xnew)^2 - 1/mean(xnew)
   I[i] <- (n-1)*(theta_hat - theta_jack)
}
#Estimate a
a_hat <- (sum(I^3)/sum(I^2)^1.5)/6

This leads to the acceleration constant estimate of $\hat a = 0.046$ that was used in the previous section.
Calculate accelerated bootstrap interval in R
First a warning... the Bootstrap (as with most statistical methods) is unlikely to be reliable with such a small sample size. I would exercise caution if $n=6$ is a standard sample size in your case.
Calculate accelerated bootstrap interval in R First a warning... the Bootstrap (as with most statistical methods) is unlikely to be reliable with such a small sample size. I would exercise caution if $n=6$ is a standard sample size in your case. Lets simulate some data set.seed(42) n <- 30 #Sample size x <- round(runif(n, 0, 100)) Lets refer to your index as $\theta$ and the estimator you provide for it as $\hat\theta$, which can be computed as follows. theta_hat <- var(x)/mean(x)^2 - 1/mean(x) For this simulated data, I get $\hat\theta = 0.2104$ and (by cranking $n$ wayyyy up) we have (roughly) $\theta = 0.32$. Obtain the Bootstrap distribution The Bootstrap algorithm is fairly straightforward to code up on your own. B <- 10000 #number of bootstrap resamples theta_boot <- rep(NA, B) for(i in 1:B){ #Select a bootstrap sample xnew <- sample(x, length(x), replace=TRUE) #Estimate index theta_boot[i] <- var(xnew)/mean(xnew)^2 - 1/mean(xnew) } #Plot bootstrap distribution hist(theta_boot, breaks=30, xlab='theta', main='Bootstrap distribution') abline(v=0.32, lwd=2, col='orange') The resulting distribution looks like this, where the vertical line represents the "true" value of the index $\theta$. Confidence intervals using the (percentile) Bootstrap At this point, getting a confidence interval is very straightforward. Suppose you want a $95\%$ CI (i.e. $\alpha = 0.05$). You are looking for the points $L$ and $U$ such that $2.5\%$ of the Bootstrap samples are below $L$ and above $U$. Mathematically, this is equivalent to setting $$L = \hat F^{-1}(\alpha/2) \quad\quad\quad U = \hat F^{-1}(1-\alpha/2),$$ where $\hat F$ is the "Bootstrap CDF". In R, this can be done simply by typing alpha <- 0.05 quantile(theta_boot, c(alpha/2, 1-alpha/2)) For this data, we get a $95\%$ CI of $(0.101, 0.355)$. 
The Accelerated Bootstrap Although the method of the previous section is a straightforward and natural way to obtain endpoints for a confidence interval, there are several alternatives which have been shown to perform better in a variety of settings. The Accelerated Bootstrap is one such method. The endpoints to the CI in this approach are found by considering the function $$g(u) = \hat F^{-1}\left(\Phi\left(z_0 + \frac{z_0 + z_u}{1-a(z_0+z_u)}\right) \right)$$ and setting $L = g(\alpha/2)$ and $U=g(1-\alpha/2)$. There are a lot of new terms in this function which I will now describe. $\Phi(z)$ represents the standard normal CDF. $z_0 = \Phi^{-1}(\hat F(\hat\theta)).$ $z_u = \Phi^{-1}(u).$ $a$ is an "acceleration constant". Estimation of the acceleration constant is the last remaining "challenge" and will be discussed in the next section. For now, let's fix the value $a=0.046$. The accelerate Bootstrap CI can now be computed in R as follows. #Desired quantiles u <- c(alpha/2, 1-alpha/2) #Compute constants z0 <- qnorm(mean(theta_boot <= theta_hat)) zu <- qnorm(u) a <- 0.046 #Adjusted quantiles u_adjusted <- pnorm(z0 + (z0+zu)/(1-a*(z0+zu))) #Accelerated Bootstrap CI quantile(theta_boot, u_adjusted) This gives a new $95\%$ CI of $(0.114, 0.383)$, which has effectively "shifted" the CI bounds in the direction of the true value for $\theta$. (Side note: when $a=0$, the accelerated Bootstrap is known as the bias correction Bootstrap). The following figure shows the Bootstrap distribution again, with vertical lines representing the Confidence intervals for each case. Estimating the acceleration constant The acceleration constant can (in some cases) be calculated theoretically from the data by assuming a particular distribution for the data. Otherwise, a non-parametric approach can be used. 
Efron (1987) shows that for univariate sampling distributions, the acceleration constant is reasonably well approximated by $$\hat a = \frac{1}{6}\frac{\sum_{i=1}^n I_i^3}{\left(\sum_{i=1}^nI_i^2\right)^{3/2}}$$ where $I_i$ denotes the influence of point $x_i$ on the estimation of $\theta$. Efron proposes approximating $I_i$ using the infinitesimal jackknife, but others have demonstrated that the finite-sample Jackknife is often sufficient. Thus, each $I_i$ can be approximated by $$I_i = (n-1)[\hat\theta - \hat\theta_{-i}]$$ where $\hat\theta_{-i}$ represents an estimate of $\theta$ (your index) after removing the $i^{th}$ data point. I <- rep(NA, n) for(i in 1:n){ #Remove ith data point xnew <- x[-i] #Estimate theta theta_jack <- var(xnew)/mean(xnew)^2 - 1/mean(xnew) I[i] <- (n-1)*(theta_hat - theta_jack) } #Estimate a a_hat <- (sum(I^3)/sum(I^2)^1.5)/6 This leads to the accleration constant estimate of $\hat a = 0.046$ that was used in the previous section.
Calculate accelerated bootstrap interval in R

First a warning... the Bootstrap (as with most statistical methods) is unlikely to be reliable with such a small sample size. I would exercise caution if $n=6$ is a standard sample size in your case.
38,381
Calculate accelerated bootstrap interval in R
Since the question mentioned boot.ci, I thought I would try to replicate the results of @knrumsey with the boot package. A couple of notes. I copied my general code for using boot.ci with a function from here (with the caveat that I am the author of the code). The results are similar to those of @knrumsey. I can't confirm that the 'perc' and 'bca' methods are the same as those used in the original answer.

set.seed(42)
n <- 30   #Sample size
x <- round(runif(n, 0, 100))

library(boot)

Function = function(input, index){
  Input = input[index]
  Result = var(Input)/mean(Input)^2 - 1/mean(Input)
  return(Result)}

Boot = boot(x, Function, R=10000)
hist(Boot$t[,1])

boot.ci(Boot, conf = 0.95, type = "perc")

### BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
### Based on 10000 bootstrap replicates
###
### Intervals :
### Level     Percentile
### 95%   ( 0.1021,  0.3521 )

boot.ci(Boot, conf = 0.95, type = "bca")

### BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
### Based on 10000 bootstrap replicates
###
### Intervals :
### Level       BCa
### 95%   ( 0.1181,  0.3906 )
38,382
Calculate accelerated bootstrap interval in R
Saw this approach and tried replicating it with code in Python, following the steps outlined by @knrumsey. The results are similar.

# the libraries
import pandas as pd
import numpy as np
from scipy import stats

# for bootstrapping
rng = np.random.default_rng()

# random seed
import random
random.seed(42)

Data simulation and bootstrapping

# simulate data
n = 30   # sample size
x = np.round(np.random.uniform(low=0.0, high=100, size=n), 0)
print(x)

array([ 96.,  76.,  52.,  89.,  31.,  40.,  73.,  30.,  13.,  75.,  75.,
        77.,  75.,  66.,  25.,  98.,  41.,  77.,  47.,  92.,  29.,  14.,
       100.,  49.,   9.,  20.,  38.,  39.,   8.,  29.])

# small function to calc index
def fn(s):
    return np.var(s)/np.mean(s)**2 - 1/np.mean(s)

# call fn
theta_hat = fn(x)
print(theta_hat)

0.2744459152021498

# bootstrap samples
def bootstrap(sample, func, n_reps=1000, replace=True, shuffle=True, random_state=None):
    boot_resamples = np.empty([n_reps])
    def resample(sample, size=len(sample), replace=replace, shuffle=shuffle, axis=0):
        return rng.choice(a=sample, size=size, axis=axis)
    for i in range(n_reps):
        boot_resamples[i] = func(resample(sample))
    return boot_resamples

# call bootstrap function
theta_boot = bootstrap(sample=x, func=fn, n_reps=10000)
print(theta_boot)

array([0.22055184, 0.28982115, 0.35431769, ..., 0.25468442, 0.23192187,
       0.25084865])

Plot the distribution

# plot bootstrap distribution
import matplotlib.pyplot as plt
n, bins, patches = plt.hist(theta_boot, 30, density=True, facecolor='b', alpha=0.5)
plt.xlabel('theta')
plt.title('Bootstrap distribution')
plt.axvline(x=theta_hat, color='orange')
plt.show()

Estimating z0 and the BCa intervals

# Confidence intervals using the (percentile) Bootstrap
alpha = 0.05
p = np.quantile(theta_boot, [alpha/2, 1-alpha/2])
print(p)

# desired quantiles
u = [alpha/2, 1-alpha/2]
print('u:', u)

# compute constants
from scipy import stats
z0 = stats.norm.ppf(np.mean(theta_boot <= theta_hat))
print('z0:', z0)
zu = stats.norm.ppf(u)
print('zu:', zu)
a = 0.046

# adjusted quantiles
u_adjusted = stats.norm.cdf(z0 + (z0+zu)/(1-a*(z0+zu)))
print('u_adjusted:', u_adjusted)

# accelerated bootstrap CI
bca = np.quantile(theta_boot, u_adjusted)
print('bca:', bca)

u: [0.025, 0.975]
z0: 0.12540870112199437
zu: [-1.95996398  1.95996398]
u_adjusted: [0.05863013 0.9924932 ]
bca: [0.17385148 0.46037554]

Plot of the percentile and bca intervals

red vertical bars = bca intervals
blue vertical bars = percentile intervals

# plot percentile and bca intervals
n, bins, patches = plt.hist(theta_boot, 30, density=True, facecolor='b', alpha=0.5)
plt.xlabel('theta')
plt.title('Bootstrap distribution')
plt.axvline(x=theta_hat, color='orange')
for i in range(len(p)):
    plt.axvline(x=p[i], color='blue')
for j in range(len(bca)):
    plt.axvline(x=bca[j], color='red')
plt.show()

Estimate acceleration constant a

# estimate a
def jackknife(sample, func, theta_hat):
    m = len(sample)   # sample size (the name n was reused by plt.hist above)
    theta_jack = np.empty([m])
    for i in range(m):
        # delete row 'i' from the sample and run function with row 'i' removed
        jackknife_resample = np.delete(arr=sample, obj=i, axis=0)
        theta_jack[i] = func(jackknife_resample)
    I = (m-1)*(theta_hat - theta_jack)
    return (np.sum(I**3)/np.sum(I**2)**1.5)/6

# call jackknife function
a = jackknife(sample=x, func=fn, theta_hat=theta_hat)
print(a)

0.04435518601382892
38,383
Calculate accelerated bootstrap interval in R
There is a problem with your index. It is not dimensionless. The first term of the index is dimensionless, but the second term has the dimension (1/X). Thus it is not only not dimensionless, but it is also non-homogeneous. Please check if it is correct.
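The dimensional argument above is easy to verify numerically: rescale the data by a constant $c$ (a change of units) and watch how each term of the index reacts. This is an illustrative sketch, not code from the question.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1, 100, size=50)   # some positive data
c = 10.0                           # change of units, e.g. cm -> mm

term1 = lambda s: np.var(s) / np.mean(s)**2   # var(X)/mean(X)^2, dimensionless
term2 = lambda s: 1.0 / np.mean(s)            # 1/mean(X), dimension 1/X

invariant = np.isclose(term1(c * x), term1(x))     # unchanged by rescaling
scales = np.isclose(term2(c * x), term2(x) / c)    # shrinks by the factor c
```

The first term is unaffected by a change of units while the second is not, which is exactly the non-homogeneity being pointed out.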
38,384
Why are Poisson regression coefficients biased?
The default link function for the Poisson family is the log, which means:
$$\mathbb{E}[y]=\exp\left(\log(5)+\log(x)\right)$$
If you specify your glm model as y ~ log(x), then you should recover "1" as the coefficient and "log(5)" as the intercept.
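As an illustrative check (not from the original answer), here is a numpy-only sketch that simulates $y \sim \text{Poisson}(5x)$ and fits the log-link Poisson regression of y on log(x) with a minimal hand-rolled IRLS loop; the estimates land near $1$ for the slope and $\log(5)$ for the intercept, as claimed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.uniform(0.1, 1.0, size=n)   # keep x away from 0 so the rate 5x stays positive
y = rng.poisson(5 * x)

# design matrix: intercept + log(x); true coefficients are log(5) and 1
X = np.column_stack([np.ones(n), np.log(x)])

beta = np.zeros(2)
for _ in range(50):                  # IRLS / Newton steps for the Poisson GLM
    mu = np.exp(X @ beta)            # log link: mu = exp(X beta)
    # weighted normal equations; the Poisson has Var(y) = mu
    beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
```

With this sample size the fitted beta is within a few hundredths of (log 5, 1).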
38,385
Why are Poisson regression coefficients biased?
That isn't how Poisson regression works. The link function for Poisson regression is the log, so if you did something like

x <- runif(100)
eta <- 5*x
lam <- exp(eta)
y <- rpois(length(lam), lam)
model <- glm(y~x, family = 'poisson')

then you would recover the proper estimates for the coefficient of x and the intercept.

You could, however, recover the correct coefficients from your code if you were to use the identity link function. For instance

x <- runif(100)
y <- rpois(100, 5*x)
m <- glm(y ~ x, family = poisson(link = 'identity'), start = c(2,2))

Note that R will warn you in this case that the optimization algorithm is having a tough time because the mean is not constrained to be positive, leading to problems in the evaluation of the log likelihood. The log link ensures that the linear predictor (which is unconstrained) does not result in such problems.
38,386
How can I visualize an ordinal variable predicting a continuous outcome?
The plot you have shown is pretty good. But I think you can improve the data-ink ratio (a concept introduced by Edward Tufte) even more by showing all the datapoints. You can do this by adding jitter to the x-axis. Another improvement is to emphasise that the ordinal variable is categorical and not continuous. You can do this by using a different colour for the different levels. As an example I have plotted the titanic dataset in R, using the passenger class as an ordinal variable and the passenger age as the continuous variable.

library(tidyverse)
library(ggplot2)
library(titanic)

df <- titanic_train %>%
  mutate(Class=factor(Pclass))

ggplot(df, aes(Class, Age, color=Class)) +
  geom_jitter(height = 0) +
  ggtitle("Titanic passenger age vs. class")
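For readers not using ggplot2, the jitter itself is nothing more than small uniform noise added to the categorical x-positions, leaving y untouched. A minimal numpy sketch (the data values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
pclass = np.array([1, 1, 2, 3, 3, 3])            # ordinal categories (made up)
age = np.array([22.0, 38.0, 26.0, 35.0, 54.0, 2.0])

width = 0.2                                      # jitter half-width
x_jit = pclass + rng.uniform(-width, width, size=pclass.size)
# plot (x_jit, age): overlapping points separate, categories stay readable
```

Keeping the half-width below 0.5 guarantees that every jittered point still rounds back to its original category, so the levels never overlap.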
38,387
How can I visualize an ordinal variable predicting a continuous outcome?
The problem with this is that there's no way of knowing how many dots are bunched up together. Two solutions I've seen:

Box plot

This would give you a tighter box if data points are bunched up together.

Bubble chart

Not sure if this is the official name, but basically you put the vertical axis into bins. The size of the bubble is determined by how many observations fall into that bin.
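A minimal sketch of the binning step behind such a bubble chart (illustrative names and simulated data, numpy only): count the observations per (category, y-bin) cell; the bubble sizes would then be drawn proportional to these counts.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.integers(1, 4, size=300)               # ordinal predictor: levels 1..3
y = rng.exponential(scale=x.astype(float))     # skewed outcome

edges = np.linspace(0.0, y.max(), 11)          # 10 bins on the vertical axis
bubbles = {}                                   # category -> counts per bin
for cat in np.unique(x):
    counts, _ = np.histogram(y[x == cat], bins=edges)
    bubbles[cat] = counts

total = sum(c.sum() for c in bubbles.values())  # every point lands in one bin
```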
38,388
How can I visualize an ordinal variable predicting a continuous outcome?
In addition to the box plot suggested by Art, I suggest a violin plot: Explicitly showing the median and interquartile range, as done in the above image, is optional. Quoting from Wikipedia: Violin plots are similar to box plots, except that they also show the probability density of the data at different values, usually smoothed by a kernel density estimator. A violin plot is more informative than a plain box plot. While a box plot only shows summary statistics such as mean/median and interquartile ranges, the violin plot shows the full distribution of the data. The difference is particularly useful when the data distribution is multimodal (more than one peak). A similar alternative is stacked histograms or density estimators:
38,389
How can I visualize an ordinal variable predicting a continuous outcome?
To your scatterplot, I would add a large point indicating the mean Y-value at every unique X-value, and also do one or more of the following: Square-root (or cube-root) transform your Y-axis. Both these transformations can deal with zeroes, unlike log transformations. Cube roots can also deal with negative numbers. Make the points a bit transparent. Add a little jitter to the X-axis values if the previous steps are insufficient. As Glen_b notes, there is insufficient information right now to decide whether adding a linear regression line is meaningful.
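One practical wrinkle with the cube-root suggestion: a naive y ** (1/3) does not return a real root for negative values in Python, so the sign has to be handled explicitly. A small sketch (the helper name is illustrative):

```python
import numpy as np

def cuberoot(y):
    """Signed cube root: handles negatives, unlike a plain y ** (1/3)."""
    y = np.asarray(y, dtype=float)
    return np.sign(y) * np.abs(y) ** (1.0 / 3.0)

vals = cuberoot([-8.0, 0.0, 27.0])
```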
38,390
How can I visualize an ordinal variable predicting a continuous outcome?
You state that one variable is ordinal, then you decide to treat it as interval. Is that reasonable? There is no way for us to know, as you have not said what the ordinal variable actually is. If you do decide to keep it as ordinal, then what to do depends on your sample size. If N is very large then I like the box plot solution. If N is not so large, then I like jitter. There are other additions you can make to the scatterplot as well - I wrote a presentation about this using SAS, but I am sure it could be duplicated in R. (If that link does not work, Googling flom, scatterplots, enhancements should find it). But what if treating the variable as interval is not reasonable? You could come to this conclusion either substantively or by trying different codings and seeing how results change. In that case, I suggest trying optimal scaling. There is an R package optiscale that may help (I have not used this package).
38,391
How can I visualize an ordinal variable predicting a continuous outcome?
The basic idea of regression is that the probability distribution of $y$ depends on $x$: there is some family of distributions $P_x(y)$. It's generally assumed that these distributions are all normal with a constant standard deviation (homoscedasticity), leaving only the mean as depending on $x$: $p(Y=y) = N(\mu_x,\sigma)$. With continuous data, you typically get only one $y$ for finitely many $x$, and no $y$ for the rest, making estimating $\mu_x$ by just looking at your sample $y$ for that $x$ unworkable. So a further assumption is often made that $\mu_x$ is a simple linear function of $x$, so that $p(Y=y) = N(mx+b,\sigma)$ for some numbers $m, b, \sigma$. The linear regression formula then gives you an estimate of $m$ (slope) and $b$ (intercept) for your data.

Here, you seem to have highly skewed data, and there seems to be a general trend of decreasing spread, so if you were to use linear regression, the normality and homoscedasticity assumptions would be problematic. But you appear to have a large dataset for each value of $x$. So to estimate $\mu_x$ for a particular $x$, there is no need to use the linear regression formula; you can simply take $\bar y$ for each $x$. Which is more informative for predicting a $y$ for $x=4$: looking at the $y$ values for $x=4$, or looking at the $y$ values for $x=3$ and $x=5$, and trying to interpolate between them?

You may want to show summary statistics other than just $\bar y_x$. A box plot can show median and quartiles, for instance. You might also want to represent the standard deviation somehow.

You could also show the entire distributions. You could do that with x-dither, as Pieter suggested, or with another type of chart, such as density plots. You could put them side-by-side as in Pieter's answer, but with only six categories, it might be possible to combine them into one chart with the categories separated by colors.
Here's a discussion of histograms and density plots: https://towardsdatascience.com/histograms-and-density-plots-in-python-f6bda88f5ac0
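The "take $\bar y$ for each $x$" suggestion is a one-liner in practice. A small numpy sketch with simulated data (illustrative, not the OP's data):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.repeat(np.arange(1, 7), 50)           # six ordinal levels, 50 obs each
y = rng.exponential(scale=1.0 / x)           # skewed, spread shrinking with x

levels = np.unique(x)
ybar = np.array([y[x == k].mean() for k in levels])
# ybar[j] estimates mu_x directly from the data at level j,
# with no linearity assumption tying the levels together
```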
38,392
What does 1 with an inequality in the subscript mean? [duplicate]
$\mathbb{1}_{x\ge a}$ is an indicator function, which is equal to $1$ when $x\ge a$ and zero otherwise. Multiplying by it is a fancy math way of saying that everything else is equal to zero. In this case, it says that only cases greater than or equal to $a$ can be observed.
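In code, the indicator is just a boolean cast. A tiny Python sketch:

```python
import numpy as np

a = 2.0
x = np.array([0.5, 1.9, 2.0, 3.7])
indicator = (x >= a).astype(float)   # 1 where x >= a, 0 elsewhere
# multiplying a density by this zeroes out everything below a
```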
38,393
What does 1 with an inequality in the subscript mean? [duplicate]
The $\mathbb{1}$ in the formula is the indicator function. In this case it equals $1$ when $x \ge a$ and zero otherwise.
38,394
Is there a continuous version of the Uniform distribution?
This is a familiar problem in theoretical mathematics, where it helps the analysis when you don't have to worry about lack of differentiability. The standard solution, sometimes called "mollification," is to convolve the density with a scaled, zero-centered, infinitely differentiable density (often of compact support). By setting the scale close to zero you can make the approximation as close as you like. The figure is a sequence of graphs of mollified Uniform$(0,1)$ density functions using a Gaussian mollifier with standard deviations $1/4$ (green), $1/10$ (gold), $1/25$ (red), and $0$ (blue: the original Uniform PDF). It is easy to show (use integration by parts in the formula for a convolution) that when the mollifier is infinitely differentiable everywhere (aka "smooth"), so is the mollified function. The existence of such families of mollifiers means that for most purposes you don't really have to consider non-differentiable densities (or even singular distributions, which by definition do not have a density everywhere) when thinking about properties of distributions. Singular distributions might indeed be "edge cases" but they can be approached as limits of distributions with smooth densities. This method is particularly congenial in statistical applications because many properties of the mollified distribution are easily computed. As an example, since the variance of the mollifier is proportional to the square of its scale, if we pick a standard mollifier with unit variance (as in this example), the variance of the mollified Uniform distribution equals the variance of the Uniform distribution (here, $1/12$) plus the square of the scale. Thus, you know immediately that mollification with a Gaussian of standard deviation $1/25$ (as in the red curve) will add only $1/(25)^2$ to the variance of the Uniform distribution. You can select the standard deviation to be so small that the change in variance it induces is negligible for your purposes.
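The variance claim is easy to check numerically. Convolving the Uniform$(0,1)$ density with a $N(0,s)$ mollifier has the closed form $f_s(x) = \Phi(x/s) - \Phi((x-1)/s)$, and numerical integration recovers total mass $1$, mean $1/2$, and variance $1/12 + s^2$. This is an illustrative sketch, not code from the answer:

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

s = 0.04                                   # mollifier sd (the red curve's 1/25)
xs = np.linspace(-1.0, 2.0, 20001)         # wide grid around [0, 1]
fs = np.array([Phi(v / s) - Phi((v - 1.0) / s) for v in xs])

dx = xs[1] - xs[0]
mass = np.sum(fs) * dx                     # total probability, ~1
mean = np.sum(xs * fs) * dx                # ~1/2
var = np.sum((xs - mean) ** 2 * fs) * dx   # ~1/12 + s^2
```

With $s = 1/25$ the added variance is $s^2 = 0.0016$, i.e. the $1/(25)^2$ mentioned above.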
Is there a continuous version of the Uniform distribution?
The standard continuous uniform distribution $\text{U}(a,b)$ has a continuous CDF that is differentiable (in the regular sense) at all points except the edges of its support, $x = a, b$. Since probability theory defines density functions using Radon-Nikodym derivatives, we can still ascribe values to the density function even at these end-points. In view of the use of Radon-Nikodym derivatives in probability, I cannot think of any context where this lack of (regular) differentiability of the CDF would really matter. Nevertheless, if you really want to approximate the uniform distribution with a distribution having a fully differentiable distribution function (in the regular sense), you could approximate the density with a mixture distribution (e.g., a mixture of evenly spaced normal distributions).
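The mixture suggestion can be sketched as follows. The component count, spacing, and bandwidth below are illustrative choices of mine, not prescribed by the answer: $k$ equally weighted normals with means at the centers of $k$ equal bins of $(0,1)$, each with a small standard deviation, give a CDF close to the uniform's $F(x)=x$ on the interior.

```python
# Sketch (my own choices of k and bandwidth): approximate Uniform(0,1) with an
# equally weighted mixture of k evenly spaced, narrow normal components.
import numpy as np
from math import erf, sqrt

def mixture_cdf(x, k=50, sd=None):
    """CDF of a mixture of k normals with means at the centers of k
    equal bins of (0,1); a smooth approximation to the Uniform(0,1) CDF."""
    means = (np.arange(k) + 0.5) / k          # bin centers: 0.01, 0.03, ...
    s = sd if sd is not None else 1.0 / (2 * k)  # narrow components
    z = (x - means) / s
    # average the k component CDFs (equal mixture weights)
    return float(np.mean([0.5 * (1.0 + erf(t / sqrt(2.0))) for t in z]))

# Compare with the exact Uniform(0,1) CDF, F(x) = x, at interior points
for x in (0.25, 0.5, 0.75):
    print(x, round(mixture_cdf(x), 4))        # close to x itself
```

Shrinking the component standard deviation (or increasing $k$) tightens the approximation on the interior, at the cost of slightly fuzzier behavior near the endpoints $0$ and $1$.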
Is the frequentist framework more appropriate than the Bayesian one, according to Popper's theory?
Karl Popper argued for a general mindset that a scientist should employ. Frequentist null hypothesis testing was designed in a way that is consistent with this kind of thinking about the scientific method. However, this does not mean that it is the only way you could conduct hypothesis tests! In the Bayesian framework you could use Bayes factors to compare the "null" model with an alternative model and so falsify your hypothesis (this is how most Bayesian equivalents of frequentist tests, like BEST, work). So you can perform hypothesis tests in the Bayesian framework, and Karl Popper has nothing to do with the Bayesian vs. frequentist debate.
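To make the Bayes-factor idea concrete, here is a standard textbook coin-flipping illustration (my own example, not taken from BEST or the answer above): test $H_0\!: p = 0.5$ against $H_1\!: p \sim \text{Uniform}(0,1)$ after observing $k$ heads in $n$ flips. Under the uniform prior the marginal likelihood under $H_1$ is $1/(n+1)$ for any $k$.

```python
# Textbook illustration (not from the answer): Bayes factor for a fair coin.
# H0: p = 0.5   vs   H1: p ~ Uniform(0,1), given k heads in n flips.
from math import comb

def bayes_factor_01(k, n):
    m0 = comb(n, k) * 0.5**n   # P(data | H0): binomial likelihood at p = 0.5
    m1 = 1.0 / (n + 1)         # P(data | H1): integral of the binomial
                               # likelihood against a uniform prior on p
    return m0 / m1             # BF > 1 favors H0, BF < 1 favors H1

print(round(bayes_factor_01(10, 20), 3))   # ~3.7: data mildly favor H0
```

Note the contrast with a frequentist test: a balanced outcome like 10 heads in 20 flips can actively *support* the null here (BF $\approx 3.7$), whereas a p-value can only fail to reject it.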
Is the frequentist framework more appropriate than the Bayesian one, according to Popper's theory?
It depends on what you mean by saying that Popper had nothing to do with the debate. In some sense that is half correct; in another sense it is wronger than wrong: ultimately he rejected priors and inductive logic, and he was intimately connected with these issues. The foundations of probability is generally considered to be among his best work. He:

- developed and helped make rigorous von Mises' frequentist theory;
- developed a confirmation logic, using Popper functions;
- argued against inductive logic and standard Bayesian inference as nonsense (see his paper on this);
- developed his own probability calculus, similar to A. Rényi's;
- was ultimately interested in the debate because he rejected both conceptions, arguing for a return to Kolmogorov's interpretation of probability, i.e. the neo-classical physical interpretation called propensity theory;
- connected these issues to quantum mechanics.

He is generally considered to be among the greatest mathematical philosophers of probability (if not the greatest, in some accounts) and probabilistic logicians, and more than half of his best work concerns these topics (read David Miller, who was a close confidant, in his contribution to the newly published Cambridge Companion to Popper).
probability of gamma greater than exponential
If $X$ has density function $\lambda \frac{(\lambda x)^2}{\Gamma(3)}\exp(-\lambda x)\mathbf 1_{\{x\colon x > 0\}}$ and independent $Y$ has density function $\exp(-y)\mathbf 1_{\{y\colon y > 0\}}$, then \begin{align} P\{X < Y\} &= \int_{0}^\infty \lambda \frac{(\lambda x)^2}{\Gamma(3)}\exp(-\lambda x) \int_{x}^\infty \exp(-y)\, \mathrm dy \, \mathrm dx\\ &= \int_{0}^\infty \lambda \frac{(\lambda x)^2}{\Gamma(3)}\exp(-(\lambda+1) x)\, \mathrm dx\\ &= \left(\frac{\lambda}{\lambda+1}\right)^3\int_{0}^\infty (\lambda+1) \frac{((\lambda+1) x)^2}{\Gamma(3)}\exp(-(\lambda+1) x)\, \mathrm dx\\ &= \left(\frac{\lambda}{\lambda+1}\right)^3. \end{align}

Consider also a Poisson process with arrival rate $\lambda+1$. We can decompose this process into two independent Poisson subprocesses $\mathcal X$ and $\mathcal Y$ of rates $\lambda$ and $1$ respectively by labeling each arrival as belonging either to the $\mathcal X$ process (with probability $\frac{\lambda}{\lambda+1}$) or to the $\mathcal Y$ process (with probability $\frac{1}{\lambda+1}$), with each label being chosen independently of all other labels. Then, $X$ can be taken to be the time of the third arrival (after $t = 0$) in the $\mathcal X$ subprocess while $Y$ is the time of the first arrival (after $t = 0$) in the $\mathcal Y$ subprocess. With this interpretation, $X < Y$ is just the event that the first three arrivals after $t=0$ were all labeled as belonging to the $\mathcal X$ subprocess, and this event has probability $\displaystyle \left(\frac{\lambda}{\lambda+1}\right)^3$.

Look, Ma! No integrals were computed in arriving at the answer!
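A quick Monte Carlo check of the closed form $\left(\frac{\lambda}{\lambda+1}\right)^3$; the value $\lambda = 2$ and the sample size here are arbitrary choices of mine for illustration.

```python
# Monte Carlo sanity check of P{X < Y} = (lam/(lam+1))**3 for
# X ~ Gamma(shape 3, rate lam) and Y ~ Exp(1); lam = 2 is arbitrary.
import random

random.seed(0)
lam, n = 2.0, 200_000

hits = 0
for _ in range(n):
    # Gamma(shape 3, rate lam) as a sum of three independent Exp(lam)'s
    x = sum(random.expovariate(lam) for _ in range(3))
    y = random.expovariate(1.0)
    hits += (x < y)

print(round(hits / n, 3))     # close to (lam/(lam+1))**3 = (2/3)**3 ~ 0.296
```

Representing the Gamma as a sum of three exponentials mirrors the Poisson-process argument in the answer: $X$ is the third arrival time of a rate-$\lambda$ process.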
probability of gamma greater than exponential
There is a relationship between gamma and beta random variables that leads to a general expression for $P[X>Y]$ for any two independent gamma random variables. If $X \sim \rm{Gamma}(\alpha_1,\beta_1)$ and $Y \sim \rm{Gamma}(\alpha_2,\beta_2),$ where $\alpha$ is the shape parameter, $\beta$ is the scale parameter, and the mean is $\alpha \beta,$ then $$P[X>Y] = H_{\alpha_2,\alpha_1} \left( \frac{\beta_1}{\beta_1+\beta_2} \right),$$ where $H$ is the cumulative distribution function of a beta random variable. In your case I calculate $P[X>Y]=0.984375$. If you have used a different parameterization of the gamma distribution, this will need to be adjusted.

Here is the development. We can construct $\beta_1Y \sim \rm{Gamma}(\alpha_2,\beta_1\beta_2)$ and $\beta_2X \sim \rm{Gamma} (\alpha_1,\beta_1 \beta_2).$ Now consider $$W = \frac{\beta_1Y}{\beta_1Y+\beta_2X}$$ It is known (see https://en.wikipedia.org/wiki/Gamma_distribution, Related Distributions and Properties Section) that $W$ has a beta distribution with first shape parameter of $\alpha_2$ and second shape parameter of $\alpha_1.$ So then $$P \left[ W = \frac{\beta_1Y}{\beta_1Y+\beta_2X}<\frac{\beta_1}{\beta_1+\beta_2} \right]=H_{\alpha_2,\alpha_1} \left( \frac{\beta_1}{\beta_1+\beta_2} \right),$$ where $H$ is the cumulative distribution function of a beta random variable.

Taking reciprocals and simplifying, $$P \left[ W = \frac{\beta_1Y}{\beta_1Y+\beta_2X}<\frac{\beta_1}{\beta_1+\beta_2} \right]=P \left[ \frac{\beta_1Y+\beta_2X}{\beta_1Y} > \frac{\beta_1+\beta_2}{\beta_1} \right]$$ $$ = P \left[ 1 + \frac{\beta_2X}{\beta_1Y}>1+\frac{\beta_2}{\beta_1} \right]=P \left[ \frac{X}{Y} >1 \right] =P \left[ X>Y \right] =H_{\alpha_2,\alpha_1} \left( \frac{\beta_1}{\beta_1+\beta_2} \right)$$
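To reproduce the $0.984375$ in software, the formula maps directly onto a single beta CDF call. This sketch assumes SciPy's parameterization, where `beta.cdf(x, a, b)` takes the first shape parameter `a` and second shape parameter `b`:

```python
# Evaluate P[X > Y] = H_{alpha2, alpha1}(beta1 / (beta1 + beta2)) for the
# question's case: X ~ Gamma(shape 3, scale 3), Y ~ Exp(1) = Gamma(1, 1).
from scipy.stats import beta

a1, b1 = 3, 3   # shape and scale of X
a2, b2 = 1, 1   # shape and scale of Y (an Exp(1) is Gamma(1, 1))

# beta CDF with first shape alpha2 and second shape alpha1
p = beta.cdf(b1 / (b1 + b2), a2, a1)
print(p)        # 0.984375
```

Since $H_{1,3}(x) = 1-(1-x)^3$, this evaluates to $1-(1/4)^3 = 0.984375$ exactly, matching the value quoted in the answer.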
probability of gamma greater than exponential
The rote way to compute $P[Y>X]$ is by the double integral $$\int_0^\infty f_X(x) \left[ \int_x^\infty f_Y(y)\, dy \right] dx,$$ where the inner integral may be recognized as the survival function of $Y$, an exponential with parameter $\lambda=1$, evaluated at $x$, equal to $e^{-x}$. Then the remaining integral $$\int_0^\infty e^{-x} f_X(x)\, dx$$ may be recognized as the moment generating function of $X$ evaluated at $-1$. The MGF of a $\rm{Gamma}(k,\theta)$ is $(1-\theta t)^{-k}$, which for $\theta = 3$, $k=3$, $t=-1$ is $$(1+3)^{-3} = 0.015625$$ The question was for $P[X>Y] = 1-P[Y>X]$, so we want $$1-(1+3)^{-3} = 1-0.015625 = 0.984375,$$ which agrees with soakley's answer.
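The MGF shortcut can be double-checked by numerically integrating $\int_0^\infty e^{-x} f_X(x)\,dx$ directly. This assumes SciPy, whose `gamma.pdf(x, k, scale=theta)` matches the shape/scale convention used above:

```python
# Numerically verify P[Y > X] = M_X(-1) = (1 + theta)**(-k) for
# X ~ Gamma(shape k = 3, scale theta = 3) and Y ~ Exp(1).
import math
from scipy.integrate import quad
from scipy.stats import gamma

k, theta = 3, 3

# integrand: e^{-x} (survival function of Y at x) times the density of X
integrand = lambda x: math.exp(-x) * gamma.pdf(x, k, scale=theta)
p_y_gt_x, _ = quad(integrand, 0, math.inf)

print(round(p_y_gt_x, 6))      # ~0.015625, i.e. (1 + 3)**(-3)
print(round(1 - p_y_gt_x, 6))  # ~0.984375
```

The quadrature agrees with the closed-form MGF value to within numerical tolerance, tying this answer back to the other two derivations of $0.984375$.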