37,601
Post hoc test of adjusted means (after ANCOVA)
In STATISTICA (StatSoft), when performing ANCOVA, the post-hoc tests relate to the adjusted means, even though the non-adjusted means are displayed along the top axis. Do the test: compare the means and run the post-hoc comparisons with and without the covariates; the P values are different.
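A minimal R sketch of the comparison described above (R rather than STATISTICA, with made-up data, and the emmeans package assumed to be installed): the post-hoc contrasts are computed on covariate-adjusted means from an ANCOVA fit, and then again from a fit without the covariate, so the two sets of P values can be compared.

library(emmeans)

set.seed(1)
d <- data.frame(group = factor(rep(LETTERS[1:3], each = 20)),
                covariate = rnorm(60))
d$y <- 0.5 * d$covariate + as.numeric(d$group) + rnorm(60)

fit_ancova <- lm(y ~ group + covariate, data = d)    # ANCOVA
pairs(emmeans(fit_ancova, ~ group))                  # post-hoc tests on adjusted means
pairs(emmeans(lm(y ~ group, data = d), ~ group))     # same contrasts without the covariate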
37,602
Prediction with Bayesian networks in R
I think everything is in the function bn.fit. Following up on your code, I got

> bn.fit(bn.hc, dat)

  Bayesian network parameters

  Parameters of node won (multinomial distribution)

Conditional probability table:
        0         1
0.6666667 0.3333333

  Parameters of node sold (multinomial distribution)

Conditional probability table:
    won
sold   0   1
   0 1.0 0.5
   1 0.0 0.5

  Parameters of node insured (multinomial distribution)

Conditional probability table:
       won
insured   0   1
      0 0.5 1.0
      1 0.5 0.0

  Parameters of node credit (multinomial distribution)

Conditional probability table:
      insured
credit    0    1
  FAIR 0.75 0.00
  GOOD 0.00 1.00
  POOR 0.25 0.00
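To go from the fitted parameters to actual predictions, here is a hedged sketch using bnlearn's bundled learning.test data so it runs on its own; swap in your own data frame of factors.

library(bnlearn)

data(learning.test)                      # small all-factor example data shipped with bnlearn
net    <- hc(learning.test)              # structure learning, as with bn.hc above
fitted <- bn.fit(net, learning.test)     # parameter learning: the conditional probability tables

# approximate conditional probability query, e.g. P(C = "c" | A = "a"):
cpquery(fitted, event = (C == "c"), evidence = (A == "a"))

# or predict one node for a data frame of cases:
head(predict(fitted, node = "C", data = learning.test))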
37,603
Dirichlet processes for supervised learning?
This question isn't getting much attention, so I'm going to answer it myself to report what I've found and (hopefully) stimulate discussion. I've run into an article I'm looking forward to reading that uses DPMs to do classification (Shahbaba and Neal, 2007), which they tested on protein fold data. Essentially it appears that they did something similar to what I suggested in the comments above. Their approach compared favorably against both neural networks and support vector machines. This comes as a bit of a relief to me, since I've sunk a lot of time into these models with an eye towards supervised machine learning problems, so it appears I (perhaps) haven't been wasting my time.
37,604
Dirichlet processes for supervised learning?
Take a look at the DPpackage for R. A Dirichlet process can be used, at the very least, as a prior for a random effect and to construct a nonparametric error distribution for regression.
37,605
How important is using the exact method for ties in a Cox model, and how long should this take?
It is normal that the exact method takes such a long time: it has to do quite a lot. My own experience is that the approximate methods you refer to can be quite poor (i.e., leading to substantively different conclusions from the exact methods), but my own experiences are a bit atypical, as they involve the application of these methods to logit regression rather than the Cox model. If you search for PHREG and Paul D. Allison, who has written a bit on this topic, you will find more information.
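A small sketch with R's survival package showing how the tie-handling methods can be compared directly on the same data (the lung data ships with the package; the model is only illustrative):

library(survival)

fit_breslow <- coxph(Surv(time, status) ~ age + sex, data = lung, ties = "breslow")
fit_efron   <- coxph(Surv(time, status) ~ age + sex, data = lung, ties = "efron")
fit_exact   <- coxph(Surv(time, status) ~ age + sex, data = lung, ties = "exact")

rbind(breslow = coef(fit_breslow), efron = coef(fit_efron), exact = coef(fit_exact))
system.time(coxph(Surv(time, status) ~ age + sex, data = lung, ties = "exact"))  # timing check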
37,606
How to interpret coefficients in a vector autoregressive model?
It's often pretty hard to interpret the coefficients of a VAR, especially if it includes many variables and lags. Since one lag of a variable may say one thing and another lag the opposite, there are often no clear dynamics between the variables you wish to investigate. This is why a VAR is usually accompanied by tools like the impulse response function and the forecast error variance decomposition.
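For example, with the vars package in R (the Canada data ships with the package), the impulse responses and variance decompositions are obtained directly from the fitted VAR; a hedged sketch:

library(vars)

data(Canada)
fit <- VAR(Canada, p = 2, type = "const")   # two lags plus a constant
summary(fit)                                # the raw coefficients, hard to read on their own
plot(irf(fit, n.ahead = 10))                # impulse response functions
fevd(fit, n.ahead = 10)                     # forecast error variance decomposition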
37,607
Estimating the number of times each of four pairs of dice was thrown
This reply addresses the first and third bullet points: Is the problem solvable? Can we frame it in a conventional way to allow for searching or application of conventional methods?

To address the latter question it helps to start generalizing the situation while retaining any special features that might be useful for a solution. Let's begin with the raw data. The experiment is a sequence of trials. In each trial (a) a pair of dice is rolled and (b) we record the outcome of each die (one or not-one) and its color. This can be represented by sixteen values: four possibilities for each of the four pairs of dice. We know the probabilities associated with each set of four outcomes: $(1/6)^2$ for two ones, $(5/6)^2$ for two non-ones, and $(1/6)(5/6)$ for each of the remaining two outcomes.

To summarize the data, suppose the red and blue dice were thrown $n_{rb}$ times, the red and yellow dice $n_{ry}$ times, etc. This means we have observed the outcomes of $n_{rb}$ independent throws of the red and blue dice, etc. The sum of those red-blue throws is therefore the outcome of a multinomial distribution with count parameter $n_{rb}$ and probabilities $(1/36, 5/36, 5/36, 25/36)$. Similarly the sum of the red-yellow throws is an independent outcome of a multinomial distribution with count parameter $n_{ry}$ and the same probabilities; the green-blue throws have count parameter $n_{gb}$, and the green-yellow throws have count parameter $n_{gy}$. These four distributions, in this order, collectively describe a $16$-variate distribution.

A visualization can help, so let's consider an example. Here are some raw data with descriptive headers:

                        Red-blue    Red-yellow  Green-blue  Green-yellow
                        00 01 10 11 00 01 10 11 00 01 10 11 00 01 10 11
Red Green Blue Yellow   k0 k1 k2 k3 k4 k5 k6 k7 k8 k9 kA kB kC kD kE kF
 *         *               1
 *         *            1
     *     *                                             1
     *           *                                                 1

The variables are named k0 through kF: evidently they are a dummy coding for the $16$ possible outcomes. The outcomes are schematically shown in the second line: "00" means both dice were non-one, "01" means only the second die (as named on the first line) showed a one, etc. Redundantly, stars indicate which two dice were thrown: I have shown them only to illustrate what's going on. Thus, this dataset describes four trials: in the first, red was non-1 and blue was 1; in the second, red and blue were both non-1; in the third, green and blue were both 1; and in the fourth, green was 1 and yellow was non-1. A sufficient statistic for this experiment would be the sum of all the data rows (taking blanks to be zeros): this counts each of the 16 kinds of outcomes.

We do not observe these raw data, though: they are condensed for us. Specifically,

1. The count of cases where both R and B showed 1 is the sum in the k3 column.
2. The count of cases where both R and Y showed 1 is the sum in the k7 column.
3. The count of cases where both G and B showed 1 is the sum in the kB column.
4. The count of cases where both G and Y showed 1 is the sum in the kF column.
5. The count of cases where only R showed 1 is the sum of the k2 and k6 columns.
6. The count of cases where only G showed 1 is the sum of the kA and kE columns.
7. The count of cases where only B showed 1 is the sum of the k1 and k9 columns.
8. The count of cases where only Y showed 1 is the sum of the k5 and kD columns.
9. The number of cases where neither die showed 1 is the sum of the k0, k4, k8, and kC columns (but this is not revealed to us).
Writing $\mathbb{k}$ for the column matrix of k's (in the order shown), this information is conveniently written as a linear transformation $\mathbb{A k}$ where the matrix $\mathbb{A}$ is $$\left( \begin{array}{cccccccccccccccc} 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{array} \right)$$ It is now immediate--your computer will happily tell you this--that the last four rows are redundant (the sum of rows 5 and 6 equals the sum of rows 7 and 8). We might just as well drop the last row from $\mathbb{A}$: it will then be of full rank.

Here, then, is the abstract statement of the problem: Given one realization $\mathbb{k}$ from a family of multivariate distributions parameterized by natural numbers $\mathbb{n} = (n_{rb}, n_{ry}, n_{gb}, n_{gy})$ and given an observation of $\mathbb{A k}$, estimate the parameters (and obtain standard errors or confidence limits on those estimates). We probably should add that the entries of $\mathbb{k}$ are themselves natural numbers: this is a discrete distribution.

In general, the structure of $\mathbb{A}$ induces dependencies among the entries of $\mathbb{A k}$. (Note that the entries of $\mathbb{k}$ themselves already have some slight dependencies arising from the underlying multinomial distributions.) This, along with the discrete distribution of $\mathbb{A k}$ and the discreteness of the parameter space, is going to create difficulties in developing estimators.

We can at least get started by taking expectations, because it's easy to write the expectation of $\mathbb{k}$ in terms of the $n_{*}$: $E[k_0] = 25n_{rb}/36$, $E[k_1] = 5n_{rb}/36$, ..., and $E[k_F] = n_{gy}/36$. The linearity of expectation tells us that $E[\mathbb{Ak}] = \mathbb{A}E[\mathbb{k}]$. Working this out gives us a lot of possible method-of-moments estimators (not just one). (One of them was posted as a reply by the OP.)

So yes, the problem is solvable. (The generalized problem could have a unique method-of-moments estimator or it might have none at all when $\mathbb{A}$ does not give enough information to identify all the parameters.) The important questions left to solve are: How well can it be solved? Can we find good (e.g., admissible) estimators? Can we obtain good confidence intervals or other expressions of uncertainty in the estimates?

We can continue and compute the variance of these observations using standard rules, using second moments of the multinomial distribution. With this in hand one might be tempted to combine the seven observations (the counts in 1-7) using generalized least squares. Or, one might proceed directly to try a maximum likelihood approach (but this would be quite tricky to compute). When the components of $\mathbb{k}$ are expected to be large, normal approximations to the multinomial distribution will work nicely, for then $\mathbb{A k}$ will also be (approximately) multivariate normal and maximum likelihood estimates of the $n_{*}$ might be well behaved.

That's the (limited) extent of my analysis.
I wanted to share it at this point to give something for future answers to build on and to show the complexities and pitfalls involved in this apparently simple situation.
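To make the moment equations concrete, here is a hedged R simulation sketch under the setup above: hypothetical throw counts generate the 16-vector $\mathbb{k}$, the observed summaries follow the eight counts listed in items 1-8, and one (least-squares) method-of-moments estimate is obtained from $E[\mathbb{Ak}] = \mathbb{A B}\,\mathbb{n}$. It ignores the discreteness and nonnegativity of the parameters, as discussed.

set.seed(1)
n_true <- c(rb = 30, ry = 50, gb = 20, gy = 40)              # hypothetical true throw counts
p <- c(25, 5, 5, 1) / 36                                     # probs of outcomes 00, 01, 10, 11

# k: 16 outcome counts, groups ordered rb, ry, gb, gy; outcomes ordered 00, 01, 10, 11
k <- as.vector(sapply(n_true, function(n) rmultinom(1, n, p)))

B <- matrix(0, 16, 4)                                        # maps n to E[k]
for (j in 1:4) B[(4 * j - 3):(4 * j), j] <- p

A <- matrix(0, 8, 16)                                        # maps k to the observed counts 1-8
A[1, 4] <- A[2, 8] <- A[3, 12] <- A[4, 16] <- 1              # double ones per pair
A[5, c(3, 7)]   <- 1                                         # only red shows 1    (k2, k6)
A[6, c(11, 15)] <- 1                                         # only green shows 1  (kA, kE)
A[7, c(2, 10)]  <- 1                                         # only blue shows 1   (k1, k9)
A[8, c(6, 14)]  <- 1                                         # only yellow shows 1 (k5, kD)

y <- A %*% k                                                 # what we actually observe
n_hat <- qr.solve(A %*% B, y)                                # least-squares moment estimate
round(setNames(as.vector(n_hat), names(n_true)), 1)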
37,608
Estimating the number of times each of four pairs of dice was thrown
Let $p$ be the probability of throwing a 1 on any die, and let $n_i$ be the number of occurrences of the outcome designated by $i$. The probabilities of outcomes 1, 2, 3 and 4 are all $p^2$, as the two dice in a pair are rolled independently. The probabilities of outcomes 5, 6, 7 and 8 are all $2p(1-p)$, since, for example, in case 5 either red rolls a 1 and blue rolls a number other than 1, or red rolls a 1 and the yellow die rolls a number other than 1. Hence $E(n_1)=E(n_2)=E(n_3)=E(n_4)=Np^2$ and $E(n_5)=E(n_6)=E(n_7)=E(n_8)=2Np(1-p)$. Estimate $N$ by equating the $n_i$ with their expectations. Since $n_9=N-(n_1+n_2+n_3+n_4+n_5+n_6+n_7+n_8)$, $E(n_1+n_2+n_3+n_4)=4Np^2$ and $E(n_5+n_6+n_7+n_8)=4\cdot 2Np(1-p)$, this gives $$E(n_9)=N-4N(p^2+2p(1-p))=N(1-4(p^2+2p-2p^2))=N(1-8p+4p^2).$$ This shows that if $N$ were known, I could estimate $n_9$ by the nearest integer to $N(1-8p+4p^2)$. On the other hand, if I knew $n_9$, I could estimate $N$ by the nearest integer to $n_9/(1-8p+4p^2)$. But if I know neither $N$ nor $n_9$, then for any solution I could multiply $N$ and $n_9$ by any integer (say 2 or 5) and get another answer. So without additional information I cannot find a sensible and unique estimate for $n_9$.
37,609
Estimating the number of times each of four pairs of dice was thrown
Let $p$ be the probability of any die rolling a one. Then the probabilities of outcomes 1, 2, 3 and 4 are all $p^2$. Let $n_1$, $n_2$, $n_3$ and $n_4$ be the numbers of observations of outcomes 1 through 4. Let $n_{rb}$, $n_{ry}$, $n_{gb}$ and $n_{gy}$ be the values we want to estimate, which are the actual numbers of times each of the dice pairs was thrown. Then the best estimates are: $$n_{rb} = \frac{n_1}{p^2}$$ $$n_{ry} = \frac{n_2}{p^2}$$ $$n_{gb} = \frac{n_3}{p^2}$$ $$n_{gy} = \frac{n_4}{p^2}$$
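A quick numeric illustration of these formulas in R, with $p = 1/6$ and hypothetical observed double-one counts:

p <- 1 / 6
n_obs <- c(n1 = 2, n2 = 5, n3 = 1, n4 = 3)   # hypothetical counts of outcomes 1-4
round(n_obs / p^2)                           # estimated throws per pair (36 times the counts)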
37,610
How to test that a set of distributions are located in a given order?
The Kruskal-Wallis procedure is the multi-group equivalent of the Wilcoxon rank-sum test; it is akin to one-way ANOVA without the normality assumption. A nonparametric ordered-alternative equivalent of the Kruskal-Wallis test is the Jonckheere-Terpstra test. That is, where Kruskal-Wallis tests against a general alternative ("at least one $\neq$"), J-T tests against a specified order ($\theta_1 \leq \theta_2 \leq \ldots \leq \theta_k$, with "at least one $<$"). The test statistic basically consists of counting all the times that pairs of values (across groups) are in the anticipated order (the order specified in $H_1$), minus the times that pairs are in the opposite order, though there are other, equivalent calculations (any calculation that yields a statistic with the same partial order will be equivalent).
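In R the Jonckheere-Terpstra test is available, for example, in the clinfun package (assumed installed); a hedged sketch with simulated data ordered the way $H_1$ specifies:

library(clinfun)

set.seed(1)
g <- rep(1:3, each = 20)                               # group labels in the hypothesized order
x <- rnorm(60, mean = rep(c(0, 0.4, 0.8), each = 20))  # simulated responses with an increasing trend

jonckheere.test(x, g, alternative = "increasing")      # ordered alternative
kruskal.test(x, g)                                     # general "at least one differs" alternative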
37,611
What software (paid or free) exists for learning large datasets?
You could try using Weka. It has implemented a large collection of classification algorithms. In your case you definitely want to experiment with the speed of the algorithms on your dataset. The Naive Bayes and (Lib)SVM algorithms are known to be quite fast. Also try the LibLINEAR algorithm instead of LibSVM; it is sometimes better suited for large datasets. [NOTE: the LibLINEAR and LibSVM packages are not installed in Weka by default, but Weka's development version 3.7.6 offers a package manager to easily install them.] You might also want to use Weka's "Select attributes" option to find the most informative features and remove unnecessary ones. In general, I would start by learning on only a fraction of the dataset and scale up from there. It might be the case that your performance won't go up with more data (though an often-heard machine learning rule of thumb says "the more data the better").
37,612
What software (paid or free) exists for learning large datasets?
Similar to Weka, you may also try SCaVis. You can create large data containers using the Python language (or Java, Groovy, Ruby - they are all supported by SCaVis). If you do not want to create an in-memory container, I think you can use the PFile object, which can scan your data line by line without loading it all into memory.
37,613
How to analyse RCT where significant baseline differences exist despite randomisation?
If there is enough data to do this, include the most significant baseline covariates in the model; this gives you a way to adjust for covariate imbalance. There is an interesting book by Vance Berger that specifically addresses the issue of covariate imbalance in clinical trials and how to detect it.
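A minimal sketch of this kind of adjustment in R, with made-up trial data (all names hypothetical): the treatment effect is estimated with and without the imbalanced baseline covariate so the two can be compared.

set.seed(1)
trial <- data.frame(treatment = factor(rep(c("control", "active"), each = 50),
                                        levels = c("control", "active")),
                    baseline  = rnorm(100))
trial$outcome <- 0.6 * trial$baseline + 0.3 * (trial$treatment == "active") + rnorm(100)

fit_unadj <- lm(outcome ~ treatment, data = trial)
fit_adj   <- lm(outcome ~ treatment + baseline, data = trial)   # adjusted for the imbalance
coef(summary(fit_unadj))["treatmentactive", ]
coef(summary(fit_adj))["treatmentactive", ]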
37,614
If only a two-way ANOVA main effect is significant, what is the appropriate follow-up?
You have no interaction and no main effect of IV1. Do NOT go looking at all comparisons of all means. Even if you stick with IV2, I'm concerned that you're comparing significant and non-significant results and drawing conclusions, and the difference between significant and not significant is likely not itself significant. Look at your first set of comparisons there. Do you want to make some conclusion about E-A being different from E-B? You can't from what you've presented. That's then another layer of testing. Have you even considered that it just might be E versus ABCD that is the main thing here? There's another layer of testing. You need to use some theory and narrow down what you're looking for. Or, you need a completely different approach.

The first thing you should do is step back and look at the pattern of data. The significant main effect does NOT mean that there are two means different in there somewhere. And even if there were, it doesn't mean that's what the main effect is. It only means that the pattern of results observed is unlikely to have occurred if there really were no differences. You're fishing around for individual comparisons and haven't even described the data yet. How about just describing the pattern you observed and saying it was unlikely to have occurred by chance? That's reporting your main effect. You have to have a reason for going and rooting around for individual effects.

What you're doing actually defeats the whole purpose of ANOVA. Consider that the ANOVA is run because doing multiple tests is subject to multiple-comparison issues. So, you do the ANOVA, find an effect, and then go ahead and do all the one-to-one multiple tests? Stop. The ANOVA tells you something: it tells you the pattern of data means something. If you have a dram of theory and describe the pattern of data you might just avoid all of these tests.
37,615
If only a two-way ANOVA main effect is significant, what is the appropriate follow-up?
I don't like the term "appropriate post hoc analysis". Generally speaking, if you tested and found a statistically significant main effect but no significant first-order interactions, you might be interested in some pairwise differences to identify why there is an effect.
37,616
Estimating an underlying pdf from binomial trials
This is just a simple idea and is not something I have seen in the literature. I will take away the randomness of the flips by conditioning on the observed number of flips for each coin. Take the usual estimate for $p_i$ (i.e., the number of heads divided by the number of flips) for the $i$th randomly selected coin. This set of estimates forms a histogram, and one can then use a kernel density method to approximate the continuous curve. The difficulty with this approach is that it ignores the uncertainty in the estimate of $p$, which depends on the number of flips and the true $p$. If $n_i$ is the number of flips for the $i$th coin and is large for each $i$, ignoring this uncertainty will not matter. I think it complicates things a little that each coin has a different variance associated with its estimate of $p$; maybe this can be taken into account by using a variable-width kernel.
37,617
Estimating an underlying pdf from binomial trials
Your problem can be generalized as follows: The parameter $p$ is generated according to an unknown probability density. You can measure $p$ only with a measurement error $\sigma$, which yields an estimator $\hat{p}$ for $p$ that has a known distribution (binomial distribution). A simple approach is suggested by Glen_b in this answer to a similar question: estimate a global bandwidth $h$ first as if the measurements $\hat{p}_1,\ldots,\hat{p}_N$ were exact, and then increase the bandwidth to $\sqrt{h^2+\sigma^2}$. In your case $\sigma_i^2=\hat{p}_i(1-\hat{p}_i)/n_i$. For more sophisticated methods, see Achilleos and Delaigle (2012), "Local bandwidth selectors for deconvolution kernel density estimation", Statistics and Computing, 22(2), 563-577.
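A hedged sketch of that bandwidth-inflation idea in base R, with observation-specific bandwidths and hypothetical data (phat, n):

phat <- c(0.20, 0.35, 0.50, 0.55, 0.70)        # hypothetical estimates p-hat_i
n    <- c(20, 50, 30, 40, 25)                  # hypothetical flip counts n_i
sigma2 <- phat * (1 - phat) / n

h   <- bw.nrd0(phat)                           # global bandwidth as if the p-hat were exact
h_i <- sqrt(h^2 + sigma2)                      # inflated, observation-specific bandwidths

grid <- seq(0, 1, length.out = 201)
dens <- rowMeans(sapply(seq_along(phat),
                        function(i) dnorm(grid, mean = phat[i], sd = h_i[i])))
plot(grid, dens, type = "l", xlab = "p", ylab = "estimated density")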
37,618
Can I have too much data?
It sounds like the networks you train with more data overfit to that data and hence perform worse on different data. The concept the networks are supposed to learn may be obvious from the small data set, but adding more data from the other set obscures it (or even transforms it into a different concept). One way to mitigate this effect is to make sure that the distribution of the prediction targets is approximately the same in the training and test data (stratification). Alternatively, you could train and evaluate your networks using cross-validation.
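A minimal base-R sketch of the cross-validation suggestion; the data and the lm model are placeholders for your networks:

set.seed(1)
dat <- data.frame(x = rnorm(100))
dat$y <- 2 * dat$x + rnorm(100)

k <- 5
folds <- sample(rep(1:k, length.out = nrow(dat)))     # random fold assignment
cv_mse <- sapply(1:k, function(i) {
  fit  <- lm(y ~ x, data = dat[folds != i, ])         # train on k-1 folds
  pred <- predict(fit, newdata = dat[folds == i, ])   # evaluate on the held-out fold
  mean((dat$y[folds == i] - pred)^2)
})
mean(cv_mse)                                          # cross-validated mean squared error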
37,619
Proportional hazards assumption
Peter is correct that it depends on what software you are using to check this; in the survival package in R there is the cox.zph() function. Most assessments of the assumption will involve looking at the Schoenfeld residuals: if plotted against time, there should be no noticeable pattern. See Cox-Proportional Hazards starting at page 12. Also, if modeling categorical variables, you could create Kaplan-Meier curves for each variable and see if they are roughly proportional to each other.
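A short example with the survival package (the lung data ships with it; the model is only illustrative):

library(survival)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
zph <- cox.zph(fit)     # tests based on scaled Schoenfeld residuals
zph
plot(zph)               # residuals vs. time; look for the absence of a trend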
37,620
Fitting a line to a log-log plot
There is nothing inherently wrong with a log-log regression and economists have used them for ages to estimate elasticity. Yet if you want to allow for the power law effect but do not want to bother too much, you may apply this simple correction: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=881759
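For reference, a hedged base-R sketch of a log-log fit plus Duan's smearing estimator, one common retransformation correction (not necessarily the one proposed in the linked paper):

set.seed(1)
x <- exp(rnorm(200))
y <- 2 * x^0.7 * exp(rnorm(200, sd = 0.3))     # simulated power-law data

fit <- lm(log(y) ~ log(x))
coef(fit)[2]                                   # estimated exponent / elasticity
smear <- mean(exp(residuals(fit)))             # Duan smearing factor
yhat  <- smear * exp(fitted(fit))              # bias-corrected predictions on the original scale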
37,621
How to combine the results of several binary tests?
"I'm wondering what is the best way to combine these numbers in a way that will yield a final score that is (hopefully) more reliable than any single test." A very common way is to compute Cronbach's alpha and, more generally, to perform what some would call a "standard" reliability analysis. This would show to what degree a given score correlates with the mean of the 17 other scores; which tests' scores might be best dropped from the scale; and what the internal consistency reliability is both with all 18 and with a given subset. Now, some of your comments seem to indicate that many of these 18 are uncorrelated; if that is true, you may end up with a scale that consists of just a few tests. EDIT AFTER COMMENT: Another approach draws on the idea that there is a tradeoff between internal consistency and validity. The less correlated your tests are, the better their content coverage, which enhances content validity (if not reliability). So thinking along these lines you would ignore Cronbach's alpha and the related indicators of item-total correlation and instead use a priori reasoning to combine the 18 tests into a scale. Hopefully such a scale would correlate highly with your gold standard.
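A hedged sketch of such a reliability analysis in R with the psych package, using simulated binary test results that share one latent trait:

library(psych)

set.seed(1)
latent <- rnorm(100)
tests  <- as.data.frame(sapply(1:18, function(j) as.integer(latent + rnorm(100) > 0)))

rel <- alpha(tests)     # Cronbach's alpha, item-total correlations, alpha if item dropped
rel$total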
37,622
How to combine the results of several binary tests?
To simplify a bit, let's assume that you only have two diagnostic tests. You want to calculate $$ \Pr(\text{Disease} \mid T_1,T_2) = \frac{\Pr(T_1,T_2 \mid \text{Disease})\Pr(\text{Disease})}{\Pr(T_1,T_2)} $$ You suggested that the results of these tests are independent, conditional on the person having the disease. If so, then $$ \Pr(T_1,T_2 \mid \text{Disease}) = \Pr(T_1 \mid \text{Disease})\Pr(T_2 \mid \text{Disease}) $$ Where $\Pr(T_i \mid \text{Disease})$ is the sensitivity of Test $i$. $\Pr(T_1,T_2)$ is the unconditional probability of a random person testing positive on both tests: $$ \Pr(T_1,T_2) = \Pr(T_1,T_2 \mid \text{Disease})\Pr(\text{Disease}) + \Pr(T_1,T_2 \mid \text{No Disease})\Pr(\text{No Disease}) $$ Where $$ \Pr(T_1,T_2 \mid \text{No Disease}) = \Pr(T_1 \mid \text{No Disease})\Pr(T_2 \mid \text{No Disease}) $$ and $\Pr(T_i \mid \text{No Disease})$ is $1 - \text{specificity}$ for Test $i$.
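The same calculation as a small R function (the sensitivities, specificities and prevalence below are hypothetical):

post_two_pos <- function(sens, spec, prev) {
  p_pos_d  <- prod(sens)       # P(T1+, T2+ | disease), assuming conditional independence
  p_pos_nd <- prod(1 - spec)   # P(T1+, T2+ | no disease)
  p_pos_d * prev / (p_pos_d * prev + p_pos_nd * (1 - prev))
}

post_two_pos(sens = c(0.9, 0.8), spec = c(0.95, 0.85), prev = 0.1)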
37,623
Does similarity of coefficients and p-values regardless of whether dependent variable is transformed suggest untransformed model is reliable?
This looks like an example of the Li-Duan theorem: Li, K.-C. and Duan, N. (1989). Regression analysis under link violation. Annals of Statistics, 17, 1009-1052. It basically says that if your predictor variables are well behaved and you use the wrong link function (transformation on the response), then your coefficient estimates will be off by a multiplicative constant.
37,624
Statistics for gambling machine validation
For a fair game successive plays should be independent. It sounds like they are asking you to perform a test that consecutive results are uncorrelated. You could do this by pairing the data: let $R_1, R_2,...,R_{2n}$ be the first $2n$ results. Then you can form $n$ distinct pairs $(R_1,R_2)$, $(R_3, R_4),...,(R_{2n-1}, R_{2n})$. Calculate the Pearson correlation coefficient and test whether it is different from zero (if the data are continuous or even a set of integers). If the data are $0/1$ for lose/win you can test for independence in the $2\times2$ table obtained by using the counts for $(0,0), (0,1),(1,0)$ and $(1,1)$. In this case of $0/1$ the runs test of Wald and Wolfowitz suggested above could also be used. The way it is described in the rule it sounds like they want you to construct a confidence interval for the correlation with halfwidth equal to $3\sigma$. You would pass if $0$ is contained in the interval. These tests seem to be a little too easy to pass, though.
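A rough R sketch of the pairing idea; the results vector is simulated stand-in data, and the 0/1 case uses a chi-squared test on the 2x2 table of consecutive pairs.

results <- sample(0:1, 1000, replace = TRUE)      # stand-in for the machine's first 2n results
odd  <- results[seq(1, length(results), by = 2)]  # R1, R3, ...
even <- results[seq(2, length(results), by = 2)]  # R2, R4, ...
cor.test(odd, even)                               # Pearson correlation between paired results
chisq.test(table(odd, even))                      # independence test for the 2x2 table (0/1 case)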
Statistics for gambling machine validation
For a fair game successive plays should be independent. It sounds like they are asking you to perform a test that consecutive results are uncorrelated. You could do this by pairing the data let $R_1,
Statistics for gambling machine validation For a fair game successive plays should be independent. It sounds like they are asking you to perform a test that consecutive results are uncorrelated. You could do this by pairing the data: let $R_1, R_2,...,R_{2n}$ be the first $2n$ results. Then you can form $n$ distinct pairs $(R_1,R_2)$, $(R_3, R_4),...,(R_{2n-1}, R_{2n})$. Calculate the Pearson correlation coefficient and test whether it is different from zero (if the data are continuous or even a set of integers). If the data are $0/1$ for lose/win you can test for independence in the $2\times2$ table obtained by using the counts for $(0,0), (0,1),(1,0)$ and $(1,1)$. In this case of $0/1$ the runs test of Wald and Wolfowitz suggested above could also be used. The way it is described in the rule it sounds like they want you to construct a confidence interval for the correlation with halfwidth equal to $3\sigma$. You would pass if $0$ is contained in the interval. These tests seem to be a little too easy to pass, though.
Statistics for gambling machine validation For a fair game successive plays should be independent. It sounds like they are asking you to perform a test that consecutive results are uncorrelated. You could do this by pairing the data let $R_1,
37,625
Should factor loadings be dominated by items' ranges of answer options?
The effect you mention happens because of response set, which can be controlled by the phrasing of the items, the interest of the respondent, and the order of the questions. I've seen this happen in my own experience, but it's by no means inescapable: one good way to avoid the spurious tendency is to, say, put two agreement questions, then a dichotomy, then a couple more agreement ones, then the dichotomy; that usually does the trick for me. A good way to test for that effect, if it's a scale you're building, is to compare the Cronbach's alpha of the entire scale with and without the dichotomies. Yes, factor loadings might be affected by the range of possible responses, but they won't necessarily be dominated by it, if you take the measures you would normally take to avoid response set.
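A rough sketch of that alpha comparison in R with the psych package; the data frame items and the names of the dichotomous columns are hypothetical.

library(psych)
# items <- data.frame(...)            # hypothetical item responses, one column per item
# dich_cols <- c("item4", "item7")    # hypothetical names of the dichotomous items
alpha(items)                                        # alpha for the full scale
alpha(items[, setdiff(names(items), dich_cols)])    # alpha without the dichotomies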
Should factor loadings be dominated by items' ranges of answer options?
The effect you mention happens because of response set, which can be controlled by the phrasing of the reactives, and the interest of the respondent, and the order of the questions. I've seen this hap
Should factor loadings be dominated by items' ranges of answer options? The effect you mention happens because of response set, which can be controlled by the phrasing of the items, the interest of the respondent, and the order of the questions. I've seen this happen in my own experience, but it's by no means inescapable: one good way to avoid the spurious tendency is to, say, put two agreement questions, then a dichotomy, then a couple more agreement ones, then the dichotomy; that usually does the trick for me. A good way to test for that effect, if it's a scale you're building, is to compare the Cronbach's alpha of the entire scale with and without the dichotomies. Yes, factor loadings might be affected by the range of possible responses, but they won't necessarily be dominated by it, if you take the measures you would normally take to avoid response set.
Should factor loadings be dominated by items' ranges of answer options? The effect you mention happens because of response set, which can be controlled by the phrasing of the reactives, and the interest of the respondent, and the order of the questions. I've seen this hap
37,626
How do I display detrended data from a linear regression?
As for me, it is terribly confusing, especially when you can do a much simpler thing -- calculate price/carat to get the price of one carat, which would be way easier to interpret.
How do I display detrended data from a linear regression?
As for me, it is terribly confusing, especially when you can do a much simpler thing -- calculate price/carat to get the price of one carat, which would be way easier to interpret.
How do I display detrended data from a linear regression? As for me, it is terribly confusing, especially when you can do a much simpler thing -- calculate price/carat to get the price of one carat, which would be way easier to interpret.
How do I display detrended data from a linear regression? As for me, it is terribly confusing, especially when you can do a much simpler thing -- calculate price/carat to get the price of one carat, which would be way easier to interpret.
37,627
How does a frequentist calculate the chance that group A beats group B regarding binary response
I will take this as an opportunity to explain some fundamental issues regarding the difference between frequentist and Bayesian statistics, by interpreting frequentist practices from a Bayesian standpoint. In this example, we have observed data $D_1$ for the original and data $D_2$ for the combination case. One assumes that these are generated by Bernoulli random variables with parameters $p_1$ and $p_2$, respectively, and that these parameters come from the priors, $f_i(p_i)$ (with cdfs $F_i(p_i)$). The probability $p_1 > p_2$ can be calculated, as you pointed out. It is: $$ P[p_1 > p_2;f_1,f_2] = \frac{\int_0^1 \int_0^1 I(p_1 > p_2) P[D_1|p_1] P[D_2|p_2] dF_1(p_1) dF_2(p_2)}{\int_0 ^1 \int_0^1 P[D_1|p_1] P[D_2|p_2] dF_1(p_1) dF_2(p_2) } $$ Here the Bayesian chooses priors $f_1(p_1)$ and $f_2(p_2)$ (and will usually choose the same prior for both, due to exchangeability) and proceeds with inference. The frequentist takes a "conservative" approach when choosing a prior. The possible values of the parameters are assumed to be known, but the frequentist has so little confidence in their ability to assign a meaningful prior that they effectively look at all possible priors and then only make an inferential statement when that inferential statement is true under all possible priors. When no inference is valid under all possible priors, the frequentist remains silent. That is the situation in this case. When one considers the priors $g_{\theta_i}(p_i)$ given by: $$ g_{\theta_i}(p_i) = \delta (p_i - \theta_i) $$ that is, the point mass concentrated at $\theta_i$, then one can easily see that the probability desired is $$ P[p_1 > p_2;g_{\theta_1},g_{\theta_2}] = I(\theta_1 > \theta_2)$$ that is, 1 when $\theta_1 > \theta_2$ and 0 otherwise. Thus the frequentist remains silent. (Or, alternatively, makes the trivial statement: "The probability is between 0 and 1...")
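For contrast, a rough Monte Carlo sketch of the Bayesian calculation above with flat Beta(1,1) priors; the success/trial counts are made-up values.

s1 <- 30; n1 <- 100    # hypothetical data D1: successes and trials for the original
s2 <- 20; n2 <- 100    # hypothetical data D2: successes and trials for the combination
draws1 <- rbeta(1e5, s1 + 1, n1 - s1 + 1)   # posterior of p1 under a Beta(1,1) prior
draws2 <- rbeta(1e5, s2 + 1, n2 - s2 + 1)   # posterior of p2 under a Beta(1,1) prior
mean(draws1 > draws2)                       # Monte Carlo estimate of P(p1 > p2 | D1, D2)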
How does a frequentist calculate the chance that group A beats group B regarding binary response
I will take this as an opportunity to explain some fundamental issues regarding the difference between frequentist and Bayesian statistics, by interpreting frequentist practices from a Bayesian standp
How does a frequentist calculate the chance that group A beats group B regarding binary response I will take this as an opportunity to explain some fundamental issues regarding the difference between frequentist and Bayesian statistics, by interpreting frequentist practices from a Bayesian standpoint. In this example, we have observed data $D_1$ for the original and data $D_2$ for the combination case. One assumes that these are generated by Bernoulli random variables with parameters $p_1$ and $p_2$, respectively, and that these parameters come from the priors, $f_i(p_i)$ (with cdfs $F_i(p_i)$). The probability $p_1 > p_2$ can be calculated, as you pointed out. It is: $$ P[p_1 > p_2;f_1,f_2] = \frac{\int_0^1 \int_0^1 I(p_1 > p_2) P[D_1|p_1] P[D_2|p_2] dF_1(p_1) dF_2(p_2)}{\int_0 ^1 \int_0^1 P[D_1|p_1] P[D_2|p_2] dF_1(p_1) dF_2(p_2) } $$ Here the Bayesian chooses priors $f_1(p_1)$ and $f_2(p_2)$ (and will usually choose the same prior for both, due to exchangeability) and proceeds with inference. The frequentist takes a "conservative" approach when choosing a prior. The possible values of the parameters are assumed to be known, but the frequentist has so little confidence in their ability to assign a meaningful prior that they effectively look at all possible priors and then only make an inferential statement when that inferential statement is true under all possible priors. When no inference is valid under all possible priors, the frequentist remains silent. That is the situation in this case. When one considers the priors $g_{\theta_i}(p_i)$ given by: $$ g_{\theta_i}(p_i) = \delta (p_i - \theta_i) $$ that is, the point mass concentrated at $\theta_i$, then one can easily see that the probability desired is $$ P[p_1 > p_2;g_{\theta_1},g_{\theta_2}] = I(\theta_1 > \theta_2)$$ that is, 1 when $\theta_1 > \theta_2$ and 0 otherwise. Thus the frequentist remains silent. (Or, alternatively, makes the trivial statement: "The probability is between 0 and 1...")
How does a frequentist calculate the chance that group A beats group B regarding binary response I will take this as an opportunity to explain some fundamental issues regarding the difference between frequentist and Bayesian statistics, by interpreting frequentist practices from a Bayesian standp
37,628
Most powerful GoF test for normality
In general I would advise using more than just one test. With statistical programs like R it is little more effort than typing one line per test. The Shapiro-Wilk, Anderson-Darling and Cramér-von Mises tests operate by comparing distribution functions or analysing variance, whereas, for example, the Jarque-Bera test uses comparisons of the sample kurtosis and skewness with those of a normal distribution. In some cases it is possible that the results of the two kinds of test differ. Depending on the testing situation, every test has its strengths and weaknesses. Hence it's advisable to try different tests on the same data set. If the data set is clearly normally distributed the results shouldn't differ very much.
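A rough "one line per test" sketch in R; shapiro.test is built in, while the other tests here come from the nortest and tseries packages (one possible choice among several).

library(nortest)   # ad.test, cvm.test
library(tseries)   # jarque.bera.test
x <- rnorm(200)    # stand-in data vector
shapiro.test(x)
ad.test(x)           # Anderson-Darling
cvm.test(x)          # Cramer-von Mises
jarque.bera.test(x)  # Jarque-Bera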
Most powerful GoF test for normality
In generall I would advice to use more than just one test. Through statistic programms like R it isn't as much efford as just typing one line per test. The Shapiro-Wilks and the Anderson-Darling-Test
Most powerful GoF test for normality In general I would advise using more than just one test. With statistical programs like R it is little more effort than typing one line per test. The Shapiro-Wilk, Anderson-Darling and Cramér-von Mises tests operate by comparing distribution functions or analysing variance, whereas, for example, the Jarque-Bera test uses comparisons of the sample kurtosis and skewness with those of a normal distribution. In some cases it is possible that the results of the two kinds of test differ. Depending on the testing situation, every test has its strengths and weaknesses. Hence it's advisable to try different tests on the same data set. If the data set is clearly normally distributed the results shouldn't differ very much.
Most powerful GoF test for normality In generall I would advice to use more than just one test. Through statistic programms like R it isn't as much efford as just typing one line per test. The Shapiro-Wilks and the Anderson-Darling-Test
37,629
Most powerful GoF test for normality
If the only criterion is most powerful then nothing beats SnowsPenultimateNormalityTest which is in the TeachingDemos package for R. However that test has an unfair advantage in the power competition and some may consider it less capable in other areas, for one, it is of the class of functions for which the documentation is probably (hopefully) more useful than the function itself. What is more important is to consider what it means when these tests of normality reject the null, or fail to reject the null.
Most powerful GoF test for normality
If the only criterion is most powerful then nothing beats SnowsPenultimateNormalityTest which is in the TeachingDemos package for R. However that test has an unfair advantage in the power competition
Most powerful GoF test for normality If the only criterion is most powerful then nothing beats SnowsPenultimateNormalityTest which is in the TeachingDemos package for R. However that test has an unfair advantage in the power competition and some may consider it less capable in other areas, for one, it is of the class of functions for which the documentation is probably (hopefully) more useful than the function itself. What is more important is to consider what it means when these tests of normality reject the null, or fail to reject the null.
Most powerful GoF test for normality If the only criterion is most powerful then nothing beats SnowsPenultimateNormalityTest which is in the TeachingDemos package for R. However that test has an unfair advantage in the power competition
37,630
Can someone help me understand what type of problem I am looking at? Not sure if this classifies as hypothesis-testing
You can do a Student t-test to see if the mean is different between the group {4, 6} and the rest. Even if your sample size is small you may well conclude that there is a difference. Note that it will tell you that group {4, 6} is significantly different on average from the rest, but it won't tell you that "The experiment was not conducted properly in Environments 4 and 6", which can't be answered without knowledge of what "properly" means for these observations.
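A rough R sketch of that comparison; the data frame d, with a numeric response y and an environment id env, is hypothetical.

in_46 <- d$env %in% c(4, 6)        # rows from environments 4 and 6
t.test(d$y[in_46], d$y[!in_46])    # Welch two-sample t-test: environments 4 & 6 vs the rest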
Can someone help me understand what type of problem I am looking at? Not sure if this classifies as
You can do a student test to see if the mean is different between the group 4,6 and the rest. Even if your sample size is small you will conclude in a difference. Note that it will tell you that group
Can someone help me understand what type of problem I am looking at? Not sure if this classifies as hypothesis-testing You can do a Student t-test to see if the mean is different between the group {4, 6} and the rest. Even if your sample size is small you may well conclude that there is a difference. Note that it will tell you that group {4, 6} is significantly different on average from the rest, but it won't tell you that "The experiment was not conducted properly in Environments 4 and 6", which can't be answered without knowledge of what "properly" means for these observations.
Can someone help me understand what type of problem I am looking at? Not sure if this classifies as You can do a student test to see if the mean is different between the group 4,6 and the rest. Even if your sample size is small you will conclude in a difference. Note that it will tell you that group
37,631
How to model pairwise preference with both strong and weak preferences?
The difficulty with specifying a model to resolve this problem is one of how to interpret the strength-of-preference information. Does A vs B 999:1 mean that 999 times out of 1000 people will prefer A, or does it mean that a person prefers A by a large amount relative to B? If we take the interpretation that the data mean that A is preferred to B 999 times out of 1000, then you can fit a Bradley-Terry(-Luce) model, but most people these days would instead estimate a logit model, or a generalization thereof, as their "choice model": $ P(A > B\; |\; \vec{w} ) = \frac{e^{w_A}}{e^{w_A} + e^{w_B}} $ Maximum likelihood estimation with large data sets and aggregate data is straightforward, as the sample size enters the log-likelihood as a weight for each pair. Complication arises if one wants to take into account how people differ in their preferences, in which case some type of mixture is required (see Train, Kenneth E. (2009), Discrete Choice Methods with Simulation (Second ed.). Cambridge: Cambridge University Press.). It is not unknown for researchers to take this frequency interpretation when modeling even if it is believed that it is not an accurate characterization of the problem. This is because it is not a straightforward exercise to specify a good model which deals with degree, as you then have to find some way of working out what, precisely, 999:1 means and how it relates to 998:2 and so on. There are lots of different models that have been developed for this problem (e.g., models designed for constant-sum dependent variables, models designed to predict probabilities, diffusion models). It is impossible to say with any exactitude which model is most appropriate, as it really depends upon the appropriateness of the inherent assumptions to your data and how well it fits your data.
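A rough sketch of fitting such a model in R via a binomial logit, treating the ratios as counts; the three items and their win counts are made-up values.

pairs <- data.frame(i = c("A", "A", "B"), j = c("B", "C", "C"),
                    wins_i = c(999, 700, 400), wins_j = c(1, 300, 600))   # hypothetical counts
# design: +1 for the item in column i, -1 for the item in column j (A is the baseline, w_A = 0)
X <- cbind(B = (pairs$i == "B") - (pairs$j == "B"),
           C = (pairs$i == "C") - (pairs$j == "C"))
fit <- glm(cbind(wins_i, wins_j) ~ X - 1, family = binomial, data = pairs)
coef(fit)    # estimated worths w_B, w_C in the model P(i beats j) = plogis(w_i - w_j)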
How to model pairwise preference with both strong and weak preferences?
The difficulty with specifying a model to resolve this problem is one of how to interpret the strength of preference information. Does A vs B 999:1 mean that 999 times out of 1000 people will prefer
How to model pairwise preference with both strong and weak preferences? The difficulty with specifying a model to resolve this problem is one of how to interpret the strength-of-preference information. Does A vs B 999:1 mean that 999 times out of 1000 people will prefer A, or does it mean that a person prefers A by a large amount relative to B? If we take the interpretation that the data mean that A is preferred to B 999 times out of 1000, then you can fit a Bradley-Terry(-Luce) model, but most people these days would instead estimate a logit model, or a generalization thereof, as their "choice model": $ P(A > B\; |\; \vec{w} ) = \frac{e^{w_A}}{e^{w_A} + e^{w_B}} $ Maximum likelihood estimation with large data sets and aggregate data is straightforward, as the sample size enters the log-likelihood as a weight for each pair. Complication arises if one wants to take into account how people differ in their preferences, in which case some type of mixture is required (see Train, Kenneth E. (2009), Discrete Choice Methods with Simulation (Second ed.). Cambridge: Cambridge University Press.). It is not unknown for researchers to take this frequency interpretation when modeling even if it is believed that it is not an accurate characterization of the problem. This is because it is not a straightforward exercise to specify a good model which deals with degree, as you then have to find some way of working out what, precisely, 999:1 means and how it relates to 998:2 and so on. There are lots of different models that have been developed for this problem (e.g., models designed for constant-sum dependent variables, models designed to predict probabilities, diffusion models). It is impossible to say with any exactitude which model is most appropriate, as it really depends upon the appropriateness of the inherent assumptions to your data and how well it fits your data.
How to model pairwise preference with both strong and weak preferences? The difficulty with specifying a model to resolve this problem is one of how to interpret the strength of preference information. Does A vs B 999:1 mean that 999 times out of 1000 people will prefer
37,632
When does the expected value or variance of the $t$ statistic exist?
The case of independent and identical distributions for the variables $X_i$ in the sample The (non-central) t-distribution is proportional to the distribution of the inverse of the tangent of the angle of the sample $\vec{x}$ with the diagonal line $x_1 = x_2 = \dots = x_n$. $$t = \frac{\sqrt{n-1}}{\tan(\theta)}$$ The problematic case is when this angle is 0 degrees, in which case $\tan(\theta) = 0$ and the inverse is infinite. Discrete distribution In your example with a Bernoulli distribution you have a discrete distribution and there is a non-zero probability that $\theta = 0$, i.e. that the sample $\vec{x}$ lies on the diagonal line. Whenever that probability is non-zero, the mean of $t$ (and other moments) will be infinite or undefined. This happens with any iid discrete sampling because there is a non-zero probability $$P(X_1 = X_2 = \dots = X_n) = \sum_{\forall x} P(X=x)^n$$ Continuous distribution One can imagine the distribution of the sample $\vec{x}$ projected onto a sphere of radius $1$ and see how it is distributed near the two points of intersection with the diagonal. For continuous distributions, the probability that $\theta = 0$ will be zero (unless you have a fully correlated sample), but what also matters is how the density changes in the neighbourhood of $\theta = 0$. For the expected value to be finite we need the density of $z = \tan(\theta)$ to vanish at $z=0$ fast enough that $f(z)/z$ remains integrable near zero. Since $$E[1/z] = \int_{-\infty}^{0} \frac{1}{z} f(z) dz + \int_0^\infty \frac{1}{z} f(z) dz$$ ... To be continued I imagine that we can come up with some distribution that concentrates in one or more values such that for cases with $n = 3$ we do not have a finite mean (unlike the case of a normal distribution). Also, when $X$ already has a finite mean, then probably $t$ will have a finite mean as well.
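A one-line illustration in R of that non-zero probability for the Bernoulli case; n and p are arbitrary example values.

p_all_equal <- function(n, p) p^n + (1 - p)^n   # P(X1 = X2 = ... = Xn) for Bernoulli(p)
p_all_equal(n = 10, p = 0.5)                    # about 0.002, so the moments of t do not exist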
When does the expected value or variance of the $t$ statistic exist?
The case of independent and identical distributions for the variables $X_i$ in the sample The (non-central) t-distribution is proportional to the distribution of the inverse of the tangent of the angl
When does the expected value or variance of the $t$ statistic exist? The case of independent and identical distributions for the variables $X_i$ in the sample The (non-central) t-distribution is proportional to the distribution of the inverse of the tangent of the angle of the sample $\vec{x}$ with the diagonal line $x_1 = x_2 = \dots = x_n$. $$t = \frac{\sqrt{n-1}}{\tan(\theta)}$$ The problematic case is when this angle is 0 degrees, in which case $\tan(\theta) = 0$ and the inverse is infinite. Discrete distribution In your example with a Bernoulli distribution you have a discrete distribution and there is a non-zero probability that $\theta = 0$, i.e. that the sample $\vec{x}$ lies on the diagonal line. Whenever that probability is non-zero, the mean of $t$ (and other moments) will be infinite or undefined. This happens with any iid discrete sampling because there is a non-zero probability $$P(X_1 = X_2 = \dots = X_n) = \sum_{\forall x} P(X=x)^n$$ Continuous distribution One can imagine the distribution of the sample $\vec{x}$ projected onto a sphere of radius $1$ and see how it is distributed near the two points of intersection with the diagonal. For continuous distributions, the probability that $\theta = 0$ will be zero (unless you have a fully correlated sample), but what also matters is how the density changes in the neighbourhood of $\theta = 0$. For the expected value to be finite we need the density of $z = \tan(\theta)$ to vanish at $z=0$ fast enough that $f(z)/z$ remains integrable near zero. Since $$E[1/z] = \int_{-\infty}^{0} \frac{1}{z} f(z) dz + \int_0^\infty \frac{1}{z} f(z) dz$$ ... To be continued I imagine that we can come up with some distribution that concentrates in one or more values such that for cases with $n = 3$ we do not have a finite mean (unlike the case of a normal distribution). Also, when $X$ already has a finite mean, then probably $t$ will have a finite mean as well.
When does the expected value or variance of the $t$ statistic exist? The case of independent and identical distributions for the variables $X_i$ in the sample The (non-central) t-distribution is proportional to the distribution of the inverse of the tangent of the angl
37,633
When does the expected value or variance of the $t$ statistic exist?
One way to investigate this is by simulation. Since the question is about existence of moments, we could use some estimator of the tail index of the distribution of the T statistic, which is related to the existence of the expectation and other moments (see a definition of tail index here). For an example I will simulate $N=100000$ times from a uniform distribution on $(-1,1)$. The distribution of the T statistic is shown in a histogram with a $t_4$-density overlaid. But it is the tail behavior which is most important, which is difficult to judge from such a plot. Let us try a relative distribution plot, again comparing to the $t_4$: which indicates heavier tails. Then let us try the Hill estimator: (the lower curve is the EPD estimator, see code below). This at least indicates a tail index < 1, so that the expectation does not exist, in contrast to the $t_4$. R code: sim_Tstat <- function(N, n, rparent=rnorm, mu_0=0) { Tstat <- replicate(N, {x <- rparent(n); t <- sqrt(n)*(mean(x)-mu_0)/sd(x); t}); return(Tstat) }; library(ReIns); library(reldist); set.seed(7*11*13); test <- sim_Tstat(1E5, 5, rparent=\(n){runif(n, min=-1, max=1)}); hist(test, prob=TRUE, xlim=c(-5,5), breaks="FD"); plot( \(x) dt(x, df=4), from=-5, to=5, col="red", add=TRUE); reldist(test, qt(ppoints(1E5), df=4), method="bgk"); Hill(test[test>0], plot=TRUE); EPD(test[test>0], add=TRUE)
When does the expected value or variance of the $t$ statistic exist?
One way to investigate this is by simulation. Since the question is about existence of moments, we could use some estimator of the tail index of the distribution of the T statistic, which is related t
When does the expected value or variance of the $t$ statistic exist? One way to investigate this is by simulation. Since the question is about existence of moments, we could use some estimator of the tail index of the distribution of the T statistic, which is related to the existence of the expectation and other moments (see a definition of tail index here). For an example I will simulate $N=100000$ times from a uniform distribution on $(-1,1)$. The distribution of the T statistic is shown in a histogram with a $t_4$-density overlaid. But it is the tail behavior which is most important, which is difficult to judge from such a plot. Let us try a relative distribution plot, again comparing to the $t_4$: which indicates heavier tails. Then let us try the Hill estimator: (the lower curve is the EPD estimator, see code below). This at least indicates a tail index < 1, so that the expectation does not exist, in contrast to the $t_4$. R code: sim_Tstat <- function(N, n, rparent=rnorm, mu_0=0) { Tstat <- replicate(N, {x <- rparent(n); t <- sqrt(n)*(mean(x)-mu_0)/sd(x); t}); return(Tstat) }; library(ReIns); library(reldist); set.seed(7*11*13); test <- sim_Tstat(1E5, 5, rparent=\(n){runif(n, min=-1, max=1)}); hist(test, prob=TRUE, xlim=c(-5,5), breaks="FD"); plot( \(x) dt(x, df=4), from=-5, to=5, col="red", add=TRUE); reldist(test, qt(ppoints(1E5), df=4), method="bgk"); Hill(test[test>0], plot=TRUE); EPD(test[test>0], add=TRUE)
When does the expected value or variance of the $t$ statistic exist? One way to investigate this is by simulation. Since the question is about existence of moments, we could use some estimator of the tail index of the distribution of the T statistic, which is related t
37,634
When does the expected value or variance of the $t$ statistic exist?
The distribution of Student's t statistic is known when the random variable x follows a Normal distribution. Sometimes, however, we apply it to random variables drawn from other distributions. I am curious if there are known conditions, sufficient and necessary, that the expectation of the t statistic, or its variance, are known to exist (i.e. be finite). If the observations are normally distributed, the t-statistic follows a t-distribution under the null hypothesis, but note that with many observations it amounts to the standard normal distribution. The "other distributions" you mention for observations are the fairly large category to which the Central Limit Theorem (CLT) can be applied. So the limiting distribution of the t-stat is again the standard normal distribution. Therefore, it seems to me that the conditions you mention for the finiteness of the first two moments of the t-stat go back to those of the CLT. For example, in the extreme case, if x were drawn from a Bernoulli distribution, there would be a non-zero probability that the sample variance is zero, and thus t is infinite or not defined, and the expectation of t does not exist. Something like this will almost certainly not happen if the data come from a Normal distribution, and this is true for any large sample that meets the CLT assumptions. Finally, the CLT implies convergence of the t-stat to the standard normal distribution, but, as noted in Whuber's comment, the CLT does not imply convergence of the moments. Indeed, convergence in distribution does not generally imply convergence of moments. This problem is a weakness of my argument. I will not solve it now; suggestions are welcome. I can note, however, that even if convergence in distribution does not imply convergence of moments, it is not true that convergence in distribution implies or even suggests the absence of convergence of moments. The problem we are talking about here may be of little relevance in practice, and I suspect that it is. The rule “convergence in distribution does not imply convergence of moments” is also useful as a warning that some moments may not exist. However, as mentioned above, the CLT is about the convergence of the t-stat to the standard normal distribution. I have a hard time finding an intuitive and concrete case where the t-stat approximates the standard normal distribution but the mean and variance are significantly different from what that distribution would imply. Moreover, even if some such cases exist, they should be pathological, and the main message of my proposal can remain.
When does the expected value or variance of the $t$ statistic exist?
The distribution of Student's t statistic is known when the random variable x follows a Normal distribution. Sometimes, however, we apply it to random variables drawn from other distributions. I am cu
When does the expected value or variance of the $t$ statistic exist? The distribution of Student's t statistic is known when the random variable x follows a Normal distribution. Sometimes, however, we apply it to random variables drawn from other distributions. I am curious if there are known conditions, sufficient and necessary, that the expectation of the t statistic, or its variance, are known to exist (i.e. be finite). If the observations are normally distributed, the t-statistic follows a t-distribution under the null hypothesis, but note that with many observations it amounts to the standard normal distribution. The "other distributions" you mention for observations are the fairly large category to which the Central Limit Theorem (CLT) can be applied. So the limiting distribution of the t-stat is again the standard normal distribution. Therefore, it seems to me that the conditions you mention for the finiteness of the first two moments of the t-stat go back to those of the CLT. For example, in the extreme case, if x were drawn from a Bernoulli distribution, there would be a non-zero probability that the sample variance is zero, and thus t is infinite or not defined, and the expectation of t does not exist. Something like this will almost certainly not happen if the data come from a Normal distribution, and this is true for any large sample that meets the CLT assumptions. Finally, the CLT implies convergence of the t-stat to the standard normal distribution, but, as noted in Whuber's comment, the CLT does not imply convergence of the moments. Indeed, convergence in distribution does not generally imply convergence of moments. This problem is a weakness of my argument. I will not solve it now; suggestions are welcome. I can note, however, that even if convergence in distribution does not imply convergence of moments, it is not true that convergence in distribution implies or even suggests the absence of convergence of moments. The problem we are talking about here may be of little relevance in practice, and I suspect that it is. The rule “convergence in distribution does not imply convergence of moments” is also useful as a warning that some moments may not exist. However, as mentioned above, the CLT is about the convergence of the t-stat to the standard normal distribution. I have a hard time finding an intuitive and concrete case where the t-stat approximates the standard normal distribution but the mean and variance are significantly different from what that distribution would imply. Moreover, even if some such cases exist, they should be pathological, and the main message of my proposal can remain.
When does the expected value or variance of the $t$ statistic exist? The distribution of Student's t statistic is known when the random variable x follows a Normal distribution. Sometimes, however, we apply it to random variables drawn from other distributions. I am cu
37,635
Help select prior function
This is a very common use-case for hierarchical models. These models have a lot of different names, so you'll sometimes see them called multi-level models or mixed-effects models. Hierarchical models aren't strictly Bayesian, but, in my experience, they are more commonly fit as Bayesian models than frequentist ones. The basic idea is that you define population parameters and then have members of the population distributed around the population parameters with their own variability. In your example, let $i$ denote the person and $j$ denote the measurement. $X_{ij}$ refers to measurement $j$ on person $i$. Each person does not need to have the same number of measurements, $j$. Each person has their own mean, $\theta_i$, and variance $\sigma^2_i$. The person-level means are distributed around an overall mean, $\mu$. Individual Level: $X_{i, 1}, X_{i ,2}, \dots, X_{i, j} \sim N(\theta_i, \sigma^2_i)$ for $i = 1, \dots, n$ Population Level: $\theta_1, \dots \theta_n \sim N(\mu, \tau) \\ \mu \sim N(0, 10), \; \tau \sim U[0, 10] \; \text{(or some other noninformative priors)}$. You'll be able to get a posterior distribution of each person-level mean, as well as the population-level mean and variance (or precision). Hierarchical models are too big a topic to explain in a Stack Exchange post. You can look into some resources to help you fit these models.
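A rough sketch of fitting such a model in R with the brms package (one of several options); the long-format data frame d, with columns value and person, is hypothetical, and the package's default priors are used rather than the exact ones sketched above.

library(brms)
# d <- data.frame(person = factor(...), value = ...)   # hypothetical long-format data
fit <- brm(value ~ 1 + (1 | person), data = d, chains = 4)
summary(fit)   # population-level mean and between-person sd
coef(fit)      # posterior summaries of the person-level means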
Help select prior function
This is a very common use-case for hierarchical models. These models have a lot of different names, so you'll sometimes see them called multi-level models or mixed-effects models. Hierarchical models
Help select prior function This is a very common use-case for hierarchical models. These models have a lot of different names, so you'll sometimes see them called multi-level models or mixed-effects models. Hierarchical models aren't strictly Bayesian, but, in my experience, they are more commonly fit as Bayesian models than frequentist ones. The basic idea is that you define population parameters and then have members of the population distributed around the population parameters with their own variability. In your example, let $i$ denote the person and $j$ denote the measurement. $X_{ij}$ refers to measurement $j$ on person $i$. Each person does not need to have the same number of measurements, $j$. Each person has their own mean, $\theta_i$, and variance $\sigma^2_i$. The person-level means are distributed around an overall mean, $\mu$. Individual Level: $X_{i, 1}, X_{i ,2}, \dots, X_{i, j} \sim N(\theta_i, \sigma^2_i)$ for $i = 1, \dots, n$ Population Level: $\theta_1, \dots \theta_n \sim N(\mu, \tau) \\ \mu \sim N(0, 10), \; \tau \sim U[0, 10] \; \text{(or some other noninformative priors)}$. You'll be able to get a posterior distribution of each person-level mean, as well as the population-level mean and variance (or precision). Hierarchical models are too big a topic to explain in a Stack Exchange post. You can look into some resources to help you fit these models.
Help select prior function This is a very common use-case for hierarchical models. These models have a lot of different names, so you'll sometimes see them called multi-level models or mixed-effects models. Hierarchical models
37,636
Sklar’s Extension Theorem and support restrictions
Reading Nelsen's 2006 "Introduction to Copulas": $k$-dimensional sub-copulas are defined in a subset of $[0,1]^k$ (their domain). The same is said in Sklar's paper that the OP mentioned. A Copula is the extension of a sub-copula to the whole $[0,1]^k$ (its domain). Sklar's Theorem (the "core" one) then asserts that every "usual" multivariate distribution function can be represented by some Copula, uniquely (continuous rv's), or not (discrete rv's). The OP talks about the "support" of the distribution function, and I suspect they meant the support of the rv's that have this distribution function. This "support" does not enter the picture nor is affected by all the above, because for the Copula what matters is to have as arguments distribution functions, no matter what goes on with the rv's being represented by them. So place any kind of restrictions you want on $(X,Y)$ that have the joint DF $H(x, y)$ and marginals $F(x), G(x)$, because these marginal DFs will range in $[0,1]$ no matter what restrictions you apply on the joint support -and the Copula will be some $C(F(x), G(x)) = H(x,y)$ for the designated support $S_X \times S_Y$ that incorporates your restrictions.
Sklar’s Extension Theorem and support restrictions
Reading Nelsen's 2006 "Introduction to Copulas": $k$-dimensional sub-copulas are defined in a subset of $[0,1]^k$ (their domain). The same is said in Sklar's paper that the OP mentioned. A Copula is
Sklar’s Extension Theorem and support restrictions Reading Nelsen's 2006 "Introduction to Copulas": $k$-dimensional sub-copulas are defined in a subset of $[0,1]^k$ (their domain). The same is said in Sklar's paper that the OP mentioned. A Copula is the extension of a sub-copula to the whole $[0,1]^k$ (its domain). Sklar's Theorem (the "core" one) then asserts that every "usual" multivariate distribution function can be represented by some Copula, uniquely (continuous rv's), or not (discrete rv's). The OP talks about the "support" of the distribution function, and I suspect they meant the support of the rv's that have this distribution function. This "support" does not enter the picture nor is affected by all the above, because for the Copula what matters is to have as arguments distribution functions, no matter what goes on with the rv's being represented by them. So place any kind of restrictions you want on $(X,Y)$ that have the joint DF $H(x, y)$ and marginals $F(x), G(x)$, because these marginal DFs will range in $[0,1]$ no matter what restrictions you apply on the joint support -and the Copula will be some $C(F(x), G(x)) = H(x,y)$ for the designated support $S_X \times S_Y$ that incorporates your restrictions.
Sklar’s Extension Theorem and support restrictions Reading Nelsen's 2006 "Introduction to Copulas": $k$-dimensional sub-copulas are defined in a subset of $[0,1]^k$ (their domain). The same is said in Sklar's paper that the OP mentioned. A Copula is
37,637
What does the term "regularization" refer to specifically?
I agree with the comment that regularization is a way to constrain parameters. In that regard, not only are $L_1$ and $L_2$ penalties regularization, but neural network dropout is regularization, since it forces certain parameters to be zero. Likewise, convolutional neural networks could be argued to perform regularization by forcing some parameters to be equal and others to be zero. Even if the terminology is a bit loose, the field of machine learning seems to do okay. Many fields are loose with terminology, too. For instance, ask a musician to explain the difference between a "riff" and a "lick".
What does the term "regularization" refer to specifically?
I agree with the comment that regularization is a way to constrain parameters. In that regard, not only are $L_1$ and $L_2$ penalties regularization, but neural network dropout is regularization, sinc
What does the term "regularization" refer to specifically? I agree with the comment that regularization is a way to constrain parameters. In that regard, not only are $L_1$ and $L_2$ penalties regularization, but neural network dropout is regularization, since it forces certain parameters to be zero. Likewise, convolutional neural networks could be argued to perform regularization by forcing some parameters to be equal and others to be zero. Even if the terminology is a bit loose, the field of machine learning seems to do okay. Many fields are loose with terminology, too. For instance, ask a musician to explain the difference between a "riff" and a "lick".
What does the term "regularization" refer to specifically? I agree with the comment that regularization is a way to constrain parameters. In that regard, not only are $L_1$ and $L_2$ penalties regularization, but neural network dropout is regularization, sinc
37,638
Is there a standard for statistical symbols?
So I found out that there is a standard. See ISO 3534-1:2006 Statistics — Vocabulary and symbols — Part 1: General statistical terms and terms used in probability ISO 3534-2:2006 Statistics — Vocabulary and symbols — Part 2: Applied statistics ISO 3534-3:2013 Statistics — Vocabulary and symbols — Part 3: Design of experiments ISO 3534-4:2014 Statistics — Vocabulary and symbols — Part 4: Survey sampling
Is there a standard for statistical symbols?
So I found out that there is a standard. See ISO 3534-1:2006 Statistics — Vocabulary and symbols — Part 1: General statistical terms and terms used in probability ISO 3534-2:2006 Statistics — Vocabul
Is there a standard for statistical symbols? So I found out that there is a standard. See ISO 3534-1:2006 Statistics — Vocabulary and symbols — Part 1: General statistical terms and terms used in probability ISO 3534-2:2006 Statistics — Vocabulary and symbols — Part 2: Applied statistics ISO 3534-3:2013 Statistics — Vocabulary and symbols — Part 3: Design of experiments ISO 3534-4:2014 Statistics — Vocabulary and symbols — Part 4: Survey sampling
Is there a standard for statistical symbols? So I found out that there is a standard. See ISO 3534-1:2006 Statistics — Vocabulary and symbols — Part 1: General statistical terms and terms used in probability ISO 3534-2:2006 Statistics — Vocabul
37,639
Is there a standard for statistical symbols?
According to the Oxford dictionary of statistical terms (and probably many other standard works) the term 'standard' has already been claimed by Karl Pearson in 1894. Quote from Contributions to the mathematical theory of evolution (emphasis is mine) In the case of a frequency-curve whose components are two normal curves ... each component normal curve has three variables : (i.) the position of its axis, (ii.) its “standard-deviation” (Gauss’s “Mean Error”, Airy’s “Error of Mean Square“). and (iii.) its area. So, No, there aren't standards in statistics except for standard deviations and standard errors. We can blame professor Pearson for not contacting the International Standards Organization to discuss about standards in statistics (or as Crocefisso notes, not Pearson personally, but his spirit). Apparently, as Kjetil notes, there has been a small resistance and the ISO has also been working on statistics in ISO-3534 Statistics — Vocabulary and symbols (I am not sure I am linking correctly since I am not buying these standards)
Is there a standard for statistical symbols?
According to the Oxford dictionary of statistical terms (and probably many other standard works) the term 'standard' has already been claimed by Karl Pearson in 1894. Quote from Contributions to the m
Is there a standard for statistical symbols? According to the Oxford dictionary of statistical terms (and probably many other standard works) the term 'standard' has already been claimed by Karl Pearson in 1894. Quote from Contributions to the mathematical theory of evolution (emphasis is mine) In the case of a frequency-curve whose components are two normal curves ... each component normal curve has three variables : (i.) the position of its axis, (ii.) its “standard-deviation” (Gauss’s “Mean Error”, Airy’s “Error of Mean Square“). and (iii.) its area. So, No, there aren't standards in statistics except for standard deviations and standard errors. We can blame professor Pearson for not contacting the International Standards Organization to discuss about standards in statistics (or as Crocefisso notes, not Pearson personally, but his spirit). Apparently, as Kjetil notes, there has been a small resistance and the ISO has also been working on statistics in ISO-3534 Statistics — Vocabulary and symbols (I am not sure I am linking correctly since I am not buying these standards)
Is there a standard for statistical symbols? According to the Oxford dictionary of statistical terms (and probably many other standard works) the term 'standard' has already been claimed by Karl Pearson in 1894. Quote from Contributions to the m
37,640
Understanding how to find more "extreme" values when calculating p values in two sided hypothesis tests
Here are the probabilities for $0, 1, \dots, 10$ heads if we throw a coin $n=10$ times under your null hypothesis of $p=0.3$: So let us assume that we have observed $k=5$ heads and wish to run a two-sided test. I have indicated the probability of observing $k=5$ under the null hypothesis of $p=0.3$ with the horizontal red dashed line. Take a look at the bars below that line. What is an extreme outcome? It's an improbable one. Look at the probabilities. An outcome of $k=6$ is even more improbable than one of $k=5$, so it provides even more evidence against the null hypothesis. As do outcomes of $k=7, \dots, 10$. So these are all at least as improbable as the observed $k=5$, i.e., at least as extreme. However, an outcome of $k=0$ would also be at least as improbable as $k=5$. If we had run the experiment twice with two different coins, and observed $k=5$ in one experiment and $k=0$ in the other, we would be more confident in rejecting the null hypothesis in the second than in the first. In particular, when running the experiment only once (and testing two-sidedly), we need to include all events that are at least as improbable as the one we actually observed in calculating the $p$ value. Note that this does not argue for including $k=1$ in our calculation, because it is (slightly) less improbable than the observed $k=5$. However, the difference in the probabilities under the null hypothesis is quite small, so one could reasonably argue that observing $k=1$ provides almost as much evidence against the null hypothesis as $k=5$, and so we should include it in calculating the $p$ value.
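A rough R sketch of this "at least as improbable" rule for the example above (n = 10, p = 0.3, observed k = 5).

n  <- 10; p0 <- 0.3; k <- 5
probs <- dbinom(0:n, size = n, prob = p0)   # null probabilities for 0..10 heads
sum(probs[probs <= dbinom(k, n, p0)])       # two-sided p value, about 0.18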
Understanding how to find more "extreme" values when calculating p values in two sided hypothesis te
Here are the probabilities for $0, 1, \dots, 10$ heads if we throw a coin $n=10$ times under your null hypothesis of $p=0.3$: So let us assume that we have observed $n=5$ heads and wish to run a two-
Understanding how to find more "extreme" values when calculating p values in two sided hypothesis tests Here are the probabilities for $0, 1, \dots, 10$ heads if we throw a coin $n=10$ times under your null hypothesis of $p=0.3$: So let us assume that we have observed $k=5$ heads and wish to run a two-sided test. I have indicated the probability of observing $k=5$ under the null hypothesis of $p=0.3$ with the horizontal red dashed line. Take a look at the bars below that line. What is an extreme outcome? It's an improbable one. Look at the probabilities. An outcome of $k=6$ is even more improbable than one of $k=5$, so it provides even more evidence against the null hypothesis. As do outcomes of $k=7, \dots, 10$. So these are all at least as improbable as the observed $k=5$, i.e., at least as extreme. However, an outcome of $k=0$ would also be at least as improbable as $k=5$. If we had run the experiment twice with two different coins, and observed $k=5$ in one experiment and $k=0$ in the other, we would be more confident in rejecting the null hypothesis in the second than in the first. In particular, when running the experiment only once (and testing two-sidedly), we need to include all events that are at least as improbable as the one we actually observed in calculating the $p$ value. Note that this does not argue for including $k=1$ in our calculation, because it is (slightly) less improbable than the observed $k=5$. However, the difference in the probabilities under the null hypothesis is quite small, so one could reasonably argue that observing $k=1$ provides almost as much evidence against the null hypothesis as $k=5$, and so we should include it in calculating the $p$ value.
Understanding how to find more "extreme" values when calculating p values in two sided hypothesis te Here are the probabilities for $0, 1, \dots, 10$ heads if we throw a coin $n=10$ times under your null hypothesis of $p=0.3$: So let us assume that we have observed $n=5$ heads and wish to run a two-
37,641
Why do model selection (AIC and LOO) outcomes differ between ML and bayesian approaches
Something strikes me as particularly odd when comparing different likelihoods via AIC. Suppose I observed $x=2$. The log likelihood for a gaussian, gamma, and poisson each with mean and variance 1 is about -1.42, -2, and -1.69, respectively. Should I assume this observation came from a gaussian simply because of the likelihood, ignoring details about the data generating process? I don't buy that. In my own opinion, the choice of family comes (partially) prior to modelling. Considering that you are modelling a necessarily non-negative quantity, the choice of Gaussian is suspicious. The areas are large, perhaps large enough to warrant making the gaussian approximation (as is sometimes done with height; the probability of negative height under such a model is negligibly small), but the residual variance of the model is nearly 200. That means that when dB.s=1 (whatever that means, but it happens), 0 is almost 1 standard deviation away and so unphysical areas are not so improbable. In fact, calling simulate on lmm results in negative areas. That means that drawing samples from the distribution learned by your model results in sampling negative areas, which is clearly not physical. From this alone I would opt for the gamma were it my only other choice of family, since it is supported on the non-negative reals (much like area). This doesn't answer your question per se, but I think it does address something important. The choice of the family, in my own opinion and by the arguments I present here, is not something that is chosen in a data-driven fashion, and it probably isn't something you select based on comparing the same models in two different modelling frameworks. Have a think about what you're modelling and what assumptions you are making. That, in part, should help with family selection, and it won't rely on measures of goodness of fit.
Why do model selection (AIC and LOO) outcomes differ between ML and bayesian approaches
Something strikes me as particularly odd when comparing different likelihoods via AIC. Suppose I observed $x=2$. The log likelihood for a gaussian, gamma, and poisson each with mean and variance 1
Why do model selection (AIC and LOO) outcomes differ between ML and bayesian approaches Something strikes me as particularly odd when comparing different likelihoods via AIC. Suppose I observed $x=2$. The log likelihood for a gaussian, gamma, and poisson each with mean and variance 1 is about -1.42, -2, and -1.69, respectively. Should I assume this observation came from a gaussian simply because of the likelihood, ignoring details about the data generating process? I don't buy that. In my own opinion, the choice of family comes (partially) prior to modelling. Considering that you are modelling a necessarily non-negative quantity, the choice of Gaussian is suspicious. The areas are large, perhaps large enough to warrant making the gaussian approximation (as is sometimes done with height; the probability of negative height under such a model is negligibly small), but the residual variance of the model is nearly 200. That means that when dB.s=1 (whatever that means, but it happens), 0 is almost 1 standard deviation away and so unphysical areas are not so improbable. In fact, calling simulate on lmm results in negative areas. That means that drawing samples from the distribution learned by your model results in sampling negative areas, which is clearly not physical. From this alone I would opt for the gamma were it my only other choice of family, since it is supported on the non-negative reals (much like area). This doesn't answer your question per se, but I think it does address something important. The choice of the family, in my own opinion and by the arguments I present here, is not something that is chosen in a data-driven fashion, and it probably isn't something you select based on comparing the same models in two different modelling frameworks. Have a think about what you're modelling and what assumptions you are making. That, in part, should help with family selection, and it won't rely on measures of goodness of fit.
Why do model selection (AIC and LOO) outcomes differ between ML and bayesian approaches Something strikes me as particularly odd when comparing different likelihoods via AIC. Suppose I observed $x=2$. The log likelihood for a gaussian, gamma, and poisson each with mean and variance 1
37,642
A reliance on repeated sampling ideas can lead to logical paradoxes that appear in common rather than esoteric procedures?
Say you have a measurement of a real value contaminated by noise. The measurement is given by $y(n) = x + \omega_n$ where $x$ is assumed to be a constant value. An unbiased estimator of $x$ is the sample mean: $\hat{x}=\frac{\sum_n y(n)}{n}$ and the variance of this estimator is: $var(\hat{x})=\frac{\sigma_\omega^2}{n}$ which can be estimated by $\hat{\sigma}_\hat{x}^2=\frac{1}{n}\cdot\frac{\sum_n \left(y(n)-\frac{\sum_n y(n)}{n}\right)^2}{n-1}$ For repeatable tests, with a constant $x$, $\hat{\sigma}_\hat{x}^2$ should decrease as the number of tests grows. However, if $x$ varies with time, say $x= x(n)$, then the estimated variance will increase rather than decrease.
A reliance on repeated sampling ideas can lead to logical paradoxes that appear in common rather tha
Say you have a measurement of real value interfered with noise. The measurement is given by $y(n) = x + \omega_n$ where $x$ is assumed to be a constant value. An unbiased estimator of $x$ can be the m
A reliance on repeated sampling ideas can lead to logical paradoxes that appear in common rather than esoteric procedures? Say you have a measurement of a real value contaminated by noise. The measurement is given by $y(n) = x + \omega_n$ where $x$ is assumed to be a constant value. An unbiased estimator of $x$ is the sample mean: $\hat{x}=\frac{\sum_n y(n)}{n}$ and the variance of this estimator is: $var(\hat{x})=\frac{\sigma_\omega^2}{n}$ which can be estimated by $\hat{\sigma}_\hat{x}^2=\frac{1}{n}\cdot\frac{\sum_n \left(y(n)-\frac{\sum_n y(n)}{n}\right)^2}{n-1}$ For repeatable tests, with a constant $x$, $\hat{\sigma}_\hat{x}^2$ should decrease as the number of tests grows. However, if $x$ varies with time, say $x= x(n)$, then the estimated variance will increase rather than decrease.
A reliance on repeated sampling ideas can lead to logical paradoxes that appear in common rather tha Say you have a measurement of real value interfered with noise. The measurement is given by $y(n) = x + \omega_n$ where $x$ is assumed to be a constant value. An unbiased estimator of $x$ can be the m
37,643
Is it okay to perform PerMANOVA on PCA values?
I will leave it to others to decide if a permutational multivariate analysis of variance on principal components is 'acceptable' to them. But there isn't any logical error in computing such a statistic. It is simply composing functions of random variables. The more difficult question for you to ask yourself is whether this statistical procedure answers your research question. Recall that PCA involves performing a linear transformation that optimally, in a certain linear sense, decorrelates your variables. Of considerable importance here is that the principal components are ordered by how much variation they linearly explain. Also recall that the permutational analysis of variance tests a null that location and dispersion remain unchanged when values are exchanged among groups. If your perMANOVA was not significant on the original variables, but was on the principal components of those variables, that suggests to me that you have successfully detected that your PCA did something. Namely, it unequalized the location or dispersion among groups. This makes sense given that PCA puts an aforementioned ordering on its components. If that is what you wanted to test with this procedure, then great! If not, then do something else.
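A rough sketch of the procedure under discussion with the vegan package; the matrix Y of original variables and the grouping factor grp are hypothetical, and Euclidean distance on the scores is just one possible choice.

library(vegan)
# Y <- as.matrix(...); grp <- factor(...)       # hypothetical data and groups
pc <- prcomp(Y, scale. = TRUE)                  # PCA on the original variables
scores <- pc$x[, 1:3]                           # keep, say, the first three components
dat <- data.frame(grp = grp)
adonis2(Y ~ grp, data = dat, method = "euclidean")        # perMANOVA on the original variables
adonis2(scores ~ grp, data = dat, method = "euclidean")   # perMANOVA on the PC scores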
Is it okay to perform PerMANOVA on PCA values?
I will leave it to others to decide if a permutational multivariate analysis of variance on principal components is 'acceptable' to them. But there isn't any logical error in computing such a statisti
Is it okay to perform PerMANOVA on PCA values? I will leave it to others to decide if a permutational multivariate analysis of variance on principal components is 'acceptable' to them. But there isn't any logical error in computing such a statistic. It is simply composing functions of random variables. The more difficult question for you to ask yourself is whether this statistical procedure answers your research question. Recall that PCA involves performing a linear transformation that optimally, in a certain linear sense, decorrelates your variables. Of considerable importance here is that the principal components are ordered by how much variation they linearly explain. Also recall that the permutational analysis of variance tests a null that location and dispersion remain unchanged when values are exchanged among groups. If your perMANOVA was not significant on the original variables, but was on the principal components of those variables, that suggests to me that you have successfully detected that your PCA did something. Namely, it unequalized the location or dispersion among groups. This makes sense given that PCA puts an aforementioned ordering on its components. If that is what you wanted to test with this procedure, then great! If not, then do something else.
Is it okay to perform PerMANOVA on PCA values? I will leave it to others to decide if a permutational multivariate analysis of variance on principal components is 'acceptable' to them. But there isn't any logical error in computing such a statisti
37,644
Bayesian Inference in the presence of multiple hypotheses
Arguments that Bayesians do not need to worry about type I errors start from the premise that the type I error rate does not matter/is not a relevant concept* and simply adhere to the likelihood principle**. I don't think this kind of Bayesian viewpoint is compatible with coercing an inferential threshold, but for taking an action it can work well with decision theory; then, however, you really need utility functions for how bad it is to be wrong in what way. * Some Bayesian methods happen to perform well in a frequentist sense, but usually mostly because shrinkage towards plausible parameter values is usually a good thing. ** If you take your data generating method to be "my main claim is the one that has the highest posterior probability", you can of course argue whether the likelihood principle could be seen to tell you to take that selection into account.
Bayesian Inference in the presence of multiple hypotheses
Arguments that Bayesians do not need to worry about type I errors are starting from the premise that the type I error rate does not matter/is not a relevant concept* and simply adhere to the likelihoo
Bayesian Inference in the presence of multiple hypotheses Arguments that Bayesians do not need to worry about type I errors are starting from the premise that the type I error rate does not matter/is not a relevant concept* and simply adhere to the likelihood principle**. I don't think this kind of Bayesian viewpoint is compatible with coercing an inferential threshold, but for taking an action it can work well with decision theory, but then you really need utility functions for how bad it is to be wrong in what way. * Some Bayesian methods happen to perform well in a frequentist sense, but usually mostly because shrinkage towards plausible parmeter values is usually a good thing. ** If you take your data generating method to be "my main claim is the one that has the highest posterior probability", you can of course argue whether the likelihood principle could be seen to tell you to take that selection into account.
Bayesian Inference in the presence of multiple hypotheses Arguments that Bayesians do not need to worry about type I errors are starting from the premise that the type I error rate does not matter/is not a relevant concept* and simply adhere to the likelihoo
37,645
Realistic/intuitive example where a nonadditive loss function is preferred over additive ones
One example that comes to mind is the area under the ROC curve (AUC). For binary classification problems where the model outputs a continuous score (e.g. logistic regression or SVMs), AUC gives the probability that the model will score a randomly selected 'positive' instance higher than a randomly selected 'negative' instance. For evaluating prediction performance, AUC plays the same role as other metrics/loss functions (e.g. misclassification rate, log loss, etc). Namely, it maps predicted scores and true labels to a real number that summarizes performance. And, it can be used as the basis for decision rules; in particular, as an objective function for model selection. Higher AUC is more desirable, so AUC is actually a utility function rather than a loss function. But, this distinction is minor, as one can simply multiply AUC by negative one to obtain the loss incurred by choosing a particular model. Unlike misclassification rate, log loss, etc., AUC is non-additive (in the sense defined in the question). That is, if $y_i$ and $s_i$ are the true label and predicted score for the $i$th test case and $g$ is an arbitrary function, AUC can't be expressed in the form $\sum_{i=1}^n g(y_i, s_i)$. Rather, AUC is calculated by integrating the estimated ROC curve, which consists of the true positive rate vs. false positive rate as the classification threshold is varied. The integral is typically calculated using the trapezoid rule between points on the ROC curve. Although this involves a sum over trapezoids, AUC is non-additive because the area of each trapezoid depends non-additively on the predicted score and true labels of multiple test cases. For details, see section 7 and algorithm 2 in Fawcett (2006). Bradley (1997), Huang and Ling (2005), and others have argued for the use of AUC over accuracy (which is additive). Although AUC has found wide use (e.g. ~247k google scholar results for +auc +classification), there are arguments against it as well; e.g. see Lobo et al. (2008). References Fawcett, T. (2006). An introduction to ROC analysis. Pattern recognition letters, 27(8), 861-874. Bradley, A. P. (1997). The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern recognition, 30(7), 1145-1159. Huang, J., & Ling, C. X. (2005). Using AUC and accuracy in evaluating learning algorithms. IEEE Transactions on knowledge and Data Engineering, 17(3), 299-310. Lobo, J. M., Jimenez‐Valverde, A., & Real, R. (2008). AUC: a misleading measure of the performance of predictive distribution models. Global ecology and Biogeography, 17(2), 145-151.
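A small numpy/sklearn sketch of the non-additivity point (labels and scores are simulated): AUC is a statistic of pairs of test cases, the fraction of (positive, negative) pairs that the scores rank correctly, so it cannot be written as a sum of per-case terms; for continuous scores it matches the trapezoid-based estimate from roc_auc_score.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                      # true labels
s = y + rng.normal(0.0, 1.0, size=200)                # noisy continuous scores

pos, neg = s[y == 1], s[y == 0]
pairwise_auc = (pos[:, None] > neg[None, :]).mean()   # probability a positive outranks a negative

print(pairwise_auc, roc_auc_score(y, s))              # the two estimates agree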
Realistic/intuitive example where a nonadditive loss function is preferred over additive ones
One example that comes to mind is the area under the ROC curve (AUC). For binary classification problems where the model outputs a continuous score (e.g. logistic regression or SVMs), AUC gives the pr
Realistic/intuitive example where a nonadditive loss function is preferred over additive ones One example that comes to mind is the area under the ROC curve (AUC). For binary classification problems where the model outputs a continuous score (e.g. logistic regression or SVMs), AUC gives the probability that the model will score a randomly selected 'positive' instance higher than a randomly selected 'negative' instance. For evaluating prediction performance, AUC plays the same role as other metrics/loss functions (e.g. misclassification rate, log loss, etc). Namely, it maps predicted scores and true labels to a real number that summarizes performance. And, it can be used as the basis for decision rules; in particular, as an objective function for model selection. Higher AUC is more desirable, so AUC is actually a utility function rather than a loss function. But, this distinction is minor, as one can simply multiply AUC by negative one to obtain the loss incurred by choosing a particular model. Unlike misclassification rate, log loss, etc., AUC is non-additive (in the sense defined in the question). That is, if $y_i$ and $s_i$ are the true label and predicted score for the $i$th test case and $g$ is an arbitrary function, AUC can't be expressed in the form $\sum_{i=1}^n g(y_i, s_i)$. Rather, AUC is calculated by integrating the estimated ROC curve, which consists of the true positive rate vs. false positive rate as the classification threshold is varied. The integral is typically calculated using the trapezoid rule between points on the ROC curve. Although this involves a sum over trapezoids, AUC is non-additive because the area of each trapezoid depends non-additively on the predicted score and true labels of multiple test cases. For details, see section 7 and algorithm 2 in Fawcett (2006). Bradley (1997), Huang and Ling (2005), and others have argued for the use of AUC over accuracy (which is additive). Although AUC has found wide use (e.g. ~247k google scholar results for +auc +classification), there are arguments against it as well; e.g. see Lobo et al. (2008). References Fawcett, T. (2006). An introduction to ROC analysis. Pattern recognition letters, 27(8), 861-874. Bradley, A. P. (1997). The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern recognition, 30(7), 1145-1159. Huang, J., & Ling, C. X. (2005). Using AUC and accuracy in evaluating learning algorithms. IEEE Transactions on knowledge and Data Engineering, 17(3), 299-310. Lobo, J. M., Jimenez‐Valverde, A., & Real, R. (2008). AUC: a misleading measure of the performance of predictive distribution models. Global ecology and Biogeography, 17(2), 145-151.
Realistic/intuitive example where a nonadditive loss function is preferred over additive ones One example that comes to mind is the area under the ROC curve (AUC). For binary classification problems where the model outputs a continuous score (e.g. logistic regression or SVMs), AUC gives the pr
37,646
Realistic/intuitive example where a nonadditive loss function is preferred over additive ones
A sensible choice of loss could be the negative of utility. Within the framework of maximization of expected utility (MEU), we would have additive loss, since we would use average (over the set of test cases) negative loss as an estimate of expected utility. On the other hand, we could have nonadditive loss if we abandon MEU. E.g. if we are trying to maximize the utility of the worst outcome, we could choose the negative maximum (over the set of test cases) loss as an estimate of our target. A key observation here is that the maximum is not an additive function.
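A trivial numeric illustration (the per-case losses are made up): the average is a per-case sum divided by n, whereas the worst-case criterion depends jointly on all cases and is therefore non-additive.

import numpy as np

losses = np.array([0.1, 0.2, 0.15, 3.0])   # per-case losses for some model on a test set
print(losses.mean())                       # additive criterion: estimate of expected loss
print(losses.max())                        # non-additive criterion: worst-case loss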
Realistic/intuitive example where a nonadditive loss function is preferred over additive ones
A sensible choice of loss could be the negative of utility. Within the framework of maximization of expected utility (MEU), we would have additive loss, since we would use average (over the set of tes
Realistic/intuitive example where a nonadditive loss function is preferred over additive ones A sensible choice of loss could be the negative of utility. Within the framework of maximization of expected utility (MEU), we would have additive loss, since we would use average (over the set of test cases) negative loss as an estimate of expected utility. On the other hand, we could have nonadditive loss if we abandon MEU. E.g. if we are trying to maximize the utility of the worst outcome, we could choose the negative maximum (over the set of test cases) loss as an estimate of our target. A key observation here is that the maximum is not an additive function.
Realistic/intuitive example where a nonadditive loss function is preferred over additive ones A sensible choice of loss could be the negative of utility. Within the framework of maximization of expected utility (MEU), we would have additive loss, since we would use average (over the set of tes
37,647
If the Bayesian probability is not a belief, what is it?
Perhaps what the author was getting at is the distinction between the posterior distribution and an individual's posterior distribution. Suppose we knew as a fact that some of our parameters followed a given distribution before the data was generated. Then, based on those parameters, we observed some data conditional on those parameters, and we knew the form of that conditional distribution. We could then apply Bayes theorem and know the probability distribution of said parameters, conditional on the data we saw. On the other hand, traditional Bayesian statistics tells us that our uncertainty can be a prior distribution. That's fine and I think most modern statisticians have zero problem with that concept. However, it's sometimes swept under the rug that the final output is still conditional on the original prior, and generally speaking there's no reason to believe the next person will have the same prior as you. To illustrate, consider the following R pseudocode:

# Simulate the mean first, then simulate data
simData = function(n = 10){
  # Simulate uniform(0, 1)
  rand_unif = runif(1, min = 0, max = 1)
  # Mu is either -1 or 1 with probability 0.5
  if(rand_unif > 0.5){
    mu = 1
  } else{
    mu = -1
  }
  # Simulate the data
  output = rnorm(n, mean = mu, sd = 1)
  return(output)
}

Now if I showed you this code in advance and ran the function, you can easily compute the probability that mu = 1 in the given simulation, despite never seeing it. This will unquestionably be correct (up to numerical error). On the other hand, suppose you only know the code up to # Mu is some value <cliffsFavoriteNumber>

simData = function(n = 10){
  mu = cliffsFavoriteNumber
  # Simulate the data
  output = rnorm(n, mean = mu, sd = 1)
  return(output)
}

People other than me don't know the distribution of cliffsFavoriteNumber. Maybe Joe says "I don't think Cliff would choose a big number and it's gotta be a positive integer, so I'd say the prior for mu is a rounded exponential". Bob thinks "Cliff loves multiples of 9, so I'd say a uniform prior on 9, 18, ..., 81". The computations they carry out from there are not mathematically wrong, but they condition on different people's prior beliefs and as such end up with very different answers. Thus, it's a little misleading for Joe to say something like "The posterior distribution is ...". It's more precise to say something like "Conditional on believing that the prior distribution on mu is a rounded exponential, the posterior distribution will be ...".
If the Bayesian probability is not a belief, what is it?
Perhaps what the author was getting as is distinguishing between the posterior distribution and an individual's posterior distribution. Suppose we knew as a fact that some of our parameters followed
If the Bayesian probability is not a belief, what is it? Perhaps what the author was getting as is distinguishing between the posterior distribution and an individual's posterior distribution. Suppose we knew as a fact that some of our parameters followed a given distribution before the data was distributed. Then based on those parameters, we observed some data conditional on those parameters and we knew the form of that conditional distribution. We could then apply Bayes theorem and know the probability distribution of said parameters, conditional on the data we saw. On the other hand, traditional Bayesian statistics tells us that our uncertainty can be a prior distribution. That's fine and I think most modern statisticians have zero problem with that concept. However, it's sometimes swept under the rug that the final output is still conditional on the original prior, and generally speaking there's no reason to believe the next person will have the same prior as you. To illustrate, consider the following R/psuedocode to demonstrate: # Simulate mean first, then simulate data simData = function(n = 10){ # Simulate uniform(0,1) rand_unif = runif(1, min = 0, max = 1) # Mu is either -1 or 1 with probability 0.5 if(rand_unif > 0.5){ mu = 1 } else{ mu = -1 } #Simulate output = rnorm(n, mu = mu, sd = 1) return(output) } Now if I showed you this code in advance and run the function, you can easily compute the probability that mu = 1 in the given simulation, despite never seeing it. This will unquestionable be correct (up to numerical error). On the other hand, suppose you know the code up to # Mu is some value <cliffsFavoriteNumber> simData = function(n = 10){ mu = cliffsFavoriteNumber #Simulate output = rnorm(n, mu = mu, sd = 1) return(output) } People other than me don't know the distribution on cliffsFavoriteNumber. Maybe Joe says "I don't think Cliff would choose a big number and it's gotta be a positive integer, so I'd say the prior for mu is a rounded exponential". Bob thinks "Cliff loves multiples of 9, so I'd say uniform prior on 9, 18, ..., 81". The computations they can compute from there are not mathematically wrong, but they condition on different people's prior believes and as such end up with very different answers. Thus, it's a little misleading for Joe to say something like "The posterior distribution is ...". It's more precise to say something like "Conditional on believing that the prior distribution on mu is a rounded exponential, the posterior distribution will be ...".
If the Bayesian probability is not a belief, what is it? Perhaps what the author was getting as is distinguishing between the posterior distribution and an individual's posterior distribution. Suppose we knew as a fact that some of our parameters followed
37,648
If the Bayesian probability is not a belief, what is it?
The problem is that W can't be integrated out. Taking all the knowledge you have, W, we form our prior, p(Y|W). And with this, we can use Bayes theorem to get our posterior p(Y|XW). W just tags along with all our updates, but we don't actually know p(W). If we did have the probability of our prior beliefs we could integrate W out, but that probability would itself be conditioned on its own prior beliefs. No matter what, there are always prior beliefs that can't be integrated out. I would like to note that many places don't include background knowledge as part of Bayes theorem, but it is there. Bayes theorem, properly expressed, should read $p(y|xw)=\frac{p(x|yw)p(y|w)}{p(x|w)}$. See how $w$ just tags along?
If the Bayesian probability is not a belief, what is it?
The problem is that you W can't be integrated out. Taken all the knowledge you have, W, we form our prior, p(Y|W). And with this, we can use Bayes theorem to get our posterior p(Y|XW). W just tags alo
If the Bayesian probability is not a belief, what is it? The problem is that you W can't be integrated out. Taken all the knowledge you have, W, we form our prior, p(Y|W). And with this, we can use Bayes theorem to get our posterior p(Y|XW). W just tags along with all our updates, but we don't actually know p(W). If we did have the probability of our prior beliefs we could integrate W out, but that probably would itself be conditioned on its own prior beliefs. No matter what there are always prior beliefs that can't be integrated out. I would like to note that many places don't include background knowledge as part of Bayes theorem, but it is there. Bayes theorem as properly expressed should read $p(y|xw)=\frac{p(x|yw)p(y|w)}{p(x|w)}$ See how w just tags along?
If the Bayesian probability is not a belief, what is it? The problem is that you W can't be integrated out. Taken all the knowledge you have, W, we form our prior, p(Y|W). And with this, we can use Bayes theorem to get our posterior p(Y|XW). W just tags alo
37,649
If the Bayesian probability is not a belief, what is it?
If the prior accurately reflects your beliefs about the parameter and you are willing to make the assumption that the model for the data is correct, then the posterior is the rational way to update your beliefs after seeing the data. The assumption that your model for the data is correct is implicit and sometimes not discussed, though it doesn't make sense to use conditional probability notation to make the assumption. Admittedly though, this would draw more attention to the assumptions being made that are sometimes not discussed explicitly. Similarly, $P(X)$ is understood to mean the marginal probability of the data averaged over your prior beliefs. Saying $P(Y)$ does not exist is odd, though it is true that it can be difficult to construct $P(Y)$ in such a way that it does accurately reflect your prior beliefs.
If the Bayesian probability is not a belief, what is it?
If the prior accurately reflects your beliefs about the parameter and you are willing to make the assumption that the model for the data is correct, then the posterior is the rational way to update yo
If the Bayesian probability is not a belief, what is it? If the prior accurately reflects your beliefs about the parameter and you are willing to make the assumption that the model for the data is correct, then the posterior is the rational way to update your beliefs after seeing the data. The assumption that your model for the data is correct is implicit and sometimes not discussed, though it doesn't make sense to use conditional probability notation to make the assumption. Admittedly though, this would draw more attention to the assumptions being made that are sometimes not discussed explicitly. Similarly, $P(X)$ is understood to mean the marginal probability of the data averaged over your prior beliefs. Saying $P(Y)$ does not exist is odd, though it is true that it can be difficult to construct $P(Y)$ in such a way that it does accurately reflect your prior beliefs.
If the Bayesian probability is not a belief, what is it? If the prior accurately reflects your beliefs about the parameter and you are willing to make the assumption that the model for the data is correct, then the posterior is the rational way to update yo
37,650
What is the idea behind Bayes By Backprop?
A VAE is a latent variable model. The encoder estimates, for each input, the corresponding posterior distribution $P(z|x)$ on the latent space $z$. The objective is typically to obtain a density model $P(x)$ of the data. In Bayes by Backprop, the setting is that you want to estimate the posterior distribution over the weights $P(\theta|D)$. This differs from a VAE because: Every single data point $x$ corresponds to a different posterior $P(z|x)$ in a VAE -- and an encoder is used to estimate this posterior. On the other hand, in BBB there is only a single posterior distribution on the weights of the network, which isn't a direct function of any specific datapoint (of course it implicitly depends on the training data as a whole). The VAE in its most popular form opts for a Gaussian prior and posterior, allowing easy computation of the KL term or "complexity cost". BBB opts to estimate this cost term by sampling, with the advantage of allowing more complex prior distributions (such as spike and slab). A mixture density network isn't intrinsically Bayesian in any way that I'm aware of.
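A rough numpy sketch of the Bayes-by-Backprop idea on a toy linear model (data and hyperparameters are made up; for brevity the KL term uses the closed-form Gaussian expression rather than the paper's sampled estimate, which is what would allow priors such as spike-and-slab): a single Gaussian posterior over the weights is trained with the reparameterization trick.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, size=200)
noise_var = 0.01

mu = np.zeros(3)         # posterior mean of the weights
rho = np.full(3, -3.0)   # posterior std is softplus(rho)
lr = 1e-5

for step in range(5000):
    sigma = np.log1p(np.exp(rho))
    eps = rng.normal(size=3)
    w = mu + sigma * eps                            # reparameterised weight sample
    grad_nll_w = -(X.T @ (y - X @ w)) / noise_var   # gradient of the negative log likelihood
    grad_mu = grad_nll_w + mu                       # + d/dmu of KL(q || N(0, I))
    grad_sigma = grad_nll_w * eps + sigma - 1.0 / sigma
    grad_rho = grad_sigma / (1.0 + np.exp(-rho))    # chain rule through the softplus
    mu -= lr * grad_mu
    rho -= lr * grad_rho

print(mu, np.log1p(np.exp(rho)))   # learned posterior mean and std of the weights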
What is the idea behind Bayes By Backprop?
A VAE is a latent variable model. The encoder estimates for each input, the corresponding posterior distribution $P(z|x)$ on the latent space $z$. The objective is typically to obtain a density model
What is the idea behind Bayes By Backprop? A VAE is a latent variable model. The encoder estimates for each input, the corresponding posterior distribution $P(z|x)$ on the latent space $z$. The objective is typically to obtain a density model $P(x)$ of the data. In Bayes by Backprop, the setting is that you want estimate the posterior distribution over the weights $P(\theta|D)$. This differs from a VAE because Every single data point $x$ corresponds to a different posterior $P(z|x)$ in a VAE -- and an encoder is used to estimate this posterior. On the other hand in BBB, there is only a single posterior distribution on the weights of the network, which isn't a direct function of any specific datapoint (of course it implicitly depends on the training data as a whole). The VAE in its most popular form opts for a gaussian prior and posterior, allowing easy computation of the KL term or "complexity cost". BBB opts to estimate this cost term by sampling, with the advantage of allowing more complex prior distributions (such as spike and slab). A mixture density network isn't intrinsically bayesian in any way that I'm aware of.
What is the idea behind Bayes By Backprop? A VAE is a latent variable model. The encoder estimates for each input, the corresponding posterior distribution $P(z|x)$ on the latent space $z$. The objective is typically to obtain a density model
37,651
inferring most important features
There are a lot of options; it depends on what exactly you want. Feature importance or permutation importance Both methods tell you which features are most important for the model. It is a number for each feature. It is calculated after the model is fitted. It doesn't tell you anything about which values of a feature imply what scores. In sklearn most models have model.feature_importances_. The sum of all feature importances is 1. Permutation importance is calculated for a fitted model. It tells you how much the metric worsens if you shuffle the feature column. Pseudo-code:

import numpy as np

model.fit(x_train, y_train)
base_score = model.score(x_dev, y_dev)
perm_imp = np.zeros(x_dev.shape[1])
rng = np.random.default_rng(0)
for i in range(x_dev.shape[1]):
    x_dev_copy = x_dev.copy()
    x_dev_copy[:, i] = rng.permutation(x_dev_copy[:, i])
    perm_score = model.score(x_dev_copy, y_dev)
    perm_imp[i] = (perm_score - base_score) / base_score

You can read more about permutation importance here. Partial Dependence Plots tell you which values of a feature increase or decrease the predicted value. More info on Kaggle: Partial Dependence Plots or go straight to the library PDPbox GitHub. SHAP values explain why the model gives a particular prediction for a given instance, by showing which feature values moved the prediction from the average value to the current value for that instance. Check the SHAP library for more details.
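As a side note, recent versions of scikit-learn also ship a ready-made permutation importance routine, so the loop above does not have to be written by hand (model, x_dev and y_dev below are assumed to be a fitted estimator and held-out data):

from sklearn.inspection import permutation_importance

result = permutation_importance(model, x_dev, y_dev, n_repeats=10, random_state=0)
print(result.importances_mean)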
inferring most important features
There is a lot of options, it depends what exactly do you want. Feature importance or permutation importance Both methods tells you which features are most important for the model. It is a number for
inferring most important features There is a lot of options, it depends what exactly do you want. Feature importance or permutation importance Both methods tells you which features are most important for the model. It is a number for each feature. It is calculated after the model is fitted. It doesn't tell you anything about which values of a feature implies what scores. In sklearn most modelz has model.feature_importances_. Sum of all feature importances is 1. Permutation importance is calculated for a fitted model. It tells you how much the metric worsens if you shuffle the feature column. Pseudo-code: model.fit() base_score = model.score(x_dev, y_dev) for i in range(nr_features): x_dev_copy = copy(x_dev) x_dev_copy[:, i] = shuffle(x_dev_copy[:, i]) perm_score = model.score(x_dev_copy, y_dev) perm_imp[i] = (perm_score - base_score) / base_score You can read more about permutation importance here. Partial Dependence Plots tells you what values of a feature increases/decreases the values of prediction. It looks like this: More info on Kaggle: Partial Dependence Plots or go straight to the library PDPbox GitHub. SHAP value explains why the model gives particular prediction for given instance. It plots the following graph which tells you which feature values moved the prediction from an average value to current value for the current instance. Check SHAP library for more details.
inferring most important features There is a lot of options, it depends what exactly do you want. Feature importance or permutation importance Both methods tells you which features are most important for the model. It is a number for
37,652
Does Leave One Out cross validation increase the chance of overfitting?
An ML model will start to "memorize" the data as the complexity of your algorithm (and its number of parameters) increases, not as the size of the training set grows. Cross-validation is used to estimate your model's performance on data that was not used for training. If you use LOOCV (k = n) then your k models will be (almost) identical. This gives you a high variance in your model evaluation and low bias with respect to the final model trained on the entire data set. Use 10-times 10-fold stratified CV if you are unsure about a good value for k.
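A minimal sklearn sketch of the recommended 10-times 10-fold stratified CV (toy data and a plain logistic regression, purely for illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(), X, y, cv=cv)
print(scores.mean(), scores.std())   # spread across the 100 folds hints at evaluation variance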
Does Leave One Out cross validation increase the chance of overfitting?
ML model will start to "memorize" data with increasing complexity of your algorithm (and its parameters), not the size of training set. Cross-validation is used to estimate your model performance on d
Does Leave One Out cross validation increase the chance of overfitting? ML model will start to "memorize" data with increasing complexity of your algorithm (and its parameters), not the size of training set. Cross-validation is used to estimate your model performance on data that was not used to train. If you use LOOCV (k=n) then your k models will be (almost) identical. This gives you a high variance in your model evaluation and low bias regarding the final model trained on the entire data set. Use 10-times 10-fold stratified CV if you unsure about a good value for k.
Does Leave One Out cross validation increase the chance of overfitting? ML model will start to "memorize" data with increasing complexity of your algorithm (and its parameters), not the size of training set. Cross-validation is used to estimate your model performance on d
37,653
Probability calibration metric for multiclass classifier
Following Guo et al., I ended up using the Expected Calibration Error, defined as $$\sum_{m=1}^M\frac{|B_m|}{n}\left|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\right|$$ In extending this to multiclass, one can either take the maximum probability for each prediction, or average across the top $n$ predictions, if desired.
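A small numpy sketch of the same quantity (equal-width bins and the maximum predicted probability as the confidence are the usual choices, not anything prescribed by the answer):

import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    # probs: (n_samples, n_classes) predicted probabilities; labels: (n_samples,) true class ids
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece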
Probability calibration metric for multiclass classifier
Following Guo et al., I ended up using the Expected Calibration Error, defined as $$\sum_{m=1}^M\frac{|{B_{m}|}}{n}\left|acc(B_m) - conf(B_m)\right|$$ In extending this to multiclass, one can either t
Probability calibration metric for multiclass classifier Following Guo et al., I ended up using the Expected Calibration Error, defined as $$\sum_{m=1}^M\frac{|{B_{m}|}}{n}\left|acc(B_m) - conf(B_m)\right|$$ In extending this to multiclass, one can either take the maximum probability for each prediction, or average across the top $n$ predictions, if desired.
Probability calibration metric for multiclass classifier Following Guo et al., I ended up using the Expected Calibration Error, defined as $$\sum_{m=1}^M\frac{|{B_{m}|}}{n}\left|acc(B_m) - conf(B_m)\right|$$ In extending this to multiclass, one can either t
37,654
Random rotation of a set of distinct points in $R^n$
Restatement of the question. Let $X_1,\ldots,X_M\in \mathbb R^n$ be deterministic vectors such that no two vectors $X_j$ and $X_k$ are identical in all components, and let $A$ be a random unitary matrix drawn from the circular real ensemble CRE$(n)$. Your question asks us to show that for all $1\leq i\leq n$, the random numbers $\{(AX_j)_i\colon j=1,\ldots,M\}$ have the following property: $$ \mathbb P\bigl(\exists j\not= k\in 1,\ldots, M\colon (AX_j)_i=(AX_k)_i\bigr)=0. $$ Proof of the property. By the union bound, $$ \mathbb P\bigl( \exists j\not= k\in 1,\ldots, M\colon (AX_j)_i=(AX_k)_i\bigr)\leq \sum_{j\not=k}\mathbb P\bigl((AX_j)_i=(AX_k)_i\bigr). $$ Since the random vector $A(X_j-X_k)$ is uniformly distributed on the sphere of radius $\|X_j-X_k\|$ (and $\|X_j-X_k\|>0$ by hypothesis), it follows that the event $$\Bigl\{\bigl(A(X_j-X_k)\bigr)_i=0\Bigr\}$$ has probability $0$. Indeed, this event describes a uniformly random element of the sphere belonging to a certain great circle, which has codimension $1$. Thus the probability in question is zero as well.
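A quick numerical sanity check of the claim (a sketch only; it uses scipy's Haar-distributed orthogonal sampler and a handful of random points):

import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(0)
n, M = 4, 6
X = rng.normal(size=(M, n))                  # M points in R^n, distinct with probability one
A = ortho_group.rvs(dim=n, random_state=0)   # Haar-distributed orthogonal matrix
coords = (X @ A.T)[:, 0]                     # first coordinate of each rotated point
print(len(np.unique(coords)) == M)           # almost surely True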
Random rotation of a set of distinct points in $R^n$
Restatement of the question. Let $X_1,\ldots,X_M\in \mathbb R^n$ be deterministic vectors such that no two vectors $X_j$ and $X_k$ are identical in all components, and let $A$ be a random unitary matr
Random rotation of a set of distinct points in $R^n$ Restatement of the question. Let $X_1,\ldots,X_M\in \mathbb R^n$ be deterministic vectors such that no two vectors $X_j$ and $X_k$ are identical in all components, and let $A$ be a random unitary matrix drawn from the circular real ensemble CRE$(n)$. Your question asks us to show that for all $1\leq i\leq n$, the random numbers $\{(AX_j)_i\colon j=1,\ldots,M\}$ have the following property: $$ \mathbb P\bigl(\exists j\not= k\in 1,\ldots, M\colon (AX_j)_i=(AX_k)_i\bigr)=0. $$ Proof of the property. By the union bound, $$ \mathbb P\bigl( \exists j\not= k\in 1,\ldots, M\colon (AX_j)_i=(AX_k)_i\bigr)\leq \sum_{j\not=k}\mathbb P\bigl((AX_j)_i=(AX_k)_i\bigr). $$ Since the random vector $A(X_j-X_k)$ is uniformly distributed on the sphere of radius $\|X_j-X_k\|$ (and $\|X_j-X_k\|>0$ by hypothesis), it follows that the event $$\Bigl\{\bigl(A(X_j-X_k)\bigr)_i=0\Bigr\}$$ has probability $0$. Indeed, this event describes a uniformly random element of the sphere belonging to a certain great circle, which has codimension $1$. Thus the probability in question is zero as well.
Random rotation of a set of distinct points in $R^n$ Restatement of the question. Let $X_1,\ldots,X_M\in \mathbb R^n$ be deterministic vectors such that no two vectors $X_j$ and $X_k$ are identical in all components, and let $A$ be a random unitary matr
37,655
Random rotation of a set of distinct points in $R^n$
Consider $y = Ux$ where $U$ is some random matrix (orthogonal or otherwise). Since you are interested in only the $m^{th}$ coordinate, only the $m^{th}$ row of $U$ is relevant to your work. Therefore, you are basically only interested in the properties of $z_{i} = u^{T}x_{i}$, where $u$ is the $m^{th}$ row of the matrix. The orthogonal matrix part does not seem to be particularly relevant; for your case, $u$ is essentially a random unit vector. Suppose the $i^{th}$ and $j^{th}$ rotated values are the same. Then we have $u^{T}(x_{i} - x_{j}) = 0$. This implies that $u^{T}(x_{i} - x_{k}) = u^{T}(x_{j} - x_{k})$ for all $k$. Maybe you can use something like this if your points have some structure? Generally, nothing can be said. If $x_{i} = x_{j}$ for any $(i,j)$ pair, the result you are thinking of doesn't hold.
Random rotation of a set of distinct points in $R^n$
Consider $y = Ux$ where $U$ is some random matrix (orthogonal or otherwise). Since you are interested in only the $m^{th}$ coordinate, only the $m^{th}$ row of $U$ is relevant to your work. Therefore
Random rotation of a set of distinct points in $R^n$ Consider $y = Ux$ where $U$ is some random matrix (orthogonal or otherwise). Since you are interested in only the $m^{th}$ coordinate, only the $m^{th}$ row of $U$ is relevant to your work. Therefore, you are basically only interested in the properties of $z_{i} = u^{T}x_{i}$, where $u$ is the $m^{th}$ row of the matrix. The orthogonal matrix part does not seem to be particularly relevant; for your case, $u$ is essentially a random unit vector. Suppose the $i^{th}$ and $j^{th}$ rotation are the same. Then we have $u^{T}(x_{i} -x_{j}) = 0$. This implies that $u^{T}(x_{i} - x_{k}) = - u^{T}(x_{j} - x_{k})$ for all $k$. Maybe you can use something like this if your points have some structure? Generally, nothing can be said. If $x_{i} = x_{j}$ for any $(i,j)$ pair the result you are thinking of doesnt hold
Random rotation of a set of distinct points in $R^n$ Consider $y = Ux$ where $U$ is some random matrix (orthogonal or otherwise). Since you are interested in only the $m^{th}$ coordinate, only the $m^{th}$ row of $U$ is relevant to your work. Therefore
37,656
Independence of random vectors
In the general case, no; there is a distinction in probability theory between pairwise independence and mutual independence. In the special case where the two random vectors have a joint multivariate normal distribution, yes; the dependence between the random vectors is only through the second moment of the distribution, so pairwise independence between all pairs (or even pairwise uncorrelatedness) is sufficient to ensure mutual independence.
Independence of random vectors
In the general case, no; there is a distinction in probability theory between pairwise independence and mutual independence. In the special case where the two random vectors have a joint multivariate
Independence of random vectors In the general case, no; there is a distinction in probability theory between pairwise independence and mutual independence. In the special case where the two random vectors have a joint multivariate normal distribution, yes; the dependence between the random vectors is only through the second moment of the distribution, so pairwise independence between all pairs (or even pairwise uncorrelatedness) is sufficient to ensure mutual independence.
Independence of random vectors In the general case, no; there is a distinction in probability theory between pairwise independence and mutual independence. In the special case where the two random vectors have a joint multivariate
37,657
Train test split with time and person indexed data
Since this is a time-series data set the temporal issue is of utmost importance, meaning you really cannot use future data in order to predict past events. Thus, the split by Subject doesn't make much sense here since you'd be training your model with future observations but testing it on past ones (e.g. train on period {1 3 4 5 6} and test on period {2}). The flow should follow the pattern below, with new data that comes in being ingested into the training set (blue dots) and used to predict the target variable in the future (red dots). In light of this, I would use only the split by t, as in the right-side table. There is a similar thread here and you can read more about time series cross validation here.
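A minimal sklearn sketch of this expanding-window idea (the toy index array just stands in for time-ordered observations): every training fold precedes its test fold.

import numpy as np
from sklearn.model_selection import TimeSeriesSplit

t = np.arange(12)   # observations ordered in time
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(t):
    print("train:", train_idx, "test:", test_idx)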
Train test split with time and person indexed data
Since this is a time-series data set the temporal issue is of utmost importance, meaning you really cannot use future data in order to predict past events. Thus, the split by Subject doesn't make much
Train test split with time and person indexed data Since this is a time-series data set the temporal issue is of utmost importance, meaning you really cannot use future data in order to predict past events. Thus, the split by Subject doesn't make much sense here since you'd be training your model with future observations but testing it on past ones (e.g. train on period {1 3 4 5 6} and test on period {2}). The flow should follow the pattern below, with new data that comes in being ingested into the training set (blue dots) and used to predict the target variable in the future (red dots). In light of this, I would use only the split by t, as in the right-side table. There is a similar thread here and you can read more about time series cross validation here.
Train test split with time and person indexed data Since this is a time-series data set the temporal issue is of utmost importance, meaning you really cannot use future data in order to predict past events. Thus, the split by Subject doesn't make much
37,658
Train test split with time and person indexed data
I would go for train/validation/test sets if you have enough data. Choose your validation set to be as similar as possible to the test set, which in turn should be as close as possible to testing the performance of your model in a production environment. In this case, is the main focus to predict the target for new subjects (then split 1 would be better) or mainly for existing subjects (split 2)? If the focus is on existing subjects (split 2) you could also add features with historical information (e.g. time elapsed since the last target, etc.), which would be more difficult to do for new subjects as their history will be much shorter. Note: if this problem is a time series analysis, you should use cross-validation only if you have no temporal issues / you de-trended all features. If you have temporal issues, you should use a train/validation/test split.
Train test split with time and person indexed data
I would go for a train/validation/test sets if you have enough data. Choose your validation set as close as possible as the test set, which should be as close as possible to testing the performance of
Train test split with time and person indexed data I would go for a train/validation/test sets if you have enough data. Choose your validation set as close as possible as the test set, which should be as close as possible to testing the performance of your model in a productive environment. In this case, is the main focus to predict the target for new subjects (then split 1 would be better) or mainly for existing subjects (split 2)? If the focus is for existing subjects (split 2) you could also add features with historical information (e.g. time elapsed since last target, etc.). Which would be more difficult to do for new customers as their history will be much shorter. Note: if this problem is a time series analysis, you should use cross-validation only if you have no temporal issues / you de-trended all features. If you have temporal issues, you should use a train/validation/test set.
Train test split with time and person indexed data I would go for a train/validation/test sets if you have enough data. Choose your validation set as close as possible as the test set, which should be as close as possible to testing the performance of
37,659
Train test split with time and person indexed data
Analysts trained in machine learning, whose main goal is optimizing computational efficiency when processing massive data, ignore any inherent variance and structure in the data by treating each observation as iid (for a recent statement of this see Sirignano, et al., Deep Learning for Mortgage Risk, https://arxiv.org/pdf/1607.02470.pdf). Sampling using the iid rule would consist of random draws from the data to create train and test groups by time in this example. A statistician would argue that the ML method destroys structure and variance which can only be explained, preserved and recovered by partitioning train and test on, in this example, subjects. Sirignano, et al., explicitly compare the predictive accuracy of deep learning NNs with a baseline logistic regression and conclude that, using the iid rule, logistic regression is grossly inaccurate in comparison to NNs. But is this a fair comparison? One could argue that a comparison based on iid sampling leaves LR with one hand tied behind its back. I'm not aware of papers which use the statistician's rule and do the reverse comparison. In other words, how does the predictive accuracy of the two methods (LR and NNs) compare when the inherent structure and variance in the data is preserved? These issues bring up one of the biggest limitations of NNs: the requirement to convert multilevel categorical fields into 0,1 dummy variables. To take an extreme example, residential US zip codes are a ~36,000-level, massively categorical feature. Workarounds have been proposed in the literature enabling statistical, structure-preserving modeling that facilitates estimating how even such a massively categorical feature explains variance in relative importance terms. On the other hand, NNs would require converting this massive feature into ~36,000 dummy variables, which would not only create a profusion of useless features (inevitably slowing down convergence) but also confuse the summary of results -- who cares about a specific zip code? Here's a theoretical graphic of performance vs amount of data for which I've lost the reference: There are some CV threads with related discussions, e.g.: Reference showing that only deep learning algorithms benefit from using huge datasets Fitting multilevel categorical variables with neural nets
Train test split with time and person indexed data
Analysts trained in machine learning, whose main goal is optimizing computational efficiency when processing massive data, ignore any inherent variance and structure in data by treating each observati
Train test split with time and person indexed data Analysts trained in machine learning, whose main goal is optimizing computational efficiency when processing massive data, ignore any inherent variance and structure in data by treating each observation as iid (for a recent statement of this see Sirignano, et al., Deep Learning for Mortgage Risk, https://arxiv.org/pdf/1607.02470.pdf). Sampling using the iid rule would consist of random draws from the data to create train and test groups by time in this example. A statistician would argue that the ML method destroys structure and variance which can only be explained, preserved and recovered by partitioning train and test on, in this example, subjects. Sirignano, et al., explicitly compare the predictive accuracy of deep learning NNs with a baseline logistic regression and conclude that, using the iid rule, logistic regression is grossly inaccurate in comparison to NNs. But is this a fair comparison? One could argue that a comparison based on iid sampling leaves LR with one hand tied behind its back. I'm not aware of papers which use the statistician's rule and does the reverse comparison. In other words, how does the predictive accuracy of the two methods (LR and NNs) compare when the inherent structure and variance in the data is preserved? These issues bring up one of the biggest limitations of NNs: the requirement to convert multilevel categorical fields into 0,1 dummy variables. To take an extreme example residential US zip codes are an ~36,000 level massively categorical feature. Workarounds have been proposed in the literature enabling statistical, structure preserving modeling that facilitate estimating how even such a massively categorical feature explains variance in relative importance terms. On the other hand NNs would require converting this massive feature into ~36,000 dummy variables that would not only create a profusion of useless features (inevitably slowing down convergence) but also confusing the summary of results -- who cares about a specific zip code? Here's a theoretical graphic of performance vs amount of data for which I've lost the reference: There are some CV threads with related discussions, e.g.: Reference showing that only deep learning algorithms benefit from using huge datasets Fitting multilevel categorical variables with neural nets
Train test split with time and person indexed data Analysts trained in machine learning, whose main goal is optimizing computational efficiency when processing massive data, ignore any inherent variance and structure in data by treating each observati
37,660
Why are (almost) all of my corrected (Benjamini-Hochberg) p-values equal?
From what I understand, p.adjust("BH") returns the lowest alpha (FDR threshold) for which the test can be considered significant. So, 0.3779828 means that the test will be significant only if you accept an FDR of 0.3779828 or higher. In this case you will have 11/12 tests considered significant, and you are accepting that 38% of these will be significant by pure chance. Similarly, the highest value in your example will only be significant if you accept a false discovery rate of 0.8125681, which is quite high. p.s. I confess I cannot reproduce it with code, but this is a start:

library(dplyr)

original_pval <- c(0.8125681, 0.3411442, 0.317672, 0.3464842, 0.2220076, 0.2576271,
                   0.1929609, 0.275641, 0.3180882, 0.1962801, 0.219256, 0.1734164)

manual_BH = function(pvals) {
  data.frame(pval = pvals) %>%
    mutate(BH = p.adjust(pval, "BH")) %>%
    arrange(pval) %>%
    mutate(j = rank(pval), m = n()) %>%
    mutate(BHmanual_base = j / m) %>%
    mutate(BHmanual03 = 0.3 * j / m, BHsignif03 = pval <= BHmanual03) %>%
    mutate(BHmanual04 = 0.4 * j / m, BHsignif04 = pval <= BHmanual04) %>%
    mutate(BHmanual08 = 0.8 * j / m, BHsignif08 = pval <= BHmanual08)
}

manual_BH(original_pval)
Why are (almost) all of my corrected (Benjamini-Hochberg) p-values equal?
From what I understand p.adjust("BH") returns the lowest alpha (FDR threshold) for which the test can be considered significant. So, 0.3779828 means that the test will be significant only if you accep
Why are (almost) all of my corrected (Benjamini-Hochberg) p-values equal? From what I understand p.adjust("BH") returns the lowest alpha (FDR threshold) for which the test can be considered significant. So, 0.3779828 means that the test will be significant only if you accept a FDR of 0.3779828 or higher. In this case you will have 11/12 tests considered significant, and you are accepting that 38% of these will be significant by pure chance. Similar, the highest example in your example will only be significant if you accept a false discovery rate of 0.8125681, which is quite high. p.s. I confess I cannot reproduce it with code, but this is a start: original_pval <- c(0.8125681, 0.3411442, 0.317672, 0.3464842, 0.2220076, 0.2576271, 0.1929609, 0.275641, 0.3180882, 0.1962801, 0.219256, 0.1734164) manual_BH = function (pvals ) { data.frame(pval = pvals) %>% mutate(BH=p.adjust(pval, "BH")) %>% arrange(pval) %>% mutate(j = rank(pval), m = n()) %>% mutate(BHmanual_base = j/m) %>% mutate(BHmanual03 = 0.3 * j / m, BHsignif03 = pval <= BHmanual03) %>% mutate(BHmanual04 = 0.4 * j / m, BHsignifc04 = pval <= BHmanual04) %>% mutate(BHmanual08 = 0.8 * j / m, BHsignifc08 = pval <= BHmanual08) } manual_BH(original_pval)
Why are (almost) all of my corrected (Benjamini-Hochberg) p-values equal? From what I understand p.adjust("BH") returns the lowest alpha (FDR threshold) for which the test can be considered significant. So, 0.3779828 means that the test will be significant only if you accep
37,661
Is there current consensus on the value of the Information Bottleneck Principle to understanding Deep Learning?
What I will say here is that the proofs that compression guarantees a better lower bound on generalization are accepted, but it is not widely accepted that this lower bound is practically relevant. For example, a model with better compression might increase the lower bound from 1.0 to 1.5, but that might not be relevant if all models are already performing at 2.0-2.5. Likewise, I think it's apparent that while compression is sufficient for some amount of guaranteed generalization, it's clearly not necessary (for example, invertible neural networks can get just fine generalization). Probably the right conclusion is that the theory and analysis are a useful direction, but it's unclear whether they say anything about real networks.
Is there current consensus on the value of the Information Bottleneck Principle to understanding Dee
What I will say here is that the proofs that compression guarantees a better lower bound on generalization are accepted, but it's not widely accepted if this lower bound is practically relevant. For
Is there current consensus on the value of the Information Bottleneck Principle to understanding Deep Learning? What I will say here is that the proofs that compression guarantees a better lower bound on generalization are accepted, but it's not widely accepted if this lower bound is practically relevant. For example, a model with better compression might increase the lower bound from 1.0 to 1.5, but it might not be relevant if all models are already performing from 2.0-2.5. Likewise, I think it's apparent that while compression is sufficient for some amount of guaranteed generalization, it's clearly not necessary (for example, invertible neural networks can get just fine generalization). Probably the right conclusion is that the theory and analysis are a useful direction but it's unclear if it says anything about real networks.
Is there current consensus on the value of the Information Bottleneck Principle to understanding Dee What I will say here is that the proofs that compression guarantees a better lower bound on generalization are accepted, but it's not widely accepted if this lower bound is practically relevant. For
37,662
Guassian Process for Data Imputation
While I cannot answer all of your questions and I am unable to post a comment due to lack of reputation, in response to this: Is there a defensible way to constrain the length scale parameters in the model (the l's) based on the frequencies I am working with (1.5 Hz, .25 Hz and the x-axis in seconds downsampled to 10 Hz). You may want to look at a spectral mixture kernel. This basically involves representing your covariance function by its Fourier transform (its spectral density), and thus it deals in frequencies as opposed to distances. This may more naturally encode some of your prior information. You might find this page useful if this sounds interesting, and the original thesis is here.
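A rough numpy sketch of the one-dimensional spectral mixture kernel (the formula follows Wilson & Adams, 2013; the component weights, frequencies and bandwidths below are placeholder values loosely matching the frequencies in the question):

import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    # k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q)
    tau = np.asarray(tau, dtype=float)[..., None]
    return np.sum(weights * np.exp(-2.0 * np.pi**2 * tau**2 * variances)
                  * np.cos(2.0 * np.pi * tau * means), axis=-1)

taus = np.linspace(0.0, 4.0, 41)   # lags in seconds
k = spectral_mixture_kernel(taus,
                            weights=np.array([1.0, 0.5]),
                            means=np.array([1.5, 0.25]),      # component frequencies in Hz
                            variances=np.array([0.01, 0.01]))
print(k[:5])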
Guassian Process for Data Imputation
While I cannot answer all of your questions and I am unable to post a comment due to lack of reputation, in response to this: Is there a defensible way to constrain the length scale parameters in the
Guassian Process for Data Imputation While I cannot answer all of your questions and I am unable to post a comment due to lack of reputation, in response to this: Is there a defensible way to constrain the length scale parameters in the >model (the l's) based on the frequencies I am working with (1.5 Hz, .25 Hz and >the x-axis in seconds downsampled to 10 Hz). You may want to look at a spectral mixture kernel. This basically involves representing your covariance matrix by it's Fourier transform, and thus deals in frequencies as opposed to distances. This may more naturally encode some of your prior information. You might find this page useful if this sounds interesting, and the original thesis is here.
Guassian Process for Data Imputation While I cannot answer all of your questions and I am unable to post a comment due to lack of reputation, in response to this: Is there a defensible way to constrain the length scale parameters in the
37,663
How Do You Choose The Number of Bins To Use For A Chi-Squared GOF Test?
Is this discrepancy between outcomes for different bin sizes something that I should have known about*, or is indicative of some larger problem in my proposed data analysis? The binning of the radioactive decay sample set is a red herring here. The real problem originates from the fact that chi-square (alongside other hypothesis testing frameworks) is highly sensitive to sample size. In the case of chi-square, as the sample size increases, absolute differences become an increasingly smaller portion of the expected value. As such, if the sample size is very large we may find small p-values and statistical significance when the findings are small and uninteresting. Conversely, a reasonably strong association may not come up as significant if the sample size is small. Is there a good rule of thumb for choosing bin sizes when doing a χ2 GOF test? The answer seems to be that one should not aim to find the right N (I am not sure it is doable, but it would be great if someone else chips in to contradict this), but to look beyond p-values alone when N is high. This seems a good paper on the subject: Too Big to Fail: Large Samples and the p-Value Problem P.S. There are alternatives to the χ2 test such as Cramer's V and the G-test; however, you will still hit the same issues with large N -> small p-value.
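A small scipy sketch of the sample-size sensitivity (the proportions are made up): the same modest relative deviation from a uniform expectation gives a large p-value at small N and a tiny one at large N.

import numpy as np
from scipy.stats import chisquare

proportions = np.array([0.26, 0.25, 0.25, 0.24])
for n in (100, 1000, 100000):
    observed = proportions * n
    expected = np.full(4, n / 4)
    stat, p = chisquare(observed, expected)
    print(n, round(stat, 2), p)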
How Do You Choose The Number of Bins To Use For A Chi-Squared GOF Test?
Is this discrepancy between outcomes for different bin sizes something that I should have known about*, or is indicative of some larger problem in my proposed data analysis? The binning of the ra
How Do You Choose The Number of Bins To Use For A Chi-Squared GOF Test? Is this discrepancy between outcomes for different bin sizes something that I should have known about*, or is indicative of some larger problem in my proposed data analysis? The binning of the radioactive decay sample set is a red herring here. The real problem originates from the fact that chi-square (alongside other hypothesis testing frameworks) is highly sensitive to sample size. In the case of chi-square, as sample size increases, absolute differences become an increasingly smaller portion of the expected value. As such, if the sample size is very large we may find small p-values and statistical significance when the findings are small and uninteresting. Conversely, a reasonably strong association may not come up as significant if the sample size is small. Is there a good rule of thumb for choosing bin sizes when doing a χ2 GOF test? The answer seems that one should not aim to find the right N (I am not sure it is doable, but would be great if someone else chips in to contradict), but look beyond p-values solely when N is high. This seems a good paper on the subject: Too Big to Fail: Large Samples and the p-Value Problem P.S. There are alternatives to χ2 test such as Cramer's V and G-Test; however you will still hit the same issues with large N -> small p-value.
How Do You Choose The Number of Bins To Use For A Chi-Squared GOF Test? Is this discrepancy between outcomes for different bin sizes something that I should have known about*, or is indicative of some larger problem in my proposed data analysis? The binning of the ra
37,664
Why deep learning prefer the probability distribution with a sharp point?
These two distributions have a connection with deep learning via regularisation. In deep learning we are often concerned with regularising the parameters of a neural network because neural networks tend to overfit and we want to improve the ability of the model to generalise to new data. From a Bayesian perspective, fitting a regularised model can be interpreted as computing the maximum a posteriori (MAP) estimate given a specific prior distribution over the weights $w_i$. In particular, the $L^2$ (a.k.a. weight decay) norm corresponds to a Gaussian prior on the weights $w$, and the $L^1$ norm corresponds to an isotropic Laplace prior over the weights $w$. The $L^1$ norm (a.k.a. the Laplace prior on weights), by virtue of its sharpness, encourages sparsity (many zeros) in $w$ for reasons explained here: Why L1 norm for sparse models . This type of regularisation can be quite desirable. The connection between the Laplacian distribution and the $L^1$ norm is explained in more detail here: Why is Lasso penalty equivalent to the double exponential (Laplace) prior? Most of what I have mentioned here is discussed in more detail in the same "Deep Learning" book in section 5.6.1 and section 7.1.2.
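As a quick visual of the "sharp point", here is a small base-R sketch plotting the two densities being contrasted (unit-scale parameters chosen arbitrarily for illustration):

x <- seq(-4, 4, length.out = 400)
gaussian <- dnorm(x)                  # N(0, 1): smooth at zero, corresponds to the L2 / weight-decay prior
laplace  <- 0.5 * exp(-abs(x))        # Laplace(0, 1): sharp peak at zero, corresponds to the L1 prior
plot(x, laplace, type = "l", ylab = "density")
lines(x, gaussian, lty = 2)
legend("topright", c("Laplace (L1)", "Gaussian (L2)"), lty = 1:2)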
Why deep learning prefer the probability distribution with a sharp point?
These two distribution have a connection with deep learning via regularisation. In deep learning we are often concerned with regularising the parameters of a neural network because neural networks ten
Why deep learning prefer the probability distribution with a sharp point? These two distributions have a connection with deep learning via regularisation. In deep learning we are often concerned with regularising the parameters of a neural network because neural networks tend to overfit and we want to improve the ability of the model to generalise to new data. From a Bayesian perspective, fitting a regularised model can be interpreted as computing the maximum a posteriori (MAP) estimate given a specific prior distribution over the weights $w_i$. In particular, the $L^2$ (a.k.a. weight decay) norm corresponds to a Gaussian prior on the weights $w$, and the $L^1$ norm corresponds to an isotropic Laplace prior over the weights $w$. The $L^1$ norm (a.k.a. the Laplace prior on weights), by virtue of its sharpness, encourages sparsity (many zeros) in $w$ for reasons explained here: Why L1 norm for sparse models . This type of regularisation can be quite desirable. The connection between the Laplacian distribution and the $L^1$ norm is explained in more detail here: Why is Lasso penalty equivalent to the double exponential (Laplace) prior? Most of what I have mentioned here is discussed in more detail in the same "Deep Learning" book in section 5.6.1 and section 7.1.2.
Why deep learning prefer the probability distribution with a sharp point? These two distribution have a connection with deep learning via regularisation. In deep learning we are often concerned with regularising the parameters of a neural network because neural networks ten
37,665
Probability distribution models compatible with quantile regression
Question 1: In case of a single quantile, the quantile estimator is the maximum likelihood estimator of an asymmetric double exponential (a.k.a. Laplace) distribution that may look like this: (Picture borrowed from Abeywardana "Deep Quantile Regression" (2018).) Thanks to @machazthegamer and @Dave for helpful links in the comments. Question 2: In case of multiple quantiles, I doubt there can be a simple expression unless one puts some strong restrictions on the relationships between the slopes at the different quantiles for tractability. (Answers with concrete examples of such restrictions and the resulting tractable distributions are still welcome.)
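To make the Question 1 claim concrete, here is a small R sketch on toy data showing that minimising the usual check (pinball) loss for a quantile tau and maximising a unit-scale asymmetric-Laplace likelihood give the same location estimate:

set.seed(1)
y <- rexp(200)                                   # toy sample, not from the question
tau <- 0.9
check_loss <- function(m) sum((y - m) * (tau - (y < m)))
# asymmetric Laplace log-density (unit scale): log(tau * (1 - tau)) - rho_tau(y - m)
ald_negloglik <- function(m) -sum(log(tau * (1 - tau)) - (y - m) * (tau - (y < m)))
optimize(check_loss, range(y))$minimum
optimize(ald_negloglik, range(y))$minimum        # same minimiser as the check loss
quantile(y, tau)                                 # and close to the sample quantile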
Probability distribution models compatible with quantile regression
Question 1: In case of a single quantile, the quantile estimator is the maximum likelihood estimator of an asymmetric double exponential (a.k.a. Laplace) distribution that may look like this: (Pictur
Probability distribution models compatible with quantile regression Question 1: In case of a single quantile, the quantile estimator is the maximum likelihood estimator of an asymmetric double exponential (a.k.a. Laplace) distribution that may look like this: (Picture borrowed from Abeywardana "Deep Quantile Regression" (2018).) Thanks to @machazthegamer and @Dave for helpful links in the comments. Question 2: In case of multiple quantiles, I doubt there can be a simple expression unless one puts some strong restrictions on the relationships between the slopes at the different quantiles for tractability. (Answers with concrete examples of such restrictions and the resulting tractable distributions are still welcome.)
Probability distribution models compatible with quantile regression Question 1: In case of a single quantile, the quantile estimator is the maximum likelihood estimator of an asymmetric double exponential (a.k.a. Laplace) distribution that may look like this: (Pictur
37,666
Cases where TDA outperforms public benchmarks?
Comparisons between ML at large and topological data analysis (TDA) are difficult. This is in part because much of what TDA attempts to do isn't even on the menu for most ML projects. One place (other than article databases, of course) to look for examples of TDA is the Applied Algebraic Topology Research Network's website, or their YouTube channel. I am relatively new to TDA, and primarily focusing on persistent homology at this time, but I will offer a purported example. In a talk given by Renata Turkeš titled On the effectiveness of persistent homology, they describe a comparison of deep neural networks vs. other techniques, including the classic persistent homology algorithm, in predicting the correct homology class, and the latter was non-dominated by the deep neural network architecture. Such comparisons only look at a small fraction of the possible models, datasets, and other choices. So arguments over the superiority of broad frameworks are often quite drawn out, and quite uninteresting in my opinion. There is almost always a Pareto front lurking near such debates. But if you want to get into the tradeoffs of the comparisons that were made in this particular case, I would recommend reading Turkeš 2022 to dig a little deeper. Figure 3 is the bar plot they showed in the talk.
Cases where TDA outperforms public benchmarks?
Comparisons between ML-at-large to topological data analysis (TDA) is difficult. This in part because much of what TDA attempts to do isn't even on the menu for most ML projects. One place (other than
Cases where TDA outperforms public benchmarks? Comparisons between ML at large and topological data analysis (TDA) are difficult. This is in part because much of what TDA attempts to do isn't even on the menu for most ML projects. One place (other than article databases, of course) to look for examples of TDA is the Applied Algebraic Topology Research Network's website, or their YouTube channel. I am relatively new to TDA, and primarily focusing on persistent homology at this time, but I will offer a purported example. In a talk given by Renata Turkeš titled On the effectiveness of persistent homology, they describe a comparison of deep neural networks vs. other techniques, including the classic persistent homology algorithm, in predicting the correct homology class, and the latter was non-dominated by the deep neural network architecture. Such comparisons only look at a small fraction of the possible models, datasets, and other choices. So arguments over the superiority of broad frameworks are often quite drawn out, and quite uninteresting in my opinion. There is almost always a Pareto front lurking near such debates. But if you want to get into the tradeoffs of the comparisons that were made in this particular case, I would recommend reading Turkeš 2022 to dig a little deeper. Figure 3 is the bar plot they showed in the talk.
Cases where TDA outperforms public benchmarks? Comparisons between ML-at-large to topological data analysis (TDA) is difficult. This in part because much of what TDA attempts to do isn't even on the menu for most ML projects. One place (other than
37,667
Cases where TDA outperforms public benchmarks?
As of 2016, it seems difficult to concretely answer the question. Perhaps an ambitious, risk-seeking, grad student will provide more insight on TDA's performance as compared with well-known benchmarks.
Cases where TDA outperforms public benchmarks?
As of 2016, it seems difficult to concretely answer the question. Perhaps an ambitious, risk-seeking, grad student will provide more insight on TDA's performance as compared with well-known benchmarks
Cases where TDA outperforms public benchmarks? As of 2016, it seems difficult to concretely answer the question. Perhaps an ambitious, risk-seeking, grad student will provide more insight on TDA's performance as compared with well-known benchmarks.
Cases where TDA outperforms public benchmarks? As of 2016, it seems difficult to concretely answer the question. Perhaps an ambitious, risk-seeking, grad student will provide more insight on TDA's performance as compared with well-known benchmarks
37,668
Choice of time-series model for store sales prediction
I will elaborate on one point not mentioned by the other answers. With many series for different stores/products, there might be competition/substitution effects, so you could want to use some form of hierarchical forecasting. Specifically, some products might be possible substitutes for other products, leading to negative correlations in sales. There might be seasonality effects common to all/most products, leading to positive correlations. I would start investigating such effects maybe with a principal components analysis. If such effects are important (they probably are), some kind of hierarchical prediction could be much better than univariate modeling. Multiple approaches are possible. One way, which I used in one project, was first modeling total sales, and then modeling proportions of total sales. That would be top-down; one could also go the other way, start with individual series, and then correct them if the total gets unrealistic. This is discussed in some other posts on this site, like Hierarchical time-series forecasting with complex aggregation constraints or Single prediction vs. summing more granular n-step ahead predictions. There is now even an R package for hierarchical forecasting on CRAN, hts https://CRAN.R-project.org/package=hts Its documentation contains references you should have a look at.
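A very rough base-R sketch of the top-down variant described above (all numbers are toy data, and the naive mean forecast of the total is only a stand-in for a proper model): forecast the total, then allocate it to items using their historical shares. The hts package automates this and several smarter reconciliation schemes.

set.seed(42)
sales <- matrix(rpois(104 * 3, lambda = c(20, 50, 80)), ncol = 3, byrow = TRUE)  # 104 weeks, 3 items
total <- rowSums(sales)
shares <- colMeans(sales / total)          # average historical proportions per item
total_fc <- mean(tail(total, 8))           # naive forecast of total sales (placeholder model)
item_fc  <- total_fc * shares              # top-down item-level forecasts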
Choice of time-series model for store sales prediction
I will elaborate one point not mentioned by the other answers. With many series for different stores/products, there might be competition/substitution effects so you could want to use some form of hie
Choice of time-series model for store sales prediction I will elaborate on one point not mentioned by the other answers. With many series for different stores/products, there might be competition/substitution effects, so you could want to use some form of hierarchical forecasting. Specifically, some products might be possible substitutes for other products, leading to negative correlations in sales. There might be seasonality effects common to all/most products, leading to positive correlations. I would start investigating such effects maybe with a principal components analysis. If such effects are important (they probably are), some kind of hierarchical prediction could be much better than univariate modeling. Multiple approaches are possible. One way, which I used in one project, was first modeling total sales, and then modeling proportions of total sales. That would be top-down; one could also go the other way, start with individual series, and then correct them if the total gets unrealistic. This is discussed in some other posts on this site, like Hierarchical time-series forecasting with complex aggregation constraints or Single prediction vs. summing more granular n-step ahead predictions. There is now even an R package for hierarchical forecasting on CRAN, hts https://CRAN.R-project.org/package=hts Its documentation contains references you should have a look at.
Choice of time-series model for store sales prediction I will elaborate one point not mentioned by the other answers. With many series for different stores/products, there might be competition/substitution effects so you could want to use some form of hie
37,669
Choice of time-series model for store sales prediction
For this type of problem of model selection / hyperparameter-optimization, I would recommend you look into cross-validation approaches. Especially since your primary goal seems to be out-of-sample prediction, you want to be careful about overfitting your training data. I can't think of a reason why you couldn't have different models for different items, but you may also want to share information across the different models (perhaps some sort of hierarchical setup), which may be more difficult or impossible if the models are incompatible.
Choice of time-series model for store sales prediction
For this type of problem of model selection / hyperparameter-optimization, I would recommend you look into cross-validation approaches. Especially since your primary goal seems to be out-of-sample pre
Choice of time-series model for store sales prediction For this type of problem of model selection / hyperparameter-optimization, I would recommend you look into cross-validation approaches. Especially since your primary goal seems to be out-of-sample prediction, you want to be careful about overfitting your training data. I can't think of a reason why you couldn't have different models for different items, but you may also want to share information across the different models (perhaps some sort of hierarchical setup), which may be more difficult or impossible if the models are incompatible.
Choice of time-series model for store sales prediction For this type of problem of model selection / hyperparameter-optimization, I would recommend you look into cross-validation approaches. Especially since your primary goal seems to be out-of-sample pre
37,670
Choice of time-series model for store sales prediction
ARIMA models easily incorporate empirically identified pulses, level shifts and local time trends while incorporating parameter and error variance changes. HW models are a fixed procedure lacking the robustness and adaptability of ARIMA models with Intervention Detection, and are bloated by incorporating unnecessary/non-significant parameters. Additionally, ARIMA models easily incorporate user-specified causals, morphing into Transfer Function Models. Why settle for an assumed model form like HW when diagnostics can lead to better modelling? Better modelling often includes day-of-week effects, holiday effects, monthly effects, weekly effects, day-of-the-month effects, et al. You might want to look at this reference http://www.autobox.com/cms/index.php/blog/entry/advantages-and-disadvantages-of-using-monthly-weekly-and-daily-data to more fully understand why you need to be using daily data.
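For readers who want the flavour of a user-specified intervention in plain R, a hedged toy sketch (the series, the AR(1) structure and the hand-picked event date are all made up; proper intervention detection would identify such dates automatically rather than assume them):

set.seed(3)
n <- 200
pulse <- as.numeric(seq_len(n) == 150)                   # assumed one-off event date (toy)
y <- 50 + arima.sim(list(ar = 0.6), n = n) + 30 * pulse  # toy series with a pulse added
fit <- arima(y, order = c(1, 0, 0), xreg = pulse)        # ARIMA errors plus the intervention regressor
fit                                                      # the xreg coefficient estimates the pulse effect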
Choice of time-series model for store sales prediction
ARIMA models easily incorporate empirically identified pulses ,level shifts and local time trends while incorporating parameter and error variance changes. HW models are a fixed procedure lacking the
Choice of time-series model for store sales prediction ARIMA models easily incorporate empirically identified pulses, level shifts and local time trends while incorporating parameter and error variance changes. HW models are a fixed procedure lacking the robustness and adaptability of ARIMA models with Intervention Detection, and are bloated by incorporating unnecessary/non-significant parameters. Additionally, ARIMA models easily incorporate user-specified causals, morphing into Transfer Function Models. Why settle for an assumed model form like HW when diagnostics can lead to better modelling? Better modelling often includes day-of-week effects, holiday effects, monthly effects, weekly effects, day-of-the-month effects, et al. You might want to look at this reference http://www.autobox.com/cms/index.php/blog/entry/advantages-and-disadvantages-of-using-monthly-weekly-and-daily-data to more fully understand why you need to be using daily data.
Choice of time-series model for store sales prediction ARIMA models easily incorporate empirically identified pulses ,level shifts and local time trends while incorporating parameter and error variance changes. HW models are a fixed procedure lacking the
37,671
Choice of time-series model for store sales prediction
It is useful to know which time horizon you care about (a month, a week or a day in advance?), and how much data you have (can you reliably estimate yearly seasonality?). Personally, I've found ARIMA to be unintuitive and full of traps, and I haven't had much success with it. If you have daily data and care about daily fluctuations, then it would probably be the right choice anyway. But whatever you end up doing, my suggestion is to start with a "simple" regression model: include some yearly seasonality (some cyclic splines), holidays and a trend, and ideally set it up with a hierarchical structure like that mentioned in another answer. The coefficients will be understandable, and it's quite easy to start simple and expand. Standard cross-validation in time series doesn't work, so just create a couple of windows of a reasonable time period (at least two years if you want to have yearly seasonality and are not using a hierarchical model) and evaluate your method by its forecast of the next week, or month, or year, whatever range you care about. The evaluation is not straightforward (do you care about a forecast in the near future more than a later one?) and should also reflect your business situation.
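A hedged sketch of that "simple" regression baseline in R with mgcv; the data frame and column names (train, next_window, sales, doy, trend, holiday) are assumptions for illustration, not part of the original answer:

library(mgcv)
# yearly seasonality via a cyclic spline on day-of-year, plus a trend and a holiday dummy
fit <- gam(sales ~ s(doy, bs = "cc", k = 20) + trend + holiday,
           data = train, knots = list(doy = c(0, 365)))
pred <- predict(fit, newdata = next_window)   # evaluate on the held-out window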
Choice of time-series model for store sales prediction
It is useful to know which time-horizon you care about (a month, a week or a day in advance?), and how much data you have (can you reliably estimate yearly seasonality?). Personally, I've found ARIMA
Choice of time-series model for store sales prediction It is useful to know which time horizon you care about (a month, a week or a day in advance?), and how much data you have (can you reliably estimate yearly seasonality?). Personally, I've found ARIMA to be unintuitive and full of traps, and I haven't had much success with it. If you have daily data and care about daily fluctuations, then it would probably be the right choice anyway. But whatever you end up doing, my suggestion is to start with a "simple" regression model: include some yearly seasonality (some cyclic splines), holidays and a trend, and ideally set it up with a hierarchical structure like that mentioned in another answer. The coefficients will be understandable, and it's quite easy to start simple and expand. Standard cross-validation in time series doesn't work, so just create a couple of windows of a reasonable time period (at least two years if you want to have yearly seasonality and are not using a hierarchical model) and evaluate your method by its forecast of the next week, or month, or year, whatever range you care about. The evaluation is not straightforward (do you care about a forecast in the near future more than a later one?) and should also reflect your business situation.
Choice of time-series model for store sales prediction It is useful to know which time-horizon you care about (a month, a week or a day in advance?), and how much data you have (can you reliably estimate yearly seasonality?). Personally, I've found ARIMA
37,672
How to get multivariate credible interval estimate(s) / highest density regions (HDR) after MCMC
I have found a Matlab Wrapper for ANN. ANN is a library for approximate nearest neighbor searches (homepage). Besides the usual parameters of a spatial index region query, it uses an additional error parameter eps which gives the "approximateness" of the search: A returned nearest neighbor will be at most 1+eps times farther from the query point than the true (non-approximate) nearest neighbor. Search for the term "error bound" in the Programmers Manual to find information about eps. This enables me to include a fast nearest neighbor search in my DBSCAN implementation, which speeds up the process outlined in my question to a feasible duration. I will provide a link once the implementation is complete.
How to get multivariate credible interval estimate(s) / highest density regions (HDR) after MCMC
I have found a Matlab Wrapper for ANN. ANN is a library for approximate nearest neighbor searches (homepage). Besides the usual parameters of a spatial index region query, it uses an additional error
How to get multivariate credible interval estimate(s) / highest density regions (HDR) after MCMC I have found a Matlab Wrapper for ANN. ANN is a library for approximate nearest neighbor searches (homepage). Besides the usual parameters of a spatial index region query, it uses an additional error parameter eps which gives the "approximateness" of the search: A returned nearest neighbor will be at most 1+eps times farther from the query point than the true (non-approximate) nearest neighbor. Search for the term "error bound" in the Programmers Manual to find information about eps. This enables me to include a fast nearest neighbor search in my DBSCAN implementation, which speeds up the process outlined in my question to a feasible duration. I will provide a link once the implementation is complete.
How to get multivariate credible interval estimate(s) / highest density regions (HDR) after MCMC I have found a Matlab Wrapper for ANN. ANN is a library for approximate nearest neighbor searches (homepage). Besides the usual parameters of a spatial index region query, it uses an additional error
37,673
How to interpret the estimate from Wilcoxon signed rank paired test?
From the documentation for the wilcox.test function: Optionally (if argument conf.int is true), a nonparametric confidence interval and an estimator for the pseudomedian (one-sample case) or for the difference of the location parameters x-y is computed. (The pseudomedian of a distribution F is the median of the distribution of (u+v)/2, where u and v are independent, each with distribution F. If F is symmetric, then the pseudomedian and median coincide. See Hollander & Wolfe (1973), page 34.) Note that in the two-sample case the estimator for the difference in location parameters does not estimate the difference in medians (a common misconception) but rather the median of the difference between a sample from x and a sample from y. The pseudomedian is informative in that it is an estimator of the location parameter for a shift alternative. Comparing it with the sample median can also be a heuristic measurement of symmetry. I will add that I was under the impression the pseudomedian should be equivalent to the Hodges-Lehmann estimate of the median pairwise difference, yet calculating that using your data: median(as.vector(outer(a,b,"-"))) gives me 0.7 rather than 0.8. Perhaps someone else will be able to provide illumination as to the discrepancy there.
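Regarding that last discrepancy: for a paired signed-rank test the (pseudo)median estimate is the median of the Walsh averages of the within-pair differences, not the median of all pairwise differences between the two samples, which is what the outer(a, b, "-") line computes. A quick check, assuming a and b are the paired vectors from the question (I cannot verify the exact value without the data, but this should match the paired wilcox.test estimate up to how ties are handled):

d <- a - b                                      # within-pair differences
walsh <- outer(d, d, "+") / 2                   # Walsh averages (d_i + d_j) / 2 for all i, j
median(walsh[upper.tri(walsh, diag = TRUE)])    # paired Hodges-Lehmann / pseudomedian estimate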
How to interpret the estimate from Wilcoxon signed rank paired test?
From the documentation for the wilcox.test package: Optionally (if argument conf.int is true), a nonparametric confidence interval and an estimator for the pseudomedian (one-sample case) or for the d
How to interpret the estimate from Wilcoxon signed rank paired test? From the documentation for the wilcox.test function: Optionally (if argument conf.int is true), a nonparametric confidence interval and an estimator for the pseudomedian (one-sample case) or for the difference of the location parameters x-y is computed. (The pseudomedian of a distribution F is the median of the distribution of (u+v)/2, where u and v are independent, each with distribution F. If F is symmetric, then the pseudomedian and median coincide. See Hollander & Wolfe (1973), page 34.) Note that in the two-sample case the estimator for the difference in location parameters does not estimate the difference in medians (a common misconception) but rather the median of the difference between a sample from x and a sample from y. The pseudomedian is informative in that it is an estimator of the location parameter for a shift alternative. Comparing it with the sample median can also be a heuristic measurement of symmetry. I will add that I was under the impression the pseudomedian should be equivalent to the Hodges-Lehmann estimate of the median pairwise difference, yet calculating that using your data: median(as.vector(outer(a,b,"-"))) gives me 0.7 rather than 0.8. Perhaps someone else will be able to provide illumination as to the discrepancy there.
How to interpret the estimate from Wilcoxon signed rank paired test? From the documentation for the wilcox.test package: Optionally (if argument conf.int is true), a nonparametric confidence interval and an estimator for the pseudomedian (one-sample case) or for the d
37,674
Best way to average F-score with unbalanced classes
"I am not sure which one is right" There is no right or wrong here. A classifier's performance can be represented using an $n\cdot n$ matrix. When trying to represent the performance using a single metric you lose some information. In other words, since it is impossible to recover the confusion matrix based on a single metric, there is a loss of information when we consider only a single metric to interpret the performance of a classifier. But still... to decide which classifier is better among several alternatives - we need a single metric... Which single metric best represents the performance? That's a subjective questions. This is where statisticians become creative. This is why so many metrics have been purposed. Different metrics 'prefer' different types of information that can be extracted from the confusion matrix. It is up to you to decide which one captures the information your regard as 'most important'. Some criteria you may consider: Are all classes are equally important / are all instances are equally important? Are classification and misclassifications are equally 'important'? Are false positives and false negatives are equally 'important'? Should the performance be absolute, or relative to some random classifier? Should the metric be linear in some sense? etc.
Best way to average F-score with unbalanced classes
"I am not sure which one is right" There is no right or wrong here. A classifier's performance can be represented using an $n\cdot n$ matrix. When trying to represent the performance using a single
Best way to average F-score with unbalanced classes "I am not sure which one is right" There is no right or wrong here. A classifier's performance can be represented using an $n\cdot n$ matrix. When trying to represent the performance using a single metric you lose some information. In other words, since it is impossible to recover the confusion matrix based on a single metric, there is a loss of information when we consider only a single metric to interpret the performance of a classifier. But still... to decide which classifier is better among several alternatives - we need a single metric... Which single metric best represents the performance? That's a subjective question. This is where statisticians become creative. This is why so many metrics have been proposed. Different metrics 'prefer' different types of information that can be extracted from the confusion matrix. It is up to you to decide which one captures the information you regard as 'most important'. Some criteria you may consider: Are all classes equally important / are all instances equally important? Are correct classifications and misclassifications equally 'important'? Are false positives and false negatives equally 'important'? Should the performance be absolute, or relative to some random classifier? Should the metric be linear in some sense? etc.
Best way to average F-score with unbalanced classes "I am not sure which one is right" There is no right or wrong here. A classifier's performance can be represented using an $n\cdot n$ matrix. When trying to represent the performance using a single
37,675
Nonparametric nonlinear regression with prediction uncertainty (besides Gaussian Processes)
A Matérn covariance function with $ν=5/2$ is already quite close to a Squared Exponential kernel. So I think that a Radial Basis Function (RBF) based approach is perfect in this scenario. It is fast, it works for the kind of black-box function that you have, and you can get measures of uncertainty. You can alternatively use inducing-point approximations for GPs (have a look at FITC in the literature), but you have the same problem of where to select the inducing points.
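To see how close the two are, a short base-R comparison of the Matérn 5/2 and squared-exponential correlation functions at a unit length scale (the unit length scale is just a convenient choice for the plot):

r <- seq(0, 3, length.out = 200)                              # distance / length scale
se  <- exp(-r^2 / 2)                                          # squared exponential
m52 <- (1 + sqrt(5) * r + 5 * r^2 / 3) * exp(-sqrt(5) * r)    # Matern with nu = 5/2
max(abs(se - m52))                                            # maximum gap over this range
plot(r, se, type = "l", ylab = "correlation"); lines(r, m52, lty = 2)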
Nonparametric nonlinear regression with prediction uncertainty (besides Gaussian Processes)
A Matérn covariance matrix with $ν=5/2$ is almost converging to a Squared Exponential kernel. So I think that a Radial Basis Function (RBF) based approach is perfect in this scenario. It is fast, it w
Nonparametric nonlinear regression with prediction uncertainty (besides Gaussian Processes) A Matérn covariance function with $ν=5/2$ is already quite close to a Squared Exponential kernel. So I think that a Radial Basis Function (RBF) based approach is perfect in this scenario. It is fast, it works for the kind of black-box function that you have, and you can get measures of uncertainty. You can alternatively use inducing-point approximations for GPs (have a look at FITC in the literature), but you have the same problem of where to select the inducing points.
Nonparametric nonlinear regression with prediction uncertainty (besides Gaussian Processes) A Matérn covariance matrix with $ν=5/2$ is almost converging to a Squared Exponential kernel. So I think that a Radial Basis Function (RBF) based approach is perfect in this scenario. It is fast, it w
37,676
Does the determination of the mean and SD imply the loss of one or two degrees of freedom?
The T-distribution is defined as the distribution of the ratio of a standard normal random variable and an independent scaled-chi random variable. Its degrees-of-freedom-parameter is equal to the degrees-of-freedom parameter for the chi random variable in its denominator. So the DF parameter is a matter of determining the degrees-of-freedom of the variance estimator you are using. Remember: The T-distribution only arises when you take the ratio of a normal random variable and a denominator which is some kind of standard deviation estimator (square root of a variance estimator). This presumes that there is already a variance estimator in the picture. The loss of degrees-of-freedom then occurs from the mean estimate (or in the context of regression, from multiple coefficient estimates). It is possible to form quantities similar to the one you have shown, and find their distributions. Suppose we have $X_1, ..., X_n \sim \text{IID N}(\mu, \sigma^2)$ and we form some standardised value. If we assume that $\mu$ is known but $\sigma$ is unknown, we would standardise by defining the T-statistic: $$T_\mu \equiv \frac{X_i - \mu}{S_\mu} = \frac{X_i - \mu}{\sigma} / \frac{S_\mu}{\sigma} \sim \text{T} (n),$$ where $S_\mu^2 \equiv \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2$ is the sample variance estimator with known $\mu$. The quantity $S_\mu / \sigma$ is a scaled-chi random variable with $n$ degrees-of-freedom, so the statistic $T_\mu$ has a T-distribution with $n$ degrees-of-freedom. This is a baseline case where there has not been any loss of degrees-of-freedom, even though we have estimated the variance. Now, in the case where $\mu$ is also unknown we would replace the known mean $\mu$ in the variance estimator with the sample mean $\bar{x}$ we have: $$T \equiv \frac{X_i - \mu}{S} = \frac{X_i - \mu}{\sigma} / \frac{S}{\sigma} \sim \text{T}(n-1),$$ where $S^2 \equiv \frac{1}{n-1} \sum_{i=1}^n (X_i - \bar{x})^2$ is the sample variance estimator with unknown $\mu$. The quantity $S / \sigma$ is a scaled-chi random variable with $n-1$ degrees-of-freedom, so the statistic $T$ has a T-distribution with $n-1$ degrees-of-freedom. We have lost one degree-of-freedom due to estimating the mean inside the variance estimator. Hopefully this assists you in understanding this issue. The concept of degrees-of-freedom, within the context of talking about the T-distribution, presumes that there is already some variance estimator being used for the studentisation. Estimating the mean parameter (or coefficient parameters in a regression) alters this variance estimator by making it less variable, and this entails a loss of degrees-of-freedom.
Does the determination of the mean and SD imply the loss of one or two degrees of freedom?
The T-distribution is defined as the distribution of the ratio of a standard normal random variable and an independent scaled-chi random variable. Its degrees-of-freedom-parameter is equal to the deg
Does the determination of the mean and SD imply the loss of one or two degrees of freedom? The T-distribution is defined as the distribution of the ratio of a standard normal random variable and an independent scaled-chi random variable. Its degrees-of-freedom-parameter is equal to the degrees-of-freedom parameter for the chi random variable in its denominator. So the DF parameter is a matter of determining the degrees-of-freedom of the variance estimator you are using. Remember: The T-distribution only arises when you take the ratio of a normal random variable and a denominator which is some kind of standard deviation estimator (square root of a variance estimator). This presumes that there is already a variance estimator in the picture. The loss of degrees-of-freedom then occurs from the mean estimate (or in the context of regression, from multiple coefficient estimates). It is possible to form quantities similar to the one you have shown, and find their distributions. Suppose we have $X_1, ..., X_n \sim \text{IID N}(\mu, \sigma^2)$ and we form some standardised value. If we assume that $\mu$ is known but $\sigma$ is unknown, we would standardise by defining the T-statistic: $$T_\mu \equiv \frac{X_i - \mu}{S_\mu} = \frac{X_i - \mu}{\sigma} / \frac{S_\mu}{\sigma} \sim \text{T} (n),$$ where $S_\mu^2 \equiv \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2$ is the sample variance estimator with known $\mu$. The quantity $S_\mu / \sigma$ is a scaled-chi random variable with $n$ degrees-of-freedom, so the statistic $T_\mu$ has a T-distribution with $n$ degrees-of-freedom. This is a baseline case where there has not been any loss of degrees-of-freedom, even though we have estimated the variance. Now, in the case where $\mu$ is also unknown we would replace the known mean $\mu$ in the variance estimator with the sample mean $\bar{x}$ we have: $$T \equiv \frac{X_i - \mu}{S} = \frac{X_i - \mu}{\sigma} / \frac{S}{\sigma} \sim \text{T}(n-1),$$ where $S^2 \equiv \frac{1}{n-1} \sum_{i=1}^n (X_i - \bar{x})^2$ is the sample variance estimator with unknown $\mu$. The quantity $S / \sigma$ is a scaled-chi random variable with $n-1$ degrees-of-freedom, so the statistic $T$ has a T-distribution with $n-1$ degrees-of-freedom. We have lost one degree-of-freedom due to estimating the mean inside the variance estimator. Hopefully this assists you in understanding this issue. The concept of degrees-of-freedom, within the context of talking about the T-distribution, presumes that there is already some variance estimator being used for the studentisation. Estimating the mean parameter (or coefficient parameters in a regression) alters this variance estimator by making it less variable, and this entails a loss of degrees-of-freedom.
Does the determination of the mean and SD imply the loss of one or two degrees of freedom? The T-distribution is defined as the distribution of the ratio of a standard normal random variable and an independent scaled-chi random variable. Its degrees-of-freedom-parameter is equal to the deg
37,677
Does the determination of the mean and SD imply the loss of one or two degrees of freedom?
Let's consider an example to understand degrees of freedom: Pretend we have 5 observations, $(1, 2, 1, 3, 5)$. If I tell you the mean of this data set ($2.4$) but not the values of the observations themselves, you can make up four values without changing the mean. If you pick $(3, 4, 3, 5)$ as your first four observations, then the last number to choose must be $-3$ if the mean is fixed at $2.4$. If we only care about the mean, then we have one equation and one unknown. If you have $n$ observations with a fixed mean, you have the freedom to pick any $n - 1$ numbers you want without changing the mean -- but the $n^{th}$ observation is determined. Notice however, I chose the value of $2.4$ in the paragraph above arbitrarily, so I could have chosen something else. Therefore, I have $n - 1$ degrees of freedom from the data and $1$ degree of freedom because I picked the mean, so I have $n$ degrees of freedom if I estimate 1 parameter. Now, let's say I tell you the mean and the standard deviation: for the same sample of $(1, 2, 1, 3, 5)$, the mean is $2.4$ and the standard deviation is $1.673$. Now I can pick three of the five numbers, and the last two will be determined (two equations, two unknowns). The parameters are a little different however, because the sample standard deviation is a function of the sample mean -- they are not independent of each other. This means that I have $n - 2$ degrees of freedom from the data, but still only $1$ degree of freedom from the parameters, for a total of $n - 1$ degrees of freedom. See this Stack Exchange question for more information.
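A one-line check of the arithmetic used above:

x <- c(1, 2, 1, 3, 5)
mean(x)   # 2.4
sd(x)     # 1.67332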
Does the determination of the mean and SD imply the loss of one or two degrees of freedom?
Let's consider an example to understand degrees of freedom: Pretend we have 5 observations, $(1, 2, 1, 3, 5)$. If I tell you the mean of this data set ($2.4$) but not the values of the observations th
Does the determination of the mean and SD imply the loss of one or two degrees of freedom? Let's consider an example to understand degrees of freedom: Pretend we have 5 observations, $(1, 2, 1, 3, 5)$. If I tell you the mean of this data set ($2.4$) but not the values of the observations themselves, you can make up four values without changing the mean. If you pick $(3, 4, 3, 5)$ as your first four observations, then the last number to choose must be $-3$ if the mean is fixed at $2.4$. If we only care about the mean, then we have one equation and one unknown. If you have $n$ observations with a fixed mean, you have the freedom to pick any $n - 1$ numbers you want without changing the mean -- but the $n^{th}$ observation is determined. Notice however, I chose the value of $2.4$ in the paragraph above arbitrarily, so I could have chosen something else. Therefore, I have $n - 1$ degrees of freedom from the data and $1$ degree of freedom because I picked the mean, so I have $n$ degrees of freedom if I estimate 1 parameter. Now, let's say I tell you the mean and the standard deviation: for the same sample of $(1, 2, 1, 3, 5)$, the mean is $2.4$ and the standard deviation is $1.673$. Now I can pick three of the five numbers, and the last two will be determined (two equations, two unknowns). The parameters are a little different however, because the sample standard deviation is a function of the sample mean -- they are not independent of each other. This means that I have $n - 2$ degrees of freedom from the data, but still only $1$ degree of freedom from the parameters, for a total of $n - 1$ degrees of freedom. See this Stack Exchange question for more information.
Does the determination of the mean and SD imply the loss of one or two degrees of freedom? Let's consider an example to understand degrees of freedom: Pretend we have 5 observations, $(1, 2, 1, 3, 5)$. If I tell you the mean of this data set ($2.4$) but not the values of the observations th
37,678
Heteroskedasticity and Distribution of the Dependent Variable in Linear Models
This is the solution for the problem above. In brief, for my case, the heteroskedasticity is caused by at least two different sources: Group differences, which OLS and the whole family of "mono-level" regression models can hardly account for; Wrong specification of the model's functional form: in more detail (as suggested by @Robert Long in the first place), the relation between the DV and the covariates is not linear. As far as the group differences causing heteroskedasticity are concerned, it has been of great help to run the analysis on truncated data for single groups, and to confirm from the BP test that heteroskedasticity was gone in almost all groups when considered singly. By fitting a random intercept model the error structure improved, but as noted by the commenters above, heteroskedasticity could still be detected. Even after including a variable in the random part of the equation, which improved the error structure even more, the problem could not be considered solved. (This key variable, coping strategies, describes well the habits of households in case of food shortages; indeed these habits usually vary greatly across geographical regions and ethnic groups.) Here comes the second point, the most important. The relation between the DV (as it is originally) and the covariates is not linear. More options are available at this stage: Use a nonlinear model to explicitly take the issue into account; Transform the DV, if you can find a suitable transformation. In my case, the square root of the DV. Try using models that do not make any assumption on the distribution of the error term (glm family). In my view, the first option complicates the interpretation of the coefficients a bit (this is a personal, project-dependent observation, just because I want to keep things simple for this article) and, at least from my (recent) experiences, needs more computational power, which for complicated models with many random coefficients and observations could bring R to crash. Transforming the DV is surely the best solution, if it works and if you are luckier than me. What do I mean? In case of a log-transformed DV the interpretation would be in terms of percentages, but what about the square root transformation? How can I compare my results with other studies? Maybe a standardization of the transformed variable could help in interpreting the results as z-scores. In my opinion it is just too much. About the glm or glmm models I cannot say much; in my case none of those worked: glm does not properly account for random differences between groups, and the output of glmm reported convergence problems. Note that for my model the transformation of the DV does not work with OLS either, for the same reason regarding glm above. However, there is at least one option left: assigning weights to the regression in order to correct for the heteroskedasticity without transforming the DV. Ergo: simple interpretation of the coefficients. This is the result obtained by weighting with DV_sqrt while using the un-transformed DV in a random coefficient model. At this stage I can compare my coefficients' standard errors with their counterparts from the robust estimator. Regarding the direct use of robust estimators in cases like mine without trying to understand the source of the problem, I would like to suggest this reading: G. King, M. E. Roberts (2014), "How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It".
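For readers who want to try the weighting route in R, a hedged sketch: the variable and data names (DV, x1, x2, region, dat) are placeholders, and varPower() is just one standard way to let the residual variance grow with the fitted values; it is not necessarily the exact weighting scheme used above.

library(nlme)
fit <- lme(DV ~ x1 + x2,                  # untransformed outcome
           random = ~ 1 | region,         # random intercept by group (placeholder grouping factor)
           weights = varPower(),          # residual SD modelled as |fitted|^delta
           data = dat)
summary(fit)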
Heteroskedasticity and Distribution of the Dependent Variable in Linear Models
This is the solution for the problem above: In brief, for my case, the heteroskedasticity is caused by at least two different sources: Group's differences which OLS and all the family of "mono-level"
Heteroskedasticity and Distribution of the Dependent Variable in Linear Models This is the solution for the problem above. In brief, for my case, the heteroskedasticity is caused by at least two different sources: Group differences, which OLS and the whole family of "mono-level" regression models can hardly account for; Wrong specification of the model's functional form: in more detail (as suggested by @Robert Long in the first place), the relation between the DV and the covariates is not linear. As far as the group differences causing heteroskedasticity are concerned, it has been of great help to run the analysis on truncated data for single groups, and to confirm from the BP test that heteroskedasticity was gone in almost all groups when considered singly. By fitting a random intercept model the error structure improved, but as noted by the commenters above, heteroskedasticity could still be detected. Even after including a variable in the random part of the equation, which improved the error structure even more, the problem could not be considered solved. (This key variable, coping strategies, describes well the habits of households in case of food shortages; indeed these habits usually vary greatly across geographical regions and ethnic groups.) Here comes the second point, the most important. The relation between the DV (as it is originally) and the covariates is not linear. More options are available at this stage: Use a nonlinear model to explicitly take the issue into account; Transform the DV, if you can find a suitable transformation. In my case, the square root of the DV. Try using models that do not make any assumption on the distribution of the error term (glm family). In my view, the first option complicates the interpretation of the coefficients a bit (this is a personal, project-dependent observation, just because I want to keep things simple for this article) and, at least from my (recent) experiences, needs more computational power, which for complicated models with many random coefficients and observations could bring R to crash. Transforming the DV is surely the best solution, if it works and if you are luckier than me. What do I mean? In case of a log-transformed DV the interpretation would be in terms of percentages, but what about the square root transformation? How can I compare my results with other studies? Maybe a standardization of the transformed variable could help in interpreting the results as z-scores. In my opinion it is just too much. About the glm or glmm models I cannot say much; in my case none of those worked: glm does not properly account for random differences between groups, and the output of glmm reported convergence problems. Note that for my model the transformation of the DV does not work with OLS either, for the same reason regarding glm above. However, there is at least one option left: assigning weights to the regression in order to correct for the heteroskedasticity without transforming the DV. Ergo: simple interpretation of the coefficients. This is the result obtained by weighting with DV_sqrt while using the un-transformed DV in a random coefficient model. At this stage I can compare my coefficients' standard errors with their counterparts from the robust estimator. Regarding the direct use of robust estimators in cases like mine without trying to understand the source of the problem, I would like to suggest this reading: G. King, M. E. Roberts (2014), "How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It".
Heteroskedasticity and Distribution of the Dependent Variable in Linear Models This is the solution for the problem above: In brief, for my case, the heteroskedasticity is caused by at least two different sources: Group's differences which OLS and all the family of "mono-level"
37,679
How do I statistically rephrase this question
Darts is the simplest of games. Each player starts with a score of 501 and takes turns to throw 3 darts. The score for each turn is calculated and deducted from the player's total. Bullseye scores 50, the outer ring scores 25 and a dart in the double or treble ring counts double or treble the segment score. (Dartboard image omitted.) Now, the probabilities have been examined elsewhere. On that site, we are told "A medium skilled darts player will have a larger standard deviation; even though the shots may, on average, be centered around the same target, they will be distributed over a broader region. A poor darts player will have a high standard deviation and their shots will be, probabilistically, scattered over a much wider region." Thus, to answer the question, we do what we always do. We build up a histogram of scores, and for the game itself we might use 501 - score, and then we fit a density function, and then we test that density function against other players' density functions. So, we need enough data so that our location and its deviation have enough predictive power to discriminate properly between players. The less data, the fuzzier the answers, and there is no magic number for it; the more the merrier.
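A minimal R sketch of that last step, with simulated per-turn scores for two hypothetical players who share the same centre but differ in spread (all numbers invented):

set.seed(7)
player_a <- rnorm(100, mean = 45, sd = 15)    # steadier player, hypothetical 3-dart totals
player_b <- rnorm(100, mean = 45, sd = 25)    # wilder player, same average
plot(density(player_a), main = "Per-turn score densities")
lines(density(player_b), lty = 2)
ks.test(player_a, player_b)                   # one crude way to compare the two densities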
How do I statistically rephrase this question
Darts is the simplest of games. Each player starts with a score of 501 and takes turns to throw 3 darts. The score for each turn is calculated and deducted from the players total. Bullseye scores 50,
How do I statistically rephrase this question Darts is the simplest of games. Each player starts with a score of 501 and takes turns to throw 3 darts. The score for each turn is calculated and deducted from the player's total. Bullseye scores 50, the outer ring scores 25 and a dart in the double or treble ring counts double or treble the segment score. (Dartboard image omitted.) Now, the probabilities have been examined elsewhere. On that site, we are told "A medium skilled darts player will have a larger standard deviation; even though the shots may, on average, be centered around the same target, they will be distributed over a broader region. A poor darts player will have a high standard deviation and their shots will be, probabilistically, scattered over a much wider region." Thus, to answer the question, we do what we always do. We build up a histogram of scores, and for the game itself we might use 501 - score, and then we fit a density function, and then we test that density function against other players' density functions. So, we need enough data so that our location and its deviation have enough predictive power to discriminate properly between players. The less data, the fuzzier the answers, and there is no magic number for it; the more the merrier.
How do I statistically rephrase this question Darts is the simplest of games. Each player starts with a score of 501 and takes turns to throw 3 darts. The score for each turn is calculated and deducted from the players total. Bullseye scores 50,
37,680
Is there any meaningfully robust approach to conduct a network meta-analysis of diagnostic test accuracy studies?
So that this does not go unanswered: here, based on comments from @GGA and on the OP's further research, are some references which the OP edited into the question as an update. Upon feedback from GGA and an extensive search, we can mention as potentially suitable approaches: a Bayesian method proposed by Menten and Lesaffre to conduct a Bayesian network meta-analysis of diagnostic test accuracy studies (Menten and Lesaffre, BMC Med Res Methodol 2015), and two different Bayesian methods proposed by Nyaga et al. (Nyaga et al., Stat Methods Med Res 2016; Nyaga et al., Stat Methods Med Res 2016).
Is there any meaningfully robust approach to conduct a network meta-analysis of diagnostic test accu
So this does not go unanswered here, based on comments from @GGA and from the OP's further research, are some references which the OP edited into the question as an update. Upon feedback from GGA and
Is there any meaningfully robust approach to conduct a network meta-analysis of diagnostic test accuracy studies? So that this does not go unanswered: here, based on comments from @GGA and on the OP's further research, are some references which the OP edited into the question as an update. Upon feedback from GGA and an extensive search, we can mention as potentially suitable approaches: a Bayesian method proposed by Menten and Lesaffre to conduct a Bayesian network meta-analysis of diagnostic test accuracy studies (Menten and Lesaffre, BMC Med Res Methodol 2015), and two different Bayesian methods proposed by Nyaga et al. (Nyaga et al., Stat Methods Med Res 2016; Nyaga et al., Stat Methods Med Res 2016).
Is there any meaningfully robust approach to conduct a network meta-analysis of diagnostic test accu So this does not go unanswered here, based on comments from @GGA and from the OP's further research, are some references which the OP edited into the question as an update. Upon feedback from GGA and
37,681
Dirichlet process mixture MCMC
$$ P(c_i=c|\vec{c_{-i}},y_i,\phi) =\frac{p(\vec{c_i},y_i,\phi)}{p(\vec{c_{-i}},y_i,\phi)} = \frac{p(y_i|\vec{c_{i}},\phi)p(\vec{c_{i}},\phi)}{p(y_i|\vec{c_{-i}},\phi)p(\vec{c_{-i}},\phi)} \\ = \frac{p(y_i|c_i,\phi)p(c_i|\vec{c_{-i}},\phi)p(\vec{c_{-i}},\phi)}{p(y_i|\vec{c_{-i}},\phi)p(\vec{c_{-i}},\phi)} \\= \frac{p(y_i|c_i,\phi)p(c_i|\vec{c_{-i}},\phi)}{p(y_i)} $$ Here, $p(c_i|\vec{c_{-i}},\phi)=\frac{n_{-i,c}}{n-1+\alpha}$. When $c_i$ is an existing cluster, then: $$ p(y_i|c_i,\phi)=F(y_i,\phi_c) $$ When $c_i$ is a new cluster, then: $$ p(y_i|c_i,\phi)=\int p(y_i|\phi_c,c_i)p(\phi_c|\phi,c_i)d\phi_c $$ Since $dG_0=p(\phi_c|\phi)$, we can conclude that: $$ p(y_i|c_i,\phi)=\int F(y_i,\phi_c)dG_0 $$ This is what I thought about the derivation. I am not quite sure whether it is correct. Did you solve this problem? @Daeyoung Lim
Dirichlet process mixture MCMC
$$ P(c_i=c|\vec{c_{-i}},y_i,\phi) =\frac{p(\vec{c_i},y_i,\phi)}{p(\vec{c_{-i}},y_i,\phi)} = \frac{p(y_i|\vec{c_{i}},\phi)p(\vec{c_{i}},\phi)}{p(y_i|\vec{c_{-i}},\phi)p(\vec{c_{-i}},\phi)} \\ = \frac{p
Dirichlet process mixture MCMC $$ P(c_i=c|\vec{c_{-i}},y_i,\phi) =\frac{p(\vec{c_i},y_i,\phi)}{p(\vec{c_{-i}},y_i,\phi)} = \frac{p(y_i|\vec{c_{i}},\phi)p(\vec{c_{i}},\phi)}{p(y_i|\vec{c_{-i}},\phi)p(\vec{c_{-i}},\phi)} \\ = \frac{p(y_i|c_i,\phi)p(c_i|\vec{c_{-i}},\phi)p(\vec{c_{-i}},\phi)}{p(y_i|\vec{c_{-i}},\phi)p(\vec{c_{-i}},\phi)} \\= \frac{p(y_i|c_i,\phi)p(c_i|\vec{c_{-i}},\phi)}{p(y_i)} $$ Here, $p(c_i|\vec{c_{-i}},\phi)=\frac{n_{-i,c}}{n-1+\alpha}$. When $c_i$ is an existing cluster, then: $$ p(y_i|c_i,\phi)=F(y_i,\phi_c) $$ When $c_i$ is a new cluster, then: $$ p(y_i|c_i,\phi)=\int p(y_i|\phi_c,c_i)p(\phi_c|\phi,c_i)d\phi_c $$ Since $dG_0=p(\phi_c|\phi)$, we can conclude that: $$ p(y_i|c_i,\phi)=\int F(y_i,\phi_c)dG_0 $$ This is what I thought about the derivation. I am not quite sure whether it is correct. Did you solve this problem? @Daeyoung Lim
Dirichlet process mixture MCMC $$ P(c_i=c|\vec{c_{-i}},y_i,\phi) =\frac{p(\vec{c_i},y_i,\phi)}{p(\vec{c_{-i}},y_i,\phi)} = \frac{p(y_i|\vec{c_{i}},\phi)p(\vec{c_{i}},\phi)}{p(y_i|\vec{c_{-i}},\phi)p(\vec{c_{-i}},\phi)} \\ = \frac{p
37,682
The Effect of Stopword Filtering prior to Word Embedding Training
One common approach is to simply subsample the most common words in the corpus. This way they have less effect on the model but you don't have to completely get rid of them. It can also speed up training because you spend less time dealing with stopwords that don't carry all that much information compared to the amount of times they appear in the corpus. https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf
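The rule from the linked paper is easy to state: an occurrence of word w with relative corpus frequency f(w) is discarded with probability 1 - sqrt(t / f(w)) for a small threshold t, i.e. kept with probability sqrt(t / f(w)) capped at 1. A toy R illustration with made-up frequencies:

thr <- 1e-5                                    # subsampling threshold from the paper
f <- c(the = 5e-2, cat = 1e-4, sat = 5e-5)     # toy relative corpus frequencies
p_keep <- pmin(1, sqrt(thr / f))
round(p_keep, 3)                               # frequent stopwords are kept only rarely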
The Effect of Stopword Filtering prior to Word Embedding Training
One common approach is to simply subsample the most common words in the corpus. This way they have less effect on the model but you don't have to completely get rid of them. It can also speed up train
The Effect of Stopword Filtering prior to Word Embedding Training One common approach is to simply subsample the most common words in the corpus. This way they have less effect on the model but you don't have to completely get rid of them. It can also speed up training because you spend less time dealing with stopwords that don't carry all that much information compared to the amount of times they appear in the corpus. https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf
The Effect of Stopword Filtering prior to Word Embedding Training One common approach is to simply subsample the most common words in the corpus. This way they have less effect on the model but you don't have to completely get rid of them. It can also speed up train
37,683
KDE for censored data
Yes, it's possible. In order to do anything with censored data you need to assume at least 'censoring at random', which says that the hazard for observations censored at time $t$ is the same conditional on covariates as for uncensored observations still alive at time $t$. The main difference from ordinary uncensored KDE is that the effective sample size decreases as more observations are censored, so that a constant bandwidth is unlikely to be optimal. One of the earliest references (now open access) is a 1983 paper by Tanner & Wong. There are implementations such as https://rdrr.io/cran/kernhaz/man/khazard.html and https://cran.r-project.org/web/packages/muhaz/index.html
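As a concrete starting point, here is a hedged sketch using the muhaz package linked above, on a simulated right-censored sample; all variable names and the simulated rates are mine:

# Kernel-based hazard estimation from right-censored data (sketch)
library(muhaz)
set.seed(1)
event_time  <- rexp(200, rate = 0.5)             # latent event times
censor_time <- rexp(200, rate = 0.2)             # independent censoring times
time  <- pmin(event_time, censor_time)           # observed follow-up time
delta <- as.numeric(event_time <= censor_time)   # 1 = event observed, 0 = censored
fit <- muhaz(time, delta)                        # kernel hazard estimate
plot(fit, xlab = "time", ylab = "estimated hazard")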
37,684
KDE for censored data
Kernel density estimation estimates the probability density function of the empirical distribution; it doesn't "know" or "care" about the underlying distribution. By "empirical" we mean here that it uses only the observed data and makes minimal distributional assumptions. Obviously, if it uses only the observed data, it cannot say anything about the censored, unobserved data. If you say that the distribution is censored, it means that you have some notion of the underlying distribution, so you should use it to define a parametric model to estimate the density.
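A hedged sketch of that parametric route, assuming a Weibull model and using the survival package; the simulated data are mine, and the parameter recovery follows survreg's parameterisation (its scale is 1/shape and the intercept is the log of the Weibull scale):

# Parametric density estimation from right-censored data via a Weibull model (sketch)
library(survival)
set.seed(1)
event_time  <- rweibull(200, shape = 1.5, scale = 10)
censor_time <- runif(200, 0, 20)
time   <- pmin(event_time, censor_time)
status <- as.numeric(event_time <= censor_time)        # 1 = observed, 0 = censored
fit <- survreg(Surv(time, status) ~ 1, dist = "weibull")
shape_hat <- 1 / fit$scale                              # survreg's scale is 1/shape
scale_hat <- exp(coef(fit))                             # intercept is log(scale)
curve(dweibull(x, shape = shape_hat, scale = scale_hat), from = 0, to = 20,
      xlab = "x", ylab = "estimated density")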
37,685
Can one compute the autocorrelation of covariance matrices sampled by MCMC?
The way I see it, when drawing $p \times p$ matrices $S_1, S_2, \dots, S_n$ from a Wishart distribution, you are really drawing $p\cdot (p+1)/2$ specifically related univariate random variables at each time step. I.e., only the $\text{vech}(S_i)$ part of any $S_i$ (one triangular half of the matrix, including the diagonal) is random, and symmetry gives you the rest. In other words, the autocorrelation is defined pairwise for any two entries of the $p\cdot (p+1)/2$-dimensional vectors $\text{vech}(S_i)$ and $\text{vech}(S_{i-h})$ for any $h>0$. Of course, this leaves you with a possibly infeasibly large number of $(p\cdot (p+1)/2)^2$ univariate autocorrelations to track, and since the entries of each $\text{vech}(S_i)$ will be meaningfully related to one another (see for example the definition given via draws from a normal here: https://en.wikipedia.org/wiki/Wishart_distribution), I could well imagine that you discard information by doing this univariate analysis. That being said, the univariate quantities can be calculated entry-wise by first defining \begin{align} \bar{S} &= \frac{1}{n}\sum_{i=1}^n \text{vech}(S_i), \\ \bar{S}_h & = \frac{1}{n-h}\sum_{i=h+1}^n \text{vech}(S_i)\text{vech}(S_{i-h})^T. \end{align} Clearly, $\bar{S}$ is a natural estimator for the vectorised half of the expectation (which you can replace with the true expectation of your Wishart distribution if it is known to you). Similarly, $\bar{S}_h$ is a natural estimator for the moment $\mathbb{E}(\text{vech}(S_i)\text{vech}(S_{i-h})^T)$. Lastly, noting that \begin{align} \text{Cov}(\text{vech}(S_i),\text{vech}(S_{i-h})) = \mathbb{E}(\text{vech}(S_i)\text{vech}(S_{i-h})^T) - \mathbb{E}(\text{vech}(S_i))\mathbb{E}(\text{vech}(S_{i-h}))^T, \end{align} one arrives at the lag-$h$ autocovariance estimates $A(h)$ for $\text{vech}(S_i)$ via \begin{align} A(h) &= \bar{S}_h - \bar{S}\bar{S}^T \end{align} (normalising these entry-wise by the corresponding standard deviations yields autocorrelations). As mentioned before, this gives you the $(p\cdot (p+1)/2)^2$ autocorrelations of each Wishart matrix entry with each other Wishart matrix entry. If that is too much information to display, I think one strategy you could take would be to define the univariate time series \begin{align} a(h) &= \frac{1}{(p\cdot (p+1)/2)^2}\sum_{i=1}^{p\cdot (p+1)/2}\sum_{j=1}^{p\cdot (p+1)/2}|A(h)_{ij}|, \end{align} i.e. you simply take the average of the absolute values of the autocorrelations. If you only care about positive autocorrelations and don't think that negative autocorrelation is detrimental, then spare yourself the absolute value. Similarly, if you think that autocorrelation along the diagonal is worse than off the diagonal, or the other way around, you could add weights $w_{ij}$ that take this 'loss function' into account.
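A minimal R sketch of these entry-wise estimators, applied to a list of sampled matrices; the Wishart draws and all names are illustrative:

# Entry-wise lag-h autocovariance of vech(S_i) for a list of sampled p x p matrices (sketch)
vech <- function(S) S[lower.tri(S, diag = TRUE)]            # half-vectorisation (equivalent to the upper half by symmetry)
lag_h_cov <- function(S_list, h) {
  V    <- t(sapply(S_list, vech))                           # n x p(p+1)/2 matrix of draws
  n    <- nrow(V)
  Sbar <- colMeans(V)
  Sh   <- crossprod(V[(h + 1):n, , drop = FALSE],
                    V[1:(n - h), , drop = FALSE]) / (n - h) # average of vech(S_i) vech(S_{i-h})^T
  Sh - outer(Sbar, Sbar)                                    # A(h) = S_bar_h - S_bar S_bar^T
}
set.seed(1)
draws <- lapply(1:500, function(i) rWishart(1, df = 5, Sigma = diag(3))[, , 1])
A1 <- lag_h_cov(draws, h = 1)
mean(abs(A1))                                               # the scalar summary a(1) from the text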
37,686
Distribution for Paired Elo Matches Drawn from Normal Distribution
Looking at the MGF of $Z$: \begin{align*} \mathbb E[e^{tZ}] &= \mathbb E[\mathbb E[e^{tZ}|X_1,X_2]]\\ &= \mathbb E\left[e^{tX_1} \frac{e^{X_1}}{e^{X_1}+e^{X_2}} + e^{tX_2} \frac{e^{X_2}}{e^{X_2}+e^{X_1}}\right]\\ &= \mathbb E\left[e^{tX_1} \frac{e^{X_1}}{e^{X_1}+e^{X_2}}\right] + \mathbb E\left[e^{tX_2} \frac{e^{X_2}}{e^{X_2}+e^{X_1}}\right]\\ &=2 \mathbb E\left[e^{tX_1} \frac{e^{X_1}}{e^{X_1}+e^{X_2}}\right], \end{align*} where the last equality uses that $X_1$ and $X_2$ are i.i.d. (hence exchangeable). This is not a Normal MGF.
37,687
Distribution for Paired Elo Matches Drawn from Normal Distribution
A proof will have to wait, but some simulations indicate that the distribution of the Elo score of the winner is normal. Here is the R code:
N <- 1e7
set.seed(7*11*13)
X1 <- rnorm(N); X2 <- rnorm(N)                     # ratings of the two players
Y <- runif(N)
Z <- ifelse(Y <= 1/(1+exp(X2-X1)), X1, X2)         # winner's rating: player 1 wins with probability 1/(1+exp(X2-X1))
(mu <- mean(Z))
[1] 0.363302
(sigma <- sd(Z))
[1] 0.9316078
37,688
Log of Average v. Average of Log
If you maintain the assumption that the daily dependent variable $Y_{ji}$ of month $i$ follows a log-normal distribution, this means that $$\ln Y_{ji} \sim \mathbf N (\mu_{ji}, \sigma_{ji}^2)$$ Then, denoting $d_i$ the number of days of month $i$, we also have $$ \frac {1}{d_i} \ln Y_{ji} \sim \mathbf N\left(\frac {\mu_{ji}}{d_i}, \frac {\sigma_{ji}^2}{d_i^2}\right)$$ If you also maintain the assumption that your sample is comprised of independent observations, the sum of independent normal random variables is certain to also follow a normal distribution, and so $$\sum_{j=1}^{d_i}\frac {1}{d_i} \ln Y_{ji} =\frac {1}{d_i} \sum_{j=1}^{d_i}\ln Y_{ji} \sim N\left(\frac {1}{d_i} \sum_{j=1}^{d_i}\mu_{ji}, \frac {1}{d_i^2} \sum_{j=1}^{d_i} \sigma_{ji}^2\right)$$ In words, if a log-normality assumption is stated at the level of a sample of independent daily data, then the monthly average of the logs of the original daily variables (equivalently, the log of their geometric mean, as a comment mentioned) will also be normally distributed.
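To illustrate the practical difference under this assumption, a small simulation sketch (the daily values, sample sizes and names are mine): the monthly mean of the logs is exactly normal, while the log of the monthly mean is only approximately so.

# Compare "average of logs" with "log of average" for simulated log-normal daily data (sketch)
set.seed(1)
d <- 30                                         # days in the month
sims <- replicate(4000, {
  y <- rlnorm(d, meanlog = 0, sdlog = 1)        # one month of daily values
  c(avg_of_log = mean(log(y)),                  # exactly normal under the assumption above
    log_of_avg = log(mean(y)))                  # only approximately normal
})
apply(sims, 1, function(z) shapiro.test(z)$p.value)  # normality check for each summary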
37,689
Log of Average v. Average of Log
The question is: what is log-normally distributed? I'm assuming it's the monthly series. In that case, take the average first, then the log. If instead you thought that the daily series were log-normally distributed, then your averaged monthly series would be very close to normally distributed, provided there is no large autocorrelation.
37,690
Calculating and plotting confidence interval for Theil-Sen estimator
The following R code is copied verbatim from Appendix C, page C-20 of U.S. EPA's Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities; Unified Guidance; March 2009 (EPA 530/R-09-007; https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P10055GQ.TXT). The code was written by Kirk M. Cameron, Ph.D. of MacStat Consulting, Ltd. # R script for Theil-Sen Confidence band # Compute bootstrapped confidence band around Theil-Sen trend line # user inputs: list of x-values, list of y-values, desired confidence level # Note: replace numbers in parentheses below with specific x and y values # corresponding to data-specific ordered pairs # x-values should be numeric values representing sampling dates or events # y-values should be concentration values corresponding to these dates or events # Script produces a plot of the Theil-Sen trend line, the confidence band around the trend, # and an overlay of the actual data values x= c(89.6,90.1,90.8,91.1,92.1,93.1,94.1,95.6,96.1,96.3) y= c(56,53,51,55,52,60,62,59,61,63) conf = .90 elimna= function(m){ # # remove any rows of data having missing values m= as.matrix(m) ikeep= c(1:nrow(m)) for(i in 1:nrow(m)) if (sum(is.na(m[i,])>=1)) ikeep[i]= 0 elimna= m[ikeep[ikeep>=1],] elimna } theilsen2= function(x,y){ # # Compute the Theil-Sen regression estimator # Do not compute residuals in this version # Assumes missing pairs already removed # ord= order(x) xs= x[ord] ys= y[ord] vec1= outer(ys,ys,"-") vec2= outer(xs,xs,"-") v1= vec1[vec2>0] v2= vec2[vec2>0] slope= median(v1/v2) coef= 0 coef[1]= median(y)-slope*median(x) coef[2]= slope list(coef=coef) } nb= 1000 temp= matrix(c(x,y),ncol=2) temp= elimna(temp) #remove any pairs with missing values x= temp[,1] y= temp[,2] n= length(x) ord= order(x) cut= min(x) + (0:100)*(max(x)-min(x))/100 #compute 101 cut pts t0= theilsen2(x,y) #compute trend line on original data tmp= matrix(nrow=nb,ncol=101) for (i in 1:nb) { idx= sample(ord,n,rep=T) xboot= x[idx] yboot= y[idx] tboot= theilsen2(xboot,yboot) tmp[i,]= tboot$coef[1] + cut*tboot$coef[2] } lb= 0; ub= 0 for (i in 1:101){ lb[i]= quantile(tmp[,i],c((1-conf)/2)) ub[i]= quantile(tmp[,i],c((1+conf)/2)) } tband= list(xcut=cut,lo=lb,hi=ub,ths0=t0) yt= tband$ths0$coef[1] + tband$ths0$coef[2]*tband$xcut plot(yt~tband$xcut,type='l',xlim=range(x),ylim=c(min(tband$lo),max(tband$hi)),xlab='Date',ylab='Conc') points(x,y,pch=16) lines(tband$hi~tband$xcut,type='l',lty=2) lines(tband$lo~tband$xcut,type='l',lty=2)
37,691
Covariance estimation of overlapping time series
I faced a similar problem earlier and found some related literature, e.g.:
Britten‐Jones, Mark, Anthony Neuberger, and Ingmar Nolte. "Improved inference in regression with overlapping observations." Journal of Business Finance & Accounting 38.5‐6 (2011): 657-683.
Harri, Ardian, and B. Wade Brorsen. "The overlapping data problem." Available at SSRN 76460 (1998).
Hansen, Lars Peter, and Robert J. Hodrick. "Forward exchange rates as optimal predictors of future spot rates: An econometric analysis." The Journal of Political Economy (1980): 829-853.
I do not remember finding any really simple solution in these papers (but my memory cannot be trusted). I was after correlation (rather than covariance) given overlapping observations. I thought that the following could perhaps help: run a simple regression of $\Delta_h x_t$ on $\Delta_h z_t$ with ARMA errors (such as described in Rob J. Hyndman's blog post "The ARIMAX model muddle"). The ARMA errors should take care of the statistical artifacts resulting from the overlapping observations. The resulting $R^2$ could perhaps be interpreted as the squared correlation. Going from correlations to covariances should not be too difficult. My thinking is only heuristic, but perhaps the idea could be developed and become useful.
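A rough sketch of that heuristic, using the forecast package's Arima() with an external regressor; the MA(h-1) error order reflects the usual intuition that h-period overlapping differences induce an MA(h-1) error structure, and the data, orders and names here are all illustrative:

# Regression of overlapping h-period changes with ARMA errors (illustrative sketch)
library(forecast)
set.seed(1)
h <- 5
x <- cumsum(rnorm(300)); z <- cumsum(rnorm(300))           # simulated levels
dx <- diff(x, lag = h); dz <- diff(z, lag = h)             # overlapping h-period differences
fit <- Arima(dx, order = c(0, 0, h - 1), xreg = as.matrix(dz))  # MA(h-1) errors for the overlap
summary(fit)
cor(fitted(fit), dx)^2                                      # an R^2-type summary from the fit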
37,692
Covariance estimation of overlapping time series
If I understand the question correctly, then in the univariate case the autocorrelation matrix is a Toeplitz matrix. In the multivariate case the matrix will be a block Toeplitz matrix. In a block Toeplitz matrix all the variables are grouped per time $t$: blocks of correlations among the variables at lag $h=0$ sit on the diagonal, and the blocks for $h>0$ sit off the diagonal. Toeplitz matrices are highly constrained; the number of estimated elements in a Toeplitz matrix is far smaller than in an unrestricted correlation matrix.
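For intuition in the univariate case, R's built-in toeplitz() constructs exactly such a constrained matrix from one autocovariance value per lag (the values below are made up); the block Toeplitz matrix is the multivariate analogue with a $p\times p$ block per lag:

# Univariate autocovariance matrix is Toeplitz: one distinct value per lag (illustrative values)
acov <- c(1.0, 0.6, 0.3, 0.1)   # gamma(0), gamma(1), gamma(2), gamma(3)
toeplitz(acov)                  # 4 x 4 matrix determined by only 4 estimated quantities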
37,693
Is the true linear regressor equal to the average linear regressor?
I am not sure what you are doing here, so only a quick comment. In general, for two random variables $Z$ and $W$, $E(ZW)\neq E(Z)E(W)$; for instance, if $W=Z$ with $E(Z)=0$ and $\operatorname{Var}(Z)=1$, then $E(ZW)=1$ while $E(Z)E(W)=0$. I am not sure what your operator $E_D$ does, but my guess is that this caveat applies to $E_D$ too. So the last line does not follow from the previous one.
37,694
Why does value iteration converge to the optimal value function in **finite** amount of steps?
According to Section VI in "An Introduction to Reinforcement Learning" by Sutton and Barto, this is an empirical observation rather than a strictly proved theorem. That suggests that for some special MDP models, convergence in a finite number of steps may not happen. Please correct me if my understanding is wrong.
37,695
Why does value iteration converge to the optimal value function in **finite** amount of steps?
An example: $S = \{s\}$, $A = \{a\}$, $R = \frac{1}{2}$, and discount factor $\gamma = \frac{1}{2}$. Taking the only action $a$ leads back to state $s$. Let's assume the initial value $V(s) := 0$. Updating by the Bellman formula $V(s) \leftarrow R + \gamma V(s)$ gives $V(s)= \frac{1}{2}, \frac{3}{4}, \frac{7}{8}, \dots$, which approaches the fixed point $V^*(s)=R/(1-\gamma)=1$ but never reaches it after any finite number of steps.
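A few lines of R reproducing the iteration (the number of sweeps shown is arbitrary):

# Value iteration on the one-state, one-action MDP above: V <- R + gamma * V
R <- 0.5; gamma <- 0.5
V <- 0
for (k in 1:10) {
  V <- R + gamma * V
  cat(sprintf("iteration %2d: V = %.6f\n", k, V))
}
# V converges geometrically to R / (1 - gamma) = 1 without ever reaching it exactly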
37,696
Review paper on particle filter
This appears to be a dissertation thesis or something close to that, not published in a scholarly journal but by Hamilton University, as clarified in ref. 2 of Feng et al.
37,697
What is the probability of observing some function given a gaussian process?
I looked into this in the case of a 1D GP and the parametric function $y = f(x)$ a few months ago. These were my findings: In order to get the probability of a function, $y = f(x)$, we have to perform a line integral through the probability distribution provided by the GP. This line integral would therefore be the probability of the curve given the GP. Unfortunately, we can't do this easily as there is no way of writing the GP in closed form to perform this operation. For this reason a sample-based approach is the best option (or at least this was what I believed). I thought of two methods to consider. The prediction of the GP at a point is $O(n^2)$ for every additional prediction after training, so if $n$ is small it is cheap to sample the function and the GP at many $x$'s, and Monte Carlo methods make sense. When $n$ is very large it is expensive to perform many samples. In this case it makes sense to perform Bayesian quadrature to estimate this line integral. In this situation I looked at using normal BQ sampling policies, but did think at the time that there may be an opportunity for novel acquisition functions for this exact situation.
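One hedged sketch of the sample-based idea in 1D: evaluate the GP posterior mean and covariance on a finite grid and score the candidate curve $y=f(x)$ by its joint Gaussian log-density there. This gives a grid-dependent density rather than a probability, and the kernel, data, and names below are all illustrative:

# Score a candidate curve under a GP posterior via a finite-grid Gaussian density (sketch)
library(mvtnorm)
rbf <- function(a, b, ell = 1, s2 = 1) s2 * exp(-0.5 * outer(a, b, "-")^2 / ell^2)
set.seed(1)
x_train <- runif(20, 0, 5)
y_train <- sin(x_train) + rnorm(20, sd = 0.1)
x_grid  <- seq(0, 5, length.out = 50)
K   <- rbf(x_train, x_train) + diag(0.1^2, 20)             # training covariance plus noise
Ks  <- rbf(x_grid, x_train)
Kss <- rbf(x_grid, x_grid)
mu_post  <- as.numeric(Ks %*% solve(K, y_train))            # posterior mean on the grid
cov_post <- Kss - Ks %*% solve(K, t(Ks)) + diag(1e-8, 50)   # posterior covariance (jittered)
f_cand <- sin(x_grid)                                       # candidate function y = f(x)
dmvnorm(f_cand, mean = mu_post, sigma = cov_post, log = TRUE)  # log-density of the curve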
37,698
Nonparametric tolerance intervals for discrete variables
The variable of interest is multinomially distributed with class (cell) probabilities: $p_1, p_2, ..., p_{10}$. Further, the classes are endowed with a natural order. First attempt: smallest "predictive interval" containing $90\%$ p = [p1, ..., p10] # empirical proportions summing to 1 l = 1 u = length(p) cover = 0.9 pmass = sum(p) while (pmass - p[l] >= cover) OR (pmass - p[u] >= cover) if p[l] <= p[u] pmass = pmass - p[l] l = l + 1 else # p[l] > p[u] pmass = pmass - p[u] u = u - 1 end end A non-parametric measure of the uncertainty (e.g., variance, confidence) in the $l,u$-quantile estimates could indeed be obtained by standard bootstrap methods. Second approach: direct "bootstrap search" Below I provide runable Matlab code that approaches the question directly from a bootstrap perspective (the code is not optimally vectorized). %% set DGP parameters: p = [0.35, 0.8, 3.5, 2.2, 0.3, 2.9, 4.3, 2.1, 0.4, 0.2]; p = p./sum(p); % true probabilities ncat = numel(p); cats = 1:ncat; % draw a sample: rng(1703) % set seed nsamp = 10^3; samp = datasample(1:10, nsamp, 'Weights', p, 'Replace', true); Check that this makes sense. psamp = mean(bsxfun(@eq, samp', cats)); % sample probabilities bar([p(:), psamp(:)]) Run the bootstrap simulation. %% bootstrap simulation: rng(240947) nboots = 2*10^3; cover = 0.9; conf = 0.95; tic Pmat = nan(nboots, ncat, ncat); for b = 1:nboots boot = datasample(samp, nsamp, 'Replace', true); % draw bootstrap sample pboot = mean(bsxfun(@eq, boot', cats)); for l = 1:ncat for u = l:ncat Pmat(b, l, u) = sum(pboot(l:u)); end end end toc % Elapsed time is 0.442703 seconds. Filter from each bootstrap replicate the intervals, $[l,u]$, that contain at least $90\%$ probability mass and calculate a (frequentist) confidence estimate of those intervals. conf_mat = squeeze(mean(Pmat >= cover, 1)) 0 0 0 0 0 0 0 1.0000 1.0000 1.0000 0 0 0 0 0 0 0 1.0000 1.0000 1.0000 0 0 0 0 0 0 0 0.3360 0.9770 1.0000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Select those that satisfy the confidence desideratum. [L, U] = find(conf_mat >= conf); [L, U] 1 8 2 8 1 9 2 9 3 9 1 10 2 10 3 10 Convincing yourself that the above bootstrap method is valid Bootstrap samples are intended to be stand-ins for something we would like to have, but do not, i.e.: new, independent draws from the true underlying population (short: new data). In the example that I gave, we know the data generating process (DGP), therefore we could "cheat" and replace the code lines pertaining to bootstrap re-samples by new, independent draws from the actual DGP. newsamp = datasample(cats, nsamp, 'Weights', p, 'Replace', true); pnew = mean(bsxfun(@eq, newsamp', cats)); Then we can validate the bootstrap approach by comparing it to the ideal. Below are the results. The confidence matrix from new, independent data draws: 0 0 0 0 0 0 0 1.0000 1.0000 1.0000 0 0 0 0 0 0 0 1.0000 1.0000 1.0000 0 0 0 0 0 0 0 0.4075 0.9925 1.0000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 The corresponding $95\%$-confidence lower and upper bounds: 1 8 2 8 1 9 2 9 3 9 1 10 2 10 3 10 We find that the confidence matrices closely agree and that the bounds are identical... Thus validating the bootstrap approach.
37,699
Plotting a "posterior median surface"
It is very likely that the author used a Gaussian process to produce the interpolation. I think that is true because an exercise in the book describes a very similar problem to this one and requires a plot based on a Gaussian process. I tried it, and I think the resulting plot shares features with the posterior median surface of the original question. The first plot (not reproduced here) was the median of the posterior distribution of $w(s)$ as above, slightly different because I ran another MCMC simulation; the second was the interpolation based on a Gaussian process. Comparing the two, the method of interpolation makes a huge difference.
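For readers who want to reproduce this kind of surface, here is a hand-rolled sketch of GP (simple kriging) interpolation of site-level posterior medians onto a grid; the exponential covariance, the range value, and all of the names are my assumptions, not the book's choices:

# Interpolate site-level posterior medians onto a grid with a simple GP/kriging predictor (sketch)
set.seed(1)
s <- cbind(runif(30), runif(30))                  # site coordinates
w <- rnorm(30)                                    # posterior medians of w(s) at the sites
phi <- 0.3                                        # assumed range parameter
K <- exp(-as.matrix(dist(s)) / phi) + diag(1e-6, 30)
grid <- expand.grid(x = seq(0, 1, length.out = 40), y = seq(0, 1, length.out = 40))
d_cross <- sqrt(outer(grid$x, s[, 1], "-")^2 + outer(grid$y, s[, 2], "-")^2)
w_hat <- exp(-d_cross / phi) %*% solve(K, w)      # kriging predictor on the grid
image(matrix(w_hat, 40, 40), main = "interpolated posterior median surface")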
Plotting a "posterior median surface"
It is very likely that the author used a Gaussian process to produce the interpolation. I think that is true because an exercise in the book describes a very similar problem to this one and requires a
Plotting a "posterior median surface" It is very likely that the author used a Gaussian process to produce the interpolation. I think that is true because an exercise in the book describes a very similar problem to this one and requires a plot based on a Gaussian process. I tried it and I think the resulting plot shares features with the posterior median surface of the original question. This is the median of the posterior distribution of $w(s)$ as above (it is slightly different because I ran another MCMC simulation): And this is the interpolation based on a Gaussian process: As you can see, the method of interpolation makes a huge difference.
Plotting a "posterior median surface" It is very likely that the author used a Gaussian process to produce the interpolation. I think that is true because an exercise in the book describes a very similar problem to this one and requires a
37,700
Motivating likelihood ratio test vs Wald test for paper reviewer
Wald vs likelihood ratio is secondary to the fact that comparing $y\sim B$ to $y\sim A+B$ is a test of $A$, not a test of $B$. I hope this is a typo in the referee report. Agresti’s Foundations of Linear and Generalized Linear Models gives some reasons to prefer likelihood ratio testing to Wald testing in logistic regression, though the difference usually is not considerable. However, the reviewer is all kinds of wrong to have you testing $A$ to say how good of a predictor $B$ is. That is roughly equivalent to running ANCOVA by testing the slope instead of the factor—complete nonsense.
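A minimal sketch of the correct comparison in R: to assess predictor $B$, compare $y\sim A$ with $y\sim A+B$ by a likelihood ratio test (the simulated data and names are illustrative):

# Likelihood ratio test for predictor B in a logistic regression (sketch)
set.seed(1)
n <- 300
A <- rnorm(n); B <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 0.8 * A + 0.6 * B))
fit0 <- glm(y ~ A,     family = binomial)    # reduced model (without B)
fit1 <- glm(y ~ A + B, family = binomial)    # full model (with B)
anova(fit0, fit1, test = "Chisq")             # likelihood ratio test of B
summary(fit1)$coefficients["B", ]             # Wald test of B, for comparison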