bayesian inference
Probability measures at play in Bayesian inference
https://stats.stackexchange.com/questions/230252/probability-measures-at-play-in-bayesian-inference
<p>This might be purely notational, but I'm confused about the probability measures at play when using Bayesian inference. It's sufficient to focus on the numerator here. Let's assume that I have a prior over hypotheses $P(H)$ and that these hypotheses are themselves distributions over some data. When some data $D$ is witnessed, I want to update my prior in the usual way:</p> <p>$P(H|D) \propto P(D|H) P(H)$. This might be silly, but what I'm confused about is the distribution of the first term, $P(D|H)$. If I'm not completely off track, $P(D|H)$ is $H(D)$, i.e., the probability of the witnessed data under the hypothesis. However, $P(\cdot)$ is a distribution over $H$ and not over $D$, which is why $P(D|H)$ is not making much sense to me. </p> <p>Where is my thinking wrong? That is, why is $P(D|H)$ defined if $P(\cdot)$ is a distribution over $H$? I'd appreciate any clarifications about the conceptual underpinnings that allow us to go from $P(D|H)$ to $P(H)$ using the same $P(\cdot)$. </p> <p>--</p> <p>As requested, an example that fixes the notation: Let $h \in H$ be a distribution over coin toss outcomes, and let $d_{heads} \in D$ be the event of a coin landing heads. For simplicity, let's assume we only have two hypothesized distributions, $h_1$ and $h_2$, where $h_1(d_{heads}) = .5$ (fair coin) and $h_2(d_{heads}) = .25$ (unfair coin).</p> <p>I want to update my prior over $H$, $P(h_i)$, after witnessing a coin toss that resulted in $d_{heads}$, and apply Bayes' rule. For $h_1$ this is</p> <p>$P(h_1|d_{heads}) \propto P(d_{heads}|h_1) P(h_1)$</p> <p>So, $P(h_1)$ is not an issue, as it is given by my prior. $P(d_{heads}|h_1)$ is what confuses me. This is the probability of the coin landing heads given that $h_1$ is true -- but the probability measure $P(\cdot)$ is defined over distributions of coin tosses, not over coin tosses. That's what I'm having trouble with.
Either $P(h_i)$ makes sense as the probability of hypothesis $i$, or it makes sense as $P(d_j|h_i)$ where it gives the probability of data $j$ given $i$'s truth. But I don't see how it can be both nor how to (conceptually) justify that $P(d_j|h_i)$ is often $h_i(d_j)$.</p>
<p>First, note that a Bayesian model is a joint probability distribution of all unknowns modeled as random variables. Or, if you use Bayes rule in other contexts, it still requires all random variables / events to be defined in the same joint probability space. The $P$s or $p$s then refer to marginal and conditional probabilities/distributions obtained from this joint space. </p> <p>The issue may be just notational. In an attempt to clarify the notational issues, I first go over your discrete example, somewhat rigorously, considering everything as events. Second, I then point out notational problems in the more general case (where we are dealing with random variables and densities rather than events), which may have confused you since the notation annoyingly refers to different functions with the same letter $p$.</p> <h3>The event case (your example)</h3> <p>A possible probability space modeling your example situation would be the following: let the sample space be \begin{equation} \Omega = \{hh_1 , hh_2\} \times \{dd_{heads}, dd_{tails}\}, \end{equation} (I just doubled the letters to notationally separate the outcomes and the events). Let the event space be $2^\Omega$. Now, introduce notation for the events: \begin{align} h_i &amp;= \{hh_i\} \times \{dd_{heads}, dd_{tails}\} \\ d_{foo} &amp;= \{hh_1,hh_2\} \times \{dd_{foo}\} \end{align} There is just one probability $P$. </p> <p>The point is that after specifying the following $P$-probabilities: 1. the probability of event $h_1$, 2. the conditional probability of $d_{heads}$ conditional on $h_1$, and 3. the conditional probability of $d_{heads}$ conditional on $h_2$, $P$ is uniquely determined and you may thus deduce the conditional probability of $h_1$ conditional on $d_{heads}$, using the Bayes rule. 
All in this same probability space.</p> <p>Note also that your \begin{equation} P(h_1 \mid d_{heads}) \propto P(d_{heads} \mid h_1)\,P(h_1) \end{equation} does not make sense since both the LHS and the RHS are just single numbers, so the proportionality does not mean anything. You meant \begin{equation} P(h_i \mid d_{heads}) \propto P(d_{heads} \mid h_i)\,P(h_i), \end{equation} where the point is that the proportionality factor is the same with $i=1$ and $i=2$.</p> <h3>A note on notation with densities</h3> <p>A popular simplifying-but-confusing notational convention in Bayesian statistics is to just use $p(x)$ and $p(y)$ to refer to density functions (or probability mass functions in the discrete case) of $x$ and $y$ respectively. A more precise notation would i) separate the random variables and their values (say, $X$ vs. $x$) and ii) separate the different density functions by, e.g., subscripts so that $p_X(x)$ is the value of the density function of $X$ evaluated at $x$. In the popular notation, which function each $p$ refers to is left implicit. So, instead of writing</p> <p>\begin{equation} p(\theta\mid y) \propto p(\theta)\,p(y\mid \theta) \end{equation}</p> <p>it may be clearer to write</p> <p>\begin{equation} p_{\Theta \mid Y}(\theta \mid y) \propto p_\Theta(\theta)\,p_{Y \mid \Theta}(y \mid \theta). \end{equation}</p>
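The two-hypothesis coin example can be checked numerically. A minimal sketch, assuming a uniform prior over $h_1$ and $h_2$ (the question leaves the prior unspecified):

```python
# Minimal sketch of the coin example: one probability measure P over the
# joint space of (hypothesis, toss outcome). The uniform prior is assumed.
prior = {"h1": 0.5, "h2": 0.5}        # P(h_i), assumed uniform
likelihood = {"h1": 0.5, "h2": 0.25}  # P(d_heads | h_i) = h_i(d_heads)

# Bayes rule: P(h_i | d_heads) is proportional to P(d_heads | h_i) P(h_i)
unnorm = {h: likelihood[h] * prior[h] for h in prior}
evidence = sum(unnorm.values())       # P(d_heads), the shared normalizer
posterior = {h: w / evidence for h, w in unnorm.items()}
print(posterior)  # h1 becomes twice as probable as h2
```

The shared normalizer is exactly the "same proportionality factor for $i=1$ and $i=2$" point made in the answer.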
bayesian inference
Inferring sample size from proportions
https://stats.stackexchange.com/questions/257998/inferring-sample-size-from-proportions
<p>How would one infer the number of people that took a test based on the percentages of people that answered particular questions correctly? </p> <p>For example <code> 1. 85% 2. 25% 3. 95% 4. 15% 5. 35% </code> $ n = 20 $</p> <p>A caveat is that these percentages actually come with some noise, therefore you cannot be sure that, for something like <code>35.6%</code>, <code>89/250</code> is a better answer than <code>7/20</code> (in fact, the larger the estimate of N, the less likely the estimate is to be true). </p> <p>I hope this question is clear, and I assume this will require Bayesian methods. </p> <p>(Using Python if that matters)</p>
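One way to make this concrete is a sketch under simplifying assumptions (no explicit noise model, a bounded candidate range): score each candidate $n$ by how closely every reported percentage can be realized as some integer count $k$ out of $n$.

```python
# Sketch: score candidate sample sizes n by how closely every reported
# percentage can be realized as an integer count k out of n. The candidate
# range and the lack of an explicit noise model are simplifying assumptions.
percentages = [0.85, 0.25, 0.95, 0.15, 0.35]  # from the example

def mismatch(n, ps):
    # total distance from each p to its nearest achievable fraction k/n
    return sum(min(abs(p - k / n) for k in range(n + 1)) for p in ps)

# Prefer small n among equally good fits, echoing the caveat that large
# estimates of N are less likely to be true.
scores = sorted(range(2, 101), key=lambda n: (mismatch(n, percentages), n))
print(scores[:3])  # best candidates first; n = 20 fits the example exactly
```

A fully Bayesian version would replace the hard tie-break with a prior over $n$ and a binomial-rounding likelihood for each percentage.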
bayesian inference
How do I calculate a posterior distribution for a Poisson model with exponential prior distribution for the parameter?
https://stats.stackexchange.com/questions/26199/how-do-i-calculate-a-posterior-distribution-for-a-poisson-model-with-exponential
<p>Suppose:</p> <ul> <li>$N \sim {\rm Poisson}(\lambda)$</li> <li>$\lambda$ is unknown, but we believe that it can be assumed $\sim \exp(1)$</li> </ul> <p>If I want to calculate $N | X$, i.e., $P(model | data)$, I need to use the Bayes theorem in the following way:</p> <p>$P(model|data) \propto P(data|model)*P(model)$</p> <ul> <li>$P(data|model)$ is the likelihood function</li> <li>$P(model)$ is my prior distribution density</li> </ul> <p>So:</p> <p>$P(data|model) = L(\lambda) = \exp\{-n\lambda + \log\lambda \sum k_i - \sum \log (k_i!)\}$</p> <p>And</p> <p>$P(model) = g(\lambda = 1) = e^{-\lambda}$</p> <p>Therefore</p> <p>$P(model|data) = \exp\{-n\lambda + \log\lambda \sum k_i - \sum \log(k_i!)\} e^{-\lambda}$</p> <p>And if I had a sample of $k_1 = j$, then</p> <p>$P(\lambda|k_1 = j) = \exp(-\lambda + \log \lambda j - \log j!) e^{-\lambda}$</p> <p>Is it correct? How do I calculate an expected value for the parameter?</p>
<p><span class="math-container">$\Pr(\text{data}|\text{model}) =\Pr(N=n|\lambda) = \frac{\lambda^n}{n!}e^{-\lambda}$</span>.</p> <p><span class="math-container">$p(\text{model}) = p(\lambda) = e^{-\lambda}$</span>.</p> <p><span class="math-container">$p(\lambda|N=n) = \dfrac{\frac{\lambda^n}{n!}e^{-\lambda}\cdot e^{-\lambda}}{\int_0^\infty \frac{\lambda^n}{n!}e^{-\lambda} \cdot e^{-\lambda}\, d\lambda} = 2^{n+1}\frac{\lambda^n}{n!} e^{-2\lambda} $</span></p> <p>which is a <a href="http://en.wikipedia.org/wiki/Gamma_distribution" rel="nofollow noreferrer">Gamma distribution</a> with parameters <span class="math-container">$n+1$</span> and <span class="math-container">$2$</span>.</p>
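A quick numerical check of this answer (the observed count n = 4 is an arbitrary example): the unnormalized posterior $\lambda^n e^{-\lambda}\cdot e^{-\lambda}$ should behave like a Gamma distribution with parameters $n+1$ and $2$, whose mean is $(n+1)/2$.

```python
# Numerical check: with N ~ Poisson(lambda) and lambda ~ Exp(1), the
# posterior given N = n should be Gamma(n + 1, rate 2) with mean (n + 1)/2.
import math

n = 4  # an arbitrary example count
grid = [i * 0.001 for i in range(1, 30000)]  # lambda grid on (0, 30)

def unnorm_post(lam):
    # Poisson(n | lam) times the Exp(1) prior, dropping constants in lam
    return lam ** n * math.exp(-lam) * math.exp(-lam)

weights = [unnorm_post(l) for l in grid]
post_mean = sum(l * w for l, w in zip(grid, weights)) / sum(weights)
print(post_mean)  # close to (n + 1) / 2 = 2.5
```

This also answers the question's last line: the expected value of the parameter is the posterior (Gamma) mean, $(n+1)/2$.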
bayesian inference
Should Bayesian inference be avoided with a small sample size and weakly informative priors?
https://stats.stackexchange.com/questions/621780/should-bayesian-inference-be-avoided-with-a-small-sample-size-and-weakly-informa
<p>When specifying a Bayesian model, one can specify weakly-informative priors. However, such priors may represent a concern to many researchers. After all, if they are weakly-informed, one may be concerned that the imprecision of such a prior may be biasing the results. It is also my understanding that the influence of the prior diminishes as more data is introduced. However, the prior may be very influential if the N is small. Given this, if you know beforehand that your priors are weak <em>and</em> that you have a small sample size, should you avoid Bayesian inference?</p>
<p>The question is not so much &quot;Am I using a prior that is strong or weak?&quot; (in the sense of concentrated or diffuse over the parameter space); what matters is why you are using that particular prior. Do you actually trust your prior, or was it chosen arbitrarily / for convenience?</p> <p>On the one hand, we may have a well-justified prior (because of sincerely held beliefs; or lots of prior data; or a reference prior that is widely agreed to be useful by people in our scientific domain, like @JohnMadden's beta/binomial example in some fields), and we want to update it with new data. Then even with small n, a Bayesian would be OK with the prior having a big effect on the posterior.</p> <p>On the other hand, if we don't have a prior that we <em>trust</em>, and we're just plugging a prior into our calculations for <em>convenience</em>, that's when your concern is valid: when n is small, the posterior can be very sensitive to small changes in the prior (even a weakly-informative prior). Due to this sensitivity, using an arbitrary prior just adds noise to your results.</p> <p>For large n, the effect of the prior should wash out. But for small n, if you didn't trust the prior before you saw the data, you shouldn't trust the posterior either. With small n and no reliable prior, instead of a Bayesian analysis---or even a Frequentist analysis (which may just confirm that &quot;The sample is too small to estimate these parameters with adequate precision&quot;)---I would just report descriptive statistics / graphs and be very transparent about the study's limitations: due to the sample size, our results cannot safely generalize to the population, etc.</p>
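The small-n sensitivity can be illustrated with a conjugate beta/binomial sketch (the particular priors and counts below are made up):

```python
# Illustration with made-up numbers: two mild Beta priors disagree
# noticeably about the posterior mean when n is small, and barely at all
# when n is large.
def posterior_mean(a, b, heads, n):
    # Beta(a, b) prior + n Bernoulli trials -> Beta(a + heads, b + n - heads)
    return (a + heads) / (a + b + n)

diffs = {}
for heads, n in [(2, 5), (200, 500)]:
    m_flat = posterior_mean(1, 1, heads, n)  # flat prior
    m_mild = posterior_mean(2, 2, heads, n)  # mildly regularizing prior
    diffs[n] = abs(m_flat - m_mild)
    print(n, round(m_flat, 4), round(m_mild, 4), round(diffs[n], 4))
```

With n = 500 the two posterior means nearly coincide; with n = 5 the prior choice is clearly visible in the answer.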
bayesian inference
Bayesian approach on games of chance with physical devices
https://stats.stackexchange.com/questions/71834/bayesian-approach-on-games-of-chance-with-physical-devices
<p>Suppose we alter side 6 of a die to appear more than 1/6th of the time. We do not know the actual proportion of the time each side of the die will appear, because some or all of the other 5 sides may no longer have the 1/6 proportion either. What do we need in order to be 99% confident about the probability of each of the 6 sides of this altered die? How does the <em>a priori</em> knowledge help us to determine how biased it is? What are the benefits of using Bayes instead of a classical approach? </p>
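A sketch of the Bayesian treatment this question is reaching for: a Dirichlet prior over the six face probabilities, updated with roll counts, gives a 99% credible interval per face. The symmetric Dirichlet(1,...,1) prior and the roll counts are assumptions for illustration.

```python
# Dirichlet-multinomial sketch: symmetric Dirichlet(1,...,1) prior over the
# six face probabilities, updated with hypothetical roll counts, then a
# Monte Carlo 99% credible interval for the loaded side 6.
import random

random.seed(0)
prior = [1.0] * 6
counts = [9, 10, 8, 11, 9, 25]  # invented rolls; side 6 looks loaded
alpha = [a + c for a, c in zip(prior, counts)]

def dirichlet_sample(alpha):
    # Gamma representation: normalize independent Gamma(a, 1) draws
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

p6 = sorted(dirichlet_sample(alpha)[5] for _ in range(4000))
lo, hi = p6[int(0.005 * len(p6))], p6[int(0.995 * len(p6))]
print(f"99% credible interval for side 6: ({lo:.3f}, {hi:.3f})")
```

Prior knowledge enters through the pseudo-counts: a stronger belief that the die was fair would mean larger symmetric prior values, pulling the interval toward 1/6.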
bayesian inference
Bayesian inference adaptable to non-mutually-exclusive events?
https://stats.stackexchange.com/questions/76229/bayesian-inference-adaptable-to-not-mutually-exlclusive-events
<p>Say I have $n$ possible events that lead to $m$ observable effects.</p> <p>Bayesian inference assumes that the events are mutually exclusive and jointly exhaustive. Could I still use Bayesian inference (suitably modified) when more than one event occurs at some observation?</p> <p>Real example: $n$ possible faulty components on a single board with $m$ on-board test points that can read "in range" or "out of range," and I cannot <em>a priori</em> exclude that two components are faulty at the same time.</p>
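One common way to drop the mutual-exclusivity assumption, sketched here with made-up numbers, is to give each component its own binary fault variable and update each one separately from a test-point reading (a naive-Bayes-style simplification that assumes the reading bears on each component independently):

```python
# Sketch (numbers invented): give each component its own fault indicator
# and update each separately from a test-point reading. Nothing forces the
# posteriors to be mutually exclusive or to sum to one across components.
def update(prior_fault, p_out_given_fault, p_out_given_ok, out_of_range):
    # Posterior P(fault | reading) for one component, by Bayes rule
    if out_of_range:
        like_fault, like_ok = p_out_given_fault, p_out_given_ok
    else:
        like_fault, like_ok = 1 - p_out_given_fault, 1 - p_out_given_ok
    u_fault = like_fault * prior_fault
    u_ok = like_ok * (1 - prior_fault)
    return u_fault / (u_fault + u_ok)

p1 = update(0.1, 0.9, 0.05, out_of_range=True)
p2 = update(0.2, 0.8, 0.05, out_of_range=True)
print(round(p1, 3), round(p2, 3))  # both can be high at the same time
```

The mutually-exclusive hypotheses are then joint fault configurations (all $2^n$ subsets), and the per-component posteriors are marginals of that joint space.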
bayesian inference
Finding optimal parameter values using a Bayesian model
https://stats.stackexchange.com/questions/95081/finding-optimal-parameter-values-using-a-bayesian-model
<p>I have a problem with the following setup. I've been reading <a href="http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/" rel="nofollow">"Doing Bayesian Data Analysis: A Tutorial with R and BUGS"</a> and it seems like the Bayesian approach is a good one, but I'm not entirely sure how to model it.</p> <p>I've got roughly 20 events that I'd like to analyze. Each event should be similar, and I can apply the same analysis to each. I'll assume that the events are independent (they might not be, but I'm not worried about that). For each event, I have a function of three parameters: threshold, s, and t. Threshold is a continuous variable between 0 and .5, and I'm guessing that the useful values are around .1. s is an integer between 2 and 15, and t is an integer between 1 and s. I can run the function over all values of s and t and a sample of values for threshold, probably .05, .06, ..., .2 (or whatever, this is flexible). The function gives me a raw score, where higher is better, and negative values are, well, negative; they represent an actual loss. It will be straightforward enough to normalize the raw score to a value between -1 and 1.</p> <p>What I want to know is what are the best values for the parameters (or, technically, the distributions of the parameters) that will lead to the best(ish) score(s), after looking at all the events. What I'm thinking I'll do is the following:</p> <p>Model threshold as a normal-like variable - using either a normal distribution, or beta, or gamma. I'm leaning towards a beta or gamma distribution because it is fairly easy to model my high uncertainty about the prior, and I don't expect the distribution to be perfectly bell-shaped, or even symmetric.</p> <p>Similarly, model s and t as normal-like distributions (again, leaning towards beta or gamma, for the same reasons). Even though the values are integers, it's no big deal at the end of the day if the mean of the posterior distribution is, say, 8.6 - I can work with that.
If there were distributions that handled discrete values, that would be cool, but, again, not necessary. I realize the binomial distribution is discrete, but it's more about modelling binary outcomes, if my understanding is correct, so I'm going to steer away from it in this particular instance.</p> <p>My question is, how do I utilize the score, either in raw form (guessing not that useful) or the normalized form (hopefully useful)? It seems like the most important piece of information, but I don't know how to incorporate it. I'm not trying to build a regression model; I'm looking more for the optimal results. And I don't want to do a logistic regression; again, it's not a prediction, but also because it's not a simple yes or no - best score or not - I want to weight parameters that give a normalized score of say .98 nearly as high as the ones that produce a 1.</p> <p>If I were doing this without Bayesian analysis, I think what I would do is something like, for each data point, for each parameter, add a value equal to normalized score for that data point. Then, at the end, look at, say, the top 3 or 5 values for each parameter, and try to make a decision based on that.</p> <p>But I'd like to understand how to model this in a Bayesian way. Any advice appreciated, and thanks in advance!</p>
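The non-Bayesian tally described in the last paragraph might look like this (the event tuples are invented for illustration):

```python
# Literal version of the tally sketched in the question; the event tuples
# (threshold, s, t, normalized score) are invented for illustration.
from collections import defaultdict

events = [(0.10, 8, 3, 0.98), (0.12, 8, 4, 1.00),
          (0.10, 9, 3, 0.75), (0.15, 7, 2, -0.40)]

tallies = {"threshold": defaultdict(float),
           "s": defaultdict(float),
           "t": defaultdict(float)}
for thr, s, t, score in events:
    # each parameter value accumulates the normalized score of its event
    tallies["threshold"][thr] += score
    tallies["s"][s] += score
    tallies["t"][t] += score

for name, tally in tallies.items():
    top = sorted(tally.items(), key=lambda kv: -kv[1])[:3]
    print(name, top)
```

Note how the additive tally already does the asked-for weighting: a .98 event contributes almost as much as a 1.00 event, unlike a best-or-not cutoff. A Bayesian version would treat these score-weighted tallies as an (unnormalized) pseudo-likelihood over parameter values.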
bayesian inference
Marginal posterior and prior are similar (and flat!)
https://stats.stackexchange.com/questions/49496/marginal-posterior-and-prior-are-similar-and-flat
<p>I designed a Bayesian model and sampled the posterior using an MCMC algorithm. My problem is that the posterior marginal distribution of a given latent intermediate variable appears to be uniform, just like the prior I assigned to it. In practice this variable is supposed to have a substantial influence on the model. Moreover, the posteriors over the other variables are in accordance with intuition and essentially unimodal. </p> <p>I am a bit confused by this situation. How should I interpret this result? Do I have to change my model? Considering that the problematic variable is an auxiliary variable that is not interpreted after inference, can I be satisfied with these results, or should this be interpreted as a failure of the modeling? </p>
<p>When the likelihood surface contains flat ridges related to the parameters of interest, the prior information has a substantial impact on the shape of the corresponding posterior distribution. In these cases, it is important to employ reasonable priors since they will basically drive the inference. Therefore, if the prior is flat, then the posterior is likely to be flat as well. Flat likelihoods appear in several contexts, in particular when the sample size is relatively small.</p> <p>Some simple recommendations:</p> <ol> <li>Simulate several data sets of the same size to check if you observe the same problem. </li> <li>Increase the sample size to check if the posterior concentrates around the true values (This should happen due to the <a href="http://en.wikipedia.org/wiki/Bernstein%E2%80%93von_Mises_theorem" rel="nofollow">Bernstein–von Mises theorem</a>). If this does not happen, then the problem may come from the software.</li> <li>Employ a very concentrated prior to check if the posterior also looks like the prior. This would <em>suggest</em> that the likelihood contains a flat ridge.</li> <li>Double check your code.</li> </ol>
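These checks can be partly automated by comparing the MCMC draws of the suspect variable directly against draws from its prior. A sketch with a hand-rolled two-sample Kolmogorov-Smirnov statistic (the "posterior" draws here are stand-in uniforms, mimicking a posterior that never moved away from the prior):

```python
# Sketch of a prior-vs-posterior diagnostic: compare MCMC draws of the
# suspect variable to draws from its prior with a hand-rolled two-sample
# Kolmogorov-Smirnov statistic. The "posterior" draws here are stand-in
# uniforms, mimicking a posterior that never moved away from the prior.
import bisect
import random

random.seed(1)
prior_draws = sorted(random.uniform(0, 1) for _ in range(2000))
posterior_draws = sorted(random.uniform(0, 1) for _ in range(2000))

def ks_stat(a, b):
    # max gap between the two empirical CDFs (a and b must be sorted)
    xs = sorted(set(a) | set(b))
    return max(abs(bisect.bisect_right(a, x) / len(a)
                   - bisect.bisect_right(b, x) / len(b)) for x in xs)

d = ks_stat(prior_draws, posterior_draws)
print(round(d, 3))  # small value: posterior indistinguishable from prior
```

With real MCMC output, a small statistic for the suspect variable alongside large statistics for the well-behaved variables would support the flat-ridge explanation.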
bayesian inference
Bayesian combination of expert opinion
https://stats.stackexchange.com/questions/535101/bayesian-combination-of-expert-opinion
<p>In a population of <span class="math-container">$N$</span>, <span class="math-container">$K$</span> experts pick <span class="math-container">$M_{k\in\{1, ..., K\}}$</span> individuals that will have a certain attribute. Note that <span class="math-container">$M$</span> can be different across experts (e.g., one expert can pick 5 individuals, another can pick 20). My goal is to sort the individuals by their relative likelihood of having that attribute. First thought that came to mind was to use Bayesian inference, but I don't have a history of how well the experts have done in the past. How would you formulate this problem?</p> <p>[Edit] A bit about the data set --- a close analogy to my problem is in fantasy football. At the start of every season, a panel of &quot;pundits&quot; pick their best XI to watch. My job is to aggregate all the picks without knowing each expert's track record. All I have is a bunch of IDs for each expert's picks but I can theoretically work out how the experts arrive at their selection by regressing on players' height, weight, shirt number (if it matters), performance in previous season, club, country etc. I guess one restriction is that I need to combine experts' picks, rather than regressing past performance on player attributes and build my own model.</p>
<p>I understand that you worry that you don't know the historical performance of the experts, so you don't know how reliable their responses are. It's a valid worry, but think of it in terms of the <a href="https://en.wikipedia.org/wiki/Wisdom_of_the_crowd" rel="nofollow noreferrer">wisdom of the crowd</a>: the experts' aggregate opinion should outweigh their individual biases and mistakes. For this to work, you would want the opinions to be made independently of each other (experts don't consult each other's opinions), and it is best if the experts are not a very heterogeneous group, though those are rather nice-to-haves. So you can just count the votes and sort the individuals by counts.</p> <p>But let's say that you worry that some of the experts can be biased; for example, they tend to be very generous when voting, or the opposite, they pick very few individuals by default. You might want to use a statistical model to correct those biases. <a href="https://stats.stackexchange.com/questions/140561/what-is-a-good-method-to-identify-outliers-in-exam-data/140609#140609">One of the models</a> used in such cases is the <a href="https://stats.stackexchange.com/questions/142276/how-to-judge-which-test-is-more-difficult/142279#142279">Rasch model</a></p> <p><span class="math-container">$$ P(Y_{ij} = 1) = \frac{\exp(\theta_i - \beta_j)}{1+\exp(\theta_i - \beta_j)} $$</span></p> <p>where the probability of seeing the <span class="math-container">$i$</span>-th individual picked by the <span class="math-container">$j$</span>-th expert is coded as a binary variable <span class="math-container">$Y_{ij}$</span> and modeled in terms of two parameters: the true attribute <span class="math-container">$\theta_i$</span> of the individual and the expert's bias <span class="math-container">$\beta_j$</span>.
Notice that the model <a href="https://stats.stackexchange.com/questions/449283/adjusting-mean-for-review-scores/449352#449352">is just</a> a generalized mixed-effects logistic regression model. You can use such a model for your data and treat the <span class="math-container">$\theta_i$</span> as the &quot;distilled&quot; attribute scores. There are also more complicated <a href="https://en.wikipedia.org/wiki/Item_response_theory" rel="nofollow noreferrer">Item Response Theory</a> models, such models are commonly used for modeling what is called the <em>examiner effect</em>, I encourage you to <a href="https://duckduckgo.com/?q=examiner%20effect" rel="nofollow noreferrer">search for the term</a> to learn more.</p> <p>Notice that this is very similar to <a href="https://stats.stackexchange.com/questions/444283/recommendation-system/444289#444289">matrix factorization</a>, <a href="https://developers.google.com/machine-learning/recommendation/collaborative/matrix" rel="nofollow noreferrer">used by recommender engines</a>. In matrix factorization, you would decompose the <span class="math-container">$n \times k$</span> matrix of the binary picks by <span class="math-container">$k$</span> experts for <span class="math-container">$n$</span> individuals into two components <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> of the dimensions <span class="math-container">$n \times m$</span> and <span class="math-container">$m \times k$</span></p> <p><span class="math-container">$$ Y \approx PQ $$</span></p> <p>In such a case, you created embedding vectors for the data, where <span class="math-container">$P_{i\cdot}$</span> can be thought of as a vector of <span class="math-container">$m$</span> latent features describing the individual, relevant to the <span class="math-container">$Y_{i\cdot}$</span> votes they got. 
In the recommender system, you could use the embeddings for making the &quot;people similar to you liked this&quot; kind of recommendations. Here you don't need such a feature, as you don't really need to &quot;recommend&quot; the experts, but still, the <span class="math-container">$P_{i\cdot}$</span> vector can be used for ranking or clustering the individuals. I'm describing this to show how a similar idea is used in a different field, so it can serve as an inspiration. The recommender systems literature can also give you some hints about scaling this if you are dealing with a big sample size <span class="math-container">$n$</span>.</p>
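A minimal illustration of fitting the Rasch model above by gradient ascent (the pick matrix, learning rate, iteration count, and identification constraint are all my choices; a real analysis would use a mixed-effects or IRT package):

```python
# Minimal Rasch-style fit by gradient ascent on the Bernoulli likelihood:
# P(Y_ij = 1) = sigmoid(theta_i - beta_j). The pick matrix, learning rate,
# and centering constraint are illustrative choices.
import math

Y = [[1, 1, 0],  # rows: individuals, columns: experts (1 = picked)
     [1, 0, 0],
     [0, 0, 0],
     [1, 1, 1]]
n, k = len(Y), len(Y[0])
theta = [0.0] * n  # latent attribute per individual
beta = [0.0] * k   # per-expert strictness

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
for _ in range(2000):
    for i in range(n):
        for j in range(k):
            r = Y[i][j] - sigmoid(theta[i] - beta[j])  # residual
            theta[i] += 0.05 * r
            beta[j] -= 0.05 * r
    mean = sum(theta) / n
    theta = [t - mean for t in theta]  # pin the location for identifiability

ranking = sorted(range(n), key=lambda i: -theta[i])
print(ranking)  # the individual picked by every expert ranks first
```

Sorting by the fitted $\theta_i$ gives the requested relative ranking while discounting each expert's generosity through $\beta_j$.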
bayesian inference
How do you deal with a &quot;multiple choice&quot; observation in Bayesian inference, when the choices are on a scale?
https://stats.stackexchange.com/questions/13352/how-do-you-deal-with-a-multiple-choice-observation-in-bayesian-inference-when
<p>Suppose I have a questionnaire and I ask respondents how often they eat at McDonalds:</p> <ol> <li>Never</li> <li>Less than once a month</li> <li>At least once a month but less than once a week</li> <li>1-3 times a week</li> <li>More than 3 times a week</li> </ol> <p>I then correlate these answers with whether the respondents are wearing brown shoes.</p> <ol> <li>Brown 65 -- not brown 38</li> <li>Brown 32 -- not brown 62</li> <li>Brown 17 -- not brown 53</li> <li>Brown 10 -- not brown 48</li> <li>Brown 9 -- not brown 6</li> </ol> <p><strong><em>The thing I can't get my head around is this:</em></strong> If a respondent picks #5, he (statistically) has a higher probability of wearing brown shoes than not. But, in a sense, his response subsumes responses 2-4, and if you accumulate their statistics (i.e., "eats at McDonalds sometimes") he has a higher probability of <em>not</em> wearing brown shoes.</p> <p>Now I realize that there are a bunch of caveats here -- sampling error in the stats, etc. But is there ever a valid argument for "rolling up" the stats (so that the values used in inference for #5, e.g., would consist of the sums of the 2-5 values, or some other scheme), or is this concept just a product of my twisted mind?</p> <p>(Note that I'm not talking about "collapsing" the stats into fewer observations, which I assume would be perfectly valid, but rather adjusting the probabilities that are used in inference, based on the knowledge that the various possible mutually-exclusive observations are on a sliding scale.)</p>
<p>Are you being confused by the correlation / regression distinction perhaps? When you say 'I correlate these answers' with shoe colour, you could mean either</p> <ol> <li>'I compute the conditional probability of shoe colour given eating habits', or</li> <li>'I compute a measure of linear relatedness underlying the joint distribution of eating habits and shoe colour choices'. </li> </ol> <p>Number 1 would be realised as a <em>regression</em> which, unlike a correlation, would <em>completely ignore</em> the dominance structure in the eating habits answers. Hence your mystification and interest in a data aggregation method that will embody this structure.</p> <p>Number 2 is a job for a <em>correlation</em> coefficient. There are special, structure-respecting correlation coefficients that might be better for this sort of dominance-structured data (see the excellent <a href="http://www.john-uebersax.com/stat/tetra.htm" rel="nofollow" title="page">tetrachoric and polychoric correlations page</a> by John Uebersax for an overview). </p> <p>Alternatively there are more explicit models in the latent trait / IRT family that will provide the raw materials for the various marginal and conditional distributional inferences that are, I think, the 'thing you can't get your head around'.</p>
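The "rolling up" idea in the question can be made concrete with its own table, by pooling each answer category with everything above it; the example reproduces the asker's observation that category 5 alone favours brown shoes while the pooled "sometimes" categories do not:

```python
# The question's "rolling up" made concrete with its own table: the
# conditional probability of brown shoes given an answer of at least k.
brown = [65, 32, 17, 10, 9]
not_brown = [38, 62, 53, 48, 6]

for k in range(5):
    b, nb = sum(brown[k:]), sum(not_brown[k:])
    print(f"answer >= {k + 1}: P(brown) = {b / (b + nb):.3f}")
```

Both views are valid conditional probabilities; they simply condition on different events ("answered exactly 5" versus "answered at least 2"), which is why they can point in opposite directions.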
bayesian inference
Bayesian Interval Estimates for Multinomial Probabilities
https://stats.stackexchange.com/questions/339357/bayesian-interval-estimates-for-multinomial-probabilities
<p>Can anyone point me to a reference for calculating Bayesian interval estimates for multinomial probabilities? I am familiar with conventional methods (i.e.: Quesenberry and Hurst (1964), Goodman (1965), Bailey (1980), Fitzpatrick and Scott (1987), and Glaz and Sison (1999)). I have found methods to calculate intervals for differences between proportions (Piegorsch and Richwine (2001)). However, after reading for about two weeks, I cannot find a reference for Bayesian intervals for individual proportions.</p> <p>Thank you very much for assistance. </p>
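Although not a literature pointer, one direct construction may help fix ideas: with a symmetric Dirichlet prior the posterior is Dirichlet(counts + prior), and per-category equal-tailed intervals follow from posterior draws (the counts below are hypothetical):

```python
# Sketch: with a symmetric Dirichlet(1,...,1) prior, the posterior for
# multinomial probabilities is Dirichlet(counts + 1); equal-tailed 95%
# intervals per category come from Monte Carlo draws. Counts are invented.
import random

random.seed(2)
counts = [52, 31, 17]
alpha = [c + 1 for c in counts]

def dirichlet_sample(alpha):
    # Gamma representation: normalize independent Gamma(a, 1) draws
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

draws = [dirichlet_sample(alpha) for _ in range(5000)]
intervals = []
for j, c in enumerate(counts):
    ps = sorted(d[j] for d in draws)
    lo, hi = ps[int(0.025 * len(ps))], ps[int(0.975 * len(ps))]
    intervals.append((lo, hi))
    print(f"category {j}: {c}/{sum(counts)} -> ({lo:.3f}, {hi:.3f})")
```

Each marginal here is a Beta distribution, so the same intervals could also be computed directly from Beta quantiles without sampling.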
bayesian inference
Effect of event on average probabilities given different base rates
https://stats.stackexchange.com/questions/440155/effect-of-event-on-average-probabilities-given-different-base-rates
<p>I am trying to solve the following question with my very rusty stats expertise:</p> <p>I have a data set of people, of which some exercise with different frequencies per month and others don’t exercise at all (base rates of exercise). My data contains all the dates when each person exercised. At some known point (or multiple times) some people go to a physio/doctor and get told to do exercise – this event is independent of whether they exercise by themselves anyway. Some of the people who were told to exercise do exercise in the week after the visit. I would like to calculate the impact the visit to the physio/doctor has on exercise in the week after a visit. I was thinking of the increase in the average probability of exercise in the week after the visit, but I am flexible to use other sensible measures.</p>
<p>What you're looking for can be formulated as:</p> <p><span class="math-container">$$Imp := \frac {p(E|P,D=1)} {p(E|P,D=0)} $$</span></p> <p>Where <span class="math-container">$Imp$</span> is Impact of the recommendation, <span class="math-container">$E$</span> is Exercising (to a minimum acceptable level at least), <span class="math-container">$P$</span> is a patient and <span class="math-container">$D$</span> is a binary-valued 'got a Doctor's recommendation or not'.</p> <p>With the notation out of the way, time to model. Since we only care about passing a minimum threshold in just one period at any time, we can model Exercise as a Bernoulli-distributed variable. </p> <p>Next, we could say that whether they do exercise or not depends on Patient-specific baseline factors (inclination, health, free time, whatever) and on the Doctor's recommendation, independently. </p> <p>That is, if you kidnapped a random person off the street for a checkup, the checkup itself does not affect their odds of exercising. If the visit entailed chopping the Patient's leg off, then this assumption would not hold, as it's harder to exercise leg-less.</p> <p>So with that in mind, <span class="math-container">$p(E|P,D) = p(E|P)p(P) + p(E|D)p(D)$</span>. Therefore:</p> <p><span class="math-container">$$ Imp = \frac { p(E|D=1)p(D=1) + p(E|P)p(P) } { p(E|D=0)p(D=0) + p(E|P)p(P) } $$</span></p> <p>If we assume that <em>not</em> making a recommendation has no impact on people's lifestyles, i.e.
<span class="math-container">$p(E|D=0)p(D=0)=0$</span>, that leaves us with:</p> <p><span class="math-container">$$ Imp = \frac { p(E|D=1)p(D=1) + p(E|P)p(P) } { p(E|P)p(P) } $$</span></p> <p>If you're familiar with the formula for the mean of a Beta distribution, it looks a lot like the reciprocal of this: <span class="math-container">$\mu_{Beta} = \frac{\alpha}{\alpha+\beta}$</span>.</p> <p>This makes sense - if we flipped the question to 'how many <em>fewer</em> people would exercise if not given the recommendation', we'd wind up with some fraction of the total number of people who would have exercised given proper encouragement.</p> <p>So, the simplest solution is to do just that - fit <span class="math-container">$Imp^{-1}$</span> as a Beta then invert the fraction to get your target value. </p> <p><span class="math-container">$p(E|P)p(P)$</span> is the ratio of the pre-treatment exercisers to all patients, <span class="math-container">$p(E|D=1)p(D=1)$</span> is the ratio of post-treatment exercisers to all <em>exercisers</em>, and both are something you can calculate from your data, probably fairly easily at that.</p> <p>The nice thing about Beta is that in cases like that, you could probably even calculate some small samples on paper if you're fine with just a point estimate of the mean.</p>
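A simple empirical counterpart of the impact ratio, with invented numbers (just rate comparison, before any Beta fitting): the exercise rate in the week after a visit versus the pooled baseline weekly rate.

```python
# Simple empirical counterpart of the impact measure, with invented data:
# compare the exercise rate in the week after a visit against the pooled
# baseline weekly exercise rate.
people = [
    # (weeks observed, weeks with exercise, exercised week after visit)
    (40, 10, True),
    (40, 0, False),
    (40, 20, True),
    (40, 4, True),
]

baseline = sum(e for _, e, _ in people) / sum(w for w, _, _ in people)
post = sum(x for _, _, x in people) / len(people)
impact = post / baseline
print(f"baseline {baseline:.3f}, post-visit {post:.3f}, ratio {impact:.2f}")
```

A per-person baseline (rather than the pooled one used here) would better respect the answer's point that patients differ in their baseline inclination to exercise.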
bayesian inference
Problem faced by Bayes when developing his method for Bayesian Inference
https://stats.stackexchange.com/questions/519436/problem-faced-by-bayes-when-developing-his-method-for-bayesian-inference
<p>I am reading <em>Principles of Statistics</em> (MG Bulmer, 1965) and stumbled upon the problem that Bayes considered when developing his theorem. Bulmer makes use of <span class="math-container">$dP$</span> and I have no idea what it means. In his words:</p> <blockquote> <p>The problem that Bayes himself considered was the following. Suppose that some event has an unknown probability, <span class="math-container">$P$</span>, of occurring and that in <span class="math-container">$n$</span> trials it has occurred <span class="math-container">$x$</span> times and failed to occur <span class="math-container">$n - x$</span> times. What is the probability that <span class="math-container">$P$</span> lies between two fixed values, <span class="math-container">$a$</span> and <span class="math-container">$b$</span>? Bayes first notes that the probability of the event occurring <span class="math-container">$x$</span> times out of <span class="math-container">$n$</span> is</p> <p><span class="math-container">$\frac{n!}{x!(n-x)!}P^x(1-P)^{n-x}$</span>.</p> <p>This is the likelihood. 
He then remarks that if nothing was known about <span class="math-container">$P$</span> before the experiment was done it is reasonable to suppose that it was equally likely to lie in any equal interval; hence the probability that <span class="math-container">$P$</span> lay in a small interval of length <span class="math-container">$dP$</span> was initially <span class="math-container">$dP$</span> and so the joint probability that it lay in this interval and that the event occurs <span class="math-container">$x$</span> times out of <span class="math-container">$n$</span> is</p> <p><span class="math-container">$\frac{n!}{x!(n-x)!}P^x(1-P)^{n-x}dP$</span>.</p> <p>The posterior probability that <span class="math-container">$P$</span> lies between <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is thus proportional to the integral of this expression from <span class="math-container">$a$</span> to <span class="math-container">$b$</span> and is equal to</p> <p><span class="math-container">$\frac{\int_a^b P^x(1-P)^{n-x}dP}{\int_0^1 P^x(1-P)^{n-x}dP}$</span>.</p> </blockquote> <p>I believe this last ratio comes directly from the conditional probability formula and I understand where the likelihood comes from. I do not understand the argument regarding <span class="math-container">$dP$</span> however.</p> <p>What is <span class="math-container">$dP$</span>? Does the author mean <span class="math-container">$dP = d \times P$</span> or is <span class="math-container">$dP$</span> some infinitesimally small interval that has nothing to do with the parameter <span class="math-container">$P$</span>?</p> <p>In case it's some super small interval, how come the joint probability is <span class="math-container">$P(X=x) \times dP$</span>?</p>
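<p>Bayes' ratio of integrals is easy to evaluate numerically; a sketch with made-up values <span class="math-container">$n=10$</span>, <span class="math-container">$x=7$</span> and interval <span class="math-container">$[a,b]=[0.5,0.9]$</span> (under the flat prior the posterior is <span class="math-container">$Beta(x+1,\, n-x+1)$</span>, so this ratio is just a Beta CDF difference):</p>

```python
def integrate(f, lo, hi, steps=100_000):
    """Midpoint Riemann sum of f over [lo, hi]."""
    h = (hi - lo) / steps
    return h * sum(f(lo + (i + 0.5) * h) for i in range(steps))

n, x = 10, 7                # made-up trial results (an assumption for illustration)
a, b = 0.5, 0.9             # interval of interest for P

kernel = lambda P: P**x * (1 - P)**(n - x)

# Bayes' ratio: the integral over [a, b] divided by the integral over [0, 1]
posterior_prob = integrate(kernel, a, b) / integrate(kernel, 0, 1)
print(posterior_prob)       # roughly 0.868
```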
212
bayesian inference
Bayesian updating of a constant probability using one data point
https://stats.stackexchange.com/questions/544034/bayesian-updating-of-a-constant-probability-using-one-data-point
<p>A reformulation of a question that came up in a model:</p> <p>Imagine a toy store that sells <span class="math-container">$K$</span> toys, where our prior is that each toy has equal probability <span class="math-container">$1/K$</span> of being purchased by a customer. Then you have a customer come in and buy a teddy bear. Can we use Bayesian updating to find a new probability estimate of a customer buying a teddy bear?</p>
<p>This is how I read your question: when a customer comes to a store, they always buy one, and only one, of <span class="math-container">$K$</span> items. There’s an infinite supply of the items (i.e. they are sampled with replacement). With no other prior knowledge, you assume a prior probability for picking any of the items to be <span class="math-container">$1/K$</span>. A customer comes and buys the <span class="math-container">$i$</span>-th item, and you want to use this information to update the prior.</p> <p>In such a case, use the <a href="https://stats.stackexchange.com/a/244946/35989">Dirichlet-multinomial model</a>. Your prior is a uniform <a href="https://en.wikipedia.org/wiki/Dirichlet_distribution" rel="nofollow noreferrer">Dirichlet distribution</a> with parameters <span class="math-container">$\alpha_1=\alpha_2=\dots= \alpha_K=1$</span>, hence the individual probabilities are on average equal to <span class="math-container">$1/K$</span>. When you observe a purchase for the <span class="math-container">$i$</span>-th category, you update the prior to get the posterior mean <span class="math-container">$E[p_i]=\frac{2}{K + 1}$</span> for the purchased item and <span class="math-container">$\frac{1}{K+1}$</span> for the other categories.</p>
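<p>The conjugate update is just count-adding; a minimal sketch, assuming a concrete <span class="math-container">$K=10$</span> (the question leaves <span class="math-container">$K$</span> open) and taking the teddy bear as item 0:</p>

```python
K = 10                       # assumed number of toys (not given in the question)
alpha = [1.0] * K            # uniform Dirichlet(1, ..., 1) prior
teddy = 0                    # index of the purchased toy (an arbitrary label)

alpha[teddy] += 1            # conjugate update: add the observed count

total = sum(alpha)
posterior_means = [a / total for a in alpha]
print(posterior_means[teddy])    # 2 / (K + 1) for the teddy bear
print(posterior_means[1])        # 1 / (K + 1) for each other toy
```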
213
bayesian inference
Intuition behind posterior predictive distribution
https://stats.stackexchange.com/questions/438218/intuition-behind-posterior-predictive-distribution
<p>I've recently encountered the "posterior predictive distribution" <span class="math-container">$$p(\bar{x}|X)=E_\theta[p(\bar{x}|\theta)]=\int_\theta p(\bar{x}|\theta)\hspace{0.5mm}p(\theta|X)d\theta$$</span> where <span class="math-container">$\bar{x}$</span> is a new point, <span class="math-container">$\theta$</span> is the (vector or 1-D) parameters of the distribution and <span class="math-container">$X$</span> is the already observed sample.</p> <p>However I'm not sure I understand where this formula comes from. I know that we don't know the true value of <span class="math-container">$\theta$</span>. I think that since we don't know <span class="math-container">$\theta$</span>, we say "why settle for one estimate of it, and not scan all its possible values?" Is that correct? Even if it is, I'm not certain that I understand the expected value part. </p> <p>Sorry if my thoughts are a bit jumbled. Any insight would be appreciated, thanks.</p>
<p>Let <span class="math-container">$X$</span> denote the <em>observations</em> and <span class="math-container">$\theta \in \Theta$</span> the <em>parameter</em>. In a Bayesian approach, both are considered random quantities. The first step of <em>modeling</em> is to define a statistical model, i.e. the distribution of <span class="math-container">$X$</span> given <span class="math-container">$\theta$</span>, which can be written as <span class="math-container">$X \mid \theta \sim p(\cdot \mid \theta)$</span>. This is mainly done by specifying a <em>likelihood function</em>.<br> Thus our statistical model describes the <em>conditional</em> distribution of <span class="math-container">$X$</span> given <span class="math-container">$\theta$</span>.<br> From a Bayesian perspective, we also define a <em>prior distribution</em> for <span class="math-container">$\theta$</span> on <span class="math-container">$\Theta$</span>: <span class="math-container">$\theta \sim \pi(\theta)$</span>.</p> <h3>The prior predictive distribution</h3> <p>Before observing any data, what we have is simply the chosen model, <span class="math-container">$p(x \mid \theta)$</span>, and the prior distribution of <span class="math-container">$\theta$</span>, <span class="math-container">$\pi(\theta)$</span>. One can then ask what the <em>marginal distribution</em> of <span class="math-container">$X$</span> is, that is, the distribution of <span class="math-container">$X \mid \theta $</span> <strong>averaged</strong> over all possible values of <span class="math-container">$\theta$</span>.<br> This can be simply written using expectation: <span class="math-container">\begin{align*} p(x) &amp;= \mathbb{E}_\theta \Big [ p(x \mid \theta) \Big ] \\ &amp;= \int_\Theta p(x \mid \theta) \pi(\theta) d\theta. 
\end{align*}</span></p> <h3>The posterior predictive distribution</h3> <p>The interpretation is the same as for the prior predictive distribution: it is the marginal distribution of <span class="math-container">$X \mid \theta$</span> <strong>averaged</strong> over all values of <span class="math-container">$\theta$</span>.<br> But this time the "weighting" function to be used is not <span class="math-container">$\pi(\theta)$</span> but our <strong>updated</strong> knowledge about <span class="math-container">$\theta$</span> after observing data <span class="math-container">$X^*$</span>: <span class="math-container">$\pi(\theta \mid X^*)$</span>.<br> Using the well-known Bayes theorem we have: <span class="math-container">$$ \pi(\theta \mid X^*) = \frac{p(X^* \mid \theta) \pi(\theta)}{p(X^*)} $$</span> And thus, the marginal distribution of <span class="math-container">$X \mid (X^*,\theta)$</span> averaged over <span class="math-container">$\Theta$</span> is: <span class="math-container">$$ p(x \mid X^*) = \int_\Theta p(x \mid \theta) \pi(\theta \mid X^*)d\theta $$</span></p> <h2>Example: Gamma-Poisson mixture.</h2> <p>Suppose our observations are made of counts, <span class="math-container">$X$</span>, and we define a Poisson model, that is: <span class="math-container">$X \mid \lambda \sim \mathcal{P}(\lambda)$</span>.<br> From a Bayesian perspective, we also define a prior distribution for <span class="math-container">$\lambda$</span>.<br> For mathematical reasons, it is appealing to use a Gamma distribution, <span class="math-container">$\lambda \sim \mathcal{G}(a,b)$</span>. 
</p> <h3>The prior predictive distribution</h3> <p>One particularity of this Gamma-Poisson mixture is that the marginal distribution will be distributed as a Negative-Binomial random variable.<br> That is, if <span class="math-container">$X \mid \lambda \sim \mathcal{P}(\lambda)$</span> and <span class="math-container">$\lambda \sim \mathcal{G}(a,b)$</span> then, <span class="math-container">$X \sim \mathcal{NB}\big (a,\frac{b}{b+1} \big )$</span>.<br> Thus the prior predictive distribution of <span class="math-container">$X$</span> is a Negative Binomial distribution <span class="math-container">$\mathcal{NB}\big (a,\frac{b}{b+1} \big )$</span>.</p> <h3>The posterior predictive distribution</h3> <p>Now, say we have observed <span class="math-container">$n$</span> counts <span class="math-container">$X =(X_1,\dots,X_n)$</span>.<br> First, thanks to the choice of a Gamma prior for <span class="math-container">$\lambda$</span>, the posterior distribution of <span class="math-container">$\lambda$</span> can be easily derived as being also a Gamma distribution: <span class="math-container">$$ \lambda \mid X \sim \mathcal{G} \bigg ( a + \sum_{i=1}^n X_i , b+n \bigg) $$</span></p> <p>From what we saw for the prior predictive distribution, the <strong>posterior predictive distribution of <span class="math-container">$X$</span></strong> will also be a Negative-Binomial: <span class="math-container">$$ \mathcal{NB} \bigg ( a + \sum_{i=1}^n X_i, \frac{b+n}{b+1+n} \bigg ) $$</span></p> <p>Here is an example where <span class="math-container">$a=100$</span>, <span class="math-container">$b=2$</span> and we observe the vector of counts <span class="math-container">$X=(85,80,70,65,71,92)$</span> : <a href="https://i.sstatic.net/AwAom.jpg" rel="noreferrer"><img src="https://i.sstatic.net/AwAom.jpg" alt="enter image description here"></a></p> <p>Here is the R code to produce the plot:</p> <pre><code>### Gamma-Poisson mixture: prior and posterior predictive distributions : 
require(ggplot2)

# Parameters of the prior distribution of lambda
a &lt;- 100
b &lt;- 2
x &lt;- 0:150
y1 &lt;- dnbinom(x, a, b/(b + 1))

# Vector of observations and posterior predictive distribution
X &lt;- c(85, 80, 70, 65, 71, 92)
n &lt;- length(X)
XS &lt;- sum(X)
x &lt;- 0:150
au &lt;- a + XS
bu &lt;- b + n
y2 &lt;- dnbinom(x, size = au, prob = bu/(bu + 1))

plot1 &lt;- ggplot() + aes(x = x, y = y1, colour = "Prior") +
  geom_line(size = 1) +
  geom_line(aes(x = x, y = y2, colour = "Post")) +
  scale_colour_manual(breaks = c("Prior", "Post"),
                      values = c("#cd7118", "#1874cd"),
                      labels = c("Prior Predictive", "Posterior Predictive")) +
  ggtitle("Prior and posterior predictive distributions for a=100 and b=2") +
  labs(x = "x", y = "Density") +
  theme(panel.background = element_rect(fill = "white", colour = "white",
                                        size = 0.5, linetype = "solid"),
        axis.line = element_line(size = 0.2, linetype = "solid", colour = "black"),
        axis.text = element_text(size = 10),
        axis.title = element_text(size = 10),
        legend.title = element_blank(),
        legend.background = element_blank(),
        legend.key = element_blank(),
        legend.position = c(.7, .5))
plot1
</code></pre>
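<p>As a cross-check of the closed form (a Python sketch, not part of the original answer): sampling <span class="math-container">$\lambda$</span> from the Gamma posterior and then drawing a Poisson count should reproduce the Negative-Binomial predictive mean, which here is <span class="math-container">$(a + \sum X_i)/(b+n)$</span>.</p>

```python
import math
import random

random.seed(1)

a, b = 100, 2
X = [85, 80, 70, 65, 71, 92]
n, S = len(X), sum(X)

# Closed-form posterior predictive: NB(r, p) with r = a + S, p = (b+n)/(b+1+n);
# its mean r(1-p)/p simplifies to (a + S)/(b + n).
r = a + S
p = (b + n) / (b + 1 + n)
nb_mean = r * (1 - p) / p

def poisson(lam):
    """Knuth's multiplication method; adequate for moderate lam."""
    L = math.exp(-lam)
    k, prod = 0, 1.0
    while prod > L:
        k += 1
        prod *= random.random()
    return k - 1

# Sample the predictive by composition: lambda ~ Gamma posterior, then Poisson
draws = [poisson(random.gammavariate(r, 1.0 / (b + n))) for _ in range(20_000)]
mc_mean = sum(draws) / len(draws)
print(nb_mean, mc_mean)     # both close to 70.375
```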
214
bayesian inference
Is it possible to infer both prior and posterior simultaneously?
https://stats.stackexchange.com/questions/313768/is-it-possible-to-infer-both-prior-and-posterior-simultaneously
<p>It seems that most Bayesian inference focuses on inferring the posterior. Is it possible to infer both the prior and the posterior?</p>
<p>Your question is ill-posed; it doesn't make sense to "infer" a prior. </p> <p>Let's say you have a likelihood $p(x|\theta)$, where $x$ is the data and $\theta$ are some parameters. In Bayesian inference, the objective is to find the distribution of the parameters given the data, $p(\theta|x)$, which is the posterior. In order to do this, you first posit a prior distribution over the parameters $p(\theta)$. We then have from Bayes' rule that $p(\theta|x)\propto p(x|\theta)p(\theta)$. </p> <p>"Inference" is usually reserved for the process of using data to learn something about the parameters of a model. Note that the prior doesn't involve the data at all, so it doesn't make sense to talk about inferring it. </p> <p>Also note that empirical Bayes isn't really related to reference/Jeffreys' priors (not Jerry's priors). In an empirical Bayes setup you already posit a prior distribution, and use the data to set the hyperparameters of this prior. The point of reference/Jeffreys' priors (and objective Bayesian inference) is to construct a prior using just the assumed likelihood.</p>
215
bayesian inference
Bayesian inference on a sum of iid real-valued random variables
https://stats.stackexchange.com/questions/24344/bayesian-inference-on-a-sum-of-iid-real-valued-random-variables
<p>Let $X_1$, $X_2$, ..., $X_n$ be iid RV's with range $[0,1]$ but unknown distribution. (I'm OK with assuming that the distribution is continuous, etc., if necessary.)</p> <p>Define $S_n = X_1 + \cdots + X_n$.</p> <p>I am given $S_k$, and ask: What can I infer, in a Bayesian manner, about $S_n$?</p> <p>That is, I am given the sum of a sample of size $k$ of the RV's, and I would like to know what I can infer about the distribution of the sum of all the RV's, using a Bayesian approach (and assuming reasonable priors about the distribution).</p> <p>If the support were $\{0,1\}$ instead of $[0,1]$, then this problem is well-studied, and (with uniform priors) you get beta-binomial compound distributions for the inferred distribution on $S_n$. But I'm not sure how to approach it with $[0,1]$ as the range...</p> <p><strong>Full disclosure</strong>: I already <a href="https://mathoverflow.net/questions/90580/bayesian-inference-on-sum-of-random-variables">posted this on MathOverflow</a>, but was told it would be better posted here, so this is a re-post.</p>
<p>Consider the following Bayesian nonparametric analysis.</p> <p>Define $\mathscr{X}=[0,1]$ and let $\mathscr{B}$ be the Borel subsets of $\mathscr{X}$. Let $\alpha$ be a nonzero finite measure over $(\mathscr{X},\mathscr{B})$.</p> <p>Let $Q$ be a Dirichlet process with parameter $\alpha$, and suppose that $X_1,\dots,X_n$ are conditionally i.i.d., given that $Q=q$, such that $\mu_{X_1}(B)=P\{X_1\in B\} = q(B)$, for every $B\in\mathscr{B}$.</p> <p>From the properties of the Dirichlet process, we know that, given $X_1,\dots,X_k$, the predictive distribution of a future observation like $X_{k+1}$ is the measure $\beta$ over $(\mathscr{X},\mathscr{B})$ defined by $$ \beta(B) = \frac{1}{\alpha(\mathscr{X})+k} \left( \alpha(B) + \sum_{i=1}^k I_B(X_i)\right) \, . $$</p> <p>Now, define $\mathscr{F}_k$ as the sigma-field generated by $X_1,\dots,X_k$, and use measurability and the symmetry of the $X_i$'s to get $$ E\left[ S_n \mid \mathscr{F}_k \right] = S_k + E\left[ \sum_{i=k+1}^n X_i \,\Bigg\vert\, \mathscr{F}_k \right] = S_k + (n-k) E\left[ X_{k+1} \mid \mathscr{F}_k \right] \, , $$ almost surely.</p> <p>To find an explicit answer, suppose that $\alpha(\cdot)/\alpha(\mathscr{X})$ is $U[0,1]$. Defining $c=\alpha(\mathscr{X})&gt;0$, we have $$ E\left[ S_n \mid X_1=x_1,\dots,X_k=x_k \right] = s_k + \frac{n-k}{c+k}\left(\frac{c}{2}+s_k\right) \, , $$ almost surely $[\mu_{X_1,\dots,X_k}]$ (the joint distribution of $X_1,\dots,X_k$), where $s_k=x_1+\dots+x_k$. In the "noninformative" limit of $c\to 0$, the former expectation reduces to $n\cdot (s_k/k)$, which means that, in this case, your posterior guess for $S_n$ is just $n$ times the mean of the first $k$ observations, which looks like as intuitive as possible.</p>
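<p>A quick numeric sketch (with made-up observations, chosen only for illustration) of the closed-form expectation and its <span class="math-container">$c \to 0$</span> limit:</p>

```python
def e_sn_given_data(s_k, k, n, c):
    """E[S_n | X_1..X_k] = s_k + (n-k)/(c+k) * (c/2 + s_k), under a U[0,1] base measure."""
    return s_k + (n - k) / (c + k) * (c / 2 + s_k)

x = [0.2, 0.9, 0.5, 0.7]          # hypothetical observations in [0, 1]
s_k, k, n = sum(x), len(x), 10

# Larger c pulls the answer toward the prior mean 1/2 per future draw;
# small c trusts the empirical mean of the first k observations.
for c in (10.0, 1.0, 1e-9):
    print(c, e_sn_given_data(s_k, k, n, c))

# The "noninformative" limit c -> 0 recovers the plug-in answer n * (s_k / k)
limit = n * (s_k / k)
print(limit)
```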
216
bayesian inference
Bayesian inferential target
https://stats.stackexchange.com/questions/250793/bayesian-inferential-target
<p>In frequentist, i.e., sampling-based statistics, we envision a target population to which inference is made. Notwithstanding the fact that our so-called random samples from this population are usually more convenience-based samples, we try to infer from a sample to the population. For example in a randomized clinical trial we pretend we have a random sample of patients with heart failure and try to make an inference about the mean treatment effect in the world population of heart failure patients. </p> <p>On the other hand, in Bayesian inference we make probability statements about the unknown mean treatment effect without necessarily speaking about a "population". What is the exact statement of what we are inferring? Is it that the treatment was actually effective in the group of patients we analyzed? Some deeper inference?</p> <p><strong>Update</strong></p> <p>It seems that it is safe to phrase the inferential target as a parameter in the underlying data generation process. This is somewhat more general than envisioning a certain human population. This generality allows for one to envision not only making an inference to a population but inference to repeated experiments involving the subjects initially included in the analysis. For example, one might estimate a parameter for the process that generated the observations for a specific set of subjects, were those subjects to be repeatedly studied afresh (as is done in crossover studies with no carryover effects). Such repetitions would observe different random errors for those subjects in measuring their outcomes.</p> <p>There may be yet a more general way to phrase the target.</p>
<p>"What is the exact statement of what we are inferring? Is it that the treatment was actually effective in the group of patients we analyzed? Some deeper inference?"</p> <p>I think you are confusing the terms of art with the discussion. One of the challenges of talking about things in multiple paradigms is that the different paradigms may use the same words to define different things, or they may not directly discuss something that is of critical importance to one paradigm, but not the other. Both Frequentists and Bayesians, for example, have a concept called an "expectation," but they both define it in a manner that is nonsensical in the other paradigm.</p> <p>I think this is what is happening here. Sampling statistics have to concern themselves with the "population," precisely because they work in the sample space. It isn't that a Bayesian does not care, it is that it doesn't impact their calculation on anything as directly.</p> <p>A second problem is that Bayesian statistics isn't one field, as there are multiple axiomatic structures you could use. How you discuss reality may change if you use de Finetti's axioms instead of Cox's. It could also depend upon whether you are an objectivist Bayesian who believes as Frequentists do that population parameters are fixed points, but whose location is unknown, versus subjectivist Bayesians who believe that the population parameter is a distribution that nature draws from and not a fixed point.</p> <p>Someone like Jaynes, who uses Cox's postulates, would frame hypotheses as logical assertions. For example, hypothesis one could be that a drug is non-harmful. Hypothesis two would be that it is harmful. Implicitly, this is a universal statement and hence a population statement. The population is never mentioned.</p> <p>Both methods depend upon the sample for inference, but a Bayesian can have an infinite number of hypotheses. 
It is more important for a Bayesian to be clear about what they are asserting and why.</p> <p>There is one other difference that is important. When you use a Frequentist method you are concerned with the sampling distribution of the statistic and not the sampling distribution of the data. Infinitely many different distributions have a population mean, and they would all be tested with either a t-test or a z-test. The Bayesian is concerned with the sampling distribution of the data, but not the parameters. </p> <p>Consider a set of independent events that map to a probability over the set [0,1] in $d$ dimensions. It will be approximately multivariate normal as the sample size becomes large enough. Now let us assume that although the events are independent, the components that make up the dimensions are not. They are part of a system. Let us also assume they share a common variance, $\sigma^2_i=\sigma^2_j,\forall{i,j}\in{D}$, and that information about any one mean exists in the other means. </p> <p>The Bayesian posterior for the set $\mu_i,i\in{1\dots{d}}$ for independent dimensions with independent variances and no shared information on the means would look very different from one where you assume a common variance $\sigma^2$ and shared information. The Frequentist tests would be no different but the Bayesian posteriors would be.</p> <p>Bayesian methods are concerned about the population through the likelihood function because it models how the data is generated in the first place in nature. That is why Bayesian model selection methods are so important, because you may not know the true model in nature that the population uses.</p>
217
bayesian inference
Bayesian counterpart to parameter estimate precision
https://stats.stackexchange.com/questions/333749/bayesian-counterpart-to-parameter-estimate-precision
<p>In maximum likelihood theory it is common to summarise parameter estimates by their maximum likelihood estimate $\theta_{\mathrm{MLE}}$ and the corresponding standard error $\sigma_{\mathrm{MLE}}$ or coefficient of variation $$CV = \frac{\sigma_{\mathrm{MLE}}}{\theta_{\mathrm{MLE}}}.$$ This works since we assume that the MLE is normally distributed.</p> <p>Especially, when using the $CV$ it is easy to understand the precision of the estimate independent of the scale of the parameter.</p> <p>In Bayesian statistics, we get the posterior density for $\theta$ $$p(\theta | \mathcal{D}) \propto p(\mathcal{D} | \theta) p(\theta).$$ From this we can calculate the mean, the mode and whichever credible interval we want. However, I am having a hard time to find a reasonable equivalent for the kind of scale free precision estimate I can get from the $CV$ in the maximum likelihood case.</p> <p>The problem here is that my posterior parameter distribution does not have an analytical form. I have it only in the form of MCMC samples.</p> <p>This seems like such a standard question, but I was quite surprised that I didn't seem to find anything sensible on Google.</p> <p>My idea so far is to present median and mode, as well as 68% and 95% credible intervals, to give readers a sense of comparison with the normal distribution. But I want to also be able to tell scale-free if my estimate has a good precision or not. Compared to my prior it is localized, but how would I tell if precision is good?</p> <p>I feel like I might have misunderstood something fundamental here.</p> <p><strong>EDIT</strong> To clarify my question:</p> <p>Assume I have two parameters in my model $\theta_1$ and $\theta_2$ and assume that $\theta_1$'s magnitude is somewhere around 0.3 and $\theta_2$'s somewhere around 13. These parameters fill different roles in my (non-linear) model, so that they are both impactful. 
In a maximum likelihood analysis, I could present the $CV$ of these parameters, which normalizes the standard deviation by the MLE estimate and is therefore scale-free.</p> <p>My main question is whether there is a standard procedure for this in Bayesian analysis? Or do I have to come up with my own normalization? </p> <p>Since I have a non-linear model, maybe it would be necessary to normalize the posterior distribution's spread by the <a href="https://en.wikipedia.org/wiki/Sensitivity_analysis" rel="nofollow noreferrer">sensitivity</a> of the parameter?</p>
<p>Given that $\theta_{\text{MLE}}$ is a point estimator, the obvious counterpart in Bayesian analysis would be the <a href="https://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation" rel="nofollow noreferrer">posterior mode estimator</a> $\theta_{\text{MAP}} \equiv \arg \max_\theta f( \theta | x )$. Although this is a Bayesian estimator, you could still derive its frequentist sampling properties, including its standard error and corresponding coefficient of variation, just as you can with the MLE. Both the MLE and MAP estimators have asymptotic normal distributions under appropriate regularity conditions. In Bayesian analysis it is common to encounter problems where the posterior mode does not have a closed form solution and so you would obtain this via MCMC methods, and similarly, its frequentist properties would also be obtained via numerical methods.</p>
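<p>Separately from the MAP route, the scale-free summary the question asks for can be computed directly from the MCMC draws as a posterior coefficient of variation, sd/mean of the samples (meaningful when the parameter is positive and bounded away from zero). A sketch; the Gaussian draws below are simulated stand-ins for real MCMC output, with the magnitudes 0.3 and 13 borrowed from the question:</p>

```python
import random
import statistics

random.seed(0)

# Stand-ins for MCMC samples of theta_1 (~0.3) and theta_2 (~13);
# both are given a 10% relative spread so their CVs should match.
theta1 = [random.gauss(0.3, 0.03) for _ in range(5000)]
theta2 = [random.gauss(13.0, 1.3) for _ in range(5000)]

def posterior_cv(draws):
    """Scale-free spread of a posterior sample: sd / mean."""
    return statistics.stdev(draws) / statistics.fmean(draws)

print(posterior_cv(theta1), posterior_cv(theta2))   # both near 0.1
```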
218
bayesian inference
Does the posterior necessarily follow the same conditional dependence structure as the prior?
https://stats.stackexchange.com/questions/414045/does-the-posterior-necessarily-follow-the-same-conditional-dependence-structure
<p>One of the assumptions in a model is the conditional dependence between random variables in the joint prior distribution. Consider the following model, <span class="math-container">$$p(a,b|X) \propto p(X|a,b)p(a,b)$$</span></p> <p>Now suppose an independence assumption for the prior <span class="math-container">$p(a,b) = p(a)p(b)$</span>.</p> <p>Does this assumption imply the posterior has the following conditional dependence as well? <span class="math-container">$$p(a|X)p(b|X) \propto p(X|a,b)p(a)p(b)$$</span></p>
<p>Your question can also be stated as: "<span class="math-container">$X$</span> is dependent on <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. And <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are independent. Does this imply that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are conditionally independent given <span class="math-container">$X$</span>?" </p> <p>The answer is no. We just need a counter-example to show it isn't the case. Suppose <span class="math-container">$X = a + b$</span>.</p> <p>Then, once we know <span class="math-container">$X$</span>'s value, <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are dependent (information about one tells us what the other will be). For example, suppose <span class="math-container">$X=5$</span>. Then, if <span class="math-container">$a=3$</span>, it tells us that <span class="math-container">$b=2$</span>. Similarly, if <span class="math-container">$b=4$</span>, it tells <span class="math-container">$a=1$</span>. </p>
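<p>The counterexample is easy to simulate: take <span class="math-container">$a$</span> and <span class="math-container">$b$</span> as independent fair 0/1 draws (my choice for illustration) and condition on <span class="math-container">$X = a + b = 1$</span>:</p>

```python
import random

random.seed(42)

# a and b are independent fair 0/1 draws; X = a + b
trials = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(100_000)]

# Condition on X = 1: now b determines a exactly
given_x1 = [(a_, b_) for a_, b_ in trials if a_ + b_ == 1]

p_a1_given_x1_b0 = (sum(a_ for a_, b_ in given_x1 if b_ == 0)
                    / sum(1 for _, b_ in given_x1 if b_ == 0))
p_a1_given_x1_b1 = (sum(a_ for a_, b_ in given_x1 if b_ == 1)
                    / sum(1 for _, b_ in given_x1 if b_ == 1))
print(p_a1_given_x1_b0, p_a1_given_x1_b1)   # 1.0 and 0.0: fully dependent given X
```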
219
bayesian inference
Probability of hitting X shots in N tries knowing that the P(hit) is the ratio of previous hits
https://stats.stackexchange.com/questions/592969/probability-of-hitting-x-shots-in-n-tries-knowing-that-the-phit-is-the-ratio-o
<p>Let <span class="math-container">$X$</span> be the number of hits in <span class="math-container">$N$</span> tries, I know that the probability of the next hit is <span class="math-container">$P(\text{Hit}) =X/N$</span>.</p> <p>How can I get the generic expression for the probability distribution function of hitting <span class="math-container">$x$</span> hits in <span class="math-container">$N$</span> tries, knowing that the probability of hitting the first shot is <span class="math-container">$p_1$</span>?</p> <p>I've been trying to find the solution to this, I know from brute forcing that for <span class="math-container">$p_1 = 0.5$</span> the distribution is a constant ( <span class="math-container">$1/(N+1)$</span> I believe), when it's higher than 0.5 it's a positive slope line and lower than 0.5 a negative slope line. I just don't know how to mathematically reach a result.</p> <p>Edit: Clarification. Let's assume the first two 6 shots are known. There were 3 hits and 3 misses, and so the probability of the 7th shot hitting is 0.5. Should that hit, the probability of hitting the 8th shot would be <span class="math-container">$4/7$</span>. Should that miss, the probability of hitting the 8th shot would be <span class="math-container">$3/7$</span>.</p> <p>I forgot to say that when I was brute forcing I assumed the first two shots to be a hit and a miss, but knowing any sequence of two or more previous shots would be necessary for the problem to make sense.</p>
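<p>The brute-force finding in the question can be reproduced with a short simulation (a sketch, assuming the first two shots are one hit and one miss, as in the question's brute force): the process is a Pólya urn, and the number of additional hits in <span class="math-container">$m$</span> further shots comes out uniform on <span class="math-container">$\{0,\dots,m\}$</span>, i.e. each value has probability <span class="math-container">$1/(m+1)$</span>:</p>

```python
import random
from collections import Counter

random.seed(7)

def run(extra_shots):
    """Start from 1 hit in 2 tries; each shot hits with probability X/N."""
    hits, shots = 1, 2
    new_hits = 0
    for _ in range(extra_shots):
        if random.random() < hits / shots:
            hits += 1
            new_hits += 1
        shots += 1
    return new_hits

m = 5
runs = 60_000
counts = Counter(run(m) for _ in range(runs))
freqs = {h: c / runs for h, c in sorted(counts.items())}
print(freqs)   # each of 0..5 occurs with frequency near 1/6
```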
<p>&quot;There were 3 hits and 3 misses, and so the probability of the 7th shot hitting is 0.5.&quot; Do you mean: &quot;There were 3 hits and 3 misses, and so <em>my estimate of</em> the probability of the 7th shot hitting is 0.5&quot;?</p> <p>Some more information is needed before your question can be answered. Firstly, is the process stable? i.e. is the real probability of a hit constant, or does it drift over time?</p> <p>Secondly, what prior knowledge do you have of the process? If you can encode your prior knowledge as a probability distribution of the unknown P then Bayesian methods provide a precise answer to your question.</p> <p>In the absence of prior knowledge your question has no answer. Put it this way: suppose I have already planned the infinite sequence miss, miss, hit, hit, miss, hit, miss ... according to some whim; then the first <span class="math-container">$n$</span> results don't really tell you anything about the <span class="math-container">$n+1^{st}$</span>.</p>
220
bayesian inference
Bayesian inference from &quot;extra&quot; information - Beta-binomial case
https://stats.stackexchange.com/questions/541233/bayesian-inference-from-extra-information-beta-binomial-case
<p>Say we have two coins with unknown success probabilities <span class="math-container">$p_1$</span> and <span class="math-container">$p_2$</span>. To know more about the probabilities, say that we use Bayesian approach.</p> <p>To do so, we first set our prior: <span class="math-container">$P_1\sim Beta(1,1)$</span> and <span class="math-container">$P_2\sim Beta(1,1)$</span>.</p> <p>Tossing both of the coins together, we update each of the prior as usual.</p> <p>For example, we ran 10 rounds of tossing, 4 Heads on coin 1 and 8 Heads on coin 2.</p> <p>The posterior should be <span class="math-container">$P_1\sim Beta(5,7)$</span> and <span class="math-container">$P_2\sim Beta(9,3)$</span>.</p> <p>My question is that what happens if we receive an &quot;extra&quot; piece of information that says &quot;<span class="math-container">$p_1$</span> is greater than <span class="math-container">$p_2$</span>&quot;</p> <p>Is there any systematic way to accommodate this extra piece of information into the posterior?</p>
<p>What you're referring to in the first part of the question is the <a href="https://stats.stackexchange.com/questions/47771/what-is-the-intuition-behind-beta-distribution/47782#47782">beta-binomial model</a>, where a binomial distribution is assumed as the likelihood and a beta as the prior; by conjugacy, the posterior is also a beta distribution.</p> <p>Your problem description in the second part describes a different scenario because it is multivariate. If you know that <span class="math-container">$p_1 &gt; p_2$</span>, this means that the parameters are dependent and you are talking about some multivariate distribution for the parameters (vs two univariate beta distributions). In such a case, you cannot use two (independent) beta-binomial models. The constraint can be imposed by choosing a multivariate prior for the parameters. For such a model you won't have a closed-form solution, so you would need to use MCMC or some other kind of approximate inference.</p>
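<p>For this small two-parameter example, one simple alternative to MCMC (my suggestion, not from the answer above) is rejection sampling: if the joint prior is the product of the two flat Betas truncated to <span class="math-container">$p_1 &gt; p_2$</span>, then the constrained posterior is proportional to the product of the two unconstrained posteriors times the indicator, so one can draw from <span class="math-container">$Beta(5,7)$</span> and <span class="math-container">$Beta(9,3)$</span> and keep only pairs with <span class="math-container">$p_1 &gt; p_2$</span>:</p>

```python
import random
import statistics

random.seed(3)

# Unconstrained posteriors from the question: Beta(5, 7) and Beta(9, 3)
draws = [(random.betavariate(5, 7), random.betavariate(9, 3))
         for _ in range(100_000)]

# Keep only draws satisfying the constraint p1 > p2
kept = [(p1, p2) for p1, p2 in draws if p1 > p2]

accept_rate = len(kept) / len(draws)
mean_p1 = statistics.fmean(p1 for p1, _ in kept)
mean_p2 = statistics.fmean(p2 for _, p2 in kept)
print(accept_rate, mean_p1, mean_p2)   # few draws survive; the means are pulled together
```

The low acceptance rate is the price of this approach: it works here because the constraint region still gets a few percent of the mass, but for tighter constraints or more parameters MCMC is the practical choice, as the answer says.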
221
bayesian inference
why does posterior prediction involve integration over all parameter space?
https://stats.stackexchange.com/questions/608478/why-does-posterior-prediction-involve-integration-over-all-parameter-space
<p>The primary objective of Bayesian inference is to compute the posterior.</p> <p>For instance, if the posterior <span class="math-container">$p(\theta | x)$</span> is known then the expectation <span class="math-container">$\mathbb{E}$</span> of the test function <span class="math-container">$\tau(\theta)$</span> under the posterior <span class="math-container">$p(\theta | x)$</span> can be computed like</p> <p><span class="math-container">$E[\tau | x] = \int d\theta \tau(\theta) p(\theta | x)$</span>.</p> <p>To make a prediction <span class="math-container">$x'$</span> from the data distribution <span class="math-container">$p(x | \theta)$</span>, assuming <span class="math-container">$x'$</span> and <span class="math-container">$x$</span> are independent to each other, the posterior predictive distribution of <span class="math-container">$x'$</span> is</p> <p><span class="math-container">$p(x' | x) = \int d\theta p(x' | x, \theta) p(\theta | x) = \int d\theta p(x' | \theta) p(\theta | x)$</span>.</p> <p>How should I convince myself on an intuitive level of the necessity of integrating over all <span class="math-container">$\theta$</span> in order to compute the predictive posterior?</p>
<p>What you are looking at is the <a href="https://en.wikipedia.org/wiki/Law_of_total_probability" rel="nofollow noreferrer">law of total probability</a>, <a href="https://en.wikipedia.org/wiki/Law_of_total_expectation" rel="nofollow noreferrer">law of total expectation</a>, etc. These laws follow directly from the definition of conditional probability and conditional probability densities. If you would like to convince yourself of the validity of these rules then I recommend you review the definition and properties of conditional probability densities.</p>
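One way to build intuition is Monte Carlo: the posterior predictive is just the likelihood of the new point averaged over posterior draws of the parameter. A hedged sketch (the Beta(8, 4) posterior over a coin's heads-probability is assumed purely for illustration):

```python
import random

random.seed(0)

# Posterior over a coin's heads-probability theta: Beta(8, 4) (assumed).
# The posterior predictive probability of heads on the next flip averages
# p(x'=1 | theta) = theta over the posterior:
#   p(x'=1 | x) = integral of theta * p(theta | x) dtheta
#              ~= mean of posterior draws of theta.
draws = [random.betavariate(8, 4) for _ in range(100_000)]
pred_heads = sum(draws) / len(draws)

# Analytic value for a Beta(a, b) posterior is a / (a + b) = 8/12
```

Integrating over all theta matters because no single theta value is certain; the predictive mixes the predictions of every plausible theta, weighted by its posterior probability.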
222
bayesian inference
Distribution of oddsratio after bayesian inference under binomial model
https://stats.stackexchange.com/questions/305815/distribution-of-oddsratio-after-bayesian-inference-under-binomial-model
<p>Let us have 2 groups: a treatment group (1) and a control group (2), with survival probabilities $p_1$ and $p_2$ respectively. Of course, each patient survives or dies independently given the $p_i$ of their group. Define $y_i$ as the number of survivors in group $i$ and $n_i - y_i$ as the number of deceased in group $i$.</p> <p>So, I am setting up a Bayesian model for this:</p> <p>$p(p_1, p_2| Y) \propto p_1^{y_1}(1-p_1)^{n_1 - y_1} \cdot p_2^{y_2}(1-p_2)^{n_2 - y_2} \cdot 1$</p> <p>The 1 comes from $Beta(1, 1) \cdot Beta(1, 1)$, which I assume as the prior $p(p_1, p_2)$.</p> <p>As I understand it, the posterior will be a product of 2 Betas?</p> <p>Now, how can I summarize the posterior distribution of the odds ratio, $\frac{p_2/(1 - p_2)}{p_1/(1 - p_1)}$?</p> <p>Also, is the choice of noninformative prior correct in this case? Or is there a way to claim that one particular prior is better than others?</p>
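One common way to summarize the posterior of the odds ratio here is simulation: with Beta(1, 1) priors the posterior factorises into two independent Betas, so one can draw from each and transform. A hedged sketch (the counts y1, n1, y2, n2 are made up for illustration):

```python
import random
import statistics

random.seed(1)

# Illustrative counts (assumed): y_i survivors out of n_i in group i
y1, n1, y2, n2 = 15, 20, 10, 20

def posterior_odds_ratio_draws(num_draws=50_000):
    """Draw from the two independent Beta posteriors and form the odds ratio."""
    draws = []
    for _ in range(num_draws):
        p1 = random.betavariate(y1 + 1, n1 - y1 + 1)
        p2 = random.betavariate(y2 + 1, n2 - y2 + 1)
        draws.append((p2 / (1 - p2)) / (p1 / (1 - p1)))
    return draws

draws = sorted(posterior_odds_ratio_draws())
median_or = statistics.median(draws)
ci_95 = (draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))])
```

The sorted draws give posterior quantiles directly, so the median and a 95% credible interval for the odds ratio fall out of the same simulation.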
223
bayesian inference
Is there an implied distribution given the below for where a transaction happens?
https://stats.stackexchange.com/questions/635804/is-there-an-implied-distribution-given-the-below-for-where-a-transaction-happens
<p>Assume you have a buyer and a seller.</p> <p>You know the buyer's probability of buying the good at different prices (i.e. something like P(B|price)), and similarly the seller's P(S|price).</p> <p>Given you know these, you know a transaction happens if both agree; in that case, for any given price, P(transaction|price) = P(B|price) x P(S|price).</p> <p>The obvious way to get P(transaction) would be to average over some P(price); however, intuitively this seems wrong. For example, I know that the buyer wouldn't buy above some max (call it P_max) and the seller wouldn't sell below some min (call it P_min), which means that a transaction at those prices should never be proposed.</p> <p>If you consider the price to be suggested by an agent (say a real estate broker), then they should maximise the chance that a transaction happens, so one &quot;dumb method&quot; would be to ask where P(transaction|price) is maximised and present that price.</p> <p>My sense is that, given P(B|price) and P(S|price) are &quot;known&quot;, there is an implicit distribution P(price|transaction) as well, from which I'd expect to be able to calculate P(transaction|price).</p> <p>Not sure what the &quot;right&quot; tags would be - feel free to edit them / add to them.</p>
224
bayesian inference
Why does continuous Bayesian analysis seem to give this contradictory result?
https://stats.stackexchange.com/questions/5453/why-does-continuous-bayesian-analysis-seem-to-give-this-contradictory-result
<p>Let's say you have a process that generates data according to r = sin(t) + epsilon, where epsilon ~ N(0,V) is Gaussian noise. The unconditional variance of r is 0.5 + V. </p> <p>Let's say we're forecasting r with a model m, and that our forecast is "perfect" in that m = sin(t). Construct v = r - m, which is the forecast error, and will be ~ N(0,V). </p> <p>According to Bayes, we then have p(r|v) ~ p(v|r)*p(r), which is the product of two Gaussian PDFs, one with variance V, the other (0.5+V). This product will itself be a Gaussian with total variance T = 1/(1/V + 1/(0.5+V)). </p> <p>The funny thing is that T &lt; V guaranteed, in fact T/V = (V+0.5)/(2V+0.5)! In other words, according to Bayes, the variance of your forecast error is less than the inherent noise in the data generation process itself!?! Isn't that impossible? Can anyone help me sort through this?</p> <p>Thank you in advance, -Jesse</p> <hr> <p>whuber, a derivation of the product of two Gaussian pdfs is from <a href="https://people.ok.ubc.ca/jbobowsk/phys327/Gaussian%20Convolution.pdf" rel="nofollow">https://people.ok.ubc.ca/jbobowsk/phys327/Gaussian%20Convolution.pdf</a>. The formula for the variance I gave, 1 / (1 / (V+0.5) + 1/V) seems to be accurate. Another way of writing this same expression is V*(V+0.5)/(V + (V + 0.5)) from which my equation for T/V follows directly - showing that T &lt; V. </p> <p>Perhaps you were citing the variance for the convolution of two Gaussian pdfs? The sum of the variances is correct for the convolution. I believe Bayes specifies the product, not the convolution, however, or do I have that wrong?</p>
<p>Actually, the conditional variances are zero: $V(v|r) = V(r|v) = 0$. $p(v|r)$ is a Dirac delta function with its peak at the right spot (where $v = r - m$).</p> <p>If you know $v$ or $r$, the other one is a deterministic function of the one you know.</p>
225
bayesian inference
Understanding posterior probability (Bayesian inference)
https://stats.stackexchange.com/questions/445096/understanding-posterior-probability-bayesian-inference
<p>I'm reading this <a href="https://statswithr.github.io/book/the-basics-of-bayesian-statistics.html" rel="nofollow noreferrer">online book</a> and there is something unclear to me in <a href="https://statswithr.github.io/book/the-basics-of-bayesian-statistics.html#tab:RU-486prior" rel="nofollow noreferrer">this table</a> where the posterior probability of each model computed as:</p> <p><span class="math-container">$$\ P(model \ | \ data) = \frac{P(data \ | \ model) \times P(model)}{P(data) } $$</span></p> <p>I understand <span class="math-container">$\ P(model) $</span> is a prior, and <span class="math-container">$\ P(data \ | model ) $</span> is just a binomial (probability of observing such data given the prior) distribution but what exactly is <span class="math-container">$\ P(data) $</span> ?</p>
<p>It's just a normalizing constant which makes the posterior a valid density. In practice, we don't care so much about it. It should be noted that </p> <p><span class="math-container">$$p(x) = \int p(x\vert \theta) p(\theta) \, d\theta$$</span></p> <p>So it is as if you are averaging the likelihood over the prior.</p>
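The "averaging the likelihood over the prior" view can be checked numerically. A hedged sketch (binomial likelihood with a uniform prior, for which the marginal is known to be exactly 1/(n+1)):

```python
import math
import random

random.seed(2)

n, k = 10, 7  # observe k heads in n flips

def binom_pmf(k, n, theta):
    return math.comb(n, k) * theta**k * (1 - theta)**(n - k)

# p(data) = integral of p(data | theta) p(theta) dtheta, estimated by
# averaging the likelihood over draws from a uniform prior on theta.
prior_draws = [random.random() for _ in range(200_000)]
p_data = sum(binom_pmf(k, n, t) for t in prior_draws) / len(prior_draws)

# For a uniform prior this integral equals 1 / (n + 1) exactly.
```

The Monte Carlo average converges to the normalizing constant 1/(n+1), illustrating that the denominator really is just the prior-weighted average of the likelihood.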
226
bayesian inference
Definition of Likelihood in Bayesian Statistics
https://stats.stackexchange.com/questions/444781/definition-of-likelihood-in-bayesian-statistics
<p>Can the likelihood be defined as the probability of the rate parameter given a range of data? Or as the probability of the data, given a range of rate parameters?</p>
<p>I think I understand your confusion. Typically, Bayes' rule is written as:</p> <p><span class="math-container">$$p(\theta |y) = \frac{ p(y|\theta) p(\theta)}{p(y)}$$</span></p> <p>where <span class="math-container">$p(\theta |y)$</span> is the posterior distribution of the unknown parameters <span class="math-container">$\theta$</span> given the observed data <span class="math-container">$y$</span>, <span class="math-container">$p(\theta)$</span> is the prior distribution, and <span class="math-container">$p(y)$</span> is the marginal distribution of <span class="math-container">$y$</span>. As far as Bayes' rule is concerned, <span class="math-container">$p(y)$</span> is a constant since it doesn't depend on the unknown parameters, so this simplifies to:</p> <p><span class="math-container">$$p(\theta |y) \propto p(y|\theta) p(\theta).$$</span></p> <p>Now some would refer to <span class="math-container">$p(y|\theta)$</span> as the likelihood function. Technically, the likelihood is a function of <span class="math-container">$\theta$</span> for fixed data <span class="math-container">$y$</span>, say <span class="math-container">$L(\theta |y)$</span>. However, the likelihood is proportional to the sampling distribution, so <span class="math-container">$L(\theta |y) \propto p(y|\theta)$</span>.</p> <p>In other words, <span class="math-container">$p(y|\theta)$</span> isn't technically the likelihood, but it is proportional to it, and as far as applying the Bayesian methodology is concerned, the distinction is not important. Hence why it is often referred to as the likelihood.</p>
227
bayesian inference
On the Bayesian setup in inference
https://stats.stackexchange.com/questions/212866/on-the-bayesian-setup-in-inference
<p>I've been trying to get into chapter 4 of Lehmann's <em>Theory of point estimation</em>, but I can't seem to understand his presentation of the Bayesian setup. He starts off with the introduction below, and after a few examples of uses of Bayesian estimators he outlines the idea (after the dots in my photo). I don't know what he means by $EL(\Theta,d)$.</p> <p>In my opinion there should be two expectations there, since we want to find $d$ to minimize (1.1); I can't see how minimizing the one above is sufficient. I've tried the "Law of total expectation" and Fubini, but nothing has been satisfactory. I have a similar problem with a theorem which comes right after the second paragraph.</p> <p><a href="https://i.sstatic.net/JZ4zV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JZ4zV.png" alt="enter image description here"></a></p>
<p>This is Fubini's theorem in action: when minimising in $\delta$ $$\mathbb{E}_{\Lambda} \{\mathbb{E}_{\theta}[L(\theta,\delta)]\}=\int_\Theta\int_\mathcal{X} L(\theta,\delta(x))\text{d}P_\theta(x)\text{d}\Lambda(\theta)=\int_\mathcal{X} \int_\Theta L(\theta,\delta(x))\text{d}\Lambda_x(\theta)\text{d}P(x)$$where $\Lambda_x$ denotes the posterior distribution of $\theta$ conditional on $x$, one minimises in $d$ for each value of $x$ the posterior expected loss $$\int_\Theta L(\theta,d)\text{d}\Lambda_x(\theta)$$and sets $$\delta(x)=\arg\min_d \int_\Theta L(\theta,d)\text{d}\Lambda_x(\theta)$$assuming all quantities are finite.</p>
228
bayesian inference
Can the Bayes factor be negative?
https://stats.stackexchange.com/questions/571438/can-the-bayes-factor-be-negative
<p>This is what I saw in a source I am referring to:</p> <p><a href="https://i.sstatic.net/uAubu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uAubu.png" alt="enter image description here" /></a></p> <p>Since both the numerator and the denominator are probabilities (so they can only take any value between 0 and 1), how can the result of division be negative?</p>
<p>Not sure what your source is, but whoever it is seems to have botched <a href="https://en.wikipedia.org/wiki/Bayes_factor#Interpretation" rel="noreferrer">Harold Jeffreys' cutoffs</a>. The items in the table <em>do</em> match his recommendations, but with two problems. The first is that the cutoffs are intended to be for <span class="math-container">$\log \text{BF}$</span> rather than for the Bayes factor itself (<strong>EDIT:</strong> slightly misspoke, the unit is &quot;decihartleys&quot; which corresponds to <span class="math-container">$10 \log_{10}(\text{BF})$</span>). The second is that, probably because the person who put the table together didn't realize the scale was supposed to be <span class="math-container">$\log$</span>, they &quot;corrected&quot; the first row to be <span class="math-container">$&lt; 1$</span> instead of <span class="math-container">$&lt; 0$</span> because they assumed <span class="math-container">$&lt; 0$</span> must be a typo (since Bayes factors can't be negative).</p>
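The decihartley scale mentioned in the answer is a one-line transform, which also makes it plain why negative values appear on Jeffreys' scale even though Bayes factors are nonnegative:

```python
import math

def decihartleys(bayes_factor):
    """Jeffreys' evidence scale: 10 * log10(BF), in decihartleys (decibans)."""
    return 10 * math.log10(bayes_factor)

# A Bayes factor below 1 gives a negative value on this scale (evidence
# pointing the other way), which is likely how "< 0" got miscopied as "< 1".
```

So a cutoff of "< 0" on the log scale corresponds to BF < 1, not to a negative Bayes factor.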
229
bayesian inference
The role of variance of the distribution plays in Bayesian inference
https://stats.stackexchange.com/questions/626327/the-role-of-variance-of-the-distribution-plays-in-bayesian-inference
<p>Given prior <span class="math-container">$ \mu \sim \mathcal{N}(\mu_0, \tau^2) $</span>, likelihood <span class="math-container">$ X_i | \mu \sim \mathcal{N}(\mu, \sigma^2) $</span>, we know the closed-form solution of posterior is <span class="math-container">$ \mu | X_1, X_2, \ldots, X_n \sim \mathcal{N}(\mu_n, \tau_n^2) $</span> with <span class="math-container">$ \frac{1}{\tau_n^2} = \frac{1}{\tau^2} + \frac{n}{\sigma^2} $</span> and <span class="math-container">$ \mu_n = \frac{\frac{1}{\tau^2} \mu_0 + \frac{n}{\sigma^2} \bar{X}}{\frac{1}{\tau^2} + \frac{n}{\sigma^2}} $</span>. It can be observed that given the same number of observations <span class="math-container">$n$</span>,the larger the <span class="math-container">$\sigma$</span> is, the larger the variance in the posterior is. I find it hard to get an intuition of this, because I feel fitting the same number of observations (i.e., MLE) should have the same level of certainty regardless of how spread out the observations are. In other words, I feel it is easy to understand as the number of observations gets large, the uncertainty in posterior diminishes, but the uncertainty should be independent of the sample variance (obviously I am wrong but I want to have the intuition why that is the case).</p>
<p>Think about a simpler problem: estimating the mean <span class="math-container">$\mu$</span> in a frequentist setting.</p> <p>It is sensible to use the MLE, <span class="math-container">$\bar X$</span>, as our estimator. How accurate is this on average? We know that <span class="math-container">$\bar X \sim \mathcal{N}(\mu, \sigma^2/n)$</span> so the mean squared error is <span class="math-container">$\sigma^2/n$</span>, i.e proportional to the population variance.</p> <p>This is quite a fundamental point. It is easier to estimate a population mean when the population variance is low, because (on average) your observations will be closer to this mean. For example, if the effect of a medical intervention is very similar for all patients (low <span class="math-container">$\sigma$</span>), you don't need a large sample to estimate the mean effect accurately.</p>
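The claim that the sample mean's error scales with the population variance is easy to verify by simulation. A minimal sketch (all values assumed for illustration):

```python
import random
import statistics

random.seed(3)

def mse_of_sample_mean(mu, sigma, n, reps=20_000):
    """Average squared error of the sample mean over repeated samples."""
    errs = []
    for _ in range(reps):
        xbar = statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
        errs.append((xbar - mu) ** 2)
    return statistics.fmean(errs)

# Theory: MSE = sigma^2 / n, so quadrupling sigma should roughly 16x the MSE.
mse_low = mse_of_sample_mean(0.0, 1.0, n=25)
mse_high = mse_of_sample_mean(0.0, 4.0, n=25)
```

With n fixed, the only thing that changed between the two runs is sigma, and the estimated MSE grows with it accordingly.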
230
bayesian inference
Why is the posterior distribution in Bayesian Inference often intractable?
https://stats.stackexchange.com/questions/208176/why-is-the-posterior-distribution-in-bayesian-inference-often-intractable
<p>I have a problem understanding why Bayesian Inference leads to intractable problems. The problem is often explained like this:</p> <p><a href="https://i.sstatic.net/fIwYu.png" rel="noreferrer"><img src="https://i.sstatic.net/fIwYu.png" alt="enter image description here"></a></p> <p>What I don't understand is why this integral has to be evaluated in the first place: It seems to me that the result of the integral is simply a normalization constant (as the dataset D is given). Why can one not simply calculate the posterior distribution as the numerator of the right-hand side and then infer this normalization constant by requiring that the integral over the posterior distribution has to be 1?</p> <p>What am I missing?</p> <p>Thanks!</p>
<blockquote> <p>Why can one not simply calculate the posterior distribution as the numerator of the right-hand side and then infer this normalization constant by requiring that the integral over the posterior distribution has to be 1?</p> </blockquote> <p>This is precisely what is being done. The posterior distribution is $$P(\theta|D) = \dfrac{p(D|\theta) \, P(\theta)}{P(D)}. $$</p> <p>The numerator on the right hand side is $P(D|\theta)P(\theta)$. This is a function over $\theta$ and to be a probability distribution, it has to integrate to 1. Thus we need to find the constant $c$, such that </p> <p>\begin{align*} &amp;\int_{\theta} cP(D|\theta) \, P(\theta)\, d\theta = 1\\ \Rightarrow &amp; \int_{\theta} cP(D, \theta) \, d\theta = 1\\ \Rightarrow &amp; cP(D) = 1\\ \Rightarrow&amp; c = \dfrac{1}{P(D)}. \end{align*}</p> <p>Thus, the normalizing constant is $P(D)$ which is often intractable, or overtly complicated.</p>
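In low dimensions the "just normalize" strategy works perfectly well, which helps locate where the intractability actually lives: computing the normalizer is easy on a 1-D grid but becomes the hard integral in high dimensions. A discretised sketch (grid, prior, and data assumed for illustration):

```python
# One-parameter example on a grid: normalising the numerator
# p(D | theta) p(theta) recovers the posterior exactly, because summing
# the numerator IS computing P(D). The trouble starts when theta is
# high-dimensional and this sum/integral cannot be enumerated.
thetas = [i / 100 for i in range(1, 100)]
prior = 1 / len(thetas)                      # flat prior on the grid
lik = [t**7 * (1 - t)**3 for t in thetas]    # 7 heads in 10 flips
numer = [l * prior for l in lik]
p_D = sum(numer)                             # the normalizing constant P(D)
posterior = [x / p_D for x in numer]
```

Here `p_D` is trivial to compute; in a model with many parameters the analogous sum has exponentially many terms, which is exactly the intractable denominator.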
231
bayesian inference
At the end of the day, what do you do with Bayesian Estimates?
https://stats.stackexchange.com/questions/547923/at-the-end-of-the-day-what-do-you-do-with-bayesian-estimates
<p>I have often heard that in certain instances, it can be more beneficial to use Bayesian based methods because they provide &quot;a distribution of possible answers&quot; (i.e. the posterior distribution) instead of a single answer (as done in the frequentist case). However, it seems that at the end of the day, the analyst is still required to transform this &quot;distribution of possible answers&quot; into a single answer.</p> <p>For example : if a Bayesian model is used to estimate the posterior distribution of &quot;mu&quot;, the analyst is still required to either take the MAP or the Expectation of this distribution to return a final answer.</p> <p>Is this the main benefit of Bayesian models? If the priors are correctly specified, the credible intervals associated with the expectation of the posterior distribution (of the parameter of interest) are more reliable?</p>
<p>First of all, frequentist methods also provide a distribution over possible answers. It is just that we do not call them distributions because of a philosophical point. Frequentists consider the parameters of a distribution to be fixed quantities. A parameter is not allowed to be random; therefore, you cannot talk about distributions over parameters in a meaningful way. In frequentist methods we estimate confidence intervals, which can be thought of as distributions if we let go of the philosophical details. But in Bayesian methods the parameters are allowed to be random; therefore, we talk about (prior and posterior) distributions over the parameters.</p> <p>Second, it is not always the case that only a single value is used at the end. Many applications require us to use the entire posterior distribution in subsequent analysis. In fact, to derive a suitable point estimate, the full distribution is required. A well-known example is risk minimization. Another example is model identification in the natural sciences in the presence of significant uncertainties.</p> <p>Third, Bayesian inference has many benefits over a frequentist analysis (not just the one that you mention):</p> <ol> <li><p>Ease of interpretation: It is hard to understand what a confidence interval is and why it is not a probability distribution. The reason is simply a philosophical one, as I have explained briefly above. The probability distributions in Bayesian inference are easier to understand because that is how we typically tend to think in uncertain situations.</p> </li> <li><p>Ease of implementation: It is easier to get Bayesian probability distributions than frequentist confidence intervals. 
Frequentist analysis requires us to identify a sampling distribution, which is very difficult for many real-world applications.</p> </li> <li><p>Assumptions of the model are explicit in Bayesian inference: For example, many frequentist analyses assume asymptotic Normality for computing the confidence interval. But no such assumptions are required for Bayesian inference. Moreover, the assumptions made in Bayesian inference are more explicit.</p> </li> <li><p>Prior information: Most importantly, Bayesian inference allows us to incorporate prior knowledge into the analyses in a relatively simple manner. In frequentist methods, regularization is used to incorporate prior information, which is very difficult to do in many problems. It is not to say that incorporation of prior information is easy in Bayesian analysis; but it is easier than in frequentist analysis.</p> </li> </ol> <p>Edit: A particularly good example of ease-of-interpretation of Bayesian methods is their use in probabilistic machine learning (ML). There are several methods developed in the ML literature with the backdrop of Bayesian ideas. For example, relevance vector machines (RVMs) and Gaussian processes (GPs).</p> <p>As Richard Hardy pointed out, this answer gives the reasons why someone would want to use Bayesian analysis. There are good reasons to use frequentist analysis also. In general, frequentist methods are computationally more efficient. 
I would suggest reading the first 3-4 chapters of 'Statistical Decision Theory and Bayesian Analysis' by James Berger, which gives a balanced view on this issue but with an emphasis on Bayesian practice.</p> <p><strong>To elaborate on the use of the entire distribution rather than a point estimate to make a decision in risk minimization, a simple example follows.</strong> Suppose you have to choose between different parameters of a process to make a decision, and the cost of choosing wrong parameters is <span class="math-container">$L(\hat{\theta},\theta)$</span> where <span class="math-container">$\hat{\theta}$</span> is the parameter estimate and <span class="math-container">$\theta$</span> is the true parameter. Now given the posterior distribution <span class="math-container">$p(\theta|D)$</span> (where <span class="math-container">$D$</span> denotes observations) we can minimize the expected loss, which is <span class="math-container">$\int L(\hat{\theta},\theta)p(\theta|D)d\theta$</span>. This expected loss can be computed for every value of <span class="math-container">$\hat{\theta}$</span>, and the <span class="math-container">$\hat{\theta}$</span> with minimum expected loss can be used for decision making. This will result in a point estimate; but the value of the point estimate depends upon the loss function.</p> <p><strong>Based on a comment by Alexis, here is why frequentist confidence intervals are harder to interpret.</strong> Confidence intervals are (as Alexis has pointed out): <em>A plausible range of estimates for a parameter given a Type I error rate</em>. One naturally asks where this plausible range comes from. The frequentist answer is that it comes from the sampling distribution. But then the question arises: we only observe one sample. The frequentist answer is that we infer what other samples could have been observed, based on the likelihood function. 
But if we are inferring other samples based on the likelihood function, those samples should have a probability distribution over them, and, consequently, the confidence interval should be interpretable as a probability distribution. But for the philosophical reason mentioned above, this last extension from probability distribution to confidence interval is not allowed. Compare this to a Bayesian statement: <em>A 95% credible region means that the true parameter lies in this region with 95% probability.</em></p> <p><strong>A side note on the philosophical difference between Bayesian and frequentist theory (based on a comment):</strong> In frequentist theory, the probability of an event is its relative frequency over a large number of repeated trials of the experiment in question. Therefore, the parameters of a distribution are fixed because they stay the same in all repetitions of the experiment. In Bayesian theory, probabilities are degrees of belief that an event will occur in a single trial of the experiment in question. The problem with the frequentist definition of probability is that it cannot be used to define probabilities in many real-world applications. As an example, try to define the probability that I am typing this answer on an Android smartphone. A frequentist would say that the probability is either <span class="math-container">$0$</span> or <span class="math-container">$1$</span>, while the Bayesian definition allows you to choose an appropriate number between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>.</p>
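The risk-minimization point, that the optimal point estimate depends on the loss function and therefore needs the whole posterior, can be sketched with simulated posterior draws (the Gamma-shaped posterior is assumed purely for illustration):

```python
import random
import statistics

random.seed(4)

# Draws from an illustrative right-skewed posterior (assumed Gamma(2, 1)).
draws = [random.gammavariate(2.0, 1.0) for _ in range(100_000)]

# The optimal point estimate depends on the loss function:
#   squared-error loss  -> posterior mean
#   absolute-error loss -> posterior median
est_squared = statistics.fmean(draws)
est_absolute = statistics.median(draws)
# For a right-skewed posterior these disagree: the mean exceeds the median,
# so no single number can stand in for the full distribution.
```

Both summaries come from the same set of posterior draws, which is why the full distribution, not any one point estimate, is the primary object.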
232
bayesian inference
Question about probability vs inferential statistics
https://stats.stackexchange.com/questions/477324/question-about-probability-vs-inferential-statistics
<p>I'm currently struggling with a question involving probability and statistics.</p> <p>I have this dataset of sales, and I was trying to estimate sales probabilities based on that dataset and the weeks, months, and years of data it provides. I started using Bayes' theorem to do it, and after a conversation with a data scientist, it was suggested that I apply concepts like prior and posterior distributions to my model. The thing is, those concepts are way ahead of my statistical knowledge right now, so I would have to take time and study them so that I could move on with my project.</p> <p>But I thought to myself: since I've got a whole dataset containing all the information I'm going to need (the population information), there's no need to try and use such statistical concepts. I could just use the dataset observations to create the probability, since everything I could try to predict is already there, right? In my head, those concepts would only need to be used if I was working with samples.</p> <p>Can someone clarify this for me?</p>
233
bayesian inference
How to compute the variance for a bayesian estimator
https://stats.stackexchange.com/questions/481595/how-to-compute-the-variance-for-a-bayesian-estimator
<p>I can't figure out how to compute the variance of an estimator which is the mean of the posterior distribution, let's say Gamma(<span class="math-container">$\sum x_i+3, n+a$</span>). How do I find the variance of this mean?</p>
<p>In the Bayesian paradigm, distributions of interest are uncertainty distributions of unknown parameters. So if you have a posterior distribution <span class="math-container">$f(\theta)$</span> for parameter <span class="math-container">$\theta$</span> you can get an uncertainty (credible) interval for <span class="math-container">$\theta$</span>. This interval is &quot;summary measure agnostic&quot; since it does not refer to the use of a point estimate summary for <span class="math-container">$f$</span> such as the posterior median, mean, or mode. The posterior mean is a convenient posterior distribution point estimate but doesn't play a central role unless you are doing a formal loss function-based analysis and your loss function is the squared error.</p> <p>The bottom line: concern yourself with uncertainty about the primary parameter of interest: <span class="math-container">$\theta$</span>, not with some convenient point summary of an entire posterior distribution.</p>
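For completeness, if by "variance" one means the posterior variance of the parameter itself (rather than the frequentist variance of an estimator), it follows directly from standard Gamma facts. A minimal sketch, assuming the shape/rate parameterization and illustrative values for the data counts:

```python
# For a Gamma(alpha, beta) posterior (shape alpha, *rate* beta):
#   posterior mean     = alpha / beta
#   posterior variance = alpha / beta**2
def gamma_posterior_summary(alpha, beta):
    return alpha / beta, alpha / beta**2

# e.g. posterior Gamma(sum(x) + 3, n + a) with sum(x) = 47, n = 10, a = 2
# (these counts are made up for illustration)
post_mean, post_var = gamma_posterior_summary(47 + 3, 10 + 2)
```

Note this is the spread of the posterior for the parameter, which is the uncertainty quantity the answer recommends focusing on.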
234
bayesian inference
What are the factors that cause the posterior distributions to be intractable?
https://stats.stackexchange.com/questions/4417/what-are-the-factors-that-cause-the-posterior-distributions-to-be-intractable
<p>In Bayesian statistics, it is often mentioned that the posterior distribution is intractable and thus approximate inference must be applied. What are the factors that cause this intractability? </p>
<p>I had the opportunity to ask <a href="https://scholar.google.com/citations?user=8OYE6iEAAAAJ&amp;hl=en&amp;oi=ao" rel="noreferrer">David Blei</a> this question in person, and he told me that <em>intractability</em> in this context means one of two things:</p> <ol> <li><p>The integral has no closed-form solution. This might be when we're modeling some complex, real-world data and we simply cannot write the distribution down on paper.</p></li> <li><p>The integral is computationally intractable. He recommended that I sit down with a pen and paper and actually work out the marginal evidence for the Bayesian mixture of Gaussians. You'll see that it is computationally intractable, i.e. exponential. <a href="https://arxiv.org/pdf/1601.00670.pdf" rel="noreferrer">He gives a nice example of this in a recent paper</a> (See <strong>2.1 The problem of approximate inference</strong>).</p></li> </ol> <p>FWIW, I find this word choice confusing, since (1) it is overloaded in meaning and (2) it is already used widely in CS to refer only to computational intractability.</p>
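Blei's second point (computational intractability) can be made concrete by counting: the marginal evidence for a mixture model sums over every possible assignment of the n data points to the K components, which is K to the power n terms.

```python
# Number of terms in the exact marginal evidence of a K-component
# mixture over n data points: one per assignment of points to components.
def mixture_evidence_terms(K, n):
    return K ** n

mixture_evidence_terms(3, 50)  # 3**50, roughly 7e23 terms: hopeless to enumerate
```

Even for a modest dataset the exact sum is astronomically large, which is why approximate inference (MCMC, variational methods) is used instead.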
235
bayesian inference
We flip a coin 20 times and observe 12 heads. What is the probability that the coin is fair?
https://stats.stackexchange.com/questions/442512/we-flip-a-coin-20-times-and-observe-12-heads-what-is-the-probability-that-the-c
<p><a href="https://i.sstatic.net/ZbJUs.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZbJUs.jpg" alt="enter image description here"></a><a href="https://i.sstatic.net/k8NUI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k8NUI.jpg" alt="enter image description here"></a>im having some trouble getting around this. A little explanation would be really helpful. </p>
<p>You appear to be using a Beta(1,1) prior on <span class="math-container">$\theta$</span>. Since this is a continuous distribution, the prior (and posterior) probability of the event that the coin is exactly fair, <span class="math-container">$\theta=1/2$</span>, is zero.</p> <p>What would perhaps be a more sensible prior (see <a href="https://www.jstor.org/stable/2333251" rel="nofollow noreferrer">Lindley 1957 pp. 188-189</a> for a discussion of similar examples) would be a point mass at <span class="math-container">$\theta=1/2$</span> given the event <span class="math-container">$H_0$</span> that the coin is fair and <span class="math-container">$\theta\sim \mbox{Beta}(\alpha,\beta)$</span> given an unfair coin (the event <span class="math-container">$H_1$</span>) and some prior probabilities <span class="math-container">$q$</span> and <span class="math-container">$1-q$</span> that <span class="math-container">$H_0$</span> and <span class="math-container">$H_1$</span> are true respectively.</p> <p>The probabilities of observing <span class="math-container">$X=x$</span> heads out of <span class="math-container">$n$</span> coin flips under each hypothesis would then be, <span class="math-container">\begin{align} P(X=x|H_1)&amp;=\int_0^1 P(X=x|\theta,H_1)f_{\theta|H_1}(\theta)d\theta \\&amp;=\frac{n!}{x!(n-x)!B(\alpha,\beta)}\int_0^1 \theta^{x+\alpha-1}(1-\theta)^{n-x+\beta-1}d\theta \\&amp;=\frac{n!B(x+\alpha,n-x+\beta)}{x!(n-x)!B(\alpha,\beta)}, \end{align}</span> and <span class="math-container">$$ P(X=x|H_0)=\frac{n!}{x!(n-x)!2^n}. 
$$</span> Using Bayes' theorem, the posterior probability of <span class="math-container">$H_0$</span> would be <span class="math-container">\begin{align} P(H_0|X=x) &amp;=\frac{P(X=x|H_0)P(H_0)}{P(X=x|H_0)P(H_0)+P(X=x|H_1)P(H_1)} \\&amp;=\frac{q}{q + 2^n(1-q)B(x+\alpha,n-x+\beta)/B(\alpha,\beta)} \end{align}</span> instead of zero.</p> <p>The Figure below shows typical realisations of this posterior probability for increasing sample sizes <span class="math-container">$n$</span> for a Beta(1,1) prior and <span class="math-container">$q=0.5$</span>. For a truly fair coin (<span class="math-container">$\theta=1/2$</span>, blue curve), the posterior probability of <span class="math-container">$H_0$</span> tends to 1 as expected. If the coin is slightly unfair (<span class="math-container">$\theta=0.55$</span>, red curve) the hypothesis that the coin is fair appears more probable initially until the evidence against <span class="math-container">$H_0$</span> eventually becomes overwhelming.</p> <p><a href="https://i.sstatic.net/xdCvN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xdCvN.png" alt="enter image description here" /></a></p>
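A quick numerical check of the formula above for the question's data, using a Beta(1, 1) prior and prior odds q = 1/2 (both assumed, as in the answer). With a uniform prior, the marginal P(X=x|H1) simplifies to 1/(n+1):

```python
from fractions import Fraction
import math

def prob_fair(x, n, q=Fraction(1, 2)):
    # P(X = x | H0): exactly fair coin, theta = 1/2
    p_h0 = Fraction(math.comb(n, x), 2**n)
    # P(X = x | H1): theta ~ Beta(1, 1), so the marginal of x is uniform
    p_h1 = Fraction(1, n + 1)
    return q * p_h0 / (q * p_h0 + (1 - q) * p_h1)

p = prob_fair(12, 20)  # 12 heads in 20 flips is quite consistent with fairness
```

Using exact `Fraction` arithmetic avoids any floating-point worries; the result is roughly 0.716, i.e. the data mildly favor the fair-coin hypothesis under these priors.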
236
bayesian inference
Parameter covariance in Bayesian regression of time series
https://stats.stackexchange.com/questions/660688/parameter-covariance-in-bayesian-regression-of-time-series
<p>My problem is thus: given set of time series data <span class="math-container">$D = \{t_m, x_m\}$</span> where <span class="math-container">$m$</span> is a label, <span class="math-container">$m=1,2,3,...,n$</span> I have a model <span class="math-container">$f(t,\mathbf{w})$</span> which generates an equivalent time series given a set of a parameters. I can write <span class="math-container">$x_m = f_i(t_m,\mathbf{w}) + \varepsilon_m$</span> where <span class="math-container">$\varepsilon_m \sim{N}(0,\sigma^2)$</span> is some random noise with unknown <span class="math-container">$\sigma$</span>. Here, <span class="math-container">$f_i(t_m,\mathbf{w})$</span> is a non-linear function of the parameters <span class="math-container">$\mathbf{w}\in\mathbb{R}^k$</span>. Specifically, it is the solution to a system of differential equations with equation parameters <span class="math-container">$\mathbf{w}$</span>. A particular model <span class="math-container">$f_i$</span> is taken from the set of plausible models <span class="math-container">${F}$</span>. Note that I can evaluate <span class="math-container">$f$</span>, <span class="math-container">$\nabla_\mathbf{w} f$</span> and <span class="math-container">$\nabla_\mathbf{w}\nabla_\mathbf{w} f$</span> for any set of <span class="math-container">$t$</span> and parameters <span class="math-container">$\mathbf{w}$</span> (I assume fixed initial conditions of the ODEs).</p> <p>My approach has been to define an <span class="math-container">$L_2$</span> loss function, <span class="math-container">$Q(t_m,x_m,\mathbf{w}) = \sum_m [x_m - f(t_m,\mathbf{w})]^2$</span>, for which I can also evaluate the gradients <span class="math-container">$\nabla_\mathbf{w} Q$</span> and <span class="math-container">$\nabla_\mathbf{w}\nabla_\mathbf{w} Q$</span>. I then use gradient descent to find a set of <span class="math-container">$\mathbf{\hat{w}}$</span> to minimize <span class="math-container">$Q$</span>.</p> <p>I want to 1.) 
evaluate the uncertainty of my predictions <span class="math-container">$\mathbf{\hat{w}}$</span> and 2.) compute the evidence of the model for model comparison testing. In the first case, it seems I need to take <span class="math-container">$-\nabla_\mathbf{w}\nabla_\mathbf{w}\log P(\mathbf{w}|D,\sigma,f_i)$</span> which requires an estimate of <span class="math-container">$\sigma$</span>. The second problem seems to require evaluating the likelihood function <span class="math-container">$L = P(D|\mathbf{w},\sigma,f)$</span> at <span class="math-container">$\mathbf{\hat{w}}$</span>, from which I could do model comparison using BIC or estimate the evidence directly using the likelihood and MacKay's Occam factor (assuming a uniform prior on <span class="math-container">$\mathbf{w}$</span>).</p> <p>Actually finding these probability densities has proven challenging, as most references make assumptions on the form of <span class="math-container">$f$</span> or use conjugate priors which assume that <span class="math-container">$\mathbf{w}$</span> are drawn from a normal distribution of mean zero (which is definitely not the case in this problem).</p> <p>My current thought process is to take an unbiased estimate of <span class="math-container">$\sigma^2$</span> as <span class="math-container">$\hat{\sigma}^2 = \frac{1}{n-k}\sum_m [x_m - f(t_m,\mathbf{w})]^2$</span>. From here, I take <span class="math-container">$$P(D|\mathbf{w},\hat{\sigma},f) = \frac{1}{\left(\sqrt{2\pi}\hat{\sigma}\right)^n}\exp[-Q/(2\hat{\sigma}^2)]$$</span> which allows me to compute the likelihood and thus the BIC. At this point, I can assume a uniform prior <span class="math-container">$P(\mathbf{w}|f_i)$</span> to let me write <span class="math-container">$-\nabla_\mathbf{w}\nabla_\mathbf{w}\log P(\mathbf{w}|D,\hat{\sigma},f_i)\approx -\nabla_\mathbf{w}\nabla_\mathbf{w}\log P(D|\mathbf{w},\hat{\sigma},f_i)$</span> and thereby get the error bars on my parameters.
Note that this may be a bad assumption, as some of the parameters <span class="math-container">$w_j$</span> are constrained such that <span class="math-container">$w_j\geq0$</span> or <span class="math-container">$w_j \in [0,1]$</span>.</p> <p>I would appreciate any advice as to whether this is the correct approach for applying Bayesian methods to this kind of inference problem, as well as any assumptions I may have missed.</p>
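For concreteness, here is a minimal numerical sketch of the "current thought process" described above. It uses a hypothetical closed-form model `f(t, w) = w0 * exp(-w1 * t)` standing in for the ODE solution, a finite-difference Hessian, and `sigma2_hat` re-estimated at `w_hat` (so this is a profile-likelihood approximation, not a definitive implementation):

```python
import numpy as np

def f(t, w):
    # Hypothetical stand-in for the ODE solution f(t, w)
    return w[0] * np.exp(-w[1] * t)

def gauss_loglik(w, t, x):
    n, k = len(x), len(w)
    resid = x - f(t, w)
    Q = np.sum(resid**2)
    sigma2 = Q / (n - k)               # unbiased estimate of sigma^2
    return -0.5 * n * np.log(2 * np.pi * sigma2) - Q / (2 * sigma2), sigma2

def bic(loglik, k, n):
    return k * np.log(n) - 2 * loglik

def hessian_fd(fun, w, eps=1e-4):
    # Central finite-difference Hessian of a scalar function
    k = len(w)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            wpp = w.copy(); wpp[i] += eps; wpp[j] += eps
            wpm = w.copy(); wpm[i] += eps; wpm[j] -= eps
            wmp = w.copy(); wmp[i] -= eps; wmp[j] += eps
            wmm = w.copy(); wmm[i] -= eps; wmm[j] -= eps
            H[i, j] = (fun(wpp) - fun(wpm) - fun(wmp) + fun(wmm)) / (4 * eps**2)
    return H

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 200)
w_true = np.array([2.0, 0.7])
x = f(t, w_true) + 0.05 * rng.standard_normal(t.size)

# Pretend gradient descent returned something near the truth
w_hat = w_true
ll, sigma2_hat = gauss_loglik(w_hat, t, x)
bic_val = bic(ll, len(w_hat) + 1, len(x))   # counting sigma as a parameter

# Approximate posterior covariance (flat prior): inverse Hessian of -log L
negll = lambda w: -gauss_loglik(w, t, x)[0]
cov = np.linalg.inv(hessian_fd(negll, w_hat))
print(bic_val, np.sqrt(np.diag(cov)))
```

The square roots of the diagonal of `cov` give the error bars described in the question; an analytic Hessian would replace `hessian_fd` when the gradients of `f` are available.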
237
bayesian inference
Can you use the beta-binomial distribution instead of MCMC?
https://stats.stackexchange.com/questions/653748/can-you-use-the-beta-binomial-distribution-instead-of-mcmc
<p>So, I have a project to test the hypothesis that a marketing campaign with new art generates more purchases than the old one. I have 2 samples of data, one using the standard ad and one using the new ad. So we have the total number of impressions and the number of purchases. We can estimate this as a <span class="math-container">$\text{Binomial}(\text{impressions}, \text{purchases} / \text{impressions})$</span>.</p> <p>So, if we say that: <span class="math-container">$$ \begin{align} &amp;X_{c} \sim \text{Binomial}(I_{c},\theta_{c})\\ &amp;X_{t} \sim \text{Binomial}(I_{t},\theta_{t}) \end{align} $$</span> and we use a <span class="math-container">$\theta \sim \text{Beta}(1,1)$</span> as a prior we can get the respective posterior for the test and the control. So, my question is, can I use the beta-binomial distribution as an analytic way to do the hypothesis test? Or do I have to use a Markov chain Monte Carlo (MCMC) simulation and do it the usual way? My reasoning was that, given the data, we can calculate the probability of the number of purchases <span class="math-container">$\tilde{x}$</span> as <span class="math-container">$$ P(\tilde{x}=x \ |X_{t},\theta'_{t})\sim \text{Beta-Bin}(I_t,1+x_t,1+I_t-x_t) $$</span> with <span class="math-container">$\theta'_t$</span> being the posterior of <span class="math-container">$f(X_t \ | \ \theta_t)$</span> and <span class="math-container">$x_t$</span> the number of purchases that the test advertisement had. So I could use this to calculate the Bayes factor <span class="math-container">$$ BF_{10}=\frac{P(\tilde{x}\geq x_t \ | X_t ,\theta_t')}{P(\tilde{x}\geq x_t \ | X_c ,\theta_c')} $$</span> instead of using the usual MCMC simulation and comparing the results.</p> <p>Obs: I had this posted on the Math Stack Exchange but after being told about this one I felt it would be a better fit.</p>
<p>Yes - if the model is simple enough to calculate the posterior analytically, there is no reason you can't do that instead of sampling the posterior. In practice this is rarely the case, hence MCMC, but when an analytical solution exists it is exact, whereas MCMC only approximates it.</p>
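As an illustration of the analytic route (with made-up impression/purchase counts), the posterior probability that the test conversion rate exceeds the control one can be computed by one-dimensional quadrature over the two Beta posteriors, with no sampling at all:

```python
from scipy import stats, integrate

# Hypothetical counts: impressions I, purchases x, with a Beta(1,1) prior
I_c, x_c = 1000, 50   # control
I_t, x_t = 1000, 65   # test

post_c = stats.beta(1 + x_c, 1 + I_c - x_c)
post_t = stats.beta(1 + x_t, 1 + I_t - x_t)

# P(theta_t > theta_c) = integral of P(theta_c < u) * p_t(u) du over [0, 1]
prob, _ = integrate.quad(lambda u: post_c.cdf(u) * post_t.pdf(u), 0, 1)
print(prob)
```

The same integral evaluated under an MCMC workflow would just be the fraction of posterior draws with `theta_t > theta_c`; here the quadrature gives it directly.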
238
bayesian inference
How to find the support of the posterior distribution to apply Metropolis-Hastings MCMC algorithm?
https://stats.stackexchange.com/questions/74330/how-to-find-the-support-of-the-posterior-distribution-to-apply-metropolis-hastin
<p>I am trying to sample from a posterior distribution using an MCMC algorithm with the Metropolis-Hastings sampler.</p> <p>How should I deal with the situations where I'm stuck in regions of the posterior with zero probability?</p> <p>These regions are present because the posterior distribution is truncated and also because, due to numerical limitations on the computer, the likelihood can become zero if you are very far from the mean. That is, say the likelihood is distributed normally; if you are 100 standard deviations away from the mean you get what appears as zero probability to the computer.</p> <p>What I want to know is how to choose the initial value of the chain in order to be sure that it is contained in the support of the posterior.</p>
<p>This is an implementation problem since, theoretically, MCMC has no problem with truncated distributions.</p> <p>Let $D$ be the support of your posterior. Just define your log-posterior as $-\infty$ if $\theta\not\in D$ and choose a suitable value on $D$ as an initial point.</p> <p>For example, suppose that $x_1,\dots,x_n \stackrel{ind.}{\sim} \exp(\lambda)$ and that $\lambda\sim Unif(0,10)$. The following code shows how to implement a Metropolis-Hastings sampler for this model using the package 'mcmc'. </p> <pre><code>library(mcmc)
set.seed(123)
x = rexp(100, 1)  # the theoretical value of lambda is 1

# log-posterior
logpost = function(lambda){
  if(lambda &gt; 0 &amp; lambda &lt; 10) return(sum(dexp(x, lambda, log = TRUE)))
  else return(-Inf)
}

# Metropolis-Hastings
NS = 55000
out &lt;- metrop(logpost, scale = .5, initial = 1, nbatch = NS)
out$accept
lambdap = out$batch[ , 1][seq(5000, NS, 25)]  # posterior draws after burn-in and thinning
hist(lambdap)
</code></pre>
239
bayesian inference
Approximating distributions in expectation propagation
https://stats.stackexchange.com/questions/82348/approximating-distributions-in-expectation-propagation
<p>Can the approximating distributions for various factors in expectation propagation be different distributions but still from the exponential family? For example, I have the following posterior form:</p> <p>$$ p(w, \lambda, \phi) = p(\phi)p(\lambda)p(w|\lambda) \prod_{i}p(y_{i}|w, \phi, \lambda) $$</p> <p>So, I need to approximate each of these factors. My question is: can, for example, $p(\phi)$ be approximated as a gamma distribution, $p(w|\lambda)$ as a multivariate Gaussian, and $p(\lambda)$ as something else from the exponential family?</p>
<p>Yes, you can use different exponential families to approximate the marginal for different variables. You only need all messages into a variable to have the same type, so that they can be multiplied together to get the marginal. In your example, $p(w|\lambda)$ (call it factor $a$) can be approximated by the two messages $m_{a \rightarrow w}(w) m_{a \rightarrow \lambda}(\lambda)$ while $p(\lambda)$ (call it factor $b$) is approximated by the single message $m_{b \rightarrow \lambda}(\lambda)$. You need $m_{a \rightarrow \lambda}(\lambda)$ to have the same type as $m_{b \rightarrow \lambda}(\lambda)$ so that they can be multiplied together to get the approximate marginal for $\lambda$.</p>
240
bayesian inference
Global search operators for approximate MAP inference?
https://stats.stackexchange.com/questions/82946/global-search-operators-for-approximate-map-inference
<p>In complicated Bayesian models, such as hierarchical nonparametric ones, it is often intractable to run Gibbs or other MCMC sampling methods to convergence. Rather, people tend to do variational inference and use expectation maximization to find the approximate MAP parameters.</p> <p>Is there a reason people use a local search algorithm like EM rather than a global search algorithm like CMA-ES? It seems like the latter would require much less effort, since you don't need to derive the E and M steps.</p>
<p>Gibbs and other MCMC methods are sampling methods; EM methods are optimisation methods. These are two very different things: the first sample from the posterior distribution, the other finds a maximum/minimum.</p> <p>EM algorithms are used whenever you can conduct the E and M steps relatively easily. Otherwise, you usually go for some other sort of optimisation method, such as the one you mention.</p>
241
bayesian inference
Expectation propagation for feature selection
https://stats.stackexchange.com/questions/121757/expectation-propagation-for-feature-selection
<p>I'm using the expectation propagation algorithm (<code>infer.net</code> library) for my feature selection problem. </p> <p>I generate input data and test my model. The thing is that when I use different numbers of data points, I get very different results. </p> <p>For example, in my current setting it really works well with 50 data points. However, with more than 60 or fewer than 40 the results are dramatically poorer. </p> <p>Can anyone explain this to me? Is it something related to expectation propagation? Or is it because of the way it may be implemented in <code>infer.net</code>?</p> <p>Any help is appreciated. </p>
<ol> <li>Maybe it hasn't converged.</li> <li>Try initializing near the true answer.</li> <li>Try a different inference algorithm.</li> </ol>
242
bayesian inference
Points to keep in mind while implementing a nonparametric bayesian inference procedure from scratch
https://stats.stackexchange.com/questions/61580/points-to-keep-in-mind-while-implementing-a-nonparametric-bayesian-inference-pro
<p>I have implemented a Bayesian inference procedure from scratch for a specific problem, but it doesn't seem to work. </p> <p>Since I can't just post the code online and ask the community to debug it, I was wondering if someone could provide a broader checklist for coding up a Bayesian inference procedure (regardless of language). </p> <p><strong>EDIT: Specifics of the problem</strong></p> <p>I am trying to implement the procedure described in Section 5 of <a href="http://uai.sis.pitt.edu/papers/11/p736-wilson.pdf" rel="nofollow">this paper</a> in <strong>MATLAB</strong>. Briefly put, the procedure I've implemented is: </p> <ol> <li>I have 3 zero mean variables (i.e., $D = 3$ time series) for $500$ timepoints. I'm using the initial $N = 350$ data points as the training sample.</li> <li>The covariance function I'm using is a squared exponential kernel with 1 hyperparameter, the characteristic length scale $l$. I'm assuming it to be the same for all 3 time series.</li> <li>I'm keeping degrees of freedom constant, $\nu = D + 1$.</li> <li>$L$, the lower Cholesky decomposition of the scale matrix $V$, is computed as the $D \times D$ covariance matrix of the $N \times D$ training dataset.</li> <li><p>The sampling procedure essentially involves 2 steps (using Gibbs sampling).</p> <p>5.1 Sample $u$, an ($N \times D \times \nu$)-dimensional vector, assuming a Gaussian process prior (as defined in equation 19 of the paper). I've assumed a Gaussian likelihood function (as defined in equation 24). For this I'm using <a href="http://homepages.inf.ed.ac.uk/imurray2/pub/10ess/elliptical_slice.m" rel="nofollow">Elliptical Slice Sampling</a>.</p> <p>5.2 Sample the GP hyperparameter $l$, using a lognormal prior (assumption, $mean=1.5$, $var = 1$). I've used <a href="http://homepages.inf.ed.ac.uk/imurray2/teaching/09mlss/slice_sample.m" rel="nofollow">slice sampling for this</a> with the posterior as the product of the GP prior (eq.
19) and lognormal density.</p></li> </ol> <p>I let this Gibbs sampler run for $10000$ iterations ($5000$ burn-in), but the chain for $u$ doesn't seem to converge. </p> <p>I also tried this with smaller $N$ (~ $50$) and an increased number of iterations, but it didn't work.</p>
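One generic checklist item that often helps here: validate each sampler component in isolation on a toy problem whose posterior is known exactly. Below is a minimal elliptical slice sampling sketch (following Murray, Adams and MacKay's algorithm, not the paper's full model), checked against a conjugate case where the exact posterior is $N(0.5, 0.5)$:

```python
import numpy as np

def ess_step(f, loglik, chol_prior, rng):
    """One elliptical slice sampling update for a prior N(0, Sigma)."""
    nu = chol_prior @ rng.standard_normal(f.shape)
    log_y = loglik(f) + np.log(rng.uniform())
    theta = rng.uniform(0, 2 * np.pi)
    lo, hi = theta - 2 * np.pi, theta
    while True:
        fp = f * np.cos(theta) + nu * np.sin(theta)
        if loglik(fp) > log_y:
            return fp
        # shrink the bracket towards theta = 0 (the current state)
        if theta < 0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy check: prior f ~ N(0, 1), likelihood N(x = 1 | f, 1)
# => exact posterior N(0.5, 0.5)
rng = np.random.default_rng(1)
loglik = lambda f: -0.5 * (1.0 - f[0]) ** 2
f = np.zeros(1)
L = np.ones((1, 1))                  # Cholesky of the prior covariance
samples = []
for _ in range(20000):
    f = ess_step(f, loglik, L, rng)
    samples.append(f[0])
samples = np.array(samples[2000:])   # drop burn-in
print(samples.mean(), samples.var())
```

If a component passes a check like this in isolation but the full Gibbs sampler still fails, the bug is more likely in how the pieces are wired together (shapes, conditioning, hyperparameter updates) than in the samplers themselves.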
243
bayesian inference
Bayesian inference - posterior in a simple model
https://stats.stackexchange.com/questions/232037/bayesian-inference-posterior-in-a-simple-model
<p>Suppose you are measuring $n$ quantities with error. Let $\beta_1,\ldots, \beta_n$ represent the true values and $X_1, \ldots, X_n$ represent the measured values of those quantities. Assume that the errors are centered normal. Let $\sigma_i^2\,, i=1, \ldots, n$ represent the <strong>known</strong> variance of each measurement, so that the measurements are $$ X_i | \beta_i \sim N(\beta_i, \sigma_i^2)\,.$$</p> <p>I can recover the above model by writing $$ X_i = \beta_i + \varepsilon_i\,,$$ where $\varepsilon_i \sim N(0, \sigma_{i}^2)$.</p> <p><strong>Now I make the following extension to the model after which I get confused.</strong></p> <p>Suppose that $\beta_i \sim N(\mu, \sigma_b^2)$, with <strong>known</strong> parameters $\mu, \sigma_b^2$. I want to write down the form of the posterior distribution $p(\beta_i | X)$. </p> <hr> <p>On the one hand, if the relationship $X_i = \beta_i + \varepsilon_i $ is still in force, then $$ \beta_i = X_i - \varepsilon_i $$ so the posterior is $\beta_i | \{X_i\} \sim N(X_i, \sigma_i^2)$. In particular, it does not depend on $\sigma_b, \mu$.</p> <p>On the other hand, if I just proceed by Bayes' theorem, then $$ p(\beta_i | \{X_i\}) = \frac{p(\beta_i, \{X_i\})}{P(\{X_i\})} = \frac{p(\{X_i\}|\beta_i) p(\beta_i)}{P(\{X_i\})} \propto p(\{X_i\}|\beta_i) p(\beta_i) = f_1(\{X_i\}| \beta_i) f_2(\beta_i)\,, $$ </p> <p>with $f_1(\{X_i\}| \beta_i) = \prod_{i=1}^n f(X_i| \beta_i)$ where $f(X_i| \beta_i)$ is the density of $N(\beta_i, \sigma_i^2)$ and $f_2(\beta_i)$ is the density of $N(\mu, \sigma_b^2)$.</p> <hr> <p>The results of those two approaches differ; what am I confusing here?
</p> <p><strong>Added later</strong>: As one of the comments suggested, my question is related to <a href="https://stats.stackexchange.com/questions/194784/deriving-the-ridge-regression-boldsymbol-beta-mid-mathbfy-distribution/194794#194794">this question</a>, but the referenced question asks about the specific form of the posterior distribution (why the posterior is normally distributed), which is different from what I was trying to figure out. </p>
<p>I assume the intention is that the $\epsilon$s are independent of the $\beta$s (like in a typical measurement noise model). </p> <p>Then, $\epsilon_i$ and $X_i=\beta_i+\epsilon_i$ are not independent. The error in the first approach is assuming they are and proceeding as if $\epsilon_i \mid X_i \sim N(0, \sigma_i^2)$ would hold.</p>
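A small simulation illustrating this (with hypothetical values $\mu = 0$, $\sigma_b = \sigma_i = 1$): $\varepsilon_i$ and $X_i$ are strongly correlated, and the conditional mean of $\beta_i$ given $X_i$ follows the conjugate-normal shrinkage formula rather than $X_i$ itself:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma_b, sigma = 0.0, 1.0, 1.0   # hypothetical values for illustration
n = 1_000_000

beta = rng.normal(mu, sigma_b, n)
eps = rng.normal(0.0, sigma, n)
X = beta + eps

# eps and X are correlated, so eps | X is NOT N(0, sigma^2)
rho = np.corrcoef(eps, X)[0, 1]
print(rho)            # close to 1/sqrt(2), not 0

# Conditional mean of beta given X near 1, vs the conjugate-normal formula
# E[beta | X] = (X/sigma^2 + mu/sigma_b^2) / (1/sigma^2 + 1/sigma_b^2) = X/2 here
sel = np.abs(X - 1.0) < 0.05
emp_mean = beta[sel].mean()
print(emp_mean)       # close to 0.5, not 1.0
```

So the second (Bayes' theorem) approach is the right one: the posterior mean shrinks the measurement toward the prior mean, which the first approach misses entirely.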
244
bayesian inference
Coin tossing posterior density calculation
https://stats.stackexchange.com/questions/553926/coin-tossing-posterior-density-calculation
<p>I know that my prior distribution is Beta(3,3) and that after 12 tosses of a coin, the number of 'heads' is less than 4, but I don't know the exact number. How do I calculate the posterior density?</p> <p>What I've tried to do is:</p> <p>If <span class="math-container">$X=\#$</span> of heads in <span class="math-container">$n=12$</span> tosses then <span class="math-container">$X\sim Bin(12,\theta)$</span></p> <p><span class="math-container">$$\pi(\theta|x)=\frac{p(x|\theta)\pi(\theta)}{f(x)}$$</span></p> <p>where <span class="math-container">$f(x)$</span> is the marginal likelihood.</p> <p>I tried using 4 cases for <span class="math-container">$X\in\{0,1,2,3\}$</span></p> <p>and calculated 4 different values of the posterior density. But is that correct?</p>
<p>What use is four posterior densities? I would have thought you wanted one. It would be a weighted average of them, but perhaps difficult to find the weights.</p> <p>If <span class="math-container">$p$</span> is the probability of a head then, if my calculations are correct,</p> <ul> <li><p>The prior density is proportional to <span class="math-container">$p^2(1-p)^2$</span> from your Beta distribution;</p> </li> <li><p>The likelihood given <span class="math-container">$3$</span> or fewer heads from <span class="math-container">$12$</span> attempts is proportional to <span class="math-container">${12 \choose 0}(1-p)^{12}+{12 \choose 1}p(1-p)^{11}+{12 \choose 2}p^2(1-p)^{10}+{12 \choose 3}p^3(1-p)^9$</span>; and</p> </li> <li><p>the posterior density is proportional to the product of these, but needs to integrate to <span class="math-container">$1$</span> over <span class="math-container">$[0,1]$</span></p> </li> </ul> <p>which suggests to me a posterior density of <span class="math-container">$$\dfrac{185640}{1271} p^2(1-p)^{11}\left(1+9p+45p^2+165p^3\right)$$</span></p> <p>which is in a sense a weighted average of <span class="math-container">$\operatorname{Beta}(3,15)$</span>, <span class="math-container">$\operatorname{Beta}(4,14)$</span>, <span class="math-container">$\operatorname{Beta}(5,13)$</span> and <span class="math-container">$\operatorname{Beta}(6,12)$</span> densities, but I do not see a quicker way of calculating the required weights.</p> <p>In the chart below, the information from the observation has shifted the red prior density to the blue posterior density.</p> <p><a href="https://i.sstatic.net/ozajh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ozajh.png" alt="enter image description here" /></a></p>
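A quick numerical check of the constant above, which also recovers the mixture weights mentioned in the answer: each component $k$ has unnormalized weight $\binom{12}{k} B(3+k, 15-k)$, which works out to $(91, 234, 396, 550)/1271$ for the four Beta densities listed, and the quoted density does integrate to 1:

```python
from math import comb, factorial
from scipy import integrate

# Mixture weights: w_k proportional to C(12,k) * B(3+k, 15-k), k = 0..3
B = lambda a, b: factorial(a - 1) * factorial(b - 1) / factorial(a + b - 1)
w = [comb(12, k) * B(3 + k, 15 - k) for k in range(4)]
tot = sum(w)
w = [wk / tot for wk in w]
print(w)   # (91, 234, 396, 550) / 1271

# The quoted posterior density integrates to 1
density = lambda p: (185640 / 1271) * p**2 * (1 - p)**11 \
    * (1 + 9 * p + 45 * p**2 + 165 * p**3)
total, _ = integrate.quad(density, 0, 1)
print(total)
```

The weights come out cleanly because all four components share $a + b = 18$, so their Beta-function denominators cancel in the ratio.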
245
bayesian inference
Can I interpret the p-value of a statistic test as a part in Bayesian Formula?
https://stats.stackexchange.com/questions/558394/can-i-interpret-the-p-value-of-a-statistic-test-as-a-part-in-bayesian-formula
<p>Suppose we have a hypothesis test: <span class="math-container">$$H_0: \theta≥\theta_0 ~~~ vs~~~ H_1:\theta&lt;\theta_0$$</span></p> <p>With the observation <span class="math-container">$X$</span>, the p-value is calculated by <span class="math-container">$p = P(X|H_0)$</span>. <em>Which means the sum of probability for less or equal likely events.</em></p> <p>The p-value tells us <strong>the likelihood of this set of observation to happen</strong> given the Null hypothesis. But usually, what we really want to know is <strong>how likely my hypothesis is right</strong>, given the observed data. Which, I think, should be <span class="math-container">$P(H_0|X)$</span>.</p> <p>Then I recall the Bayes' Formula, where I can derive <span class="math-container">$P(H_0|X)$</span> from <span class="math-container">$P(X|H_0)$</span>. <span class="math-container">$$P(H_0|X) = \frac{P(X|H_0) ~P(H_0)}{P(X)}$$</span></p> <p>So in this perspective, p value is merely a term in the formula. Am I wrong? If I'm right, why don't we use Bayes' Formula to fix our hypothesis test?</p>
<p>Welcome to Cross Validated and a +1 from me. I once wondered this, and I saw two issues.</p> <ol> <li>That definition of a p-value is not quite right.</li> </ol> <p><span class="math-container">$$p=P(X\ge x \vert H_0)$$</span></p> <p>(Or something similar for a two-sided test)</p> <p>With that in mind, you are not quite flipping around the conditioning to derive the posterior probability of the null hypothesis. We want to know the posterior probability of <span class="math-container">$H_0$</span> after looking at the data, but the p-value brings in that “or more extreme” business.</p> <ol start="2"> <li>What are the probabilities of the null hypothesis and the observed test statistic? I have yet to come up with a better answer than to say that both have a probability of zero. In that case, our fraction has a <span class="math-container">$\frac{0}{0}$</span> term.</li> </ol> <p>Finally, even in a true Bayesian setting, the probability of a single point like <span class="math-container">$\mu=0$</span> is zero for a continuous posterior distribution, so I’m not convinced that this kind of Bayesian p-value is at all what we want. Bayesian inference can be wonderful for a lot, but it doesn’t magically make continuous distributions have positive probability measure for individual points.</p>
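A toy numerical illustration of how far apart the two quantities can be (not from the answer above, just a standard example): put prior mass 1/2 on $H_0: \mu = 0$ and let $\mu \sim N(0,1)$ under $H_1$, with one observation $x \sim N(\mu, 1)$, so that marginally $x \sim N(0, 2)$ under $H_1$:

```python
from scipy import stats

x = 1.96                      # observed value, chosen so the p-value is ~0.05
p_value = 2 * (1 - stats.norm.cdf(abs(x)))

# P(H0 | x) with prior P(H0) = P(H1) = 1/2
m0 = stats.norm.pdf(x, 0, 1)          # marginal density of x under H0
m1 = stats.norm.pdf(x, 0, 2 ** 0.5)   # marginal density of x under H1
post_h0 = 0.5 * m0 / (0.5 * m0 + 0.5 * m1)
print(p_value, post_h0)
```

Here the p-value is about 0.05 while the posterior probability of $H_0$ is about 0.35: the point-mass prior makes the posterior well defined, but it clearly is not the p-value.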
246
bayesian inference
Can a global sensitivity analysis be performed on Bayesian inference?
https://stats.stackexchange.com/questions/559157/can-a-global-sensitivity-analysis-be-performed-on-bayesian-inference
<p>My question is, is it possible to perform a Global Sensitivity Analysis on a Bayesian inference model (not just on the prior, the entire model)?</p> <p>A bit of context: I am fairly new to Bayesian statistics. Being a research student in astrobiology, I have read a few papers using Bayesian approaches to try constraining the probability of detecting life on an exoplanet (e.g. <a href="https://www.liebertpub.com/doi/10.1089/ast.2017.1737" rel="nofollow noreferrer">Catling et al. (2018)</a>).</p> <p>Several papers use the equation below as a starting point:</p> <p><span class="math-container">$$P(\text{life}|D,C) = \frac{P(D|\text{life},C) * P(\text{life}|C)}{P(D|C,\text{life}) * P(\text{life}|C) + P(D|C,\text{no life}) * P(\text{no life} |C)}$$</span></p> <p>Where <span class="math-container">$\text{life}$</span> and <span class="math-container">$\text{no life}$</span> are the hypotheses of the exoplanet to host life and not host life, respectively. <span class="math-container">$D$</span> is the data collected from the exoplanet, and <span class="math-container">$C$</span> is the exoplanetary context (stellar properties, etc...). Here, the &quot;prior&quot; terms are <span class="math-container">$P(\text{life}|C)$</span> and <span class="math-container">$P(\text{no life}|C)$</span>. The &quot;likelihood&quot; term is <span class="math-container">$P(D|\text{life}, C)$</span> and the &quot;false positive&quot; term is <span class="math-container">$P(D|C, \text{no life})$</span>.</p> <p>I then simply added <span class="math-container">$n$</span> and <span class="math-container">$i$</span> indices to the above equation to express the Bayesian inference (i.e. 
&quot;every time new data are collected, we recalculate the posterior, which then becomes the prior for the next inference&quot;):</p> <p><span class="math-container">$$P(\text{life}|D,C)_n = \frac{P(D|\text{life},C)_i * P(\text{life}|C)_{n-1}}{P(D|C,\text{life})_i * P(\text{life}|C)_{n-1} + P(D|C,\text{no life})_i * P(\text{no life} |C)_{n-1}}$$</span></p> <p>The few articles I saw about Global Sensitivity Analysis (GSA) on Bayesian frameworks were doing the GSA on the prior probability distribution only and not the entire model.</p> <p>But I could see how a GSA could give very useful information about the relative importance of each variable and parameters (prior, likelihood of detecting the data, false positive, number of inferences), especially in a mission design and instrumentation context.</p> <p>So again my question is: is it possible/valid to perform a Global Sensitivity Analysis on the inference expressed above (i.e. the entire model, not just on the prior(s))?</p> <p>Thanks a lot for any help, I hope it's clear enough!</p>
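One way to set such an analysis up (a minimal sketch with made-up input ranges, not a full variance-based GSA): with constant per-step likelihoods, the $n$-step update collapses to posterior odds $=$ prior odds $\times$ (likelihood ratio)$^n$, which makes the model cheap enough to evaluate over many sampled inputs — exactly what Sobol-type GSA tooling needs:

```python
import numpy as np

def posterior_after_n(prior, lik_life, lik_nolife, n):
    """Iterate the question's update rule n times (constant likelihoods).
    Equivalent to: posterior odds = prior odds * (lik_life/lik_nolife)**n."""
    p = prior
    for _ in range(n):
        p = lik_life * p / (lik_life * p + lik_nolife * (1 - p))
    return p

# Crude Monte Carlo scan: sample the inputs (hypothetical ranges), look at the
# spread of the output; a GSA package would estimate Sobol indices from runs
# like these to apportion that variance among the inputs
rng = np.random.default_rng(0)
N = 100_000
prior = rng.uniform(0.01, 0.5, N)        # P(life | C)
lik_life = rng.uniform(0.1, 0.9, N)      # P(D | life, C)
lik_nolife = rng.uniform(0.01, 0.5, N)   # P(D | C, no life)
out = posterior_after_n(prior, lik_life, lik_nolife, 3)
print(out.var())
```

So yes, nothing prevents treating the whole inference (prior, likelihood, false-positive rate, number of updates) as a deterministic function of its inputs and running a GSA over it; the only modelling choice is the ranges or distributions assigned to those inputs.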
247
bayesian inference
Using indirect prior information in Bayesian inference
https://stats.stackexchange.com/questions/296682/using-indirect-prior-information-in-bayesian-inference
<p>Hi I am trying to estimate the posteriors of four calibration parameters namely $c_1, c_2, c_3$ and $c_4$ in the following equation using Bayesian inference</p> <p>$$ F=c_1 \cdot (i^{c_2}) \cdot(s^{c_3}) \cdot (1-\exp(c_4 t)) $$</p> <p>I have the observed data for the output $F$ and inputs $i$,$s$ and $t$. I know the range of $c_4$ from my prior knowledge, so I will use a uniform prior with this range for $c_4$. I don't have any prior knowledge about parameters $c_1, c_2, c_3$ individually. All I know is that $0 &lt; c_1 \cdot (i^{c_2}) \cdot(s^{c_3}) &lt; 1$ for all $i$ and $s$. Now I want to use Bayesian inference to find the posterior of parameters $c_1, c_2, c_3$ and $c_4$. Is there any way to use my indirect prior knowledge about $c_1, c_2, c_3$ here?</p>
<p>As a first try, I would suggest building the multidimensional Jeffreys prior $\Pi(c_1,c_2,c_3,c_4)$ weighted by the support function returning 1 if your constraints are fulfilled and 0 otherwise. This gives you a procedure for obtaining a prior that does not depend on the way you chose to parametrize $F$, which may be quite satisfying. I think it can be computed in a decent time. If there are particular parameters of interest, you can try to derive the reference priors instead, but this may be much more complicated. </p> <p>An alternative approach, if multiple fittings are performed and it seems reasonable, would be to use hierarchical priors where each parameter has a prior whose own parameters depend on the other fittings (e.g. as in <a href="https://stats.stackexchange.com/a/245440/14346">https://stats.stackexchange.com/a/245440/14346</a>). Nevertheless, adding the support condition on $c_1, c_2, c_3$ may be cumbersome.</p>
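A crude sketch of the "weighted by the support function" idea (with hypothetical proposal ranges and a hypothetical $(i, s)$ grid): propose broadly and keep only draws whose $c_1 \cdot i^{c_2} \cdot s^{c_3}$ stays in $(0, 1)$ over all observed inputs. The same indicator would multiply a Jeffreys density instead of the uniform proposals used here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical observed inputs; the constraint must hold at all of them
i_obs = np.array([0.5, 1.0, 2.0, 4.0])
s_obs = np.array([0.2, 1.0, 3.0])
II, SS = np.meshgrid(i_obs, s_obs)

def satisfies_constraint(c1, c2, c3):
    g = c1 * II**c2 * SS**c3
    return bool(np.all((g > 0) & (g < 1)))

# Rejection step: broad proposals, keep only those inside the support
keep = []
for _ in range(20000):
    c1, c2, c3 = rng.uniform(0, 2), rng.uniform(-1, 1), rng.uniform(-1, 1)
    if satisfies_constraint(c1, c2, c3):
        keep.append((c1, c2, c3))
keep = np.array(keep)
print(len(keep), len(keep) / 20000)
```

The retained draws form a sample from the constrained prior, and the acceptance rate indicates how restrictive the indirect knowledge actually is.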
248
bayesian inference
Calculating Bayes&#39; factor for 2 Gamma distributions
https://stats.stackexchange.com/questions/336127/calculating-bayes-factor-for-2-gamma-distributions
<p>I have 2 models $M_1$ and $M_2$ which both have a gamma distribution and the same priors</p> <p>$H_0 : \quad x_i \sim M_1 \\ H_a: \quad x_i \sim M_2$</p> <p>Both $M_1$ and $M_2$ have prior $\sim Ga(7,3000)$ but my posteriors are </p> <p>$M_1 \sim Ga(191,116665.4) \\ M_2 \sim Ga(192, 116188.9)$</p> <p>I get the values of my posterior via simulated data for both models. This means that I have simulated my data using $M_1$, then repeated the simulation but this time using $M_2$.</p> <p>I now have an issue, as I am unsure whether I can calculate the Bayes factor for these 2 models now, because I have 2 lots of simulated data.</p> <p>Would calculating the Bayes' factor for this data still be valid?</p> <p>I know that my Bayes' factor $B$ should be</p> <p>$B = \frac{Ga(191,116665.4)/Ga(7,3000)}{Ga(192, 116188.9)/Ga(7,3000)}$</p> <p>But I don't know if this is a stupid calculation due to the problem mentioned earlier.</p> <p>I'm really sorry if this isn't clear; I will try to explain better if it is not, but I've been thinking about this all day and I can't get my head around it.</p> <p>Just for clarity, I basically want to test which model is better, $M_1$ or $M_2$, and I thought looking at the Bayes' factor would be useful in determining a superior model.</p> <p>Any help/explanation would be so useful.</p>
249
bayesian inference
Can you interpret this question?
https://stats.stackexchange.com/questions/341115/can-you-interpret-this-question
<p>I'm studying for a past exam and I'm actually stumped on what a particular question is asking me. I've thought about it for days and I actually just don't know what they are asking. Can anyone interpret the question?</p> <p>It's part (ii):</p> <p>" The Hobbits living in the Shire are not known for being very tall: while 97.5% of the population is taller than 60 cm, only 2.5% of the population exceeds 122 cm. The average height of the 144 guests who attended Bilbo’s birthday party is 95 cm, with sample standard deviation 19 cm. Assume for the population’s height a normal model with unknown mean μ and fixed variance $\sigma ^2$ .</p> <p>i) Show that the normal distribution is a conjugate prior for the normal sampling model μ (assuming $\sigma^2$ fixed)</p> <p>ii) Elicit prior parameters for the prior of point 1 (without considering the data on Bilbo’s party)."</p> <p>Normally for a normal prior we just have some $\mu_0$ and $\sigma_0^2$. If we are not to consider the data, how can we pick any particular values? There is a footnote for part (ii): "You may use the following equality $\Phi(-1.96)=0.025$"</p> <p>Can anyone shed some light on what the question is actually asking in part (ii)? I can do part (i) but just can't get my head around what part (ii) is asking me to do, as we cannot consider the actual data available.</p>
<p>In part (ii) you can use the <em>population</em> information -- that is, the distribution of all hobbits living in the Shire -- but not information about the hobbits who attended Bilbo's party, to elicit your prior parameters. So your prior should be a normal distribution such that 2.5% of its mass falls below 60 cm and 2.5% falls above 122 cm. This is enough information to determine the parameters of the prior.</p>
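Numerically (a quick check with SciPy): symmetry puts the prior mean at the midpoint of the two quantiles, and the 2.5% tails pin down the standard deviation via the 1.96 factor from the footnote:

```python
from scipy import stats

# 95% of the population lies between 60 cm and 122 cm, symmetrically, so
# mu0 sits at the midpoint and (122 - 60) = 2 * 1.96 * sigma0
mu0 = (60 + 122) / 2                 # 91 cm
sigma0 = (122 - 60) / (2 * 1.96)     # ~15.8 cm

p_low = stats.norm.cdf(60, mu0, sigma0)    # should be ~0.025
p_high = stats.norm.sf(122, mu0, sigma0)   # should be ~0.025
print(mu0, sigma0, p_low, p_high)
```

This uses only the population statement in the exam question, not the party data, exactly as part (ii) requires.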
250
bayesian inference
Estimate the mean and variance 95% HPD credible region using Bayesian inference
https://stats.stackexchange.com/questions/346378/estimate-the-mean-and-variance-95-hpd-credible-region-using-bayesian-inference
<p>I have the following data:</p> <p>31.0, 30.5, 20.6, 27.2, 26.5, 28.1, 25.8, 29.6, 30.0, 25.8, 25.1, 27.9, 23.0, 29.4, 28.7, 25.0, 31.1, 24.8, 24.8, 27.0, 22.3, 29.5, 31.5, 26.2, 24.6, 23.2, 25.7, 24.2, 28.8, 27.4, 29.6, 23.5, 26.4, 28.7, 25.5, 18.6, 25.2, 24.5, 27.9, 33.0, 21.4, 34.4, 27.2, 23.3, 29.3, 31.4, 24.6, 32.3, 22.8, 19.7, 24.6</p> <p>And I have to conduct a Bayesian analysis to make inferences about the 95% HPD credible region for the mean $\mu$ and the variance $\sigma^2$, supposing the semi-conjugate prior is assigned:</p> <p>$$\sigma^2 \sim IG(3,36)$$ $$\mu | \sigma^2 \sim N(26, \sigma^2)$$</p> <p>and supposing a normal model $N(\mu,\sigma^2)$.</p>
<p>Letting $\tau = \frac{1}{\sigma^{2}}$ be the precision, the priors become (with the normal distribution parameterised by its precision; here $n_{0}=1$, from your prior $\mu \mid \sigma^2 \sim N(26,\sigma^2)$):</p> <p>$\tau \sim Gamma(3, 36)$</p> <p>$\mu \space | \space \tau \sim \mathcal{N}(26, n_{0}\tau)$</p> <p>Then the posterior distributions have the form:</p> <p>$\mu \space | \space \tau, x \sim \mathcal{N}(\frac{n\tau}{n\tau + n_{0}\tau}\bar{x} + \frac{n_{0}\tau}{n\tau + n_{0}\tau} \mu_{0}, \space \space n\tau + n_{0}\tau)$</p> <p>$\tau \space | \space x \sim Gamma(\alpha+\frac{n}{2}, \space \beta +\frac{1}{2}\sum(x_{i}-\bar{x})^2 + \frac{nn_{0}}{2(n+n_{0})} (\bar{x}-\mu_{0})^2)$</p> <p>Now you should be able to plug in the values from your priors and the data to get the posterior distributions. Then you can sample from those distributions to get point estimates, credible intervals, HPDs, etc. Hopefully that helps you get started; more details can be found <a href="https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf" rel="nofollow noreferrer">here</a> and <a href="https://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf" rel="nofollow noreferrer">here</a>.</p>
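A sketch of that sampling step in Python, plugging in the question's data and priors with $n_0 = 1$ (since $\mu \mid \sigma^2 \sim N(26, \sigma^2)$). The printed intervals are equal-tailed; an HPD region would instead be the shortest interval covering 95% of the same draws:

```python
import numpy as np

x = np.array([31.0, 30.5, 20.6, 27.2, 26.5, 28.1, 25.8, 29.6, 30.0, 25.8,
              25.1, 27.9, 23.0, 29.4, 28.7, 25.0, 31.1, 24.8, 24.8, 27.0,
              22.3, 29.5, 31.5, 26.2, 24.6, 23.2, 25.7, 24.2, 28.8, 27.4,
              29.6, 23.5, 26.4, 28.7, 25.5, 18.6, 25.2, 24.5, 27.9, 33.0,
              21.4, 34.4, 27.2, 23.3, 29.3, 31.4, 24.6, 32.3, 22.8, 19.7,
              24.6])
n, xbar = len(x), x.mean()
mu0, n0, alpha, beta = 26.0, 1.0, 3.0, 36.0   # n0 = 1 since mu | sigma^2 ~ N(26, sigma^2)

# Posterior Gamma parameters for tau = 1/sigma^2 (rate parameterization)
a_post = alpha + n / 2
b_post = beta + 0.5 * np.sum((x - xbar) ** 2) \
       + n * n0 / (2 * (n + n0)) * (xbar - mu0) ** 2

rng = np.random.default_rng(0)
tau = rng.gamma(a_post, 1.0 / b_post, 50_000)      # numpy gamma takes shape, scale
mu = rng.normal((n * xbar + n0 * mu0) / (n + n0),
                1.0 / np.sqrt((n + n0) * tau))     # precision is (n + n0) * tau
sigma2 = 1.0 / tau

print(np.quantile(mu, [0.025, 0.975]))      # equal-tailed interval for mu
print(np.quantile(sigma2, [0.025, 0.975]))  # and for sigma^2
```

Note that NumPy's gamma sampler is parameterised by shape and scale, hence the `1.0 / b_post` for the rate given above.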
251
bayesian inference
Bayesian updating for coin toss
https://stats.stackexchange.com/questions/367227/bayesian-updating-for-coin-toss
<p>I have used Bayesian reasoning in my research work and it has been extremely useful. The book I have read is E. T. Jaynes's <em>Probability Theory</em>. The idea is to formulate propositions, and then probability theory tells us how to assign numbers (viz. probability) to those propositions, conditional on one's information and beginning from some prior probabilities.</p> <p>A proposition is something that is decidedly either true or false, irrespective of any observer. Therefore for a given coin "Probability of Heads is $p$" is <strong>not</strong> a proposition, because in the Bayesian view probability depends on the observer (his/her information) and is not an objective property of the coin (like its mass or temperature).</p> <p>Suppose I toss the coin once and get Heads. I ask "What's the probability of Heads in the next toss?" Consider the propositions: $$H_k\equiv\textrm{Heads in $k$-th toss}\\ T_k\equiv\textrm{Tails in $k$-th toss}$$</p> <p>Then my question is: $P(H_2|H_1)=?$ </p> <p>I assume uniform prior probabilities: $P(H_k)=P(T_k)=1/2$ for any $k$. The result of the first toss must change the probability of Heads in the second toss (I'm not assuming that the coin is fair; if, say, 100 tosses were to turn up Heads then I would suspect the coin to be biased in favour of Heads and probability theory must indicate the same to me).</p> <p>Bayes' rule gives: $$P(H_2|H_1)=\frac{P(H_1|H_2)P(H_2)}{P(H_1)}=P(H_1|H_2)$$</p> <p>This gets me nowhere. How do I get a number and thus update the probability of Heads with each toss?</p> <p>In this <a href="https://stats.stackexchange.com/questions/244396/bayesian-updating-coin-tossing-example">post</a> and some articles I read on the net, this issue is resolved by taking "Probability of Heads is $p$" as a proposition and then seeking its probability (which amounts to seeking the probability of a probability). This does give an answer.
My only problem (which I believe is a major problem) is that the aforementioned statement is <em>not</em> a proposition and so asking for its probability is nonsense. What's a way out of this conundrum?</p>
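A numerical illustration of the standard resolution the post is skeptical of: treat the unknown bias p as a latent variable with a uniform prior, and estimate P(H_2 | H_1) by simulation. The sample size and seed are arbitrary; the answer this converges to is Laplace's rule of succession, exactly 2/3.

```python
import random

# Monte Carlo estimate of P(Heads on toss 2 | Heads on toss 1) when the
# bias p is a latent variable with prior Uniform(0, 1).
rng = random.Random(42)
n_h1 = n_h1h2 = 0
for _ in range(200_000):
    p = rng.random()               # latent bias, p ~ Uniform(0, 1)
    toss1 = rng.random() < p
    toss2 = rng.random() < p
    if toss1:
        n_h1 += 1
        n_h1h2 += toss2
print(n_h1h2 / n_h1)               # ≈ 2/3, Laplace's rule of succession
```

Note that no single toss's probability changes; it is the joint prior over toss sequences (exchangeability) that makes the tosses informative about each other.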
252
bayesian inference
Truncated count model -- including information about the number of unobserved realisations
https://stats.stackexchange.com/questions/374316/truncated-count-model-including-information-about-the-number-of-unobserved-re
<h2>Background</h2> <p>Suppose we have a model such that <span class="math-container">$Y \sim \mathcal{M}(\theta)$</span> is a discrete random variable taking values in <span class="math-container">$[0, 1, \ldots]$</span>. We would like to make inference about <span class="math-container">$\theta$</span> from a collection of observations <span class="math-container">$\boldsymbol y = \{y_1, y_2, \ldots, y_J\},\: y_i &gt;0$</span>, i.e., we only observe realisations of <span class="math-container">$Y$</span> if they are non-zero. There is some literature from the sixties on performing inference when <span class="math-container">$\mathcal{M}$</span> is a Poisson distribution, for instance. I have a question for which I haven't seen a Bayesian treatment, which is most likely due to my own ignorance and/or poor Googling skills.</p> <p>Suppose I have some (imperfect) knowledge about the size of the "population", <span class="math-container">$N = J + n_0$</span> (see below). First question is (i) should I include this information into the model? and (ii) how should I do that? Below I discuss my partial answers to these. <strong>What I am asking for is</strong>: feedback as to whether these are correct and where in the literature I can find more information.</p> <h2>An attempt at a solution</h2> <p>Let <span class="math-container">$\pi(\theta)$</span> be a joint prior on the parameters and let <span class="math-container">$L(\boldsymbol y | \theta)$</span> be the likelihood, such that the joint posterior is <span class="math-container">$p(\theta | \boldsymbol y) \propto L(\boldsymbol y | \theta)\pi(\theta)$</span>. 
We can use a "compressed" likelihood of the form <span class="math-container">$\prod_{i = 0}^U \text{Pr}(i | \theta)^{n_i}$</span>, where <span class="math-container">$n_i$</span> is number of occurrences of <span class="math-container">$i$</span> in the sample <span class="math-container">$\boldsymbol y$</span> and <span class="math-container">$U$</span> is the maximum such value. It seems to me that we can see <span class="math-container">$n_0$</span> as an extra parameter in the model and assign it a prior <span class="math-container">$\pi_K(n_0)$</span>. I wonder if we can then use <span class="math-container">$L^\prime(\boldsymbol y | \theta) := \prod_{i = 1}^U \text{Pr}(i | \theta)^{n_i}\sum_{j=J+1}^\infty \text{Pr}(0 |\theta)^j \pi_K(j)$</span> as the new likelihood. Summation could also be from <span class="math-container">$J + 1$</span> to <span class="math-container">$N$</span> (instead of <span class="math-container">$\infty$</span>). I choose to marginalise over <span class="math-container">$n_0$</span> to get around having to simulate the discrete parameter <span class="math-container">$n_0$</span>, which is important for fitting the model in Stan, for instance.</p>
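A minimal sketch of the marginalization idea, assuming <span class="math-container">$\mathcal{M}$</span> is Poisson and summing the unobserved zero count <span class="math-container">$n_0$</span> directly over a finite prior support (this restates the proposed <span class="math-container">$L^\prime$</span> with the sum indexed by <span class="math-container">$n_0$</span> itself; all names and numbers are hypothetical):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def marginal_log_lik(theta, counts, prior_n0, n0_max):
    """Log-likelihood of the observed non-zero values, with the number of
    unobserved zeros n0 marginalized over a discrete prior pi_K.

    counts   : dict {value: multiplicity} of the observed y_i > 0
    prior_n0 : callable, pi_K(j) = prior mass on j unobserved zeros
    n0_max   : truncation point of the (possibly infinite) sum over n0
    """
    ll = sum(m * math.log(poisson_pmf(k, theta)) for k, m in counts.items())
    marg = sum(poisson_pmf(0, theta) ** j * prior_n0(j)
               for j in range(n0_max + 1))
    return ll + math.log(marg)

# Toy example: two observations equal to 1, uniform prior on n0 in {0, 1, 2}
value = marginal_log_lik(1.0, {1: 2}, lambda j: 1 / 3, 2)
print(round(value, 4))  # -2.691
```

Because the discrete unknown is summed out, the resulting log-likelihood involves only the continuous parameter, which is exactly what Stan-style samplers require.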
253
bayesian inference
Trouble with MLE
https://stats.stackexchange.com/questions/387298/trouble-with-mle
<p>I have a random sample <span class="math-container">$X_1, X_2, ..., X_n$</span> with <span class="math-container">$X_i$</span> having a pdf</p> <p><span class="math-container">$$ f(x;\theta) = 2\theta^2x^{-2} $$</span></p> <p>I'd like to find the MLE of <span class="math-container">$\theta$</span>.</p> <p>First, because this is a random sample, all <span class="math-container">$X_i$</span> are iid. I proceed to find the joint pdf:</p> <p><span class="math-container">$$ f(\textbf{x}; \theta) = \prod{f(x_i; \theta)} = (2\theta^2)^n\prod{x_i^{-2}} = L(\theta; \textbf{x}) $$</span></p> <p>Then log-likelihood:</p> <p><span class="math-container">$$ \ln(L) = n\ln(2\theta^2) -2\sum{\ln(x_i)} = n\ln(2) + 2n\ln(\theta) -2\sum{\ln(x_i)} $$</span></p> <p>Differentiate wrt <span class="math-container">$\theta$</span>:</p> <p><span class="math-container">$$ \frac{\partial \ln(L)}{\partial \theta} = \frac{2n}{\theta} $$</span></p> <p>To find MLE, solve <span class="math-container">$\frac{\partial \ln(L)}{\partial \theta} = 0$</span>. However, the solution to this is that <span class="math-container">$\theta \rightarrow \infty$</span></p>
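The derivative <span class="math-container">$2n/\theta$</span> is strictly positive, so the log-likelihood has no interior stationary point; the resolution must come from a <span class="math-container">$\theta$</span>-dependent support, which the problem statement above omits. Assuming the support is <span class="math-container">$x \ge 2\theta^2$</span> (one choice that makes the density integrate to 1), the likelihood increases in <span class="math-container">$\theta$</span> until the support constraint binds, so the MLE is the boundary value <span class="math-container">$\hat\theta = \sqrt{x_{(1)}/2}$</span>, with <span class="math-container">$x_{(1)}$</span> the sample minimum. A quick numeric check on made-up data:

```python
import math

def log_lik(theta, xs):
    # f(x; theta) = 2 theta^2 x^{-2}, with assumed support x >= 2 theta^2
    if theta <= 0 or any(x < 2 * theta ** 2 for x in xs):
        return float("-inf")
    n = len(xs)
    return (n * math.log(2) + 2 * n * math.log(theta)
            - 2 * sum(math.log(x) for x in xs))

xs = [1.3, 2.0, 0.9, 4.1]                    # hypothetical sample
grid = [0.01 * i for i in range(1, 100)]
best = max(grid, key=lambda t: log_lik(t, xs))
print(best, math.sqrt(min(xs) / 2))          # MLE hugs the support boundary
```

The grid maximizer lands at the largest feasible θ, not at a zero of the derivative.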
254
bayesian inference
Posterior predictive: what happens to integral over parameters?
https://stats.stackexchange.com/questions/402552/posterior-predictive-what-happens-to-integral-over-parameters
<h3>Question</h3> <p>I don't understand how when integrating over the parameters in the posterior predictive, the integration "disappears". It's hard for me to ask simply because I am confused, so here is an example.</p> <h3>Example</h3> <p>Imagine we have a Gaussian model with unknown mean <span class="math-container">$\mu$</span> and fixed variance <span class="math-container">$\sigma^2$</span>. If <span class="math-container">$D$</span> is the training data and <span class="math-container">$D'$</span> is unseen data, then the posterior predictive is</p> <p><span class="math-container">$$ \begin{align} p(D' \mid D) &amp;= \int p(D' \mid D, \mu) p(\mu \mid D) d\mu \\ &amp;\stackrel{\star}{=} \int p(D' \mid \mu) p(\mu \mid D) d\mu \\ &amp;\triangleq \int \mathcal{N}(D' \mid \mu, \sigma^2) \mathcal{N}(\mu \mid \mu_N, \sigma_N^2) d \mu \end{align} $$</span></p> <p>where step <span class="math-container">$\star$</span> holds because the modeling assumption is that <span class="math-container">$D'$</span> is conditionally independent from <span class="math-container">$D$</span> given <span class="math-container">$\mu$</span> and the definitions of <span class="math-container">$\mu_N$</span> and <span class="math-container">$\sigma_N^2$</span> fall out of computing the posterior. See Bishop (2006) page 98 for details.</p> <p>Here is where I am confused. I can show that</p> <p><span class="math-container">$$ \mathcal{N}(D' \mid \mu, \sigma^2) \mathcal{N}(\mu \mid \mu_N, \sigma_N^2) = \mathcal{N}(D' \mid \mu_N, \sigma_N^2 + \sigma^2) $$</span></p> <p>Murphy's derivation in <a href="https://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf" rel="nofollow noreferrer">Conjugate Bayesian analysis of the Gaussian distribution</a> suggests looking at Bishop, equation 2.115 (see my edit for more). 
My trouble is, Murphy then claims that</p> <p><span class="math-container">$$ p(D' \mid D) = \mathcal{N}(D' \mid \mu_N, \sigma_N^2 + \sigma^2) $$</span></p> <p>which is what I mean by the integration "disappearing". What happened? I understand that this new distribution has no dependence on <span class="math-container">$\mu$</span>, but I would have expected</p> <p><span class="math-container">$$ \begin{align} p(D' \mid D) &amp;= \int \mathcal{N}(D' \mid \mu, \sigma^2) \mathcal{N}(\mu \mid \mu_N, \sigma_N^2) d \mu \\ &amp;= \int \mathcal{N}(D' \mid \mu_N, \sigma_N^2 + \sigma^2) d \mu \\ &amp;= \mathcal{N}(D' \mid \mu_N, \sigma_N^2 + \sigma^2) \int d \mu \end{align} $$</span></p> <p>But it's unclear what becomes of the integral. It's not a probability, so it's not like this is guaranteed to be 1.</p> <hr> <h3>Edit</h3> <p>This is my derivation based on Murphy's hint to look at Bishop (2006), page 93. Since both our posterior and prior are Gaussians, we can use the following fact:</p> <p><span class="math-container">$$ \begin{align} p(\textbf{x}) &amp;= \mathcal{N}(\textbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Psi}) \\ p(\textbf{y} \mid \textbf{x}) &amp;= \mathcal{N}(\textbf{y} \mid \textbf{A} \textbf{x} + \textbf{b}, \textbf{P}) \\ &amp;\Downarrow \\ p(\textbf{y}) &amp;= \mathcal{N}(\textbf{y} \mid \textbf{A} \boldsymbol{\mu} + \textbf{b}, \textbf{P} + \textbf{A} \boldsymbol{\Psi} \textbf{A}^{\top}) \end{align} $$</span></p> <p>where we have</p> <p><span class="math-container">$$ \begin{align} \textbf{x} &amp;= \mu \\ \boldsymbol{\mu} &amp;= \mu_N \\ \boldsymbol{\Psi} &amp;= \sigma_N^2 \\ \textbf{y} &amp;= D' \\ \textbf{A} &amp;= 1 \\ \textbf{b} &amp;= 0 \\ \textbf{P} &amp;= \sigma^2 \end{align} $$</span></p> <p>This gives us</p> <p><span class="math-container">$$ \begin{align} p(\mu) &amp;= \mathcal{N}(\mu \mid \mu_N, \sigma_N^2) \\ p(D' \mid \mu) &amp;= \mathcal{N}(D' \mid \mu, \sigma^2) \\ p(D' \mid \mu) p(\mu) = p(D') &amp;= \mathcal{N}(D' \mid \mu_N, \sigma^2 +
\sigma_N^2) \end{align} $$</span></p> <p>We can add conditioning on <span class="math-container">$D$</span> at every step if we'd like, since it doesn't affect the distributions provided we have the parameters (i.i.d.):</p> <p><span class="math-container">$$ \begin{align} p(\mu \mid D) &amp;= \mathcal{N}(\mu \mid \mu_N, \sigma_N^2) \\ p(D' \mid \mu, D) &amp;= \mathcal{N}(D' \mid \mu, \sigma^2) \\ p(D' \mid \mu, D) p(\mu) = p(D' \mid D) &amp;= \mathcal{N}(D' \mid \mu_N, \sigma^2 + \sigma_N^2) \end{align} $$</span></p>
<p>If I understand the question correctly, it seems to me (and others before me in the comment section) that the derivation <span class="math-container">$$\mathcal{N}(D' \mid \mu, \sigma^2) \mathcal{N}(\mu \mid \mu_N, \sigma_N^2) = \mathcal{N}(D' \mid \mu_N, \sigma_N^2 + \sigma^2)$$</span> or equivalently <span class="math-container">$$p(D' \mid \mu, D) p(\mu) = p(D' \mid D) = \mathcal{N}(D' \mid \mu_N, \sigma^2 + \sigma_N^2)$$</span> is incorrect since the left-hand side is a <em>joint</em> density on <span class="math-container">$(D',\mu)$</span> and the right-hand side is a <em>marginal</em> density on <span class="math-container">$D'$</span>. (When removing <span class="math-container">$\mu$</span> from the above rhs, it is as if <span class="math-container">$\mu$</span> is already integrated, but I advise against such loose reasoning.)</p> <p>What is correct is that, if <span class="math-container">$$\underbrace{D'|\mu\sim}_\text{conditional}\mathcal{N}(\mu,\sigma^2)\qquad\text{and}\qquad\underbrace{\mu\sim}_\text{marginal}\mathcal{N}(\mu_N, \sigma_N^2)$$</span>then <span class="math-container">$$\underbrace{D'\sim}_\text{marginal}\mathcal{N}(\mu_N, \sigma^2+\sigma_N^2)$$</span> as stated in <a href="https://amzn.to/2X6cR5z" rel="nofollow noreferrer">the book</a>.</p>
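The marginalization is easy to check by simulation: draw μ from N(μ_N, σ_N²), then draw D′ given μ, and compare the sample mean and variance with the claimed marginal N(μ_N, σ² + σ_N²). The parameter values below are made up.

```python
import random

# Compose mu ~ N(mu_N, sigma_N^2) with D' | mu ~ N(mu, sigma^2) and verify
# the marginal of D' is N(mu_N, sigma^2 + sigma_N^2).
mu_N, sigma_N, sigma = 1.0, 0.5, 2.0
rng = random.Random(0)
draws = []
for _ in range(200_000):
    mu = rng.gauss(mu_N, sigma_N)        # mu | D   ~ N(mu_N, sigma_N^2)
    draws.append(rng.gauss(mu, sigma))   # D' | mu  ~ N(mu, sigma^2)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(mean, var)   # ≈ 1.0 and ≈ 4.25 (= sigma^2 + sigma_N^2)
```

The integral over μ is carried out implicitly by the two-stage sampling; no leftover ∫ dμ factor appears.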
255
bayesian inference
Best sampling method within the normal family
https://stats.stackexchange.com/questions/417198/best-sampling-method-within-the-normal-family
<p>Suppose that we want to make the best Bayesian inference about some value <span class="math-container">$\mu$</span>, about which we have a normal prior, i.e. <span class="math-container">$\mu\sim N(\mu_0, \sigma_0^2)$</span> with known parameters. To do so, we can choose parameters <span class="math-container">$(\mu_x, \sigma_x)$</span> that define a normal sampling method with mean <span class="math-container">$\mu_s=\mu \frac{\sigma_x^2}{\sigma_x^2+\sigma_0^2}+\mu_x\frac{\sigma_0^2}{\sigma_x^2+\sigma_0^2}$</span> and variance <span class="math-container">$\sigma_s^2=\frac{\sigma_0^2\sigma_x^2}{\sigma_0^2+\sigma_x^2}$</span>. </p> <p>Can it be shown that the optimal value of <span class="math-container">$\mu_x$</span> is <span class="math-container">$\mu_x=\mu_0$</span>? </p> <p>Can it be shown that the optimal value of <span class="math-container">$\sigma_x$</span> is an intermediate value, i.e. <span class="math-container">$\sigma_x\in(0,\infty)$</span>?</p> <p>Can the optimal value of <span class="math-container">$\sigma_x$</span> be characterized in closed form? </p> <p>Details: to be more precise, after choosing parameters <span class="math-container">$(\mu_x,\sigma_x)$</span> we will get an observation from the induced normal distribution, and the goal is for the expected posterior to be as close as possible to <span class="math-container">$\mu$</span>.</p>
256
bayesian inference
Posterior probability of hypothesis distributions
https://stats.stackexchange.com/questions/448987/posterior-probability-of-hypothesis-distributions
<p>Suppose I have <span class="math-container">$K$</span> classes with distribution <span class="math-container">$\theta$</span> over <span class="math-container">$\{1,...,K\}$</span> and an underlying domain <span class="math-container">$D$</span> on which each class defines a categorical distribution <span class="math-container">$\phi_i$</span>.</p> <p>Given a draw <span class="math-container">$i\sim\theta$</span> and <span class="math-container">$x\sim\phi_i$</span>, where only <span class="math-container">$x$</span> is observed, I want to update both <span class="math-container">$\theta$</span> and the <span class="math-container">$\phi_i$</span>'s. The posterior for <span class="math-container">$\theta$</span> is easy:</p> <p><span class="math-container">$\hat{\theta}(i) = P(i\mid x) \propto P(x\mid i)\cdot P(i) = \phi_i(x)\cdot \theta(i)$</span></p> <p>Is it possible to compute a posterior for <span class="math-container">$\phi_i$</span> as well? From the definition of the posterior, it seems that it should be:</p> <p><span class="math-container">$\hat{\phi}_i(\tilde{x}) = P(\tilde{x}\mid x, i) \propto P(x\mid \tilde{x}, i)\cdot P(\tilde{x} \mid i) = \phi_i(x) \cdot \phi_i(\tilde{x})$</span>.</p> <p>but something about that just seems wrong. Shouldn't <span class="math-container">$\theta$</span> appear in the numerator somewhere? Am I interpreting the likelihood term wrong?</p>
<p>You seem to be talking about the <a href="https://en.wikipedia.org/wiki/Posterior_predictive_distribution" rel="nofollow noreferrer">posterior predictive distribution</a>, i.e. the <em>a posteriori</em> distribution of the data. You don't see <span class="math-container">$\theta$</span>, because it is <em>marginalized over</em> possible parameter values. The distribution of the data given some particular parameter value is the <em>likelihood</em> function <span class="math-container">$P(x|i) = \phi_i(x)$</span>.</p> <p>Regarding your comments, I guess the other thing that you may mean is Bayesian updating. Given some data we update a prior</p> <p><span class="math-container">$$ p(\theta | x) \propto p(x | \theta)\,p(\theta) $$</span> </p> <p>next, knowing this you can use the posterior as a prior to be updated with new data <span class="math-container">$\tilde{x}$</span>,</p> <p><span class="math-container">$$ p(\theta|x,\tilde{x}) \propto p(\tilde{x}|\theta,x) \, p(\theta|x) $$</span></p> <p>By the <a href="https://en.wikipedia.org/wiki/Chain_rule_(probability)" rel="nofollow noreferrer">chain rule</a>, this can be done in one step. So you just plug in the posterior estimates given <span class="math-container">$x$</span> as a prior for the likelihood for <span class="math-container">$\tilde{x}$</span>; this may be what you are asking about.</p>
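The θ-update from the question, iterated over a stream of observations exactly as the chain-rule argument above describes (each posterior becomes the next prior). The two classes and their emission probabilities are invented for illustration.

```python
# Two classes with fixed categorical emissions phi_i; update the class
# distribution theta after each observed x via Bayes' rule.
phi = [{"a": 0.7, "b": 0.3},   # phi_1 (made-up)
       {"a": 0.2, "b": 0.8}]   # phi_2 (made-up)

def update(theta, x):
    post = [phi[i][x] * theta[i] for i in range(len(theta))]
    z = sum(post)                      # normalizing constant P(x)
    return [p / z for p in post]

theta = [0.5, 0.5]                     # uniform prior over classes
for x in ["a", "a", "b"]:
    theta = update(theta, x)
print(theta)                           # posterior after three observations
```

Working the three steps by hand gives θ = [147/179, 32/179] ≈ [0.821, 0.179], which the loop reproduces.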
257
bayesian inference
confidence intervals for probabilities of a biased die
https://stats.stackexchange.com/questions/455919/confidence-intervals-for-probabilities-of-a-biased-die
<p>Given a biased die with d faces, you are given results of n die rolls. I need to calculate confidence intervals for the probabilities of each of the d outcomes of the die.</p> <p>A solution in R would be even better.</p>
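The question asks for R; as a sketch of one standard Bayesian answer, here is a stdlib-only Python version: put a flat Dirichlet(1, …, 1) prior on the face probabilities (an assumption), so the posterior is Dirichlet(1 + counts), and read off equal-tailed credible intervals (the Bayesian analogue of the requested confidence intervals) from Monte Carlo draws built out of Gamma variates.

```python
import random

def dirichlet_intervals(counts, alpha=1.0, n_draws=20000, level=0.95, seed=0):
    """Equal-tailed posterior credible intervals for each face probability.

    Prior: Dirichlet(alpha, ..., alpha); posterior: Dirichlet(alpha + counts).
    A Dirichlet sample is built from independent Gamma draws, normalized.
    """
    rng = random.Random(seed)
    post = [alpha + c for c in counts]
    samples = [[] for _ in counts]
    for _ in range(n_draws):
        g = [rng.gammavariate(a, 1.0) for a in post]
        s = sum(g)
        for face, gi in enumerate(g):
            samples[face].append(gi / s)
    lo, hi = (1 - level) / 2, (1 + level) / 2
    out = []
    for draws in samples:
        draws.sort()
        out.append((draws[int(lo * n_draws)], draws[int(hi * n_draws) - 1]))
    return out

counts = [5, 8, 9, 10, 12, 16]          # hypothetical rolls of a 6-sided die
for face, (a, b) in enumerate(dirichlet_intervals(counts), start=1):
    print(f"face {face}: 95% interval ({a:.3f}, {b:.3f})")
```

In R the same thing is a one-liner per face using the Beta marginals of the Dirichlet posterior, e.g. `qbeta(c(.025, .975), 1 + n_i, d + n - 1 - n_i)`.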
258
bayesian inference
Applying Bayesian Reasoning to Estimate the Type of a Feature
https://stats.stackexchange.com/questions/511886/applying-bayesian-reasoning-to-estimate-the-type-of-a-feature
<p>Suppose I have a set of strings <span class="math-container">$S$</span> and I want to find out whether these strings have a certain type. To be more specific, I want to find out whether these strings are surnames. Suppose I have a large list <span class="math-container">$L$</span> of surnames from many regions in the world. (<span class="math-container">$L$</span> can be viewed as a set.) Now, there are likely many surnames not in <span class="math-container">$L$</span> - either because the list does not capture them, or because they are spelled in a way that is similar but not exactly as on the list (e.g. &quot;Markov&quot; vs &quot;Markow&quot; vs &quot;Markoff&quot; etc).</p> <p>Suppose that for each entry in <span class="math-container">$S$</span> (or a sufficiently large sample thereof) I can provide a confidence or probability <span class="math-container">$p$</span> that this is a surname (1 if the surname matches some entry in <span class="math-container">$L$</span> exactly and less than 1 if there is only a fuzzy match; for those who want to know specifically, I am using the Python fuzzyset package that calculates that value on the basis of the Damerau-Levenshtein distance).</p> <p>How can I calculate the probability that the entire set <span class="math-container">$S$</span> is a list of surnames? 
I thought of Bayesian reasoning, but I can't put the calculation together properly.</p> <p>What I want is the probability that <span class="math-container">$S$</span> contains surnames given the confidence that I have about individual entries of <span class="math-container">$S$</span> being &quot;fuzzily&quot; in <span class="math-container">$L$</span>.</p> <p>Specifically, everything conditioned on the specific list <span class="math-container">$L$</span>, I want to know <span class="math-container">$\Pr(A_S | B_s)$</span> where <span class="math-container">$A_S$</span> is the event that <span class="math-container">$S$</span> is indeed a set of surnames and <span class="math-container">$B_s$</span> is the event that <span class="math-container">$s \in S$</span> matches &quot;fuzzily&quot; some element in <span class="math-container">$L$</span>; let <span class="math-container">$\Pr(B_s)$</span> be the confidence <span class="math-container">$p$</span>.</p> <p>I don't get how to proceed from here - or am I entirely on the wrong track?</p>
<p>What you are missing is a likelihood function. For that, you will need to go to linguists. My mother's maiden name, for example, contains two changes. One letter was dropped and another letter substituted when my great-grandparents arrived in America. The original spelling has survived in other branches of the family. Nonetheless, her family came from a country that uses Latin letters.</p> <p>In your example, you chose Markov versus Markoff. In Cyrillic, Markov is spelled Марков. While all of those look like Latin characters, one need only encounter Ж and you are out of similar characters. That presents a different challenge to immigration officials.</p> <p>It becomes more complex when you encounter languages that don't use alphabets. For example, 周 doesn't work at all for Latin languages without sounding things out. In addition to alphabets for writing systems, the world uses abjads, abugidas, syllabic and logographic systems.</p> <p>In addition, some languages have sounds English lacks, such as Georgian or Russian. Moreover, some languages have a first surname and a second surname here in the United States. For example, Tom Many Hides or Mary Runs With Deer have two and three surnames respectively. Indeed, it could be one surname with spaces as letters.</p> <p>Indeed, probably the only reason the Siksikaitsitapi in the United States uses English surnames is that it was a crime to speak their own language in the United States.</p> <p>So your question becomes &quot;what is the probability Smyth is a surname given that Smith is a known surname?&quot; To discover that, you need to know the transition rules from one language to another. That is complicated by the transition to print and then a later transition to computers.</p> <p>Giseldone became Gislandune which, over ten centuries, has become Islington, a borough in London.</p> <p>Close languages, such as French or German, suffer small changes.
As the distance increases, as with Italian or Greek, the changes become larger. When larger transitions happen, as with Russian, Swahili, or Chinese, the changes are greater. The probabilities are different.</p> <p>Printing and computers changed the rules a bit too. Pre-printing name variations such as Smith and Smyth came from the absence of a defined orthography.</p> <p>Of the six known examples of William Shakespeare's signature, he signed his name Willm Shakp, William Shaksper, Wm Shakspe, William Shakspere, Willm Shakspere, and William Shakspeare. Printing made people think that there should be a single solidified form. There should be one correct way and the others are incorrect.</p> <p>The printing press created a crisis that had not previously existed in Christianity. There is not a single known version of the early Bibles. They don't agree with each other. Nobody cared either. Modern textual scholarship shows that there are 400,000 variant passages for the New Testament, which has 33,000 verses. About half of those are due to spelling differences, but the rest are differences in substance. The verses actually say different things. Prior to the printing press, that wasn't an important thing, but printing makes people think there is a solidified form that is right.</p> <p>Computers created a new problem. The longest surname in the United States is 53 characters long. Old computer systems couldn't handle a 53-character name. Also, there is a real surname called &quot;Null,&quot; which creates its own nightmares. Whereas printed names have some sense of being fixed or solidified, computerized storage means that there can be no changes. If your name doesn't fit the computer's rules, well, it gets changed until it does, into a new solidified, computer-compliant form.</p> <p>What you need from linguists is a probability of a transition happening. That is what a likelihood function is. It is the ability to map transitions to probabilities.
As I am not a linguist, I cannot help with that. However, my suspicion is that there is a giant literature on this. My other guess is that the Damerau-Levenshtein distance will perform well on French and German names and horribly on names originating in Tagalog.</p> <p>To get to a Bayesian probability statement, you need to know <span class="math-container">$\Pr(X|\theta)$</span>: the probability of a transition given that a specific transition rule applies.</p>
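Once per-string likelihoods are in hand, combining them into a set-level posterior is mechanical. A sketch under strong assumptions: the fuzzy-match scores are treated as conditionally independent evidence given the set-level hypothesis, and the two score densities f1 and f0 below are invented triangular placeholders, not estimates from any corpus.

```python
import math

def posterior_set_is_surnames(scores, prior=0.5, f1=None, f0=None):
    """Posterior P(A_S | scores) under a naive-Bayes model.

    f1 / f0: assumed densities of a string's fuzzy score when S is / is not
    a surname list. The defaults are purely illustrative."""
    f1 = f1 or (lambda x: 2 * x)          # scores skew high under A_S
    f0 = f0 or (lambda x: 2 * (1 - x))    # scores skew low otherwise
    log_odds = math.log(prior) - math.log(1 - prior)
    for x in scores:
        x = min(max(x, 1e-6), 1 - 1e-6)   # avoid log(0) at the extremes
        log_odds += math.log(f1(x)) - math.log(f0(x))
    return 1 / (1 + math.exp(-log_odds))

print(posterior_set_is_surnames([0.9, 0.95, 0.8, 0.7]))  # close to 1
print(posterior_set_is_surnames([0.5, 0.5, 0.5]))        # stays at the prior
```

The substantive work, as the answer argues, is replacing the placeholder densities with ones grounded in actual transition probabilities between spellings.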
259
bayesian inference
Bayesian hypothesis test: Type I and II errors
https://stats.stackexchange.com/questions/387974/bayesian-hypothesis-test-type-i-and-ii-errors
<p>In a Bayesian hypothesis test between two alternatives A and B, what is the probability of making a type I and type II error?</p> <p>This question has been asked many times on this forum in various formats: Is Bayesian hypothesis testing immune to peeking? What is the optimal stopping point? If the Bayes factor is more reliable than p-values, can we completely trust it? In this <a href="https://alexdeng.github.io/public/files/continuousMonitoring.pdf" rel="nofollow noreferrer">paper</a> Deng provides a strategy to stop early through FDR control and states that Bayesian testing does not provide type I error control based on simulation studies. </p> <p>However, I do feel that the only way to clarify those questions and make Bayesian hypothesis testing more mainstream is to provide a solid argument against a frequentist concern, namely type I and type II errors, through a mathematical formula. </p> <p>Consider the following situation: </p> <p>The posterior distribution is given by rbeta(z+a,N-z+b), where the posterior probability of A is rbeta(z0+a,N-z+b) and the posterior probability of B is rbeta(z1+a,N-z+b). Both have a Bernoulli likelihood and a beta-distributed prior. </p> <ul> <li>What is the type I and II error after the first test? after the nth test? </li> </ul>
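For two simple hypotheses, the frequentist error rates of any Bayesian decision rule can be computed or simulated directly: fix the rule, then generate data under each hypothesis and count wrong decisions. A sketch for a fixed-n rule that declares B when its posterior probability (equal prior odds) exceeds 0.95; all numbers are hypothetical.

```python
import random

# Monte Carlo type I error of a fixed-n Bayesian decision rule between two
# simple hypotheses p_A = 0.5 and p_B = 0.6 for a Bernoulli likelihood.
pA, pB, n, cut = 0.5, 0.6, 100, 0.95
rng = random.Random(1)

def posterior_B(z):
    lik_A = pA ** z * (1 - pA) ** (n - z)
    lik_B = pB ** z * (1 - pB) ** (n - z)
    return lik_B / (lik_A + lik_B)        # equal prior odds

n_sim = 20_000
false_pos = sum(
    posterior_B(sum(rng.random() < pA for _ in range(n))) > cut
    for _ in range(n_sim)                 # data generated under A
)
print(false_pos / n_sim)                  # type I error of this rule, ~0.006
```

The type II error is obtained the same way by generating data under B; what sequential peeking does is change the rule, and hence these rates, not the possibility of computing them.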
260
bayesian inference
In Bayesian inference, it is said that for large samples, the posterior density is dominated by the likelihood. What does this mean?
https://stats.stackexchange.com/questions/519250/in-bayesian-inference-it-is-said-that-for-large-samples-the-posterior-density
<p>In Bayesian inference, it is said that <em><strong>for large samples, the posterior density is dominated by the likelihood. Furthermore, in the region where the likelihood is large, the posterior density is nearly constant.</strong></em> Could you kindly explain the logic behind such as a statement? I would really appreciate if somebody could shed some light on this.</p>
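A concrete Beta-Bernoulli illustration of the statement: two very different priors produce nearly identical posterior means once n is large, because the likelihood term (with n factors) overwhelms the fixed prior. The sample values are invented.

```python
# Beta-Bernoulli: with z successes in n trials and a Beta(a, b) prior,
# the posterior mean is (a + z) / (a + b + n).
def posterior_mean(a, b, z, n):
    return (a + z) / (a + b + n)

for n in (10, 100, 10_000):
    z = round(0.7 * n)
    flat = posterior_mean(1, 1, z, n)       # flat Beta(1, 1) prior
    strong = posterior_mean(20, 2, z, n)    # strongly informative prior
    print(n, round(flat, 4), round(strong, 4), round(abs(flat - strong), 4))
```

At n = 10 the two posteriors disagree noticeably; at n = 10,000 both sit essentially at z/n = 0.7, which is the "dominated by the likelihood" phenomenon.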
261
bayesian inference
How to optimise waterfall questions of purchase value
https://stats.stackexchange.com/questions/434205/how-to-optimise-waterfall-questions-of-purchase-value
<p>Imagine I have an item I want to sell to a person. I know for sure that the person is not willing to pay more than \$X for this item, but I don't know which value between \$0 and \$X they <em>are</em> willing to pay for it, and I'm equally uncertain about all of them. So if I call how much they're willing to pay for it V, <span class="math-container">$p(V)$</span> is a uniform distribution between 0 and X.</p> <p>I cannot, however, straightforwardly ask them. What I can do instead is ask whether they're willing to pay some value \$Y for the item. If they are, then I get \$Y and sell the item. If they aren't, then I cannot sell the item at all, and they pay me nothing.</p> <p>Therefore, they will buy the product if they value my item at \$Y or more and will not otherwise, and so the expected value of asking for \$Y is <span class="math-container">$p(V\geq Y) * Y$</span>. Since <span class="math-container">$p(V) = 1/X$</span>, it is straightforward to see that the value Y which maximises that expectation is X/2, and the actual expected value is \$X/4.</p> <hr> <p>Suppose, however, that I can ask this person <em>twice</em> instead of just once. That is, I can ask them whether they're willing to pay \$Y for the item. Then:</p> <ul> <li>If they are, I get \$Y and sell the item.</li> <li>If they are not, I can ask again for a different value \$Z, and then either I sell the item for \$Z or I don't and get nothing.</li> </ul> <p>An initial, naive solution would be to just iterate the above suggestion: first I ask for \$X/2 and then, if I'm refused, I update my probability distribution and ask for \$X/4. The expected value of that strategy is \$5X/16 (50% chance that I get \$X/2, then a 50% chance that there's a 50% chance that I get \$X/4, and then in the remaining case I get \$0).</p> <p>However, that is not the optimal strategy. Suppose, instead, that I decide to ask for \$2X/3 and then for \$X/3.
There's a 1/3 chance that they'll take me up on my first request and a 2/3 chance that they won't; after that, there's a 50% chance that they'll take me up on my <em>second</em> request, and a 50% chance that they won't. The final expected value of that strategy is \$X/3, which is greater than the \$5X/16 from the previous idea.</p> <p>How would I have found this out in advance? What if I'm allowed to ask N questions, is it always better to ask for \$(N - 1)X/N and then \$(N - 2)X/N and so on until I get to \$X/N?</p> <hr> <p>Now, suppose instead that I have M different people who might be willing to buy my item. I have a joint prior distribution about how much they value my item. I can ask a <em>total of</em> N questions between them all and, since my beliefs about how much they value this item are correlated, whenever one of them refuses the item this is also information about how much the others value it.</p> <p>How do I solve <em>this</em> problem? If it's too open, suppose I constrain it to having a multivariate Normal distribution (with known mean vector and covariance matrix) for my prior beliefs about how much they value my item; where do I go from here?</p>
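"How would I have found this out in advance?" can be answered numerically: with X normalized to 1 and V uniform, the expected revenue of a two-offer strategy has a closed form, and a brute-force grid search over it recovers the (2X/3, X/3) ladder and the expected value X/3.

```python
# Expected revenue of offers (y1, y2) with valuation V ~ Uniform(0, 1):
# y1 is accepted with prob (1 - y1); else y2 is accepted when y2 <= V < y1.
def expected_revenue(y1, y2):
    return y1 * (1 - y1) + y2 * max(0.0, y1 - y2)

grid = [i / 1000 for i in range(1001)]
best = max(((a, b) for a in grid for b in grid),
           key=lambda p: expected_revenue(*p))
print(best, expected_revenue(*best))   # ≈ (2/3, 1/3) with value ≈ 1/3
```

The same brute-force template extends to more questions and, with a Monte Carlo estimate of the acceptance probabilities, to correlated multi-customer priors, at rapidly growing cost.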
<p><strong>Short answer:</strong></p> <ol> <li><p>Indeed, when the same customer may be approached at most <span class="math-container">$n$</span> times, it is optimal to start with offer <span class="math-container">$y_1=\frac{n}{n+1}x$</span> and decrease the price by <span class="math-container">$\frac{x}{n+1}$</span> with every refusal.</p></li> <li><p>The above result only holds for the uniform distribution of the customer's valuation <span class="math-container">$v$</span>. Under the normal distribution, no closed-form answer is available; the solution can only be obtained numerically.</p></li> <li><p>When <span class="math-container">$n$</span> customers may be approached, each at most once, the optimization problem assumes a double-layer structure: an inner smooth optimization problem is wrapped in an outer combinatorial optimization task. Coupled with non-trivial covariance structure in valuations, this makes the analytical (and/or tractable) closed-form solution infeasible. </p></li> <li><p>A promising approach for the latter case would be to assume that valuations <span class="math-container">$v_1,...,v_n$</span> are uniformly distributed over a <a href="https://en.wikipedia.org/wiki/Parallelepiped#Parallelotope" rel="nofollow noreferrer">parallelotope</a> and leverage <a href="https://www.cs.mcgill.ca/~fukuda/soft/polyfaq/node26.html" rel="nofollow noreferrer">polyhedral computation</a> methods to speed up computations.</p></li> </ol> <p><strong>Detailed answer</strong></p> <ol> <li><strong>Approaching the same customer at most <span class="math-container">$n$</span> times</strong> When we are dealing with a single customer and may try up to <span class="math-container">$n$</span> times until he walks away, the problem solution is straightforward and follows the logic presented in the other answer to this post.
The expected profit can be written down as follows:</li> </ol> <p><span class="math-container">\begin{align} \mathbb{E}\pi =&amp; y_1 \cdot \mathbb{P}(v &gt; y_1) + y_2 \cdot \mathbb{P}(v &lt; y_1 \cap v &gt; y_2 )+ ... + y_n \cdot \mathbb{P}(v &gt; y_n \cap (\cap_{j=1}^{n-1} v &lt; y_j )) =\\ = &amp; \sum_{i=1}^n y_i \mathbb{P}(v &gt; y_i \cap (\cap_{j=1}^{i-1} v &lt; y_j )) = \\ = &amp; \sum_{i=1}^n y_i \mathbb{P}(y_i &lt; v &lt; y_{i-1}) \end{align}</span></p> <p>where the implicit convention is that <span class="math-container">$y_0$</span> is some upper bound of the support of <span class="math-container">$v$</span> so that <span class="math-container">$\mathbb{P}( v &lt; y_0) = 1$</span>.</p> <p>The solution to the problem is thus a sequence <span class="math-container">$\mathbf{y}:=(y_1,...,y_n)$</span> maximizing the expected profit. In the case of the uniform distribution <span class="math-container">$v \sim U[0, x]$</span>, the first order conditions with respect to <span class="math-container">$y_i$</span> represent a system of linear equations:</p> <p><span class="math-container">\begin{equation} \frac{\partial}{\partial y_i} \mathbb{E} \pi = 0 \iff \begin{cases} \frac{x-y_1}{x} - y_1 \frac{1}{x} + y_2 \frac{1}{x} = 0 \\ \frac{y_{i-1}-y_{i}}{x} - y_i \frac{1}{x} + y_{i+1} \frac{1}{x} = 0 \quad \forall i=2,..n-1 \\ \frac{y_{n-1}-y_{n}}{x} - y_n \frac{1}{x} = 0 \end{cases} \end{equation}</span></p> <p>It is immediate from the second equality that there exists an increment <span class="math-container">$\delta$</span> such that <span class="math-container">$\forall i=2,..n$</span> holds <span class="math-container">$y_{i} = y_{i-1} - \delta$</span>, implying that <span class="math-container">$y_i = y_1 - (i-1)\delta$</span>. This is consistent with the first and the last equality iff <span class="math-container">$\delta = \frac{x}{n+1}$</span> and <span class="math-container">$y_1 = \frac{n}{n+1}x$</span> (the last equality forces <span class="math-container">$y_n = \delta$</span> and the first forces <span class="math-container">$x = y_1 + \delta$</span>), which matches the OP's two-offer example of <span class="math-container">$\frac{2x}{3}$</span> followed by <span class="math-container">$\frac{x}{3}$</span>.
</p> <p><strong>Remark:</strong></p> <p>Once we switch to a more contrived assumption about the distribution of the customer valuation, an explicit answer becomes impossible to obtain analytically. E.g. in the case of a normal distribution truncated to the interval <span class="math-container">$[0,x]$</span> the system of first order conditions becomes</p> <p><span class="math-container">\begin{equation} \begin{cases} \frac{\Phi(x)-\Phi(y_1)}{\Phi(x)-1/2} - y_1 \frac{\phi(y_1)}{\Phi(x)-1/2} + y_2 \frac{\phi(y_1)}{\Phi(x)-1/2} = 0 \\ \frac{\Phi(y_{i-1})-\Phi(y_{i})}{\Phi(x)-1/2} - y_i \frac{\phi(y_i)}{\Phi(x)-1/2} + y_{i+1} \frac{\phi(y_i)}{\Phi(x)-1/2} = 0 \quad \forall i=2,..n-1 \\ \frac{\Phi(y_{n-1})-\Phi(y_{n})}{\Phi(x)-1/2} - y_n \frac{\phi(y_n)}{\Phi(x)-1/2} = 0 \end{cases} \end{equation}</span></p> <p>where <span class="math-container">$\phi(z) = (2\pi)^{-1/2}\exp(-\frac{z^2}{2})$</span> is the pdf of the standard normal and <span class="math-container">$\Phi(z) = \int_{-\infty}^{z} \phi(\xi)d\xi$</span> is its cdf. </p> <p>One can clearly see that an analytical closed-form solution is infeasible in this case.</p> <ol start="2"> <li><strong>Approaching at most <span class="math-container">$n$</span> different customers no more than once each</strong></li> </ol> <p>First take a look at a 2-customer problem: for a sequence of offers <span class="math-container">$y_1, y_2$</span> and the ordering (permutation) of clients <span class="math-container">$\sigma(1),\sigma(2)$</span> the expected revenue looks as follows:</p> <p><span class="math-container">\begin{align} \mathbb{E}\pi = &amp;y_1 \mathbb{P}(v_{\sigma(1)} &gt; y_1) + \mathbb{P}(v_{\sigma(1)} &lt; y_1) \cdot y_2 \mathbb{P}(v_{\sigma(2)}&gt;y_2 | y_1 &gt; v_{\sigma(1)}) =\\ = &amp; y_1 \mathbb{P}(v_{\sigma(1)} &gt; y_1) + y_2 \mathbb{P}(v_{\sigma(1)} &lt; y_1 \cap v_{\sigma(2)}&gt;y_2).
\end{align}</span></p> <p>Now, it is not hard to write down the formula for <span class="math-container">$n$</span> clients:</p> <p><span class="math-container">\begin{align} \mathbb{E}\pi =&amp; \sum_{i=1}^n y_i \cdot \mathbb{P}\left(v_{\sigma(i)} &gt; y_i \cap ( \cap_{j=1}^{i-1} v_{\sigma(j)}&lt;y_j)\right) = \\ = &amp; \sum_{i=1}^n y_i \cdot (\mathbb{P}\left(\cap_{j=1}^{i-1} v_{\sigma(j)}&lt;y_j\right) - \mathbb{P}\left(\cap_{j=1}^i v_{\sigma(j)}&lt;y_j\right) )= \\ = &amp; y_1 + \sum_{i=1}^{n} (y_{i+1} - y_i) \mathbb{P}\left(\cap_{j=1}^{i} v_{\sigma(j)}&lt;y_j\right), \end{align}</span></p> <p>where the last equality holds true if we posit <span class="math-container">$y_{n+1}\equiv 0$</span>.</p> <p>Here, just as in the remark above, one can see that an analytical closed form solution of the inner smooth optimization problem is impossible to obtain under the joint normality assumption, as the first order conditions will certainly be non-polynomial.</p> <p>This, together with the fact that the outer problem is combinatorial, makes a numerical (algorithmic) solution of the problem the most promising way forward.</p> <p><strong>An idea for a way forward:</strong></p> <p>Computation of probabilities <span class="math-container">$\mathbb{P}\left(\cap_{j=1}^{i} v_{\sigma(j)}&lt;y_j\right)$</span> might turn out to be easier if we assume a <em>distorted uniform distribution</em> (uniform over a parallelotope). I haven't seen anything like this in the literature myself, but it might be a good compromise between the simplicity of operations with the uniform distribution and the capacity of the normal to capture non-trivial covariance structure. </p> <p>More precisely, one may assume that <span class="math-container">$\mathbf{v}=(v_1,...,v_n)$</span> is distributed uniformly over an <em><span class="math-container">$n$</span>-parallelotope</em>.
In this case the parameter of the distribution would be the matrix of <span class="math-container">$n$</span> stacked vectors <span class="math-container">$a_1,...,a_n$</span> in the frame of the parallelotope, and the unconditional density would be the inverse of its volume, whereas conditional probabilities may be computed as volumes of straightforwardly formulated convex polytopes.</p> <p>Given a reasonably fast oracle for computing optimal <span class="math-container">$\mathbf{y}$</span>, the combinatorial optimization for moderate <span class="math-container">$n$</span> may then be performed by a simple comparison of expected profits of different permutations.</p>
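<p><em>A numerical sanity check of part 1</em> (a sketch of mine, not needed for the argument; <span class="math-container">$n$</span> and <span class="math-container">$x$</span> below are illustrative): for <span class="math-container">$v \sim U[0,x]$</span> the first order conditions form a tridiagonal linear system that can be solved directly, and the resulting stationary point indeed beats randomly drawn decreasing price sequences.</p>

```python
import random

def solve_foc(n, x):
    # First order conditions for v ~ U[0, x] (the system in part 1):
    #   -2*y_1 + y_2 = -x,  y_{i-1} - 2*y_i + y_{i+1} = 0,  y_{n-1} - 2*y_n = 0.
    # Tridiagonal solve via the Thomas algorithm.
    c = [0.0] * n  # modified superdiagonal
    d = [0.0] * n  # modified right-hand side
    c[0] = 1.0 / -2.0
    d[0] = -x / -2.0
    for i in range(1, n):
        denom = -2.0 - c[i - 1]
        c[i] = (1.0 / denom) if i < n - 1 else 0.0
        d[i] = (0.0 - d[i - 1]) / denom
    y = [0.0] * n
    y[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        y[i] = d[i] - c[i] * y[i + 1]
    return y

def expected_profit(y, x):
    # E[pi] = sum_i y_i * P(y_i < v < y_{i-1}) with y_0 = x, v ~ U[0, x]
    prev, total = x, 0.0
    for yi in y:
        total += yi * (prev - yi) / x
        prev = yi
    return total

n, x = 3, 1.0
y_star = solve_foc(n, x)
base = expected_profit(y_star, x)
random.seed(0)
# no randomly drawn decreasing price sequence should beat the stationary point
for _ in range(10_000):
    y = sorted((random.uniform(0, x) for _ in range(n)), reverse=True)
    assert expected_profit(y, x) <= base + 1e-9
print([round(v, 4) for v in y_star])  # -> [0.75, 0.5, 0.25]
```

<p>The expected profit is quadratic with a negative definite (tridiagonal) Hessian, so the stationary point is the global maximum and the comparison above can never fail.</p>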
262
bayesian inference
Fitting a single model to different datasets that include different variables
https://stats.stackexchange.com/questions/541925/fitting-a-single-model-to-different-datasets-that-include-different-variables
<p>Suppose we have two datasets <em>df_1</em> with variables {A,B,C} and <em>df_2</em> with variables {A,C,D} (A &amp; C are the only mutual variables in the two datasets). Our aim is to predict A using B &amp; C or C &amp; D (depending on which pair is given). The simplest approach is to model A using <em>df_1</em> and <em>df_2</em> separately and have two different fits suitable for each pair of inputs. Now I have two questions:</p> <ol> <li><p>Is it possible to use a hierarchical model where we fit A ~ B + C and A ~ C + D at the same time and allow for information sharing via a hierarchical structure to boost the learning of the effect of C on A?</p> </li> <li><p>Can we use a change point model in this 3D space (which I suppose makes it a change plane model) to have two planes instead of a single one fitted to the data?</p> </li> </ol> <p><strong>Note: I don't want P-splines to be used here for dealing with non-linearity, because they harm interpretability; that's why I prefer using change points (change planes) to report the linear effect of B and C on A in maybe two or three different regions in the space.</strong></p>
263
bayesian inference
How to find the likelihood probability of an exponential data model
https://stats.stackexchange.com/questions/563991/how-to-find-the-likelihood-probability-of-an-exponential-data-model
<p>I have a very basic knowledge in statistics, so I am struggling a bit with the ideas of Bayesian inference.</p> <p>My data model looks like this,</p> <p><span class="math-container">$$ z(t) = \sum_{n = 1}^{N} e^{j 4\pi/\lambda \sqrt{(x_{n, t -1} + u_n.dt)^2 + (y_{n, t -1} + v_n.dt)^2}} + \mathcal{N}(0, \sigma^2) $$</span></p> <p>Parameter space is <span class="math-container">$\theta = [u_n \quad v_n]$</span></p> <p>The idea is to find a distribution for <span class="math-container">$u$</span> and <span class="math-container">$v$</span> given the measurements. I know the prior probability distributions for <span class="math-container">$u$</span> and <span class="math-container">$v$</span>. So, that is <span class="math-container">$p(\theta) = p(u_n) . p(v_n)$</span>. I assume they are independent. Am I correct here?</p> <p>Then, as I understand from the model, <span class="math-container">$z$</span> is an exponential complex function of <span class="math-container">$u$</span> and <span class="math-container">$v$</span>. Therefore, the probability density of <span class="math-container">$z$</span> shouldn't be the same as <span class="math-container">$u_n$</span> and <span class="math-container">$v_n$</span> right?</p> <p>So, how to come up with the likelihood that is necessary in Bayesian inference algorithms? That is <span class="math-container">$p(z|\theta)$</span>? Every example I see, they always start with a PDF model for the data instead of the data model directly. Any idea how to proceed?</p> <p>In the equation, <span class="math-container">$\mathcal{N}(0, \sigma^2)$</span> is additive zero-mean Gaussian noise with variance <span class="math-container">$\sigma^2$</span>, <span class="math-container">$dt$</span> is the time step. The <span class="math-container">$z$</span> samples are samples in time. The term <span class="math-container">$N$</span> is the number of targets. 
<span class="math-container">$x_n$</span> and <span class="math-container">$y_n$</span> are positions of <span class="math-container">$N$</span> targets and <span class="math-container">$u_n$</span> and <span class="math-container">$v_n$</span> are their velocities. I also see a potential difference between <span class="math-container">$z$</span> and <span class="math-container">$\theta$</span>. The <span class="math-container">$\theta$</span> is a function of targets and <span class="math-container">$z$</span> is a signal in time <span class="math-container">$t$</span>.</p>
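<p>To make my question concrete, here is how I currently picture the likelihood (a sketch; the function names and the circularly-symmetric treatment of the complex noise are my own assumptions): with additive zero-mean Gaussian noise, <span class="math-container">$p(z|\theta)$</span> should just be a Gaussian density centred on the deterministic part of the data model, evaluated at the residual <span class="math-container">$z - \mu(\theta)$</span>.</p>

```python
import cmath
import math

def signal_model(u, v, x_prev, y_prev, dt, lam):
    # deterministic part of z(t): sum over targets of exp(j * 4*pi/lam * range)
    total = 0j
    for un, vn, xn, yn in zip(u, v, x_prev, y_prev):
        rng = math.hypot(xn + un * dt, yn + vn * dt)
        total += cmath.exp(1j * 4.0 * math.pi / lam * rng)
    return total

def log_likelihood(z_obs, u, v, x_prev, y_prev, dt, lam, sigma2):
    # assumption: circularly-symmetric complex Gaussian noise with total
    # variance sigma2 (real and imaginary parts i.i.d. N(0, sigma2/2))
    resid = z_obs - signal_model(u, v, x_prev, y_prev, dt, lam)
    return -abs(resid) ** 2 / sigma2 - math.log(math.pi * sigma2)

# illustrative single-target example: a noiseless observation should be most
# likely under the true velocities
z = signal_model([1.0], [0.5], [10.0], [5.0], dt=0.1, lam=0.03)
ll_true = log_likelihood(z, [1.0], [0.5], [10.0], [5.0], 0.1, 0.03, 1.0)
ll_wrong = log_likelihood(z, [2.0], [0.5], [10.0], [5.0], 0.1, 0.03, 1.0)
```

<p>If this is right, the log-likelihood of the whole record is just the sum of this quantity over the time samples, which is the ingredient the Bayesian algorithms ask for.</p>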
264
bayesian inference
Understanding convergence in Bayesian inference of coin tossing
https://stats.stackexchange.com/questions/432710/understanding-convergence-in-bayesian-inference-of-coin-tossing
<p>When we are uncertain about the probability of head, <span class="math-container">$p_H$</span>, in a coin tossing, we often model it using a Beta prior as follows: <span class="math-container">$$p_H\sim \text{Beta}(a_0,b_0),$$</span> for some parameters <span class="math-container">$a_0,b_0$</span>. </p> <p>When we toss the coin <span class="math-container">$N$</span> times and when we get <span class="math-container">$N_H$</span> and <span class="math-container">$N_T$</span> number of heads and tails, respectively, the posterior we have is <span class="math-container">$$p_H\sim \text{Beta}(a_0+N_H,b_0+N_T).$$</span></p> <p>So, the mean value of <span class="math-container">$p_H$</span> is <span class="math-container">$\frac{a_0+N_H}{a_0+N_H+b_0+N_T}$</span> with a variance <span class="math-container">$\frac{(a_0+N_H)(a_0+N_H)}{N^2(N+1)}.$</span></p> <p>My question here is</p> <p><strong>Can we say the posterior distribution converges to the "true" distribution?</strong></p> <p>when the true distribution (or the true scalar value of <span class="math-container">$p_H$</span>) is never going to be observed?</p>
<h4>Yes, it does converge to the &quot;true distribution&quot; (suitably defined)</h4> <p>First of all, it is worth noting that it is a little strange to refer to the &quot;true distribution&quot; of the parameter as something aside from the prior and posterior. If you proceed under the operational Bayesian approach then the parameter has an operational definition as a function of the sequence of observable values, and so it is legitimate to refer to a &quot;true value&quot; of the parameter (see esp. <a href="https://rads.stackoverflow.com/amzn/click/com/047149464X" rel="nofollow noreferrer" rel="nofollow noreferrer">Bernardo and Smith 2000</a>). The standard convergence theorems in Bayesian statistics show that the posterior converges weakly to the true parameter, defined operationally through the law-of-large numbers. It is less common to refer to a &quot;true distribution&quot; of the parameter, as something apart from the prior or posterior. If I understand your intention correctly, that would essentially just be a point-mass distribution on the true value. If that is what you mean, then yes, the posterior will converge (weakly) to this. That is really just another restatement of the standard convergence theorems in Bayesian statistics.</p> <p>Taking <span class="math-container">$\mathbf{X} = (X_1, X_2, X_3, ...)$</span> to be the sequence of observable coin-toss outcomes, we define <span class="math-container">$p_H$</span> operationally as a function of <span class="math-container">$\mathbf{X}$</span>. In Bayesian analysis with IID data the most operational definition of the parameters is an index to the limiting empirical distribution of the observable sequence (see <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1751-5823.2008.00059.x" rel="nofollow noreferrer">O'Neill 2009</a> for discussion and details). Now, the beta posterior you are referring to arises in the IID model:</p> <p><span class="math-container">$$X_1,X_2,X_3,... 
| p_H \sim \text{IID Bern}(p_H).$$</span></p> <p>The parameter <span class="math-container">$p_H$</span> can be given an operational definition as the limit of the sample mean of the observable coin-tosses (see <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1751-5823.2008.00059.x" rel="nofollow noreferrer">O'Neill 2009</a> again). To facilitate our analysis, we will use the notation <span class="math-container">$\hat{p}_H \equiv \lim N_H/N$</span> to denote this limit, so the &quot;true value&quot; of the parameter <span class="math-container">$p_H$</span> is the point <span class="math-container">$\hat{p}_H$</span>. (In other words <span class="math-container">$p_H$</span> <em>is</em> <span class="math-container">$\hat{p}_H$</span>, but we will use two separate referents to elucidate the convergence.)</p> <p>Your posterior distribution does indeed converge to the &quot;true distribution&quot; of <span class="math-container">$p_H$</span>, which is a point-mass distribution on <span class="math-container">$\hat{p}_H$</span>. To see this, we first derive the asymptotic mean and variance<span class="math-container">$^\dagger$</span> of the posterior:</p> <p><span class="math-container">$$\begin{equation} \begin{aligned} \lim_{N \rightarrow \infty} \mathbb{E}(p_H| \mathbf{X}_N) &amp;= \lim_{N \rightarrow \infty} \frac{a_0+N_H}{a_0 + b_0 + N} \\[6pt] &amp;= \lim_{N \rightarrow \infty} \frac{N_H}{N} \cdot \frac{a_0+N_H}{N_H} \Bigg/ \frac{a_0+b_0+N}{N} \\[6pt] &amp;\overset{a.s}{=} \lim_{N \rightarrow \infty} \frac{N_H}{N} = \hat{p}_H. \\[6pt] \lim_{N \rightarrow \infty} \mathbb{V}(p_H| \mathbf{X}_N) &amp;= \lim_{N \rightarrow \infty} \frac{(a_0+N_H)(b_0+N_T)}{(a_0 + b_0 + N)^2(a_0 + b_0 + N+1)} \\[6pt] &amp;= \lim_{N \rightarrow \infty} \frac{a_0+N_H}{a_0 + b_0 + N} \cdot \frac{b_0+N_T}{a_0 + b_0 + N} \cdot \frac{1}{a_0 + b_0 + N + 1} \\[6pt] &amp;\leqslant \lim_{N \rightarrow \infty} \frac{1}{a_0 + b_0 + N + 1} \\[6pt] &amp;= 0. 
\\[6pt] \end{aligned} \end{equation}$$</span></p> <p>So, we have <span class="math-container">$\mathbb{E}(p_H| \mathbf{X}_N) \overset{a.s}{\rightarrow} \hat{p}_H$</span> and <span class="math-container">$\mathbb{V}(p_H| \mathbf{X}_N) \rightarrow 0$</span>, which gives <a href="https://en.wikipedia.org/wiki/Convergence_of_random_variables#Convergence_in_mean" rel="nofollow noreferrer">convergence in mean-square</a> to the true parameter value <span class="math-container">$\hat{p}_H$</span>. Using <a href="https://en.wikipedia.org/wiki/Markov%27s_inequality" rel="nofollow noreferrer">Markov's inequality</a> this implies convergence in probability to <span class="math-container">$\hat{p}_H$</span>, which further implies convergence in probability of the posterior to the point-mass distribution on <span class="math-container">$\hat{p}_H$</span>. This means that we have weak convergence to the &quot;true distribution&quot; of the parameter.</p> <hr /> <p><span class="math-container">$^\dagger$</span> Your stated posterior variance is incorrect - I have used the correct posterior variance in my working.</p>
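<p>The convergence is also easy to watch in simulation (a sketch of my own; the prior parameters and the limiting frequency below are arbitrary choices): as <span class="math-container">$N$</span> grows, the posterior mean approaches the limiting relative frequency while the posterior variance dies off at rate <span class="math-container">$1/N$</span>.</p>

```python
import random

def posterior_moments(a0, b0, tosses):
    # mean and variance of the Beta(a0 + N_H, b0 + N_T) posterior
    n_h = sum(tosses)
    n_t = len(tosses) - n_h
    a, b = a0 + n_h, b0 + n_t
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

random.seed(1)
p_true = 0.3  # plays the role of the limiting relative frequency
tosses = [1 if random.random() < p_true else 0 for _ in range(100_000)]
mean, var = posterior_moments(2.0, 2.0, tosses)
# mean is close to 0.3 and var is of order 1/N, i.e. the posterior is
# piling up on a point mass at the true value
```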
265
bayesian inference
Bayesian inference of Pr(X &gt; Y) where X and Y each have an approximate posterior distribution
https://stats.stackexchange.com/questions/552919/bayesian-inference-of-prx-y-where-x-and-y-each-have-an-approximate-posterior
<p>I am developing a Bayesian system in which I would like to quantify the evidence for or against the conclusion that one data-generating process (X, for which we observe X = x) will produce a more extreme result than another process (Y, for which we observe Y = y).</p> <p>For my purposes, by &quot;more extreme&quot; I mean that I am interested in the quantity Pr(X &gt; Y). Also, X and Y are defined over the positive integers, so &quot;extreme&quot; literally means &quot;more positive&quot;.</p> <p>Suppose that I have empirical posterior distributions for both processes X and Y, that I acquired through an MCMC procedure. Now, my understanding is that since I can sample directly from the posteriors I could simply draw pairs from X and Y an arbitrary number of times, and report the proportion of instances in which x &gt; y. Is this correct?</p> <p>As you read this question, you may infer that I am generally interested in a procedure for comparing X and Y that does not rely on a summary statistic of the two distributions (e.g., Pr(E(X) &gt; E(Y) ). Why would I describe my posterior with a summary when I have the entire posterior distributions to work with? I am unaware of a general method for comparing two distributions that does not degenerate into a simple comparison of means (or other similar summary statistic).</p> <p>Edit: a bit of context, below:</p> <p>Essentially, I have a computer simulation that is designed to accept a bunch of parameters values and then simulate the number of products sold over a 3-month period of time. 
The parameters include things like time of year (spring, summer, winter, fall), number of stores (numeric), average income of the area (numeric), etc.</p> <p>What I did is set up the simulation with two different sets of parameterizations (e.g., simulation &quot;X&quot; took place in wintertime in a low-income neighborhood whereas simulation &quot;Y&quot; took place in summertime in a high-income neighborhood), and then allowed the program to generate two sets of simulated numbers of products sold. The simulations under both conditions were replicated many times so I have a distribution of results for both X and Y.</p> <p>My goal is to demonstrate that X &gt; Y. I understand that looking at means/modes would simplify my analysis, but I don't want to make my result too sensitive to the shape of the distributions (e.g., could be polymodal).</p> <p>Since the two distributions were generated independently but by programs that only differ by a few parameter values, I suspect that X and Y will vary in similar ways but can be considered to be independent from one another.</p>
<p>Yes you &quot;could simply draw pairs from <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> an arbitrary number of times, and report the proportion of instances in which <span class="math-container">$x &gt; y$</span>.&quot; What you wish to estimate is not a random variable; hence, it is constructed from a point estimator of a parameter of a random distribution. In this case the random distribution may either be the difference statistic <span class="math-container">$D = X - Y$</span> or the ratio statistic <span class="math-container">$R = \frac{X}{Y}$</span> and the parameters would be <span class="math-container">$P(D&gt;0)$</span> and <span class="math-container">$P(R&gt;1)$</span>, respectively. Clearly both of these are equivalent to <span class="math-container">$P(X&gt;Y)$</span>. Looking at the posterior distribution of <span class="math-container">$D$</span> and <span class="math-container">$R$</span> will provide additional information into disparities of the posterior densities of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, but perhaps a simple overlay of the posterior densities would suffice.</p> <p>It is important to stress that to draw a sample from the posterior density of <span class="math-container">$D$</span> or <span class="math-container">$R$</span>, one merely draws a random pair from the posterior density of <span class="math-container">$(X,Y)$</span> and computes <span class="math-container">$D=X-Y$</span> or <span class="math-container">$R=X/Y$</span>.</p>
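<p>The pairing procedure above is a one-liner in practice. A sketch (the two &quot;posterior samples&quot; here are synthetic normal draws standing in for MCMC output):</p>

```python
import random

random.seed(0)
n_draws = 200_000
# stand-ins for MCMC draws from the posteriors of X and Y
x_draws = [random.gauss(10.0, 2.0) for _ in range(n_draws)]
y_draws = [random.gauss(8.0, 2.0) for _ in range(n_draws)]

# proportion of paired draws with x > y estimates P(X > Y)
p_x_gt_y = sum(x > y for x, y in zip(x_draws, y_draws)) / n_draws

# the same pairs also give the full posterior of D = X - Y for inspection
d_draws = [x - y for x, y in zip(x_draws, y_draws)]
```

<p>For these particular stand-ins the exact value is <span class="math-container">$\Phi(2/\sqrt{8}) \approx 0.760$</span>, and a histogram of <code>d_draws</code> shows the posterior of the difference statistic <span class="math-container">$D$</span> directly.</p>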
266
bayesian inference
Bayesian Model(Write out likelihood and prior)
https://stats.stackexchange.com/questions/576071/bayesian-modelwrite-out-likelihood-and-prior
<p>I am working with a dataset regarding transmission rate for a disease spreading among cattle at different farms during a 5-month period.</p> <p>The goal is to estimate the transmission parameter <span class="math-container">$\alpha$</span> using a Bayesian model.</p> <p>I have a dataset with 12 entries(different farms), with the format</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><span class="math-container">$N$</span></th> <th><span class="math-container">$Z_1$</span></th> <th><span class="math-container">$Z_2$</span></th> <th>Type</th> </tr> </thead> <tbody> <tr> <td>32</td> <td>17</td> <td>23</td> <td>0</td> </tr> <tr> <td>10</td> <td>5</td> <td>8</td> <td>1</td> </tr> </tbody> </table> </div> <p>where <span class="math-container">$N$</span> is the amount of cows tested, <span class="math-container">$Z_1$</span> is the amount of infected cows before the period, <span class="math-container">$Z_2$</span> is the amount infected after, and the binary variable <strong>Type</strong> is determines if it is a dairy farm(0) or meat farm(1).</p> <p>I have assumed that the number of newly infected animals during the period follows a Poisson distribution with mean</p> <p><span class="math-container">$$\alpha \frac{SI}{N},$$</span></p> <p>where <span class="math-container">$N$</span>is the number of tested animals, <span class="math-container">$S$</span> is the amount of negative at first testing and <span class="math-container">$I$</span> is the number of positive at first testing. I also model <span class="math-container">$\alpha$</span> as <span class="math-container">$$\log(\alpha) = \beta_0 +\beta_1x_i$$</span> where <span class="math-container">$x_i \in \{0,1\}$</span> is the type of farm.</p> <p>I now want to write out a Bayesian model for this(likelihood and prior), with diffuse priors on the parameters. 
My first idea was to write out the likelihood as</p> <p><span class="math-container">$$\prod_{i=1}^{12} \frac{1}{(Z_{2_i}-Z_{1_i})!}\left(\frac{S_i I_i}{N_i}\exp\{\beta_0 + \beta_1\text{type}_i\} \right)^{Z_{2_i}-Z_{1_i}} \exp\left\{-\frac{S_i I_i}{N_i}\exp\{\beta_0 + \beta_1\text{type}_i\} \right\}$$</span></p> <p>but I am quite unsure, especially about how I should write out the prior. I would also appreciate some help on the likelihood.</p>
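<p>To check that the expression at least computes, I coded the log of my likelihood directly (a sketch; the helper names are mine, and under a flat prior the log-posterior equals this log-likelihood up to a constant):</p>

```python
import math

def log_likelihood(beta0, beta1, farms):
    # farms: rows of (N, Z1, Z2, type); newly infected k = Z2 - Z1 is
    # Poisson with mean (S*I/N) * exp(beta0 + beta1*type), S = N - Z1, I = Z1
    total = 0.0
    for N, Z1, Z2, ftype in farms:
        S, I, k = N - Z1, Z1, Z2 - Z1
        mu = (S * I / N) * math.exp(beta0 + beta1 * ftype)
        total += k * math.log(mu) - mu - math.lgamma(k + 1)
    return total

farms = [(32, 17, 23, 0), (10, 5, 8, 1)]  # the two example rows from the table
```

<p>Maximizing this in <code>beta0, beta1</code>, or handing it to a sampler together with log-priors, would then give the posterior.</p>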
<p>There are two parameters in your model, <span class="math-container">$\beta_0$</span> and <span class="math-container">$\beta_1$</span>, which are real parameters that can vary from <span class="math-container">$-\infty$</span> to <span class="math-container">$\infty$</span>, so the most <a href="https://en.wikipedia.org/wiki/Prior_probability#Uninformative_priors" rel="nofollow noreferrer">uninformative prior</a> that you can use for these parameters would be the improper uniform prior over the real number line, <span class="math-container">$P(\beta_0,\beta_1) \propto 1$</span>, in which case your expression for the likelihood is the same as the expression for the posterior.</p> <p>In stan, this model could be expressed like this:</p> <pre><code>data {
  int n_farms;
  int&lt;lower=0&gt; N[n_farms];             // number of cows tested
  int&lt;lower=0&gt; Z1[n_farms];            // number of cows infected before
  int&lt;lower=0&gt; Z2[n_farms];            // number of cows infected after
  int&lt;lower=0,upper=1&gt; type[n_farms];  // whether the farm is dairy (0) or meat (1)
}
transformed data {
  int newly_infected[n_farms];
  for (i in 1:n_farms)
    newly_infected[i] = Z2[i] - Z1[i];
}
parameters {
  real beta0;
  real beta1;
}
transformed parameters {
  vector[n_farms] alpha;
  vector[n_farms] newly_infected_mu;
  for (i in 1:n_farms) {
    alpha[i] = exp(beta0 + beta1 * type[i]);
    newly_infected_mu[i] = alpha[i] * (N[i] - Z1[i]) * Z1[i] / N[i];
  }
}
model {
  // model of newly infected
  newly_infected ~ poisson(newly_infected_mu);
}
</code></pre> <p>By omitting an explicit prior on the <code>beta0</code>, <code>beta1</code> parameters in the <code>model</code> block, we are implicitly using the improper uniform prior on these parameters.</p> <p>However, given that we have very little data available, it would be best to use available domain knowledge to place priors on <span class="math-container">$\beta_0,\beta_1$</span> to establish our expectations for the range of values that we might consider to be plausible in the real
world. The most uninformative prior that we can apply that would establish a range of reasonable values for each of our parameters would be a normal distribution with a specified mean and standard deviation for each of the parameters that are approximately set based on domain experience. So, we could update our stan model with normal priors to something like this:</p> <pre><code>data {
  int n_farms;
  int&lt;lower=0&gt; N[n_farms];             // number of cows tested
  int&lt;lower=0&gt; Z1[n_farms];            // number of cows infected before
  int&lt;lower=0&gt; Z2[n_farms];            // number of cows infected after
  int&lt;lower=0,upper=1&gt; type[n_farms];  // whether the farm is dairy (0) or meat (1)

  // prior parameters
  real beta0_mu;
  real&lt;lower=0&gt; beta0_sd;
  real beta1_mu;
  real&lt;lower=0&gt; beta1_sd;
}
transformed data {
  int newly_infected[n_farms];
  for (i in 1:n_farms)
    newly_infected[i] = Z2[i] - Z1[i];
}
parameters {
  real beta0;
  real beta1;
}
transformed parameters {
  vector[n_farms] alpha;
  vector[n_farms] newly_infected_mu;
  for (i in 1:n_farms) {
    alpha[i] = exp(beta0 + beta1 * type[i]);
    newly_infected_mu[i] = alpha[i] * (N[i] - Z1[i]) * Z1[i] / N[i];
  }
}
model {
  // priors
  beta0 ~ normal(beta0_mu, beta0_sd);
  beta1 ~ normal(beta1_mu, beta1_sd);

  // model of newly infected
  newly_infected ~ poisson(newly_infected_mu);
}
</code></pre>
267
bayesian inference
Compute the Maximum A Posteriori (MAP) estimate of θ
https://stats.stackexchange.com/questions/578414/compute-the-maximum-a-posteriori-map-estimate-of-%ce%b8
<p>How can I compute the Maximum A Posteriori (MAP) estimate of <span class="math-container">$\theta$</span> given the following information: a discrete random variable y with values in {1, 2, . . . , N} has a Binomial distribution depending on the unknown probability <span class="math-container">$\theta \in (0,1)$</span> of the form <span class="math-container">$p(y=k|θ)=\binom{N}{k}θ^{k}(1-θ)^{N-k}$</span>. I have to compute the Maximum A Posteriori (MAP) estimate of <span class="math-container">$\theta$</span> based on a single observation y, assuming the prior density on <span class="math-container">$\theta$</span> to be a Beta distribution <span class="math-container">$B(x; a, b)=\frac{1}{B}x^{a-1}(1-x)^{b-1}$</span> where <span class="math-container">$x \in (0, 1)$</span>, <span class="math-container">$a, b &gt; 0$</span> and <span class="math-container">$B$</span> is a normalization constant. I also know that the mode of the Beta distribution is <span class="math-container">$\hat{x}=\frac{a-1}{a+b-2}$</span>. <br /> My idea is to calculate <span class="math-container">$\arg \max_{θ}[p(y=k|θ)p(θ)]$</span> but I don't know if it is the right way, and I don't even know how to proceed. <br /> Thanks! <br /> EDIT: I tried to compute the MAP estimate of <span class="math-container">$\theta$</span> using the theoretical definition: <span class="math-container">$\hat{θ}_{MAP}=\arg \max_{θ}[p(y=k|θ)p(θ)]$</span>. <br /> So I obtained: <span class="math-container">$\hat{θ}_{MAP}=\arg \max_{θ}[\binom{N}{k}θ^{k}(1-θ)^{N-k}\frac{1}{B}θ^{a-1}(1-θ)^{b-1}]$</span>. At this point I don't know how to proceed. Is there any analytical method to find the value of θ that maximizes this function? Can the mode of the Beta distribution be useful information here?</p>
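<p>EDIT 2: as a numerical sanity check of my idea (illustrative numbers; the grid search is only there to verify the closed form), I compared a brute-force maximization of <span class="math-container">$p(y=k|\theta)p(\theta)$</span> against the Beta mode formula applied to the updated parameters <span class="math-container">$(a+k,\, b+N-k)$</span>, since the product above is proportional to a <span class="math-container">$B(a+k, b+N-k)$</span> density:</p>

```python
def map_closed_form(k, N, a, b):
    # mode of the Beta(a + k, b + N - k) posterior
    return (a + k - 1) / (a + b + N - 2)

def map_grid(k, N, a, b, steps=100_000):
    # brute-force argmax of theta^(k + a - 1) * (1 - theta)^(N - k + b - 1);
    # the binomial coefficient and 1/B are constants in theta and can be dropped
    best_t, best_v = 0.0, -1.0
    for i in range(1, steps):
        t = i / steps
        v = t ** (k + a - 1) * (1 - t) ** (N - k + b - 1)
        if v > best_v:
            best_t, best_v = t, v
    return best_t

# illustrative numbers: y = 7 successes out of N = 20, Beta(2, 3) prior
print(map_closed_form(7, 20, 2, 3), map_grid(7, 20, 2, 3))
```

<p>The two agree, so it seems the MAP is just the mode of the updated Beta density.</p>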
268
bayesian inference
Bayesian Prior definition
https://stats.stackexchange.com/questions/578510/bayesian-prior-definition
<p>The prior of an inference problem where we try to infer <span class="math-container">$x$</span> from observations <span class="math-container">$y$</span> is defined as <span class="math-container">$P(X)$</span>. Often (<a href="https://arxiv.org/pdf/1010.5141.pdf" rel="nofollow noreferrer">e.g.</a>) I see another definition where the prior is defined as <span class="math-container">$P(X|Q)$</span>. What exactly is <span class="math-container">$Q$</span> in that context? Is it the set of hyperparameters, as shown <a href="https://en.wikipedia.org/wiki/Bayesian_inference#:%7E:text=Bayesian%20inference%20is%20a%20method,and%20especially%20in%20mathematical%20statistics." rel="nofollow noreferrer">here</a>?</p>
269
bayesian inference
Reasonable to incorporate sample size into beta-binomial?
https://stats.stackexchange.com/questions/581139/reasonable-to-incorporate-sample-size-into-beta-binomial
<p><strong>Setup:</strong></p> <p>The relationship between the beta and binomial distributions is well known.</p> <p><span class="math-container">$$\frac{\pi^{\alpha - 1} (1 - \pi)^{\beta - 1}}{B(\alpha, \beta)} \leftrightarrow {{n}\choose{x}}\pi^{x} (1 - \pi)^{n-x}$$</span></p> <p>By comparing the two, one can see:</p> <ol> <li><span class="math-container">$\alpha - 1$</span> is analogous to the number of successes, <span class="math-container">$x$</span></li> <li><span class="math-container">$\beta - 1$</span> is analogous to the number of failures, <span class="math-container">$n-x$</span>, and thus</li> <li><span class="math-container">$\alpha + \beta - 2$</span> is analogous to the number of trials, <span class="math-container">$n$</span>.</li> </ol> <p>I am faced with the problem of testing whether two independent binomial random variables of different sizes have the following relation:</p> <p>First we observe a random binomial, <span class="math-container">$X_1 \sim Bin(n_1, p_1)$</span>.</p> <p>Later, we observe another random binomial, <span class="math-container">$X_2 \sim Bin(n_2, p_2)$</span>.</p> <ul> <li><span class="math-container">$H_0: p_2 = 0.65p_1$</span></li> <li><span class="math-container">$H_a: p_2 \ge 0.65p_1$</span></li> </ul> <p>My thought is to take a Bayesian approach.</p> <hr /> <p>If I assume a uniform prior for <span class="math-container">$\pi_1$</span>, <span class="math-container">$\pi_1 \sim Beta(1,1)$</span>, then the posterior distribution is <span class="math-container">$\pi_1 | x_1 \sim Beta(x_1 + 1, n_1 - x_1 + 1)$</span></p> <p><strong>Question: Is it reasonable to do the following?</strong></p> <p>Since we know the outcome of the first trial <span class="math-container">$\{x_1, n_1\}$</span>, as well as the size of the second trial, <span class="math-container">$n_2$</span>, I want to choose <span class="math-container">$\alpha_2$</span> and <span class="math-container">$\beta_2$</span> such that:</p> <ol> <li><span
class="math-container">$\frac{\alpha_2}{\alpha_2 + \beta_2} = \frac{13}{20} \frac{x_1 + 1}{n_1 + 2}$</span>, and</li> <li><span class="math-container">$\alpha_2 + \beta_2 = n_2 + 2$</span></li> </ol> <p>This suggests choosing the following values of <span class="math-container">$\alpha_2$</span> and <span class="math-container">$\beta_2$</span>:</p> <ul> <li><span class="math-container">$\alpha_2$</span></li> </ul> <p><span class="math-container">$$ \begin{align} \frac{\alpha_2}{\alpha_2 + \beta_2} = \frac{\alpha_2}{n_2 + 2} &amp; = \frac{13}{20} \frac{x_1 + 1}{n_1 + 2} \\ \alpha_2 &amp; = \frac{13}{20} \frac{x_1 + 1}{n_1 + 2} (n_2 + 2) \end{align} $$</span></p> <p>This intuitively makes sense, since we believe the number of successes, <span class="math-container">$x_2$</span>, will be roughly <span class="math-container">$\frac{13}{20} p_1$</span> of the <span class="math-container">$n_2$</span> trials (i.e. <span class="math-container">$\alpha_2 + \beta_2 = n_2 + 2$</span>).</p> <ul> <li><span class="math-container">$\beta_2$</span></li> </ul> <p><span class="math-container">$$ \begin{align} \beta_2 = n_2 + 2 - \alpha_2 &amp; = n_2 + 2 - \frac{13}{20} \frac{x_1 + 1}{n_1 + 2} (n_2 + 2) \\ &amp; = \left(1 - \frac{13}{20} \frac{x_1 + 1}{n_1 + 2}\right) (n_2 + 2) \end{align} $$</span></p> <hr /> <p>So, the prior would be <span class="math-container">$\pi_2 \sim Beta\left(\frac{13}{20} \frac{x_1 + 1}{n_1 + 2} (n_2 + 2), \left(1 - \frac{13}{20} \frac{x_1 + 1}{n_1 + 2}\right) (n_2 + 2)\right)$</span></p> <p>Then the posterior distribution is <span class="math-container">$\pi_2 | x_2 \sim Beta\left(\frac{13}{20} \frac{x_1 + 1}{n_1 + 2} (n_2 + 2) + x_2, \left(1 - \frac{13}{20} \frac{x_1 + 1}{n_1 + 2}\right) (n_2 + 2) + n_2 - x_2\right)$</span></p> <hr /> <p>Is this reasonable?
Thanks for any thoughts you have on the matter.</p> <hr /> <p><strong>EDIT:</strong></p> <p>Based on <a href="https://stats.stackexchange.com/users/919/whuber">Whuber's</a> critique (and whuber is a stats.stackexchange God), perhaps I can replace the <span class="math-container">$n_2 + 2$</span> in my prior specification of <span class="math-container">$\pi_2$</span> with some other constant, <span class="math-container">$k$</span>? This would maintain my assumption that <span class="math-container">$E(\Pi_2) = \frac{13}{20}E(\Pi_1)$</span>, but I can adjust <span class="math-container">$k$</span> to modify the variance and thereby reflect my confidence in the chosen value?</p> <p>So, the prior would be <span class="math-container">$\pi_2 \sim Beta\left(k * \frac{13}{20} \frac{x_1 + 1}{n_1 + 2}, k * \left(1 - \frac{13}{20} \frac{x_1 + 1}{n_1 + 2}\right) \right)$</span></p> <p>Then the posterior distribution is <span class="math-container">$\pi_2 | x_2 \sim Beta\left(k * \frac{13}{20} \frac{x_1 + 1}{n_1 + 2} + x_2, k * \left(1 - \frac{13}{20} \frac{x_1 + 1}{n_1 + 2}\right) + n_2 - x_2\right)$</span></p>
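<p>For what it's worth, the quantity I ultimately care about is easy to get by simulation once the posteriors are fixed (a sketch with made-up data; <code>random.betavariate</code> draws from the two Beta posteriors):</p>

```python
import random

random.seed(0)
x1, n1 = 40, 100   # outcome of the first trial (made up)
x2, n2 = 30, 80    # outcome of the second trial (made up)
k = 10.0           # prior "pseudo-sample size" for pi_2, as in the edit above

m2 = (13 / 20) * (x1 + 1) / (n1 + 2)  # assumed prior mean for pi_2
a2 = k * m2 + x2                      # posterior Beta parameters for pi_2
b2 = k * (1 - m2) + n2 - x2

draws = 100_000
hits = 0
for _ in range(draws):
    p1 = random.betavariate(x1 + 1, n1 - x1 + 1)  # posterior draw of pi_1
    p2 = random.betavariate(a2, b2)               # posterior draw of pi_2
    hits += p2 >= 0.65 * p1
prob = hits / draws  # Monte Carlo estimate of P(pi_2 >= 0.65 * pi_1)
```

<p>Varying <span class="math-container">$k$</span> then shows directly how sensitive this posterior probability is to the confidence encoded in the prior.</p>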
270
bayesian inference
How to go about selecting an algorithm for approximate Bayesian inference
https://stats.stackexchange.com/questions/32214/how-to-go-about-selecting-an-algorithm-for-approximate-bayesian-inference
<p>I am wondering if there are any good rules of thumb for how to go about selecting an approximate inference algorithm for a problem/model (specifically when exact inference is intractable)? When you are faced with a problem, what are the things you consider when selecting an approach for inference (e.g. MCMC, belief propagation, variational, etc.)?</p>
<p>First, you have to decide how much time you can afford.</p> <p>If you have a large amount of time for your numerical experiments, you can try an MCMC method; in some cases this also lets you avoid complex integrations.</p> <p>If you have a strong background in statistics and need to integrate a lot, you can try methods like the variational lower bound or expectation propagation. You then have to choose a batch of parameters carefully (for example, with a variational lower bound approach you have to select a distribution you can integrate out to replace the initial distribution, so you have to use your intuition, or simply use a normal distribution).</p> <p>If the problem is new and no other approaches have been tried, you can simply start with a Gaussian or Laplace approximation.</p> <p>Also, in many cases you can use a method proposed in the state of the art. For example, all the methods you mention were successfully used for heteroscedastic Gaussian process regression (see, for example, the paper <a href="http://www.tsc.uc3m.es/~miguel/papers/vhgpr_icml.pdf" rel="nofollow">http://www.tsc.uc3m.es/~miguel/papers/vhgpr_icml.pdf</a> from the 2011 ICML conference).</p> <p>P.S. In the ICML 2012 article <a href="http://icml.cc/discuss/2012/360.html" rel="nofollow">http://icml.cc/discuss/2012/360.html</a> an interesting and simple method for variational inference was proposed, so you can try it for your problem.</p>
271
bayesian inference
Comparison of Variational Bayes and Expectation Maximization algorithms
https://stats.stackexchange.com/questions/82184/comparison-of-variational-bayes-and-expectation-maximization-algorithms
<p>I need to learn both the VB and EM methods for Bayesian Networks. Before going into the details of both algorithms, which I am a bit aware of, I need to EXACTLY understand the basic motivations behind them. Different resources use the terms "inference, parameters, estimation, learning" so intermingled that I easily lose track and find myself more confused.</p> <p>I will try to explain the purpose of the algorithms in a comparative way, as far as I understand them for now (probably incorrectly). As my question, I am kindly asking you to correct me and to show me the thinking errors in my explanation below so I can correctly grasp the fundamentals.</p> <p>So, we have a very general system of random variables. It is commonly said that there are no parameters in a Bayesian Inference task (only variables) and therefore we have two types of variables: the set of observed variables, $D$, and the unobserved ones. The unobserved ones consist of the set of latent variables (and/or observable but missing variables) $Z$ and the ones on which we want to make an estimation, inference, etc., $X$. The whole system consists of the random variable sets $D, X$ and $Z$.</p> <p>So the general Bayesian Inference problem is $P(X|D) = \int P(X,Z|D)dZ = \int P(X|Z,D)P(Z|D)dZ$, where we condition on the given data and integrate out all "nuisance" variables.</p> <p>As far as I understand, EM tries to make a "point" estimate of the posterior distribution $P(X|D)$ by finding (not exactly finding, by converging actually) $X^* = \arg\max_{X} P(X|D)$, where $X^*$ collects the point estimates of all variables $x \in X$. So we take these estimates as the "observed values of $X$" from now on and use them in the following inference steps along with $D$. 
For example if we have two disjoint $Z_1$ and $Z_2$ with $Z_1 \cup Z_2 = Z$ and we want to infer $Z_1$ it is now $P(Z_1|D,X=X^*) = \int P(Z_1,Z_2|D,X=X^*) dZ_2$</p> <p>The Variational Bayes method treats $P(X|D) = \int P(X,Z|D)dZ = \int P(X|Z,D)P(Z|D)dZ$ by finding a tractable approximation to the posterior $P(X|D)$, say, $Q(X)$. This is not a point estimate, rather a simpler but complete distribution. We can use it for future inferences based on the data $D$. So again using $Z_1$ and $Z_2$, it is:</p> <p>$P(Z_1|D) = \int\int P(Z_1,Z_2,X|D) dZ_2dX = \int\int P(Z_1,Z_2|X,D) P(X|D) dZ_2 dX = \int\int P(Z_1,Z_2|X,D) Q(X) dZ_2 dX$ </p> <p>for a new inference.</p> <p>Now, are these really the basis for the both algorithms, did I understand their functions correctly? If not, please kindly show me the correct ones.</p> <p>Thanks in advance</p>
272
bayesian inference
Estimating total number of people from an observed sample
https://stats.stackexchange.com/questions/109166/estimating-total-number-of-people-from-an-observed-sample
<p>The well known "German tank problem" shows how to answer the question: "If I have tanks which have an increasing serial number, and I see a sample of tanks and record their serial numbers, what is the likely total number of tanks". This question is analogous, but for the case where there is no ordering to the observations, e.g. with people.</p> <p>Here's a hypothetical example (and the one I am most interested in). Suppose you go to a company website and they provide a number of CVs for staff of a particular job title (e.g. analyst or whatever). The question is, given this knowledge, how many staff are there likely to be with that title?</p> <p>To formalise this, let the number of observed people be $m$ and the total number of people be $N$. The question is then: What is $p(N|m)$.</p> <p>I appreciate that there may be company policies at work here, e.g. they may want to show all of the people on a particular level, or some representative sample.</p> <p>Clearly, $N \geq m$. </p> <p>Bayes' theorem gives that $p(N|m) \propto p(m|N) p(N)$. Let's ignore the prior $p(N)$ for now (or equivalently assume that it is flat), giving $p(N|m) \propto p(m|N)$.</p> <p>Using combinatorics, the number of ways you can get the observed $m$ people from a larger set is $N \choose m$. So immediately this implies that $p(m|N) \propto 1/ {N \choose m}$. There is a normalisation constant which is obtainable by requiring that $\sum_{N=m}^\infty C/{N \choose m} = 1$.</p> <p>The problem is that $1/ {N \choose m}$ is a very steeply declining function for even moderate values of $m$. For example, using $m=5$ then $p(5|5) \sim 0.8$, $p(6|5) \sim 0.13$, $p(7|5) \sim 0.038$, $p(8|5) \sim 0.014$ etc. My intuition is that if you observe five people you shouldn't conclude there is a 97% chance that there are between 5 and 7 people.</p> <p>What is going on here? I suppose I could change my prior, but this conflicts with the fact that I want to be indifferent to the number of people (i.e. 
I shouldn't have to assume that 7 people are more likely than 5 people to get what I want).</p> <p>Help please?</p> <hr> <p>It may not be possible to answer this question at all. From a maximum entropy perspective, all I know is that $N \geq m$. The maximum entropy distribution which describes this is just a uniform distribution on $[m, \infty)$. So perhaps I am hoping there is some trick here that will give me info that doesn't exist...</p>
<p>The model $p(m|N) \propto 1/{N \choose m}$ does not make sense. Once the company has decided to show $m$ people, then there are indeed ${N \choose m}$ sets of people that they could show. But this doesn't tell you anything about why the number was $m$. </p>
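As a sanity check on the question's arithmetic (not an endorsement of the model), the normalisation over $N$ has the closed form $\sum_{N \ge m} 1/\binom{N}{m} = m/(m-1)$ for $m \ge 2$, and the quoted probabilities follow directly:

```python
from math import comb

m = 5
Z = m / (m - 1)  # closed form for the sum over N >= m of 1 / C(N, m), valid for m >= 2
p = {N: (1 / comb(N, m)) / Z for N in range(m, m + 4)}
print(p)  # roughly {5: 0.8, 6: 0.133, 7: 0.038, 8: 0.014}, matching the question
```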
273
bayesian inference
Inferring prior distribution
https://stats.stackexchange.com/questions/114139/inferring-prior-distribution
<p>Suppose that we take a sample ($X_1, X_2, ... X_n$) from a distribution where we assume that $X_i $~$ Bin(n_i, p_i)$ and $n_i$ is known for every $i$. We also assume that $p_i$'s are independent and identically distributed, $p_i$ ~ $D$, where $D$ is some unknown distribution. $n_i$ cannot be assumed to be large.</p> <p>My goal is to get a Bayesian estimate (or a probability distribution) for $p_i$. But this requires coming up with a distribution for $D$. </p> <p>One option is to make an empirical distribution that uses frequentist estimates for each $p_i$ (i.e. $p_i = X_i/n_i$). This is a rather intuitive and potentially reasonable idea. Unfortunately, the presence of small $n_i$'s would make the tails heavier than they should be (lots of extreme values close to 0 or 1). </p> <p><strong>I'm looking for another option that doesn't have the problems of the aforementioned solution.</strong></p> <p>One possibility I have in mind is to use the following algorithm:</p> <ol> <li>Generate prior distribution as explained earlier.</li> <li>Get MAP or EAP estimate for every $p_i$.</li> <li>Generate new empirical prior from the probabilities obtained in 2.</li> <li>Go back to 2 (continue for a set number of steps, or possibly until convergence?)</li> </ol> <p>Is this method similar to any method out there? Is it reasonable?</p>
<p>I hope you like Python! I'll restate my comment here:</p> <p>This sounds like a hierarchical model. If I wanted to recreate the dataset, here's what I'd do: Let $D$ be a $Beta(\alpha, \beta)$ distribution (reasonable since we are dealing with probabilities). We don't know $\alpha, \beta$, so we assign priors to them, say exponential for both with some $\lambda$ hyperparameter. Then we draw the $p_i$ for each $i$, and sample $X_i$ from the binomials. </p> <p>That's how I would recreate the dataset. To make inference, we go backwards. Here's the model in PyMC: </p> <pre><code>import numpy as np
import pymc as pm

# fake data
X = np.array([3, 2, 2, 5, 7, 10, 11])
n = np.array([5, 4, 4, 6, 10, 19, 12])

# here I make sure I fulfill the fake-data constraints
assert X.shape == n.shape
assert (X &lt;= n).all()

alpha = pm.Exponential("alpha", 1)
beta = pm.Exponential("beta", 1)

p = pm.Beta("p", alpha, beta, size=X.shape[0])

obs = pm.Binomial("obs", n, p, value=X, observed=True)

mcmc = pm.MCMC([obs, p, beta, alpha])
mcmc.sample(10000, 5000)
</code></pre> <p>And some output: </p> <p><img src="https://i.sstatic.net/zBH1T.png" alt="enter image description here"></p> <p>With samples from the posteriors of $\alpha$ and $\beta$, we can reconstruct possible distributions of $D$, the unknown distribution: </p> <p><strong>Edit</strong>: Apologies, the x-axis should be between 0-1, not 0-500; it's a Python thing I forgot to change. </p> <p><img src="https://i.sstatic.net/F6Yxu.png" alt="enter image description here"></p>
274
bayesian inference
Is the posterior distribution on means in a Bayesian Gaussian mixture model with symmetric priors Gaussian?
https://stats.stackexchange.com/questions/179882/is-the-posterior-distribution-on-means-in-a-bayesian-gaussian-mixture-model-with
<p>I am reading through a document on <a href="http://research.microsoft.com/en-us/um/cambridge/projects/infernet/docs/Mixture%20of%20Gaussians%20tutorial.aspx" rel="nofollow">learning Gaussian mixture models in Infer.NET</a>. They assume the data is generated from 2 Gaussians where the prior distribution on means is Gaussian and the prior distribution on precisions is a Wishart distribution. The prior distribution on the mixture is a Dirichlet distribution. All of these priors are symmetric in the two Gaussians.</p> <p>They do some inference on some data, and they get back that the posterior distribution on each of the two means is the same Gaussian. They then go on to talk about how to break the symmetry in the model so that the means can converge to different Gaussians.</p> <p>How can it possibly be that the posteriors on the means are Gaussian? If I observe a million samples from a Gaussian Mixture Model (say, unbeknownst to me, the data is created by choosing with equal probability a normal distribution of mean 0 and variance 1 or a normal distribution with mean 100 and variance 1) it should be ABSOLUTELY CLEAR what the two means and standard deviations are. The symmetry of course means that the model doesn't know whether the first or the second Gaussian has mean 0 or mean 100, so shouldn't the posterior have two peaks, one near 0 and one near 100? If so, it's obviously not Gaussian.</p> <p>I would appreciate any help in this matter.</p>
<p>The paper <a href="http://link.springer.com/chapter/10.1007/978-3-662-01131-7_26#page-1" rel="nofollow">Bayesian Inference for Mixture: The Label Switching Problem</a> says</p> <blockquote> <p>A K-Component mixture distribution is invariant to permutations of the labels of the components. As a consequence, in a Bayesian framework, the posterior distribution of the mixture parameters has theoretically K! modes.</p> </blockquote> <p>To me, this answers the question: no. In general, the posterior distribution is not Gaussian.</p>
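The permutation invariance is easy to verify numerically: for an equal-weight two-component Gaussian mixture, the log-likelihood surface over $(\mu_1, \mu_2)$ is unchanged when the two means are swapped, so any symmetric prior gives a posterior with mirror-image modes. A small sketch with simulated data (the component means 0 and 5 are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# simulated data from two well-separated components
data = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])

grid = np.linspace(-2, 7, 200)
M1, M2 = np.meshgrid(grid, grid)

# log-likelihood of an equal-weight two-component mixture with unit variances
ll = np.zeros_like(M1)
for x in data:
    ll += np.log(0.5 * np.exp(-(x - M1) ** 2 / 2) + 0.5 * np.exp(-(x - M2) ** 2 / 2))

# swapping mu_1 and mu_2 corresponds to transposing the grid: ll equals ll.T
```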
275
bayesian inference
How is prior knowledge possible under a purely Bayesian framework?
https://stats.stackexchange.com/questions/201686/how-is-prior-knowledge-possible-under-a-purely-bayesian-framework
<p>This is more of a philosophical question, but from a purely Bayesian standpoint how does one actually form prior knowledge? If we need prior information to carry out valid inferences then there seems to be a problem if we have to appeal to past experience in justifying today's priors. We're apparently left with the same question regarding how yesterday's conclusions were valid, and a kind of infinite regress seems to follow where no knowledge is warranted. Does this mean that ultimately prior information must be assumed in an arbitrary way, or perhaps based on a more "frequentist" style of inference?</p>
<p>Speaking of <em>prior knowledge</em> can be misleading, which is why you often see people speaking rather about <em>prior beliefs</em>. You do not need to have any prior knowledge to set up a prior. If you needed one, how would Longley-Cook have managed with his problem?</p> <blockquote> <p>Here is an example from the 1950s when Longley-Cook, an actuary at an insurance company, was asked to price the risk for a mid-air collision of two planes, an event which as far as he knew hadn't happened before. The civilian airline industry was still very young, but rapidly growing and all Longley-Cook knew was that there were no collisions in the previous 5 years.</p> </blockquote> <p>Lack of data about mid-air collisions was no obstacle to assigning a prior that led to pretty accurate conclusions, as <a href="http://www.magesblog.com/2015/05/predicting-events-when-they-havent.html" rel="noreferrer">described by Markus Gesmann</a>. This is an extreme example of insufficient data and no prior knowledge, but in most real-life situations you would have some out-of-data beliefs about your problem that can be translated into priors.</p> <p>There is a common misconception about priors that they need to be somehow "correct", or "unique". In fact, you can purposefully use "incorrect" priors to validate different beliefs against your data. Such an approach is described by Spiegelhalter (2004), who shows how a "community" of priors (e.g. "skeptical", or "optimistic") can be used in a decision-making scenario. In this case it is not even prior beliefs that are used to form priors, but rather prior hypotheses.</p> <p>Since the Bayesian approach includes both the prior and the data in your model, information from both sources will be combined. 
The <a href="https://stats.stackexchange.com/questions/200982/do-bayesian-priors-become-irrelevant-with-large-sample-size/201059#201059">more informative your prior is compared to the data, the more influence it has; the more informative your data, the less influence your prior has</a>.</p> <p>Ultimately, <a href="https://stats.stackexchange.com/questions/57407/what-is-the-meaning-of-all-models-are-wrong-but-some-are-useful">"all models are wrong, but some are useful"</a>. Priors describe beliefs that you incorporate in your model; they do not have to be correct. It is enough if they are helpful for your problem, as we are dealing only with <em>approximations</em> of reality that are described by your models. Yes, they <em>are</em> subjective. As you already noticed, if we needed prior knowledge for them, we would end up in a vicious circle. Their beauty is that they can be formed even when confronted with a shortage of data, helping to overcome it.</p> <hr> <p>Spiegelhalter, D. J. (2004). <a href="https://projecteuclid.org/euclid.ss/1089808280" rel="noreferrer">Incorporating Bayesian ideas into health-care evaluation.</a> Statistical Science, 156-174. </p>
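The prior-versus-data tradeoff in the linked answer is easiest to see in the conjugate Beta-Binomial case, where the posterior mean is a weighted compromise between prior pseudo-counts and observed counts (the numbers below are made up):

```python
def posterior_mean(a, b, heads, n):
    # Beta(a, b) prior + k heads in n tosses -> Beta(a + k, b + n - k) posterior
    return (a + heads) / (a + b + n)

# 70 heads observed in 100 tosses
weak = posterior_mean(1, 1, 70, 100)      # vague prior: stays near the data, ~0.70
strong = posterior_mean(50, 50, 70, 100)  # informative prior at 0.5: pulled to 0.60
```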
276
bayesian inference
How do I perform Bayesian Updating for a function of multiple parameters, each with its own distribution?
https://stats.stackexchange.com/questions/216164/how-do-i-perform-bayesian-updating-for-a-function-of-multiple-parameters-each-w
<p>I have a variable that is a recursive function involving other variables with known distributions (see problem below). </p> <ul> <li>Let $b(t+1) = b(t) + C \sqrt{b(t)}$ where I know $C \sim N(1.82, .0298)$ and the initial value of $b$ [$b_{initial} \sim N(.02,0.0036)$].</li> </ul> <p>My observation for updating is a discrete value of $b$ for a certain '$t$' (say $b(1500) = 0.005$) but I need to find posterior distributions for $C$ and $b_{initial}$.</p> <p>Any thoughts or resources that can be directed my way would be extremely helpful.</p> <p>Thanks</p>
<p>Probabilistically, the right way to do this is a posterior <strong>joint</strong> distribution for $C$ and $b_{initial}$. If you know $b(t)$ exactly, then you'll get a line of sorts, where if $b_{initial}$ is .022 that implies that $C$ was 1.81, but if $b_{initial}$ was instead .023 then that implies that $C$ was 1.76. Each of those points has a probability associated with it from the prior, and the update just consists of renormalization from the entire 2D space to that line.</p> <p>If you know $b(t)$ inexactly, then you instead need to calculate the probability of that $b(t)$ measurement everywhere in the distribution, multiply those, and then renormalize. (That is, you're updating with a continuous likelihood, instead of the 0 or 1 likelihood implied by an exact measurement.)</p> <p>For your particular function, it doesn't look like it'll be computationally easy to work with, although it probably is smooth (in the sense that the derivatives with respect to $C$ and $b_{initial}$ are well-behaved). Seeing if you can come up with something where you can shift $C$ by $\Delta C$ and get the corresponding $\Delta b_{initial}$ shift (or the reverse) looks like it'll be very useful.</p>
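A brute-force version of this renormalization idea discretizes $(C, b_{initial})$ on a grid, weights each point by the prior, multiplies by a likelihood for the noisy observation of $b(t)$, and renormalizes. Everything below is an assumption for illustration: the grid ranges, the measurement noise $\sigma$, the observation time, and the reading of the question's stated numbers as variances:

```python
import numpy as np
from scipy.stats import norm

def propagate(b0, C, steps):
    # iterate b(t+1) = b(t) + C*sqrt(b(t)); works elementwise on arrays
    b = np.asarray(b0, dtype=float)
    for _ in range(steps):
        b = b + C * np.sqrt(b)
    return b

# hypothetical grid over (C, b_initial)
Cs = np.linspace(1.6, 2.0, 60)
b0s = np.linspace(0.001, 0.05, 60)
CC, BB = np.meshgrid(Cs, b0s)

# prior weight at each grid point (stated numbers treated as variances)
prior = norm(1.82, np.sqrt(0.0298)).pdf(CC) * norm(0.02, np.sqrt(0.0036)).pdf(BB)

# noisy observation of b after a few steps, with an assumed measurement sd
t_obs, sigma = 5, 1.0
b_obs = propagate(0.02, 1.82, t_obs)
likelihood = norm(b_obs, sigma).pdf(propagate(BB, CC, t_obs))

posterior = prior * likelihood
posterior /= posterior.sum()  # renormalize over the grid
```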
277
bayesian inference
If I do the same experiment many times then does a 95% credible interval mean 95% of the time the true value lies within that range?
https://stats.stackexchange.com/questions/286761/if-i-do-the-same-experiment-many-times-then-does-a-95-credible-interval-mean-95
<p>It's a common misinterpretation of a 95% confidence interval to say that 95% of the time the true value lies within that interval.</p> <p>However, in Bayesian statistics, the 95% credible interval contains 95% of the probability from the probability density function. And if I repeat the experiment many times, I'm wondering if I can learn something about my prior?</p> <p>So say I do an experiment to measure a parameter 100 times and I know the true value for each experiment. Then I calculate the credible interval for each experiment. Should I expect that 95% of the time the credible interval contains the true value? And if it's lower, say the credible interval contains the true parameter only 85% of the time, then perhaps the prior is strongly influencing the results and should be changed?</p>
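The check described above can be simulated directly. When the true parameter really is drawn from the prior, 95% credible intervals cover the truth 95% of the time on average over repeated experiments, and coverage drifts away from 95% when the prior is misspecified. A sketch with a Beta-Binomial model and made-up settings:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)
a0, b0, n, reps = 2, 2, 50, 2000  # made-up prior and experiment size

covered = 0
for _ in range(reps):
    p = rng.beta(a0, b0)    # true value drawn from the (correct) prior
    k = rng.binomial(n, p)  # one simulated experiment
    # equal-tailed 95% credible interval from the conjugate posterior
    lo, hi = beta(a0 + k, b0 + n - k).ppf([0.025, 0.975])
    covered += (lo <= p <= hi)

coverage = covered / reps  # should be close to 0.95
```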
278
bayesian inference
Intractable posterior - why not use kernel density for the data distribution?
https://stats.stackexchange.com/questions/300296/intractable-posterior-why-not-use-kernel-density-for-the-data-distribution
<p>In Bayes' rule, the posterior $$ P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)} $$ is said to be <em>intractable</em>, because $$ P(D) = \int P(D,\theta) d\theta $$ and the latter is often a high-dimensional integral.</p> <p>See <a href="https://stats.stackexchange.com/questions/208176/why-is-the-posterior-distribution-in-bayesian-inference-often-intractable">Why is the posterior distribution in Bayesian Inference often intractable?</a></p> <p>But this is just one way of computing $P(D)$. Are there others? What about instead estimating $P(D)$ using a kernel density (place a Gaussian or some other lobe at each datapoint, and normalize so it all sums to one)? Or simply using delta functions: $$ P(x) = \frac{1}{n} \sum_i \delta_{x_i}(x) $$</p> <p>This requires touching each bit of data, but that is not intractable.</p>
<p>There are certainly lots of ways to try to numerically estimate high-dimensional definite integrals. The entire field of <a href="https://www.mathematik.hu-berlin.de/~romisch/papers/Rutg13.pdf" rel="nofollow noreferrer">high-dimensional numerical integration</a> is devoted to this problem, and it suffers from the dreaded <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality" rel="nofollow noreferrer">curse of dimensionality</a>. There are a lot of research papers in this field with a lot of different methods used. Kernel methods are one method that can be used to obtain approximate integrals (using the delta function would give a terrible approximation for continuous distributions), but I think it is fair to say that the most favoured methods presently used in this field are Monte-Carlo methods (e.g., importance sampling), Markov-Chain Monte-Carlo methods (e.g., Gibbs, Metropolis-Hastings, Hamiltonian MC), and sparse-grid methods.</p> <p>Most Bayesians make extensive use of Markov-Chain Monte-Carlo (MCMC) methods, and many general pieces of Bayesian software are built on these algorithms. The <code>Stan</code> package for Bayesian statistics is built on using Hamiltonian Monte-Carlo methods to estimate these integrals. This is a powerful method that has led to recent improvements in computational power in Bayesian analysis. I'm not an expert on this stuff myself, but I know it is a very large and complicated field, with lots of methods and lots of literature.</p>
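The simplest Monte-Carlo estimator of the marginal likelihood averages the likelihood over draws from the prior, $P(D) \approx \frac{1}{S}\sum_s P(D \mid \theta_s)$ with $\theta_s \sim P(\theta)$. It scales badly with dimension, which is why the methods above exist, but in one dimension it can be checked against the exact Beta-Binomial answer $1/(n+1)$ (the data below are made up):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n, k, S = 20, 7, 100_000  # made-up data and Monte-Carlo sample size

thetas = rng.uniform(0, 1, S)         # draws from the Uniform(0,1) prior
p_D = binom.pmf(k, n, thetas).mean()  # Monte-Carlo estimate of P(D)

exact = 1 / (n + 1)  # marginal likelihood of k heads in n tosses under a flat prior
```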
279
bayesian inference
What Bayesian test to conduct with one independent variable and two dependent variables?
https://stats.stackexchange.com/questions/323062/what-bayesian-test-to-conduct-with-one-independent-variable-and-two-dependent-va
<p>In my current study I am looking at the effects of creatine monohydrate ingestion on ground reaction force and repeated sprint times. With CM as the independent variable, and ground reaction force and repeated sprint times as the dependent variables, what Bayesian inferential test would be the best to conduct?</p>
<p>Bayesian inference would, in theory, involve determining the joint probability distribution (JPD) $P(R,S,C)$ that expresses the probability of a certain combination of reaction force ($R$), sprint time ($S$) and creatine monohydrate ingestion ($C$). The JPD over all variables can later be used to infer all other possible conditional probabilities using Bayes's rule. $P(R,S,C)$ can, using this same rule, be factorised as $P(R|S,C)\times P(S|C)\times P(C)$. With an eye on what you probably want to infer, all you need to do is estimate the probability $P(R|S,C)$ of different reaction forces ($R$) given a certain value for repeated sprint time ($S$) and CM ingestion ($C$). The same can be done for $P(S|C)$.</p> <p><strong>Estimating</strong> these probability <strong>distributions</strong> can either be done with continuous variables, where you make certain assumptions about the functional form of the distribution and fit its parameters, or you can bin your data into discrete intervals and <strong>count occurrences</strong> per interval.</p> <p>Once you have estimated $P(R|S,C)$ and $P(S|C)$, the distribution of reaction force ($R$) given a certain CM ingestion ($C=c$) (which can be used to <strong>infer</strong> its most probable value) is expressed as:</p> <p>$$ P(R|C=c) = \frac{P(R,C=c)}{P(C=c)}=\frac{\sum_S P(R,S,C=c)}{P(C=c)}=\sum_S P(R|S,C=c)\times P(S|C=c) $$</p> <p>Or infer the complete distribution over both $R$ and $S$ given a certain CM ingestion ($C=c$)</p> <p>$$ P(R,S|C=c) = \frac{P(R,S,C=c)}{P(C=c)}= P(R|S,C=c)\times P(S|C=c) $$ Using these distributions, you can also determine the standard deviation, confidence intervals, etc.</p> <p><strong>Side note:</strong> You could also choose a different factorisation that might better suit your data; in the end the probabilities you infer will stay the same, as these "factors" just serve as a way to make estimating your JPD practical. 
I did not assume any independence between $R$ and $S$, as independence seems very unlikely here. Assuming it would, however, ease the computation of $P(R|S,C)$, which would then equal $P(R|C)$.</p>
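The bin-and-count route described above can be sketched as follows; the data are simulated stand-ins for real measurements, and the effect sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
# simulated stand-ins: C = CM ingestion (0/1), S = sprint time, R = reaction force
C = rng.integers(0, 2, N)
S = rng.normal(5.0 - 0.2 * C, 0.3, N)
R = rng.normal(1000 + 50 * C, 40, N)

# discretize S into 4 bins and count occurrences to estimate P(S_bin | C)
edges = np.quantile(S, [0.25, 0.5, 0.75])
s_bin = np.digitize(S, edges)  # bin index 0..3 for each observation

counts = np.zeros((2, 4))
np.add.at(counts, (C, s_bin), 1)
P_S_given_C = counts / counts.sum(axis=1, keepdims=True)  # each row sums to 1
```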
280
bayesian inference
Bayesian inference of a coin&#39;s bias when we don&#39;t directly observe the flips
https://stats.stackexchange.com/questions/355646/bayesian-inference-of-a-coins-bias-when-we-dont-directly-observe-the-flips
<p>Consider a coin with bias $p$. We generate a random sample $x_1, \dots, x_n \sim \text{Bernoulli}(p)$, but <strong>we do not observe results of these coin tosses</strong>. Instead, for each $x_i$, we observe a set of features $y_{i1}, \dots, y_{im}$ about the flip, e.g. the height of the toss, the coin's rotational velocity, etc. </p> <p>Then we feed the observed feature vector $\mathbf{y}_i$ into a black box predictor (perhaps a logistic regression or tree-based model trained on past observations). The model yields $\hat{p}(\mathbf{y}_i) \in [0,1]$, which we interpret as the probability that $x_i = 1$.</p> <p>I am trying to figure out how to use $\hat{p}(\mathbf{y}_1), \dots, \hat{p}(\mathbf{y}_n)$ to obtain a posterior distribution on $p$. My initial idea is to run the model $\hat{p}$ on a test dataset, and obtain the empirical distributions $P(\hat{p} \mid x = 0)$ and $P(\hat{p} \mid x = 1)$. We then have all the likelihood functions for the hierarchical model (ignoring the $\mathbf{y}_i$'s) and can generate MCMC samples of $p$.</p> <p>Is there anything wrong with this approach? Is there a better way? Can we gain anything by making assumptions about the model $\hat{p}$?</p>
281
bayesian inference
Bayesian Hypothesis Tests with continuous priors
https://stats.stackexchange.com/questions/421269/bayesian-hypothesis-tests-with-continuous-priors
<p>I am new to the Bayesian world, and I'm trying to understand how hypothesis tests are performed here (as opposed to the frequentist framework).</p> <p>I am aware that likelihoods, priors and posteriors can be discrete or continuous. And once we have calculated posteriors, we can build a lot of things like credible intervals and so on.</p> <p>Now, the problem arises when I'm applying this to hypothesis tests. So far I have encountered situations where there were a finite number of hypotheses to compare (<span class="math-container">$H_0$</span>, <span class="math-container">$H_1$</span>, <span class="math-container">$H_2$</span>..), as well as a discrete number of parameters associated with them.</p> <p><i>For example</i>, I'm tossing a coin n times with associated observations <span class="math-container">$X_1, ..., X_n$</span> where: <span class="math-container">$$X_i \sim Bernoulli(\theta)$$</span> where <span class="math-container">$\theta$</span> is the probability of getting heads.</p> <p>I want to test whether this coin is loaded or not and I may have prior beliefs such that: <span class="math-container">\begin{cases} p(\theta=0.5) = 0.5\\ p(\theta=0.7) = 0.5\end{cases}</span></p> <p>I would then have my hypotheses <span class="math-container">$H_0$</span> (coin is fair) and <span class="math-container">$H_1$</span> (coin is loaded). <br>I would then calculate a posterior for each hypothesis and conclude.</p> <p>But what if <span class="math-container">$\theta$</span> was continuous (<i>e.g.</i> follows a Uniform distribution)? What would my hypotheses be? And how would I calculate them?</p>
<p>The case is well-covered in <a href="https://amzn.to/31vg5C7" rel="nofollow noreferrer">Bayesian textbooks</a>, including <a href="http://amzn.to/2kxykkw" rel="nofollow noreferrer">ours</a>!, and can be summarised by the constraint that one can only test hypotheses for which the prior has a positive probability mass. When the prior is given as a Uniform(0,1), it is impossible to test whether or not <span class="math-container">$\theta=1/2$</span>. A contrario, if one comes up with the question as to whether or not <span class="math-container">$\theta=1/2$</span>, a prior must be constructed with this possibility in mind, hence must include a point mass at <span class="math-container">$1/2$</span>. For instance, a mixture of a Uniform(0,1) and of a point mass at <span class="math-container">$1/2$</span>.</p>
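For the coin example, the mixture prior described above, with weight $w$ at $\theta = 1/2$ and weight $1-w$ on Uniform(0,1), gives a closed-form posterior probability for the point null, since the marginal likelihood of $k$ heads in $n$ tosses under the Uniform component is $1/(n+1)$:

```python
from math import comb

def post_prob_fair(k, n, w=0.5):
    # marginal likelihood of the data under the point mass at theta = 1/2
    m0 = comb(n, k) * 0.5 ** n
    # marginal likelihood under the Uniform(0,1) component
    m1 = 1 / (n + 1)
    # posterior probability that the coin is fair
    return w * m0 / (w * m0 + (1 - w) * m1)

# 5 heads in 10 tosses favours fairness; 10 in 10 strongly disfavours it
```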
282
bayesian inference
How to start coding for posterior inference
https://stats.stackexchange.com/questions/428676/how-to-start-coding-for-posterior-inference
<p>I am trying to implement the model given in <a href="http://proceedings.mlr.press/v84/andersen18a/andersen18a.pdf" rel="nofollow noreferrer">http://proceedings.mlr.press/v84/andersen18a/andersen18a.pdf</a>, where the authors used mean-field variational inference for posterior inference, but I want to use MCMC instead. I am new to Bayesian statistics and am currently facing challenges in the implementation. I am free to use any language. Is there a good source I can use as a reference to get started? I would be really grateful if anyone could provide pseudocode or something I can use as a starting point. How does one infer <span class="math-container">$\beta$</span> and <span class="math-container">$K$</span>? Below is their model: <span class="math-container">\begin{align} \Lambda_t^{n} = \frac{1}{\beta} \mathbf{I} + \Sigma_{k=1}^K \alpha_{k,t}^nv_kv_k^T \end{align}</span> where <span class="math-container">$\mathbf{\alpha}_k^n \sim GP(m_k^n,C_k^n)$</span>, with GP denoting a Gaussian process, <span class="math-container">$v_k$</span> has a horseshoe prior, and <span class="math-container">$\Lambda_t^n$</span> is a covariance matrix; we have p-dimensional time series data of length T, so we are trying to estimate dynamic covariance matrices.</p>
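As a generic starting point, here is a minimal random-walk Metropolis sketch. It is not the paper's model; the standard-normal target below is a placeholder log-posterior that would be replaced with the (unnormalised) log-posterior of the quantities to be inferred:

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=0.5, seed=0):
    """Minimal random-walk Metropolis: returns an (n_iter, dim) array of samples."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=theta.shape)  # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:            # accept/reject
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

# placeholder target: standard normal log-density (up to a constant)
draws = metropolis(lambda t: -0.5 * np.sum(t**2), np.zeros(1))
```

The sampler only needs a function returning the log-posterior up to a constant, so the intractable normalising constant never has to be computed.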
283
bayesian inference
Real-time Bayesian updating. How to link posteriors?
https://stats.stackexchange.com/questions/461473/real-time-bayesian-updating-how-to-link-posteriors
<p>I have a general question about Bayesian inference which may help me solve a problem I have. It is best to illustrate this with an example. Inspired from this great post by AllenDowney:</p> <p><a href="https://github.com/AllenDowney/BiteSizeBayes/blob/master/08_soccer_soln.ipynb" rel="nofollow noreferrer">https://github.com/AllenDowney/BiteSizeBayes/blob/master/08_soccer_soln.ipynb</a></p> <p>Let's say I want to get an estimate for the number of goals scored by a team in a football match. I model this as a Poisson process with parameter <span class="math-container">$\lambda$</span>. On <span class="math-container">$\lambda$</span> I put a Gamma distribution as the prior, with parameter <span class="math-container">$\alpha$</span>. Let's say I use a prior value of 1.4 for <span class="math-container">$\alpha$</span> (avg number of goals scored by a team).</p> <p>In the blog post, we are computing the posterior after a match has been played. So, given a game where 4 goals were scored by a team, we compute the posterior for that team which is now shifted to the right.</p> <p>What would we do if we wanted to update the estimate in real-time? So instead of computing the posterior after the game has been played, we do this every 5 minutes until we hit 90 minutes. So I am interested in getting a posterior for 90 minutes given the data after 5 minutes, 10 minutes, 15 minutes, etc.. I can think of two ways of doing this. Please help me understand what method, if any, makes sense:</p> <ol> <li><p>We propagate the data we have to the expected number of goals in 90 minutes after <span class="math-container">$x$</span> minutes have passed. So, if after 30 minutes a team scored 1 time, we propagate that to (90/30)*1=3 goals and use this as input to compute the posterior using the same process as in the blog post. Issue: I feel like this is not correct, because what if a team scores in the first 5 minutes? Doing this would mean we expect the team to score 18 (!) 
times in a match. The first posterior calculation after 5 minutes would be complete trash. Although this should eventually converge to a realistic value as we do more updates?</p></li> <li><p>We don't use a value of 1.4 for <span class="math-container">$\alpha$</span> because that's based on a 90 minute match. Instead, we use alpha = 1.4 / (90/5) = 0.08 to scale it to a 5-minute prior. So, on average, a team scores 0.08 times every 5 minutes. We now do the posterior calculation every 5 minutes instead of every 90 minutes. Issue: I don't understand how we now get a prediction for the 90 minute posterior because the posterior we calculate every time will be based on 5 minutes. Also, how do we link the second posterior (for minutes 5-10) to the first (minutes 0-5)?</p></li> </ol> <p>Perhaps I am missing something basic here. I really want to understand Bayesian inference better but feel like I am not completely getting it. Thanks!</p>
<p>Ok, so from what I understand from the blog post...</p> <ul> <li><p>The likelihood for the number of goals scored is Poisson. Each team has a goal scoring rate, <span class="math-container">$\lambda$</span> measured in units per game. We can divide this by 18 to yield the goal scoring rate per 5 minute increments.</p></li> <li><p>A gamma prior is put on <span class="math-container">$\lambda$</span>. This makes things particularly nice because the gamma prior is conjugate for the Poisson likelihood, so our posterior will be gamma as well. For some strange reason, <span class="math-container">$\beta=1$</span> in scipy's parameterization by default (see the documentation for the Gamma density <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html" rel="nofollow noreferrer">here</a> and contrast it with the Gamma density <a href="https://en.wikipedia.org/wiki/Gamma_distribution" rel="nofollow noreferrer">here</a>. For consistency with the blog post, I will adopt the assumption that <span class="math-container">$\beta=1$</span>). </p></li> </ul> <p>I think the problem you are experiencing is that your likelihood is on the scale of 5 minutes, but the prior is on the scale of games. Not to worry, this should be an easy fix. Thanks to conjugacy, the posterior is gamma distributed as well. In particular, on the scale of games</p> <p><span class="math-container">$$ \lambda | y \sim \operatorname{Gamma}(1.4+ y_i, 1+1) $$</span></p> <p>Here, <span class="math-container">$y_i$</span> is the number of goals scored in a single game. To change this to the scale of 5 minutes, we need to do some arithmetic.</p> <p>Let <span class="math-container">$\tilde{n}$</span> be the number of 5 minute increments which have been observed. Then, a single game has 18 5 minute increments. 
So, our posterior should then look like</p> <p><span class="math-container">$$ \lambda | y \sim \operatorname{Gamma}(1.4+ \sum_i \tilde{y}_i, 1+ \dfrac{\tilde{n}}{18}) $$</span></p> <p>Now, <span class="math-container">$\tilde{y}_i$</span> is the number of goals scored in the <span class="math-container">$i^{th}$</span> 5-minute increment. Let's make sure this is indeed our posterior. Here is some Python code to double check. I use pymc3 to sample from the posterior and then compare against my analytical result.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.stats import gamma, poisson
import matplotlib.pyplot as plt
import pymc3 as pm

x = np.linspace(0, 10)
alpha = 1.4

# draw a lambda from the prior
lams = gamma(a=alpha).rvs(1)

# use this to draw 18 observations. Each element is the number of goals scored
# by this team in the ith 5 minute increment
goals = poisson(mu=lams / 18).rvs(size=18)

with pm.Model() as model:
    # This model is on the scale of games.
    lam = pm.Gamma('lam', alpha=1.4, beta=1)
    y = pm.Poisson('y', mu=lam, observed=goals.sum())
    trace = pm.sample()

# Histogram of the posterior
plt.hist(trace['lam'], density=True)

# Plot the true parameter value in red
plt.axvline(lams, color='red')

# Now, plot the posterior using the analytical result on the scale of games.
# At the end of the game, this posterior should look like the one from pymc3
y = pm.Gamma.dist(alpha=1.4 + sum(goals), beta=1 + 1).logp(x).eval()
plt.plot(x, np.exp(y))
</code></pre> <p><a href="https://i.sstatic.net/5qvZh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5qvZh.png" alt="enter image description here"></a></p> <p>The posterior I computed analytically on the scale of 5 mins (orange) is very similar to the posterior I obtained from pymc3 (histogram in blue) on the scale of games, which leads me to believe I am correct.</p> <p>Here is a more concrete example. 
Suppose we observe the following over an entire game.</p> <blockquote> <p>0, 0, 0, 0, 2, 0, 0, 0, 1, 0, 0, 1, 2, 0, 0, 0, 0, 0</p> </blockquote> <p>In the first 5 minutes, there are no goals. In the second 5 minutes, there are no goals, etc. How would our posterior look after each 5 minutes? Like so</p> <p><a href="https://i.sstatic.net/3JSHD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3JSHD.png" alt="enter image description here"></a></p> <p>Here is the code to perform the updating after each 5 minutes</p> <pre class="lang-py prettyprint-override"><code>goals = [0, 0, 0, 0, 2, 0, 0, 0, 1, 0, 0, 1, 2, 0, 0, 0, 0, 0]

fig, ax = plt.subplots(dpi=120, ncols=6, nrows=3, figsize=(20, 10),
                       sharex=True, sharey=True)
ax = ax.ravel()

alpha = 1.4
beta = 1
num_goals = 0
num_5_mins = 0

for i, g in enumerate(goals):
    num_goals += g
    num_5_mins += 1
    # posterior density after this increment (already a pdf, so plot it directly)
    y = gamma(a=1.4 + num_goals, scale=1 / (1 + num_5_mins / 18)).pdf(x)
    ax[i].plot(x, y)
    ax[i].set_title(f'Goals Scored So Far: {num_goals}')
    ax[i].set_ylabel('Density')
    ax[i].set_xlabel(r'$\lambda$')
</code></pre>
284
bayesian inference
(Bayesian) estimation of the underlying population size knowing its upper bound after $x$ draws
https://stats.stackexchange.com/questions/472669/bayesian-estimation-of-the-underlying-population-size-knowing-its-upper-bound
<p>Suppose you have a bag of unique, identifiable items <span class="math-container">$(1.. K)$</span>. From this bag, someone used an arbitrary criterion to tag <span class="math-container">$N$</span> items. You don't know the chosen criterion (which can be anything, from odd numbers to just the item 65), but you know <span class="math-container">$K$</span>. Your job is to estimate how many items were tagged (i.e. the cardinality of the tagged set, which is <span class="math-container">$N$</span>). For that, you can sample (with and/or without replacement<sup>[1]</sup>) any arbitrary number of items from the bag and verify the criterion at will.</p> <p>I know how to estimate <span class="math-container">$N$</span> using a Monte Carlo method (basically I keep drawing items and use the ratio of tagged/non-tagged to approximate the real cardinality). But I would like to provide an estimate as soon as one item is drawn, along with a confidence value (i.e. the probability of <span class="math-container">$N=n$</span>). You can also assume that I can make an informed guess as a prior PDF of <span class="math-container">$N=n$</span> (e.g. uniform, or Gaussian).</p> <hr /> <ol> <li>Each method has a different computational cost, so I would love to get an answer for both methods, to allow deciding the tradeoff.</li> </ol>
<p>Let's say that you take a sample of <span class="math-container">$s$</span> elements, with replacement, out of the <span class="math-container">$K$</span> items. Then the number of tagged items, <span class="math-container">$t$</span>, that you get follows a binomial distribution <span class="math-container">$\mathcal{B}(\frac{N}{K}, s)$</span>. You easily get that the posterior distribution of <span class="math-container">$N$</span> given <span class="math-container">$t$</span> is: <span class="math-container">$$\pi_s(N \mid t)\propto \pi(N) \binom{s}{t}{\left(\frac{N}{K}\right)} ^ t {\left(1 -\frac{N}{K}\right)}^{s - t}$$</span></p> <p>where <span class="math-container">$\pi$</span> denotes the prior distribution on <span class="math-container">$N$</span> that you chose, and <span class="math-container">$\pi_s(.\mid t)$</span> denotes the posterior distribution obtained from <span class="math-container">$s$</span> draws given that <span class="math-container">$t$</span> of them were tagged. This formula works from the first draw that you make (i.e. <span class="math-container">$s = 1$</span>), and you can apply it at each draw, i.e. for <span class="math-container">$s = 1, 2,...$</span> .</p> <p>In general, to get an estimate (such as maximum a posteriori or expectation a posteriori), you need to use numerical methods (typically a sampler or an approximation of the posterior), which is a bit computationally expensive.</p> <p>If you want to avoid numerical methods for finding estimates and confidence intervals, you can use as a prior the conjugate prior of the binomial model, which is a Beta distribution. So if you assume that a priori <span class="math-container">$\frac{N}{K} \sim Beta(\alpha, \beta)$</span>, then you know that the posterior distribution of <span class="math-container">$\frac{N}{K}$</span> is <span class="math-container">$Beta(\alpha + t, \beta + s - t)$</span>. 
This leads to the following iterative procedure to get estimates and confidence intervals at each draw:</p> <ul> <li>Select prior parameters <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span> of a Beta distribution.</li> <li>At each draw that you make: <ul> <li>update <span class="math-container">$\alpha \leftarrow \alpha + 1$</span> and <span class="math-container">$\beta \leftarrow \beta$</span> if the item is tagged,</li> <li>update <span class="math-container">$\alpha \leftarrow \alpha$</span> and <span class="math-container">$\beta \leftarrow \beta + 1$</span> if the item is not tagged,</li> <li>compute an estimate: expectation a posteriori is <span class="math-container">$\frac{\alpha}{\alpha + \beta}$</span>, or maximum a posteriori is <span class="math-container">$\frac{\alpha - 1}{\alpha + \beta - 2}$</span>,</li> <li>compute a confidence interval (e.g. using the <code>qbeta()</code> function in R).</li> </ul> </li> </ul> <p>I guess the same could be done with better efficiency by using draws without replacement. In this case the binomial distribution would be replaced by a hypergeometric distribution and the adequate conjugate prior would then be a beta-binomial distribution instead of a Beta. I cowardly refer you to <a href="https://stats.stackexchange.com/questions/311088/beta-binomial-as-conjugate-to-hypergeometric">this discussion</a> to get details on how to make the update then.</p>
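For what it's worth, the with-replacement procedure above is only a few lines of Python. The numbers below (K = 1000, a true tagged count of N = 300, 2000 draws, the uniform Beta(1, 1) prior) are purely illustrative:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
K, N = 1000, 300        # illustrative truth: 300 of the 1000 items are tagged
a, b = 1.0, 1.0         # uniform Beta(1, 1) prior on N / K

for _ in range(2000):   # draws with replacement
    if rng.random() < N / K:      # the drawn item turns out to be tagged
        a += 1
    else:
        b += 1

post_mean = a / (a + b)                   # expectation a posteriori of N / K
lo, hi = beta(a, b).ppf([0.025, 0.975])   # 95% credible interval for N / K
print(round(K * post_mean), (K * lo, K * hi))
```

The same update can be applied after every single draw, so an estimate and an interval are available from the first item on; here they are only printed at the end.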
285
bayesian inference
How to perform a sensitivity analysis in Bayesian statistics?
https://stats.stackexchange.com/questions/178502/how-to-perform-a-sensitivity-analysis-in-bayesian-statistics
<p>Bayesian inference is drawn from the posterior distribution or - in case we are interested in forecasting - from the predictive posterior distribution. However, these values are heavily affected by the choice of the prior, even if you have decided to go for an uninformative one (which can be implemented in many different ways). Is there a standard way to convince the audience that your choice of prior does not steer your results a priori in a way that would diminish their value? (Of course your results are affected by informative priors, but slight changes in your prior should - at least in my intuition - not lead to extreme changes in the results.) Could this problem be relaxed by the choice of hierarchical models, because they also imply 'switching' priors? I am happy for each and every comment or reference on how this problem can be tackled, thank you!</p>
<p>A fairly standard approach to showing that your results were not heavily influenced by your choice of prior is simply to show that your results hold when choosing a different prior. For example, if you have an informed prior that suggests a certain result is more likely, you might want to also show that your results hold when a uniform prior is specified. </p> <p>A fairly new piece of software for checking such things is called <a href="https://jasp-stats.org/" rel="nofollow">JASP</a>, which is like a free, modern SPSS that handles Bayesian versions of many frequentist statistical tests. What is nice about this is that when you run a Bayesian test, it outputs a graph showing what your test result would have been had a range of other priors been specified. I don't know if this output is something you would want to include in a report, but it is useful for getting an idea of how sensitive your results were to your specific prior. </p>
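The same robustness check is easy to script outside JASP. Here is a sketch for a conjugate beta-binomial setting; the data and the three candidate priors are made up for illustration and are not from the answer above:

```python
from scipy.stats import beta

# hypothetical binomial data and three candidate priors
successes, failures = 27, 13
priors = {"uniform": (1, 1), "Jeffreys": (0.5, 0.5), "sceptical": (5, 5)}

means = {}
for name, (a, b) in priors.items():
    post = beta(a + successes, b + failures)   # conjugate Beta posterior
    means[name] = post.mean()
    lo, hi = post.interval(0.95)
    print(f"{name:10s} mean={post.mean():.3f}  95% CI=({lo:.3f}, {hi:.3f})")

# small spread across priors suggests the data dominate the prior
print(max(means.values()) - min(means.values()))
```

If the posterior summaries barely move across priors, the data dominate; if they move a lot, that sensitivity is itself worth reporting.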
286
bayesian inference
Bayesian Inference in the presence of multiple hypotheses
https://stats.stackexchange.com/questions/437456/bayesian-inference-in-the-presence-of-multiple-hypotheses
<blockquote> <p>"Because [Bayesian Inference] respects the forward flow of time or information, there's no need for nor availability of methods for correcting for multiplicity ... The evidence of one question is not tilted by whether other questions are being asked."</p> </blockquote> <p><a href="https://www.youtube.com/watch?v=8B-IEMJCtEw" rel="noreferrer">https://www.youtube.com/watch?v=8B-IEMJCtEw</a></p> <p>From Frank Harrell's course.</p> <p>I understand that with frequentist inference, the objective is to conserve the Type 1 error rate. When we control the family-wise error rate, we consider the family of "questions" being asked and consider a type 1 error for at least one question to be a type 1 error for the family of tests. It also makes sense, therefore, that if we present the 95% CIs for each of the estimands for which the original tests were constructed, we can communicate with a readership about the range of plausible effects that an experiment or data collection process has generated <em>without</em> correction.</p> <p>For a Bayesian, it's easy to see the analogue with estimation and the CI. And we can exonerate ourselves from multiple testing corrections. However, from an inferential standpoint, I don't believe any such claim can be made. But to begin: <strong>What is the Bayesian analogue of a Type 1 error?</strong> The Bayesian, upon collecting compelling evidence, might be said to "adopt" an updated/alternative probability model for the parameter, if we are to coerce an inferential/decision rule. </p> <p>The Bayesian might like to control the number of models that are spuriously adopted. In this case, the Bayesian would need to attenuate the posterior by using an increasingly stringent set of priors depending on the number of tests being used. <strong>Is this the basic approach used and/or is there literature developing these ideas more fully</strong>?</p>
<p>Arguments that Bayesians do not need to worry about type I errors start from the premise that the type I error rate does not matter/is not a relevant concept* and simply adhere to the likelihood principle**. </p> <p>I don't think this kind of Bayesian viewpoint is compatible with coercing an inferential threshold, but for taking an action it can work well with decision theory; then you really need utility functions for how bad it is to be wrong in what way. </p> <p>* Some Bayesian methods happen to perform well in a frequentist sense, but usually mostly because shrinkage towards plausible parameter values is usually a good thing.</p> <p>** If you take your data generating method to be "my main claim is the one that has the highest posterior probability", you can of course argue whether the likelihood principle could be seen to tell you to take that selection into account.</p>
287
bayesian inference
Bayesian inference - a use case
https://stats.stackexchange.com/questions/330554/bayesian-inference-a-use-case
<p>I've been recently studying Bayesian inference with PyMC3. I understand the flexibility that comes with multiple possible options for initial distribution choices, yet I can't seem to understand why one would need the sampling part. I realize this is a very naive question, yet I can't seem to understand why one does not stop at the MAP part -- here the model parameter values are found. </p> <p>Why the NUTS, Gibbs or any other sampling? Why is this useful? I see that whole distributions are obtained for individual parameters, and can be visualized. I assume this has to do with some sort of parameter validation, where one inspects the quality of the parameters obtained?</p> <p>My current understanding is that MAP estimates are used as starting points for the sampling part. Once the sampling is done, how does one obtain "more correct" parameter values based on the MCMC?</p> <p>I would really like to use this methodology for modeling tasks I am dealing with, yet I just don't see the upside of having to choose one of n possible priors, where each could potentially give different results (?)</p> <p>Thank you very much.</p>
<p>Since it seems that you lack some basic understanding of the process behind Bayesian modeling work, let me give you a short summary of the usual workflow: </p> <ol> <li><p>You define the <em>likelihood</em> function for your model, for example: you assume the <a href="https://en.wikipedia.org/wiki/Bernoulli_distribution" rel="nofollow noreferrer">Bernoulli distribution</a> parametrized by probability of success $\pi$ for the binary variable $Y$. So $Y$ is your data, e.g. series of coin flips: <code>[0, 0, 1, 1, 0, 1, 1]</code>.</p></li> <li><p>You assume the <em>prior</em> distribution(s) for your parameter(s), say in our case, we assume <a href="https://stats.stackexchange.com/questions/47771/what-is-the-intuition-behind-beta-distribution/47782#47782">beta distribution</a> parametrized by hyperparameters $\alpha$ and $\beta$. Let's say that you assume a <a href="https://stats.stackexchange.com/a/298176/35989">uniform prior</a>, i.e. $\alpha = \beta = 1$.</p></li> </ol> <p>Now you described the problems in probabilistic terms and defined your model. Notice that we didn't estimate anything yet, we just defined a set of assumptions about our data and the parameters.</p> <ol start="3"> <li><p>Next, what we need to do is to estimate the parameter $\pi$. For this, you can do one of the three things:</p> <ul> <li><p>You can use <a href="https://en.wikipedia.org/wiki/Conjugate_prior" rel="nofollow noreferrer">conjugacy</a> and obtain the <a href="https://stats.stackexchange.com/questions/308099/bayesian-update-of-continuous-beliefs/308103#308103">closed-form posterior distribution</a> by using pure math,</p></li> <li><p>You can use some kind of optimization to <a href="https://stats.stackexchange.com/questions/312113/why-flat-priors-and-noiseless-data-are-required-for-map-learning/312127#312127">find the maximum of the posterior distribution</a> (the mode of the posterior distribution), i.e. 
use <em>maximum a posteriori</em> (MAP) -- this gives you a point estimate for $\pi$,</p></li> <li><p>You can simulate draws from the posterior distribution using Markov Chain Monte Carlo (MCMC) algorithms (e.g. Metropolis-Hastings, Gibbs, or NUTS). By doing so you obtain samples from the posterior distribution (in this case, samples from the posterior distribution of the random variable $\pi$). You can use the samples to approximate the posterior distribution, or to estimate its properties, e.g. calculate the mean of the samples to estimate the expected value of the parameter of interest $E(\pi | Y)$ as your point estimate of $\pi$.</p></li> </ul></li> </ol> <p>All the approaches have pros and cons. You can use conjugacy only for very simple problems, but there are no closed-form solutions for things like multivariate logistic regression, so this won't work for many problems. As for MAP, it gives you only a point estimate, and depending on the optimizer you used, it may not always work for complicated problems that are hard to optimize (recall that when using maximum likelihood we often use dedicated optimizers for complicated problems rather than black-box ones). Simulation is the most commonly used approach in Bayesian estimation; it is slower than the two previous approaches, but it gives you the full posterior distribution. MCMC is the approach of choice for most Bayesians. </p> <p>We use priors to quantify our assumptions about the parameters. They let us put some out-of-data information into the model. On the other hand, <a href="https://stats.stackexchange.com/questions/200982/do-bayesian-priors-become-irrelevant-with-large-sample-size/201059#201059">the more information your data conveys, the less influential the priors are</a>.</p>
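For the coin-flip example above, the conjugate route of step 3 can be written out directly, with no sampler needed (using scipy here; the uniform prior and the data are the ones from steps 1 and 2):

```python
from scipy.stats import beta

flips = [0, 0, 1, 1, 0, 1, 1]    # the coin-flip data from step 1
a0, b0 = 1, 1                    # uniform Beta(1, 1) prior from step 2

heads = sum(flips)
tails = len(flips) - heads
posterior = beta(a0 + heads, b0 + tails)     # conjugate update: Beta(5, 4)

print("posterior mean:", posterior.mean())   # E(pi | Y) = 5/9
print("95% credible interval:", posterior.interval(0.95))
```

An MCMC run on the same model would simply approximate this Beta(5, 4) posterior by sampling, which is why conjugacy is preferred whenever it is available.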
288
bayesian inference
Bayesian inference on a sum of iid random variables with known distribution
https://stats.stackexchange.com/questions/273473/bayesian-inference-on-a-sum-of-iid-random-variables-with-known-distribution
<p>Let $X_1$, $X_2$, ..., $X_n$ be iid RVs following a mixture distribution of two lognormals such that the pdf of each $X_i$ is $f_{mix}(x)=pf_1(x) + (1-p)f_2(x)$, where $f_1(x)$ and $f_2(x)$ are lognormal pdfs with parameters $\mu_1,\sigma$ and $\mu_2,\sigma$, respectively.</p> <p>Define $S_i$ as a sum of 10 $X$s, e.g. $S_1 = X_1 +..+ X_{10}, \ \ S_2 = X_{11} +..+X_{20},...$</p> <p>I am given only $S_1,S_2,\dots,S_{n/10}$.</p> <p>How can I infer the mixing proportion parameter $p$ here? That is, I want to know the proportion of the two lognormals in my mixture among the 10 samples (but I only have the sum measurements).</p> <p>Note that the sum of lognormals is not lognormally distributed.</p> <p>My problem could be related to this one, but I am not sure: <a href="https://stats.stackexchange.com/questions/24344/bayesian-inference-on-a-sum-of-iid-real-valued-random-variables">Bayesian inference on a sum of iid real-valued random variables</a></p> <p><strong>update</strong>: assume that I have many samples of $S_i$. If it helps, we can assume that the $\mu$s are well separated, making the mixture nicely bimodal. Generally in mixtures the mixing proportions sum to 1. This is also why I thought about the Dirichlet process. I have no restrictions on the values of $p$ otherwise.</p>
<p>Each $X_i$ comes from either one of the two lognormals with probabilities $p$ and $1-p$. Let $Z_j$ be the number of $X_i$'s in the sum $S_j=\sum_{i=10j-9}^{10j}X_i$ that come from the first lognormal. Clearly $Z_j \sim \mbox{bin}(10,p)$. Conditional on $Z_j$, each $S_j$ is a sum of $Z_j$ lognormals with parameters $\mu_1,\sigma^2$ and $10-Z_j$ lognormals with parameters $\mu_2,\sigma^2$, with pdf $$ f_{S|Z=z}(s;\mu_1,\mu_2,\sigma^2), \tag{1} $$ given by a complicated convolution integral. The unconditional distribution of each $S_j$ is the 11-component mixture $$ f_S(s;\mu_1,\mu_2,\sigma^2,p)=\sum_{z=0}^{10}{10 \choose z}p^z(1-p)^{10-z}f_{S|Z=z}(s;\mu_1,\mu_2,\sigma^2). \tag{2} $$ If $\sigma^2$ is sufficiently small or $\mu_1$ and $\mu_2$ sufficiently different, this is going to be an 11-modal distribution with modes located near the sums of the component means, $10e^{\mu_1+\sigma^2/2},9e^{\mu_1+\sigma^2/2}+e^{\mu_2+\sigma^2/2},\dots,10e^{\mu_2+\sigma^2/2}$. This suggests that all five parameters in principle are identifiable given enough data.</p> <p>Perhaps you can approximate the convolution in (1) by a single moment-matched lognormal as discussed <a href="https://stats.stackexchange.com/questions/238529/the-sum-of-independent-lognormal-random-variables-appears-lognormal/238566#238566">here</a>, use (2) to compute the likelihood and then compute approximate maximum likelihood estimates by maximising the resulting log likelihood numerically. Or you could do approximate Bayesian inference using this approximate likelihood function. This option would allow using informative priors on some of the parameters, which might be necessary in practice if there is too much overlap between each component of (2).</p>
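A sketch of the suggested approximation. The parameter values and the moment-matching helper below are my own illustrative assumptions, not part of the answer: each conditional convolution in (1) is replaced by a single lognormal matching its mean and variance, and (2) is assembled as an 11-component mixture.

```python
import numpy as np
from scipy.stats import lognorm, binom

# illustrative parameter values
mu1, mu2, sigma, p = 0.0, 2.0, 0.25, 0.3

def lognormal_match(mean, var):
    """Parameters of a lognormal with the given mean and variance."""
    s2 = np.log(1.0 + var / mean**2)
    return np.log(mean) - s2 / 2.0, np.sqrt(s2)

def mixture_pdf(s, p):
    """Approximate 11-component mixture density (2) for one sum S_j."""
    s = np.asarray(s, dtype=float)
    total = np.zeros_like(s)
    for z in range(11):
        # mean/variance of a sum of z draws from component 1, 10 - z from component 2
        mean = z * np.exp(mu1 + sigma**2 / 2) + (10 - z) * np.exp(mu2 + sigma**2 / 2)
        var = (np.exp(sigma**2) - 1) * (
            z * np.exp(2 * mu1 + sigma**2) + (10 - z) * np.exp(2 * mu2 + sigma**2)
        )
        m, sd = lognormal_match(mean, var)
        total += binom.pmf(z, 10, p) * lognorm.pdf(s, sd, scale=np.exp(m))
    return total

# crude sanity check: the approximate density integrates to about 1
grid = np.linspace(0.1, 150, 6000)
mass = float(np.sum(mixture_pdf(grid, p)) * (grid[1] - grid[0]))
print(mass)
```

This approximate density can then be evaluated at the observed $S_j$'s to build an approximate log likelihood, for numerical maximisation or for approximate Bayesian updating as the answer suggests.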
289
bayesian inference
Correctness of product of densities representing parts of information as prior density in Bayes inference
https://stats.stackexchange.com/questions/600027/correctness-of-product-of-densities-representing-parts-of-information-as-prior-d
<p>Suppose I've got data <span class="math-container">$X$</span> from a model driven by a parameter <span class="math-container">$\theta$</span>. The model of the data is represented by the conditional density (likelihood function) <span class="math-container">$$f(x|\theta).$$</span> Suppose the prior density of <span class="math-container">$\theta$</span> is denoted by <span class="math-container">$$f(\theta).$$</span> My question is: is it consistent with Bayesian theory to choose ANY probability density as a prior density in our model, as long as it is a correctly defined density with respect to the correct measure? I'm asking this question because I have a complicated model with 2 types of information about the parameter <span class="math-container">$\theta$</span>, and both of those sources of information are described by probability densities. I don't see how to add those two sources of prior information to the model other than by multiplying the 2 individual densities into one and normalizing them. But something inside me is telling me that might be fundamentally incorrect.</p>
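For what it's worth, the multiply-and-normalise idea (often called logarithmic pooling) is easy to try on a grid. The two Gaussian densities below are arbitrary stand-ins for the two information sources; whether the pooling is justified depends on the two sources being independent pieces of evidence about the parameter, which is exactly the concern raised:

```python
import numpy as np
from scipy.stats import norm

theta = np.linspace(-5, 5, 2001)
dx = theta[1] - theta[0]

f1 = norm.pdf(theta, loc=-1.0, scale=1.0)   # density encoding source 1
f2 = norm.pdf(theta, loc=2.0, scale=1.5)    # density encoding source 2

pooled = f1 * f2
pooled /= pooled.sum() * dx                 # renormalise so it integrates to 1

print("pooled mean:", float(np.sum(theta * pooled) * dx))
```

For two Gaussians this reproduces the usual precision-weighted combination; for general densities the normalising constant has to be computed numerically as above.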
290
bayesian inference
Question about the Bayesian Inference of a parameter
https://stats.stackexchange.com/questions/112322/question-about-the-bayesian-inference-of-a-parameter
<p>In order to understand the difference between Frequentist and Bayesian inference, I was reading the presentation at: <a href="http://www.stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf" rel="nofollow">http://www.stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf</a> . In order to explain the difference between the approaches, the author is using the following example (Page 6): If I understand correctly, there are three different cases of protocols (something related to toxicity, it seems) and there are different studies for each of the three protocols, which have examined the relation of the protocol to a specific cause, called something like "AAN". </p> <p>For example, for one of the three protocols, different studies showing the total number of cases (first) and the relation to "AAN" (second) are as follows: </p> <p>1) 66,11</p> <p>2) 1756,129</p> <p>3) 272, 48</p> <p>4) 151, 18</p> <p>... etc. Each of these number pairs belongs to a different study.</p> <p>Now, the Bayesian model for these studies is given as such:</p> <p>Given $X_j$ is the AAN frequency of the $j$th study about the protocol $i$, $X_j$ is distributed as $X_j \sim Binomial(n_j,p_i)$. $n_j$ is the total number of incidences for the $j$th study and $p_i$ is the parameter of the distribution we want to infer.</p> <p>What I did not understand here is, the author says that, in this model, $p_i$ can vary from study to study.</p> <p>In my understanding of the Bayesian approach, here, all of the studies constitute our data, $D=\left( n_1,X_1,n_2,X_2,...,n_K,X_K \right)$ where $K$ is the total number of studies. For $p_i$ we have a prior distribution of $P(p_i)$. So we try to find the posterior distribution $P(p_i|D)$. In my understanding $p_i$ cannot vary from study to study: $p_i$ is first generated from the prior distribution as $p_i \sim P(p_i)$ and then this generated value of $p_i$ is used to generate each $X_j \sim Binomial(n_j,p_i)$. 
Yes, $p_i$ is not fixed as in the frequentist approach, but it varies over different realizations of all $K$ studies: once a $p_i$ is generated from the prior, all $K$ studies use the same $p_i$. So, it should not change from one study to another. This slide has confused me at this particular point. Am I right in my thoughts here, or did I misunderstand something?</p> <p>Thanks in advance.</p>
<p>I think the slides are a bit ambiguous. For the Bayesian approach you can (among many other things) assume that:</p> <ol> <li><p>All protocols share the same $p$. Then there's a single prior over what that $p$ might be. </p></li> <li><p>Each study $i$ has its own $p_i$. Then you can either </p> <p>2.1. Assume that there's nothing in common among the three protocols, so that nothing in the results of one protocol could be informative about the others. Then you might want three independent priors, one for each $p_i$. Or, perhaps more reasonably...</p> <p>2.2. Assume that these three protocols are a 'sample' of the possible protocols that could have been run. In which case it's natural to think of the $p_i$ for each protocol as a draw from a common prior over $p$. You might formalise this as reflecting an exchangeability assumption, but that doesn't much matter for your question.</p></li> </ol> <p>Assumption 1 (plus model and data) will generate a marginal posterior over $p$. Both of assumptions 2 will generate a marginal posterior over [$p_1, p_2, p_3$], but with potentially different properties. </p> <p>But I suppose what the slides are going for is the idea that, from a Frequentist perspective, a natural null hypothesis to test is that $p=p_1=p_2=p_3$. Rejecting this suggests that there is variation across the $p_i$ which one might investigate further. </p> <p>However one might, in <em>equally</em> Frequentist mode, simply ask for confidence intervals for the different $p_i$ and see if they overlap (assuming you could get the coverage right, etc.) This would give results rather closer to those generated through assumption 2.1, without formally assuming, even as a null hypothesis, that there was a single fixed $p$.</p> <p>This second approach makes the Frequentist and Bayesian approaches look <em>much</em> more like each other. And spoils the overly sharp contrast that the slides seem to be making.</p>
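Using the first three study counts from the question ((66, 11), (1756, 129), (272, 48) as (total, AAN cases)), assumptions 1 and 2.1 can be contrasted with a small grid computation; the flat priors are an illustrative choice, not part of the answer:

```python
import numpy as np
from scipy.stats import binom

# (AAN cases, total) for three studies of one protocol, from the question
studies = [(11, 66), (129, 1756), (48, 272)]

p_grid = np.linspace(0.0005, 0.9995, 9999)
dp = p_grid[1] - p_grid[0]

# Assumption 1: a single shared p -- multiply the three likelihoods together
shared = np.prod([binom.pmf(x, n, p_grid) for x, n in studies], axis=0)
shared /= shared.sum() * dp                 # flat prior, normalised density

# Assumption 2.1: independent p_i -- one posterior per study
separate = []
for x, n in studies:
    post = binom.pmf(x, n, p_grid)
    separate.append(post / (post.sum() * dp))

shared_mean = float(np.sum(p_grid * shared) * dp)
study_means = [float(np.sum(p_grid * s) * dp) for s in separate]
print(shared_mean, study_means)
```

The shared-p posterior concentrates near 0.09, while the per-study posteriors sit at noticeably different values (roughly 0.18, 0.07, 0.18), which is exactly the variation across the $p_i$ that the null hypothesis test probes.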
291
bayesian inference
How are custom kernel functions in Gaussian processes statistically justified?
https://stats.stackexchange.com/questions/612663/how-are-custom-kernel-functions-in-gaussian-processes-statistically-justified
<p>I am confused about one aspect of the use of Gaussian processes for Bayesian inference. I understand that it relies on the assumption that your train and test data points form a multivariate normal distribution where you define a prior mean and covariance for the distribution. What I don't understand is that I believed covariance had a strict statistical definition <span class="math-container">$\text{cov}(X, Y) = \mathbb{E}\left[(X-\mu_X)(Y-\mu_Y)^\top\right]$</span>. How is it justified statistically to just use what seems like any old function we like? I am pretty new to this, so I would appreciate it if anyone could direct me to good resources on the topic too.</p>
<p>In a <a href="https://stats.stackexchange.com/questions/502531/elementary-explanation-of-gaussian-processes">Gaussian Process</a>, your task is to learn the <em>distribution over functions</em> <span class="math-container">$$f(\mathbf{x}) = [f(x_1), f(x_2), \dots, f(x_n)]'$$</span> This distribution is modeled by a Gaussian Process <span class="math-container">$\mathcal{GP}\left(m(\mathbf{x}),\, k(\mathbf{x}, \mathbf{x}')\right)$</span> parametrized by the <em>mean function</em> and <em>covariance function</em></p> <p><span class="math-container">$$\begin{align} m(\mathbf{x}) &amp;= E[f(\mathbf{x})] \\ k(\mathbf{x}, \mathbf{x}') &amp;= E\big[\big(f(\mathbf{x}) - m(\mathbf{x})\big)\big(f(\mathbf{x}') - m(\mathbf{x}')\big)\big] \end{align}$$</span></p> <p>So the covariance function is the function that tells us what the covariance would be. It's the same as saying, for example, that the <a href="https://en.wikipedia.org/wiki/Conditional_expectation" rel="nofollow noreferrer">conditional expectation</a> <span class="math-container">$E[y|\mathbf{x}]$</span> is given by a function <span class="math-container">$g(\mathbf{x})$</span>, where in linear regression it would be a linear function <span class="math-container">$g(\mathbf{x}) = \mathbf{x}\boldsymbol{\beta}$</span>. It doesn't say that we are not taking the integral to calculate the expectation anymore, but that the integral has a solution given by <span class="math-container">$g(\mathbf{x})$</span>. It also doesn't say that any conditional expectation is like this, but that <em>this particular</em> random variable has an expectation that takes such a form.</p> <p>A Gaussian Process defines a distribution such that, if you calculated the covariance of <span class="math-container">$f(\mathbf{x})$</span> using the regular definition, it would take the form of the covariance function.</p>
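A quick numerical illustration of why a kernel is allowed to play the role of a covariance: a valid kernel must produce symmetric positive semi-definite Gram matrices, and functions drawn from the resulting Gaussian really do have it as their covariance. The squared-exponential kernel and the settings below are my own choices, not from the answer:

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0):
    """Squared-exponential covariance function k(x, x')."""
    return np.exp(-0.5 * (x1 - x2) ** 2 / lengthscale ** 2)

x = np.linspace(0, 5, 30)
K = rbf(x[:, None], x[None, :])        # Gram matrix K_ij = k(x_i, x_j)

# a legitimate covariance matrix is symmetric positive semi-definite
eigvals = np.linalg.eigvalsh(K)
print("min eigenvalue:", eigvals.min())   # >= 0 up to floating-point error

# draw many functions f ~ N(0, K); their empirical covariance approaches K
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(np.zeros(len(x)),
                                  K + 1e-10 * np.eye(len(x)),  # tiny jitter
                                  size=20000)
print("max |empirical cov - K|:", np.abs(np.cov(samples.T, bias=True) - K).max())
```

A kernel that failed the positive semi-definiteness requirement could produce "covariances" inconsistent with any random process, which is why not literally "any old function" is admissible, only Mercer kernels.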
292
bayesian inference
Why does the marginal likelihood integral have no closed-form solution?
https://stats.stackexchange.com/questions/430842/why-does-the-marginal-likelihood-integral-have-no-closed-form-solution
<p>In Bayesian inference we end up with the formula:</p> <p><span class="math-container">$$ P(\mathbf{w}\mid\mathbf{t},X)= \frac{P(\mathbf{t}\mid\mathbf{w},X)\,P(\mathbf{w})}{\int P(\mathbf{t}\mid\mathbf{w},X)\, P(\mathbf{w})\, d\mathbf{w}}$$</span></p> <p>Assume the prior <span class="math-container">$P(\mathbf{w})$</span> is a Gaussian distribution with mean 0 and standard deviation <span class="math-container">$\sigma$</span>.</p> <p>It is always said that the integral has no closed-form solution. When the prior is a Gaussian distribution, is this related to the fact that the indefinite integral <span class="math-container">$\int e^{-x^2}dx$</span> cannot be expressed in terms of elementary functions?</p> <p>What if I choose an ad-hoc prior distribution? Is there any case where the integral has a closed-form solution?</p>
<p>Yes, the marginal likelihood has a closed form for all polynomial models of the form <span class="math-container">$\mathbf{t} = X\mathbf{w} + \boldsymbol{\varepsilon}$</span>, where <span class="math-container">\begin{aligned} X &amp;= \begin{bmatrix} \mathbf{1}^T &amp; (\mathbf{x}^1)^T &amp; (\mathbf{x}^2)^T &amp; \cdots &amp; (\mathbf{x}^n)^T \end{bmatrix}\\ \boldsymbol{\varepsilon} &amp;\sim \mathcal{N}(0, \sigma_n^2I)\\ \mathbf{w} &amp;\sim \mathcal{N}(0,\sigma_w^2I) \end{aligned}</span></p> <p>It is given as follows,</p> <p><span class="math-container">$$ \mathbf{t} \mid X \sim \mathcal{N}(\mathbf{0}, \sigma_w^2XX^T + \sigma_n^2I) $$</span></p> <p>I have referred to the last slide from <a href="http://mlg.eng.cam.ac.uk/teaching/4f13/1920/bayesian%20finite%20regression.pdf" rel="nofollow noreferrer">here</a></p>
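The closed form is easy to verify by simulation; the design matrix and noise scales below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                         # 4 data points, 3 basis functions
X = rng.normal(size=(n, d))         # stand-in design matrix
sigma_w, sigma_n = 1.5, 0.5

# draw many t = X w + eps with w ~ N(0, sigma_w^2 I), eps ~ N(0, sigma_n^2 I)
m = 500_000
W = rng.normal(scale=sigma_w, size=(m, d))
E = rng.normal(scale=sigma_n, size=(m, n))
T = W @ X.T + E

empirical = np.cov(T.T, bias=True)
analytic = sigma_w**2 * X @ X.T + sigma_n**2 * np.eye(n)
print(np.abs(empirical - analytic).max())   # small -> covariances agree
```

Since $\mathbf{t}$ is a linear map of jointly Gaussian variables, it is Gaussian with mean $\mathbf{0}$ and covariance $\sigma_w^2XX^T + \sigma_n^2I$, which is exactly what the empirical check confirms.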
293
bayesian inference
Inference in Bayesian networks with hidden variables
https://stats.stackexchange.com/questions/617132/inference-in-bayesian-networks-with-hidden-variables
<p>Suppose I have the Bayesian network in the figure and the corresponding conditional probability table for each node, where A and B are the hidden variables, and C and D are the observed variables. What probabilistic inference algorithm can I use to get all the conditional probabilities in Table 1? Can I use the likelihood weighting sampling inference algorithm? If the network becomes the bottom one, is the likelihood weighting sampling inference algorithm still appropriate?</p>
<p>Well, I don't think sampling is needed here (unless I misunderstand your question / diagram). I believe what is intended is to expand the probabilities using something like the product rule, so that:</p> <p><span class="math-container">\begin{align} P(c1,d1\mid a1,b1) &amp;= P(d1 \mid c1, a1, b1)\cdot P(c1\mid a1,b1) \\ &amp;=P(d1 \mid c1)\cdot P(c1\mid a1,b1) \end{align}</span></p> <p>and you already have <span class="math-container">$P(d1 \mid c1)$</span> and <span class="math-container">$P(c1\mid a1,b1)$</span> in the tables, so you just multiply them together.</p> <p>I assume this is a homework question, so I won't give you all the reasons why I did what I did, but I will leave you with some things to think about:</p> <p>(1) How does the probability product rule used in the first line work?</p> <p>(2) Why did I expand the product rule with respect to c1 and not d1?</p> <p>(3) Why did <span class="math-container">$a1$</span> and <span class="math-container">$b1$</span> disappear in the probability <span class="math-container">$P(d1 \mid c1, a1, b1)$</span>?</p> <p>Good luck!</p>
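As a sketch, with made-up CPT entries (the question's actual tables are not reproduced here, so both numbers below are hypothetical), the multiplication looks like:

```python
# Hypothetical CPT values -- the question's Table 1 is not shown here.
P_d1_given_c1 = 0.9          # P(d1 | c1), read off the table for D
P_c1_given_a1_b1 = 0.4       # P(c1 | a1, b1), read off the table for C

# Product rule plus the conditional independence of D from {A, B} given C:
# P(c1, d1 | a1, b1) = P(d1 | c1) * P(c1 | a1, b1)
P_c1_d1_given_a1_b1 = P_d1_given_c1 * P_c1_given_a1_b1
print(P_c1_d1_given_a1_b1)
```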
294
bayesian inference
Finding values a and b to get PMF with certain mean and standard deviation
https://stats.stackexchange.com/questions/325864/finding-values-a-and-b-to-get-pmf-with-certain-mean-and-standard-deviation
<p>Suppose that the proportion θ of defective items in a large manufactured lot is known to be either 0.05 or 0.15, and the prior pmf of θ is as follows: ξ(0.05) = a and ξ(0.15) = b. Suppose also that when n = 10 items are selected at random from the lot, it is found that X = 5 of them are defective.</p> <p>(a) Determine the values of a and b such that the prior pmf of θ has mean 0.1 and standard deviation 0.05.</p> <p>(b) Determine the posterior pmf of θ using the selected a and b in the previous problem.</p> <p>(c) What are the posterior mean and standard deviation of θ?</p> <p>I am unsure where to start. Would a and b both equal 0.05, because the mean 0.1 is between 0.05 and 0.15? Can I assume a normal distribution here?</p>
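For what it's worth, parts (a) and (b) reduce to solving two linear equations and then applying Bayes' rule with a binomial likelihood (the standard model for sampling from a large lot). A numerical sketch, derived only from the problem statement:

```python
import math

# Part (a): with support {0.05, 0.15}, solve a + b = 1 and
# 0.05*a + 0.15*b = 0.1 for the mean, then check the standard deviation.
b = (0.1 - 0.05) / (0.15 - 0.05)   # weight on theta = 0.15
a = 1 - b                          # weight on theta = 0.05
mean = 0.05 * a + 0.15 * b
sd = math.sqrt(a * (0.05 - mean) ** 2 + b * (0.15 - mean) ** 2)

# Part (b): posterior after X = 5 defectives in n = 10, via Bayes' rule
# with a binomial likelihood (an assumption not spelled out in the question).
def lik(theta): return math.comb(10, 5) * theta**5 * (1 - theta)**5
post_unnorm = {0.05: lik(0.05) * a, 0.15: lik(0.15) * b}
Z = sum(post_unnorm.values())
post = {th: p / Z for th, p in post_unnorm.items()}
print(a, b, sd, post)
```

The prior turns out symmetric (a = b = 0.5), and 5 defectives out of 10 push nearly all posterior mass onto θ = 0.15.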
295
bayesian inference
What&#39;s the problem with model identifiability?
https://stats.stackexchange.com/questions/60446/whats-the-problem-with-model-identifiability
<p>I understand that, from a decision perspective, identifiability of a model is needed to ensure that the parameter estimates converge (with an increasing number of observations) to a single value. But if the non-identifiability of a given model is not a modeling artifact but clearly characterises some "inaccessible knowledge" about the system under study, is it valid to perform Bayesian inference on a non-identifiable model?</p> <p>Here is a simple example. $$ x_i = t a y_i + \epsilon_i $$ with $(\epsilon_i)$ iid $$ \epsilon_i \sim N(0,1) $$ and an informative prior for $t$: $$ t\sim N(1,0.1) $$ and a non-informative prior for $a$ (say, a uniform) $$ a \sim U(0,1000) $$ One observes $(x_i)$, the $(y_i)$ are exogenous covariates, and one wants to compute: $$ p(a | (x_i); (y_i)) $$ As I understand it, the model is not identifiable since, for any fixed $k \in \mathbb{R}$, all the densities $p((x_i) | a,t;(y_i))$ corresponding to pairs $(a,t)$ with $a\,t=k$ are the same. Obviously in such a case the choice of $p(t)$ has a strong influence, but if it is physically supported, I see no reason to invalidate the meaning of an HPD interval obtained from such a non-identifiable model. On the other hand, I cannot find any reference about this... so thanks for your expertise.</p>
<p>I recommend you read Andrew Gelman's blog post <a href="http://andrewgelman.com/2014/02/12/think-identifiability-bayesian-inference/" rel="noreferrer">Think identifiability Bayesian inference</a>.</p> <p>Right off the bat, I can tell you that identifiability does not have to do with a model by itself (as in "an unidentifiable model"), but rather with the combination of the model with <em>some</em> data. That is to say, it has to do with the data also. The same model may be identifiable with some data, and unidentifiable with some other data.</p> <p>In a Bayesian context, it is not clear what exactly identifiability means. As the link I provided says, it is not a "black-or-white" matter. Rather, it has to do with the amount of information learned from the data, or the "distance" of the posterior from the prior.</p> <p>A suitable measure of information is the <em>information entropy</em>, and the "distance" between two probability distributions (prior and posterior in this case) may be quantified by the <em>Kullback-Leibler divergence</em>, both of which can be found in the Wikipedia page on <a href="https://en.wikipedia.org/wiki/Information_theory" rel="noreferrer">information theory</a>.</p> <p>So you could say that, for a given model and data, if the posterior carries the same amount of information as the prior, then nothing was learned about the model from this data, and the case is <em>unidentifiable</em>.</p> <p>If on the other hand the data are informative about the model parameters, then the posterior will be more informative than the prior (less information entropy than the prior, and positive KL divergence) and the case is <em>identifiable</em>.</p> <p>Between these two extremes, we can talk about more or less identifiable cases, according to <em>how much</em> information is gained from the prior to the posterior.</p>
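A minimal sketch of the KL-divergence criterion with made-up discrete distributions (all numbers below are toy values, not derived from the question's model):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy illustration: a uniform prior over 4 parameter values, and two
# posteriors -- one unchanged by the data, one concentrated by it.
prior = [0.25, 0.25, 0.25, 0.25]
post_flat = [0.25, 0.25, 0.25, 0.25]    # data uninformative: "unidentifiable"
post_peaked = [0.70, 0.20, 0.05, 0.05]  # data informative: "identifiable"

print(kl(post_flat, prior))    # zero divergence -> nothing learned
print(kl(post_peaked, prior))  # positive divergence -> information gained
```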
296
bayesian inference
How to construct the highest posterior density (HPD) interval
https://stats.stackexchange.com/questions/304957/how-to-construct-the-highest-posterior-density-hpd-interval
<p>Please, anybody could explain the steps to compute the highest posterior density (HPD) interval, when the posterior distribution is known? For instance, when the posterior distribution is Beta distributed.</p> <p>When the posteriori distribution is simulated, the <a href="https://www.jstor.org/stable/1390921?seq=1#page_scan_tab_contents" rel="noreferrer">Chen-Shao algorithm</a> can be used to estimate the HPD interval.</p>
<p>An HPD region is defined as$$\mathfrak{h}_\tau \stackrel{\text{def}}{=} \{\theta;\ \pi(\theta|x)&gt;\tau\}$$and it is an interval only when the parameter is unidimensional and the posterior is unimodal. Assuming this is the case and the posterior $\pi(\cdot|x)$ is available up to a multiplicative constant, finding an HPD interval consists in solving in $\theta$ the equation$$\pi(\theta|x)=\tau$$Since in most situations a coverage of $\alpha$ is requested, a second computational step consists in associating a coverage $\alpha(\tau)$ with the bound $\tau$, as in $$\int_{\{\theta;\ \pi(\theta|x)&gt;\tau\}} \pi(\theta|x)\,\text{d}\theta = \alpha(\tau)$$followed by the inversion of the function $\alpha(\tau)$ to find the value of $\tau$ guaranteeing the proper coverage.</p> <p>In the case $\pi(\theta|x)$ is the density of a Beta $B(\delta,\beta)$ distribution, the first step requires solving $$\theta^{\delta-1}(1-\theta)^{\beta-1}=\tau$$ which usually has no analytic solution [unless there exists $\gamma$ such that both $\gamma(\delta-1)$ and $\gamma(\beta-1)$ are integers]. Hence a numerical resolution of the equation is required. For each $\tau$, the coverage $\alpha(\tau)$ can then be derived by calling the Beta cdf.</p>
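A rough numerical sketch of this recipe for a Beta posterior (hypothetical shape parameters; a fine grid stands in for the exact inversion of α(τ) — keeping the highest-density grid points until their mass reaches α is the discrete analogue of thresholding at τ):

```python
import math

# Hypothetical Beta(delta, beta) posterior and target coverage.
delta, beta_, alpha = 3.0, 7.0, 0.95
B = math.gamma(delta) * math.gamma(beta_) / math.gamma(delta + beta_)

n = 20_000
grid = [(i + 0.5) / n for i in range(n)]
dens = [th ** (delta - 1) * (1 - th) ** (beta_ - 1) / B for th in grid]

# Discrete inversion of alpha(tau): accumulate the highest-density points
# until the total mass reaches alpha; the kept set is the HPD region.
order = sorted(range(n), key=lambda i: -dens[i])
mass, kept = 0.0, []
for i in order:
    if mass >= alpha:
        break
    mass += dens[i] / n
    kept.append(grid[i])
lo, hi = min(kept), max(kept)
print(lo, hi)  # HPD interval endpoints
```

Because the posterior is unimodal, the kept points form a single interval containing the mode (here (δ−1)/(δ+β−2) = 0.25).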
297
bayesian inference
Change of variable in posterior distribution
https://stats.stackexchange.com/questions/247677/change-of-variable-in-posterior-distribution
<p>I am working in a Bayesian framework: I have some observations $y$, for which I assume a statistical model. The model depends on parameters $\theta \in \Theta$ ($\Theta$ is the parameter space). I assume a probability distribution $q$ on $\Theta$. The parameters of this model can be estimated in a <em>maximum a posteriori</em> fashion: $$ \hat{\theta} = \mathop{\mathrm{argmax}} \limits_{\theta \in \Theta} p(\theta \mid y) $$ where $p(\theta \mid y)$ is the posterior distribution of $\theta$ given $y$. Now, say I want to perform a change of variable on $\Theta$. I consider a mapping $g \, : \, \Theta \, \rightarrow \, \Theta$ which transforms the "old" parameters $\theta$ into $\theta^{\mathrm{new}} = g(\theta)$. We assume that $g$ is a smooth diffeomorphism. My question is: how does this change of variable modify the posterior distribution $p(\theta \mid y)$?</p> <p>If $\mathrm{J}_{g}(\theta)$ denotes the Jacobian matrix of $g$ at $\theta$, we know that $\theta^{\mathrm{new}}$ has a probability distribution $\widetilde{q}$ on $\Theta$ given by:</p> <p>$$ \widetilde{q}(\theta^{\mathrm{new}}) \vert \mathrm{J}_{g}(\theta) \vert = q(\theta). $$</p> <p>Using Bayes' formula, I would write:</p> <p>$$ p(\theta^{\mathrm{new}} \mid y) = \frac{ p\big( y \mid \theta^{\mathrm{new}} \big) \widetilde{q}(\theta^{\mathrm{new}}) }{ p(y) } = \frac{ p\big( y \mid g(\theta) \big) \vert \mathrm{J}_{g}(\theta) \vert^{-1} q(\theta) }{ p(y) }. $$</p> <p>Is this correct or am I mistaken?</p>
<p>The change of variable in the posterior density is a standard change of variable, involving the Jacobian. The impact on the maximum a posteriori estimator is thus significant in that the MAP of the transform is not the transform of the MAP. (There are <a href="https://xianblog.wordpress.com/2009/09/12/map-estimators-are-not-truly-bayesian-estimators/" rel="nofollow noreferrer">deeper reasons</a> for disliking MAP estimators, of course.)</p>
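A small numerical illustration of the "MAP of the transform is not the transform of the MAP" point, using my own toy posterior Beta(2, 5) and the transform g(θ) = θ² (both chosen only to make the effect visible):

```python
import math

# Toy posterior theta ~ Beta(2, 5); compare g(MAP(theta)) with MAP(g(theta)).
a, b = 2.0, 5.0
map_theta = (a - 1) / (a + b - 2)          # analytic Beta mode = 0.2

def post(theta):                           # unnormalised Beta(2, 5) density
    return theta ** (a - 1) * (1 - theta) ** (b - 1)

def post_phi(phi):                         # Jacobian-corrected density of
    th = math.sqrt(phi)                    # phi = theta^2:
    return post(th) / (2 * th)             # q~(phi) = q(sqrt(phi)) / (2 sqrt(phi))

grid = [(i + 1) / 10_000 for i in range(9_999)]
map_phi = max(grid, key=post_phi)          # grid argmax of the new density
print(map_theta ** 2, map_phi)             # transform of MAP vs MAP of transform
```

Here the 1/(2√φ) Jacobian factor pushes the mode of φ toward 0, far from g(MAP(θ)) = 0.04, so the two notions disagree dramatically.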
298
bayesian inference
difference between the linear predictor with uncertainty and predictive distribution for a new observation
https://stats.stackexchange.com/questions/609894/difference-between-the-linear-predictor-with-uncertainty-and-predictive-distribu
<p>I was reading an extract from the book &quot;regression and other stories&quot; and at chapter 9 the author distinguish between 3 cases</p> <p>&quot;After fitting a regression, <span class="math-container">$y = a + bx + error$</span>, we can use it to predict a new data point, or a set of new data points, with predictors <span class="math-container">$x_{new}$</span>. We can make three sorts of predictions, corresponding to increasing levels of uncertainty:</p> <ol> <li><p>The point prediction,<span class="math-container">$\hat{a}+\hat{b}x_{new}$</span>: Based on the fitted model, this is the best point estimate of the average value of y for new data points with this new value of x. We use <span class="math-container">$\hat a$</span> and <span class="math-container">$\hat b$</span> here because the point prediction ignores uncertainty.</p> </li> <li><p>The linear predictor with uncertainty, <span class="math-container">$a + bx_{new}$</span>, propagating the inferential uncertainty in <span class="math-container">$(a, b)$</span>: This represents the distribution of uncertainty about the expected or average value of y for new data points with predictors <span class="math-container">$x_{new}$</span></p> </li> <li><p>The predictive distribution for a new observation, <span class="math-container">$a + bx_{new} + error$</span>: This represents uncertainty about a new observation y with predictors <span class="math-container">$x_{new}$</span>.</p> </li> </ol> <p>and then it makes the example</p> <p>&quot; For example, consider a study in which blood pressure, <span class="math-container">$y$</span>, is predicted from the dose, <span class="math-container">$x$</span>, of a drug. 
For any given <span class="math-container">$x_{new}$</span>, the point prediction is the best estimate of the average blood pressure in the population, conditional on dose <span class="math-container">$x_{new}$</span>; the linear predictor is the modeled average blood pressure of people with dose <span class="math-container">$x_{new}$</span> in the population, with uncertainty corresponding to inferential uncertainty in the coefficients <span class="math-container">$a$</span> and <span class="math-container">$b$</span>; and the predictive distribution represents the blood pressure of a single person drawn at random from this population, under the model conditional on the specified value of <span class="math-container">$x_{new}$</span>. As sample size approaches infinity, the coefficients a and b are estimated more and more precisely, and the uncertainty in the linear predictor approaches zero, but the uncertainty in the predictive distribution for a new observation does not approach zero; it approaches the residual standard deviation <span class="math-container">$σ$</span>&quot;</p> <p>but honestly I am not sure I understood the difference between 2) and 3).</p> <p>Suppose I have a model <span class="math-container">$y= f(x,a,b,c)$</span> that depends on a predictor variable <span class="math-container">$x$</span> and 3 other coefficients <span class="math-container">$a,b,c$</span>, and I sample the posterior distribution with emcee or other software and find the best-fit coefficients <span class="math-container">$\hat a, \hat b, \hat c$</span>.</p> <p>If:</p> <ol> <li>I substitute the values of <span class="math-container">$a,b,c$</span> with the best-fit coefficients at a point <span class="math-container">$x_{new}$</span>, I get the point prediction.</li> </ol> <p>Case 2 should correspond to finding the uncertainty on the model expected value of <span class="math-container">$y_{pred}$</span> at <span class="math-container">$x_{new}$</span> and should
correspond to this procedure (I think):</p> <ol> <li><p>fix <span class="math-container">$x= x_{new}$</span>,</p> </li> <li><p>take the samples of <span class="math-container">$a,b,c$</span>,</p> </li> <li><p>collect all the <span class="math-container">$f(x_{new},a,b,c)$</span>, and</p> </li> <li><p>compute the mean and the variance to get the distribution of uncertainty about the model average value of y.</p> </li> </ol> <p>But what is case 3, what is the procedure, and what does &quot;under the model conditional on the specified value of <span class="math-container">$x_{new}$</span>&quot; mean?</p>
<p>The difficulty might be that <span class="math-container">$y = f(x, a, b, c)$</span> is too abstract a notation and doesn't fully specify a particular model.</p> <p>So let's take simple linear regression as an example: <span class="math-container">$y = a + bx + e$</span> with <span class="math-container">$e \sim \operatorname{N}(0, c)$</span>. In this model <span class="math-container">$a$</span> is the intercept, <span class="math-container">$b$</span> is the slope and <span class="math-container">$c$</span> is the error variance.</p> <ul> <li>Goal #2 (linear prediction with uncertainty) is to estimate <span class="math-container">$\operatorname{E}(y_{new} | x_{new}) = a + bx_{new}$</span>.</li> <li>Goal #3 (predictive distribution for a new observation) is to predict <span class="math-container">$y_{new} | x_{new} \sim N(a + bx_{new}, c)$</span>.</li> </ul> <p>Let's also assume that you've already fitted the model, so you have a sample <span class="math-container">$\big\{\widehat{a}^{(k)}, \widehat{b}^{(k)}, \widehat{c}^{(k)}\big\}$</span> from the posterior <span class="math-container">$(a,b,c) | x$</span> given a dataset <span class="math-container">$x$</span>; <span class="math-container">$k$</span> indexes the posterior draws.</p> <p>To estimate <span class="math-container">$\operatorname{E}(y_{new} | x_{new})$</span>, you proceed as you describe: you calculate <span class="math-container">$\big\{ \widehat{a}^{(k)} + \widehat{b}^{(k)}x_{new} \big\}$</span> for each posterior draw. (No need for <span class="math-container">$\widehat{c}$</span> here.) 
This is a sample from the posterior distribution of <span class="math-container">$\operatorname{E}(y_{new} | x_{new})$</span> and you can get an estimate of its mean &amp; variance, plot its histogram, etc.</p> <p>To predict <span class="math-container">$y_{new} | x_{new}$</span>, you additionally draw an error <span class="math-container">$e^{(k)}$</span> for each posterior draw <span class="math-container">$k$</span>: <span class="math-container">$$ \begin{aligned} e^{(k)} &amp;\sim \operatorname{N}\big(0, \widehat{c}^{(k)}\big) \\ y^{(k)} &amp;= \widehat{a}^{(k)} + \widehat{b}^{(k)}x_{new} + e^{(k)} \end{aligned} $$</span></p> <p>So &quot;under the model conditional on the specified value of <span class="math-container">$x_{new}$</span>&quot; means that you know how to sample from the model <span class="math-container">$f(a,b,c)$</span> given a specific value for the predictor <span class="math-container">$x_{new}$</span> and a set of parameter values <span class="math-container">$\big(\widehat{a}, \widehat{b}, \widehat{c}\big)$</span>. For simple linear regression, this means drawing an error <span class="math-container">$e$</span> from a normal distribution and adding it to the estimate of <span class="math-container">$\operatorname{E}(y_{new} | x_{new})$</span>.</p> <p>Clearly, there is more uncertainty in predicting a new observation <span class="math-container">$y_{new} | x_{new}$</span> than the population mean <span class="math-container">$\operatorname{E}(y_{new} | x_{new})$</span> due to the additional variability of drawing the (individual) error <span class="math-container">$e$</span>.</p> <p>And here is how to add this extra variability in R code (p. 116 in <em>Regression and Other Stories</em>):</p> <pre class="lang-r prettyprint-override"><code>y_pred &lt;- a + b * as.numeric(new) + rnorm(n_sims, 0, sqrt(c)) </code></pre> <p>where I've substituted <code>sigma</code> with <code>sqrt(c)</code>, since <code>c</code> here denotes the error variance and <code>rnorm</code> expects a standard deviation.</p>
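The same two-step recipe can be sketched in Python; the posterior draws below are hypothetical stand-ins for real sampler output (every number is made up), but the contrast between the two standard deviations is the point:

```python
import random, statistics as st

random.seed(0)
n_sims = 20_000
x_new = 1.5

# Hypothetical posterior draws for a, b and the error variance c
# (in practice these would come from your sampler, e.g. emcee).
a = [random.gauss(1.0, 0.1) for _ in range(n_sims)]
b = [random.gauss(2.0, 0.2) for _ in range(n_sims)]
c = [0.5] * n_sims  # fixed error variance, for simplicity

# Case 2: linear predictor with uncertainty (no error draw).
linpred = [ai + bi * x_new for ai, bi in zip(a, b)]
# Case 3: predictive distribution = case 2 plus a fresh error draw per sample.
ypred = [lp + random.gauss(0.0, ci ** 0.5) for lp, ci in zip(linpred, c)]

print(st.stdev(linpred), st.stdev(ypred))  # predictive sd is larger
```

Shrinking the posterior sds of a and b toward zero would shrink the first number toward zero, while the second would level off near √c, mirroring the book's large-sample argument.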
299