When does the amount of skew or prevalence of outliers make the median preferable to the mean?
Framing the question

You are asking an applied and subjective question, so any answer needs to be infused with applied and subjective considerations. From a purely statistical perspective, the mean and the median provide different information about the central tendency of a sample, so neither is correct or incorrect by definition. From an applied perspective, we often want to say something meaningful about the central tendency of a sample, where "central tendency" maps onto some subjective notion of "typical".

General thoughts

When summarising what is typical in a sample, observations that are many standard deviations away from the mean (perhaps 3 or 4 SD) have a large influence on the mean, but not on the median. Such observations can pull the mean away from what we think of as the "typical" value of the sample. This helps explain the popularity of the median for reporting house prices and incomes, where a single billionaire or a single island in the Pacific could dramatically shift the mean but not the median. Such distributions often include extreme outliers and are positively skewed; the median, in contrast, is robust.

The median can be problematic when the data take on only a limited number of values. For example, the median of a 5-point Likert item lacks the nuance possessed by the mean: means of 2.8, 3.0, and 3.3 might all correspond to a median of 3. In general, the mean has the benefit of using more of the information in the data. When a distribution is skewed, it is also possible to transform it and report the mean of the transformed distribution. When a distribution includes outliers, you can use a trimmed mean, remove the outliers, or adjust each outlier to a less extreme value (e.g., 2 SD from the mean).
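To make the contrast concrete, here is a minimal sketch (hypothetical numbers; `trimmed_mean` is a hand-rolled helper, not a library function) showing how one extreme value drags the mean but barely touches the median, and how a trimmed mean compromises between the two:

```python
from statistics import mean, median

def trimmed_mean(xs, prop=0.1):
    """Mean after dropping the lowest and highest `prop` fraction of values."""
    xs = sorted(xs)
    k = int(len(xs) * prop)
    return mean(xs[k:len(xs) - k]) if len(xs) > 2 * k else mean(xs)

# Nine "typical" values plus one extreme outlier (all hypothetical)
incomes = [30, 32, 35, 36, 38, 40, 41, 45, 50, 5000]

print(mean(incomes))          # 534.7 -- dragged far above the typical value
print(median(incomes))        # 39.0  -- barely affected by the outlier
print(trimmed_mean(incomes))  # 39.625 -- drops the extremes first
```

With the outlier removed entirely, the mean of the remaining nine values would be close to the median again, which is exactly the intuition behind trimming.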
You can read about measures of central tendency here: http://en.wikipedia.org/wiki/Central_tendency. Generally, you analyse a sample in order to say something about a (much larger) population. Often you know more about the population than merely the data in your sample, usually something that motivated you to take a sample in the first place. If you know that the population has a normal distribution, then the sample mean will be the best estimator of the expected value even if the sample does not look normal (with small sample sizes like the above you can't really characterise the distribution anyway). You can reliably estimate the mean if you have a lot of data, even if the distribution is not normal (see "T-test for non normal when N>50?"). For distributions that cannot be described parametrically, the median and the IQR may tell you much more. The IQR is a dispersion measure, as opposed to the mean and median, which are location measures. You can read about dispersion measures here: http://en.wikipedia.org/wiki/Statistical_dispersion. A further aspect to consider is that some of your data may be outliers (see "Rigorous definition of an outlier?").
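As a quick sketch of reporting the median together with the IQR (hypothetical data; note that `statistics.quantiles` uses an "exclusive" quartile convention by default, so other software may give slightly different cut points):

```python
from statistics import median, quantiles

# Hypothetical right-skewed sample
data = [2, 3, 3, 4, 5, 5, 6, 8, 12, 40]

q1, q2, q3 = quantiles(data, n=4)  # three cut points: Q1, median, Q3
iqr = q3 - q1

print(median(data))  # location: robust centre of the sample
print(iqr)           # dispersion: spread of the middle 50%
```

The median/IQR pair is unmoved if the largest value were 400 instead of 40, which is precisely why it suits distributions that resist a parametric description.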
There are no hard and fast rules. The mean and the median convey different information and have different properties. You select the statistic that best conveys what you want to convey, or better yet, the set of statistics that best describes the data. Keep the same thing in mind when you're selecting a measure of central tendency to analyse. (Snipped a bunch of material repeating Mike Lawrence's answer.) Note that Mike Lawrence is referring to something that surprises a lot of people. In the behavioural sciences there's a lot of folk wisdom that you should use medians with small sample sizes. In actual fact that's exactly the wrong thing to do, because the median quickly becomes more biased than the mean as samples get small.
Be careful with medians: they are biased estimators, and the degree of bias changes with the skew of the distribution and the sample size (see Miller, 1988). This means that if you are comparing two conditions that differ in skew or in sample size, you may find a difference that is in fact attributable to bias rather than to a real difference. Conversely, you may fail to find a real difference when the difference in bias runs opposite to the real difference between the conditions.
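A small simulation makes this bias visible. The setup below is hypothetical (samples of size 3 from an Exp(1) distribution, whose true median is ln 2 ≈ 0.693 and true mean is 1): the average sample median lands well above the population median, while the sample mean remains unbiased.

```python
import math
import random

random.seed(42)

TRUE_MEDIAN = math.log(2)   # median of Exp(1)
TRUE_MEAN = 1.0             # mean of Exp(1)

def avg_sample_stat(stat, n, reps=50_000):
    """Average of `stat` over many samples of size n from Exp(1)."""
    total = 0.0
    for _ in range(reps):
        sample = sorted(random.expovariate(1.0) for _ in range(n))
        total += stat(sample)
    return total / reps

med3 = avg_sample_stat(lambda s: s[1], n=3)         # sample median of 3
mean3 = avg_sample_stat(lambda s: sum(s) / 3, n=3)  # sample mean of 3

print(med3 - TRUE_MEDIAN)   # clearly positive: the median is biased upward
print(mean3 - TRUE_MEAN)    # near zero: the sample mean is unbiased
```

For this distribution the bias of the size-3 median can be computed exactly from order statistics (E[median] = 5/6 ≈ 0.833, a bias of about +0.14), so the simulation is only a sanity check.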
"Deviations in the data are the devil" is just not true, I think, or at least I don't agree with it. I'd say deviations are more like chilli than the devil: as much as you can reasonably handle is good, but it can get nasty if there is too much.

The most general procedure I know of for "choosing a statistic" to report your data is a combination of two things: Bayesian inference (describing what is known) and decision theory (taking actions under uncertainty). However, both of these methods are only partially "algorithmic", so to speak: you have to supply the inputs. Perhaps the most important part of this stage is that you have to ask a question that your procedure is going to answer. Naturally, different questions get different answers. As the saying goes, "I have just derived a very elegant and beautiful answer. All I have to do now is figure out the question." This is a common problem I have seen with many statistical procedures: there is not always a clear statement of the class of problems for which each is the best procedure to use.

Bayesian inference requires you to specify your prior information in a mathematical framework. This involves specifying the hypothesis space (what possibilities am I going to consider?), assigning probabilities to each part of the space, and using the rules of probability theory to manipulate the assigned probabilities. This is basically an open-ended problem (you can always analyse a given English statement more deeply, to extract more or different information from it). Decision theory also requires you to specify a loss function, and there are basically no rules or principles by which to do this, at least as far as I know (computational simplicity is a key driver). One useful question to ask yourself, though, is "what information about the sample do I convey by presenting this statistic?" or "how much of the complete data set can I recover from just this set of statistics?"

One way you could use Bayesian statistics to help you here is to propose hypotheses: $$\begin{array}{l l} H_{mean}:\text{The mean is the best statistic} \\H_{med}:\text{The median is the best statistic} \\H_{IQR}:\text{The IQR is the best statistic} \end{array} $$ Now these are not "mathematically well posed" hypotheses, but let us use them anyway and see what parts of the maths are required to make them well posed. The first part is the prior probabilities: without any data, how likely is each hypothesis? The usual answer is equal probabilities (but not always; you may have some theoretical reason to favour one hypothesis, and the CLT is perhaps one for putting $H_{mean}$ higher than the others). We then use Bayes' theorem to update each probability ($I$ = prior information, $D$ = data set): $$P(H_{i}|D,I)=P(H_{i}|I)\frac{P(D|H_{i},I)}{P(D|I)}\implies \frac{P(H_{i}|D,I)}{P(H_{j}|D,I)}=\frac{P(H_{i}|I)}{P(H_{j}|I)}\frac{P(D|H_{i},I)}{P(D|H_{j},I)}$$ So if the prior probabilities are equal, the relative probabilities are given by the likelihood ratio. You therefore also need to specify a probability distribution for what type of data sets you would be likely to see if the mean were the best statistic, and so on. Note that each hypothesis doesn't actually state what the specific value of the mean, median, or IQR is, so the probability cannot depend on the exact value of the mean. Hence in the likelihoods these parameters must be "integrated out" using the sum and product rules: $$P(D|H_{i},I)=\int P(\theta_{i}|H_{i},I)P(D|\theta_{i},H_{i},I)d\theta_{i}$$ So you have the prior $P(\theta_{i}|H_{i},I)$, which for $i=\text{mean}$ can be interpreted as "given that the mean is the best statistic, and prior to seeing the data, what values of the mean are we likely to see?", and the likelihood $P(D|\theta_{i},H_{i},I)$, which can similarly be interpreted as "given that the mean is best and equal to $\theta_{mean}$, how likely is the data that was observed?". This may help you work out what kinds of features your distributions should have.

That describes the inference; now it is time to apply decision theory. This part is particularly simple, because your decision doesn't influence the state of nature: the statistic won't change whether or not you use it. So we can describe the decisions ($A$ for "action", because $D$ is already taken): $$\begin{array}{l l} A_{mean}:\text{The mean is the reported statistic} \\A_{med}:\text{The median is the reported statistic} \\A_{IQR}:\text{The IQR is the reported statistic} \end{array} $$ Now you need to specify a loss matrix $L_{ij}$ which relates the action/decision $A_{i}$ to the state of nature $H_{j}$: what is the loss if I report the mean, but the median is actually the best statistic? In most cases the diagonal elements will be zero (taking the correct action means no loss), and you may also have all off-diagonal elements equal (how you are wrong doesn't matter, only whether or not you are wrong). You then proceed by calculating the average loss for each action, weighted by the posterior probabilities: $$L_{i}=\sum_{j}L_{ij}P(H_{j}|D,I)$$ and you choose the action with the smallest average loss.
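The final step can be sketched numerically. The posterior probabilities and the 0-1 loss matrix below are made-up illustrative values, not derived from any data:

```python
# Hypothetical posterior probabilities P(H_j | D, I) for each hypothesis
posterior = {"mean": 0.5, "median": 0.3, "IQR": 0.2}

# Loss L[action][state]: zero on the diagonal (correct choice costs nothing),
# equal off-diagonal losses (being wrong costs 1 either way)
actions = states = ["mean", "median", "IQR"]
loss = {a: {s: 0.0 if a == s else 1.0 for s in states} for a in actions}

# Average loss for each action: L_i = sum_j L_ij * P(H_j | D, I)
avg_loss = {a: sum(loss[a][s] * posterior[s] for s in states) for a in actions}

best = min(avg_loss, key=avg_loss.get)
print(avg_loss)  # with 0-1 loss this is just 1 minus the posterior probability
print(best)      # the action with the smallest average loss
```

With 0-1 loss the procedure reduces to "report the statistic with the highest posterior probability"; a non-uniform loss matrix (say, reporting the IQR when the mean is best costs more) can change the chosen action even with the same posterior.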
In your examples you seem to make the underlying assumption that you are "blind" to how the data you are going to analyse were generated. In other words, whether the mean or the median is the right tool for saying something robust about the central tendency of a sample depends largely on your (prior) knowledge of the data-generating process. That's probably the main reason why most of us who deal with statistics stay focused on the same research fields (or their nearest outskirts). As a side note, in real practice the 7000 value in your CASE 2 example would look more like a data-entry mistake than a genuine outlier.
Forecasting beyond one season using Holt-Winters' exponential smoothing
I am not very familiar with Holt-Winters, but I have this excellent book by @Rob Hyndman. The forecast package (which is based on the book) for the statistical package R gives the following result on your data:

```r
> hw <- read.table("~/R/stackoverflow/hw.txt")
> tt <- ts(hw[, 3], start = c(1999, 1), freq = 12)
> aa <- forecast(tt)
> plot(aa)
> summary(aa)
Forecast method: ETS(M,N,A)

Model Information:
ETS(M,N,A)

Call:
 ets(y = object)

  Smoothing parameters:
    alpha = 0.1701
    gamma = 1e-04

  Initial states:
    l = 870.4847
    s = -278.0815 -143.6584 151.959 -135.595 514.2527 236.9216
        -32.7679 128.8337 115.0829 47.5922 -234.4105 -370.1288

  sigma: 0.1122

     AIC     AICc      BIC
1892.756 1896.346 1933.115

In-sample error measures:
        ME        RMSE         MAE        MPE      MAPE      MASE
18.1543007 121.8594668  70.7086492  0.8480306 7.0006920 0.2893504
```

Here is the graph of the forecast together with the confidence intervals. Note that the function forecast automatically picks the best exponential smoothing model out of 30 models, which are classified by the type of trend, the type of seasonal component, and whether the error is additive or multiplicative. The best model found for your data has multiplicative error, no trend, and additive seasonality, which is a less complicated model than the one you are trying to fit. The way forecast works, however, means that the more complicated model was considered and rejected in favour of the final model. If you provide the exact formulas, it would be possible to fit the precise model and see whether the problem you described is really a property of the model.
The formulae for Holt-Winters' method include forecasting the seasonal component. You don't need $\gamma=0$. See a forecasting textbook for the details.
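For reference, in standard textbook notation (level $\ell_t$, trend $b_t$, seasonal $s_t$, season length $m$, horizon $h$, and $k$ the integer part of $(h-1)/m$), the additive-seasonal Holt-Winters point forecast projects the seasonal component forward:

$$\hat{y}_{t+h|t}=\ell_{t}+hb_{t}+s_{t+h-m(k+1)}$$

with the smoothing updates

$$\begin{array}{l l} \ell_{t}=\alpha(y_{t}-s_{t-m})+(1-\alpha)(\ell_{t-1}+b_{t-1}) \\ b_{t}=\beta^{*}(\ell_{t}-\ell_{t-1})+(1-\beta^{*})b_{t-1} \\ s_{t}=\gamma(y_{t}-\ell_{t-1}-b_{t-1})+(1-\gamma)s_{t-m} \end{array}$$

The index $t+h-m(k+1)$ simply picks the most recent estimate of the seasonal term for the same season as the forecast period, so forecasts beyond one season reuse the last full cycle of seasonal estimates, and $\gamma$ is free to take any admissible value.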
The first plot (http://i.stack.imgur.com/kxU4t.jpg) reflects a questioning of the highly unusual Oct 2009 value (observation 130, Oct-09, 2301.41). Time series analysis should actually challenge the data rather than fit a presumed set of models. The residuals from the following model (http://i.stack.imgur.com/Q4W5h.jpg) more closely exhibit the Gaussian structure required for t-tests to be valid. I apologise in advance for the unnecessary repetition of the forecast graph; I will have to go to wiki school to learn how to include images in my posts (forecast plot: http://i.stack.imgur.com/OUc5a.jpg). The forecasts for the next 24 months are then robust to the identified anomalies.
How to add horizontal lines to ggplot2 boxplot?
Found a solution myself. Maybe someone else can use it:

```r
library(ggplot2)
library(plyr)

# step 1: preparing data
ageMetaData <- ddply(data, ~group, summarise,
                     mean = mean(age),
                     sd = sd(age),
                     min = min(age),
                     max = max(age),
                     median = median(age),
                     Q1 = summary(age)['1st Qu.'],
                     Q3 = summary(age)['3rd Qu.'])

# step 2: correction for outliers
out <- data.frame()  # initialising storage for outliers
for (group in 1:length(levels(factor(data$group)))) {
  bps <- boxplot.stats(data$age[data$group == group], coef = 1.5)
  ageMetaData[ageMetaData$group == group, ]$min <- bps$stats[1]  # lower whisker
  ageMetaData[ageMetaData$group == group, ]$max <- bps$stats[5]  # upper whisker
  if (length(bps$out) > 0) {  # adding outliers
    for (y in 1:length(bps$out)) {
      pt <- data.frame(x = group, y = bps$out[y])
      out <- rbind(out, pt)
    }
  }
}

# step 3: drawing
p <- ggplot(ageMetaData, aes(x = group, y = mean))
p <- p + geom_errorbar(aes(ymin = min, ymax = max), linetype = 1, width = 0.5)  # main range
p <- p + geom_crossbar(aes(y = median, ymin = Q1, ymax = Q3), linetype = 1, fill = 'white')  # box
# drawing outliers, if any
if (length(out) > 0) p <- p + geom_point(data = out, aes(x = x, y = y), shape = 4)
p <- p + scale_x_discrete(name = "Group")
p <- p + scale_y_continuous(name = "Age")
p
```

The quantile data resolution is ugly, but it works. Maybe there is another way. The result looks like this: [boxplot image].

I also improved the boxplot a little: added a second, smaller dotted errorbar to reflect the sd range, added a point to reflect the mean, and removed the background. Maybe this could also be useful to someone:

```r
p <- ggplot(ageMetaData, aes(x = group, y = mean))
p <- p + geom_errorbar(aes(ymin = min, ymax = max), linetype = 1, width = 0.5)  # main range
p <- p + geom_crossbar(aes(y = median, ymin = Q1, ymax = Q3), linetype = 1, fill = 'white')  # box
p <- p + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd), linetype = 3, width = 0.25)  # sd range
p <- p + geom_point()  # mean
# drawing outliers, if any
if (length(out) > 0) p <- p + geom_point(data = out, aes(x = x, y = y), shape = 4)
p <- p + scale_x_discrete(name = "Group")
p <- p + scale_y_continuous(name = "Age")
p + opts(panel.background = theme_rect(fill = "white", colour = NA))
```

The result is: [boxplot image], and the same data with a smaller range (boxplot coef = 0.5).
39,311
How to add horizontal lines to ggplot2 boxplot?
There is a simpler solution using stat_boxplot(geom = 'errorbar'). I provide an example:

bp <- ggplot(iris, aes(factor(Species), Sepal.Width, fill = Species))
bp + geom_boxplot() + stat_boxplot(geom = 'errorbar')

Result:
39,312
How to add horizontal lines to ggplot2 boxplot?
I think it looks better if stat_boxplot(geom = 'errorbar') is on the first line, as it hides the vertical line.

bp <- ggplot(iris, aes(factor(Species), Sepal.Width, fill = Species)) +
    stat_boxplot(geom = 'errorbar')
bp + geom_boxplot()
39,313
Why are fitted.values not part the R object returned from arima?
Use the fitted() function from the forecast package. Since the arima object saves the residuals, it is easy to compute the fitted values from it.
39,314
Interactive decision trees
Try Orange Canvas; it will give you the option to build an interactive decision tree.
39,315
Interactive decision trees
Try the examples under dendrogram. You can make it as interactive as you want.

require(graphics); require(utils)

hc <- hclust(dist(USArrests), "ave")
(dend1 <- as.dendrogram(hc)) # "print()" method
str(dend1)          # "str()" method
str(dend1, max = 2) # only the first two sub-levels

op <- par(mfrow = c(2,2), mar = c(5,2,1,4))
plot(dend1)
## "triangle" type and show inner nodes:
plot(dend1, nodePar = list(pch = c(1,NA), cex = 0.8, lab.cex = 0.8),
     type = "t", center = TRUE)
plot(dend1, edgePar = list(col = 1:2, lty = 2:3), dLeaf = 1, edge.root = TRUE)
plot(dend1, nodePar = list(pch = 2:1, cex = .4*2:1, col = 2:3), horiz = TRUE)

Edit 1 ====================================

The interactivity depends on what you want to do. It all comes down to the structure of the data that goes to plot. To make it easier to see what's going on, I'll only use the first 3 lines of data from the above example:

#Use only the first 3 lines from USArrests
(df <- USArrests[1:3,])
#Perform the hc analysis
(hcdf <- hclust(dist(df), "ave"))
#Plot the results
plot(hcdf)
#Look at the names of hcdf
names(hcdf)
#Look at the structure of hcdf
dput(hcdf)

The next segment is the output of the above dput statement. This structure tells plot how to draw the tree.

structure(list(merge = structure(c(-1L, -3L, -2L, 1L), .Dim = c(2L, 2L)),
    height = c(37.1770090243957, 54.8004107236398), order = c(3L, 1L, 2L),
    labels = c("Alabama", "Alaska", "Arizona"), method = "average",
    call = hclust(d = dist(df), method = "ave"), dist.method = "euclidean"),
    .Names = c("merge", "height", "order", "labels", "method", "call",
    "dist.method"), class = "hclust")

You can easily change the data and see what plot does. Just copy/paste the structure statement from your screen, assign it to a new variable, make your changes, and plot it.

newvar <- structure(list(merge = structure(c(-1L, -3L, -2L, 1L), .Dim = c(2L, 2L)),
    height = c(37.1770090243957, 54.8004107236398), order = c(3L, 1L, 2L),
    labels = c("Alabama", "Alaska", "Arizona"), method = "average",
    call = hclust(d = dist(df), method = "ave"), dist.method = "euclidean"),
    .Names = c("merge", "height", "order", "labels", "method", "call",
    "dist.method"), class = "hclust")
plot(newvar)

As far as making the clustering more interactive, you'll have to explore the different methods and determine what you want to do.

http://cran.cnr.berkeley.edu/web/views/Cluster.html
http://wiki.math.yorku.ca/index.php/R:_Cluster_analysis
http://www.statmethods.net/advstats/cluster.html
http://www.statmethods.net/advstats/cart.html
39,316
How do you solve a Poisson process problem
The two most important characteristics of a Poisson process with rate $\lambda$ are For any interval $(s, t)$, the number of arrivals within the interval follows a Poisson distribution with mean $\lambda (t-s)$. The number of arrivals in disjoint intervals are independent of one another. So, if $s_1 < t_1 < s_2 < t_2$, then the number of arrivals in $(s_1, t_1)$ and $(s_2, t_2)$ are independent of one another (and have means of $\lambda (t_1 - s_1)$ and $\lambda (t_2 - s_2)$, respectively). For this problem let "time" be denoted in "pages". And so the Poisson process has rate $\lambda = 1.6 \text{ errors/page}$. Suppose we are interested in the probability that there are $x$ errors in three (prespecified!) pages. Call the random variable corresponding to the number of errors $X$. Then, $X$ has a Poisson distribution with mean $\lambda_3 = 3 \lambda = 3 \cdot 1.6 = 4.8$. And so $$ \Pr(\text{$x$ errors in three pages}) = \Pr(X = x) = \frac{e^{-\lambda_3} \lambda_3^x}{x!}, $$ so, for $x = 5$, we get $$ \Pr(X = 5) = \frac{e^{-\lambda_3} \lambda_3^5}{5!} = \frac{e^{-4.8} 4.8^5}{5!} \approx 0.175 $$
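The final arithmetic is easy to verify; a minimal check in plain Python (the function name poisson_pmf is mine, just for this sketch):

```python
from math import exp, factorial

def poisson_pmf(k, mu):
    # P(X = k) for X ~ Poisson(mu)
    return exp(-mu) * mu**k / factorial(k)

rate_per_page = 1.6          # errors per page
mu = 3 * rate_per_page       # mean number of errors in three pages: 4.8
p5 = poisson_pmf(5, mu)
print(round(p5, 3))          # approximately 0.175, matching the answer
```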
39,317
How do you solve a Poisson process problem
If I were his instructor I'd prefer the following explanation, which obviously is equivalent to that given by @cardinal. Let $N_t$ be the Poisson process counting the number of errors on consecutive pages, with rate $\lambda=1.6/\text{page}$. One supposes that we observe the process at integer moments $t$ (here $t$ is identified with the number of pages). Because $P(N_t=k)=e^{-\lambda t} (\lambda t)^k/k!$, we have $P(N_3=5)=e^{-(3\cdot1.6)}(4.8)^5/5!$.
39,318
How do extreme values scale with sample size?
Assume that the random variables $x_k$ are i.i.d., nonnegative, integer valued, bounded by $n$, and such that $P(x_k=0)$ and $P(x_k=1)$ are both positive. For every $N\ge1$, let $$ X_N= \min\{x_1,\ldots,x_N\}. $$ Then, when $N\to+\infty$, $$ E(X_N)=c^N(1+o(1)), $$ where $c<1$ is independent of $N$ and given by $$ c=P(x_k\ge1). $$ Hence $E(X_N)$ is exponentially small. When each $x_k$ is Binomial $(n,p)$ with $n\ge1$ and $p$ in $(0,1)$ fixed, the result holds with $c=1-(1-p)^n$. To see this, note that $[X_N\ge i]=[x_1\ge i]\cap\cdots\cap[x_N\ge i]$ for every $i$ and that, since $X_N$ is nonnegative and integer valued, $E(X_N)$ is the sum over $i\ge1$ of $P(X_N\ge i)$, hence $$ E(X_N)=\sum_{i\ge 1}P(x_1\ge i)^N. $$ For every $i\ge n+1$, $P(x_1\ge i)=0$. For every $2\le i\le n$, $0\le P(x_1\ge i)\le P(x_1\ge 2)$. Hence $$ c^N\le E(X_N)\le c^N+(n-1)d^N, $$ with $$ c=P(x_1\ge1),\quad d=P(x_1\ge 2). $$ Because $P(x_k=1)$ is positive, one knows that $d<c$, hence $E(X_N)\sim c^N$ when $N\to+\infty$.
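A numerical check of the claim $E(X_N)=c^N(1+o(1))$, using the exact identity $E(X_N)=\sum_{i\ge 1}P(x_1\ge i)^N$ from the proof (a Python sketch; the choice $n=5$, $p=0.5$ is mine, for illustration):

```python
from math import comb

def binom_tail(i, n, p):
    # P(x1 >= i) for x1 ~ Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(i, n + 1))

def e_min(N, n, p):
    # exact E(X_N) = sum_{i>=1} P(x1 >= i)^N
    return sum(binom_tail(i, n, p)**N for i in range(1, n + 1))

n, p = 5, 0.5
c = 1 - (1 - p)**n           # c = P(x1 >= 1) = 1 - (1-p)^n
for N in (10, 50, 200):
    print(N, e_min(N, n, p) / c**N)   # ratio tends to 1 as N grows
```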
39,319
How do extreme values scale with sample size?
The table on this page of this book might help you. The explicit formula for the expectation of the minimum of a sample from binomial distributions is given on the page before.
39,320
How do extreme values scale with sample size?
The distribution of the minimum of any set of $N$ iid random variables is: $$f_{min}(x)=Nf(x)[1-F(x)]^{N-1}$$ Where $f(x)$ is the pdf and $F(x)$ is the cdf (this is sometimes called a $Beta$-$F$ distribution, because it is a compound of a Beta distribution and an arbitrary distribution). Strictly, this density form holds for continuous distributions; applying it to the discrete binomial, as below, ignores ties and is therefore an approximation. Hence the expectation (in this particular case) is given by: $$E[min(X)] = N\sum_{x=0}^{x=n} xf(x)[1-F(x)]^{N-1}$$ Which means that $E[min(X)]=NE(x_1[1-F(x_1)]^{N-1})$. Using the "delta method" approximation for this expectation, $E[g(x)]\approx g(E[X])$, gives $$E[min(X)]=NE(x_1[1-F(x_1)]^{N-1})\approx N(E(x_1)[1-F(E(x_1))]^{N-1})$$ Substituting $np=E[x_1]$ then gives the approximation: $$E[min(X)]\approx Nnp[1-F(np)]^{N-1}$$ Note that $F(np)\approx \frac{1}{2}$ (via the normal approximation) to give $$E[min(X)]\approx \frac{Nnp}{2^{N-1}}$$
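Since the density formula is exact for continuous variables, one quick way to sanity-check it is with exponentials, where the minimum of $N$ iid $\mathrm{Exp}(\lambda)$ variables is known to be $\mathrm{Exp}(N\lambda)$ with mean $1/(N\lambda)$. A Python sketch (crude trapezoidal integration; this swaps in an exponential example of my own, not the binomial case from the answer):

```python
from math import exp

lam, N = 2.0, 5
f = lambda x: lam * exp(-lam * x)        # Exp(lam) pdf
F = lambda x: 1.0 - exp(-lam * x)        # Exp(lam) cdf
fmin = lambda x: N * f(x) * (1 - F(x))**(N - 1)   # density of the minimum

# E[min] = integral of x * fmin(x); composite trapezoid on [0, 10]
# (the tail beyond 10 is negligible for these parameters)
steps, hi = 100000, 10.0
h = hi / steps
g = lambda x: x * fmin(x)
approx = h * (0.5 * g(0.0) + sum(g(k * h) for k in range(1, steps)) + 0.5 * g(hi))
print(approx, 1.0 / (N * lam))           # both close to 0.1 = 1/(N*lam)
```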
39,321
Logistic Regression: Classification Tables a la SPSS in R
I'm not aware of a specific command, but this might be a start:

# generate some data
> N <- 100
> X <- rnorm(N, 175, 7)
> Y <- 0.4*X + 10 + rnorm(N, 0, 3)

# dichotomize Y
> Yfac <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))

# logistic regression
> glmFit <- glm(Yfac ~ X, family=binomial(link="logit"))

# predicted probabilities
> Yhat <- fitted(glmFit)

# choose a threshold for dichotomizing according to predicted probability
> thresh <- 0.5
> YhatFac <- cut(Yhat, breaks=c(-Inf, thresh, Inf), labels=c("lo", "hi"))

# contingency table and marginal sums
> cTab <- table(Yfac, YhatFac)
> addmargins(cTab)
     YhatFac
Yfac  lo hi Sum
  lo  36 14  50
  hi  12 38  50
  Sum 48 52 100

# percentage correct for training data
> sum(diag(cTab)) / sum(cTab)
[1] 0.74
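The same bookkeeping translates directly to other languages; a plain-Python sketch (the observed classes and predicted probabilities below are made-up toy values, not output from the R model above):

```python
def classification_table(actual, probs, thresh=0.5):
    # cross-tabulate observed class vs. predicted class at the given threshold
    table = {("lo", "lo"): 0, ("lo", "hi"): 0, ("hi", "lo"): 0, ("hi", "hi"): 0}
    for a, p in zip(actual, probs):
        obs = "hi" if a == 1 else "lo"
        pred = "hi" if p > thresh else "lo"
        table[(obs, pred)] += 1
    correct = table[("lo", "lo")] + table[("hi", "hi")]
    return table, correct / len(actual)

actual = [0, 0, 0, 0, 1, 1, 1, 1]                   # hypothetical observed classes
probs = [0.2, 0.4, 0.6, 0.3, 0.8, 0.7, 0.45, 0.9]   # hypothetical fitted probabilities
table, accuracy = classification_table(actual, probs)
print(table)
print(accuracy)     # proportion on the diagonal, as in sum(diag(cTab))/sum(cTab)
```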
39,322
Logistic Regression: Classification Tables a la SPSS in R
Thomas D. Fletcher has a function called ClassLog() (for "classification analysis for a logistic regression model") in his QuantPsyc package. However, I like @caracal's response because it is self-made and easily customizable.
39,323
How to rank the results of questions with categorical answers?
If all your questions have the same response scale and they are standard Likert items, scaling the items 1, 2, 3, 4, 5 and taking the mean is generally fine. You can investigate the robustness of the rank ordering by experimenting with different scaling procedures (e.g., 0, 0, 0, 1, 1 is common where you want to assess the percentage happy or very happy, or agreeing or strongly agreeing). From my experience, such variants in scaling will give you almost identical question orderings. You could also explore optimal scaling principal components or some form of polytomous IRT approach if you wanted to be sophisticated. A table with three columns would be fine: rank, item text, mean. You could also plot the same thing with question on the x axis and mean on the y axis.
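The point that different scalings usually give almost identical orderings is easy to illustrate (a Python sketch; the response counts below are hypothetical):

```python
# counts of answers 1..5 on a 5-point Likert item, for three hypothetical items
counts = {
    "item A": [1, 2, 3, 4, 10],
    "item B": [2, 4, 4, 5, 5],
    "item C": [5, 5, 4, 3, 3],
}

def mean_score(c, weights):
    # weighted mean of the responses under a given scoring of the 5 categories
    return sum(w * k for w, k in zip(weights, c)) / sum(c)

likert = [1, 2, 3, 4, 5]   # plain 1..5 scoring
top2 = [0, 0, 0, 1, 1]     # "percentage agreeing or strongly agreeing"

rank1 = sorted(counts, key=lambda i: mean_score(counts[i], likert), reverse=True)
rank2 = sorted(counts, key=lambda i: mean_score(counts[i], top2), reverse=True)
print(rank1)               # the two scalings give the same ordering here
print(rank2)
```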
39,324
How to rank the results of questions with categorical answers?
Recoding your data with numerical values seems ok, provided the assumption of an ordinal scale holds. This is often the case for Likert-type items, but see these related questions: Is it appropriate to treat n-point Likert scale data as n trials from a binomial process? Under what conditions should Likert scales be used as ordinal or interval data? When validating a questionnaire, we often provide the usual numerical summaries (mean $\pm$ sd, range, quartiles) to highlight ceiling/floor effects, that is, higher response rates in the extreme range of the scale. Dotplots are also a great tool for summarizing such data. This is just for visualization/summary purposes. If you want to get into more statistical stuff, you can use a proportional odds model or ordinal logistic regression for ordinal items, and multinomial regression for discrete ones.
39,325
How to rank the results of questions with categorical answers?
If I plot the distribution of answers for each question, I can identify which questions have lots of 'good' answers (distribution is negatively skewed) or those with lots of 'bad' answers (positively skewed histogram). So picking the extremes is easy but this is also dependent on the data. Is an absolute ranking necessary? Like you point out, things may be fuzzier in the middle, so is it relevant to your investigation to distinguish between rank 8 and 9 (or whatever) based on some scoring method? One approach would be to continue with what you stated above -- look at the distributions and categorize questions based on proportions of good/ok/bad based on the data. You might start with a mosaic plot (with questions as factors) to explore your data. This may help reveal criteria for collapsing questions into groups. Instead of piecemeal rankings, they get classified into categories (e.g. what might have been ranks 1-5 become category 1, etc).
39,326
How to rank the results of questions with categorical answers?
A small additional point to the answers already given: Without assuming your ordinal data is interval, you can compare any convenient quantiles - e.g. medians. Or, when comparing X vs Y which are both ordered categorical, you can estimate something like P(Y>X) - P(X>Y) or P(Y>X) + 0.5 * P(Y=X) (etc.), where you estimate probabilities by proportions of course.
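To make the proportion-based estimate concrete, here is a quick sketch (the helper name `prob_better` is made up for illustration) that estimates $P(Y>X) + 0.5\,P(Y=X)$ by comparing every pair of observations from two ordinal samples:

```python
import numpy as np

def prob_better(y, x):
    """Estimate P(Y > X) + 0.5 * P(Y = X) by comparing every pair of
    observations drawn from the two ordinal samples."""
    y = np.asarray(y)[:, None]   # column vector: one row per Y observation
    x = np.asarray(x)[None, :]   # row vector: one column per X observation
    wins = (y > x).mean()
    ties = (y == x).mean()
    return wins + 0.5 * ties

# Two questions answered on a 1-4 ordinal scale
q1 = [4, 4, 3, 3, 2, 2, 1, 1]
q2 = [3, 3, 2, 2, 2, 2, 1, 1]
print(prob_better(q1, q2))  # > 0.5 means q1's answers tend to rank higher
```

A value of 0.5 indicates no tendency either way, so questions can be ranked by this statistic without ever treating the ordinal codes as interval data.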
39,327
How to rank the results of questions with categorical answers?
You can rank ordinal distributions by means of an intuitive dominance criterion: the answers to one question are better than the answers to another when it is more likely than not that a randomly chosen answer to the first will be better than a randomly chosen answer to the second.

In more detail: put all the answers to question $X$ into one hat and all the answers to question $Y$ into another hat. Draw one answer from each hat at random. We will compare these answers, which we can do because they are on an ordinal scale. Let's also agree to resolve any ties by flipping a fair coin. Let $p(X,Y)$ be the probability that the answer to $X$ is better than the answer to $Y$. Rank $X$ ahead of $Y$ when $p$ exceeds $1/2$ and rank $X$ behind $Y$ when $p$ is less than $1/2$. If $p$ equals $1/2$, declare a tie between $X$ and $Y$. (By virtue of our tie-resolution procedure, $p(X,Y) + p(Y,X) = 1$, implying the ranking does not depend on the sequence in which we draw the two answers.)

The calculation is a simple exercise for "just" a programmer (and a fun one if you are interested in efficient calculation, although that's unlikely to matter here). To make this proposal clear, though, I will illustrate it. Suppose all answers are on an integral scale from one to four, with four best. Write the answer distributions in the form $(k_1, k_2, k_3, k_4)$ where $k_3$ counts the number of "3"'s among the answers to a question, for example. For this example suppose $X$ has distribution $(4, 2, 0, 4)$ and $Y$ has distribution $(1, 6, 1, 2)$ (ten answers each). (Stop for a moment to consider which of these distributions ought to be considered "best" and note that they have identical means of 2.4 and identical medians of 2, suggesting this is a difficult comparison to make.) Then:

- There is a 4/10 chance of drawing a "4" for $X$. In this case:
  - There is a 2/10 chance of drawing a "4" for $Y$, a tie;
  - There is an 8/10 chance of drawing less than "4" for $Y$, a win for $X$.

  This contributes $(4/10)[(2/10)0.5 + 8/10] = 0.36$ to $p(X,Y)$.

Continuing similarly:

- Drawing a "3" for $X$ is impossible; it contributes $0$ to $p(X,Y)$.
- Drawing a "2" for $X$ contributes $(2/10)[(6/10)0.5 + 1/10] = 0.08$.
- Drawing a "1" for $X$ contributes $(4/10)[(1/10)0.5] = 0.02$.

Whence $p(X,Y) = 0.36 + 0.00 + 0.08 + 0.02 = 0.46$. Because this value is less than $1/2$, we conclude $X$ should be ranked lower than $Y$.

This idea is related to that of Pitman Closeness and to certain non-parametric slippage tests (which decide whether one distribution has "slipped"--changed values--with respect to other distributions based on random samples of them), such as the Mann-Whitney (aka Wilcoxon) test.
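The worked example can be checked with a short sketch (the function name `p_beats` is made up here) that computes $p(X,Y)$ directly from the two count vectors:

```python
import numpy as np

def p_beats(x_counts, y_counts):
    """P(random answer to X beats random answer to Y), ties split 50/50.
    Counts are given lowest category first, e.g. (k1, k2, k3, k4)."""
    x = np.asarray(x_counts, float) / sum(x_counts)
    y = np.asarray(y_counts, float) / sum(y_counts)
    p = 0.0
    for i, px in enumerate(x):
        # Y strictly lower is a win for X; equal categories count half
        p += px * (y[:i].sum() + 0.5 * y[i])
    return p

# The example above: X = (4, 2, 0, 4), Y = (1, 6, 1, 2)
print(p_beats((4, 2, 0, 4), (1, 6, 1, 2)))  # 0.46, so X ranks below Y
```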
39,328
What practical implications/interpretations are there of a kurtotic distribution?
The kurtosis also indicates the "fat tailedness" of the distribution. A distribution with high kurtosis will have many extreme events (events far away from the center) and many "typical" events (events near the center). A distribution with low kurtosis will have events a moderate distance from the center. This picture may help: http://mvpprograms.com/help/images/KurtosisPict.jpg
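A quick numerical illustration of this (a sketch using SciPy; note that `scipy.stats.kurtosis` reports excess kurtosis, i.e. relative to the normal distribution):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n = 200_000
samples = {
    "laplace (fat tails)":  rng.laplace(size=n),   # excess kurtosis ≈ 3
    "normal (reference)":   rng.normal(size=n),    # excess kurtosis ≈ 0
    "uniform (thin tails)": rng.uniform(size=n),   # excess kurtosis ≈ -1.2
}
for name, x in samples.items():
    print(name, round(kurtosis(x), 2))
```

The fat-tailed Laplace sample has both more extreme events and a sharper peak of "typical" events than the normal, while the uniform sample has neither.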
39,329
What practical implications/interpretations are there of a kurtotic distribution?
I seem to remember that the median has a smaller standard error than the mean when the samples are drawn from a leptokurtic distribution, but the mean has a smaller standard error when the distribution is platykurtic. I think I read this in one of Wilcox's books. Thus the kurtosis may dictate which kind of location test one uses.
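This is easy to check by simulation (a sketch, using the Laplace distribution as a leptokurtic example and the uniform as a platykurtic one):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 5000

def se(draw, stat):
    """Monte-Carlo standard error of a location estimator over repeated samples."""
    return np.std([stat(draw(n)) for _ in range(reps)])

lap = lambda m: rng.laplace(size=m)    # leptokurtic (excess kurtosis 3)
uni = lambda m: rng.uniform(size=m)    # platykurtic (excess kurtosis -1.2)

print("Laplace: mean", se(lap, np.mean), "median", se(lap, np.median))
print("Uniform: mean", se(uni, np.mean), "median", se(uni, np.median))
```

For the Laplace samples the median has the smaller standard error; for the uniform samples the mean does, consistent with the recollection above.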
39,330
What practical implications/interpretations are there of a kurtotic distribution?
Haven't got an example of a dataset in mind with which to answer your question about the interpretation, but this answer to a related question indicates that a practical implication of kurtosis is biased variance estimates. In considering the interpretive difference rationally, I think there's relevant information in the extreme example of a comparison between a normal distribution and a completely flat distribution (e.g., outcomes of rolled dice). This isn't a real dataset, but I'm sure everyone is familiar with both distributions and could easily create either or think of another example. Basically, the difference of a platykurtic distribution from a normal distribution is that the central tendency is weaker, and there is less of a difference between the probabilities of relatively common vs. extreme / rare events. Simply stated, the opposite is true of a leptokurtic distribution: some events are very common, and most of the rest are very rare, generally due to an unusually strong central tendency. Also, you might want to consider this quote from Wikipedia (emphasis added): One common measure of kurtosis, originating with Karl Pearson, is based on a scaled version of the fourth moment of the data or population, but it has been argued that this really measures heavy tails, and not peakedness...It is common practice to use an adjusted version of Pearson's kurtosis, the excess kurtosis, to provide a comparison of the shape of a given distribution to that of the normal distribution. The above distinction between Pearson's kurtosis and excess kurtosis seems relevant to the comment from @whuber on the accepted answer.
39,331
What practical implications/interpretations are there of a kurtotic distribution?
There is the Kurtosis risk which isn't explained fantastically well at that link. In general, measures of normality (or deviation therefrom) are crucial if you are using analyses that assume normality. For example, the standard workhorse Pearson r correlation coefficient is severely sensitive to outliers and becomes essentially invalid as excess kurtosis deviates from 0. D'Agostino's K² test is often used to check a distribution for normality and incorporates the sample kurtosis as a factor.
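The K² test is available as `scipy.stats.normaltest`, which combines sample skewness and kurtosis into one statistic. A small sketch, using a Student-t sample as a heavy-tailed alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_data = rng.normal(size=2000)
heavy_data = rng.standard_t(df=3, size=2000)   # leptokurtic alternative

# D'Agostino-Pearson K² combines sample skewness and kurtosis
for name, x in [("normal", normal_data), ("t(3)", heavy_data)]:
    k2, p = stats.normaltest(x)
    print(f"{name}: K2 = {k2:.1f}, p = {p:.3g}")
```

The heavy-tailed sample produces a much larger K² statistic and a tiny p-value, flagging the departure from normality that would invalidate kurtosis-sensitive analyses.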
39,332
Continuity of the location
The expectation does not have to be a continuous function of the parameter. The following example illustrates what can go wrong. For any number $0 \le p \le 1,$ let $g(x,p)=0$ when $x \le 1$ and otherwise $$g(x,p) = (p+1)x^{-(p+2)} = \frac{\mathrm{d}}{\mathrm{d}x}\left(1 - x^{-(p+1)} \right) = \frac{\mathrm{d}}{\mathrm{d}x}G(x,p).$$ This describes a family of positive distributions with distribution functions $x\to G(x,p)$ (Pareto distributions). Moreover, for any fixed real number $x$, the function $p\to g(x,p)$ is continuous (it's actually differentiable). Using these, construct a family by mixing two such distributions, $$f(x,\theta) = \frac{1 - \theta}{2} g(-x,\theta^2) + \frac{1+\theta}{2} g(x,\theta^2),$$ where $-1 \le \theta \le 1.$ Because $g$ is continuous in its second variable, the functions $\theta\to f(x,\theta)$ are all continuous (actually, differentiable). The parameter $\theta$ plays two roles in this family: as it grows closer to $0,$ it increases the heaviness of the tails, causing the absolute value of the expectation to increase; and it also determines the amounts of negative and positive parts in the mixture, causing the expectation to move from negative to positive as $\theta$ crosses $0.$ Their expectations are $$\begin{aligned} e(\theta) &= \int_{\mathbb R} x f(x,\theta)\,\mathrm{d}x\\ &= \frac{1 - \theta}{2}\int_{-\infty}^0 x g(-x,\theta^2)\,\mathrm{d}x + \frac{1 + \theta}{2}\int_0^\infty x g(x,\theta^2)\,\mathrm{d}x\\ &= -\frac{1 - \theta}{2}\int_0^\infty x g(x,\theta^2)\,\mathrm{d}x + \frac{1 + \theta}{2}\int_0^\infty x g(x,\theta^2)\,\mathrm{d}x\\ &=\theta \int_0^\infty x g(x,\theta^2)\,\mathrm{d}x\\ &= \theta \int_1^\infty x (\theta^2+1)x^{-(\theta^2 + 2)}\,\mathrm{d}x\\ &= \theta\left(1 + \frac{1}{\theta^2}\right) = \theta + \frac{1}{\theta}, \end{aligned}$$ provided $\theta\ne 0.$ When $\theta = 0,$ the expectation is undefined. 
Clearly, as $\theta$ increases from just below $0$ to just above $0,$ there is no way to define $e(0)$ to make this a continuous function. If you would like an example where all the density functions are continuous in $x$ everywhere (which these are not: they are discontinuous at $\pm 1$), then convolve these with (say) a standard Normal distribution. (These will be the densities of $X+Z$ where $Z$ is an independent standard Normal variable.) That adds $0$ to all the expectations while guaranteeing all the densities are infinitely differentiable.
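The closed form $e(\theta) = \theta + 1/\theta$ can be verified numerically (a sketch using `scipy.integrate.quad`), which also makes the jump across $\theta = 0$ visible:

```python
import numpy as np
from scipy.integrate import quad

def g(x, p):
    """Pareto density with tail exponent p + 2, supported on (1, ∞)."""
    return (p + 1) * x ** (-(p + 2)) if x > 1 else 0.0

def e(theta):
    """Expectation of the mixture f(x, θ), by numerical integration."""
    dens = lambda x: ((1 - theta) / 2 * g(-x, theta**2)
                      + (1 + theta) / 2 * g(x, theta**2))
    left, _ = quad(lambda x: x * dens(x), -np.inf, -1)
    right, _ = quad(lambda x: x * dens(x), 1, np.inf)
    return left + right

for t in (-0.5, -0.1, 0.1, 0.5):
    print(t, e(t), t + 1 / t)   # the integral matches θ + 1/θ
```

As $\theta$ passes through $0$ the expectation jumps from large negative to large positive values, with no continuous extension possible.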
39,333
Discrimination/Slope Item Response Theory Models
A steeper slope means a stronger relationship between ability and the question. It means that the item (and therefore the test) is more reliable, and reliability is inversely related to measurement error. Less error = better discrimination. You can get higher reliability / discrimination / less measurement error by having more questions or by having questions with larger slope parameters.
39,334
Discrimination/Slope Item Response Theory Models
Intuitions are useful, but it is worth understanding the math behind the model. We model the probability that a person with ability $\theta$ answers the $i$-th question correctly, $p_i({\theta})$. The model can have up to three parameters $a_i, b_i, c_i$ and uses a logistic function. $$ p_i({\theta})=c_i + \frac{1-c_i}{1+e^{-a_i({\theta}-b_i)}} $$ The first thing to notice when trying to understand it is that if you drop the guessing parameter $c_i$ (which reduces the 3PL model to the 2PL model), we can ignore the logistic function. The logistic function maps real values to the probability range $[0, 1]$ and changes the shape from linear to sigmoidal, but it doesn't change the ordering of the values: what was high before the transformation is high after it, and likewise for low values. After the transformation, the values are easier to interpret because they can be thought of as probabilities. What remains is $$ a_i(\theta-b_i) $$ When ability $\theta$ is high, the difficulty $b_i$ needs to be high as well to make the value low. If the question is very hard (large $b_i$), the outcome can be very low even when the ability $\theta$ is high. This is moderated by the slope $a_i$: if it is small, it shrinks everything inside the brackets; if it is big, it magnifies it. Try it with different values and observe how it behaves for better intuition.
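Trying it with different values is straightforward (a sketch; the function name `p_correct` is made up here):

```python
import numpy as np

def p_correct(theta, a, b, c=0.0):
    """3PL item response function: c + (1 - c) * logistic(a * (theta - b))."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

theta = 1.0
# A steeper slope pushes probabilities further from 0.5 around the difficulty b
print(p_correct(theta, a=0.5, b=0.0))
print(p_correct(theta, a=2.0, b=0.0))
# A very hard item (large b) keeps the probability low even for high ability
print(p_correct(theta, a=1.0, b=3.0))
# The guessing parameter c sets a floor on the probability
print(p_correct(-10.0, a=1.0, b=0.0, c=0.25))
```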
39,335
XGBoost Classifier not capturing extreme probabilities
This is what I would focus on. Limitations on max_depth might cause terminal nodes to group together observations with very small probabilities with other observations where probabilities aren't that small, so the effect is to move the leaf weights away from extreme values. Likewise, something similar with large probability observations. Try increasing max_depth. lambda penalizes the absolute value of the weights. You want weights with large absolute value, because these weights allow for probabilities closer to 0 and 1, so try setting lambda smaller. Column subsampling could omit the important features (time left in the game sounds important), so I wouldn't use it. Increasing the maximum number of trees dramatically and using early stopping could help. Tuning the learning rate alongside these parameters is important. Since your question is basically about calibration of probabilities, something to know is that XGBoost is notorious for producing poorly-calibrated predicted probabilities. It's unclear if this is the culprit in your case; usually, the poor calibration arises from predictions that are too close to 0 or 1, but you have the opposite finding here. This is why I think you might be able to close the gap using different hyper-parameters. I wonder if an XGBoost model is the best approach, because your data are arranged sequentially in time (60, 50, ... 10 minutes remaining, etc.). I would investigate alternative models that can account for this temporal dependency. If you think about each game as a sequence, the probability of Team A winning should have a wide band around it at the start of the game, and then that band should narrow as the clock runs out. I don't know how to model that, but intuitively, that seems like what you're looking for.
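As a rough starting point, the suggestions above might translate into parameter settings like the following (a hypothetical sketch using the standard `xgboost.XGBClassifier` parameter names; the values are illustrative and should be tuned on a validation set):

```python
# Hypothetical starting values following the suggestions above, not tuned results.
params = dict(
    max_depth=8,               # deeper trees can isolate extreme-probability groups
    reg_lambda=0.1,            # smaller L2 penalty allows larger leaf weights
    colsample_bytree=1.0,      # keep all features (e.g. time remaining)
    n_estimators=5000,         # many trees...
    early_stopping_rounds=50,  # ...with early stopping on a validation set
    learning_rate=0.05,
    eval_metric="logloss",
)
# model = xgboost.XGBClassifier(**params)
# model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])
```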
39,336
XGBoost Classifier not capturing extreme probabilities
If you want precise estimates of probabilities, don't use algorithms based on decision trees. To get a probability estimate from a decision tree, you count the class occurrences in the node and divide by the node size. This never leads to a smooth function describing the probabilities, just as regression trees don't produce smooth approximations of functions. Averaging over many trees smooths things a little more, but never perfectly. This is especially striking for extreme probabilities, as in your case: for a single tree to predict a probability like 0.001, the node would need to contain more than 1000 samples. As mentioned by Sycorax, it is a poor choice of algorithm if you care about well-calibrated probabilities.
39,337
XGBoost Classifier not capturing extreme probabilities
As @Sycorax and @BenReiniger pointed out, the problem is that the probabilities are not calibrated (or not calibrated as well as you'd prefer). Here is how you could calibrate the XGBoost probabilities. Use the following model:

  P(y|x) = 1/(1+exp(-(a+x)))

where x is the logit of the original probabilities produced by XGBoost:

  logit = log(p/(1-p))

and y are the same outcomes you are already using. This is based on the paper by van den Goorbergh et al., "The harm of class imbalance corrections for risk prediction models: illustration and simulation using logistic regression", arXiv 2022 (see the Methods section). In my experience it works well. You can implement this in R using this statement:

  recal_mod = glm(y ~ 1, offset = logit, family = "binomial")

Note that this model is a logistic regression in which the independent variable has a fixed weight of 1 and only the intercept is fitted. To my knowledge this type of model is not supported by scikit-learn's LogisticRegression() in Python, so I use R. Note also that it uses a lot of RAM for large datasets, so you may want to downsample your data. The following graphs illustrate how it works. The first graph is a calibration curve before recalibration. The second graph is the calibration curve after recalibration using the above model. I don't know if it will achieve the calibration that you want, but it's worth trying.
39,338
How come model prediction accuracy high but model does not generalise well
It's hard to say without digging deeply into your model and your data. However, it seems like you have been doing a lot of cross-validation, model tuning, cross-validation, model tuning and so forth. That, together with bad out-of-sample performance, suggests that you are overfitting to your test set. That is harder than overfitting in-sample (which is easy indeed), but it is quite possible to do. Essentially, if this is the problem, then your repeated model tuning cycles simply fitted it to the idiosyncrasies of the full dataset. As to what to do now: you should dig into your data. Did anything change drastically between the training and the testing data? Are there any strong predictors in the new data which did not show up as strongly in the training data? Stuff like that. But remember that the more you tweak your model, the more likely you are to overfit, so proceed with caution. Incidentally, you should be able to get at least 50% accuracy by always predicting the majority class in your holdout dataset, assuming you can identify this class beforehand. Thus, an accuracy of only 40% is a big red flag. It looks like something has changed in a major way. (Also, this simple benchmark is one reason why accuracy is not a good evaluation measure.)
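The majority-class benchmark mentioned above takes only a couple of lines to check (the class frequencies here are made up for illustration):

```python
import numpy as np

# Hypothetical holdout labels with a 60/40 class split.
y_holdout = np.array([0] * 60 + [1] * 40)

majority_class = np.bincount(y_holdout).argmax()
baseline_accuracy = np.mean(y_holdout == majority_class)
# Always predicting the majority class already scores 60% on this split,
# so a model at 40% accuracy is doing worse than this trivial baseline.
```

Computing this baseline first gives you a floor against which any reported accuracy should be judged.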
39,339
How come model prediction accuracy high but model does not generalise well
I second Stephan's answer that the likely culprit is overfitting the entire dataset. That said, another thing to validate is that there are no differences between data processing pipelines in your training vs. production code. E.g. are you normalizing the features before training? If so, do you record the means and standard deviations and apply the same normalization to live data?
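As a minimal sketch of that second point (names and numbers are illustrative; scikit-learn's StandardScaler implements the same fit/transform split), the key is that the statistics are computed once on the training data and then reused verbatim on live data:

```python
import numpy as np

class Standardizer:
    """Record training-set statistics at fit time; reuse them on live data."""
    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        self.std_ = X.std(axis=0)
        return self
    def transform(self, X):
        # Uses the stored TRAINING mean/std -- never statistics of X itself.
        return (X - self.mean_) / self.std_

rng = np.random.default_rng(1)
X_train = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))
X_live = rng.normal(loc=5.0, scale=2.0, size=(5, 3))

scaler = Standardizer().fit(X_train)   # statistics frozen here
Z_live = scaler.transform(X_live)      # live data scaled with train stats
```

Recomputing the mean and standard deviation on the live batch instead (a common bug) silently feeds the model differently-scaled inputs than it was trained on.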
39,340
How come model prediction accuracy high but model does not generalise well
To me it seems to be a data problem. You are splitting the data 70:30, but were all the data in the 70% set generated before the data in the 30% set? It can be a problem if you mix older and newer data in the training set. If time is involved in generating the data, which seems to be the case since you have live data, the test set should never contain data generated before the data in the training set.
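A chronological split along these lines might look like the following sketch (all names and sizes are illustrative):

```python
import numpy as np

# Toy records ordered by creation time (timestamps are illustrative).
n = 1000
timestamps = np.arange(n)
order = np.argsort(timestamps)     # sort oldest -> newest before splitting

split = int(0.7 * n)
train_idx = order[:split]          # oldest 70% only
test_idx = order[split:]           # newest 30% only

# Invariant: no test record predates any training record.
chronological = timestamps[train_idx].max() < timestamps[test_idx].min()
```

A random 70:30 split would violate this invariant and let the model "see the future", inflating test accuracy relative to live performance.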
39,341
Trying to calculate confidence intervals for a Monte-Carlo estimate of Pi. What am I doing wrong?
The standard error that you want is the standard deviation of the estimate $\hat\pi$ at a fixed point in the sequence, over multiple experiments. This standard error will give you a confidence interval that includes the actual value $\pi$ in 95% of experiments. You don't need a huge sample size to get a reasonable estimate of the standard error; you need a huge sample size to get an accurate estimate of $\pi$. I'm going to do the code in R; translating to Python is left as an exercise for the reader. First, let's have a look at multiple runs: ten runs of a 1000-step simulation.

  hatpi <- function(n){
    x <- runif(n)
    y <- runif(n)
    in_circle <- (x*x + y*y) <= 1
    estimate_path <- 4*cumsum(in_circle)/(1:n)
    estimate_path
  }
  plot(1:1000, hatpi(1000), type="l", lwd=2, ylab=expression(pi), xlab="throws", ylim=c(2,4))
  for(i in 2:10){
    lines(1:1000, hatpi(1000), col="#00000080")
  }
  abline(h=pi, col="red")

You can see that the variation along each curve (which is what your first method uses) is smaller than the variation between the curves. That's because points on the same curve are correlated: the estimates at 800 and 900 throws share the first 800 throws and so have a correlation of 800/900, or nearly 0.9. Your standard error formula didn't take this into account. Now, I'll run a slightly larger set of experiments, 100, and estimate the standard deviation at 100, 200, ..., 1000 throws.

  experiments <- matrix(0, ncol=10, nrow=100)
  for(i in 1:100){
    one_experiment <- hatpi(1000)
    experiments[i,] <- one_experiment[100*(1:10)]
  }
  stddevs <- apply(experiments, 2, sd)
  lines(100*(1:10), pi - 1.96*stddevs, col="purple")
  lines(100*(1:10), pi + 1.96*stddevs, col="purple")

The standard deviations look more plausible now; the $\pm 1.96$ interval around the true value covers all 10 curves nearly all the way from 100 to 1000 (so the corresponding intervals around each curve would cover the true value). We can actually do the calculations analytically here.
The variance of the binary in_circle variable is $(\pi/4)\times(1-\pi/4)$, so the standard deviation of the estimate of $\pi$ based on $n$ throws is $4\sqrt{(\pi/4)\times(1-\pi/4)/n}$.

  truesd <- function(n) 4*sqrt((pi/4)*(1-pi/4)/n)
  stddevs/truesd(100*(1:10))
   [1] 1.071722 1.015365 1.081009 1.090563 1.048619 1.043178 1.023968 1.042759 1.071379
  [10] 1.115733

Even 100 experiments gave a reasonable estimate of the standard errors. Estimating the standard errors is relatively easy, because you'll probably be happy with the standard error being within about 10% of the truth, but you're trying to get $\pi$ to much more than one digit of accuracy. Now, a larger simulation: 1000 experiments with 100,000 throws each (but plotting only every 1000th throw).

  experiments <- matrix(0, ncol=100, nrow=1000)
  for(i in 1:1000){
    one_experiment <- hatpi(100000)
    experiments[i,] <- one_experiment[1000*(1:100)]
  }
  stddevs <- apply(experiments, 2, sd)
  plot((1:100)*1000, experiments[1,], type="l", lwd=2, ylab=expression(pi), xlab="throws", ylim=c(3,3.3))
  for(i in 2:10){
    lines(1000*(1:100), experiments[i,], col="#00000080")
  }
  lines(1000*(1:100), pi - 1.96*stddevs, col="green", lwd=2)
  lines(1000*(1:100), pi + 1.96*stddevs, col="green", lwd=2)
  lines(1000*(1:100), pi - 1.96*truesd(1000*(1:100)), col="orange", lty=2, lwd=2)
  lines(1000*(1:100), pi + 1.96*truesd(1000*(1:100)), col="orange", lty=2, lwd=2)

I've only drawn 10 of the experiments, but you can still see the pattern. The estimated and analytic standard errors are almost identical (the green and orange curves are superimposed) and 9 of the 10 lines stay inside the interval. The standard error is only 0.005 after 100,000 throws, so you'll need many more than that for a good value of $\pi$. The confidence intervals were well estimated with only 100 experiments, and 1000 experiments is overkill. Finally, could you estimate the standard error from a single run? Yes, you could, but not the way you were doing it.
Averages (and variances) of the binary in_circle variable along one run estimate the same thing as averages and variances across experiments. But averages and variances of the cumulative estimator don't -- they're increasingly correlated.

  hatpi_with_sd <- function(n){
    x <- runif(n)
    y <- runif(n)
    in_circle <- (x*x + y*y) <= 1
    estimate_path <- 4*cumsum(in_circle)/(1:n)
    estimate_sd <- 4*sd(in_circle)
    estimate_se <- estimate_sd/sqrt(1:n)
    list(estimate_path, estimate_se)
  }
  experiment <- hatpi_with_sd(1000)
  plot(1:1000, experiment[[1]], type="l", lwd=2, ylab=expression(pi), xlab="throws", ylim=c(2,4))
  lines(1:1000, experiment[[1]] - 1.96*experiment[[2]], col="red")
  lines(1:1000, experiment[[1]] + 1.96*experiment[[2]], col="red")
  abline(h=pi, col="purple")

It works! The bootstrap idea is a slightly more general version of this, where the estimate_sd is based on empirical standard deviations over intervals of more than one time point, so it would work for, e.g., autocorrelation estimates as well as for means.
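The single-run estimator translates to Python along these lines (a sketch mirroring the R hatpi_with_sd above, using only NumPy; the seed is arbitrary):

```python
import numpy as np

def hatpi_with_se(n, seed=42):
    """One run: running estimate of pi plus its standard error.

    The SE comes from the i.i.d. per-throw indicator variable, not from
    the autocorrelated running estimates themselves.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=n)
    y = rng.uniform(size=n)
    in_circle = (x * x + y * y) <= 1
    k = np.arange(1, n + 1)
    estimate = 4 * np.cumsum(in_circle) / k          # running estimate of pi
    se = 4 * in_circle.std(ddof=1) / np.sqrt(k)      # SE of the estimate at k throws
    return estimate, se

est, se = hatpi_with_se(100_000)
# A 95% interval est[-1] +/- 1.96*se[-1] should cover pi in ~95% of runs.
```

The key point carries over from the R version: the standard deviation is taken over the binary indicators, then divided by sqrt(k), rather than taken over the (correlated) running estimates.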
39,342
Circular reasoning in Harrell BBR 18-19?
That argument seems circular to me; we have to assume that c-index and $\chi^2$ are superior metrics to show the flaw in accuracy as a metric

In your comments, you mention that you agree and understand why we should prefer c-index and likelihood ratio chi square (LHRCS) to classification accuracy. In your post, you also mention that you understand that accuracy is not a proper scoring rule. OK, that's well and good. The argument as you've presented it is not circular. It is, however, a bad argument: "Use LHRCS because it is a better metric in this example". But this is not a faithful representation of the argument I think Frank is trying to make. If I were to write out the argument point by point, it may go something like:

1. Proper scoring rules are preferred to improper scoring rules, since by definition proper scoring rules are maximized by the true distribution.
2. The LHRCS is derived from the log likelihood, which for a probabilistic forecast is the binomial likelihood and hence a proper scoring rule.
3. Accuracy is not a proper scoring rule (plus additional arguments against accuracy, e.g. it can be maximized by guessing the most prevalent class all the time, which does not seem like a good property, and it falsely overstates confidence by giving a 100% probability to an outcome).
4. Given our understanding of proper scoring rules, the example demonstrates that using an improper scoring rule like accuracy will result in wrong decisions about what is important (I assume here that we know a priori that sex is predictive).

This line of reasoning does not require assuming LHRCS is a superior metric, because we justify it with knowledge of proper scoring rules. EDIT: If your goal is to convince other people the LHRCS is superior, simulation is your friend. Here, I simulate Frank's example 1000 times.
  library(tidyverse)
  library(rms)

  r = rerun(1000, {
    N = 400
    age = round(rnorm(N))
    sex = rbinom(N, 1, 0.5)
    noise = rnorm(N)
    p = plogis(1.6*age + 0.5*sex)
    y = rbinom(N, 1, p)

    model_1 = lrm(y ~ age)
    model_2 = lrm(y ~ sex)
    model_3 = lrm(y ~ sex + age)
    model_4 = lrm(y ~ sex + age + noise)
    models = list(model_1, model_2, model_3, model_4)

    accs = map_dbl(models, ~{
      preds = as.integer(predict(.x) > 0.5)
      Metrics::accuracy(y, preds)
    })
    aics = map_dbl(models, AIC)

    X1 = anova(model_1)['TOTAL','Chi-Square'] - anova(model_1)['TOTAL','d.f.']
    X2 = anova(model_2)['TOTAL','Chi-Square'] - anova(model_2)['TOTAL','d.f.']
    X3 = anova(model_3)['TOTAL','Chi-Square'] - anova(model_3)['TOTAL','d.f.']
    X4 = anova(model_4)['TOTAL','Chi-Square'] - anova(model_4)['TOTAL','d.f.']
    X = c(X1, X2, X3, X4)

    tibble(
      right_accs = which.max(accs) == 3,
      right_xs = which.max(X) == 3,
      right_aics = which.min(aics) == 3
    )
  }) %>% map_dfr(~.x)

  r %>% summarise_all(mean)

I'm fairly confident Frank's chi-square in the example is a partial chi-square, so I've tried to use that in this example. The true model has the largest accuracy less than half the time and the largest partial chi-square a little more than half the time. So clearly better, although still not bulletproof. The results change quite a lot with more data. Even with 4000 observations (an order of magnitude more), the right model has the largest accuracy about half the time but has the largest chi-square 8 times out of 10!
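The proper-scoring-rule point in the argument can also be seen without fitting any model. In this illustrative Python sketch (the probabilities and sample size are made up), accuracy cannot distinguish a well-calibrated forecast from a degenerate "always the majority class" forecast, while log loss, a proper score, is minimized at the true probability:

```python
import numpy as np

rng = np.random.default_rng(3)
p_true = 0.3
y = rng.binomial(1, p_true, size=100_000)

def log_loss(y, p):
    # Mean negative log likelihood of a constant probability forecast p.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Thresholded at 0.5, a calibrated forecast of 0.3 and the degenerate
# forecast "always class 0" make identical predictions, so accuracy
# cannot tell them apart ...
acc_calibrated = np.mean((np.full(y.shape, 0.3) > 0.5).astype(int) == y)
acc_degenerate = np.mean(np.zeros_like(y) == y)

# ... but the proper score picks out the true probability.
candidates = [0.1, 0.3, 0.5, 0.7]
best = min(candidates, key=lambda p: log_loss(y, p))
```

This is the content of points 1 and 3 above in miniature: accuracy is blind to calibration, a proper score is not.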
39,343
how to generate data from cdf which is not in closed form?
Solution

By exploiting the principles explained at https://stats.stackexchange.com/a/495347/919, we can examine the expression for $$f_X(x,\beta,\alpha) \propto \frac{(x/\alpha)^{\beta+1}}{\left(1 + (x/\alpha)^\beta\right)^2}\frac{\mathrm{d}x}{x}$$ and see that if it is to make any sense at all (a question I will prescind from until later), then any random variable $X$ with this density must be the $1/\beta$ power of some random variable $Z$ with an F ratio distribution. The latter's density is determined by two parameters (its "degrees of freedom," both of which are positive) as $$f_Z(z,\nu_1,\nu_2) \propto \frac{z^{\nu_1/2}}{\left(1 + \frac{\nu_1}{\nu_2}z\right)^{(\nu_1+\nu_2)/2}} \frac{\mathrm{d}z}{z}.$$ Setting $(x/\alpha)^\beta = z$ and comparing these expressions gives $$\left\{\begin{aligned}\nu_1 &= 2 + \frac{2}{\beta},\\\nu_2 &= 2 - \frac{2}{\beta}.\end{aligned}\right.$$ Since $\nu_2\gt 0,$ this works only if $\beta \gt 1.$ At the end of this post I will demonstrate that $f_X$ is undefined for $0\lt \beta\le 1,$ showing that nothing is lost by viewing $X$ as a power of an $F_{2+2/\beta,2-2/\beta}$ variable. (Incidentally, the series expansion in the question fails to converge for $x\gt \alpha$ and so is invalid. The CDF of an F-ratio distribution is given by a regularized incomplete Beta function.) Consequently, (1) $f_X$ does define a distribution for $\beta \gt 1$ and (2) to draw a random variate $X$ with this distribution, draw a random variable $Z$ from an F-ratio distribution with $2+2/\beta$ and $2-2/\beta$ degrees of freedom and set $$ X = \alpha \left(Z \left(\frac{\beta+1}{\beta-1}\right)\right)^{1/\beta}.$$

Practical matters

Because the F-ratio distribution is widely used in statistics and is simply related to other common distributions (such as Beta, Gamma, and Chi-squared distributions), efficient, accurate methods to compute with it abound. To generate random variates with an F distribution I used the rf function in R.
As a demonstration, for three widely varying values of $\beta$ I generated 100,000 independent realizations of $X$ in this manner, plotted the histogram of $\log(X)$ (because the distribution can be very skewed), and overplotted that with the graph of $f_X(;\beta,\alpha).$ The agreements are excellent. (The vertical lines show the modes of these distributions.) The R command to produce these data was

  x <- alpha * (rf(n, 2+2/beta, 2-2/beta) * (beta+1)/(beta-1))^(1/beta)

Further analysis and limitations

Let's derive the distribution given by $f_X.$ Suppose $Y$ has a standard logistic distribution, which means it has a density $f_Y$ defined on all real numbers $y$ by $$f_Y(y) = \frac{e^y}{(1+e^y)^2} = \frac{\mathrm{d}}{\mathrm{d}y} \frac{1}{1 + e^{-y}}.$$ Because the right hand side (the antiderivative) increases from $0$ to $1,$ it is a cumulative distribution function, whence $f_Y$ is a probability density function. Now suppose $Y$ is the logarithm of some (positive) random variable $X;$ that is, $X = e^Y.$ Then the probability element of $X$ is $$f_X(x)\mathrm{d}x = f_Y(\log x)\mathrm{d}(\log x) = \frac{x}{(1+x)^2}\frac{\mathrm{d}x}{x} = \frac{1}{(1+x)^2}\mathrm{d}x. $$ When $Y$ is rescaled by a factor $\beta\gt 0,$ the rules of exponentiation imply $X$ is transformed to $X^\beta.$ Under this transformation the probability element $\mathrm{d}x/x$ is merely multiplied by $\beta,$ showing immediately that the density of $X^\beta$ is proportional to $$f_X(x,\beta) \propto \frac{x^\beta}{\left(1 + x^\beta\right)^2} \frac{\mathrm{d}(x^\beta)}{x^\beta} \propto \frac{x^{\beta-1}}{\left(1 + x^\beta\right)^2}\mathrm{d}x.$$ Introducing a scale factor $\alpha$ for $X$ (which would be the exponential of a location parameter for $Y$) gives the log-logistic distribution family, $$f_X(x,\beta,\alpha) \propto \frac{(x/\alpha)^{\beta-1}}{\left(1 + (x/\alpha)^\beta\right)^2}\mathrm{d}x.$$ This distribution can be "length extended" in various ways.
By comparison to the question, it is evident that $f_X(x,\beta,\alpha)$ has been changed to $xf_X(x,\beta,\alpha).$ This weights the probability densities at larger $x\gt 0$ directly proportional to $x,$ skewing the distribution to the right. This helps us understand what $f_X$ is, what it is intended to accomplish, and how it relates to familiar distributions. It is time to consider the implicit proportionality constants. For this length-extended log-logistic (LELL) distribution to be defined, it must be possible to scale all densities so they integrate to unity. The integral of $x f_X(x,\beta,\alpha)$ is (by definition) the expectation of $X,$ equal to $$E[X] = \frac{\alpha \pi/\beta}{\sin(\pi/\beta)}$$ provided $\beta\gt 1.$ This is why its reciprocal appears as a factor of $f_X$ in the question. When $0 \lt \beta \le 1,$ the LELL construction does not work. Let's see why not. The scale factor doesn't matter, so take $\alpha=1.$ For any $T\gt 1,$ a series of elementary estimates based on $x^\beta \gt 1$ and $x^{-\beta}\ge x^{-1}$ when $x\gt 1$ (when $0\lt \beta\le 1$) gives $$\begin{aligned} E[X\mid \beta,1] &= \int_\mathbb{R} x f_X(x,\beta,1)\,\mathrm{d}x\\ &= \int_0^\infty x \frac{x^{\beta-1}}{\left(1 + x^\beta\right)^2}\mathrm{d}x \\ &\ge \int_1^\infty \frac{x^{\beta}}{\left(1 + x^\beta\right)^2}\mathrm{d}x \\ &\gt \int_1^\infty \frac{x^{\beta}}{\left(x^\beta + x^\beta\right)^2}\mathrm{d}x \\ &= \frac{1}{4}\int_1^\infty x^{-\beta}\,\mathrm{d}x\\ &\gt \frac{1}{4}\int_1^T x^{-\beta}\,\mathrm{d}x\\ &= \frac{1}{4(1-\beta)}\left(T^{1-\beta}-1\right). \end{aligned}$$ Because the final expression grows arbitrarily large as $T$ grows, $E[X]$ cannot be finite when $\beta \le 1.$ This makes it impossible to construct an LELL distribution for $\beta \le 1.$

Working code

This is a series of R commands. On my system, the command to draw independent values following the $f_X$ density will produce four million (4,000,000) values per second (on one core).
#
# The PDF in the question.
#
f <- function(x, beta, alpha=1) {
  y <- (x/alpha)^beta
  beta^2 * sin(pi/beta) / pi * y / (1 + y)^2 / alpha
}
#
# The mode of log(X) where the density of X is `f`.
#
mode <- function(beta, alpha=1) {
  log(alpha) + log((1+beta)/(beta-1)) / beta
}
#
# Simulations.
#
n <- 1e5                 # Simulation size
log.alpha <- 1
alpha <- exp(log.alpha)  # Scale; must be positive
beta <- c(1.1, 2, 1e3)   # The beta values to use

par(mfrow=c(1,length(beta)))
set.seed(17)
for (beta in beta) {
  #
  # Generate random variates.
  #
  x <- alpha * (rf(n, 2+2/beta, 2-2/beta) * (beta+1)/(beta-1))^(1/beta)
  #
  # Plot a histogram of log(x) (because X is so skewed).
  #
  m <- mode(beta, alpha)
  ymax <- f(exp(m), beta, alpha)*exp(m)
  hist(log(x), freq=FALSE, breaks=80, ylim=c(0, ymax), col="#f0f0f0", border="Gray")
  abline(v = m) # Mark the mode
  mtext(bquote(list(alpha==e^.(log.alpha), beta==.(beta))), side=3, line=0, cex=0.7)
  #
  # Overplot the PDF to check.
  #
  curve(f(exp(x), beta, alpha)*exp(x), add=TRUE, n=501, lwd=2, col="Red")
}
par(mfrow=c(1,1))
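For what it's worth, the construction is easy to check outside R as well. The following Python sketch (my own translation, using scipy's F distribution) verifies numerically that the change of variables reproduces the density in the question, and draws from it exactly as the R one-liner does. The target mean $\alpha/\cos(\pi/\beta)$ used in the final check is my own derivation from the moment formulas (it holds for $\beta\gt 2$); it is not stated in the answer above.

```python
import numpy as np
from scipy import stats

alpha, beta = 1.0, 4.0                 # arbitrary test values; any beta > 1 samples fine
d1, d2 = 2 + 2 / beta, 2 - 2 / beta    # the F degrees of freedom derived above

def f_question(x):
    # the normalized PDF from the question: beta^2*sin(pi/beta)/pi * y/(1+y)^2 / alpha
    y = (x / alpha) ** beta
    return beta ** 2 * np.sin(np.pi / beta) / np.pi * y / (1 + y) ** 2 / alpha

def f_via_transform(x):
    # density implied by X = alpha*((beta+1)/(beta-1) * Z)^(1/beta), Z ~ F(d1, d2)
    z = (x / alpha) ** beta * (beta - 1) / (beta + 1)
    return stats.f.pdf(z, d1, d2) * beta * z / x   # times |dz/dx|

# draw from the distribution exactly as in the R one-liner
rng = np.random.default_rng(17)
z = stats.f.rvs(d1, d2, size=200_000, random_state=rng)
x = alpha * (z * (beta + 1) / (beta - 1)) ** (1 / beta)
```

The two density functions agree to machine precision on any grid, which is a direct numerical confirmation of the identification with the F distribution.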
how to generate data from cdf which is not in closed form?
Searching for something related

Step 1: use a transformation to simplify the distribution. With $t = (x/\alpha)^\beta$ or $x=\alpha t^{1/\beta}$ it becomes $$f(t) \propto \frac{t^{1/\beta}}{(1+t)^{2}}$$

Step 2: compare with other distributions. We can use the following quotient to see if this is a Pearson distribution $$f'(t)/f(t) = \frac{\frac{1}{1-2 \beta}+t}{(t+t^2)/(1/\beta-2)} $$ which is indeed of the Pearson form. Searching a bit further, we see that it is related to the type VI distribution, that is, the F-distribution or Beta prime distribution. The Beta prime density below directly resembles what we had above $$\frac{1}{B(a,b)} \frac{x^{a-1}}{(1+x)^{a+b}}$$ Matching exponents gives $a = 1 + 1/\beta$ and $b = 1 - 1/\beta$, so this works if $1/\beta<1$ (because the beta prime distribution has the restrictions $a>0$ and $b>0$). So you can sample $t$ from the beta prime distribution with any of the software packages that provide it, and then transform to $x$.
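Here is a hedged sketch of that recipe in Python with scipy (the answer names no particular package; the exponent matching gives $a = 1+1/\beta$ and $b = 1-1/\beta$, and the values of $\alpha$ and $\beta$ below are arbitrary test choices of mine):

```python
import numpy as np
from scipy import stats
from scipy.special import beta as B

beta_, alpha_ = 4.0, 1.0            # beta_ > 1 is required so that b > 0
a, b = 1 + 1/beta_, 1 - 1/beta_     # match t^(a-1)/(1+t)^(a+b) to t^(1/beta)/(1+t)^2

# sanity check of the identification against scipy's beta prime density
t0 = 1.7
assert np.isclose(stats.betaprime.pdf(t0, a, b),
                  t0 ** (1 / beta_) / (1 + t0) ** 2 / B(a, b))

# sample t from the beta prime distribution and transform back to x
rng = np.random.default_rng(1)
t = stats.betaprime.rvs(a, b, size=100_000, random_state=rng)
x = alpha_ * t ** (1 / beta_)
```

The inline assertion is a deterministic check that scipy's beta prime parameterization really matches the density $t^{1/\beta}/(1+t)^2$ up to the constant $1/B(a,b)$.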
how to generate data from cdf which is not in closed form?
I cannot make sense of your PDF. So I will show an acceptance-rejection method using a PDF that does make sense to me. The method is similar to that of @Ertxiem (+1), except that I guess mine works better if your (correct) density function has support $(0,\infty).$ Both methods are general ones, and I hope you can use one of them to sample from your density when you get its proper formulation.

Suppose you want to sample from the 'half normal' distribution of $X =|Z|,$ where $Z$ is standard normal, so that the density function of $X$ is $f(x) = \frac{2}{\sqrt{2\pi}}e^{-x^2/2},$ for $x > 0.$ One can show that $E(X) = \sqrt{2/\pi},\;$ $Var(X) = 1 - 2/\pi.$ We want to take a sample of size $n = 10^6$ from this distribution. We cannot write the CDF of this density function or its inverse in closed form. For this demonstration, forget that the standard normal CDF has been extensively tabled and that its inverse function is implemented in R and other statistical software.

One solution is to find an 'envelope function' $g(x)\ge f(x),$ for $x > 0,$ where $g(x)$ is a multiple of the density function of a distribution from which we do know how to sample. Here, we can take $g(x) = 1.4e^{-x},$ which is $1.4$ times the density function of $\mathsf{Exp}(1).$

hdr = "Half Normal Density with Envelope"
curve(2*dnorm(x), 0, 4, ylim=c(0,1.4), lwd=2, ylab="Density", main=hdr)
curve(1.4*dexp(x), add=T, col="blue")
abline(h=0, col="green2"); abline(v=0, col="green2")

Now we generate a random sample y of size $n$ from $\mathsf{Exp}(1),$ by applying its quantile function to standard uniform random variables.
n = 10^6; y = -log(runif(n))  # exponential
u = runif(n, 0, 1.4*exp(-y))
acc = u < 2*dnorm(y)
x = y[acc]

# for verification: about 2 or 3 places
mean(x); sqrt(2/pi)
[1] 0.7974243  # aprx E(X)
[1] 0.7978846  # exact
sd(x); sqrt(1 - 2/pi)
[1] 0.6024202  # aprx SD(X)
[1] 0.6028103  # exact

hdr = "Histogram of Sampled Points with Density"
hist(x, prob=T, col="skyblue2", main=hdr)
curve(2*dnorm(x), col="maroon", 0, 5, add=T)

The following figure shows some of the accepted points (blue) and some of those not accepted (orange). [For clarity in the figure, not all simulated points are shown.] The vector x contains the x-coordinates of all accepted points.

R code for image above:

curve(2*dnorm(x), 0, 4, lwd=2, col="blue", ylim=c(0,1.42), xaxs="i",
      ylab="Density", main="Accepted (x) Beneath Folded Normal Density")
curve(1.4*exp(-abs(x)), col="maroon", add=T)
abline(h=0, col="green2")
plt = 20000; y.pl = y[1:plt]  # plot fewer points for clarity
u.pl = u[1:plt]; acc.pl = acc[1:plt]
points(y.pl, u.pl, pch=".", col="orange")
points(y.pl[acc.pl], u.pl[acc.pl], pch=".", col="skyblue4")

Notes: (1) In the first block of code to generate values x, I could have used rexp(n) to generate y instead of applying the quantile function to uniform random variables with -log(runif(n)). Also, instead of 2*dnorm(y), I could have used (2/sqrt(2*pi))*exp(-0.5*y^2). (2) I'm not sure if you'll know the mean, variance, or other facts about your density function, so you may not have anything to 'verify' after taking your sample x. But you could still plot a histogram of the simulated data and overlay your density function. (3) In order for my method to work, you will have to be able to find some distribution that can be simulated, and for which a multiple of the density majorizes your density.
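For readers working in Python, the same accept-reject scheme translates directly. This is a sketch only; the $1.4\,e^{-x}$ envelope and the verification targets $\sqrt{2/\pi}$ and $\sqrt{1-2/\pi}$ come from the answer above, and the grid check of the envelope is an extra safeguard I added.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

def f(x):
    # half-normal density: 2*phi(x) on x > 0
    return np.sqrt(2 / np.pi) * np.exp(-x ** 2 / 2)

def g(x):
    # envelope: 1.4 times the Exp(1) density
    return 1.4 * np.exp(-x)

# confirm the envelope really majorizes f (the maximum of f/g is about 0.94, near x = 1)
grid = np.linspace(0, 20, 2001)
assert (f(grid) <= g(grid)).all()

y = rng.exponential(size=n)        # proposals from Exp(1)
u = rng.uniform(0, g(y))           # uniform heights under the envelope
x = y[u < f(y)]                    # keep the points that fall under f

accept_rate = len(x) / n           # theoretical rate is 1/1.4, about 0.714
```

The acceptance rate equals the ratio of the areas under $f$ and $g$, namely $1/1.4$, which is another easy check on the implementation.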
how to generate data from cdf which is not in closed form?
You may try to use a Monte-Carlo method. Generate a random value for $(x, \alpha, \beta)$ and $f_e$. If $f_e < f(x, \alpha, \beta)$ accept that triple, otherwise, reject it. This is easier to make work if $(x, \alpha, \beta)$ and $f$ are bounded than if they're unbounded.

Edit: I will assume that $x \in [x_L; x_U]$, $\alpha \in [\alpha_L; \alpha_U]$, $\beta \in [\beta_L; \beta_U]$ and $f \in [f_L; f_U]$, with $f_L = 0$.

Step 1: draw randomly from a uniform distribution $x_e \in [x_L; x_U]$, $\alpha_e \in [\alpha_L; \alpha_U]$, $\beta_e \in [\beta_L; \beta_U]$ and $f_e \in [f_L; f_U]$.

Step 2: If $f_e < f(x_e, \alpha_e, \beta_e)$, then the values $(x_e, \alpha_e, \beta_e)$ are valid and you can add that triple to the generated dataset; otherwise reject the triple.

Repeat these two steps until you have a large enough dataset.

Note: in case $\alpha$ and $\beta$ are parameters, you'll just have to draw $x_e$.
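The two steps above can be sketched in Python for the bounded, fixed-parameter case. Since the question's density was unclear, the target here is my own illustrative pick: the Beta(2,2) density $f(x) = 6x(1-x)$ on $[0,1]$, bounded by $f_U = 1.5$.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # example bounded target: Beta(2,2) density, maximum 1.5 at x = 1/2
    return 6 * x * (1 - x)

xL, xU, fL, fU = 0.0, 1.0, 0.0, 1.5
n = 200_000

x_e = rng.uniform(xL, xU, n)   # step 1: candidate x values
f_e = rng.uniform(fL, fU, n)   # step 1: candidate heights
sample = x_e[f_e < f(x_e)]     # step 2: keep the points falling under the curve
```

The fraction kept is about $1/(f_U (x_U - x_L)) = 2/3$ here, so tightening the bounds directly improves efficiency.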
Let $X,X_1,X_2,X_3,...$ be positive integer random variables. Show that $X_n \overset{d}{\to} X$ implies $\lim_{n\to\infty} P(X_n=k) = P(X=k)$
$F$ actually is continuous at all non-integers (where, of course, its graph is horizontal, reflecting zero chance that $X$ will be nonintegral). Consequently, in particular, the convergence in distribution of the sequence implies convergence at (say) the values $k\pm 1/2,$ from which you may instantly conclude for any integral $k$ that $$\begin{aligned} \lim_{n\to\infty}\Pr(X_n=k) &= \lim_{n\to\infty}\left(\Pr(X_n\le k+1/2) - \Pr(X_n\le k-1/2)\right) \\ &= \lim_{n\to\infty}\left(F_{X_n}(k+1/2) - F_{X_n}(k-1/2)\right)\\ &= \lim_{n\to\infty} F_{X_n}(k+1/2) - \lim_{n\to\infty}F_{X_n}(k-1/2)\\ &= F(k+1/2) - F(k-1/2)\\ &= \Pr(X=k), \end{aligned}$$ QED.
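The half-integer device is easy to watch in action. Here is a hedged numerical illustration (my own choice of example) using the classical convergence in distribution of $\mathrm{Binomial}(n, \lambda/n)$ to $\mathrm{Poisson}(\lambda)$: the limiting pmf at $k$ is recovered from CDF values at $k \pm 1/2$, exactly as in the argument above.

```python
from scipy import stats

lam, k = 4.0, 3

def pmf_from_cdf(dist, k):
    # Pr(X = k) recovered as F(k + 1/2) - F(k - 1/2), as in the argument above
    return dist.cdf(k + 0.5) - dist.cdf(k - 0.5)

target = stats.poisson(lam).pmf(k)
for n in (10, 1_000, 100_000):
    approx = pmf_from_cdf(stats.binom(n, lam / n), k)
    print(n, approx)
```

The printed values approach the Poisson pmf as $n$ grows, since the integer-valued variables $X_n$ place no mass at the continuity points $k \pm 1/2$.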
Let $X,X_1,X_2,X_3,...$ be positive integer random variables. Show that $X_n \overset{d}{\to} X$ implies $\lim_{n\to\infty} P(X_n=k) = P(X=k)$
Following on from whuber's excellent answer, we can also see that this theory applies for any set of random variables with countable support, even if they are not positive integers. To see this, consider a sequence of random variables with distribution on a countable support $\mathscr{X} \subset \mathbb{R}$. For any such set, there exist continuous non-decreasing functions $u:\mathbb{R} \rightarrow \mathbb{R}$ and $r:\mathbb{R} \rightarrow \mathbb{R}$ with the property that: $$x' < r(x) < x < u(x) < x'' \quad \quad \quad \text{for all } \ x' < x < x'' \ \text{ that are all in } \mathscr{X}.$$ (Proving this result is a useful follow-up exercise you might like to try.) It follows that for all $x \in \mathscr{X}$ (and for any random variable in the sequence) we have: $$\mathbb{P}(X_n = x) = \mathbb{P}(r(x) < X_n \leqslant u(x)) = F_{X_n}(u(x)) - F_{X_n}(r(x)).$$ By analogy to the solution for the case of positive integer random variables, we then have: $$\begin{aligned} \lim_{n \rightarrow \infty} \mathbb{P}(X_n = x) &= \lim_{n \rightarrow \infty} \Big[ F_{X_n}(u(x)) - F_{X_n}(r(x)) \Big] \\[10pt] &= \lim_{n \rightarrow \infty} F_{X_n}(u(x)) - \lim_{n \rightarrow \infty} F_{X_n}(r(x)) \\[10pt] &= F_X(u(x)) - F_X(r(x)) \\[12pt] &= \mathbb{P}(X = x). \\[10pt] \end{aligned}$$
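A tiny sanity check of the construction in Python (the lattice, the probabilities, and the choices $u(x) = x + 1/4$, $r(x) = x - 1/4$ are mine, picked so that consecutive support points are separated):

```python
import numpy as np

support = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # a countable (here finite) support
probs = np.array([0.10, 0.25, 0.30, 0.20, 0.10, 0.05])

def F(t):
    # CDF of the discrete distribution above
    return probs[support <= t].sum()

u = lambda x: x + 0.25   # continuous increasing, lands strictly between lattice points
r = lambda x: x - 0.25

# P(X = x) recovered as F(u(x)) - F(r(x)) at every support point
recovered = np.array([F(u(x)) - F(r(x)) for x in support])
```

Because the lattice spacing is $1/2$, both $u$ and $r$ satisfy the separation property required in the proof.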
Why is the chi-square test more popular than the G-test?
The Pearson test is popular because it's simple to compute - it's amenable to hand-calculation even without a calculator (or historically, even without log-tables) - and yet generally has good power compared to alternatives; the simplicity means it continues to be taught in the most basic subjects. It might be argued that there's an element of technological inertia in the choice, but actually I think the Pearson chi-squared is still an easily defendable choice in a wide range of situations. Being derived from a likelihood ratio test, the Neyman-Pearson lemma would suggest that the G-test should tend to have more power in large samples, but generally the Pearson chi-squared test has similar power in large samples (asymptotically it should be equivalent in the Pitman sense - there's some brief discussion about various kinds of asymptotics below - but here I just mean what you tend to see in large samples with a small effect size and at typical significance levels, without worrying about a particular sequence of tests by which $n\to\infty$.) On the other hand, in small samples, the set of available significance levels has more impact than asymptotic power; I don't think there's usually a big difference, but in some situations one or the other may have an advantage*. * But in that case the neat trick of combining the two may be even better - that is, using one statistic to break ties in another (non-equivalent) test when you have small samples, increasing the set of available significance levels -- and so improving power by allowing the type I error rate to be closer to a desired significance level without having to do something as unappetizing as randomized tests.
I think that usually these will only be distinct when the tables are larger than 2x2, and in that case, it can also work with the rxc version of the Fisher exact test as well; in that case all three tests tend to differ somewhat in their ordering of tables and so the discreteness of any of the test statistics can be broken up more finely still by adding a second stage of tie-breaking, in some cases allowing a noticeably finer range of potential significance levels. This can sometimes help a lot by allowing the actual significance level to approach some desired arbitrary value more closely without relying on randomized tests (which may not be particularly palatable in practice, for all their value theoretically). Both the Pearson and G-test may be placed into the set of (Cressie-Read) power-divergence statistics (Cressie and Read, 1984 [1]), by setting $\lambda=1$ and $\lambda=0$ respectively; this family of statistics includes several other previously defined statistics, such as the Neyman ($\lambda=-2$) and the Freeman-Tukey statistic ($\lambda=-\frac12$) among others, and in that context - considering several criteria - Cressie and Read suggested that the statistic with $\lambda=\frac23$ is a good compromise choice for a statistic. The efficiency issue is worth a brief mention; each definition compares the ratio of sample sizes under two tests. Loosely, Pitman efficiency considers a sequence of tests with fixed level $\alpha$ where the sample sizes achieve the same power over a sequence of ever-smaller effect sizes, while Bahadur efficiency holds the effect size fixed and considers a sequence of decreasing significance levels. (Hodges-Lehmann efficiency holds the significance level and effect size constant and lets the type II error rate decrease toward 0.)
Outside of some statisticians, most users of statistics don't seem to consider using different significance levels; in that sense, the sort of behavior we might tend to see if a sequence of increasing sample sizes were available would hold the significance level constant (for all that other choices might be wiser; Bahadur efficiency can also be difficult to calculate). In any case, Pitman efficiency is the most often used. On this topic, P. Groeneboom and J. Oosterhoff (1981) [2] mention (in their abstract): "the asymptotic efficiency in the sense of Bahadur often turns out to be quite an unsatisfactory measure of the relative performance of two tests when the sample sizes are moderate or small."

On the removed paragraph from Wikipedia: it's complete nonsense and it was rightly removed. Likelihood ratio tests were not invented until decades after Pearson's paper on the chi-squared test. The awkwardness of computing the likelihood ratio statistic in a pre-calculator era was in no sense a consideration for Pearson then, since the concept of likelihood ratio tests simply didn't exist. Pearson's actual considerations are reasonably clear from his original paper. As I see it, he takes the form of the statistic directly from the term (aside from the $-\frac12$) in the exponent in the multivariate normal approximation to the multinomial distribution. If I were writing the same thing now, I'd characterize it as the (squared) Mahalanobis distance from the values expected under the null.

"it makes you wonder why there isn't an R function for the G-test."

It can be found in one or two packages. However, it's so simple to calculate, I never bother to load them. Instead I usually compute it directly from the data and the expected values that are returned by the function that calculates the Pearson chi-squared statistic (or occasionally - at least in some situations - I compute it instead from the output of the glm function).
Just a couple of lines in addition to the usual chisq.test call are sufficient; it's easier to write it fresh from scratch each time than to load a package to do it. Indeed, you can also do an "exact" test based on the G-test statistic (conditioning on both margins), using the same method that chisq.test does, by using r2dtable to generate as many random tables as you like (I tend to use many more tables than the default used by chisq.test in R, unless the original table is so large that it would take a very long time).

References

[1]: Cressie, N. and Read, T.R. (1984), "Multinomial Goodness-of-Fit Tests," Journal of the Royal Statistical Society: Series B (Methodological), 46, pp. 440-464.
[2]: Groeneboom, P. and Oosterhoff, J. (1981), "Bahadur Efficiency and Small-sample Efficiency," International Statistical Review, 49, pp. 127-141.
39,350
Idea for using a mixed model: using a variable with all possible categories as a random effect
This is an interesting question. Indeed, understanding random effects $u_i$ as draws from a larger population is well in line with the underlying model, which treats them as a random variable, e.g.:

$u_i \sim_{iid} Normal(0, \sigma^2)$

But there are arguments for treating entities as coming from a larger population even if your sample is a full census. Any realized set of entities can be viewed as "chosen at random by Nature", an argument made by Freedman (2005). Relatedly, Deming (1953) has argued that if a full census is used to solve what he calls an analytic problem - i.e., when inferring an underlying relationship or process with the goal of generalizing (as in your case) rather than just counting - even a full census should be treated as a sample subject to sampling error. This would also justify viewing your counties as coming from a larger distribution (in Deming's words, from the "causal system" that produced them).

It will probably depend on your discipline how strictly people treat the different assumptions made in random-effects models, but the random-effect independence assumption is probably more relevant and more empirically verifiable than how you view your population of counties. Given the efficiency advantage of RE over FE models and the convenient estimation of the between-county variance in RE models, which is one of your side goals, I would advise the RE model (provided the other assumptions, like the independence assumption, make sense in your context and do no harm).

References:
Freedman, D. A. (2005). Statistical Models: Theory and Practice. Cambridge, UK: Cambridge University Press.
Deming, W. E. (1953). On the distinction between enumerative and analytic surveys. Journal of the American Statistical Association, 48(262), 244-255.
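The "full census still behaves like a sample" point can be illustrated with a small simulation sketch in Python (all numbers here are hypothetical, not from the question): even when every one of 200 counties is observed, the county intercepts behave like draws from a distribution, and a simple method-of-moments decomposition recovers the between-county variance that an RE model would estimate.

```python
import random
import statistics

random.seed(1)

TAU2, SIGMA2 = 1.0, 4.0   # assumed true between-county and within-county variances
N_COUNTIES, M = 200, 50   # a "full census" of counties, M observations each

county_means = []
within_vars = []
for _ in range(N_COUNTIES):
    u = random.gauss(0, TAU2 ** 0.5)  # county random intercept
    ys = [u + random.gauss(0, SIGMA2 ** 0.5) for _ in range(M)]
    county_means.append(statistics.mean(ys))
    within_vars.append(statistics.variance(ys))

sigma2_hat = statistics.mean(within_vars)  # pooled within-county variance
# Var(county mean) = tau^2 + sigma^2 / M, so subtract the sampling part:
tau2_hat = statistics.variance(county_means) - sigma2_hat / M
```

This is only a sketch of the variance decomposition; in practice one would fit the model with dedicated software (e.g., lme4 in R or statsmodels' MixedLM in Python), which handles unbalanced data and gives standard errors.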
39,351
Idea for using a mixed model: using a variable with all possible categories as a random effect
Yes, I think your intuition is correct, and you can fit random intercepts. There are several criteria for assessing whether a factor should be treated as random, and being a sample from a population is only one. Often different criteria point in different directions, and it is a matter of judgement. In this case, if you really wanted to be strict about it, you could think of your population as a sample from a population of similar entities in similar countries - or as a sample from a population in a different universe! I say that lightheartedly, of course. The fact is that you have clustered data, and this is one of the main use cases for random effects.
39,352
Why is the convergence rate important?
Perhaps the two most familiar and most used limit theorems are the Central Limit Theorem (CLT) and the Law of Large Numbers (LLN). Both are useful for proving other theoretical results. Here I discuss a few kinds of practical applications in which one hopes the sample size is large enough to use the CLT and LLN to make useful approximations.

CLT. If $X_i,$ for $i = 1,2,3,\dots,$ is a random sample from a distribution with mean $\mu$ and variance $\sigma^2 < \infty,$ then the limiting distribution of $$Z_n = \frac{\sum_{i=1}^n X_i - n\mu}{\sigma\sqrt{n}} = \frac{\bar X -\mu}{\sigma/\sqrt{n}}$$ is the standard normal distribution $\mathsf{Norm}(0,1).$ Depending on the shape of the distribution of the $X_i,$ this convergence can be very fast or rather slow.

Sample from a uniform population: For example, if $X_i \sim \mathsf{Unif}(0,1),$ then the sum $\sum_{i=1}^{12} X_i$ of a sample of size only $n = 12$ has very nearly the distribution $\mathsf{Norm}(6, 1),$ so $Z = \sum_{i=1}^{12} X_i - 6$ is very nearly standard normal. In the early days of computation this fact was used to sample from the standard normal distribution using only variables from a random number generator that are indistinguishable in practice from independent standard uniform random variables, along with simple arithmetic.

The R code below uses this method to generate 5000 values that are difficult to distinguish from standard normal. The mean of these 5000 values is very nearly $0$ and their standard deviation is very nearly $1.$ Also, a Shapiro-Wilk normality test does not reject the null hypothesis that they are normal.

set.seed(422)
z = replicate(5000, sum(runif(12)) - 6)
mean(z);  sd(z)
[1] 0.001091293   # aprx 0
[1] 1.00467       # aprx 1

However, more sensitive tests do detect that these 5000 values are not exactly standard normal.
In particular, all random variables $Z$ generated by this method lie between $\pm 6.$ So, although the convergence is very fast, twelve observations are not enough for a perfect fit to standard normal.

Sample from an exponential population. The extreme right-skewness of exponential random variables causes the convergence guaranteed by the CLT to be rather slow. The mean of a random sample of size 12 from the distribution $\mathsf{Exp}(1)$ has the distribution $\mathsf{Gamma}(\mathrm{shape}=12, \mathrm{rate}=12),$ which is again noticeably right-skewed. [The density function is shown in the left panel of the figure below.] However, the mean of 100 standard exponential random variables has the distribution $\mathsf{Gamma}(100,100)$ [black density in the right panel], which is very nearly $\mathsf{Norm}(1, 0.01)$ [broken red]. The CLT is "working" as promised, but much more slowly than for sums of uniformly distributed random variables.

Binomial approximation to normal. Also, by applying the CLT to independent Bernoulli random variables with success probability $p,$ one can approximate some binomial probabilities using normal distributions. Using the binomial probability functions in R and other widely used statistical software, it is now easy and often better to get exact binomial probabilities. Even so, normal approximations are still widely used. Various 'rules of thumb' have been suggested to determine when $n$ is large enough for a good normal approximation to $\mathsf{Binom}(n,p).$ Many of these try to avoid substantial normal probability outside $(0, n).$ Perhaps the most popular rule is that $\min(np, n(1-p)) \ge 5.$ (I have seen bounds 3, 10, etc. by less or more fastidious authors.) This rule largely ignores that approximations tend to be better for $p \approx 1/2$ (for any $n$) because better fits are possible when the binomial distribution in question is nearly symmetrical.
The two graphs below show a bad normal approximation to $\mathsf{Binom}(20, .2)$ on the left and relatively good ones for $\mathsf{Binom}(10, .5)$ and $\mathsf{Binom}(40, .5)$ in the center and on the right. In particular, if $X \sim \mathsf{Binom}(20,.2),$ then the exact probability is $P(1.5 < X < 4.5) = 0.5605,$ but the normal approximation gives $0.5289.$ However, if $X \sim \mathsf{Binom}(40,.5),$ we have $P(9.5 < X < 20.5) = 0.5623$ exactly, and the approximation gives $0.5624.$ In general use with $\min(np,n(1-p)) \ge 5,$ one hopes the approximation is accurate to about two decimal places.

LLN. If $X_i,$ for $i = 1,2,3,\dots,$ is a random sample from a distribution with mean $\mu$ and variance $\sigma^2 < \infty,$ then the sequence of sample means $\bar X_n = \frac 1n\sum_{i=1}^n X_i$ converges in probability to $\mu.$ That is, $\lim_{n\rightarrow\infty} P(|\bar X_n - \mu| < \epsilon) = 1,$ for any $\epsilon > 0.$

The words "large numbers" in the name of the theorem suggest that the theorem is a useful approximation only for large $n.$ For example, in a public opinion poll we may get Yes and No answers from subjects. If $1$ stands for Yes and $0$ for No, then the proportion $p$ of Yes opinions in the population is estimated by $\hat p_n = \bar X_n,$ the mean of the 0's and 1's. The LLN guarantees that, for sufficiently large $n,$ it is very likely that $\hat p_n$ is within $\epsilon$ of $p.$ However, in order for the result to be useful, $\epsilon$ needs to be small, say $\epsilon = 0.02.$ The following simulation makes a 'trace' of the successive values of $\hat p_n$ as we interview increasingly many subjects.
Suppose $p = 0.55.$ At the start the trace fluctuates widely, and then for large $n$ it begins to "settle" near $p.$

set.seed(2020)
n = 3000;  p = 0.55
x = sample(0:1, n, rep=T, prob=c(1-p, p))
p.hat = cumsum(x)/(1:n)
plot(p.hat, ylim=c(.4,.6), type="l", lwd=2, xaxs="i")
abline(h = p, col="green2")
abline(h = c(p+.02, p-.02), col="red")

This run was a 'lucky' one; it often takes about 2500 interviews before the trace settles to within $\pm 2\%$ of the population proportion. That is not to say that the LLN is useless for practical purposes because of its relatively slow convergence; it's just that this theorem doesn't guarantee pollsters an easy life.
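The binomial figures quoted above are easy to verify. The following sketch in Python (the answer's own code is in R; these helper names are mine) computes the exact probability from the binomial pmf and the continuity-corrected normal approximation for the $\mathsf{Binom}(20, .2)$ case:

```python
from math import comb, erf, sqrt

def binom_pmf(k, n, p):
    # P(X = k) for X ~ Binom(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def norm_cdf(x, mu, sd):
    # Normal CDF via the error function
    return 0.5 * (1 + erf((x - mu) / (sd * sqrt(2))))

n, p = 20, 0.2
mu, sd = n * p, sqrt(n * p * (1 - p))

# Exact P(1.5 < X < 4.5) = P(2 <= X <= 4)
exact = sum(binom_pmf(k, n, p) for k in range(2, 5))
# Normal approximation with continuity correction
approx = norm_cdf(4.5, mu, sd) - norm_cdf(1.5, mu, sd)

print(round(exact, 4), round(approx, 4))   # 0.5605 0.5289, as in the text
```

Here $\min(np, n(1-p)) = 4 < 5$, so the rule of thumb correctly warns that the approximation (off in the second decimal place) is poor.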
39,353
Why is the convergence rate important?
Here is an example of how to apply such theorems. Let us say that we want to fit a function $g$ to some observed data, and let us assume that the setting is 'good' in the sense that the data really comes from a true function $f,$ that the observed data points really come from IID random variables or so (assumptions that we can never truly verify nor falsify for real-world data!), and that the assumptions of the convergence theorem hold. Let us say that the theorem states that the error $|f-g|$ is roughly $1/n,$ where $n$ is the number of data points observed.

Let us say that we start with $10$ data points. Then the error will be roughly $1/10 = 0.1$ - a number that is small but not 'impressively small', I would say. If we take $100$ data points, then the error will be roughly $1/100 = 0.01.$ So far so good. So we see that knowing the rate of convergence lets us compute a minimal number of data points that we need in order to achieve a certain error. Let us say that we are talking about a physics experiment where the data is some sensor data, and we really need the temperature to be captured up to an error of $0.0001$ (otherwise the experiment will fail or something). Then how many data points do we need? Given the convergence rate, we know that we need roughly $10000$ data points.

This is one of the applications of the convergence rate, but it has more uses in theory, I guess... If I remember correctly, there are situations like this: if some $g$ converges 'fast enough', then it might help you show (in a purely mathematical sense) that the target function $f$ lies in a special space of functions. That in turn has to be read as 'if we want a theorem like this with functions $g,$ then we MUST assume that the target function $f$ lies in that special space; otherwise it will not work'.

NB: To be precise, we usually want to fit a sequence of functions $g_n$ to $f,$ where $g_n$ comes from some kind of training routine involving $n$ data points.
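The sample-size arithmetic above can be captured in a few lines of Python (a hypothetical helper, just to make the inversion of the rate explicit): if the error behaves like $n^{-r},$ then hitting a target error $\epsilon$ requires $n \ge \epsilon^{-1/r}.$

```python
import math

def required_n(eps, rate=1.0):
    """Smallest n with error bound n**(-rate) <= eps.
    Sketch only: assumes the error really behaves like n^(-rate)."""
    n = eps ** (-1.0 / rate)
    return math.ceil(round(n, 6))  # round guards against tiny floating-point error

print(required_n(0.1))                 # 10 data points for error 0.1
print(required_n(0.0001))              # 10000 data points for error 0.0001
print(required_n(0.0001, rate=0.5))    # a slower n^(-1/2) rate needs 100000000
```

The last line shows why the rate matters so much in practice: at the common $n^{-1/2}$ statistical rate, the same error target costs $10^8$ rather than $10^4$ observations.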
39,354
What does $N(x|\mu, \sigma^2)$ mean?
Here, it means the normal PDF: $$\mathcal{N}(x|\mu,\sigma^2)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-(x-\mu)^2/2\sigma^2}$$ Writing $\mu,\sigma^2$ on the "given" (conditioning) side means that you can treat them as known quantities.
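As a quick sanity check of the formula, here is a direct Python implementation (a hypothetical helper, not from the question): for $\mu=0,\ \sigma^2=1$ the density at its mode $x=0$ is $1/\sqrt{2\pi}\approx 0.3989.$

```python
import math

def normal_pdf(x, mu, sigma2):
    """Density N(x | mu, sigma^2), exactly as in the formula above."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

print(round(normal_pdf(0.0, 0.0, 1.0), 4))   # 0.3989
```

Note the convention: the second parameter is the variance $\sigma^2,$ not the standard deviation, which is a common source of off-by-a-square bugs.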
39,355
What does $N(x|\mu, \sigma^2)$ mean?
$N(x|\mu, \sigma^2)$ combines the two notations: $x \sim N(\mu, \sigma^2)$ and $p(x| \mu, \sigma^2)$. So it reads: $x$ is normally distributed with parameters $\mu, \sigma^2$.
39,356
probability that random draws from the same pool will collectively select 90% of the pool
The answer is $n=17$. I can't see an easy analytic solution to this question. Instead, we will develop an analytic solution to a closely related problem, and then find the answer to your exact question via simulation.

Clarification: Since the question is slightly vague, let me re-state the problem. There are $200$ names on a list and $n$ names will be selected from this list without replacement. This process, using the full $200$ names each time, is repeated a total of $30$ times.

A related problem. Let $X_i$ equal $1$ if the $i^{th}$ name is selected at least once and $0$ otherwise. This implies that $$X = \sum_{i=1}^{200}X_i$$ represents the total number of names which are selected at least once. Since the $X_i$ are dependent, the exact distribution of $X$ is non-trivial, and the original question is hard to answer. Instead, we can easily determine the value of $n$ such that $90\%$ of the names are selected on average. First, note that $$P(X_i = 0) = \left(\frac{200 - n}{200}\right)^{30}$$ which implies $$E(X_i) = P(X_i =1) = 1 - \left(1- \frac{n}{200}\right)^{30}.$$ Now by linearity of expectation we have $$E(X) = \sum_{i=1}^{200}E(X_i) = 200\left(1 - \left(1- \frac{n}{200}\right)^{30}\right).$$ For this expectation to equal $90\%$ of the names, we need to set $E(X) = 180$ and solve for $n$. This gives $$n = 200\left(1 - (1 - 0.9)^{1/30}\right) = 14.776.$$ Thus $n=15$ names should be drawn from the list each time for this to occur on average. This is close to (but not the same as) the answer to the original question, since hitting the target on average corresponds to only about $50\%$ certainty. To achieve $90\%$ certainty, we will need to increase $n$.

Simulations. First, we write a function which is able to generate $X$ a large number (say $M$) times for a given value of $n$.
sample_X <- function(n, M){
  X <- rep(NA, M)
  for(i in 1:M){
    # Set all names to not-yet-selected
    names <- rep(FALSE, 200)
    # Repeat the drawing process 30 times
    for(k in 1:30){
      # Sample n names from the list without replacement
      selection <- sample(200, n, replace = FALSE)
      # Mark that these names have been selected
      names[selection] <- TRUE
    }
    # Let X be the number of distinct names selected
    X[i] <- sum(names)
  }
  return(X)
}

Now, for a given value of $n$ we can approximate "the probability that at least $90\%$ of the names are selected", i.e. $P(X \geq 180)$. In R, this probability can be approximated by typing:

X <- sample_X(n, M = 10000)
prob <- mean(X >= 180)

Repeating this for $n = 14, 15, \cdots, 20$ gives us the following plot. From the plot, we can determine that $n=17$ names must be selected in each round for the probability of selecting at least $180$ names to exceed $0.9$. The blue line in the figure shows the exact simulations detailed above. The orange line is an approximation obtained by ignoring the dependency of the $X_i$ (see previous section) and assuming that $$X \sim \text{Binom}\left(200, 1 - \left(1- \frac{n}{200}\right)^{30}\right).$$ Although the assumption of independence is obviously incorrect, the probabilities obtained by this simple assumption are reasonably close to the simulated probabilities.
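As a cross-check of the expectation calculation above, the closed-form value $n = 200\left(1 - (1-0.9)^{1/30}\right)$ is easy to evaluate directly (a quick sketch of my own, not part of the original answer):

```python
m, d, alpha = 200, 30, 0.9  # names on the list, repetitions, target fraction

# n such that the *expected* number of distinct names equals alpha * m
n = m * (1 - (1 - alpha) ** (1 / d))
print(n)  # ≈ 14.776, so 15 names per draw suffice on average
```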
39,357
probability that random draws from the same pool will collectively select 90% of the pool
Here is a general analytic solution that does not require simulation. This is a variation on the classical occupancy problem, where at each of the thirty sampling points you sample a lot of names, instead of a single name. The simplest way to compute this result is by framing the problem as a Markov chain, and then computing the required probability using the appropriate power of the transition probability matrix. For the sake of broader interest to other users, I will generalise from your example by considering a list with $m$ names, with each sample selecting $1 \leqslant h \leqslant m$ names (using simple-random-sampling without replacement). The general problem and its solution: Let $0 \leqslant K_{n,h} \leqslant m$ denote the number of names that have been sampled after we sample $n$ times with each lot sampling $h$ names. For a fixed value $h$ the stochastic process $\{ K_{n,h} | n = 0,1,2,... \}$ satisfies the Markov assumption, so it is a Markov chain. Since each sampling lot is done using simple-random-sampling without replacement, the transition probabilities for the chain are given by the hypergeometric probabilities: $$P_{t,t+r} \equiv \mathbb{P}(K_{n,h} = t+r | K_{n-1,h} = t) = \frac{{m-t \choose r} {t \choose h-r}}{{m \choose h}}.$$ Let $\mathbf{P}_h$ denote the $(m+1) \times (m+1)$ transition probability matrix composed of these probabilities. If we start at the state $K_{0,h} = 0$ then we have: $$\mathbb{P}(K_{n,h} = k) = [ \mathbf{P}_h^n ]_{0,k}.$$ This probability can be computed by matrix multiplication, or by using the spectral decomposition of the transition probability matrix. It is relatively simple to compute the mass function of values over $k=0,1,...,m$ for any given values of $n$ and $h$. This allows you to compute the marginal probabilities associated with the Markov chain, to solve the problem you have posed. The problem you have posed is a case of the following general problem.
For a specified minimum proportion $0 < \alpha \leqslant 1$ and a specified minimum probability $0 < p < 1$, we seek the value: $$h_* \equiv h_* (\alpha, p) \equiv \min \{ h = 1,...,m | \mathbb{P}(K_{n,h} \geqslant \alpha m) \geqslant p \}.$$ In your problem you have $m=200$ names in your list and you are taking $n=30$ samples. You seek the value $h_*$ for the proportion $\alpha = 0.9$ and the probability cut-off $p = 0.9$. This value can be computed by computing the relevant marginal probabilities of interest in the Markov chain.

Implementation in R: We can implement the above Markov chain in R by creating the transition probability matrix and using this to compute the marginal probabilities of interest. We can compute the marginal probabilities of interest using standard analysis of Markov chains, and then use these to compute the required number of names $h_*$ in each sample. In the code below we compute the solution to your problem and show the relevant probabilities increasing over the number of samples (this code takes a while to run, owing to the computation of matrix-powers in log-space).

#Create function to compute marginal distribution of Markov chain
COMPUTE_DIST <- function(m, n, H) {

  #Generate empty matrix of occupancy probabilities
  DIST <- matrix(0, nrow = H, ncol = m+1);

  #Compute the occupancy probabilities
  for (h in 1:H) {

    #Generate the transition probability matrix
    STATES <- 0:m;
    LOGP <- matrix(-Inf, nrow = m+1, ncol = m+1);
    for (t in 0:m) {
      for (r in t:m) {
        LOGP[t+1, r+1] <- lchoose(m-t, r-t) + lchoose(t, h-r+t) - lchoose(m, h);
      }
    }
    PP <- exp(LOGP);

    #Compute the occupancy probabilities
    library(expm);
    DIST[h, ] <- (PP %^% n)[1, ];
  }

  #Give the output
  DIST;
}

#Compute the probabilities for the problem
m <- 200;
n <- 30;
H <- 20;
DIST <- COMPUTE_DIST(m, n, H);

From the marginal probabilities for the Markov chain, we can now compute the required value $h_*$ for your particular problem.
#Set parameters for problem
alpha <- 0.9;
cutoff <- ceiling(alpha*m);
p <- 0.9;

#Find the required value
PROBS <- rowSums(DIST[, (cutoff+1):(m+1)]);
hstar <- 1 + sum(PROBS < p);

#Show the solution and its probability
hstar;
[1] 17

PROBS[hstar];
[1] 0.976388

We can see here that we require $h_* = 17$ samples in order to obtain a minimum $p=0.9$ probability of sampling at least $\alpha \cdot m = 180$ of the names on the list. Below we show a plot of the probabilities for values $h=1,...,20$ with the required value highlighted in red.

#Plot the probabilities and the solution
library(ggplot2);
THEME <- theme(plot.title = element_text(hjust = 0.5, size = 14, face = 'bold'),
               plot.subtitle = element_text(hjust = 0.5, face = 'bold'));
DATA <- data.frame(h = 1:H, Probability = PROBS);
ggplot(aes(x = h, y = Probability), data = DATA) +
  geom_point(size = 3, colour = 'blue') +
  geom_point(size = 4, colour = 'red', data = DATA[hstar, ]) +
  geom_hline(yintercept = p, size = 1, linetype = 'dashed') +
  geom_segment(aes(x = hstar, y = 0, xend = hstar, yend = DATA[hstar, 2]),
               colour = 'red', size = 1) +
  annotate("text", x = hstar + 1, y = 0.1, label = paste0('h = ', hstar),
           colour = 'red', fontface = 'bold') +
  THEME +
  ggtitle('Probability of required occupancy') +
  labs(subtitle = paste0('(Occupancy problem taking ', n, ' samples of size h from ', m,
                         ' units) \n (We require ', sprintf(100*alpha, fmt = '%#.1f'),
                         '% occupancy with ', sprintf(100*p, fmt = '%#.1f'),
                         '% probability)'));
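For readers without R, the same Markov chain calculation can be sketched in plain Python using only the standard library. This is my own translation of the idea, not the original code; it builds the hypergeometric transition probabilities and applies the chain $n$ times, reproducing $h_* = 17$ for the problem at hand:

```python
from math import comb

def occupancy_prob(m, n, h, cutoff):
    # P(K_{n,h} >= cutoff): probability that after n samples of size h
    # from m names, at least `cutoff` distinct names have appeared.
    denom = comb(m, h)
    # Transition probabilities P[t][k]: hypergeometric chance of moving
    # from t seen names to k seen names (i.e. drawing k - t new ones).
    P = [[comb(m - t, k - t) * comb(t, h - (k - t)) / denom
          if 0 <= k - t <= h else 0.0
          for k in range(m + 1)]
         for t in range(m + 1)]
    dist = [0.0] * (m + 1)
    dist[0] = 1.0            # start with no names seen
    for _ in range(n):       # apply the chain n times
        new = [0.0] * (m + 1)
        for t, pt in enumerate(dist):
            if pt:
                for k in range(t, min(t + h, m) + 1):
                    new[k] += pt * P[t][k]
        dist = new
    return sum(dist[cutoff:])

print(occupancy_prob(200, 30, 17, 180))  # ≈ 0.9764
print(occupancy_prob(200, 30, 16, 180))  # ≈ 0.8849
```

Working directly with `math.comb` avoids the log-space trick in the R code, since Python's integers are exact and the final division is done once per matrix entry.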
39,358
probability that random draws from the same pool will collectively select 90% of the pool
The answer is $n = 17$, with $P(N_{30}\ge180)=0.976388$. The approach I took to calculate the probability after 30 draws was to determine the probability of drawing seen vs. unseen names at each round. When drawing $n$ names out of $p=200$ after having seen $s$ of them, let's call $U_s$ the number of names out of those $n$ which were previously unseen. Then we have: $$P(U_s = u) = \frac{\text{P}(200-s, u) \text{P}(s, n-u) \text{C}(n, u)}{\text{P}(200, n)}$$ The first term counts the permutations of the $u$ previously unseen names, the second the permutations of the previously seen ones. The last term $\text{C}(n, u)$ accounts for the $u$ unseen names coming in different positions out of the $n$ drawn. The denominator accounts for all possible draws of $n$ names.

Having calculated that, we can look at successive draws of names. Let's call $N_d$ the total number of names after draw $d$. Before the first draw, there will be no previously seen names, so in the first draw all $n$ names will be seen for the first time. $$P(N_1=n)=1$$ We can then calculate the probability of drawing a certain number of names on draw $N_{d+1}$ by looking at the possibilities of drawing after $N_d$ and having a specific number of previously unseen names, which we can calculate with: $$P(N_{d+1} = x) = \sum_{i=0}^{n}{P(N_d = x-i) P(U_{x-i} = i)}$$ For example, if we're drawing $n=16$ every time, then drawing exactly 180 names in total in a specific drawing can be arrived at by drawing 164 names in the previous drawing and then drawing exactly 16 unseen names (totalling 180), or having previously seen 165 names and drawing 15 unseen and one previously seen name, and so on, up to the possibility of having seen 180 names in the previous iteration and drawing all 16 previously seen names. At this point we can use iteration to calculate $P(N_{30} \ge 180)$ for different values of $n$.
Iteration in Python: This code uses Python 3 and as written requires Python 3.8 for math.comb() and math.perm() from the standard library (if using an older version of Python, you can use a different implementation of those functions). Let's start with $P(U_s = u)$:

from functools import lru_cache
from math import comb, perm

@lru_cache
def prob_unseen(n, p, s, u):
    # Return the probability of drawing exactly $u$ unseen names
    # when drawing $n$ names out of a total of $p$,
    # having previously seen $s$ of them.
    return (perm(p-s, u) * perm(s, n-u) * comb(n, u)
            / perm(p, n))

Pretty straightforward. Now for $P(N_d = x)$ let's use a list of 201 elements (indices go from 0 to 200) to track the probabilities for each $x$:

def names_in_draw(prev_draw, n):
    # Calculate probabilities of finding exactly $x$ names in this
    # draw, for every $x$, taking into consideration the probabilities
    # of having drawn specific numbers of names in the previous draw.
    p = len(prev_draw) - 1
    this_draw = [0.0] * (p+1)
    for x in range(n, p+1):
        this_draw[x] = sum(
            prev_draw[x-u] * prob_unseen(n, p, x-u, u)
            for u in range(n+1))
    return this_draw

Finally, let's calculate the probability for the number of names after $d$ draws.

def total_names(n, p, d):
    # Calculate probabilities for finding exactly $x$ names
    # after $d$ draws.
    draw = [0.0] * (p+1)
    draw[n] = 1.0  # first draw
    for _ in range(d - 1):
        draw = names_in_draw(draw, n)
    return draw

We start from the first draw, where we know for sure we'll draw $n$ unique names. Then we repeatedly calculate the probabilities $d-1$ times. Finally, we can calculate the probability of drawing at least $x$ names, drawing $n$ out of $p$ at a time, performing $d$ drawings:

def prob_names(n, p, d, x):
    # Return the probability of seeing at least $x$ names after $d$
    # drawings, each of which draws $n$ out of $p$ names.
    return sum(total_names(n, p, d)[x:])

Finally, we can run this for a few values of $n$ to find the probabilities:

>>> for i in range(13, 20):
...     print(i, prob_names(i, 200, 30, 180))
13 0.058384795418431244
14 0.28649904267865317
15 0.6384959089930037
16 0.8849450106842117
17 0.976388046862824
18 0.9966940083338005
19 0.9996649977705089

So $n=17$ is the answer, with a probability of 97.6388% of seeing at least 90% of the names. $n=16$ comes close, with 88.4945%. (Since I had the code, I also looked at how many drawings of a single name are needed to see 90% of the names, with 90% probability. It turns out it's 503 drawings, or 454 drawings to see 90% of the names with 50% probability. Quite an interesting result!)
39,359
How do I show that $Y=2\sqrt{X_1X_2}\sim$Gamma$(2p,1)$?
If $X$ is a Gamma $\mathcal G(p,1)$ variate, its density is $$f_X(x)= \frac{x^{p-1}}{(p-1)!}e^{-x}\mathbb I_{\mathbb R^*_+}(x)$$ Therefore, the probability density of $Y_1=\sqrt{X_1}$ is $$f_{Y_1}(y)=\frac{y^{2(p-1)}}{(p-1)!}e^{-y^2}\mathbb I_{\mathbb R^*_+}(y)\times\overbrace{\left|\frac{\text{d}x}{\text{d}y}\right|}^{\text{Jacobian}} =2y\times\frac{y^{2(p-1)}}{(p-1)!}e^{-y^2}\mathbb I_{\mathbb R^*_+}(y) =\frac{2y^{2p-1}}{(p-1)!}e^{-y^2}\mathbb I_{\mathbb R^*_+}(y)$$ And the probability density of $Y_2=\sqrt{X_2}$ is $$f_{Y_2}(y)=\frac{2y^{\overbrace{2p+1-1}^{2p}}}{\Gamma(p+1/2)}e^{-y^2}\mathbb I_{\mathbb R^*_+}(y)$$ Hence the joint density of $(Y_1,Y_2)$ is $$g(y_1,y_2)=\frac{4y_1^{2p-1}y_2^{2p}}{(p-1)!\Gamma(p+1/2)}e^{-y_1^2-y_2^2}\mathbb I_{\mathbb R^*_+}(y_1)\mathbb I_{\mathbb R^*_+}(y_2)$$ Considering the change of variables from $(y_1,y_2)$ to $(z=y_1y_2,y_2)$, the joint density of $(Z,Y_2)$ is $$h(z,y_2)=g(z/y_2,y_2)\overbrace{\left|\frac{\text{d}(y_1,y_2)}{\text{d}(z,y_2)}\right|}^{\text{Jacobian}}=g(z/y_2,y_2)\left|\frac{\text{d}y_1}{\text{d}z}\right|=g(z/y_2,y_2)y_2^{-1}$$ and the density of $Z$ is the marginal \begin{align*} f_Z(z) &= \int_0^\infty g(z/y_2,y_2)y_2^{-1}\,\text{d}y_2\\ &=\int_0^\infty \frac{4z^{2p-1}y_2^{2p-(2p-1)}}{(p-1)!\Gamma(p+1/2)}e^{-z^2y_2^{-2}-y_2^2}y_2^{-1}\,\text{d}y_2\\ &= \frac{4z^{2p-1}}{(p-1)!\Gamma(p+1/2)}\int_0^\infty e^{-z^2y_2^{-2}-y_2^2}\,\text{d}y_2\\ &= \frac{4z^{2p-1}}{(p-1)!\Gamma(p+1/2)} \frac{\sqrt{\pi}}{2}e^{-2z} \end{align*} [the last integral is formula 3.325 in Gradshteyn & Ryzhik, 2007] Hence the density of $S=2Z$ is $$f_S(s)=\frac{\sqrt{\pi}2^{1-2p}s^{2p-1}}{(p-1)!\Gamma(p+1/2)}e^{-s}=\frac{s^{2p-1}}{\Gamma(2p)}e^{-s}$$ [where the constant simplifies by the Legendre duplication formula]
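A quick Monte Carlo sanity check of this result, using only the standard library (my own sketch; here $X_1\sim\mathcal G(p,1)$ and $X_2\sim\mathcal G(p+1/2,1)$, as in the densities above):

```python
import random
from math import sqrt

random.seed(0)
p, N = 1.7, 200_000

# Draw 2*sqrt(X1*X2) with X1 ~ Gamma(p, 1), X2 ~ Gamma(p + 1/2, 1)
s = [2.0 * sqrt(random.gammavariate(p, 1.0) * random.gammavariate(p + 0.5, 1.0))
     for _ in range(N)]

mean = sum(s) / N
var = sum((v - mean) ** 2 for v in s) / N
# A Gamma(2p, 1) variate has mean 2p and variance 2p
print(mean, var)  # both ≈ 3.4 for p = 1.7
```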
How do I show that $Y=2\sqrt{X_1X_2}\sim$Gamma$(2p,1)$?
If $X$ is a Gamma $\mathcal G(p,1)$ variate, its density is $$f_X(x)= \frac{x^{p-1}}{(p-1)!}e^{-x}\mathbb I_{\mathbb R^*_+}(x)$$ Therefore, the probability density of $Y_1=\sqrt{X_1}$ is $$f_{Y_1}(y)=
How do I show that $Y=2\sqrt{X_1X_2}\sim$Gamma$(2p,1)$? If $X$ is a Gamma $\mathcal G(p,1)$ variate, its density is $$f_X(x)= \frac{x^{p-1}}{(p-1)!}e^{-x}\mathbb I_{\mathbb R^*_+}(x)$$ Therefore, the probability density of $Y_1=\sqrt{X_1}$ is $$f_{Y_1}(y)=\frac{y^{2(p-1)}}{(p-1)!}e^{-y^2}\mathbb I_{\mathbb R^*_+}(y)\times\overbrace{\left|\frac{\text{d}x}{\text{d}y}\right|}^{\text{Jacobian}} =2y\times\frac{y^{2(p-1)}}{(p-1)!}e^{-y^2}\mathbb I_{\mathbb R^*_+}(y) =\frac{2y^{2p-1}}{(p-1)!}e^{-y^2}\mathbb I_{\mathbb R^*_+}(y)$$ And the probability density of $Y_2=\sqrt{X_2}$ is $$f_{Y_2}(y)=\frac{2y^{\overbrace{2p+1-1}^{2p}}}{\Gamma(p+1/2)}e^{-y^2}\mathbb I_{\mathbb R^*_+}(y)$$ Hence the joint density of $(Y_1,Y_2)$ is $$g(y_1,y_2)=\frac{4y_1^{2p-1}y_2^{2p}}{(p-1)!\Gamma(p+1/2)}e^{-y_1^2-y_2^2}\mathbb I_{\mathbb R^*_+}(y_1)\mathbb I_{\mathbb R^*_+}(y_1)$$ Considering the change of variables from $(y_1,y_2)$ to $(z=y_1y_2,y_2)$, the joint density of $(Z,Y_2)$ is $$h(z,y_2)=g(z/y_2,y_2)\overbrace{\left|\frac{\text{d}(y_1,y_2)}{\text{d}(z,y_2)}\right|}^{\text{Jacobian}}=g(z/y_2,y_2)\left|\frac{\text{d}y_1}{\text{d}z}\right|=g(z/y_2,y_2)y_2^{-1}$$ and the density of $Z$ is the marginal \begin{align*} f_Z(z) &= \int_0^\infty g(z/y_2,y_2)y_2^{-1}\,\text{d}y_2\\ &=\int_0^\infty \frac{4z^{2p-1}y_2^{2p-(2p-1)}}{(p-1)!\Gamma(p+1/2)}e^{-z^2y_2^{-2}-y_2^2}y_2^{-1}\,\text{d}y_2\\ &= \frac{4z^{2p-1}}{(p-1)!\Gamma(p+1/2)}\int_0^\infty e^{-z^2y_2^{-2}-y_2^2}\,\text{d}y_2\\ &= \frac{4z^{2p-1}}{(p-1)!\Gamma(p+1/2)} \frac{\sqrt{\pi}}{2}e^{-2z} \end{align*} [the last integral is formula 3.325 in Gradshteyn & Ryzhik, 2007] Hence the density of $S=2Z$ is $$f_S(s)=\frac{\sqrt{\pi}2^{1-2p}s^{2p-1}}{(p-1)!\Gamma(p+1/2)}e^{-s}=\frac{s^{2p-1}}{\Gamma(2p)}e^{-s}$$ [where the constant simplifies by the Legendre duplication formula]
39,360
How do I show that $Y=2\sqrt{X_1X_2}\sim$Gamma$(2p,1)$?
I think the moment generating function approach works fine, but it is easier if we consider the MGF of $\ln Y$. Assuming of course that $\mathsf{Gamma}(p,1)$ refers to the shape-$p$ parameterization, i.e. with density $$f(x)=\frac{e^{-x}x^{p-1}}{\Gamma(p)}\mathbf1_{x>0}$$ with $p>0$ as in @Xi'an's answer. We have \begin{align} E\left[e^{t\ln Y}\right]&=E\left[Y^t\right] \\&=2^tE\left[X_1^{t/2}\right]E\left[X_2^{t/2}\right] \end{align} For $t>-2p$ where $p>0$, clearly $$E\left[X_1^{t/2}\right]=\frac{\Gamma\left(p+\frac t2\right)}{\Gamma(p)}$$ And $$E\left[X_2^{t/2}\right]=\frac{\Gamma\left(p+\frac t2+\frac12\right)}{\Gamma\left(p+\frac12\right)}$$ Using Legendre's duplication formula, \begin{align} E\left[e^{t\ln Y}\right]&=2^t\cdot \frac{\Gamma(2p+t)\sqrt\pi/2^{2p+t-1}}{\Gamma(2p)\sqrt\pi/2^{2p-1}} \\&=\frac{\Gamma(2p+t)}{\Gamma(2p)} \end{align} This is the MGF of the logarithm of a $\mathsf{Gamma}(2p,1)$ distribution evaluated at $t$ (or simply the $t$th-order raw moment about $0$) where $t\in (-2p,\infty)$. As the MGF exists in an open interval containing $0$, we can conclude that $Y\sim \mathsf{Gamma}(2p,1)$ by the uniqueness theorem of MGFs. In effect we see that this Gamma distribution is uniquely determined by its moments.
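The duplication-formula step above can be checked numerically. A small Python sketch (my addition, not part of the answer) comparing $2^t\,\Gamma(p+\tfrac t2)\Gamma(p+\tfrac t2+\tfrac12)/\big(\Gamma(p)\Gamma(p+\tfrac12)\big)$ against $\Gamma(2p+t)/\Gamma(2p)$ for a few $(p,t)$ with $t>-2p$:

```python
from math import gamma

def moment_lhs(p, t):
    """E[Y^t] assembled from the two independent Gamma moments."""
    return 2 ** t * gamma(p + t / 2) / gamma(p) * gamma(p + t / 2 + 0.5) / gamma(p + 0.5)

def moment_rhs(p, t):
    """t-th raw moment of a Gamma(2p, 1) variate: Gamma(2p + t) / Gamma(2p)."""
    return gamma(2 * p + t) / gamma(2 * p)

for p in (0.5, 1.5, 3.0):
    for t in (-0.5, 0.0, 1.0, 2.5):
        assert abs(moment_lhs(p, t) - moment_rhs(p, t)) < 1e-9 * moment_rhs(p, t)
```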
39,361
How do I show that $Y=2\sqrt{X_1X_2}\sim$Gamma$(2p,1)$?
... Comment continued: R code for the simple simulation is as shown below. Unfortunately, the simulation gives no clue how to work the problem. (See @Xi'an's Answer.)
set.seed(1023)
p = 2; m = 10^6
x1 = rgamma(m,p,1); x2 = rgamma(m,p+.5,1)
y = 2*sqrt(x1*x2)
hist(y, br=60, prob=T, col="skyblue2")
curve(dgamma(x,2*p,1), add=T, col="red", n=10001)
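For completeness, the same sanity check can be run in Python with only the standard library; this sketch is my own, not part of the original comment. (`random.gammavariate(alpha, beta)` uses a shape/scale parameterization, so `beta=1` matches $\mathcal G(\cdot,1)$; a Gamma$(2p,1)$ variate has mean and variance both equal to $2p$.)

```python
import random

random.seed(1023)
p, m = 2.0, 200_000

# Y = 2*sqrt(X1*X2) with X1 ~ Gamma(p, 1), X2 ~ Gamma(p + 1/2, 1)
ys = [2 * (random.gammavariate(p, 1) * random.gammavariate(p + 0.5, 1)) ** 0.5
      for _ in range(m)]

# A Gamma(2p, 1) variate has mean 2p and variance 2p.
mean = sum(ys) / m
var = sum((y - mean) ** 2 for y in ys) / m
assert abs(mean - 2 * p) < 0.05
assert abs(var - 2 * p) < 0.1
```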
39,362
Model ensembling - averaging of probabilities
From the law of total probability we know that for disjoint events $H_n$ we can calculate: $$P(A) = \sum_n P(A|H_n) \, P(H_n)$$ Basically, if $P(A|H_n), n:1,...,N$ are different networks emitting probabilities, and the $H_n$ form a disjoint hypothesis space, then the result is a probability. When doing simple averaging, one assumes that $P(H_n) = \frac{1}{N}$ for all $n:1,..,N$; a discrete uniform distribution. The biggest problem with this kind of average is that nobody really checks whether the hypotheses are in fact disjoint, or whether it makes sense to assign equal probabilities to each. Hypotheses usually end up being very similar to each other. As a result, mathematically speaking the result is still a probability, but from a Bayesian averaging point of view it is not a well-thought-out prior.
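A minimal Python sketch of the point (my illustration; the model outputs are made up): a uniform-weight average of class-probability vectors is a convex combination, hence again nonnegative and summing to 1:

```python
# Averaging the class-probability outputs of N models: the result is again
# a valid probability vector because it is a convex combination.
preds = [
    [0.7, 0.2, 0.1],    # model 1, P(A | H_1) over three classes
    [0.5, 0.3, 0.2],    # model 2
    [0.9, 0.05, 0.05],  # model 3
]
weights = [1 / len(preds)] * len(preds)  # uniform P(H_n), as in simple averaging

pooled = [sum(w * p[c] for w, p in zip(weights, preds))
          for c in range(len(preds[0]))]

assert all(p >= 0 for p in pooled)
assert abs(sum(pooled) - 1.0) < 1e-12
```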
39,363
Model ensembling - averaging of probabilities
As I already noticed in my comment, you can find a partial answer to your question and further details in my other answer and the references provided there. What you seem to be asking is "how do we know that the average of probability forecasts is a valid probability?", at least this is how I understand it. Your question asks about taking averages of multiple probabilistic forecasts to form a pooled forecast, so it is closely related to linear opinion pools (Stone, 1961). The first thing to notice is that a probability forecast is in fact a conditional probability distribution. Taking the arithmetic mean is a special case of taking a weighted sum $\sum_k w_k x_k$ with $\forall\, w_k > 0$ and $\sum_k w_k = 1$, where $w_1 = w_2 = \dots = w_n = n^{-1}$, so it is a convex combination. A weighted sum of probability distributions leads to a mixture distribution $$ p(x) = \sum_k w_k \,p_k(x) $$ where the $p_k$ are probability density (or mass) functions. As already said by Cowboy Trader, you can think of this in terms of the basic laws of probability. Given the properties of the weights $w_k$, we can think of them as probabilities; the most meaningful interpretation would be to consider them as prior probabilities for choosing those forecasts. In that case, their joint distribution is $$ p(x, k) = p_k(x) \,w_k = p(x|k) \, p(k) $$ which follows from the definition of conditional probability. When we have the joint distribution, we can calculate its marginal distribution by the law of total probability $$ p(x) = \sum_k p(x, k) = \sum_k p(x|k) \, p(k) $$ If you also want to ask "why do people use it?", then the answer is: because it just works.
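A small Python sketch of a linear opinion pool (my illustration, using two hypothetical Gaussian forecast densities): the pooled mixture $p(x)=\sum_k w_k\,p_k(x)$ is itself a density, as a crude numerical integration confirms:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def mixture_pdf(x, weights, params):
    """p(x) = sum_k w_k p_k(x) -- a linear opinion pool of density forecasts."""
    return sum(w * normal_pdf(x, mu, s) for w, (mu, s) in zip(weights, params))

weights = [0.5, 0.5]               # equal weights: the simple average
params = [(0.0, 1.0), (2.0, 0.5)]  # two hypothetical forecast densities

# Crude trapezoidal check that the pooled density still integrates to 1.
a, b, n = -10.0, 10.0, 20_000
h = (b - a) / n
total = sum(mixture_pdf(a + i * h, weights, params) for i in range(n + 1))
total -= 0.5 * (mixture_pdf(a, weights, params) + mixture_pdf(b, weights, params))
assert abs(total * h - 1.0) < 1e-4
```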
39,364
Model ensembling - averaging of probabilities
Yes, there is a theoretical basis, and no, we don't know why it works. Look up the "forecast combination puzzle" on the internet, e.g. this presentation, p. 20. Somehow, a simple average of multiple models appears to outperform single-model forecasts and weighted-average forecasts in practice. There are many hypotheses about why this happens, but there is no consensus in the forecasting literature. It could be because the optimal weights in a weighted-average combination are estimated with too much noise, so in the end a simple average works better.
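The variance-reduction half of the story is easy to demonstrate. A hypothetical Python simulation (my sketch, not from the presentation): two unbiased forecasters with independent unit-variance errors, where the simple average halves the MSE:

```python
import random

random.seed(7)
truth = 10.0
n = 50_000

# Two unbiased forecasters with independent errors of equal variance.
f1 = [truth + random.gauss(0, 1) for _ in range(n)]
f2 = [truth + random.gauss(0, 1) for _ in range(n)]
avg = [(a + b) / 2 for a, b in zip(f1, f2)]

def mse(fs):
    return sum((f - truth) ** 2 for f in fs) / len(fs)

# Independent errors: the simple average halves the MSE (about 1.0 -> 0.5).
assert mse(avg) < mse(f1)
assert mse(avg) < mse(f2)
assert abs(mse(avg) - 0.5) < 0.05
```

Of course this only shows why combining helps at all; the puzzle is why the *equal-weight* average also tends to beat combinations with estimated weights.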
39,365
Model ensembling - averaging of probabilities
Yes, there is a theory for it: it's called ensemble learning. The method of bagging (bootstrap aggregating) relies on it. This is used for example in random forests. The intuitive idea is that by averaging models that have a very low bias but a high variance, you can reduce that variance while still keeping the bias low. This is what happens with random forests, where you usually use deep trees that can overfit (i.e. low bias, high variance), but averaging their predictions reduces this overfitting. This of course works best if the training sets of all the models are independent, but in practice you use bagging. In DL models the diversity in the ensemble comes from different hyperparameters: they highlight here different initializations, dropout levels, batch normalization or not. As for the second part of your question, I think Cowboy Trader answered it best. However, ensembling also works with outputs that are not probabilities, as for example in the case of regression.
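The "works best if the training sets are independent" remark can be quantified: for $B$ unit-variance ensemble members with pairwise correlation $\rho$, the variance of their average is $\rho + (1-\rho)/B$, so correlation between members caps the benefit. A hypothetical Python sketch (my own) checking this numerically:

```python
import random

random.seed(0)

def correlated_preds(rho, B, n):
    """n draws of B unit-variance predictions with pairwise correlation rho,
    built from a shared component plus independent noise."""
    out = []
    for _ in range(n):
        shared = random.gauss(0, rho ** 0.5)
        out.append([shared + random.gauss(0, (1 - rho) ** 0.5) for _ in range(B)])
    return out

def var_of_average(rho, B, n=50_000):
    avgs = [sum(row) / len(row) for row in correlated_preds(rho, B, n)]
    m = sum(avgs) / n
    return sum((a - m) ** 2 for a in avgs) / n

# Var of the ensemble mean = rho + (1 - rho)/B for unit-variance members:
# independent members (rho=0) give the full 1/B reduction; correlation caps it.
assert abs(var_of_average(0.0, B=10) - 0.1) < 0.01
assert abs(var_of_average(0.8, B=10) - 0.82) < 0.03
```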
39,366
Removing duplicates before train test split
Interesting question. The effect of duplicates in the training data is slightly different from the effect of duplicates in the test data. If an element is duplicated in the training data, it is effectively the same as having its 'weight' doubled. That element becomes twice as important when the classifier is fitting your data, and the classifier becomes biased towards correctly classifying that particular scenario over others. It's up to you whether that's a good or bad thing. If the duplicates are real (that is, if they are generated through a process you want to take into account), then I'd probably advise against removing them, especially if you're doing logistic regression. There are other questions about dealing with oversampled and undersampled datasets on this SE. When it comes to neural networks and the like, other people may be able to answer better whether it is necessary to worry about this. If your dataset is, for example, tweets, and you are trying to train a natural language processor, I would advise removing duplicate sentences (mainly retweets) as they don't really help to train the model for general language use. Duplicated elements in the test data serve no real purpose. You've tested the model on that particular problem once; why would you do it again, when you'd expect the exact same answer? If there is a high proportion of the same duplicated entries in the test set as in the training set, you'll get an inflated sense of how well the model performs overall, because the rarer scenarios are less well represented, and the classifier's poor performance on them will contribute less to the overall test score. If you are going to remove duplicates, I'd recommend doing it before splitting the dataset between train and test.
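The "weight doubled" claim can be made concrete with the simplest possible fitted quantity, a least-squares mean. A Python sketch of my own, with made-up numbers: duplicating a point gives exactly the same fit as assigning it weight 2.

```python
# A duplicated training point behaves like a weight of 2. For a least-squares
# mean (the simplest "model"), duplicating x is identical to weighting it.
data = [1.0, 2.0, 4.0]
duplicated = data + [4.0]  # the point 4.0 appears twice
weights = [1.0, 1.0, 2.0]  # ... or carries weight 2 instead

mean_dup = sum(duplicated) / len(duplicated)
mean_wtd = sum(w * x for w, x in zip(weights, data)) / sum(weights)

assert mean_dup == mean_wtd  # 2.75 both ways
```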
39,367
Removing duplicates before train test split
I'd like to add 2 points to @Ingolifs' nice answer: The main idea behind recommending to deduplicate or not is to think about what that amounts to with respect to your application. Both options have their point, but they test slightly different kinds of generalization ability: If you want to test the ability to correctly predict new (say, future) cases, there's nothing that would guarantee that statistically independent future cases do not have independent-variable vectors that your model encountered during training. So deduplication here would lead to a bias in your sample distribution, which may or may not influence your model; and if it does have an influence on the model, that influence may or may not be what you actually want. On the other hand, it may still be of interest how your model performs for cases where the independent variable vector is unknown in the sense that it has not been encountered during training. Note that while this means a constraint in splitting so that no equal independent vectors can appear in both training and test sets, it does not imply deduplication. I'm an analytical chemist and I do something similar when I test performance for (slightly) different matrix* compositions. The second point is that if your original sample is representative of your population, then the deduplicated sample will be biased. In other words, do the duplicates occur naturally or were they produced artificially? In the former case, I'd say you need very good argumentation for why you want to change your sample distribution. You can still deduplicate, but it is up to you to correctly account for that treatment. As @Ingolifs said already, you may be able to save computation by replacing duplicates with appropriate weights. That holds for testing as well. For deduplicated test sets, you'll have to be particularly careful about the conclusions you draw.
I'm thinking of the somewhat related issue of reporting predictive values that can be seriously wrong if they do not take into account the actual class distributions - see Buchen: Cancer: Missing the mark, Nature 471, 428-432 (2011) for a famous example. * In analytical chemistry, the matrix is the stuff surrounding the analyte you're interested in. Say I'm looking at ethanol in wine; then the water, acids and all other substances except the ethanol form the matrix.
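The splitting constraint mentioned above (equal independent-variable vectors never straddle train and test, without deduplicating) can be sketched as a group-aware split. This Python illustration with made-up rows is my own, not from the answer:

```python
import random

random.seed(42)

# Rows with some exact duplicates in the feature vector.
rows = [(0, 1), (0, 1), (1, 3), (2, 2), (2, 2), (2, 2), (3, 5), (4, 1)]

# Group-aware split: assign each *distinct* feature vector (not each row)
# to train or test, so duplicates never straddle the boundary.
groups = list({r for r in rows})
random.shuffle(groups)
cut = int(0.5 * len(groups))
train_groups, test_groups = set(groups[:cut]), set(groups[cut:])

train = [r for r in rows if r in train_groups]
test = [r for r in rows if r in test_groups]

assert set(train).isdisjoint(set(test))     # no shared feature vector
assert len(train) + len(test) == len(rows)  # no rows discarded
```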
39,368
Controlling for confounding variables in linear mixed effects models (lmer)
You have stated that you believe Time is a confounding variable in this analysis. If so, then you should include Time as a covariate in the analysis. However, before doing so, it is important to ensure that the variable is indeed a (potential) confounder, or a competing exposure. To be a confounder, it must be a cause, or a proxy of a cause, of the outcome, AND a cause, or a proxy of a cause, of the exposure(s). So, in this case, if Time causes Behaviour AND also causes any of the other exposures, then it is indeed a confounder. It seems unlikely that it can be a cause of Sex or Subspecies, but if it determines the Treatment given, then it is a confounder, and should be included as a covariate in order to obtain unbiased estimates of the other fixed effects. The estimate for Time (and its statistical significance) is irrelevant (and should not be interpreted if it is a confounder). On the other hand, if Time is on the causal pathway from the exposure(s) to the outcome, for example, if the time at which the behaviour occurs depends on the Treatment given, then it is a mediator and should not be included as a covariate - including a mediator in a regression can invoke a reversal paradox (for example Simpson's Paradox) - see Tu et al. (2008). Lastly, if Time is not a cause of the exposure(s) (but is a cause of the outcome), then it should be treated as a competing exposure, and included in the model as a covariate; this will improve the accuracy of the other fixed-effects estimates that you are interested in. References: Tu, Y.K., Gunnell, D. and Gilthorpe, M.S., 2008. Simpson's Paradox, Lord's Paradox, and Suppression Effects are the same phenomenon - the reversal paradox. Emerging themes in epidemiology, 5(1), p.2.
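A hypothetical simulation can illustrate the confounding case in the simplest linear setting (a Python sketch of my own; the variable names and effect sizes are made up): Time causes both Treatment and Behaviour, so the naive slope of Behaviour on Treatment is biased, and adjusting for Time by residualising both variables on it (the Frisch-Waugh device, equivalent to adding Time as a covariate) recovers the true effect:

```python
import random

random.seed(1)
n = 50_000
true_effect = 2.0

# Hypothetical data: Time causes both Treatment and Behaviour (a confounder).
time = [random.gauss(0, 1) for _ in range(n)]
treat = [t + random.gauss(0, 1) for t in time]          # Time -> Treatment
behav = [true_effect * x + 3 * t + random.gauss(0, 1)   # Time -> Behaviour too
         for x, t in zip(treat, time)]

def slope(x, y):
    """Simple-regression slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Naive slope of Behaviour on Treatment: biased (theoretically 3.5, not 2).
naive = slope(treat, behav)
assert naive > 3.0

# Adjust for Time: residualise both variables on Time, regress the residuals.
bt, by = slope(time, treat), slope(time, behav)
r_treat = [x - bt * t for x, t in zip(treat, time)]
r_behav = [y - by * t for y, t in zip(behav, time)]
adjusted = slope(r_treat, r_behav)
assert abs(adjusted - true_effect) < 0.05
```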
39,369
If A is distributed uniformly on [8,10] and B on [9,11], what is the probability that B<A?
It's 1/8. See the figure below, which shows A's delivery time on the x-axis and B's on the y-axis. Since deliveries are uniformly distributed, all points in the square are equally likely to occur. B delivers before A only in the shaded region, which is 1/8 of the total figure. Another way to think of it is that there's a 50% chance A delivers before B even starts, and 50% chance that B delivers after A is done, meaning there's a 75% chance of one or both of those happening. In the 25% chance they both deliver in the overlapping hour, it's a 50-50 chance of which delivers first.
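The 1/8 answer is easy to confirm by simulation; a quick Python sketch (my addition, not part of the answer):

```python
import random

random.seed(0)
n = 400_000

# A ~ Uniform(8, 10) (A's delivery time), B ~ Uniform(9, 11) (B's).
hits = sum(random.uniform(9, 11) < random.uniform(8, 10) for _ in range(n))

p_hat = hits / n
# 1/8 = 0.125; Monte Carlo standard error here is about 0.0005.
assert abs(p_hat - 1 / 8) < 0.003
```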
39,370
If A is distributed uniformly on [8,10] and B on [9,11], what is the probability that B<A?
Since the delivery rates are not specified, let's assume A delivers $a$ packages per hour and B delivers $b$ packages per hour. So there are $2a \cdot 2b$ pairs of delivery times. The window in which A's and B's delivery times overlap contains only $a \cdot b$ pairs, in half of which B comes before A. So the proportion of pairs in which B comes before A is $$ \frac{a\cdot b}{2}\cdot\frac{1}{2a \cdot 2b} = \frac{1}{8}. $$
39,371
If A is distributed uniformly on [8,10] and B on [9,11], what is the probability that B<A?
I propose another way of looking at it, only useful if you had a PC during the interview, of course. We can simulate the process with R, for example. Let's simulate 1000 values from A and the same from B; we know that both are uniform and independent. a <- runif(1000, 8, 10) # A deliveries b <- runif(1000, 9, 11) # B deliveries # [1] 9.485513 8.665070 8.488481 8.840332 8.755384 9.448949 # A deliveries for example OK, they're not exactly times, but it's the same. The probability $P(B<A)$ is what we seek, so we just count the number of pairs where $b<a$ in our code. prob <- sum(b < a)/1000 #[1] 0.112 # almost 1/8 We can also plot the 1000 pairs $(a,b)$ and see the region where B comes first. plot(a, b) polygon(c(9, 10, 10, 9), c(9, 9, 10, 9), density = 10, angle = 135) And the prob value above is the proportion of points in the shaded region (looks familiar, doesn't it?). Now we can use the formula for the standard error of a proportion to estimate the standard error of the simulation. se <- sqrt(prob * (1 - prob) / 1000) #[1] 0.009972763 And we can build a CI (assuming a Normal approximation of the sampling distribution of prob). prob - 1.96*se #[1] 0.09245338 lower bound prob + 1.96*se #[1] 0.1315466 upper bound
39,372
If A is distributed uniformly on [8,10] and B on [9,11], what is the probability that B<A?
Stumbled across this and it got in my head. :-) The answer seems like it must depend on the relative number of deliveries each truck makes in the hour of possible overlap (9a-10a) -- there's no constant answer. For example, suppose each truck makes 2 total deliveries (1 per hour). They'd each make 1 delivery between 9 and 10 and B wouldn't beat anything from A. So, the probability is 0 in that case. Consider a simplified version of the problem where they both only make deliveries between 9-10a (still a uniform distribution). And, for starters, suppose they make the same number of deliveries, n. The first delivery for B will beat everything except the first delivery from A (which it ties). So, with probability $\frac{1}{n}$ (the probability we're the first delivery for B) we beat an event with probability $\frac{n-1}{n}$ (the probability we're not the first delivery of A). The second delivery for B will beat everything except the first two deliveries from A. So, with probability $\frac{1}{n}$ we beat an event with probability $\frac{n-2}{n}$, etc. Putting each of those terms into a summation, we get: $(\frac{1}{n} \cdot \frac{n-1}{n}) + (\frac{1}{n} \cdot \frac{n-2}{n}) + ... + (\frac{1}{n} \cdot \frac{n-n}{n})$ Or, $\sum_{i=1}^{n} \frac{n-i}{n^2}$ Since the probabilities are uniform and half (rounded down) of each occur during the hour of overlap, we only consider half the deliveries of each, $n'=\lfloor\frac{n}{2}\rfloor$; and, compared to the whole domain, those events only happen half the time. So $\frac{1}{2}\sum_{i=1}^{n'} \frac{n'-i}{n'^2}$ I believe that for $a=b=n$, you get $1/8$. How to handle the fact that A and B do not deliver the same number of packages? Again, to simplify, assume all their deliveries happen between 9-10am. 
For every delivery from B you consider from earliest to latest, instead of each successive one beating $\frac{1}{a}$ less from truck A, as above (where $a$ is the number of deliveries made by truck A and $b$ the number made by truck B), you eliminate $\lfloor \frac{1}{b} \cdot a \rfloor $. That is, you beat all but a fraction of $a$ proportional to the fraction of $b$ you've thrown out. So, $ (\frac{1}{b} \cdot \frac{a - \lfloor 1 \cdot \frac{a}{b} \rfloor }{a}) + (\frac{1}{b} \cdot \frac{a - \lfloor 2 \cdot \frac{a}{b} \rfloor }{a}) + ... + (\frac{1}{b} \cdot \frac{a - \lfloor b \cdot \frac{a}{b} \rfloor }{a}) $ Or, $\sum_{i=1}^{b} \frac{a - \lfloor \frac{ia}{b} \rfloor }{ab}$ Again, accounting for the fact that they only overlap half the time, let $a'=\frac{a}{2}$ and $b'=\frac{b}{2}$: $\frac{1}{2}\sum_{i=1}^{b'} \frac{a' - \lfloor \frac{ia'}{b'} \rfloor }{a'b'}$
39,373
If A is distributed uniformly on [8,10] and B on [9,11], what is the probability that B<A?
It is zero: if the B truck's deliveries are spread over the whole period [9,11], at least one delivery is made after (or at) 10, and that delivery does not come before the deliveries of A (which are all before 10).
39,374
Resources on Explainable AI
I would recommend a great book by Christoph Molnar: Interpretable Machine Learning - A Guide for Making Black Box Models Explainable. It touches upon both Interpretable Models, e.g. Linear/Logistic Regression, Generalized Linear Models (GLMs), Generalized Additive Models (GAMs), Decision Trees, and Model-Agnostic Methods, e.g. LIME, SHAP. Awesome Interpretable Machine Learning links to many interesting publications in the field.
39,375
Resources on Explainable AI
Longer papers which I found when I recently started exploring this topic are: Explanation in Artificial Intelligence: Insights from the Social Sciences by Tim Miller with lots of references The Mythos of Model Interpretability by Zachary Lipton For a more applied perspective and descriptions of concrete applications, you can start with this recent Science article introducing the topic to a general audience. A good list of references can be found on the website of the IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence and the Workshop on Human Interpretability in Machine Learning held at the same conference. If you found more information elsewhere in the meantime, I'd be very interested to learn about it.
39,376
Resources on Explainable AI
There is a lot of content coming up every day. Banking, pharma, and fraud detection are already using explainable AI in many ways. You can get examples, key people (from IBM, Google, Microsoft, FB), and research papers from the GitHub pages of these frameworks. We did a very exhaustive literature study to build our own XAI framework: exhaustive literature study on XAI
39,377
Resources on Explainable AI
If you are very new to Explainable AI, this book on Applied Machine Learning Explainability Techniques would be the best book for you. It has wonderful code examples and very practical scenarios and datasets presented in the tutorials, where you can gain hands-on knowledge of XAI. Book link - https://amzn.to/3f6v6H3 GitHub link - https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques Hope this helps :)
39,378
a fast uniform order statistic generator
The R code amounts to returning $$(E_1,E_1+E_2,\ldots,E_1+\cdots+E_n)\Big/\sum_{i=1}^{n+1} E_i$$ and the result follows from checking that the differences between the cumulated sums of exponentials, renormalised by the overall sum, have the same distribution as the differences between order statistics for a uniform sample, $S_i=U_{(i)}-U_{(i-1)}$. This is described and established in the bible of simulation, Devroye's Non-Uniform Random Variate Generation (1986, pp. 207-219), and also in Biau & Devroye, Lectures on the Nearest Neighbor Method (pp. 5-7). Running a test to compare this spacing method with a direct ordering of uniform variates shows a clear advantage for this approach (for n=100 and 10⁷ replications, using the R benchmark tool). test replications elapsed relative user.self sys.self user.child 2 direct 1e7 355.213 4.722 355.112 0.024 0 1 spacings 1e7 75.221 1.000 75.208 0.000 0 although increasing $n$ to $n=10^3$ reduces the gain: test replications elapsed relative user.self sys.self user.child 2 direct 1e6 96.225 1.886 96.20 0 0 1 spacings 1e6 51.029 1.000 51.02 0 0
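The exponential-spacings construction is easy to port outside R; here is a stdlib-only Python sketch of the same generator (function name is illustrative):

```python
import random
from itertools import accumulate

def uniform_order_statistics(n, rng=random):
    """Return (U_(1), ..., U_(n)) for an i.i.d. U(0,1) sample of size n,
    via n+1 exponential spacings normalised by their total sum."""
    e = [rng.expovariate(1.0) for _ in range(n + 1)]
    total = sum(e)
    # partial sums of the first n exponentials, divided by the grand total
    return [s / total for s in accumulate(e[:n])]

u = uniform_order_statistics(5, random.Random(1))
print(u)  # already sorted: no O(n log n) sort is needed
```

A quick sanity check is that the output is sorted in (0,1) and that the $k$-th value averages to $k/(n+1)$ over many replications, as expected for uniform order statistics.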
39,379
Satterthwaite degrees of freedom in a mixed model change drastically depending on the DV
The Satterthwaite method depends on the dependent variable through the Hessian of the (REML) log-likelihood/deviance function with respect to the variance parameters (there is also a gradient involved), so it is not surprising if the denominator degrees of freedom change with a change of the dependent variable. In this concrete case x4 enters the random effects as well as the fixed effects, which explains why the denominator df for the fixed effect of x4 changes between the models while the denominator df for the other fixed effects are not (as) affected (but also note the denominator df for x2). Three things inhibit further exploration of this concrete example: The contrasts may or may not differ between summary and this type III anova, so we may not be comparing like with like if we compare the concrete examples of Satterthwaite and Kenward-Roger (KR) methods. Use summary(model, ddf="Kenward-Roger") if you want to use KR df in summary. Also note that while the Satterthwaite implementation is the same for t and 1-df F tests, they actually do differ for KR, which can lead to additional differences between summary and anova outputs. The implementation was completely rewritten in lmerTest version >= 3.0-0, so knowing which version is being used is relevant (please include a sessionInfo()). On closer inspection of the output I think you are actually using an old version of lmerTest, so I suggest you upgrade as a first move (cf. https://github.com/runehaubo/lmerTestR). Seeing the estimates of the variance parameters is relevant here (in addition to potential convergence warnings), so please include the complete output of summary. Cheers, Rune
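To see why approximate df depend on the response at all, it helps to look at the Satterthwaite idea in its simplest form, the Welch two-sample df (a general illustration in Python, not the lmerTest mixed-model computation): the df are an explicit function of the sample variances, i.e. of the data.

```python
from statistics import variance

def satterthwaite_df(x, y):
    """Welch-Satterthwaite denominator df for a two-sample comparison.
    The df depend on the sample variances, i.e. on the response values."""
    v1, v2 = variance(x) / len(x), variance(y) / len(y)
    return (v1 + v2) ** 2 / (v1 ** 2 / (len(x) - 1) + v2 ** 2 / (len(y) - 1))

# equal variances and equal sizes recover the classical n1 + n2 - 2 = 6
print(satterthwaite_df([1, 2, 3, 4], [5, 6, 7, 8]))
```

Change the values (and hence the variances) and the df change with them, which is the same phenomenon the answer describes for the mixed-model Hessian.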
39,380
Wasserstein Loss is very sensitive to model architecture
Usually, the same architecture and parameters would not be good for training both GAN and WGAN. In a typical GAN, you want to avoid making the discriminator more powerful than the generator, and you want to avoid training the discriminator so much that it "overpowers" the generator and always finds the fakes. In WGAN, you want to make the discriminator as powerful as possible, possibly by giving it a larger network, and you also want to train it for as long as computationally feasible -- several iterations for every one iteration the generator trains. The theory behind WGAN requires that the discriminator has converged to the optimal discriminating function, so this is important. If for some reason, you really need to fix one architecture, choose one where the generator is about the same size as the discriminator, and then make sure when you're training the WGAN that you really train the discriminator a lot -- maybe 10x more than the generator.
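As a schematic illustration of the "train the critic a lot" advice, here is a deliberately minimal 1-D WGAN with a linear critic and weight clipping (a toy sketch in Python, nothing like a real network; all names and hyperparameters are illustrative): the critic is updated several times per generator step, and the generator parameter is pulled toward the data mean.

```python
import random

def toy_wgan(n_gen_steps=200, n_critic=5, clip=0.1, lr_w=0.1, lr_g=1.0,
             batch=64, seed=0):
    """1-D WGAN toy: real data ~ N(3, 1), generator g(z) = theta + z,
    critic f(x) = w * x with w clipped to [-clip, clip] (original recipe)."""
    rng = random.Random(seed)
    theta, w = 0.0, 0.0
    for _ in range(n_gen_steps):
        for _ in range(n_critic):  # several critic updates per generator step
            real = [rng.gauss(3, 1) for _ in range(batch)]
            fake = [theta + rng.gauss(0, 1) for _ in range(batch)]
            # ascend E[f(real)] - E[f(fake)]; the gradient w.r.t. w is
            # mean(real) - mean(fake) for a linear critic
            w += lr_w * (sum(real) / batch - sum(fake) / batch)
            w = max(-clip, min(clip, w))  # weight clipping
        # generator descends -E[f(g(z))]; gradient w.r.t. theta is -w
        theta += lr_g * w
    return theta

print(toy_wgan())  # ends up near the data mean, 3
```

The inner loop is the key structural point: the critic must be (approximately) optimal before each generator update for the Wasserstein estimate to be meaningful.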
39,381
Wasserstein Loss is very sensitive to model architecture
Try to substitute gradient clipping with gradient penalty in WGAN, if you haven't done so yet. The important thing is that you should NOT use batch normalization in WGAN discriminator, as it breaks the whole idea. The authors of the WGAN paper suggest to use layer norm.
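For intuition, the gradient penalty softly pushes the critic's input-gradient norm toward 1 at points interpolated between real and fake samples. With a linear critic f(x) = w . x the input gradient is just w everywhere, so the penalty can be written in closed form (an illustrative Python sketch, not a real autograd implementation):

```python
import math
import random

def gradient_penalty_linear(w, lam=10.0):
    """WGAN-GP term lam * (||grad_x f(x_hat)|| - 1)^2 for f(x) = w . x,
    whose input gradient equals w at every point x_hat."""
    grad_norm = math.sqrt(sum(wi * wi for wi in w))
    return lam * (grad_norm - 1.0) ** 2

def interpolate(x_real, x_fake, rng=random):
    """x_hat = eps * x_real + (1 - eps) * x_fake with eps ~ U(0, 1)."""
    eps = rng.random()
    return [eps * r + (1 - eps) * f for r, f in zip(x_real, x_fake)]

print(gradient_penalty_linear([3.0, 4.0]))  # ||w|| = 5 -> 10 * (5-1)^2 = 160.0
```

A critic whose gradient norm is exactly 1 (e.g. w = [0.6, 0.8]) incurs zero penalty, which is the 1-Lipschitz condition the penalty enforces softly instead of clipping.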
39,382
What is distribution parameterization?
Reparameterization means the substitution of a function for a parameter, where the parameters are the coefficients of a distribution. References on this do not help much. Parameterization is the explicit form for a distribution. For example, the gamma distribution has two different parameterizations that are in common use: 1) The probability density function in the shape-rate parametrization is $$f(x;\alpha,\beta) = \frac{ \beta^\alpha x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)} \quad \text{ for } x > 0 \text{ and } \alpha, \beta > 0\;,$$ where ${\Gamma(\alpha)}$ is the complete gamma function. 2) The probability density function using the shape-scale parametrization is $$f(x;k,\theta) = \frac{x^{k-1}e^{-\frac{x}{\theta}}}{\theta^k\Gamma(k)} \quad \text{ for } x > 0 \text{ and } k, \theta > 0.$$ From this we can see that $\beta=\dfrac{1}{\theta}$, and $k=\alpha$, from which we can state that the shape-scale parametrization ($k,\theta$, respectively) can be reparameterized to be the shape-rate parametrization ($\alpha,\beta$) by substituting the $\beta$ parameter for the reciprocal of the $\theta$ parameter. However, $k=\alpha$ is not a reparameterization, it is just a different label for the same thing: the shape parameter. Why do we reparameterize? One good reason to parameterize a particular way is to use the form that produces more normal and less skewed distributions of the parameter values that occur using that form. Thus, the reader will find that the exponential and gamma distributions are frequently parameterized in the rate form (e.g., number 1 above), as opposed to the scale form (e.g., number 2 above). Also, suppose for parameterization number 1 above that we have $\beta$ values that are close to zero. Then regression fitting of that distribution using that parameterization would frequently be more robust than using the reciprocal parameter $\theta=\dfrac{1}{\beta}$, which might make huge jumps between iterations, e.g., from 10000 to 100000. 
Why the increased robustness? Suppose that during fitting we make a slight transient incursion into negative $\beta$-values, for example, $-10^{-8}$ for one of the iterations. For the first parameterization, a slightly negative value is usually rectified during the next iteration. For the second parameterization above, that would yield $\theta=-100000000$, and thereafter we might be forever stuck in negative territory because of the $\pm\infty$ discontinuity at $\theta=\dfrac{1}{\beta}$ when $\beta\rightarrow0^+,0^-$, respectively. Caution. There is a different context for parametric equations, that may cause confusion. This has nothing to do with the meaning of parameterization here.
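The equivalence of the two parameterizations is easy to verify numerically: evaluating both densities with $\beta = 1/\theta$ gives identical values (a quick stdlib-only Python check; function names are illustrative).

```python
import math

def gamma_pdf_shape_rate(x, alpha, beta):
    """f(x; alpha, beta) = beta^alpha x^(alpha-1) e^(-beta x) / Gamma(alpha)."""
    return beta ** alpha * x ** (alpha - 1) * math.exp(-beta * x) / math.gamma(alpha)

def gamma_pdf_shape_scale(x, k, theta):
    """f(x; k, theta) = x^(k-1) e^(-x/theta) / (theta^k Gamma(k))."""
    return x ** (k - 1) * math.exp(-x / theta) / (theta ** k * math.gamma(k))

x, k, theta = 2.5, 3.0, 2.0
print(gamma_pdf_shape_rate(x, alpha=k, beta=1 / theta))
print(gamma_pdf_shape_scale(x, k, theta))  # same value, up to rounding
```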
What is distribution parameterization?
Reparameterization means the substitution of a function for a parameter, where the parameters are the coefficients of a distribution. References on this do not help much. Parameterization is the expli
What is distribution parameterization? Reparameterization means the substitution of a function for a parameter, where the parameters are the coefficients of a distribution. References on this do not help much. Parameterization is the explicit form for a distribution. For example, the gamma distribution has two different parameterizations that are in common use: 1) The probability density function in the shape-rate parametrization is $$f(x;\alpha,\beta) = \frac{ \beta^\alpha x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)} \quad \text{ for } x > 0 \text{ and } \alpha, \beta > 0\;,$$ where ${\Gamma(\alpha)}$ is a complete gamma function. 2) The probability density function using the shape-scale parametrization is $$f(x;k,\theta) = \frac{x^{k-1}e^{-\frac{x}{\theta}}}{\theta^k\Gamma(k)} \quad \text{ for } x > 0 \text{ and } k, \theta > 0.$$ From this we can see that $\beta=\dfrac{1}{\theta}$, and $k=\alpha$ from which we can state that the shape-scale parametrization ($k,\theta$, respectively) can be reparameterized to be the shape-rate parametrization ($\alpha,\beta$) by substituting the $\beta$ parameter for the reciprocal of the $\theta$ parameter. However, $k=\alpha$ is not a reparameterization, it is just a different label for the same thing; the shape parameter. Why do we reparameterize? One good reason to parameterize a particular way it to use the form that produces more normal and less skewed distributions of the parameter values that occur using that form. Thus, the reader will find that the exponential and gamma distributions are frequently parameterized in the rate form (e.g., number 1 above), as opposed to the scale form (e.g., number 2 above). Also, suppose for parameterization number 1 above that we have $\beta$ values that are close to zero. 
Then regression fitting of that distribution using that parameterization would be frequently more robust than using the reciprocal parameter $\theta=\dfrac{1}{\beta}$, which alternative between iterations might make huge jumps, e.g., from 10000 to 100000. Why the increased robustness? Suppose that during fitting we make a slight transient incursion into negative $\beta$-values, for example, $-10^{-8}$ for one of the iterations. For the first parameterization, a slightly negative value is usually rectified during the next iteration. For the second parameterization above, that would yield $\theta=-100000000$, and thereafter we might be forever stuck in negative territory because of the $\pm\infty$ discontinuity at $\theta=\dfrac{1}{\beta}$ when $\beta\rightarrow0^+,0^-$, respectively. Caution. There is a different context for parametric equations, that may cause confusion. This has nothing to do with the meaning of parameterization here.
39,383
What is distribution parameterization?
It means to use a parameter or a set of parameters to describe a probability distribution. The easiest example is the Bernoulli distribution, with one parameter $p$: suppose we want a probability distribution on the discrete outcome of a coin flip. We use $p$ to represent the probability of getting a HEAD (H); in other words, the probability of getting a TAIL (T) is $1-p$. Therefore the probability mass function is $$ P(X)=\begin{cases} p & \text{for }X=H \\ 1-p & \text{for }X=T \end{cases} $$ and it is parameterized by $p$. A "multinomial over k possible outcomes" is very similar, but with more parameters. BTW, I personally think the term "multinomial" is confusing. People use "Multinoulli distribution" or "categorical distribution" to describe a distribution with multiple outcomes. See page 62 of this book.
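As a minimal sketch (mine, not from the original answer), the single parameter $p$ fully determines the distribution, so the PMF can be written directly as a function of the outcome and $p$:

```python
def bernoulli_pmf(x, p):
    # P(X = H) = p and P(X = T) = 1 - p: the one parameter p
    # selects a member of the Bernoulli family.
    return p if x == "H" else 1 - p

p = 0.3
assert bernoulli_pmf("H", p) == 0.3
# The probabilities sum to one (tolerance for floating point).
assert abs(bernoulli_pmf("H", p) + bernoulli_pmf("T", p) - 1.0) < 1e-12
```

Changing $p$ does not reparameterize anything; it just picks a different member of the same family.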
39,384
Simple method of forecasting number of guests given current and historical data
There are simple methods to use, but they are probably profoundly wrong, as daily data presents a ton of complications. Simply striking daily averages is both simple and nearly useless, BUT I guess if there are no analytics around it is probably better than the overall average. The absolute LAST approach would be to use the overall mean. Models need to be as simple as possible BUT never too simple. As was nicely summarized by @Frans, your problem/opportunity is a complicated one but very rewarding. Besides some of the items mentioned, there are individual lead and lag effects around each holiday, along with possible level shifts and changes in day-of-the-week effects, and of course the question of how to identify and treat anomalies. There are also possible week-of-the-month and day-of-the-month effects, et al. Identifying the structure is the problem, and possibly even incorporating pricing and advertising effects. Take a look at http://autobox.com/cms/index.php/afs-university/intro-to-forecasting/doc_download/53-capabilities-presentation, particularly slide 50 onward. I have been working with fast-food restaurant chains to push the forecast down to even 15-minute intervals and will try to give you some guidance. If you post your data and specify the country and the start date, I will try to help further. Preferably you might post 3 years of daily data, as there are probably seasonal and holiday effects that might need to be identified. In terms of being able to quickly come up with forecasts, this is handled by storing and updating models and then quickly forecasting using models that have been archived. Prediction intervals should be approached via Monte Carlo to provide robust estimates for the range of future values. You have a complicated problem AND there are a lot of bad simple solutions that may be insufficient but inexpensive. If this is important, then perhaps you need to muscle up to a workable & affordable solution.
EDIT UPON RECEIPT OF DATA: After receipt of your data I arbitrarily took the LUNCH series (you had provided both LUNCH and DINNER data) and inserted 0.0's for some missing days, obtaining 1454 daily values: start date 1/1/2013, ending 12/24/2016. I introduced the data to AUTOBOX requesting a 14-day (arbitrarily chosen) forecast. The ACF of the original data showed significant memory structure, which of course presumes no special causes, while the ACF of the final model's residuals showed no remaining stochastic structure. Since the sample size is large, we get "false conclusions" using the very approximate standard deviation of the ACF, 1/sqrt(number of observations). The plot of the final model's residuals supported the randomness conclusion, or at least the suggestion that the model couldn't be rejected. "How to evaluate deterministic vs stochastic components of a time series?" discusses the advantages of integrating both stochastic (ARIMA/memory) structure and event/fixed effects found via search procedures, culminating in a holistic model. Restaurant activity is a classic example of how we do things in predictable rhythms. Arrivals to a restaurant follow day-of-the-week patterns and monthly patterns, albeit being very affected by holidays and other special events. To summarize, the model contains 6 types of factors/features separating the observed series into signal (predictable) and noise (random). These 6 features are: 1) baseline; 2) day-of-the-week; 3) month-of-the-year; 4) pre, contemporary, and post holiday effects; 5) deterministic effects discovered via Intervention Detection; 6) memory (previous values). The final model's statistics, the actual/fit, and the forecast were shown as figures detailing the 6 features. First the baseline, essentially an expectation before identified effects are introduced.
Next come the day-of-the-week effects, the month-of-the-year effects, the holiday effects, the identified exogenous deterministic/unattributed effects (partial list), and finally the effect of prior observations, i.e., memory reflecting unspecified variables omitted from the model. This is the conditional effect of memory GIVEN the deterministic (assignable-cause) structure. The window of response around each holiday is presented using the backshift operator B; https://en.wikipedia.org/wiki/Lag_operator
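To make the idea of splitting the series into signal and noise concrete, here is a small illustrative sketch of mine (not AUTOBOX output, and on synthetic data) estimating just the first two of the six features, a baseline and day-of-the-week effects, as group-mean deviations:

```python
import random
import statistics
from datetime import date, timedelta

random.seed(0)

# Hypothetical daily guest counts with a weekly rhythm: baseline 100,
# +40 on Fridays/Saturdays, plus noise -- a stand-in for real data.
start = date(2013, 1, 1)
days = [start + timedelta(d) for d in range(728)]   # exactly 104 weeks
y = [100 + (40 if d.weekday() in (4, 5) else 0) + random.gauss(0, 5)
     for d in days]

baseline = statistics.mean(y)                        # feature 1: baseline
dow_effect = {w: statistics.mean(v for v, d in zip(y, days)
                                 if d.weekday() == w) - baseline
              for w in range(7)}                     # feature 2: day-of-week

# Friday/Saturday effects come out clearly positive, other days negative.
assert dow_effect[4] > 20 and dow_effect[5] > 20
assert all(dow_effect[w] < 0 for w in (0, 1, 2, 3, 6))
```

A real decomposition would add month, holiday, intervention, and memory (lagged) terms, and estimate everything jointly rather than by simple group means.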
39,385
Simple method of forecasting number of guests given current and historical data
This is not a trivial task, as there are many ways to approach this problem as well as things to take into consideration. Instead of proposing a model, I will give you some general advice. What you are describing is a time series, and predicting the number of guests in the future is a forecasting problem. A time series can be modeled, for example, using a mixed model, treating the days as random effects because they occur multiple times in the dataset. The first things you may want to consider are:

- Did the average number of guests remain fairly constant over the past years, or has there been an increase/decrease? This determines whether the time series is stationary or not.
- Are there any special days to take into consideration, such as national holidays? In a regression model, these could be included as dummy variables.
- What other factors might affect the number of guests on a given day? Surely the daily or weekly menu may have an effect, or perhaps there is a different number of guests depending on the season.

Some other things that might help once you want to decide on a model:

- Excel is very limited when it comes to statistical analysis. Consider using R or Python, both of which are free. In R, there is a package forecast with a plethora of useful models exactly for the purpose of forecasting time series.
- If you are going with a regression model, consider that the numbers of guests are count data. Independent counts are Poisson distributed, but since there will be returning guests, recommendations from other guests, changes in the daily menu, and many other (possibly unknown) factors affecting the number of guests, you may want to consider a distribution that can model these extra sources of variance (overdispersion), e.g., a negative binomial distribution.
- You mentioned you want to report an expected number of guests with a lower and upper bound. Prediction intervals can give you this lower and upper bound for a given amount of uncertainty.
The expected value depends on the distribution you intend to use. I imagine the restaurant won't wait for you to complete this analysis, so I should also note that, for starters, simply going with the mean number of guests might work reasonably well, especially considering the small time investment needed to calculate it. You could even do this in Excel. Lastly, search for questions related to yours. There are many good questions about time series on this site.
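As a concrete illustration of the "simple mean" starting point with a rough interval, here is a sketch of mine with made-up numbers and a crude normal approximation (a real analysis would use a count model such as the negative binomial and account for trend and season):

```python
import statistics

# Hypothetical guest counts for the past ten Fridays -- stand-in data.
fridays = [52, 61, 48, 55, 67, 59, 50, 63, 58, 54]

mean = statistics.mean(fridays)
sd = statistics.stdev(fridays)

# Crude ~95% prediction interval for the next Friday, assuming the
# counts are roughly normal and independent.
lower, upper = mean - 2 * sd, mean + 2 * sd
assert lower < mean < upper
```

This per-weekday mean is exactly the kind of calculation that fits in a spreadsheet, which is why it makes a reasonable stopgap while a proper model is built.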
39,386
What type of chart is this?
It is a chord diagram or radial network diagram.
39,387
What type of chart is this?
It's a circos plot, see here.
39,388
How to simulate a uniform distribution of a triangular area?
The uniform distribution on the simplex $y_1+y_2+y_3=1$, all $y_i\ge 0$, is known as the Dirichlet$(1,1,1)$ distribution. By setting $x_i=(1-3\times 0.1)y_i + 0.1$ you will achieve a uniform distribution on the portion of the simplex $x_1+x_2+x_3=1$ where every $x_i \ge 0.1$, because the map shrinks everything by a constant scale factor and therefore preserves relative areas. Values from a Dirichlet distribution can be obtained by generating independent Gamma variables and dividing them by their sum. The $(1,1,1)$ means each of these Gamma variables must have a Gamma$(1)$ distribution (which is an exponential distribution). Here is sample R code:

n <- 1e3
alpha <- 1
x <- matrix(rgamma(n*3, alpha), ncol=3)
x <- x / rowSums(x) * 0.7 + 0.1

Incidentally, an alternate way to generate the raw coordinates (on the third line) is with a uniform distribution,

x <- matrix(-log(runif(3*n)), ncol=3)

because the distribution of $-\log(U)$, for $U$ Uniform, is Exponential. Thus this method requires no special statistical functions to carry out. But how to confirm the result is correct? One way is to rotate the simplex into the plane and plot the points. This R code computes such a rotation matrix, confirms it is a rotation matrix by verifying its cross product is the identity, and plots the points:

beta <- apply(contr.helmert(3), 2, function(y) y / sqrt(crossprod(y)))
crossprod(cbind(beta, 1/sqrt(3)))  # Outputs the 3 x 3 identity matrix
z <- x %*% beta
plot(z)

They look pretty uniform.
39,389
How to simulate a uniform distribution of a triangular area?
For Mathematica users, an easy way to do what the OP asks is to define the right-angled isosceles triangle on the Cartesian plane:

R = Triangle[{{.1, .1}, {.1, .8}, {.8, .1}}];

... and then draw (Uniform) random numbers from it:

pts = RandomPoint[R, 10^4];

All done. To visualise both the triangle R and the sample data pts within:

Graphics[{R, Red, PointSize[Tiny], Point[pts]}, Frame->True, PlotRange -> {{0,1}, {0,1}}]

where x1, x2 and x3 are given by:

{x1, x2} = Transpose[pts];
x3 = 1 - x1 - x2;
39,390
What is the difference between the Poisson distribution and the uniform distribution?
@scortchi has the right answer. To summarize: given the number of arrivals in a fixed interval, the arrival time stamps are uniformly distributed over that interval. The inter-arrival times are exponentially distributed. The count of arrivals per interval of fixed length is Poisson distributed.
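These three facts can be checked with a small simulation (a sketch of mine, not part of the original answer), generating a Poisson process by accumulating exponential gaps:

```python
import random

random.seed(42)
rate, horizon = 5.0, 2000.0

# Simulate a homogeneous Poisson process by accumulating exponential
# inter-arrival gaps until the time horizon is exceeded.
t, arrivals = 0.0, []
while True:
    t += random.expovariate(rate)          # exponential inter-arrival time
    if t > horizon:
        break
    arrivals.append(t)

# 1) Time stamps look uniform on [0, horizon]: mean near horizon / 2.
assert abs(sum(arrivals) / len(arrivals) - horizon / 2) < 30

# 2) Inter-arrival times have mean near 1 / rate.
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
assert abs(sum(gaps) / len(gaps) - 1 / rate) < 0.02

# 3) Counts per unit interval are approximately Poisson(rate):
#    sample mean and variance both near the rate.
counts = [0] * int(horizon)
for a in arrivals:
    counts[int(a)] += 1
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / len(counts)
assert abs(m - rate) < 0.3 and abs(v - rate) < 0.6
```

The equality of the count mean and variance is the distinguishing fingerprint of the Poisson distribution here.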
39,391
For regression with time varying parameters, SGD or Kalman filter?
Both of these things can be used in an online manner, but they do this in different ways. So they are not competitors. The Kalman filter has two purposes. First, for a batch of data, it will yield the log-likelihood of all your observed data, assuming you are estimating a Linear-Gaussian state space model. The log-likelihood is a function of the parameters, assuming your observed data are known. Second, for online data, if you know the parameters, it will recursively compute distributions of your hidden states. When used in an online manner, it recursively calculates statistical distributions for states, assuming parameters are known. SGD is an algorithm that takes as an input a log-likelihood function. It doesn't care what model you are using, so long as you can calculate a gradient of a loss (the loss is the negative of the log-likelihood). It is a procedure for finding your parameters that maximize (or minimize the negative of) this function. When used in an online fashion, it adjusts parameters as it sees new data. The word "stochastic" refers to the fact that it doesn't use all the data to calculate a likelihood, not to the fact that it recursively computes statistical distributions. So both can be used in an online manner. But here the KF computes distributions of the hidden states given parameters, and SGD adjusts the parameters to become more suitable.
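As a minimal illustration of the SGD side (my sketch, not from the original answer): each step uses the gradient of the loss at a single observation, so the parameter is nudged as data arrives rather than after seeing the whole batch:

```python
import random

random.seed(0)

# Toy online SGD for a 1-D linear model y = b*x + noise. The point is
# only that each update uses one observation's gradient, not the batch.
true_b = 2.0
b, lr = 0.0, 0.01
for _ in range(5000):
    x = random.uniform(-1, 1)
    y = true_b * x + random.gauss(0, 0.1)
    grad = -2 * (y - b * x) * x      # gradient of the squared error
    b -= lr * grad                   # one stochastic gradient step
assert abs(b - true_b) < 0.2
```

Note that this treats the parameter as fixed; it converges to a single value of b, in contrast to the Kalman filter, which would return a full distribution for a state at every time step.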
39,392
For regression with time varying parameters, SGD or Kalman filter?
The Kalman filter is a model-based optimization algorithm that assumes linear dynamics and Gaussian noise. If these assumptions hold, it is guaranteed to converge to the optimum and should be used instead of SGD. SGD is a model-free heuristic which (hopefully) converges to a local optimum. It 'works' for non-linear dynamics and is often used when the dynamics are not even explicitly represented. Since it is model-free and relies on noisy measurements of the gradient, it tends to be slow. Vanilla SGD is not very fast; more recent variants such as Adam and RMSProp tend to work better since they incorporate momentum, which can be thought of as smoothing out the gradient estimate.
39,393
For regression with time varying parameters, SGD or Kalman filter?
With time-varying parameters, only the Kalman filter can be used (unless you find an innovative way to use SGD). Let's look at a situation where both algorithms would make sense: linear regression $Y=\beta X+\epsilon$. Call $n$ the length of the vector $X$. SGD (for MLE) assumes a fixed $\beta$ and will just find it. It is an online method but does not handle time dependence at all. First, you must go over the dataset several times, most efficiently in a random order, which breaks time dependence. And you can't expect it to "forget" the influence of past observations in a way you can control easily. The Kalman filter assumes a time-varying $\beta(t)$ that is called a "state" instead of a parameter. The "parameters" here are the variance of $\epsilon$ and possibly how $\beta(t)$ is allowed to change with time: typically the variance of a step if it is a Gaussian random walk or Brownian motion. The Kalman filter computes an estimate of $\beta$ and a covariance matrix for this estimate at each time $t$. It goes over the dataset only once and, importantly, in time order. When $\beta$ is fixed, the Kalman filter is essentially useless, because it will just find the usual MLE estimate of $\beta$ that could be found easily with matrix inversion. One could argue that the Kalman filter has the advantage of being an online method, but since you need to store an $n\times n$ matrix, it is infeasible with big $n$, which is where online methods are usually needed. And operations on the matrix are costly anyway. To summarize: time-varying $\beta(t)$: Kalman filter; infeasible with big $n$. Big $n$: SGD; infeasible with time-varying parameters. The Kalman filter is also used in advanced learning with neural networks, but I don't know much about it. People are researching ways to mix the two algorithms, but it does not seem very mature for the moment (as far as I know). The Kalman filter with big $n$ is also being researched, with advanced Bayesian methods for weather forecasting.
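A scalar sketch of the Kalman filter for this time-varying setting (mine, not from the original answer; the noise variances are assumed known, which in practice they are not), tracking a drifting coefficient $\beta(t)$ in $y_t = \beta_t x_t + \epsilon_t$ with $\beta_t$ a Gaussian random walk:

```python
import random

random.seed(1)

# Known-variance toy setup: state noise q, observation noise r.
q, r = 0.01 ** 2, 0.1 ** 2
beta_true, beta_est, P = 1.0, 0.0, 1.0   # P = estimation variance
for t in range(2000):
    beta_true += random.gauss(0, 0.01)           # random-walk state
    x = random.uniform(0.5, 1.5)
    y = beta_true * x + random.gauss(0, 0.1)
    # Predict: the state drifts, so uncertainty grows by q.
    P += q
    # Update: the Kalman gain weighs prediction against observation.
    K = P * x / (x * P * x + r)
    beta_est += K * (y - beta_est * x)
    P = (1 - K * x) * P
# The filter tracks the drifting coefficient closely.
assert abs(beta_est - beta_true) < 0.15
```

Each pass through the loop produces both an estimate and its variance P, which is exactly the "distribution of the hidden state" that SGD has no counterpart for.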
For regression with time varying parameters, SGD or Kalman filter? With time varying parameters, only the Kalman filter can be used. (Unless you find an innovative way to use SGD). Let's look at a situation where both algorithms would make sense: linear regression $Y
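To make the contrast concrete, here is a minimal numpy sketch (my own illustrative example, with made-up noise variances assumed known) of a Kalman filter tracking a scalar random-walk coefficient in $y_t=\beta_t x_t+\epsilon_t$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y_t = beta_t * x_t + eps_t with a random-walk "state" beta_t.
T = 500
q, r = 0.01, 0.5                                  # step and noise variances
beta = np.cumsum(rng.normal(0.0, np.sqrt(q), T))  # time-varying coefficient
x = rng.normal(1.0, 1.0, T)
y = beta * x + rng.normal(0.0, np.sqrt(r), T)

# Kalman filter: a single ordered pass, one predict/update per time step.
m, P = 0.0, 1.0          # prior mean and variance for beta_0
est = np.empty(T)
for t in range(T):
    P = P + q                        # predict: random walk adds variance q
    S = x[t] * P * x[t] + r          # innovation variance
    K = P * x[t] / S                 # Kalman gain
    m = m + K * (y[t] - x[t] * m)    # update state estimate
    P = (1.0 - K * x[t]) * P         # update state variance
    est[t] = m

# A single fixed beta (what SGD would converge to) cannot track the drift.
beta_fixed = np.sum(x * y) / np.sum(x * x)
```

Note how, with a scalar state, the filter is cheap; it is the $n\times n$ covariance in the multivariate case that makes big $n$ infeasible.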
39,394
For regression with time varying parameters, SGD or Kalman filter?
Stochastic gradient descent is an optimization algorithm, a variant of gradient descent, used to find minima or maxima of functions. The difference between SGD and vanilla gradient descent is that SGD works on samples of the objective function, while vanilla gradient descent works on the exact objective function. In statistical learning, for example, you want to find a parameter vector which maximizes the likelihood function of the data. The parameters are assumed static.

The Kalman filter is a type of online Bayesian learning. It can be used to learn states that vary with time in a nonstationary or stationary way. For this, the Kalman filter assumes a model which describes the dynamics of the states over time. States can be any variables you want, including time-varying parameters of a statistical model. In a dynamic linear regression model, you must assume a model of how the parameters of the linear regression vary over time. A very simple model is to assume that the parameters vary as a random walk. At any time, a prior probability distribution synthesizes knowledge about the states (in your case, the parameters of a model). On observing data, you use Bayes' rule to update to a posterior distribution. In the Kalman filter, both prior and posterior are Gaussian. The Kalman filter has been shown to be optimal in the sense that it minimizes the mean squared error between the real unobserved state and its prediction. A closely related method is recursive least squares, which is a particular case of the Kalman filter.

In summary, the Kalman filter is an online algorithm and SGD may be used online. The Kalman filter assumes a dynamic model of your parameters, while SGD assumes the parameters do not vary over time. SGD will not be optimal in a dynamic setting, especially because it relies on the stepsize parameter, which must be set by the modeler and must follow some theoretical conditions to converge. The Kalman filter also has an equivalent of the stepsize parameter, called the Kalman gain, which automatically adapts to the data.
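Recursive least squares, mentioned above as a particular case of the Kalman filter, is easy to sketch (illustrative code on simulated data; the diffuse start `P = 1e6 * I` is an assumption of the example, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))
true_beta = np.array([1.5, -2.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=n)

# Recursive least squares: one ordered pass, O(d^2) work per observation.
theta = np.zeros(d)
P = 1e6 * np.eye(d)                        # "diffuse" prior covariance
for xt, yt in zip(X, y):
    k = P @ xt / (1.0 + xt @ P @ xt)       # gain vector (cf. Kalman gain)
    theta = theta + k * (yt - xt @ theta)  # correct by the prediction error
    P = P - np.outer(k, xt @ P)            # shrink the covariance

# After one pass it matches the batch OLS solution.
theta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The gain `k` here plays exactly the adaptive-stepsize role described above: it shrinks automatically as the covariance `P` shrinks.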
39,395
For regression with time varying parameters, SGD or Kalman filter?
There is an alternative to all these methods which unifies both: the Relaxc controller concept: https://www.researchgate.net/publication/347510415_Relaxc_vs_Kalman
39,396
Who is the father (or mother) of linear least squares analysis as we know it?
I highly recommend Prof. Stephen Stigler's The History of Statistics: The Measurement of Uncertainty before 1900. Chapter 1 discusses your question in depth.

Legendre first developed the method of least squares and derived the normal equations in 1805 as a way to solve an overdetermined system of linear equations. Quoting Stigler, "For stark clarity of exposition the presentation [of Legendre] is unsurpassed; it must be counted as one of the clearest and most elegant introductions of a new statistical method in the history of statistics."

The context of ordinary least squares

A fascinating point is that the development of least squares came before the discovery of the normal distribution and the modern justifications for the use of least squares. Least squares was developed as a way of combining multiple, imperfect astronomical observations to recover the underlying parameters governing the movements of heavenly bodies. Each astronomical observation defines a linear equation, and with more observations than parameters, the astronomers of the time were faced with an inconsistent system. What to do? Mayer developed a method where observations were separated into $k$ groups, the equations in each group were averaged together, and then the underlying system could be solved (Stigler 1990). Legendre instead proposed introducing an error term and minimizing the sum of squared errors.

References

Stigler, Stephen, The History of Statistics: The Measurement of Uncertainty before 1900, Belknap Press, 1990.
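As a concrete illustration of Legendre's recipe (the numbers below are made up for the example, not from Stigler): with more equations than unknowns, one solves the normal equations $(X^\top X)\,b = X^\top y$.

```python
import numpy as np

# Four observations, two unknowns (intercept and slope): an
# overdetermined, inconsistent linear system.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([0.1, 0.9, 2.1, 2.9])

# Legendre's normal equations: (X'X) b = X'y
b = np.linalg.solve(X.T @ X, X.T @ y)

# The same least-squares solution from a modern, numerically robust solver
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```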
39,397
Gradient boosting - extreme predictions vs predictions close to 0.5
I have prepared a short script to show what I think should be the right intuition.

    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn import ensemble
    from sklearn.model_selection import train_test_split

    def create_dataset(location, scale, N):
        class_zero = pd.DataFrame({
            'x': np.random.normal(location, scale, size=N),
            'y': np.random.normal(location, scale, size=N),
            'C': [0.0] * N
        })
        class_one = pd.DataFrame({
            'x': np.random.normal(-location, scale, size=N),
            'y': np.random.normal(-location, scale, size=N),
            'C': [1.0] * N
        })
        return pd.concat([class_one, class_zero], ignore_index=True)

    def predictions(values):
        X_train, X_test, tgt_train, tgt_test = train_test_split(
            values[["x", "y"]], values["C"], test_size=0.5, random_state=9)
        clf = ensemble.GradientBoostingRegressor()
        clf.fit(X_train, tgt_train)
        y_hat = clf.predict(X_test)
        return y_hat

    N = 10000
    scale = 1.0
    locations = [0.0, 1.0, 1.5, 2.0]

    f, axarr = plt.subplots(2, len(locations))
    for i in range(0, len(locations)):
        values = create_dataset(locations[i], scale, N)
        axarr[0, i].set_title("location: " + str(locations[i]))
        d = values[values.C == 0]
        axarr[0, i].scatter(d.x, d.y, c="#0000FF", alpha=0.7, edgecolor="none")
        d = values[values.C == 1]
        axarr[0, i].scatter(d.x, d.y, c="#00FF00", alpha=0.7, edgecolor="none")
        y_hats = predictions(values)
        axarr[1, i].hist(y_hats, bins=50)
        axarr[1, i].set_xlim((0, 1))

What the script does:

it creates different scenarios where the two classes are progressively more and more separable - I could provide here a more formal definition of this but I guess you should get the intuition
it fits a GBM regressor on the training data and outputs the predicted values obtained by feeding the test X values to the trained model

The produced chart shows how the generated data in each of the scenarios looks, and it shows the distribution of the predicted values. The interpretation: lack of separability translates into the predicted $y$ being at or right around 0.5.
All this shows the intuition; I guess it should not be hard to prove this in a more formal fashion, although I would start from a logistic regression - that would make the math definitely easier.

EDIT 1

I am guessing in the leftmost example, where the two classes are not separable, if you set the parameters of the model to overfit the data (e.g. deep trees, large number of trees and features, relatively high learning rate), you would still get the model to predict extreme outcomes, right? In other words, the distribution of predictions is indicative of how closely the model ended up fitting the data?

Let's assume that we have a super deep decision tree. In this scenario, we would see the distribution of prediction values peak at 0 and 1. We would also see a low training error. We can make the training error arbitrarily small: we could have that deep tree overfit to the point where each leaf of the tree corresponds to one datapoint in the train set, and each datapoint in the train set corresponds to a leaf in the tree. The poor performance on the test set of a model that is very accurate on the training set would be a clear sign of overfitting. Note that in my chart I present the predictions on the test set; they are much more informative.

One additional note: let's work with the leftmost example. Let's train the model on all class A datapoints in the top half of the circle and on all class B datapoints in the bottom half of the circle. We would have a very accurate model, with a distribution of prediction values peaking at 0 and 1. The predictions on the test set (all class A points in the bottom half circle, and class B points in the top half circle) would also peak at 0 and 1 - but they would be entirely incorrect. This is some nasty "adversarial" training strategy.

Nevertheless, in summary: the distribution sheds light on the degree of separability, but it is not really what matters.
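The claim that an unrestricted model drives training error to zero and pushes training predictions to 0 and 1 even on non-separable data can be backed up with a tiny sketch (using a plain decision tree instead of a boosted ensemble, purely to keep it fast; the data are made up):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Non-separable data: the labels are pure noise, independent of the features.
X = rng.normal(size=(500, 2))
y = rng.integers(0, 2, size=500).astype(float)

# An unrestricted tree keeps splitting until every leaf is pure, so its
# training predictions are exactly 0 or 1 (zero training error).
deep = DecisionTreeRegressor().fit(X, y)
deep_pred = deep.predict(X)

# A depth-1 stump can only output two leaf means and cannot memorize noise.
stump = DecisionTreeRegressor(max_depth=1).fit(X, y)
stump_pred = stump.predict(X)
```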
39,398
Gradient boosting - extreme predictions vs predictions close to 0.5
First I would suggest using one data set instead of two to explore the output probability predictions. The reason is simple: if we change the data, no one knows what will happen. As demoed in @IcannotFixThis's answer, the exact same model will have a different probability output if the data changes from overlapping to more separable. If we insist on talking about two different data sets, then from the limited information we can only say it is possible that "extreme predictions" means the model is overfitting / the data is too "simple" for the model.
39,399
Gradient boosting - extreme predictions vs predictions close to 0.5
The predictions usually depend on your model. Decision trees in general yield quite "calibrated" outputs which can nearly be interpreted as a probability. Some, like SVM for example, don't. But this also highly depends on over-fitting/under-fitting, or on the number of features (no, more does not have to be "better" necessarily). Actually, if this is one class, the first model probably over-fitted.

But first things first: where is your other class? As you're doing predictions, you should always plot both classes (with different colours). From the predictions of one class, you cannot say too much.

If you want to measure the performance and make conclusions about how much is learned from the features, use a score like ROC AUC, where the order of events matters and not the distribution. If you intend to work with distributions, you may have a look at probability calibration methods (or read about which classifiers yield good calibration anyway). They are not perfect, but they aim at transforming the predictions into probabilities (and therefore give meaning to the output).
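As a sketch of the calibration route (illustrative, on synthetic data): scikit-learn's `CalibratedClassifierCV` wraps a classifier that only produces scores, such as an SVM, and maps those scores to probabilities via Platt scaling ("sigmoid") or isotonic regression.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# SVC exposes only decision_function scores; the wrapper fits a sigmoid
# on held-out folds to turn those scores into probabilities.
clf = CalibratedClassifierCV(SVC(), method="sigmoid", cv=3)
clf.fit(X, y)
proba = clf.predict_proba(X)   # rows are per-class probabilities
```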
39,400
Gradient boosting - extreme predictions vs predictions close to 0.5
The first scenario may be due to overfitting of the training data. In-sample and out-of-sample performance also depends on what evaluation metric you are using (or what is applicable to the problem). Besides comparing the metrics, try checking the confusion matrices as well to examine the misclassifications. Using metrics like log loss and introducing regularization parameters might be another option. (Check out XGBoost - it allows adding alpha (L1) and lambda (L2) regularization parameters.)
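To see why log loss is a useful check here (toy numbers of my own): it punishes confident but wrong probabilities far more than uncommitted 0.5 predictions.

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = np.array([1, 0, 1, 0])

p_confident_wrong = np.array([0.01, 0.99, 0.01, 0.99])
p_uncommitted = np.full(4, 0.5)
p_confident_right = np.array([0.99, 0.01, 0.99, 0.01])

# -log(0.01) ~ 4.61 vs -log(0.5) ~ 0.69 vs -log(0.99) ~ 0.01
losses = [log_loss(y_true, p)
          for p in (p_confident_wrong, p_uncommitted, p_confident_right)]
```

So a model whose extreme predictions are wrong will look dramatically worse under log loss than one that hedges at 0.5, which is exactly the out-of-sample gap described above.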