How to fit an autoregressive (AR(1)) model with trend and/or seasonality to a time series?
My suggestion is to parameterise the function $f(t)$ and use the methodology of state space models and the Kalman filter. For example, a second-order autoregressive, AR(2), process is a relatively general, yet simple, specification that can capture smooth cycles. You would then be dealing with a Gaussian linear model with an unobserved component. Once you write the state space representation of the model, the Kalman filter can be used to evaluate the likelihood function given the data. By maximising the likelihood function, the parameters of the model ($\alpha$ and the variances $\sigma^2_\varepsilon$, $\sigma^2_\eta$) can be estimated. In addition, an estimate of the unobserved component $f(t)$ is obtained. Some references about the methodology are given at the end of this answer.

Here, I define the model in terms that would allow you to run the Kalman filter. The equations of the model:

\begin{align} y_t &= \mu_t + \alpha f_t \\ \mu_t &= (1-\alpha)\mu_{t-1} + \varepsilon_t\,, &\quad \varepsilon_t\sim NID(0,\sigma^2_\varepsilon) \\ f_t &= \phi_1 f_{t-1} + \phi_2 f_{t-2} + \eta_t\,, &\quad \eta_t\sim NID(0,\sigma^2_\eta) \\ & \hbox{Cov}(\varepsilon_t,\eta_t)=0 \end{align}

$y_t$ is the observed series, which is modeled as an AR(1) process plus an unobserved transitory component $f_t$. I included two disturbance terms, $\varepsilon_t$ and $\eta_t$, which are independent of each other; the first one can be discarded by setting $\sigma^2_\varepsilon=0$.

State space representation (input for the Kalman filter):

\begin{eqnarray} y_t=\left( \begin{array}{ccc} 1&\alpha&0 \end{array} \right) \left( \begin{array}{c} \mu_t\\f_t\\f_{t-1} \end{array} \right)\,,\quad \left( \begin{array}{c} \mu_t\\f_t\\f_{t-1} \end{array} \right)= \left( \begin{array}{ccc} 1-\alpha&0&0\\ 0&\phi_1&\phi_2\\ 0&1&0 \end{array} \right) \left( \begin{array}{c} \mu_{t-1}\\f_{t-1}\\f_{t-2} \end{array} \right) + \left( \begin{array}{c} \varepsilon_t\\\eta_t\\0 \end{array} \right)\,. \end{eqnarray}

The covariance matrix of the disturbance terms is a diagonal matrix (since the disturbances are independent of each other) with diagonal $(\sigma^2_\varepsilon\,, \sigma^2_\eta\,, 0)$. This is essentially the model proposed by Clark (1987) to extract the business cycle from time series of gross domestic product; I simply added the coefficient $\alpha$ so that it resembles your model. The difference from your original model is that here $f(t)$ must be specified by means of a linear model. You may need to try other specifications of $f(t)$ that better fit the overall dynamics of the trend/cycle component in your data. You may also need to include another component to capture, for example, seasonal cycles; searching for information about the basic structural model will turn up a common specification of the seasonal component. If you cannot come up with a sensible specification that fits your data, then you may need to explore non-parametric methods, as I mentioned in a comment.

References:

Clark, P. K. (1987). "The Cyclical Component of U.S. Economic Activity", The Quarterly Journal of Economics, 102, 797-814.

Harvey, A. C. (1989). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press.

Durbin, J. and Koopman, S. J. (2001). Time Series Analysis by State Space Methods. Oxford University Press.
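As a minimal sketch of the likelihood evaluation described above (not part of the original answer; all parameter values, the series length, and the diffuse initialisation are illustrative assumptions), the state space form can be filtered with a hand-rolled Kalman filter in Python. In practice this function would be handed to a numerical optimiser to estimate the parameters.

```python
import numpy as np

def kalman_loglik(y, alpha, phi1, phi2, s2_eps, s2_eta):
    """Gaussian log-likelihood of the state space model via the Kalman filter."""
    Z = np.array([1.0, alpha, 0.0])                 # observation vector
    T = np.array([[1 - alpha, 0.0, 0.0],
                  [0.0, phi1, phi2],
                  [0.0, 1.0, 0.0]])                 # transition matrix
    Q = np.diag([s2_eps, s2_eta, 0.0])              # state disturbance covariance
    a = np.zeros(3)                                 # state mean
    P = np.eye(3) * 10.0                            # diffuse-ish initial covariance
    ll = 0.0
    for yt in y:
        v = yt - Z @ a                              # one-step prediction error
        F = Z @ P @ Z                               # prediction error variance
        ll += -0.5 * (np.log(2 * np.pi * F) + v ** 2 / F)
        K = P @ Z / F                               # Kalman gain
        a = a + K * v                               # filtered state mean
        P = P - np.outer(K, Z @ P)                  # filtered state covariance
        a = T @ a                                   # time update (prediction)
        P = T @ P @ T.T + Q
    return ll

# simulate a short series from the model (illustrative parameters)
rng = np.random.default_rng(0)
alpha, phi1, phi2 = 0.3, 1.5, -0.7
mu, f, f_lag = 0.0, 0.0, 0.0
y = []
for _ in range(200):
    f, f_lag = phi1 * f + phi2 * f_lag + rng.normal(0, 1.0), f
    mu = (1 - alpha) * mu + rng.normal(0, 0.5)
    y.append(mu + alpha * f)
y = np.asarray(y)

print(kalman_loglik(y, alpha, phi1, phi2, 0.25, 1.0))
```

Maximising `kalman_loglik` over $(\alpha, \phi_1, \phi_2, \sigma^2_\varepsilon, \sigma^2_\eta)$, e.g. with scipy's optimisers, then gives the parameter estimates discussed above.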
How to fit an autoregressive (AR(1)) model with trend and/or seasonality to a time series?
From the graphs you have shown in your question, it appears that there may be a periodic signal in your data with a fixed frequency. To determine whether this is the case, you should compute the Discrete Fourier Transform (DFT) of your data and plot its periodogram. From the plotted periodogram below, it is evident that there is a strong periodic signal in your data, with an estimated frequency of 0.16. Based on this result, I would recommend some kind of periodic regression model as your first attempt at modelling the data. Fitting a model with a single sinusoidal wave at this estimated frequency should already explain a lot of the variation in your data. You can add a trend term and other terms if you want, but I would start with a simple periodic regression and build up from there. If you do this, you might find that you get reasonable residuals and no longer need to worry about any autoregressive behaviour.

R code: Here is the R code I used to generate these plots:

    #Load required libraries
    library(ggplot2);
    library(stats);

    #Import data (put in a text file)
    IMPORT <- read.table('CV DATA.txt');
    N      <- length(IMPORT$V1);
    DATA   <- data.frame(Time = 1:N, Value = IMPORT$V1);

    #Create discrete Fourier transform of the data
    VALS <- fft(DATA$Value - mean(DATA$Value))[1:(N/2)];
    DFT  <- data.frame(Frequency = (0:(N/2-1))/N, Value = VALS, Norm = Mod(VALS));

    #Set theme setting for plots
    THEME <- list(theme(plot.title    = element_text(hjust = 0.5, face = 'bold', size = 16),
                        plot.subtitle = element_text(hjust = 0.5, face = 'bold')));

    #Generate time series plot
    FIGURE1 <- ggplot(data = DATA, aes(x = Time, y = Value)) +
               geom_line(size = 1, colour = 'blue') + THEME +
               ggtitle('Plot of time-series data') +
               xlab('Time') + ylab('Value');

    #Generate periodogram
    II <- which(DFT$Norm == max(DFT$Norm));
    FF <- DFT$Frequency[II];
    FIGURE2 <- ggplot(data = DFT, aes(x = Frequency, y = Norm)) +
               geom_line(size = 1, colour = 'red') +
               geom_vline(xintercept = FF, size = 1, linetype = 'dashed') + THEME +
               ggtitle('Periodogram of time-series data') +
               xlab('Frequency') + ylab('Norm of DFT');

    #Plot the time-series and its periodogram
    FIGURE1;
    FIGURE2;

    #Show the estimated frequency of the signal
    FF;
    [1] 0.16
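For readers working in Python rather than R, the same dominant-frequency check can be sketched with NumPy. The signal below is an illustrative assumption (a sinusoid at frequency 0.16 cycles per sample plus noise), not the asker's data:

```python
import numpy as np

# illustrative signal: sinusoid at frequency 0.16 cycles/sample plus noise
rng = np.random.default_rng(1)
n = 500
t = np.arange(n)
x = np.sin(2 * np.pi * 0.16 * t) + 0.3 * rng.normal(size=n)

# periodogram via the DFT of the mean-centred series (first n/2 frequencies)
vals = np.fft.fft(x - x.mean())[: n // 2]
freqs = np.arange(n // 2) / n
norm = np.abs(vals)

# dominant frequency: location of the largest DFT magnitude
ff = freqs[np.argmax(norm)]
print(ff)  # 0.16
```

As in the R version, the frequency with the largest DFT magnitude is the natural starting point for a periodic regression.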
How to fit an autoregressive (AR(1)) model with trend and/or seasonality to a time series?
This answer is not theoretical but practical, and it might work in some cases. Use it with caution, since it is not guaranteed to work in all cases. Since $f(t)$ changes slowly, it is possible to split the series into windows, each well approximated by an order-2 polynomial, and still have many points per window. Instead of decomposing the time series into trend + seasonality, we split it into windows and detrend each window. After that, we fit the AR(1) model. We take a rolling window approach and only keep fits that are significant (p-value < 0.001). Then we average over all kept coefficients. Implementing this methodology, I was able to get the following results (in a slightly different example):

    real alpha = 0.1
    estimated alpha = 0.12591253250291573
    standard deviation = 0.08668464697167208

The code I used is provided below (updated to the current statsmodels API, where the old ARMA class is replaced by ARIMA):

    import numpy as np
    import pandas as pd
    from matplotlib import pylab as plt
    import statsmodels.api as sm
    from statsmodels.tsa.arima.model import ARIMA

    np.random.seed(1)

    #defining the trend function
    def trend(t, amp=1):
        return amp*(1 + np.sin(t/10))

    #length of the time series
    n_time_steps = 250

    #amplitude of the time series
    amplitud = 10
    noise_frac_aplitud = 0.5

    #initializing the time series
    time_series = np.zeros(n_time_steps)
    time_series[0] = trend(0, amplitud)

    #The AR(1) parameter. Our goal will be to recover this parameter.
    alpha = 0.1

    #making the time series
    for t in range(1, n_time_steps):
        time_series[t] = ((1 - alpha)*time_series[t - 1]
                          + alpha*trend(t, amp=amplitud)
                          + alpha*np.random.normal(0, noise_frac_aplitud*amplitud))

    #passing the time series to a pandas format
    dates = sm.tsa.datetools.dates_from_range('2000m1', length=len(time_series))
    time_series_pd = pd.Series(time_series, index=dates)

    window = 40
    n_iter = n_time_steps - window
    alpha_list = []
    alpha_elite_list = []

    for i in range(n_iter):
        #rolling window: detrend each window with an order-2 polynomial
        temp_time_series_pd = time_series_pd[i:window + i]
        plt.plot(temp_time_series_pd)
        res = sm.tsa.detrend(temp_time_series_pd, order=2)
        #fit an AR(1) model to the detrended window
        ar1_fit = ARIMA(res, order=(1, 0, 0)).fit()
        #params[1] is the AR coefficient; our alpha is 1 minus that
        alpha_list.append(1 - ar1_fit.params[1])
        #keep only windows where the AR coefficient is significant
        if ar1_fit.pvalues[1] < 0.001:
            alpha_elite_list.append(1 - ar1_fit.params[1])

    print("real alpha = ", alpha)
    print("estimated alpha = ", np.mean(alpha_elite_list))
    print("standard deviation = ", np.std(alpha_elite_list))
What is "Adjusted CV" or "Bias-corrected CV"?
In this context, bias correction refers to the fact that when we perform resampling (bootstrap or cross-validation) we almost certainly do not use our whole sample of size $N$; this potentially leads to biased estimates of the MSEP (Mean Squared Error of Prediction). There are various methodologies that can control for this kind of resampling bias. For example, one of the most commonly referenced techniques is the bootstrap 0.632 (Efron, 1983, JASA, Sect. 6). What all these methodologies have in common is that they derive a relation approximating the expected difference in performance between a learner trained with the "resampled sample" and an ideal learner trained with the full sample. They then recombine/weight the estimates in such a way that the apparent discrepancy is minimised. For example, the adjCV estimator, as implemented in pls::MSEP, adjusts by a factor proportional to the difference between the whole-sample MSEP and the mean out-of-fold MSEP (see Mevik & Cederkvist, 2005, Chemometrics, Sect. 2.4). Similarly, the bootstrap 0.632 estimator recombines the out-of-bootstrap-sample error estimate with the in-bootstrap-sample error estimate. A nice, succinct introduction to the topic touching on the issue of bias (and variance) can be found in Sections 7.10 (Cross-Validation) and 7.11 (Bootstrap Methods) of Hastie et al.'s classic textbook Elements of Statistical Learning; they touch upon bias mostly in the context of the bootstrap 0.632, but the rationale for bias-adjusted CV is the same. Finally, the Cross Validated community already has two very enlightening threads on this: What is the .632+ rule in bootstrapping? and Bias and variance in leave-one-out vs K-fold cross validation; they are definitely worth one's time! (Personal note: people tend to make a big issue about bias, but I have found that variance is the one that often kills an analysis.)
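To make the recombination idea concrete, here is a minimal Python sketch of the bootstrap 0.632 estimator (not from the original answer; the least-squares model, the data, and the helper names `fit`/`predict` are illustrative assumptions). It weights the apparent (training) error by 0.368 and the out-of-bag error by 0.632:

```python
import numpy as np

def bootstrap_632_mse(X, y, fit, predict, n_boot=100, seed=0):
    """Bootstrap 0.632 estimate of prediction MSE:
    0.368 * apparent error + 0.632 * out-of-bag error."""
    rng = np.random.default_rng(seed)
    n = len(y)
    model = fit(X, y)
    apparent = np.mean((predict(model, X) - y) ** 2)   # training (apparent) error
    oob_errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)               # bootstrap resample
        oob = np.setdiff1d(np.arange(n), idx)          # points left out of the resample
        if len(oob) == 0:
            continue
        m = fit(X[idx], y[idx])
        oob_errors.append(np.mean((predict(m, X[oob]) - y[oob]) ** 2))
    return 0.368 * apparent + 0.632 * np.mean(oob_errors)

# demonstration with a simple least-squares line fit
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(80, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, size=80)

fit = lambda X, y: np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]
predict = lambda beta, X: np.c_[np.ones(len(X)), X] @ beta

print(bootstrap_632_mse(X, y, fit, predict))
```

The apparent error alone would be optimistically biased; the out-of-bag error alone pessimistically biased (each bootstrap fit sees only about 63.2% of the distinct points); the weighted combination trades the two off, which is the same rationale as the adjCV correction above.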
Correcting Kullback-Leibler divergence for size of datasets
The fundamental issue is that the KL divergence between the true underlying distributions is zero, as they are the same in your code ($U(0,1)$), but sampling variation (almost surely) ensures that in finite samples the KL divergence between the two empirical distributions will be positive, as the empirical distributions will not be exactly equal. Since the empirical distributions converge (uniformly) to the true distributions as the sample size goes to infinity, the sample KL divergence goes to its true value almost surely as the sample size $\rightarrow \infty$, which causes your histograms to shift closer and closer to zero as the sample size increases. If you look at where the histograms are centred (roughly) on the x-axis, you'll see that the histogram for $n=100{,}000$ is located at about $1/100$th of where the histogram for $n=1000$ is located ($3\times 10^{-4}$ vs. $3\times 10^{-2}$, approximately). The ratio of the sample sizes is, not coincidentally, $100:1$. The same effect can also be seen in the $n=10{,}000$ histogram compared to the other two. Note that binning into a constant number of bins would not, in general, allow the KL divergence to approach the true value in cases where the two underlying distributions were not the same; instead, convergence would be to the true value of the KL divergence between the discrete distributions formed in the obvious way from the underlying continuous distributions and the bin boundaries. The convergence to the true value in this case is a happy coincidence brought on by the way you wrote the code.
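The roughly $1/n$ scaling can be reproduced with a short simulation (a sketch reconstructing the effect, not the asker's original code; the bin count and number of replications are illustrative assumptions):

```python
import numpy as np

def empirical_kl(p_sample, q_sample, bins=10):
    """KL divergence between histogram estimates of two samples on [0, 1]."""
    edges = np.linspace(0, 1, bins + 1)
    p = np.histogram(p_sample, bins=edges)[0] / len(p_sample)
    q = np.histogram(q_sample, bins=edges)[0] / len(q_sample)
    mask = (p > 0) & (q > 0)        # ignore empty bins
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

rng = np.random.default_rng(0)
kl = {}
for n in (1_000, 10_000, 100_000):
    # average KL over replications of two independent U(0,1) samples of size n
    kl[n] = np.mean([empirical_kl(rng.uniform(size=n), rng.uniform(size=n))
                     for _ in range(50)])

print(kl)  # each tenfold increase in n shrinks the average KL roughly tenfold
```

Even though the true KL divergence is exactly zero here, every finite-sample estimate is positive, and its magnitude is governed by the sample size, exactly as described above.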
Correcting Kullback-Leibler divergence for size of datasets
The KL divergence doesn't really produce smaller distances with larger datasets or vice versa. In your example, the distances are incomparable because of the sampling step in your code (in generate_histogram). Essentially, when you use that function to generate a probability mass function with 100 data points, there's quite a bit of sampling uncertainty that's fed into the KL divergence. For example, here are a few realisations that I got when I ran that function with 100 data points:

    generate_histogram(100)/100.
    # array([0.09, 0.02, 0.01, 0.03, 0.02, 0.03, 0.02, 0.01, 0.77])
    # array([0.06, 0.02, 0.01, 0.04, 0.06, 0.81])
    # array([0.09, 0.01, 0.01, 0.01, 0.01, 0.01, 0.04, 0.04, 0.78])
    # array([0.09, 0.03, 0.01, 0.03, 0.03, 0.09, 0.72])
    # array([0.08, 0.01, 0.01, 0.04, 0.86])
    # array([0.09, 0.02, 0.01, 0.01, 0.01, 0.01, 0.02, 0.09, 0.74])

(By the way, you should check your implementation here, as the arrays are not of equal lengths - i.e. some bin probabilities are zero and these are ignored, and I don't know what scipy does with unequal array lengths. This isn't an issue with the other two sections with higher data-point counts, as the probabilities usually aren't zero for any block. I suspect this is the reason why the first histogram is so skewed whereas the others aren't.)

When you call generate_histogram with 10k data points, on the other hand, there's not much sampling uncertainty in the probabilities, and every probability vector that's fed into the KL divergence function looks the same:

    np.round(generate_histogram(10000)/10000., 2)
    # array([0.1 , 0.  , 0.  , 0.01, 0.01, 0.01, 0.01, 0.03, 0.06, 0.77])
    # array([0.1 , 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.02, 0.06, 0.77])
    # array([0.1 , 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.02, 0.06, 0.77])
    # array([0.1 , 0.01, 0.  , 0.01, 0.01, 0.01, 0.01, 0.02, 0.06, 0.77])
    # array([0.1 , 0.  , 0.01, 0.01, 0.01, 0.01, 0.02, 0.02, 0.06, 0.77])

Since every realisation is the same to 2 d.p., the KL divergence reflects this and is basically around 0 most of the time.

I speculate that your very first function (KL_divergence(a, b)) doesn't work well because, for smaller datasets, 100 bins is massive. The issue with using 100 bins on a dataset of comparable size is that there'll be ones, twos and zeros in most bins, and there'll be significant sampling uncertainty, which might lead you to think that the KL divergence is affected by the size of the datasets, when in reality it's only the number of bins and the sampling uncertainty of the probabilities that affect it.

I do not have much experience with the KL divergence, but it is only a function of random variables after all, and you should be able to account for the uncertainty in its estimation based on your specific use case. For example, if you're working with different-sized datasets, you could bootstrap, i.e.:

1. Bootstrap (sample with replacement) from both datasets $D_1$ and $D_2$ to obtain bootstrapped datasets $D_1^*$ and $D_2^*$.
2. Obtain an estimate of the ECDF by the .cumsum() as you've done already.
3. Calculate the KL divergence for this set of bootstrapped data.
4. Repeat.

This way, you account for the difference in datasets by obtaining sampling uncertainty around the KL divergence. If you're only looking for a point estimate, you can simply take the mode of the bootstrapped distribution.

If, for whatever reason, you want to feed an equal number of data points into the KL divergence function (perhaps to utilise a higher number of bins), you could do kernel density estimation on both of your datasets, sample a lot of values from those densities, and use the KL divergence on the resulting probability vectors. This wouldn't really account for sampling uncertainty, but it would allow you to get over the implementation issues.
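The bootstrap procedure for the KL divergence can be sketched as follows (a hedged illustration, not the asker's code: it uses a shared histogram rather than the .cumsum() ECDF route, and the datasets, bin count, and replication count are assumptions):

```python
import numpy as np

def bootstrap_kl(d1, d2, bins=10, n_boot=200, seed=0):
    """Bootstrap distribution of a histogram-based KL divergence KL(d1 || d2)."""
    rng = np.random.default_rng(seed)
    # shared bin edges covering both datasets
    edges = np.linspace(min(d1.min(), d2.min()), max(d1.max(), d2.max()), bins + 1)
    kls = []
    for _ in range(n_boot):
        b1 = rng.choice(d1, size=len(d1), replace=True)   # resample D1*
        b2 = rng.choice(d2, size=len(d2), replace=True)   # resample D2*
        p = np.histogram(b1, bins=edges)[0] / len(b1)
        q = np.histogram(b2, bins=edges)[0] / len(b2)
        mask = (p > 0) & (q > 0)                          # ignore empty bins
        kls.append(np.sum(p[mask] * np.log(p[mask] / q[mask])))
    return np.asarray(kls)

# different-sized datasets drawn from the same distribution
rng = np.random.default_rng(1)
kls = bootstrap_kl(rng.normal(size=300), rng.normal(size=5000))
print(kls.mean(), np.percentile(kls, [2.5, 97.5]))
```

The spread of `kls` quantifies the sampling uncertainty around the KL estimate; a point estimate can then be read off from the centre (or mode) of this bootstrap distribution.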
Correcting Kullback-Leibler divergence for size of datasets
The KL divergence doesn't really produce smaller distances with larger datasets or vice-versa. In your example, the distances are incomparable because of the sampling step in your code (in generate_hi
Correcting Kullback-Leibler divergence for size of datasets The KL divergence doesn't really produce smaller distances with larger datasets or vice-versa. In your example, the distances are incomparable because of the sampling step in your code (in generate_histogram). Essentially, when you use that function to generate a probability mass function with 100 data points, there's quite a bit of sampling uncertainty that's fed into the KL divergence. For example, here are a few realisations that I got when I ran that function with a 100 data points: generate_histogram(100)/100. # array([0.09, 0.02, 0.01, 0.03, 0.02, 0.03, 0.02, 0.01, 0.77]) # array([0.06, 0.02, 0.01, 0.04, 0.06, 0.81]) # array([0.09, 0.01, 0.01, 0.01, 0.01, 0.01, 0.04, 0.04, 0.78]) # array([0.09, 0.03, 0.01, 0.03, 0.03, 0.09, 0.72]) # array([0.08, 0.01, 0.01, 0.04, 0.86]) # array([0.09, 0.02, 0.01, 0.01, 0.01, 0.01, 0.02, 0.09, 0.74]) (By the way, you should check your implementation here, as the arrays are not of equal lengths - i.e. some bin probabilities are zero and these are ignored, and I don't know what scipy does with unequal array lengths. This isn't an issue with the other two sections with higher data points as the probabilities usually aren't zero for any block. I suspect that this is the reason for why the first histogram is so skewed whereas the others aren't.) When you call generate_histogram with 10k data points, on the other hand, there's not much sampling uncertainty in the probabilities and every probability vector that's fed into the KL divergence function looks the same: np.round(generate_histogram(10000)/10000., 2) # array([0.1 , 0. , 0. , 0.01, 0.01, 0.01, 0.01, 0.03, 0.06, 0.77]) # array([0.1 , 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.02, 0.06, 0.77]) # array([0.1 , 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.02, 0.06, 0.77]) # array([0.1 , 0.01, 0. , 0.01, 0.01, 0.01, 0.01, 0.02, 0.06, 0.77]) # array([0.1 , 0. 
, 0.01, 0.01, 0.01, 0.01, 0.02, 0.02, 0.06, 0.77]) Since every realisation is the same to 2 d.p., the KL divergence reflects this and is basically around 0 most times. I speculate that your very first function (KL_divergence(a, b)) doesn't work well because, for smaller datasets, 100 bins is massive. The issue with using 100 bins with a dataset of a comparable size is that there'll be ones, twos and zeros in most bins, and there'll be significant sampling uncertainty, which might lead you to think that the KL divergence is affected by the size of the datasets, when in reality, it's only the number of bins and the sampling uncertainty of the probabilities that affect it. I do not have much experience with the KL divergence but it is only a function of random variables after all, and you should be able to account for the uncertainty in its estimation, based on your specific use case. For example, if you're working with different size datasets, you would be able to bootstrap, i.e.: Bootstrap (sample with replacement) from both datasets $D_1$ and $D_2$ to obtain bootstrapped datasets $D_1^*$ and $D_2^*$. Obtain an estimate of the ECDF by the .cumsum() as you've done already. Calculate the KL divergence for this set of bootstrapped data. Repeat. This way, you account for the difference in datasets by obtaining sampling uncertainty around the KL divergence. If you're only looking for a point estimate, you can simply take the mode of the bootstrapped distribution. If, for whatever reason, you want to feed in an equal number of data points to the KL divergence function (perhaps to utilize a higher number of bins), you could do kernel density estimation on both of your datasets, sample a lot of values from those densities and use the KL divergence on the resulting probability vector. This wouldn't really account for sampling uncertainty, but it would allow you to get over implementation issues.
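A minimal sketch of the bootstrap procedure above. The histogram estimator on a shared bin grid and the add-one smoothing are my assumptions (the original binning code is not shown here), and `kl_from_samples` / `bootstrap_kl` are hypothetical helper names:

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)

def kl_from_samples(a, b, bins):
    # Histogram both samples on the SAME bin grid, then compute KL(a || b).
    pa, _ = np.histogram(a, bins=bins)
    pb, _ = np.histogram(b, bins=bins)
    # Add-one smoothing keeps every bin probability strictly positive,
    # avoiding the zero-bin / unequal-length problem mentioned above.
    pa = (pa + 1) / (pa.sum() + len(pa))
    pb = (pb + 1) / (pb.sum() + len(pb))
    return entropy(pa, pb)  # scipy's entropy(p, q) is the KL divergence

def bootstrap_kl(d1, d2, bins, n_boot=500):
    # Resample each dataset with replacement and recompute the divergence,
    # yielding a sampling distribution around the KL estimate.
    out = np.empty(n_boot)
    for i in range(n_boot):
        s1 = rng.choice(d1, size=len(d1), replace=True)
        s2 = rng.choice(d2, size=len(d2), replace=True)
        out[i] = kl_from_samples(s1, s2, bins)
    return out
```

The spread of the returned values reflects the extra sampling uncertainty of the smaller dataset, which is exactly what the point estimate alone hides.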
47,707
How do I interpret the coefficients of a log-linear regression with quadratic terms?
By "impact" of $x$ I understand you want to estimate the change in the predicted value when $x$ changes by some (small) amount $\delta x.$ This is a simple calculation beginning with the fitted model $$\log(\hat y(x)) = \hat a + \hat b x + \hat c x^2$$ where the "hats" on the terms designate estimated values. Plugging in $x+\delta x$ for the changed value of $x$ and subtracting the original value of $\log\hat y$ gives $$\log(\hat y(x+\delta x)) - \log(\hat y(x)) = \hat b\, \delta x + \hat c (2x\, \delta x + (\delta x)^2).$$ Provided $\hat c(\delta x)^2$ is of negligible size compared to the remaining terms on the right hand side; that is, when $$\left|\hat c\, \delta x\right|\ \ll\ \left|\hat b + 2 \hat c\, x\right|,$$ we may neglect it for these interpretive purposes and write $$\log\left(\frac{\hat y(x+\delta x)}{\hat y(x)}\right) = \log(\hat y(x+\delta x)) - \log(\hat y(x)) \approx \left(\hat b + 2 \hat c x\right) \delta x .$$ On the left is the logarithm of the relative change in the predicted response $\hat y(x).$ For small relative changes the (natural) logarithm will be very close to 1/100th of the percentage difference. For instance, when the log is 0.15, the relative change will be very close to a +15% increase. (For many purposes this rule of thumb holds for percentages between $\pm 20\%,$ roughly.) On the right is a multiple of the change $\delta x$ induced in the regressor. That multiple is $\hat b + 2\hat c x.$ Of note is that it depends on the value of $x$ you started with. In other words, the change in the response depends on what the regressor value is: it is not constant. 
Another way to restate this interpretation is to exponentiate both sides, which expresses the response on its original (rather than log) scale, yielding $$\hat y(x+\delta x) \approx \hat y(x)\exp\left(\left(\hat b + 2 \hat c x\right) \delta x\right) \approx \hat y(x)\left(1 + \left(\hat b + 2 \hat c x\right) \delta x\right).$$ The new value, on the left hand side, is expressed as change of the old value by approximately $100\% \times \left(\hat b + 2 \hat c x\right) \delta x.$ Although this might seem a little complicated and not easy to remember, please note that all the calculations involved are simple: they are just some multiplications and additions. To those familiar with the differential Calculus, they can be read directly off the original model equation with only the simplest mental arithmetic, because (taking differentials) it is immediate that $$\frac{y^\prime (x)}{y(x)}\, dx = \frac{d}{dx} \log(y(x) )\, dx = (b + 2cx)\, dx$$ and all you have to do is "put hats on" all the estimates and, as usual, interpret $dx$ as a (sufficiently) small increment in $x.$
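As a quick numerical check of the approximation (the coefficient values below are made up for illustration, not from any real fit):

```python
import numpy as np

# Hypothetical fitted coefficients for log(y) = a + b*x + c*x^2.
a_hat, b_hat, c_hat = 1.0, 0.30, -0.02

def log_yhat(x):
    return a_hat + b_hat * x + c_hat * x**2

x, dx = 2.0, 0.1
exact = log_yhat(x + dx) - log_yhat(x)   # exact change in the log prediction
approx = (b_hat + 2 * c_hat * x) * dx    # first-order term (b + 2c x) * dx
# The two differ only by the neglected c * dx^2 term.
```

Here the neglected term is $\hat c\,(\delta x)^2 = -0.02 \times 0.01$, tiny compared with the first-order term, as the condition in the answer requires.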
47,708
What is the difference between monte carlo integration and gibbs sampling?
Monte Carlo integration is a technique for numerically integrating a function by evaluating it at many randomly chosen points. It's useful for computing integrals when a closed form solution doesn't exist, and when the problem is high dimensional (in this case, standard numerical integration methods based on quadrature are inefficient). The function to be integrated need not be a probability distribution. Markov chain Monte Carlo (MCMC) refers to a class of methods for sampling from a probability distribution. It works by constructing a Markov chain whose equilibrium distribution matches the distribution of interest, then sampling from the Markov chain. This is useful when one cannot directly sample from the distribution of interest, particularly in high dimensional settings. Gibbs sampling is an MCMC method. Monte Carlo integration and MCMC both fall under the general category of Monte Carlo methods, which use random sampling (the name refers to the Monte Carlo casino in Monaco). But, as above, they're used for completely different purposes (integrating a general function vs. sampling from a probability distribution). A connection arises when MCMC methods are used for inference. For example, suppose we want to estimate a parameter as the mean of the posterior distribution. We can use MCMC to sample from the posterior, then take the mean of the samples. This corresponds to a form of Monte Carlo integration over the posterior.
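To make the contrast concrete, here is a small sketch of both: plain Monte Carlo integration of a deterministic function, and a Gibbs sampler for a bivariate normal (my choice of target, a standard textbook example with known conditionals):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo integration: estimate the integral of exp(x) over [0, 1]
# (true value e - 1) by averaging the integrand at uniform random points.
u = rng.uniform(0.0, 1.0, size=100_000)
mc_estimate = np.exp(u).mean()

# Gibbs sampling (an MCMC method): sample a bivariate standard normal with
# correlation rho by alternately drawing from the exact conditionals
# x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2).
rho, n = 0.8, 20_000
x = np.zeros(n)
y = np.zeros(n)
sd = np.sqrt(1.0 - rho**2)
for t in range(1, n):
    x[t] = rng.normal(rho * y[t - 1], sd)
    y[t] = rng.normal(rho * x[t], sd)
x, y = x[n // 10:], y[n // 10:]   # discard burn-in

# The connection: averaging a function of the MCMC samples (here, the
# mean of x) is a form of Monte Carlo integration over the target.
posterior_mean_x = x.mean()
```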
47,709
K-nearest neighbor supervised or unsupervised machine learning?
Assuming K is given, strictly speaking, KNN does not involve any learning, i.e., there are no parameters we can tune to make the performance better, nor are we optimizing an objective function over the training data set. This is a major difference from most supervised learning algorithms. It is a rule that can be used at production time to classify or cluster an instance based on its neighbors. Computing the neighbors does not require labels, but labels can be used to make the decision in the classification case.
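A bare-bones illustration of the "no learning" point: the classifier below has no training step at all, only a distance computation and a majority vote at prediction time (a hypothetical minimal implementation, not any particular library's):

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    # "Training" is just storing (X_train, y_train); nothing is fitted.
    dists = np.linalg.norm(X_train - x_new, axis=1)  # distance to each stored point
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote
```

Note that `dists` and `nearest` need no labels at all; `y_train` only enters in the final vote, matching the last sentence of the answer.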
47,710
How to back-transform a log transformed regression model in R with bias correction
You didn't give any details about why you think the outputs are wildly unlikely, but my guess is that your errors are not normally distributed. That "smearing adjustment" (bias correction) you're using is only valid if the errors are normal. There is a more general smearing adjustment you can use, which is easy to implement. If I recall correctly, and I think I do, the steps are: Compute $\exp(X\hat{\beta})$, i.e. the retransformed but unadjusted prediction. Regress $Y$ against $\exp(X\hat{\beta})$ without an intercept. Call the resulting regression coefficient $\gamma$. Compute the adjusted retransformed prediction as $\gamma \exp(X\hat{\beta})$. It's about the most intuitive thing you can do--forget the theory based on the normal distribution and just estimate the multiplier that gets the job done. For details, see Duan, Naihua. “Smearing Estimate: A Nonparametric Retransformation Method.” Journal of the American Statistical Association, vol. 78, no. 383, 1983, pp. 605–610. JSTOR, JSTOR, www.jstor.org/stable/2288126.
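The original question is about R, but the steps translate directly; here is a sketch in Python on simulated data with skewed, non-normal errors (all names and values below are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate log-linear data with skewed, mean-zero, non-normal errors,
# so the normal-theory correction exp(sigma^2 / 2) would be off.
n = 2000
x = rng.uniform(0.0, 2.0, n)
log_y = 1.0 + 0.5 * x + (1.0 - rng.exponential(1.0, n))
y = np.exp(log_y)

# Step 0: OLS on the log scale, then the raw retransformed prediction.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, log_y, rcond=None)
yhat_raw = np.exp(X @ beta_hat)

# Steps 1-3: regress y on yhat_raw WITHOUT an intercept; the slope gamma
# is the smearing multiplier, and gamma * yhat_raw is the adjusted prediction.
gamma = (yhat_raw @ y) / (yhat_raw @ yhat_raw)
yhat_adj = gamma * yhat_raw
```

With these errors the true multiplier is $\mathbb{E}[e^\varepsilon] = e/2 \approx 1.36$, which the regression-through-the-origin slope recovers without any normality assumption.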
47,711
Optimal proposal for self-normalized importance sampling
It should be noted that $q_{opt}$ actually minimizes the approximate variance given by the Delta method. You can get this by solving $$ q_{opt} = \arg\min_q\mathbb{E}_q[w^2(X)(f(X)-I)^2], \; \text{ s.t.} \int q(x)dx=1 $$ Now, since: $$ \mathbb{E}_q[w^2(X)(f(X)-I)^2] = \int\frac{p^2(x)}{q(x)}(f(x)-I)^2dx = \int L(x,q(x))dx $$ for $L(x,q(x)) = \frac{p^2(x)}{q(x)}(f(x)-I)^2$, using Lagrange multipliers for calculus of variations yields $$ \begin{aligned} 0&=\frac{\partial L}{\partial q} +\lambda \\ &= -\frac{p^2(x)}{q^2(x)}(f(x)-I)^2 + \lambda \end{aligned} $$ Thus $$ q^2(x) = \frac{p^2(x)}{\lambda}(f(x)-I)^2 \implies q_{opt}(x) \propto p(x)|f(x)-I| $$
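For context, the objective above is the delta-method (asymptotic) variance of the self-normalized estimator. A brief sketch, assuming $p$ is normalized so that $\mathbb{E}_q[w(X)] = 1$: the estimator is the ratio $$ \hat{I}_n = \frac{\sum_{i=1}^n w(X_i)f(X_i)}{\sum_{i=1}^n w(X_i)}, \qquad w(x) = \frac{p(x)}{q(x)}, \quad X_i \sim q, $$ and applying the delta method to this ratio of sample means gives $$ \operatorname{Var}(\hat{I}_n) \approx \frac{1}{n}\, \mathbb{E}_q\!\left[w^2(X)\,(f(X)-I)^2\right], $$ which is exactly the quantity minimized in the $\arg\min$ above.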
47,712
Finding a function minimizing the expected value
Note: Normally for self-study questions we try to give hints rather than full solutions. However, in the present case you are dealing with a functional-optimisation problem where I think most students would not have any idea how to do any of this, without seeing a full solution for a few cases. In view of this, I have decided to give a full solution below. Re-framing the optimisation problem: When you are undertaking optimisation in function spaces, half the battle is re-framing the problem in a way that brings it back to optimisation in the reals. In this case, this can be done by recognising that the functional argument is operating on $X$, but the loss is then with respect to $Y$. In this kind of case, with a bit of effort you can split your optimisation problem to turn it into a set of optimisations conditional on values of $x$, which reduces the problem to a standard optimisation problem dealing with real numbers (rather than a function). Let's have a look at how to do this in the present case. You can apply the law-of-total-expectation to restate your optimisation problem as follows: $$\underset{g \in \mathscr{G}}{\text{Minimise}} \quad F(g) = \int \limits_{\mathscr{X}} H(g(x),x) p_X(x) dx,$$ where $\mathscr{G}$ is some appropriately large function-space for the function $g$ (we will come back to this), and the inner-function $H$ is the conditional expectation: $$H(g(x),x) \equiv \int \limits_{\mathscr{Y}} (y-g(x))^2 p_{Y|X}(y|x) dy.$$ Now, observe here that for a fixed argument value of $x$, the function $H(\text{ }\cdot \text{ }, x)$ depends on $g$ only through the individual value $g(x)$. Since you are choosing a function $g$ with argument value $x$ in your optimisation, this means that minimising the objective over the function-space is equivalent to minimising the inner function $H(g(x),x)$ for each individual $x$. 
Thus, substituting $w = g(x)$, your optimisation problem reduces to the non-linear programming problem: $$\underset{w \in \mathbb{R}}{\text{Minimise}} \quad H(w,x) \quad \quad \quad \text{for all } x \in \mathscr{X}.$$ This means we can solve this optimisation problem by finding the point-wise optimised function $\hat{g}$ that minimises the above conditional expectation for each individual $x \in \mathscr{X}$. One caveat on this: If we optimise in this way, we have to go back and check that the resulting optimised function $\hat{g}$ is within the allowable function-space in the initial optimisation problem. We have glossed over this in the above explanation, since it will turn out that the optimised function in this case is a well-known result that is usually considered to be within the scope of the optimisation. Nevertheless, the above reasoning should be read with the implicit caveat that it might not apply if the function space $\mathscr{G}$ is not sufficiently broad to encompass the point-wise optimised function $\hat{g}$. In that case the problem becomes much more complicated! Solving the point-wise optimisation problem: To conduct our point-wise real optimisation we will use standard calculus techniques. For a fixed value of $x$ the derivative of $H$ with respect to our argument value is: $$\begin{equation} \begin{aligned} \frac{\partial H}{\partial w}(w,x) &= \frac{\partial}{\partial w} \int \limits_{\mathscr{Y}} (y-w)^2 p_{Y|X}(y|x) dy \\[6pt] &= \int \limits_{\mathscr{Y}} \frac{\partial}{\partial w} (y-w)^2 p_{Y|X}(y|x) dy \\[6pt] &= -2 \int \limits_{\mathscr{Y}} (y-w) p_{Y|X}(y|x) dy \\[6pt] &= -2 \Bigg[ \int \limits_{\mathscr{Y}} y p_{Y|X}(y|x) dy - w \int \limits_{\mathscr{Y}} p_{Y|X}(y|x) dy \Bigg] \\[6pt] &= -2 \Bigg[ \mathbb{E}(Y|X=x) - w \Bigg]. \\[6pt] \end{aligned} \end{equation}$$ (Note that we have brought the derivative operator inside the integral in this working. 
This step can be justified by assuming that the support $\mathscr{Y}$ is not affected by the estimator $w = g(x)$, and the function $H$ has continuous partial derivatives.) For each fixed $x$ the function $H(w,x)$ is strictly convex in $w$, so the minimising point occurs at the unique critical point of the function, so we have: $$0 = \frac{\partial H}{\partial w}(\hat{w},x) = -2 \Bigg[ \mathbb{E}(Y|X=x) - \hat{w} \Bigg] \quad \quad \implies \quad \quad \hat{w} = \mathbb{E}(Y|X=x).$$ Hence, our point-wise optimised function is: $$\hat{g}(x) = \mathbb{E}(Y|X=x).$$ Assuming that this function is within the function-space for the initial optimisation problem (which it should be), we have found the optimising function. From this result we can see that the way to minimise squared-error-loss is to choose the conditional expectation of $Y$ given $X$ as estimator. This is a well-known result in estimation theory, but as you can see, the derivation requires a bit of knowledge of how to deal with functional optimisation problems.
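A quick numerical illustration of the result, using a made-up joint distribution where the conditional mean is known exactly: take $X \sim N(0,1)$ and $Y \mid X = x \sim N(x^2, 1)$, so the optimizer is $\hat{g}(x) = x^2$ and the minimal risk is the conditional variance, $1$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate from the illustrative joint model above.
n = 200_000
x = rng.normal(size=n)
y = x**2 + rng.normal(size=n)

def risk(g):
    # Monte Carlo estimate of E[(Y - g(X))^2].
    return np.mean((y - g(x))**2)

risk_opt = risk(lambda t: t**2)               # conditional mean E[Y | X = t]
risk_const = risk(lambda t: np.ones_like(t))  # best constant predictor, E[Y] = 1
```

The conditional-mean predictor attains risk close to $1$, while the best constant predictor's risk is $\operatorname{Var}(Y) = \operatorname{Var}(X^2) + 1 = 3$, consistent with the derivation.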
47,713
Finding a function minimizing the expected value
Interpreting the question as @whuber did in his comment, here is a quite vague hint: https://en.wikipedia.org/wiki/Law_of_total_expectation Edit after the spoilers: graphical illustration of the application of the hint, as well as shorter version of Ben's answer: Here are the two dependent random variables $X$ and $Y$. You can see the distribution of $Y$ "sliced" at some values of $X$. The law of total expectation says, that if you want to integrate an expression of $X$ and $Y$, then you can first integrate it on each slice, and then integrate over the slices. Which is very handy in this case. The expression of the question at a slice $x_0$ becomes $E((Y|X=x_0) - g(x_0))^2$, and it is well known (and mentioned by you in the question), that it is minimized by the average $g(x_0) = E(Y|X=x_0)$. Since such $g$ minimizes the value of the expression at every slice, it minimizes the slice integral as well.
47,714
Correlation between Ornstein-Uhlenbeck processes
They are not perfectly positively correlated: Even when random variables are deterministically related (which would require $X$ to be deterministic in this case), perfect correlation requires them to be related via an affine transformation. This would require a relationship of the form: $$V(t) = \ln \Big[ 1+\frac{U(t)}{X(t)} \Big] = a + b U(t),$$ where $a \in \mathbb{R}$ and $b>0$. Solving for the process $X$ gives: $$X(t) = \frac{U(t)}{\exp(a + b U(t))-1}.$$ This is inconsistent with your specification that $X$ is a geometric Brownian motion. However, note that if your geometric Brownian motion process has a large mean and small variance (such that it is approximately constant at a mean value $\mu_X$ that is much bigger than $U(t)$) then you would have $X(t) \approx \mu_X \gg U(t)$ which gives the approximation: $$V(t) = \ln \Big[ 1+\frac{U(t)}{X(t)} \Big] \approx \frac{U(t)}{X(t)} \approx \frac{1}{\mu_X} \cdot U(t),$$ so in this case you could get something that is close to an affine transform, and so you would get something close to perfect correlation.
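The large-mean, small-variance limit is easy to check numerically. Below, stand-in draws for $U(t)$ (small) and $X(t)$ (nearly constant around $\mu_X = 100$) are my illustrative choices, not the actual processes:

```python
import numpy as np

rng = np.random.default_rng(4)

# When X is nearly constant at a large mean and U is small,
# V = log(1 + U/X) is close to U / mu_X, an affine transform of U,
# so corr(U, V) should approach 1.
n = 100_000
u = rng.normal(0.0, 0.1, n)                  # stand-in for U(t): small values
x = 100.0 * np.exp(rng.normal(0.0, 0.01, n)) # large mean, tiny relative variance
v = np.log1p(u / x)                          # V(t) = log(1 + U/X)
corr = np.corrcoef(u, v)[0, 1]
```

With these settings the sample correlation comes out extremely close to, but not exactly, $1$, matching the approximation argument.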
47,715
Mean square convergence of linear processes
Absolute summability will allow you to show that the sequence (in $n$) $$ X_t^n = \sum_{j=-n}^{n} \psi_jZ_{t-j} $$ has a mean-square limit (we are not talking about almost-sure convergence, here). That is, we want to show that there exists some $X_t$ (we don't know it exists yet, because it's an infinite sum) such that $$ \mathbb{E}[|X_t^n -X_t|^2] \to 0 $$ as $n \to \infty$. Often it is easier to verify that the sequence $X_t^n$ is Cauchy, which is an equivalent condition. This means that $\mathbb{E}[|X_t^n -X_t^m|^2] \to 0$ as $m,n \to \infty$. Here's the proof. For $n > m > 0$ \begin{align*} &\mathbb{E}[|X_t^n -X_t^m|^2] \\ &= \mathbb{E}\left[\left| \sum_{m < |j| \le n} \psi_jZ_{t-j}\right|^2\right] \\ &= \sum_{m < |i| \le n} \sum_{m < |k| \le n} \psi_i \psi_k\mathbb{E}[Z_{t-i}Z_{t-k}] \\ &\le\sum_{m < |i| \le n} \sum_{m < |k| \le n} |\psi_i| |\psi_k| |\mathbb{E}[Z_{t-i}Z_{t-k}]| \tag{triangle ineq.}\\ &\le\sum_{m < |i| \le n} \sum_{m < |k| \le n} |\psi_i| |\psi_k| (\mathbb{E}[Z_{t-i}^2])^{1/2} (\mathbb{E}[Z_{t-k}^2])^{1/2} \tag{Cauchy-Schwarz}\\ &= \text{Var}(Z_t) \left( \sum_{m < |j| \le n} |\psi_j| \right)^2 \tag{stationarity of $Z_t$}\\ &\to 0 \tag{absolute summability}. \end{align*} Only after you know that $X_t$ exists can you show that the order of taking the limit and expectation doesn't matter. Or in other words, you can show that $E[X_t^n] \to EX_t$, $E[|X_t^n|^2] \to E[X_t^2]$, and $E[X_t^nX_s^n] \to E[X_tX_s]$; but existence comes first. Edit: To prove almost-sure (a.s.) convergence, we can use the Borel-Cantelli lemma. Pick $\epsilon > 0$ and call $$ A_n = \{|X_t^n - X_t| > \epsilon \} = \left\{ \left|\sum_{|j|>n} \psi_j Z_{t-j}\right| > \epsilon\right\}. $$ By Chebyshev's inequality and the same reasoning as above, \begin{align*} \sum_{n=1}^{\infty} P(A_n) &\le \epsilon^{-2}\sum_{n=1}^{\infty} E\left[\left|\sum_{|j|>n} \psi_j Z_{t-j}\right|^2\right] \\ &\le \epsilon^{-2}\,\text{Var}(Z_t) \sum_{n=1}^{\infty}\left( \sum_{|j|>n} |\psi_j| \right)^2. \end{align*} Note that absolute summability alone only makes each term of this last series tend to zero; for the series itself to converge we need the tail sums $\sum_{|j|>n}|\psi_j|$ to be square-summable in $n$, which holds under the slightly stronger condition $\sum_j |j|\,|\psi_j| < \infty$. When that holds, $\sum_n P(A_n) < \infty$ and Borel-Cantelli gives $X_t^n \overset{as}{\to} X_t$ for each $t$.
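The quantitative fact driving the Cauchy step is that the tail sums $\sum_{m < |j| \le n} |\psi_j|$ vanish. A small deterministic check (Python; taking $\psi_j = 0.7^{|j|}$, an absolutely summable choice, and treating $Z_t$ as iid with unit variance, so the exact mean square is the sum of $\psi_j^2$ over the tail):

```python
# Deterministic illustration of the Cauchy bound for psi_j = 0.7**|j|,
# an absolutely summable choice; Var(Z_t) is taken to be 1.
psi = lambda j: 0.7 ** abs(j)

def tail_abs_sum(m, n):
    """Sum of |psi_j| over m < |j| <= n (both positive and negative j)."""
    return sum(psi(j) + psi(-j) for j in range(m + 1, n + 1))

def exact_msq(m, n):
    """E|X^n - X^m|^2 when the Z's are iid: cross terms vanish,
    leaving the sum of psi_j^2 over the tail."""
    return sum(psi(j) ** 2 + psi(-j) ** 2 for j in range(m + 1, n + 1))

for m, n in [(0, 10), (10, 20), (50, 100)]:
    # the proof's bound: sum of squares <= (sum of absolute values)^2
    assert exact_msq(m, n) <= tail_abs_sum(m, n) ** 2 + 1e-12

assert tail_abs_sum(50, 100) < 1e-6   # tails vanish: the sequence is Cauchy
```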
47,716
Compare a diagnostic test to gold standard
If you use McNemar's test you are testing whether the table is symmetric: whether more people are diagnosed sick by the new method and well by the old versus well by the new and sick by the old. This is a perfectly reasonable scientific question to have. For a concrete situation suppose the two methods being compared are ratings of mental health problems by a psychiatrist and by a family physician. Since they see a different case mix in their practice you might ask whether this affects their threshold for declaring someone ill. If you use Cohen's kappa you are evaluating whether agreement between the methods is more than would be expected by chance. This again is a perfectly reasonable question to have but it is different. So if you are comparing two methods for diagnosing mild cognitive impairment where there is no gold standard you might treat agreement between methods as justifying the concept of MCI, and if they disagree you might wonder whether it is a useful diagnosis at all. Calculating sensitivity and specificity is the usual method for diagnostic tests and evaluates the performance separately in the two groups: well according to the gold standard and sick according to the gold standard. Again this is a reasonable thing to do but it is different from the other two. In this case you have two separate things which you are interested in and your focus in a practical situation might be on one or the other. For instance if you are screening for a fatal disease you might want a test with high sensitivity since you do not want to miss cases. On the other hand if you are recruiting into a trial you might not mind missing a few but on cost grounds you might want high specificity since you do not want to do the full diagnostic work-up on more people than is absolutely essential.
47,717
Compare a diagnostic test to gold standard
You are asking about agreement, so you should use a test for agreement. With just two diagnostic measures ('raters') that are categorical in nature, the standard test is Cohen's kappa. Here's a version applied to your data, coded in R: tab2 = as.data.frame(tab) library(irr) kappa2(tab2[rep(1:4, times=tab2[,3]),1:2]) # Cohen's Kappa for 2 Raters (Weights: unweighted) # # Subjects = 46 # Raters = 2 # Kappa = 0.363 # # z = 2.52 # p-value = 0.0118 The test is significant, implying that there is greater agreement than you would expect by chance alone. You don't have to stop there. You could measure the percent agreeing, for example: $$ \text{percent agreeing} = \frac{7+27}{7+4+8+27} = 73.9\% $$ Sensitivity and specificity (or the positive and negative predictive values) constitute a similar kind of information, but decomposed and at a greater level of detail, which may be more useful but is also more complex. You could also test to see if the new test is biased relative to the gold standard. Specifically, your test called only 11 people sick, whereas the gold standard noted that 15 were. Is the new test saying 'sick' less often than it should? That's what McNemar's test would do for you here. mcnemar.test(tab) # # McNemar's Chi-squared test with continuity correction # # data: tab # McNemar's chi-squared = 0.75, df = 1, p-value = 0.3865 There is insufficient evidence in your dataset to determine that the test is biased relative to the gold standard.
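For anyone without R handy, the same two numbers can be reproduced by hand (a Python sketch; the cell counts $a=7$, $b=4$, $c=8$, $d=27$ are read off the margins quoted above):

```python
# Recomputing Cohen's kappa and McNemar's statistic by hand from the 2x2
# table implied above: a = both say sick, b = new sick / gold well,
# c = new well / gold sick, d = both say well.
a, b, c, d = 7, 4, 8, 27
n = a + b + c + d                                     # 46 subjects

po = (a + d) / n                                      # observed agreement
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # agreement expected by chance
kappa = (po - pe) / (1 - pe)
assert abs(kappa - 0.363) < 0.001                     # matches the R output above

# McNemar with continuity correction uses only the discordant cells b and c.
chi2 = (abs(b - c) - 1) ** 2 / (b + c)
assert chi2 == 0.75                                   # matches the R output above
```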
47,718
Unbiased Estimation of $\mu^2$ under certain conditions
Only the third question remains to be answered, the case where $X$ has infinite variance. When $n \gt 1,$ you can split the data into two smaller nonoverlapping (and therefore independent) samples, estimate $\mu$ separately in each subsample, and multiply the estimates. The independence assures the expectation of that product is the product of the expectations, so if each one of the estimates of $\mu$ is unbiased, so is your product estimate. The simplest form of this idea is to let $X_i$ be one subsample of size $1$ and $X_j$ (for $j\ne i$) a different subsample of size $1.$ Using the estimator of $\mu$ in the question (for the case $n=1$) gives the estimator $$t_{ij}(\mathbf{X}) = \left(\frac{1}{1} X_i\right) \left(\frac{1}{1} X_j\right) = X_iX_j.$$ Clearly $t_{ij}$ is unbiased because $$\mathbb{E}(t_{ij}(\mathbf{X})) = \mathbb{E}(X_iX_j) = \mathbb{E}(X_i)\mathbb{E}(X_j) = \mu^2.$$ We can go further. Intuitively, this approach ignores a lot of information available in the sample. The theory of U statistics is based on generating all possible estimates $t_{ij}, 1\le i \lt j \le n,$ and averaging them: $$U(\mathbf{X}) = \frac{1}{\binom{n}{2}}\sum_{1 \le i \lt j \le n} t_{ij}(\mathbf X) = \frac{1}{\binom{n}{2}}\sum_{1 \le i \lt j \le n} X_iX_j.$$ (The linearity of expectation shows this average remains unbiased.) Computations of variances show that when the underlying variance is finite, the "U statistic" has smaller variance than any individual $t_{ij}.$ (You might enjoy carrying out this approach for parts (1) and (2) of the question, because it leads directly and easily to the solutions given.) It would seem, however, that all bets are off when the underlying variance is infinite. Indeed, $t_{ij}$ may tend to vary less than the U statistic. Simulations with power-law tails suggest the U-statistic approach still has merit. 
The estimates produced by any individual $t_{ij}$ tend to be less extreme than the U statistic, because they have less of a chance of sampling the occasional whopping big outlier that such distributions produce. Consequently, there's potentially a high risk in any given application that a $t_{ij}$ will grossly underestimate $\mu^2.$
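A small computational footnote (Python, hypothetical data): the double sum defining $U$ need not be computed pairwise, since $\sum_{i<j}X_iX_j = \big((\sum_i X_i)^2 - \sum_i X_i^2\big)/2$, which gives an $O(n)$ formula:

```python
import random

# Averaging all pairwise products X_i X_j (i < j) -- the U statistic above --
# equals (S^2 - sum X_i^2) / (n (n - 1)) with S = sum X_i.
random.seed(1)
x = [random.gauss(0, 1) for _ in range(50)]   # hypothetical data
n = len(x)

U_direct = sum(x[i] * x[j] for i in range(n) for j in range(i + 1, n)) / (n * (n - 1) / 2)
S = sum(x)
U_fast = (S * S - sum(v * v for v in x)) / (n * (n - 1))
assert abs(U_direct - U_fast) < 1e-9
```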
47,719
Randomly choose between numbers that yields a specific amount of binary 1's
The algorithm to use depends on (a) the capabilities of your software platform; (b) how many such random draws you need; (c) how large the number of digits $n$ is; and (d) how large the number of possible results $\binom{n}{k}$ (where $k$ is the number of ones) is. Most statistical work is done with 32- or 64-bit signed integers and/or double-precision IEEE floating point numbers, which I will assume for (a). Here is a set of solutions illustrated with working R code. To be specific, they all draw uniformly, independently, and randomly from the set $\mathcal{B}(n,k)$ of integers which, when represented in binary, have up to $n$ digits of which exactly $k$ are ones. You need a single random integer. Take a sample $i_1, i_2, \ldots, i_k$ without replacement from the set of places $0,1,\ldots, n-1$ and return $2^{i_1} + 2^{i_2} + \cdots + 2^{i_k}.$ rchoose <- function(n, k) sum(2^sample(0:(n-1), k)) This algorithm has $O(n)$ time and storage requirements. You need a large number $N$ of random integers where $n$ and $k$ are small. "Small" means both (1) your system accurately represents all integers through $2^{n}-1$ and (2) you have enough speed and RAM to compute and store all the elements of $\mathcal{B}(n,k).$ The solution is to generate an array representing all elements of $\mathcal{B}(n,k)$ and then (rapidly) draw randomly from this array: rchoose.many <- function(N, n, k) { b <- colSums(2^combn(0:(n-1), k)) sample(b, N, replace=TRUE) } This algorithm requires $O(n \binom{n}{k})$ time to initialize plus $O(N)$ additional time to run. Its storage requirements are $O(n \binom{n}{k})$ (but could be reduced to $O(\binom{n}{k})$ by accumulating the values in a loop during initialization). You need a large number of random integers where $n$ and $k$ are not small. You're still limited by the need to represent $n$-digit binary integers in your system. 
About the best you can do is to loop $N$ times over the single-draw solution (1): rchoose.many.large <- function(N, n, k) { replicate(N, rchoose(n, k)) } This takes $O(Nn)$ time and $O(n)$ storage. Comparing the asymptotic requirements provides a criterion for selecting the appropriate solution in any situation. Examples These timings (on one modest workstation) provide some indication of the possibilities. system.time(x <- rchoose.many(1, 33, 7)) # 8 sec. system.time(x <- rchoose.many(1e5, 33, 7)) # 8 sec. system.time(x <- rchoose.many(1, 33, 16)) # (Not run: would take about 35 min.) system.time(x <- rchoose.many.large(1e5, 33, 7)) # 0.8 sec. system.time(x <- rchoose.many.large(1e5, 33, 16)) # 0.9 sec. Here's a histogram of one largish sample (of a million draws) showing the distribution for $n=10,k=4.$ The bins have to be wider than $1$, for otherwise the counts will be either $0$ or close to a constant (because the distribution is uniform on $\mathcal{B}(10,4)$). I chose a width of $4:$ library(ggplot2) ggplot(data.frame(x=rchoose.many(1e6, 10, 4)), aes(x)) + geom_histogram(binwidth=4)
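For comparison, the single-draw algorithm (1) translates directly to Python, where arbitrary-precision integers remove the $n$-digit representation limit; the defining property that every draw has exactly $k$ ones is easy to check (a sketch mirroring the R `rchoose` above):

```python
import random

def rchoose(n, k, rng=random):
    """Draw one integer < 2**n whose binary expansion has exactly k ones,
    uniformly over all C(n, k) such integers (Python analogue of the R code:
    sample k bit positions without replacement and set those bits)."""
    return sum(1 << i for i in rng.sample(range(n), k))

random.seed(0)
for _ in range(1000):
    x = rchoose(33, 7)
    assert 0 <= x < 2 ** 33
    assert bin(x).count("1") == 7   # always exactly k ones
```

Uniformity follows because `random.sample` chooses each $k$-subset of bit positions with equal probability.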
47,720
Is $\theta$ a location or a scale parameter in the $\mathcal N(\theta,\theta)$ and $\mathcal N(\theta,\theta^2)$ densities?
Since, when $X\sim{\cal N}(\theta,\theta^2)$,$$Z=\dfrac{X-\theta}{\theta}=\dfrac{X}{\theta}-1\sim{\cal N}(0,1)$$and assuming $\theta\ne 0$, since $\theta=0$ is a special case that results in a Dirac mass at zero, the parameter $\theta$ is a scale parameter as $$X=\theta(Z+1)$$ is the scaled version of $Z+1$ that has a fixed distribution. (Note that applying $\theta=0$ to the above results in the correct Dirac mass at zero.) When $X\sim{\cal N}(\theta,\theta)$, with $\theta>0$, $$Z=\dfrac{X-\theta}{\theta^{1/2}}=\dfrac{X}{\theta^{1/2}}-\theta^{1/2}\sim{\cal N}(0,1)$$the parameter $\theta$ is neither scale nor location. (Again, applying $\theta=0$ to the above results in the correct Dirac mass at zero.)
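A numerical illustration of the scale-parameter claim (Python; the values of $\theta$ are arbitrary): writing $X = \theta(Z+1)$, dividing by $\theta$ removes the parameter entirely, so $X/\theta \sim \mathcal N(1,1)$ whatever $\theta > 0$ is:

```python
import random
import statistics

# Check that X = theta * (Z + 1) gives X ~ N(theta, theta^2), and that
# X / theta has the same N(1, 1) distribution for every theta > 0.
random.seed(42)
Z = [random.gauss(0, 1) for _ in range(100_000)]

for theta in (0.5, 3.0):                  # arbitrary illustrative values
    X = [theta * (z + 1) for z in Z]      # scaled version of Z + 1
    W = [x / theta for x in X]            # rescaling removes theta entirely
    assert abs(statistics.fmean(W) - 1.0) < 0.02
    assert abs(statistics.pstdev(W) - 1.0) < 0.02
```

No analogous rescaling works for the $\mathcal N(\theta,\theta)$ case, which is the point of the second part of the answer.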
47,721
How is a ROCAUC=1.0 possible with imperfect accuracy? [duplicate]
ROC AUC and the $c$-statistic are equivalent, and measure the probability that a randomly-chosen positive sample is ranked higher than a randomly-chosen negative sample. If all positives have score 0.49 and all negatives have score 0.48, then the ROC AUC is 1.0 because of this property. This can lead to counter-intuitive results. In this hypothetical, the accuracy, using the rule of a 0.5 cutoff, is 0.0 because all of the predictions are below 0.5! In your data, there's a sample with prediction 0.4 but label 1.0; this is the sample with the lowest score and label 1.0. This sample, and the sample with label 1.0 and score 0.5, are decreasing your accuracy. But the highest score for the samples with label 0.0 is 0.1, so we know that we are dealing with the case of a perfect ROC AUC because 0.1 is less than 0.4. I've found this book to be a good resource for information about ROC curves: Wojtek J. Krzanowski & David J. Hand, ROC Curves for Continuous Data.
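Here is a small worked example (Python, with hypothetical scores consistent with the description above: every positive scores at least 0.4, every negative at most 0.1). Computing the AUC directly as the probability that a random positive outranks a random negative makes the point explicit:

```python
# Hypothetical scores: every positive >= 0.4, every negative <= 0.1,
# so the ranking is perfect even though the 0.5 cutoff misclassifies.
scores = [0.05, 0.1, 0.1, 0.4, 0.5, 0.8, 0.9]
labels = [0, 0, 0, 1, 1, 1, 1]

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]

# AUC = P(random positive ranked above random negative); ties count 1/2.
auc = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg) / (len(pos) * len(neg))
assert auc == 1.0                      # ranking is perfect

# Accuracy with a strict 0.5 cutoff (predict sick iff score > 0.5): the
# positives scoring 0.4 and 0.5 are misclassified, as described above.
preds = [int(s > 0.5) for s in scores]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
assert accuracy == 5 / 7               # imperfect despite AUC = 1
```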
47,722
Why do planned comparisons and post-hoc tests differ?
They aren't really the same. A planned comparison is something you are committing to before you see your data, and will run no matter what the results look like. A post-hoc comparison is more opportunistic. You look at that because, when you looked at the data, that particular comparison looked interesting. The idea here is that there will always be something that looked [most] interesting, so you need to account for that opportunism. The difference between these two approaches for the same contrast will depend on a few issues, notably how many possible contrasts there are. A Tukey test gets classified as 'post-hoc' whether it is really the original intention or not because it looks at all possible pairwise contrasts. A way to think about this is that people could use 'I'll compare everything' as a get out of jail free card. You just claim that you want to test everything under the sun, and then you can say that it was all a-priori. But by virtue of comparing everything, it is equivalent to having seen your data first. The test naturally accounts for that, and the result is equivalent to a post-hoc result. Your contrasts are clearly a-priori, and appear to be orthogonal. I think it is appropriate for you to go with the top set.
47,723
How to understand SE of regression slope equation
The intuitive understanding is indeed as you suggest in the comment. If you think about the value of the slope for the regression as something that will change every time you draw a new sample (which it does), then the standard deviation of the resulting sampling distribution is the standard error for that parameter. So, if you imagine collecting 1,000 different samples (from the same population), then calculate the slope parameter for those different samples, you will have 1,000 slope estimates, and the standard deviation of those values will be very close to the calculated standard error produced by this formula. Now, if the question is about the actual elements comprising the formula, this is much more challenging to explain "intuitively". I'll hold off from attempting that just yet, as the first part may have answered your question. Update #1 The variability of the estimate for the slope parameter can be associated with the spread of the points about the population regression line. The more spread out the variability of these points about the line, the more "wiggle" you will see in the estimates that you might obtain. (I'd love to generate a graphic for this, alas...no time.) 
So, to interpret the parts of the equation for the standard error for this parameter, we should rewrite the formula from the original post $$s(b_1) = \sqrt{\frac{1}{n-2}\cdot\frac{\sum{(y_i-\hat{y}_i)^2}}{\sum{(x_i-\bar{x})^2}}}$$ as $$\begin{align}s(b_1) & = \sqrt{\frac{1}{n-2}\cdot\frac{\sum{(y_i-\bar{y})^2}}{\sum{(x_i-\bar{x})^2}}\cdot\frac{\sum{(y_i-\hat{y}_i)^2}}{\sum{(y_i-\bar{y})^2}}} \\ & = \sqrt{\frac{1}{n-2}}\sqrt{\frac{\frac{1}{n-1}\sum{(y_i-\bar{y})^2}}{\frac{1}{n-1}\sum{(x_i-\bar{x})^2}}}\sqrt{\frac{SS_\text{error}}{SS_\text{total}}} \\ & = \sqrt{\frac{1}{n-2}}\sqrt{\frac{sd(y)^2}{sd(x)^2}}\sqrt{\frac{SS_\text{total}-SS_\text{model}}{SS_\text{total}}} \\ & = \sqrt{\frac{1}{n-2}}\frac{sd(y)}{sd(x)}\sqrt{1-r^2} \end{align}$$ If we are willing to ignore the initial sample-size-related scaling factor as a bias-adjustment, the next factor in the expression is a scaling factor along each of the dimensions. Imagine all the possible diagonal lines that might fit into a box. If you change the height or width of the box, it will change the amount of variability you might observe. The last factor is the measure of the spread of points about the line. The more spread, the more variability in the possible diagonal lines that might be observed (and thus, more variability in the slope). The less spread of points about the line, the less variability in the slope, and the less variability in the regression parameter.
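The rewritten identity can be checked numerically; this is my own sketch with simulated data, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.normal(size=n)
y = 2.0 + 0.7 * x + rng.normal(scale=1.5, size=n)

b1, b0 = np.polyfit(x, y, 1)   # OLS slope and intercept
yhat = b0 + b1 * x

# textbook form of s(b1): residual variance over the spread of x
se_raw = np.sqrt(np.sum((y - yhat) ** 2)
                 / ((n - 2) * np.sum((x - x.mean()) ** 2)))

# rewritten form: sqrt(1/(n-2)) * sd(y)/sd(x) * sqrt(1 - r^2)
r = np.corrcoef(x, y)[0, 1]
se_rewritten = (np.sqrt(1 / (n - 2))
                * (y.std(ddof=1) / x.std(ddof=1))
                * np.sqrt(1 - r ** 2))
```

The two expressions agree exactly (up to floating point), because $SS_\text{error} = SS_\text{total}(1 - r^2)$ in simple regression and the $1/(n-1)$ factors cancel.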
47,724
A question on probability involving Binomial distribution
Your intuition is correct. Algebraic demonstration of that fact can proceed as follows: $$\begin{equation} \begin{aligned} \mathbb{P}(X = i) &= \sum_{j=i}^n {j \choose i} s^i (1-s)^{j-i} {n \choose j} p^j (1-p)^{n-j} \\[8pt] &= \sum_{j=i}^n \frac{j!}{i! (j-i)!} \frac{n!}{j! (n-j)!} s^i (1-s)^{j-i} p^j (1-p)^{n-j} \\[8pt] &= \sum_{j=i}^n \frac{n!}{i! (j-i)! (n-j)!} s^i (1-s)^{j-i} p^j (1-p)^{n-j} \\[8pt] &= \sum_{r=0}^{n-i} \frac{n!}{i! r! (n-i-r)!} s^i (1-s)^r p^{r+i} (1-p)^{n-i-r} \\[8pt] &= \frac{n!}{i! (n-i)!} (ps)^i (1-p)^{n-i} \sum_{r=0}^{n-i} \frac{(n-i)!}{r! (n-i-r)!} (1-s)^r p^r (1-p)^{-r} \\[8pt] &= {n \choose i} (ps)^i (1-p)^{n-i} \sum_{r=0}^{n-i} {n-i \choose r} \bigg( \frac{(1-s) p}{1-p} \bigg)^r \\[8pt] &= {n \choose i} (ps)^i (1-p)^{n-i} \bigg( 1 + \frac{(1-s) p}{1-p} \bigg)^{n-i} \\[8pt] &= {n \choose i} (ps)^i (1-p)^{n-i} \bigg( \frac{1-p + p-ps}{1-p} \bigg)^{n-i} \\[8pt] &= {n \choose i} (ps)^i (1-p)^{n-i} \bigg( \frac{1-ps}{1-p} \bigg)^{n-i} \\[8pt] &= {n \choose i} (ps)^i (1-ps)^{n-i} \\[8pt] &= \text{Bin}(i | n, ps). \\[8pt] \end{aligned} \end{equation}$$ (Note that the seventh step, removing the summation, is an application of the binomial theorem.)
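The identity can also be verified numerically for a small $n$; here is a sketch (mine) using scipy's binomial pmf to compare the compound distribution with $\text{Bin}(n, ps)$:

```python
from scipy.stats import binom

# arbitrary illustrative parameters
n, p, s = 12, 0.6, 0.35

def compound_pmf(i):
    # P(X = i) as the mixture: j successes from Bin(n, p),
    # then i of those j survive an independent thinning with probability s
    return sum(binom.pmf(j, n, p) * binom.pmf(i, j, s)
               for j in range(i, n + 1))

# largest discrepancy against the claimed closed form Bin(i | n, p*s)
max_err = max(abs(compound_pmf(i) - binom.pmf(i, n, p * s))
              for i in range(n + 1))
```

The maximum discrepancy is at floating-point level, as the algebra above predicts.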
47,725
An example of a bivariate pdf, where marginals are triangular distributions
A nice way to do this is to use copulae. In your case: let $X \sim \text{Triangular}(0,1)$ with pdf $f(x)$ and parameter $b$, and let $Y \sim \text{Triangular}(0,1)$ with pdf $g(y)$ and parameter $c$: with cdf's $F(x)$ and $G(y)$: ... where I am using the Prob function (from the mathStatica package for Mathematica) to automate the nitty-gritties of the cdf calculation. Then, define a copula function, which is a function of the two cdf's $F$ and $G$ that creates a bivariate joint distribution function (cdf) from $F$ and $G$, such that the marginal pdf's of $X$ and $Y$ are still $f$ and $g$ respectively. Here, I use a Morgenstern copula with parameter $\alpha$ that induces correlation (there are many, many other copula functions available): Let $h(x,y)$ denote the bivariate Triangular joint pdf obtained via a Morgenstern copula. Here we differentiate the Copula function (joint cdf) to derive the joint pdf $h(x,y)$: The following diagram plots the joint pdf $h(x,y)$ when $b = \frac12$, $c = \frac34$ and $\alpha = 0$ (independent): Here is the same plot when $\alpha = -1$:
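For readers without Mathematica, the same construction can be sketched in Python. The Morgenstern (FGM) copula density is $c(u,v) = 1 + \alpha(1-2u)(1-2v)$, so the joint pdf is $h(x,y) = f(x)\,g(y)\,c(F(x), G(y))$; this sketch (mine, using standard formulas rather than the answer's mathStatica code) checks that marginalising $h$ over $y$ recovers the triangular marginal $f$:

```python
from scipy.integrate import quad
from scipy.stats import triang

b, c_mode, alpha = 0.5, 0.75, -1.0   # marginal modes and FGM dependence parameter

fx = triang(c=b)       # Triangular on (0, 1) with mode b (scipy's `c` is the mode)
gy = triang(c=c_mode)  # Triangular on (0, 1) with mode c

def h(x, y):
    # joint pdf via the Morgenstern copula density c(u, v) = 1 + alpha(1-2u)(1-2v)
    u, v = fx.cdf(x), gy.cdf(y)
    return fx.pdf(x) * gy.pdf(y) * (1.0 + alpha * (1 - 2 * u) * (1 - 2 * v))

# integrating y out of h(x, y) should return the triangular marginal f(x)
x0 = 0.3
marginal, _ = quad(lambda y: h(x0, y), 0, 1)
```

The $y$-dependent part integrates to zero by construction (substitute $v = G(y)$), so the marginal is exactly $f(x_0)$ for any $\alpha$, which is the defining property of a copula.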
47,726
Why is it valid to use CV to set parameters and hyperparameters but not seeds?
Provided you do your cross-validation properly (i.e. cross validate the whole procedure) I don't think it's "wrong", I think the most likely result is just that it won't be helpful. Doing this CV properly though means you have to be careful to include these decisions in each fold. It's common to see people forgetting to include modeling decisions in the cross validation and thereby biasing their assessments. I'm going to use the term procedure rather than model for this combination process just to not confuse the two. To that end, consider the evaluation of two modeling procedures: (1) within each fold of our CV we pick a random seed and fit our random forest; (2) within each fold of our CV we fit an RF for seed $s = 1, \dots, 100$ and take the best one. In this case the first procedure is the combination of picking a seed then fitting the RF. The second procedure is the combination of fitting 100 RFs and picking the best one. In both cases we're properly CV-ing the model so we'll get a fair assessment of it. I think it's fine to compare these so long as you think there is a possibility that different seeds might help. If you really believe there's no way this could help then you're only opening yourself up to type I errors by doing this. But there is a precedent for this sort of thing. In nonconvex optimization you generally only get a local optimum so it's common to do many random restarts and then pick the best of the collection of resulting local optima; see e.g. this paper How many random restarts are enough?: the authors aren't questioning if any random restarts should be done but rather how many. So in our case if we think there is a possible benefit to it then we can include it as a hyperparameter, but again we need to be explicit about how it's now part of the modeling procedure and so it needs to be cross validated. I bet if you do this you'll find that, provided you're using enough trees, that the two will have indistinguishable CV values. 
In that case you ought to take the simpler model, which will be the one without the seed optimizing. But that's fine: you've compared two models correctly via cross validation and you picked the better one. We do that every day. If anyone disagrees please let me know!
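One concrete (hypothetical) way to encode "seed as a hyperparameter tuned inside each fold" with scikit-learn is to nest a grid search over `random_state` inside the outer CV; the dataset, forest sizes, and seed grid below are placeholder choices, not from the answer:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
outer = KFold(n_splits=5, shuffle=True, random_state=1)

# procedure 1: a single arbitrary seed, cross-validated as usual
proc1 = RandomForestClassifier(n_estimators=50, random_state=0)
score1 = cross_val_score(proc1, X, y, cv=outer).mean()

# procedure 2: pick the best of several seeds *inside* each outer fold,
# so the seed search is part of the procedure being evaluated
proc2 = GridSearchCV(RandomForestClassifier(n_estimators=50),
                     param_grid={"random_state": [0, 1, 2, 3, 4]}, cv=3)
score2 = cross_val_score(proc2, X, y, cv=outer).mean()
```

Because the seed selection happens inside each outer fold, comparing `score1` and `score2` is a fair comparison of the two procedures, which is exactly the setup described above.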
47,727
How is the minimum $\lambda$ computed in group LASSO?
If a change of $\beta$ in any direction will not decrease the cost/objective function then you have found yourself in, at least, a local minimum. The calculations below will show for which $\lambda$ the solution/point $\beta = 0$ stops being a minimum. Consider the effect of 'a change of $\beta^{(l)}$ by an infinitesimal distance $\partial l$' on the 'change of 1) the error/residual term and 2) the penalty term'. $$\underbrace{ \frac{1}{2} \left\lVert\vec{y}-\sum_{l=1}^mX^{(l)}\vec{\beta^{(l)}}\right\rVert_2^2}_{\text{RSS term}} + \underbrace { \lambda\sum_{l=1}^m\sqrt{p_l}\left\lVert\vec{\beta^{(l)}}\right\rVert_2}_{\text{ penalty term}}$$ The penalty term will change by: $$\partial \left( \lambda \sqrt{p_l} \lVert\vec{\beta^{(l)}}\rVert_2 \right) = \left( \lambda \sqrt{p_l} \right) \partial l$$ (independent of the direction of change) The error term will change by (at most): $$\partial \frac{1}{2} RSS = \left( \lVert {X^{(l)}}^Ty \rVert_2 \right) \partial l $$ where, if all beta are zero, this term ${X^{(l)}}^Ty$ is the gradient of the RSS term. This gradient is the direction in which the directional derivative (the rate of decrease) is greatest, and its value is $\lVert {X^{(l)}}^Ty \rVert_2$. So per group the ratio of the change in the error term to the change in the penalty term (which needs to be greater than 1, or otherwise the overall cost term does not decrease) will be: $$ \frac{ \lVert {X^{(l)}}^Ty \rVert_2 } { \lambda \sqrt{p_l} } > 1$$ Thus $$ \lambda_{min} = \underset{l}{max} \left(\frac{ \lVert {X^{(l)}}^Ty \rVert_2 } { \sqrt{p_l} } \right) $$ which becomes $\lambda=\left\lVert X^Ty\right\rVert_\infty$ if all group sizes $p_l$ are equal to one. Note that, initially, a change of $\beta^{(l)}$ in two groups together is not beneficial. You could always improve the solution by shifting more weight to the group with a higher ratio for $\frac{ \lVert {X^{(l)}}^Ty \rVert_2 } { \lambda \sqrt{p_l} }$. This is most easily/intuitively seen in a geometrical viewpoint. 
The shape of the iso-surface for the penalty is a polytope, which makes contact with the iso-surface for the error, which is an ellipsoid. They will initially make contact at a point of the polytope (see for instance the graphical views in this answer or this answer). Instead of $\lambda_{min}$ it might be better to speak of the supremum (the smallest upper bound). We have, for all $\lambda$ with a non-zero $\beta$, that $\lambda < \lambda_{min}$: $$ \lambda < \underset{l}{max} \left(\frac{ \lVert {X^{(l)}}^Ty \rVert_2 } { \sqrt{p_l} } \right) $$
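For singleton groups the bound can be checked against scikit-learn's plain lasso. Note that sklearn's `Lasso` scales the RSS by $1/(2n)$, so the threshold in its parameterisation is $\lVert X^Ty\rVert_\infty / n$; this check is mine, not part of the answer:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# all group sizes p_l = 1: the bound reduces to the sup-norm of X^T y
lam = np.max(np.abs(X.T @ y))
alpha_max = lam / n   # sklearn's Lasso objective divides the RSS by 2n

# just above the threshold every coefficient is zero; just below, some are not
coef_above = Lasso(alpha=1.01 * alpha_max, fit_intercept=False).fit(X, y).coef_
coef_below = Lasso(alpha=0.90 * alpha_max, fit_intercept=False).fit(X, y).coef_
```

`fit_intercept=False` matters here: with centring, the threshold would be computed from the centred data instead.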
47,728
Mix of text and numeric data
You have two main options here: As you said, create some numeric features out of the text description and merge it with the rest of the numeric data. The features created out of the text description can be either the document-term matrix (with tf-idf or not), can be SVD components or even averaged word-vectors (look for word2vec etc). You can build two separate classifiers (one using text data only and one using numeric only) and then combine their output using some meta-modelling.
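The first option can be sketched with scikit-learn's `ColumnTransformer`, which concatenates tf-idf features from the text column with the numeric columns before a single classifier (the toy data below is illustrative, not from the answer):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

df = pd.DataFrame({
    "description": ["cheap red shoes", "luxury leather bag",
                    "red bag on sale", "handmade leather shoes"],
    "price": [10.0, 250.0, 40.0, 90.0],
    "label": [0, 1, 0, 1],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "description"),   # text -> sparse tf-idf columns
    ("numeric", "passthrough", ["price"]),        # numeric columns kept as-is
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(df[["description", "price"]], df["label"])
preds = model.predict(df[["description", "price"]])
```

Swapping `TfidfVectorizer` for an SVD pipeline or averaged word vectors changes only the `"text"` step; the merge with the numeric data stays the same.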
47,729
Mix of text and numeric data
I think there is a more satisfying solution than what has been suggested already, one that creates a single model that properly deals with the two kinds of input data and their relationship to the output class. Use a sequence model like an RNN to convert text into a kind of embedding. That embedding output is used directly as input to a dense layer that also takes the non-text data as input. The benefit of putting this into one model is you can merely rely on backpropagation to learn the right level of dependency of the output class on the two kinds of inputs, as well as let it train the RNN jointly with the final classifier. No need to add the complexity of an ensemble. For details, here is a good tutorial: http://digital-thinking.de/deep-learning-combining-numerical-and-text-features-in-deep-neural-networks/
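A minimal PyTorch sketch of that architecture (the linked tutorial uses Keras; all layer sizes, vocabulary size, and feature counts here are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TextAndNumeric(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden=64,
                 n_numeric=3, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden + n_numeric, 64),
                                  nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, token_ids, numeric):
        # h: (num_layers, batch, hidden); h[-1] is the text "embedding"
        _, h = self.rnn(self.embed(token_ids))
        # concatenate the learned text representation with the numeric features
        joint = torch.cat([h[-1], numeric], dim=1)
        return self.head(joint)

model = TextAndNumeric()
logits = model(torch.randint(0, 1000, (8, 20)), torch.randn(8, 3))
```

Because both branches feed one loss, backpropagation trains the RNN jointly with the dense head, which is the benefit the answer describes.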
47,730
Integrate out missing variables in Gaussian Processing?
The linked question is discussing data imputation for the purposes of building a predictive model. What I believe the accepted answer is referring to is using a Gaussian Process as a model for the missing data, conditional on the observed data. "Integrating out" these missing variables then means marginalising the predictions of the resulting predictive model (the model built using these "imputed" data points) over the distribution of possible values the missing points could take. I am not confident enough to give you a rigorous mathematical explanation, but I will attempt a verbose, intuitive example since this has been open for a while, and perhaps someone else can expand on it or correct me if I also misunderstand. Suppose we wish to build a predictive model of some function $f(x,y,z)$, but the process by which we collect data on variable $z$ is messy. So sometimes we will only have access to $x$ and $y$. In order to build our predictive model, we require $z$. One approach is to attempt to "impute" $z$, given its previous observations and any covariance structure we believe might be present between $x$, $y$ and $z$: this is where we can use a Gaussian Process. In this case, we suppose a Gaussian Process models $z$ as a function of $x$ and $y$. To make things a bit easier to read, assume we concatenate $x$ and $y$ into a vector, $r$ (so $r:=[x, y]$). We consequently end up with a set of different $r$ corresponding to all the data points we have collected. For some of these datapoints, there will not be a corresponding measurement $z$ (as it was corrupted, is missing, or otherwise unavailable). If we refer to these points ($x$ and $y$ measurements missing a particular $z$) as $r^*$, and the missing values as $z^*$, our task is to compute $z^*$ at $r^*$ conditional on $r$ and $z$. 
Using a Gaussian Process we can do this as follows: Let $k$ be some appropriate covariance kernel and $\theta$ its hyperparameters, and let $\mathbf{z}$ be the stacked vector of observed $z$ (so those $z$ measurements we do have). Additionally, I will use the somewhat abusive notation of $k(a,b)$ to represent the covariance matrix obtained by evaluating $k$ pairwise between the elements of $a$ and $b$. Note: If you're unfamiliar with how to pick a kernel or how to identify the hyperparameters, I suggest reading some of the introductory material on Gaussian Processes - I won't cover it here in the interest of brevity. I personally like this three part (1,2,3) series by Michael Betancourt. The predictive distribution of $z^*$ is a multivariate normal distribution conditional on the currently observed values $z$, the points at which they are observed, $r$, the kernel $k$, and the kernel hyperparameters $\theta$: $p(z^* | r, z, k, \theta) \sim \mathrm{MultiNormal} (\mu_p, \Sigma_p)$ where the "predictive" (or conditional) mean distribution $\mu_p$ is given by: $\mu_p = k(r^*,r) [k(r, r)]^{-1} \mathbf{z}$ And the predictive covariance matrix $\Sigma$ is: $\Sigma = k(r^*, r^*) - k(r^*, r) [k(r, r)]^{-1} k(r,r^*)$ Given this model for $p(z^* | r, z, k, \theta)$, we are able to assess the probability of $z^*$ given the observed data and the assumptions we used to select $k$. Now, suppose we want to use these imputed values of $z^*$ in a predictive model for some other quantity $f(x,y,z)$: It is here we would "integrate out" missing variables. Since our predictions of $z^*$ are a distribution (a normal distribution, $p(z^*)$, given we have used a Gaussian Process as a model for $z$), we probably want to capture uncertainty in the fact we do not know $z$ at these locations exactly. Very simply, if we want to make a prediction of $f(x,y,z)$ using $x, y$ and $p(z^*)$, we should account for all the values $z^*$ could be when making this prediction. 
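The two formulas translate directly into NumPy; this sketch (mine) uses a squared-exponential kernel with a fixed length-scale as the stand-in for $k$ and $\theta$:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # squared-exponential kernel evaluated pairwise between rows of a and b
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * sq_dists / ell ** 2)

rng = np.random.default_rng(0)
r = rng.uniform(-2, 2, size=(15, 2))            # observed (x, y) locations
z = np.sin(r[:, 0]) + 0.5 * np.cos(r[:, 1])     # observed z at those locations
r_star = rng.uniform(-2, 2, size=(4, 2))        # locations where z is missing

K = rbf(r, r) + 1e-8 * np.eye(len(r))           # jitter for numerical stability
K_star = rbf(r_star, r)
mu_p = K_star @ np.linalg.solve(K, z)           # predictive mean at r_star
Sigma = rbf(r_star, r_star) - K_star @ np.linalg.solve(K, K_star.T)  # predictive cov
```

`mu_p` and `Sigma` are exactly the $\mu_p$ and $\Sigma$ above; the diagonal of `Sigma` quantifies how uncertain each imputed $z^*$ is.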
This amounts to approaching the following integral: $p(f(x,y,z^*)) = \displaystyle\int f(x,y,z^*) p(z^*) dz^*$ This is usually approached numerically, since $f$ rarely permits an analytical treatment (but it might). Typical approaches are quadrature or Monte-Carlo simulation. The important point is that predictions made using $z^*$ are done so over the distribution of possible $z^*$. This implies these predictions are themselves a distribution, and should be assessed as such to evaluate predictive performance in a robust way.
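A toy Monte-Carlo version of that integral, for a single missing $z^*$ with a Gaussian posterior; the downstream $f$ here is my stand-in, chosen so the exact answer is known, not anything from the original question:

```python
import numpy as np

rng = np.random.default_rng(1)

mu_p, sigma_p = 0.3, 0.2   # GP posterior mean and sd for one missing z*
x, y = 1.0, -0.5           # the observed inputs for this prediction

def f(x, y, z):
    return x + y * z + z ** 2   # illustrative downstream model

draws = rng.normal(mu_p, sigma_p, size=200_000)  # samples of z* from p(z*)
pred_dist = f(x, y, draws)     # a *distribution* of predictions, not one number
mc_mean = pred_dist.mean()

# exact value for this quadratic f: x + y*mu + (mu^2 + sigma^2)
exact = x + y * mu_p + (mu_p ** 2 + sigma_p ** 2)
```

Reporting `pred_dist` (or at least its mean and spread) rather than a single plug-in prediction is what "integrating out" the missing variable buys you.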
Regularization on weights without bias
Here's my understanding of this quote. This is sort of a hand-wavy argument, but still gives some intuition. Let's consider a simple linear layer: $$y = Wx + b$$ ... or equivalently: $$y_i = x_{1}W_{i,1} + ... + x_{n}W_{i,n} + b_i$$ If we focus on one weight $W_{i,j}$, its value is determined by observing two variables $(x_j, y_i)$. If the training data has $N$ rows, there are only $N$ pairs $(x_j, y_i)$ from which $W_{i,j}$ can learn the correct value. That is a lot of flexibility, which the authors summarize in this phrase: Fitting the weight well requires observing both variables in a variety of conditions. In other words, the number of training rows $N$ must be really big in order to capture the correct slope without regularization. On the other hand, $b_i$ affects just $y_i$, which basically means its value can be estimated more reliably from the same number of examples $N$. The authors put it this way: This means that we do not induce too much variance by leaving the biases unregularized. In the end, we'd like to regularize the weights that have "more freedom", which is why regularizing $W_{i,j}$ makes more sense than regularizing $b_i$.
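A minimal sketch of the idea, assuming a plain least-squares linear model and a hand-rolled ridge solver (both hypothetical, just for illustration): the L2 penalty is applied to the weight block but not to the bias, and even extreme shrinkage of the weights leaves the bias free to track the mean of the targets.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 200, 5
X = rng.normal(size=(N, n))
true_w, true_b = rng.normal(size=n), 3.0
y = X @ true_w + true_b + rng.normal(scale=0.5, size=N)

def ridge_no_bias_penalty(X, y, lam):
    """Least squares with an L2 penalty on the weights but NOT on the bias."""
    N, n = X.shape
    Xa = np.hstack([X, np.ones((N, 1))])   # last column carries the bias
    P = lam * np.eye(n + 1)
    P[-1, -1] = 0.0                        # leave the bias unregularised
    theta = np.linalg.solve(Xa.T @ Xa + P, Xa.T @ y)
    return theta[:n], theta[-1]

w_small, b_small = ridge_no_bias_penalty(X, y, lam=1.0)
w_big, b_big = ridge_no_bias_penalty(X, y, lam=1e6)   # extreme shrinkage of weights only
```

With `lam=1e6` the weights are crushed towards zero, but the unpenalised bias still settles near the target mean, consistent with the claim that leaving biases unregularized does not induce much variance.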
Regularization on weights without bias
In ML lingo a weight is a coefficient of a bona fide regression variable and bias is the intercept. Also, in regression language interaction has a specific meaning, not the same as used in the text quoted. In your text how variables interact means simply that there is a function that translates inputs $x$ into an output: $$a=f(b+wx)$$ So, the weight $w$ specifies how the variables $a$ and $x$ interact, using the language of the book. Now, the bias $b$ only controls the output $a$. In other words, they're saying that $x$ impacts $a$ via $w$, while $b$ impacts $a$ directly; it doesn't need another variable. I wouldn't pay too much attention to this argument. It's too wobbly for me. What you need to understand is that if you set all weights $w=0$, the model will still somewhat work, because $b\ne 0$, and it will cause your layer to still produce a value that is around the mean. It will not be a very intelligent forecast, since it doesn't accept any inputs, but it will be showing some kind of an average output. If you set all $w=b=0$ this will not work well, since it'll be producing zero all the time.
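A toy check of that last point (the data and numbers below are made up): with all weights forced to zero, the least-squares optimal bias is exactly the mean of the targets, which still beats predicting zero everywhere.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

# All weights forced to zero: the best remaining predictor is the bias alone,
# and the least-squares optimal bias is simply the mean of the targets.
b_only = y.mean()
mse_bias_only = ((y - b_only) ** 2).mean()

# Weights AND bias forced to zero: the model predicts 0 everywhere.
mse_all_zero = (y ** 2).mean()
```

Since `mse_all_zero = mse_bias_only + b_only**2`, the bias-only model is strictly better whenever the target mean is nonzero.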
Prove that the squared exponential covariance is non-negative definite
I am not an expert but I'll sketch a standard argument which is explained in more detail in Rasmussen and Williams, Chapter 4 Section 2.1 (that book has answered a ton of my questions about GPs). So we are working with the squared exponential function, right? We have: $$K_{i,j}= \alpha \cdot \mathrm{exp}\left(\frac{-(x_i-x_j)^2}{2\ell^2}\right) = \alpha \cdot \mathrm{exp}\left(\frac{-|x_i-x_j|^2}{2\ell^2}\right)$$ Since the kernel can be written as a function of $|x_i-x_j|$, it is stationary (isotropic, even). Since it is stationary, the trick is that we can apply Bochner's theorem to $K_{i,j}$. In this case, showing positive semidefiniteness of the squared exponential reduces to finding a suitable function $S(s)$ whose Fourier transform $\mathcal{F}_s$ satisfies $\mathcal{F}_s(S(s))=K_{i,j}$. Now the Fourier transform of a Gaussian is another Gaussian, so the $S(s)$ function that we are looking for turns out to be $$ S(s) = \alpha (2\pi \ell^2)^{D/2} \mathrm{exp}(-2\pi^2 \ell^2 s^2). $$ If you calculate the Fourier transform of this function you will get your kernel, thus showing it is positive semidefinite. I'm sorry if that's too terse, but I can try to derive more details if that helps.
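As a quick numerical sanity check (not a proof), one can build a Gram matrix from the squared exponential at arbitrary points and confirm that its eigenvalues are non-negative up to floating-point error; the points and hyperparameters below are arbitrary.

```python
import numpy as np

def sq_exp(x, alpha=1.0, ell=1.0):
    """Gram matrix K[i, j] = alpha * exp(-(x_i - x_j)^2 / (2 * ell^2))."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return alpha * np.exp(-d2 / (2.0 * ell**2))

rng = np.random.default_rng(3)
x = rng.uniform(-5.0, 5.0, size=50)
K = sq_exp(x)
eigvals = np.linalg.eigvalsh(K)   # all eigenvalues of a PSD matrix are >= 0
```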
Prove that the squared exponential covariance is non-negative definite
There are also 3 more proofs here: How to prove that the radial basis function is a kernel? Note that the "squared exponential" kernel is also called a "radial basis function" (RBF) kernel and a "Gaussian" kernel.
How are ergodicity and "weak dependence" related?
The concepts are not interchangeable. Ergodicity deals with studying systems where different realizations of the process are not available. For instance, in a coin toss experiment we could reasonably argue that we can generate any number of realizations of the sequence of coin tosses. We'll toss 10 coins 1000 times, and this gives us 1000 samples of the process. So, we could study the statistical properties of the 10 coin tosses. We could run a few more batches of 10 coin tosses and increase the sample, improve the estimates etc. This is not always possible. In many cases we cannot generate many realizations of the process, but we can observe one realization for a long, long time. So, ergodicity suggests that we can replace many realizations of the process with a very long observation over time. That over time we can obtain the same estimates of the parameters of the process as if we had obtained many realizations. Weak dependence deals with a single process, one time series. We're studying this process that is going on. We may need to forecast it into the future etc. It's nice if the process has this property where the correlation doesn't stick around for too long.
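A small simulation of the ergodicity point, using a hypothetical AR(1) process (which is stationary and ergodic for these parameter values): the time average of one long realization agrees with the ensemble average over many independent realizations.

```python
import numpy as np

rng = np.random.default_rng(4)
phi, c = 0.8, 1.0                      # AR(1): x_t = c + phi * x_{t-1} + e_t
true_mean = c / (1 - phi)              # stationary mean = 5

def ar1_path(T):
    x = np.empty(T)
    x[0] = true_mean                   # start in the stationary regime
    for t in range(1, T):
        x[t] = c + phi * x[t - 1] + rng.normal()
    return x

# One realization observed for a long, long time:
time_avg = ar1_path(100_000).mean()

# Many independent realizations, each observed at a single fixed time:
ensemble_avg = np.mean([ar1_path(200)[-1] for _ in range(2_000)])
```

Both averages land close to the stationary mean, which is exactly the substitution of "one long observation" for "many realizations" described above.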
How are ergodicity and "weak dependence" related?
I had the same question, and found these lecture notes. Page 8 states that a mixing process is ergodic (Theorem 7) and that a mixing process is also called weakly dependent. In other words, a weakly dependent process is ergodic. It is my understanding that we require ergodicity to estimate the asymptotic covariance matrix of serially correlated series, so assuming either ergodicity or weak dependence is sufficient.
How do I show this using the Cauchy-Schwarz inequality
I am able to use the Cauchy-Schwarz inequality, but I am not quite getting the same result. I may have made a mistake, so here are all the steps. Note that for positive semi-definite matrices, the trace defines an inner product. That is, $tr(AB) = \langle B^T, A \rangle$. Then by Cauchy-Schwarz, for $A$ and $B$ positive semi-definite symmetric matrices, $$tr(AB) = \langle B, A \rangle \leq \sqrt{\langle B,B\rangle \langle A,A \rangle } = \sqrt{tr(B^2) tr(A^2)} \,.$$ In addition, for positive semi-definite matrices, all eigenvalues are non-negative, so $tr(B^2) \leq tr(B)^2$. Also, the product of positive semi-definite matrices is itself positive semi-definite if the product is symmetric (so $\Lambda \tilde{\Sigma} \Lambda$ is PSD). Finally, for matrices $A,B,C$, $tr(ABC) = tr(CAB)$. Putting all of this together. \begin{align*} \dfrac{tr(\Lambda^2)}{p} \sqrt{ \dfrac{tr(\tilde{\Sigma}^2)}{tr((\Lambda \tilde{\Sigma} \Lambda)^2)}}& \geq \dfrac{tr(\Lambda^2)}{p} \sqrt{ \dfrac{tr(\tilde{\Sigma}^2)}{tr(\Lambda \tilde{\Sigma} \Lambda)^2}}\\ &= \dfrac{tr(\Lambda^2)}{p} \sqrt{ \dfrac{tr(\tilde{\Sigma}^2)}{tr(\Lambda^2 \tilde{\Sigma})^2}}\\ & = \dfrac{tr(\Lambda^2)}{p} \sqrt{ \dfrac{tr(\tilde{\Sigma}^2)}{\langle \tilde{\Sigma}, \Lambda^2\rangle^2}}\\ &\geq \dfrac{tr(\Lambda^2)}{p} \sqrt{ \dfrac{tr(\tilde{\Sigma}^2)}{ tr(\tilde{\Sigma}^2)tr(\Lambda^4)}}\\ & \geq \dfrac{tr(\Lambda^2)}{p} \sqrt{ \dfrac{1}{tr(\Lambda^2)^2}}\\ & = \dfrac{1}{p} \end{align*} I don't get that this is greater than 1; I only get the bound $1/p$.
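The chain of inequalities above can be checked numerically. This sketch draws an arbitrary random PSD $\tilde{\Sigma}$ and an arbitrary diagonal PSD $\Lambda$ and verifies the derived bound of $1/p$ (it does not test the claimed bound of $1$):

```python
import numpy as np

rng = np.random.default_rng(5)
p = 6
A = rng.normal(size=(p, p))
Sigma = A @ A.T                                  # a random PSD Sigma-tilde
Lam = np.diag(rng.uniform(0.5, 2.0, size=p))     # a PSD diagonal Lambda

tr = np.trace
M = Lam @ Sigma @ Lam
lhs = tr(Lam @ Lam) / p * np.sqrt(tr(Sigma @ Sigma) / tr(M @ M))
```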
For what types of research designs should (Days|Subject) vs. (1|Days:Subject) random effect specification be used?
Your fit2 does not really fit a random slope. Instead, it sets a random intercept for each combination of Days and Subject. This implies that for the computation of the random effects fit2 treats Days as a categorical rather than a continuous variable. You can check the difference by comparing the output of ranef(fit1) and ranef(fit2). For this example the correct model is clearly fit1; you can also see this by comparing the AIC or BIC of the two models.
Prove that the vector $(X_{n},Y_{n})$ converges in probability if and only if each component converges in probability
Both directions can be proved simply using definitions. For the $\Rightarrow$ direction, use $\Pr\left(|X_n-X| > \epsilon\right) \le \Pr\left(\sqrt{|X_n-X|^2+|Y_n-Y|^2}> \epsilon\right)$. For the $\Leftarrow$ direction, note $\Pr\left(\sqrt{|X_n-X|^2+|Y_n-Y|^2}> 2\epsilon \right)\le \Pr\left(|X_n-X| > \epsilon \text{ or } |Y_n-Y|> \epsilon \right) \le \Pr\left(|X_n-X| > \epsilon\right) + \Pr\left(|Y_n-Y|> \epsilon\right)$, where the last step is the union bound and both terms on the right go to zero by assumption.
ntree parameter in predict.gbm
A good use of that parameter is saving time on hyperparameter tuning. Suppose you want to tune the model on the number of trees with a test data set, and you want to try from 1000 to 5000 trees in steps of 1000. Instead of building 5 models, you can just build one model with 5000 trees, and use this n.trees parameter to see the performance at 1000 to 5000 trees!
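gbm itself is R, but the mechanism behind this shortcut is easy to see in a from-scratch sketch (a hypothetical stump-boosting loop in Python, not gbm's actual implementation): because the stages are trained serially, the prediction "using the first n trees" is just a running sum, so one fit yields the error curve at every ensemble size.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, size=300)
y = np.sin(X) + rng.normal(scale=0.1, size=300)

def fit_stump(x, resid):
    """Best single-threshold regression stump (tiny grid search over splits)."""
    best = (np.inf, 0.0, resid.mean(), resid.mean())
    for s in np.linspace(-3, 3, 25):
        mask = x <= s
        left, right = resid[mask], resid[~mask]
        if left.size == 0 or right.size == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    _, s, lv, rv = best
    return lambda xq: np.where(xq <= s, lv, rv)

# Trees are fit serially on residuals, so "predictions using the first n trees"
# are just the running sum of the first n stage outputs.
lr, n_trees = 0.1, 200
pred = np.zeros_like(y)
staged_mse = []
for _ in range(n_trees):
    stump = fit_stump(X, y - pred)
    pred += lr * stump(X)
    staged_mse.append(((y - pred) ** 2).mean())   # error using the first n trees only
```

One training run of 200 stages produces the whole `staged_mse` curve, which is exactly what passing a vector of n.trees to predict.gbm buys you.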
ntree parameter in predict.gbm
From the documentation: predict.gbm produces predicted values for each observation in newdata using the first n.trees iterations of the boosting sequence. If n.trees is a vector then the result is a matrix with each column representing the predictions from gbm models with n.trees[1] iterations, n.trees[2] iterations, and so on. Since GBM trains the models in serial, you can tell gbm.predict to only use the first n.trees number of trees. Or you can tell it to give you the predictions from multiple numbers of n.trees. I assume you can use this to look at questions like, "After how many trees do my predictions level off in their accuracy?" This also means that you cannot specify a higher n.trees than you did in training the model. If you do, it will give you a warning message and do predictions based on however many n.trees you fit the model with. See the code at https://github.com/cran/gbm/blob/master/R/predict.gbm.R#L57-L58:
n.trees[n.trees>object$n.trees] <- object$n.trees
warning("Number of trees not specified or exceeded number fit so far. Using ",paste(n.trees,collapse=" "),".")
How do we characterize probabilities on this infinite sample space
[Note: I think this answer is correct. If it's wrong, then hopefully at least it will be a starting point for further discussions.] It's not possible to talk about the "fraction of sequences with infinitely many 1s" unless you have some way of defining "fraction". This can't be done consistently without using measure theory. In measure theory terms, there is no way of defining a measure on this space such that every subset is measurable. In everyday terms, if you can talk about "the fraction of sequences with property $P$" for every choice of $P$, then you will reach a contradiction. Measure theory issues cannot be avoided by saying "all sequences have equal measure" as this would restrict you to using either the zero measure or the counting measure. In either case, you wouldn't have a probability space because there is no way that the measure of the entire space could be equal to $1$. Fortunately, there is only one way to define a sensible measure on this space, and under this measure, the probability that a sequence contains infinitely many 1s is $1$. This is reasonable, because certainly everyone would agree that the probability of getting infinitely many heads when flipping a coin infinitely many times is $1$, and this is a very similar situation. How is the measure defined? To define a measure, you need to specify a sigma-algebra of measurable subsets and a measure on them. Fortunately, in this case it is possible to take a short cut since we are exactly in the situation of Theorem 7.16 in this pdf. (A finite space is a metric space under the discrete metric in which the Borel sets are all the subsets of the space.) [Edit: I have just noticed that the theorem in that pdf is for infinite copies of a single space rather than different spaces. But I think it is still true, for example, see the Kolmogorov Extension Theorem article on Wikipedia, under 'General Form'.] 
The resulting product measure is defined as follows: For a finite list $b=(b_1, b_2, \ldots, b_n)$ of numbers, define $$S_b = \{ s \in S: s_i = b_i, 1\le i \le n\}.$$ Then define the measure of $S_b$ by $$\mu(S_b) = \prod_{i=1}^n P(a_i = b_i) = \prod_{i=1}^n \frac{1}{ik+1}$$ and the theorem tells us that this magically extends to a probability measure on all measurable subsets of $S$. Now we have to use the fact that $\mu$ is a measure to get $\mu(S_{RL})=1$. The complement of $S_{RL}$ is $$\cup_{n=1}^\infty \cap_{k=n}^\infty A_k$$ where $A_k$ is the subset of $S$ consisting of those sequences for which $a_k \neq 1$. This shows that $S_{RL}$ is a measurable set because its complement can be expressed in terms of the $A_k$, which are measurable because they can be expressed in terms of $S_b$'s using countable unions, intersections and complements. Now, $$\mu(\cap_{k=n}^\infty A_k)=0$$ because $\mu(\cap_{k=n}^N A_k) \rightarrow 0$ as $N \rightarrow \infty$. (Edit: conveniently, I think this is actually the calculation given in the other answer.) Therefore, the measure of the complement of $S_{RL}$ is $$\mu(S_{RL}^C)= \mu(\cup_{n=1}^\infty \cap_{k=n}^\infty A_k) = 0$$ because the measure of a countable union of sets of zero measure is zero, by properties of measure, and so $\mu(S_{RL})=1$ because $\mu$ is a probability measure by construction.
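Taking the measure defined above with $P(a_i = 1) = 1/(ik+1)$ and treating $k$ as a fixed constant ($k=2$ below is an arbitrary choice; the product is indexed by $i$ to avoid a clash with that constant), the finite-intersection probability $\prod_{i=1}^{N} \left(1 - \frac{1}{ik+1}\right)$ can be evaluated numerically and is seen to decay to zero, since $\sum_i 1/(ik+1)$ diverges:

```python
import numpy as np

k = 2                                   # the fixed constant in P(a_i = 1) = 1/(i*k + 1)
i = np.arange(1, 1_000_001)

# Probability that a_i != 1 for every i <= N, for each N up to 10^6, computed as
# a running product in log space for numerical stability:
log_prod = np.cumsum(np.log1p(-1.0 / (k * i + 1.0)))
mu_no_ones = np.exp(log_prod)
```

The product is strictly decreasing and tends to zero, matching the claim that $\mu(\cap_{k=n}^N A_k) \rightarrow 0$.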
How do we characterize probabilities on this infinite sample space
47,743
How do we characterize probabilities on this infinite sample space
I would say that to ask this question, we must ask the probability that any sequence has finitely many ones. I think it's sufficient to show that the probability of generating a sequence, as you described, that has finitely many ones, by rolling a die at each step, is zero. If that's true then drawing any element of $S$ should almost surely give you an element in $S_{R L}.$ $$ \prod_{j=1}^\infty \left( 1 - \frac{1}{k j + 1} \right) = \prod_{j=1}^\infty \frac{k j}{k j +1} = 0. $$ It's not obvious that the above product is zero, but it is shown (and I'm borrowing this result from) Chapter 2 Example 6a of this book, which was brought up to ask (and answer) the question in this recent thread. Now it's easy to show that the above product, with a finite upper bound, is nonzero, simply by the fact that none of the factors are individually zero. So it follows that, $$ \prod_{j=m}^\infty \frac{k j} {k j + 1} = 0 $$ for any positive integer $m.$ Now let's calculate the probability of a sequence in $S$ having exactly $n$ ones. To do this, we define $W_n$ to be the set of all subsets of $\mathbb{N}$ of size $n$. (For example, elements of $W_3$ can be $\{1,4,9 \}, \{2, 888, 1.5 \times 10^{55} \},$ etc.) Then the probability of having exactly $n$ ones is, $$ P(\text{n ones}) = \sum_{s \in W_n} \left( \prod_{j \in s} \frac{1}{k j + 1} \prod_{j \notin s} \frac{k j }{k j + 1} \right). $$ We can decompose the second product in the summand to look like this, $$ \prod_{j \notin s} \frac{k j}{k j + 1} = \prod_{j \notin s; j < \max(s)} \frac{k j}{k j +1} \prod_{j = \max(s)+1}^\infty \frac{k j} {k j + 1}, $$ where $\max(s)$ is the highest integer in the set $s.$ Remember, we're summing over all subsets of $\mathbb{N}$ of size $n,$ so $s$ is a set of $n$ positive integers. It follows that $\max(s)$ is finite for all sets $s$ in the sum. Thus $P(\text{n ones})$ is a (countable) sum of terms, each of which is zero. 
Therefore $P(\text{n ones}) = 0.$ This is true for all positive integers $n$ (and the case $n=0$ is the first product above). It follows that, with probability $1$, a sequence has infinitely many ones.
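The same conclusion can be illustrated numerically: the expected number of 1s among the first $N$ positions is $\sum_{j=1}^N 1/(kj+1)$, which grows without bound (roughly like $\frac{1}{k}\log N$). A rough sketch in Python, with the illustrative choice $k=6$:

```python
import random

def expected_ones(N, k=6):
    # E[#1s in the first N positions] = sum_j P(a_j = 1)
    return sum(1.0 / (k * j + 1) for j in range(1, N + 1))

def simulate_ones(N, k=6):
    # count 1s in one randomly generated sequence truncated at position N
    return sum(random.random() < 1.0 / (k * j + 1) for j in range(1, N + 1))

random.seed(0)
for N in (10**2, 10**4, 10**6):
    print(N, expected_ones(N))  # diverges, so 1s keep appearing
print(simulate_ones(10**4))
```

Since the positions are independent and the expected counts diverge, the second Borel-Cantelli lemma gives infinitely many 1s almost surely, matching the product argument above.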
47,744
A Coin Flip Problem
Let your coin be $X_1$ and denote the sum of heads as $S$. As I have written in the comment, the answer seems to be $$P(X_1 = 1| S \ge k) = \frac{\sum_{i = k}^{n} \binom{n-1}{i-1}}{\sum_{i=k}^{n}\binom{n}{i}}$$ Here is a plot of theoretical vs sample probabilities with $n = 20$ and $10^7$ trials. We can see that with low values of $k$ we get almost no additional information, thus the probability is close to the unconditional $0.5$. Partially recreated code as requested by @Maximilian

library(tidyverse)

coin_flips <- function(n, k) {
  # Create n x k matrix of binary outcomes
  flips <- matrix(as.numeric(rbinom(n * k, 1, 0.5)), ncol = k)
  firsts <- flips[, 1]
  flips <- t(apply(flips, 1, sort, decreasing = T))
  # i-th column is an indicator value [S >= i]
  # where S is the sum of heads
  flips <- as.tibble(flips)
  f <- function(x) {
    if (sum(x) > 0) {
      return(sum(x * firsts) / sum(x))
    }
    return(1)
  }
  summary <- flips %>% summarise_all(.funs = f)
  colnames(summary) <- 1:k
  return(summary)
}

# Example usage
cf <- coin_flips(1000000, 20)
cf %>%
  gather %>%
  ggplot(aes(as.numeric(key), value)) +
  geom_point() +
  ylim(c(0.48, 1))
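The closed form can also be checked exactly against brute-force enumeration of all $2^n$ equally likely outcomes for a small $n$; the formula is the one above, the rest is illustrative scaffolding:

```python
from itertools import product
from math import comb
from fractions import Fraction

def formula(n, k):
    # P(X_1 = 1 | S >= k) from the answer, as an exact fraction
    num = sum(comb(n - 1, i - 1) for i in range(k, n + 1))
    den = sum(comb(n, i) for i in range(k, n + 1))
    return Fraction(num, den)

def brute_force(n, k):
    # enumerate all 2^n sequences and condition on S >= k directly
    hits = total = 0
    for flips in product((0, 1), repeat=n):
        if sum(flips) >= k:
            total += 1
            hits += flips[0]
    return Fraction(hits, total)

for k in range(1, 11):
    assert formula(10, k) == brute_force(10, k)
print(float(formula(20, 15)))  # e.g. n = 20, conditioning on S >= 15
```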
47,745
Two measurement devices vs 1 device multiple measurements
Regardless of how these devices behave, an additive model of variability provides useful insight. Such a model supposes that the response of an instrument is the sum of three independent quantities (none of which we necessarily know):

1. The true value it is trying to measure, $\mu$.
2. A random measurement error $X$ with mean $0$ and variance $\sigma^2$. $\sigma$ measures the imprecision of the instrument.
3. A fixed error $Y$ which, because we do not know it, we also model as a random variable with mean $0$ and variance $\tau^2$.

One way to view this is to suppose there is a large bin of instruments you could have used and the one(s) you are using have been pulled randomly from that bin. Overall these instruments are accurate (that's the mean $0$ assumption) but they do vary systematically from one to another (that's what $\tau$ measures). Although this model is rarely exactly right, it typically holds to a sufficiently good approximation that we can use it to find near-optimal combinations of measurements. This is part of the theory of experimental design. Suppose--this requires an assumption that's often not quite true, but is useful to get started--the results of the two instruments are independent and that the results of repeated measurements by one instrument are independent. Consider two possibilities:

Repeated measurements by one instrument. Assumptions 1-3 enable us to view each measurement as a sum $$Z_i = \mu + X_i + Y$$ where $i$ is an index denoting the measurement and ranges from $1$ through $n$. Notice that $Y$ has no subscript because it is a property of the instrument itself: it doesn't change from one measurement to the other. We may compute the variance of the average of the measurements--conceived of as an average of these random variables $Z_i$--as $$\operatorname{Var}(\bar Z) = \frac{1}{n}\sigma^2 + \tau^2.$$ As $n$ gets larger, $\sigma^2/n$ grows smaller. 
Moreover, if we take expectations in the sense of what an arbitrarily large number of measurements would produce on average, $$E[\bar Z] = \mu + Y$$ shows that even the average is biased (unless you were lucky enough to draw an instrument with $Y\approx 0$--but you can't know that). The moral of this calculation is that averaging measurements from one instrument reduces the imprecision but has no effect on the accuracy. Independent measurements by multiple instruments. Now $i$ indexes both the measurement and the instrument. Accordingly, $$Z_i = \mu + X_i + Y_i.$$ Now $$\operatorname{Var}(\bar Z) = \frac{1}{n}\sigma^2 + \frac{1}{n}\tau^2$$ and (in the same sense as before, taking an arbitrarily large number of instruments), $$E[\bar Z] = \mu.$$ As $n$ gets larger, both $\sigma^2/n$ and $\tau^2/n$ grow smaller. Regardless, the expected value of the measurement is correct: $\bar Z$ is more likely to be accurate in this case. Thus, averaging measurements from multiple instruments reduces the imprecision and improves the accuracy. The decision seems clear: when you have the choice, use multiple instruments. Making repeated measurements from the same instrument is no substitute. When you have even more time and budget to design your experiment, you may combine both approaches: use multiple instruments and repeat the measurements made with each instrument. Such data can be analysed with an Analysis of Variance (ANOVA) to estimate $\sigma^2$ and $\tau^2$, the components of variance. You can use this information, along with what you know about the costs of making measurements and buying more instruments, to estimate the best balance between the numbers of instruments and numbers of repeated measurements made by each. The calculations aren't really any more difficult than illustrated here.
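A quick simulation contrasting the two designs makes the variance formulas concrete. This is a sketch with hypothetical values $\sigma = 1$, $\tau = 2$, $n = 10$, using the additive model above:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, tau, n, reps = 0.0, 1.0, 2.0, 10, 20000

# Design 1: one instrument, n repeated measurements.
# One bias Y per experiment -> Var(Zbar) = sigma^2/n + tau^2
Y_single = rng.normal(0, tau, size=(reps, 1))
zbar_single = (mu + Y_single + rng.normal(0, sigma, size=(reps, n))).mean(axis=1)

# Design 2: n different instruments, one measurement each.
# One bias per instrument -> Var(Zbar) = (sigma^2 + tau^2)/n
Y_multi = rng.normal(0, tau, size=(reps, n))
zbar_multi = (mu + Y_multi + rng.normal(0, sigma, size=(reps, n))).mean(axis=1)

print(zbar_single.var(), sigma**2 / n + tau**2)   # theory: 4.1
print(zbar_multi.var(), (sigma**2 + tau**2) / n)  # theory: 0.5
```

With these (made-up) values, averaging over one instrument cannot push the variance of the mean below $\tau^2 = 4$, while the multi-instrument design shrinks both components.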
47,746
Understanding the GARCH(1,1) model: the constant, the ARCH term and the GARCH term
A GARCH(1,1) model is \begin{aligned} y_t &= \mu_t + u_t, \\ \mu_t &= \dots \text{(e.g. a constant or an ARMA equation without the term $u_t$)}, \\ u_t &= \sigma_t \varepsilon_t, \\ \sigma_t^2 &= \omega + \alpha_1 u_{t-1}^2 + \beta_1 \sigma_{t-1}^2, \\ \varepsilon_t &\sim i.i.d(0,1). \\ \end{aligned} The three components in the conditional variance equation you refer to are $\omega$, $u_{t-1}^2$, and $\sigma_{t-1}^2$. Your question seems to be, how is $\omega$ different from $\sigma_{t-1}^2$? First, note that $\omega$ is not the long-run variance; the latter actually is $\sigma_{LR}^2:=\frac{\omega}{1-(\alpha_1+\beta_1)}$. $\omega$ is an offset term, the lowest value the variance can achieve in any time period, and is related to the long-run variance as $\omega=\sigma_{LR}^2(1-(\alpha_1+\beta_1))$. Second, $\sigma_{t-1}^2$ is not the historical variance of the moving window; it is instantaneous variance at time $t-1$.
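As a numerical illustration of the long-run variance relation, here is a sketch in Python with hypothetical parameter values $\omega = 0.2$, $\alpha_1 = 0.1$, $\beta_1 = 0.8$ (so $\sigma^2_{LR} = 0.2/(1-0.9) = 2$):

```python
import numpy as np

omega, alpha, beta = 0.2, 0.1, 0.8
sigma2_LR = omega / (1 - (alpha + beta))  # long-run (unconditional) variance

# The recursion E[sigma_t^2] = omega + (alpha + beta) * E[sigma_{t-1}^2]
# converges to sigma2_LR from any starting value.
e_sig2 = 10.0
for _ in range(500):
    e_sig2 = omega + (alpha + beta) * e_sig2

# Simulated GARCH(1,1): the sample variance of u_t should be close to sigma2_LR,
# and each instantaneous sigma_t^2 is at least omega (the offset).
rng = np.random.default_rng(0)
T = 200_000
u = np.zeros(T)
sig2 = sigma2_LR
for t in range(1, T):
    sig2 = omega + alpha * u[t - 1]**2 + beta * sig2
    u[t] = np.sqrt(sig2) * rng.standard_normal()

print(sigma2_LR, e_sig2, u.var())  # all close to 2
```

Note how $\omega = 0.2$ itself is far below the long-run variance of $2$; it is only the floor of the conditional variance in any single period.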
47,747
Understanding the GARCH(1,1) model: the constant, the ARCH term and the GARCH term
Think you have it backwards on sigma squared. The "beta" of the GARCH model is the coefficient of historical variance.
47,748
Skewness of Tweedie distribution
The exponential dispersion family is a broad family of distributions allowed in GLMs. The general form of the PDF can be written as follows: $$f(x;\theta,\phi)=a(x,\phi)\exp\Big[\frac{1}{\phi}\big(x\theta-\kappa(\theta)\big)\Big].$$ The term $\kappa(\theta)$ is denoted with kappa because it is intimately related to the cumulants. Specifically, the cumulant generating function (CGF) is given by $$K(t;\theta,\lambda)=\frac{1}{\phi}\big(\kappa(\theta + t\phi)-\kappa(\theta)\big)$$ (see Wikipedia or Eq 2.6 in Jørgensen 1987, or Jørgensen's The Theory of Dispersion Models, 1997. Note that with $\phi=1$ the family reduces to the natural exponential family, see Wikipedia for its CGF.) It follows that the first three cumulants are given by: \begin{align} \kappa_1 &= \kappa'(\theta)\\ \kappa_2 &= \phi\kappa''(\theta)\\ \kappa_3 &= \phi^2\kappa'''(\theta) \end{align} (Again note that for the natural exponential family cumulants are simply derivatives of $\kappa(\theta)$.) For the Tweedie distribution it must hold that \begin{align} \kappa_1 &= \kappa'(\theta) = \mu\\ \kappa_2 &= \phi\kappa''(\theta) = \phi\mu^p \end{align} so it follows that $$\kappa_3=\phi^2\kappa'''(\theta)=\phi^2(\kappa''(\theta))'=\phi^2(\mu^p)'=\phi^2p\mu^{p-1}\mu'=\phi^2p\mu^{p-1}\mu^p=\phi^2p\mu^{2p-1}.$$ Now we can compute skewness: $$\operatorname{Skewness}[X]=\frac{\kappa_3}{\kappa_2^{3/2}}=\frac{\phi^2p\mu^{2p-1}}{(\phi\mu^p)^{3/2}}=\phi^{1/2}p\mu^{p/2-1}.$$ As a sanity check, this formula yields correct values for $p=0$, $p=1$, and $p=2$; these are skewness formulas for the Gaussian, Poisson, and gamma. 
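The three boundary cases can be checked against the textbook skewness of the corresponding distributions. A small sketch (the values $\mu=4$, $\phi=0.5$ are arbitrary illustrations; the formula is $\phi^{1/2}p\mu^{p/2-1}$ from above):

```python
import scipy.stats

def tweedie_skew(p, phi, mu):
    return phi**0.5 * p * mu**(p / 2 - 1)

mu, phi = 4.0, 0.5

# p = 0: Gaussian, skewness 0
assert tweedie_skew(0, phi, mu) == 0.0

# p = 1 with phi = 1: Poisson(mu), skewness 1/sqrt(mu)
assert abs(tweedie_skew(1, 1, mu) - scipy.stats.poisson.stats(mu, moments='s')) < 1e-12

# p = 2: gamma with mean mu and variance phi*mu^2, i.e. shape 1/phi,
# skewness 2*sqrt(phi) (independent of mu)
shape = 1 / phi
assert abs(tweedie_skew(2, phi, mu) - scipy.stats.gamma.stats(shape, moments='s')) < 1e-12
print('boundary cases OK')
```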
Let's verify that it works correctly for $1<p<2$:

import numpy as np
import scipy.stats

# Tweedie random generation, using compound Poisson-Gamma representation
def tweediernd(n=1, p=1.5, phi=10, mu=1):
    # See Dunn & Smyth paper linked above for these formulas
    lambd = mu**(2-p)/(2-p)/phi   # Poisson rate
    alpha = -(2-p)/(1-p)          # gamma shape
    beta = phi*(p-1)*mu**(p-1)    # gamma scale
    x = np.zeros(n)
    for i in range(n):
        x[i] = np.sum(np.random.gamma(alpha, scale=beta,
                                      size=np.random.poisson(lambd)))
    return x

np.random.seed(42)
x = tweediernd(n=10000)
print('Mean:    ', np.mean(x))           # 1
print('Variance:', np.var(x))            # 10
print('Skewness:', scipy.stats.skew(x))  # sqrt(10)*1.5 = 4.74

This yields:

Mean:     0.996421833721
Variance: 9.86859188577
Skewness: 4.763172234662853
47,749
If in this problem I regress $x$ on $y$ instead than $y$ on $x$, do I need to use an error-in-variables model?
The direction of regression may be important to prevent attenuation bias. Your question about the regression $x \sim y$ versus $y \sim x$ has many angles. A problem which you might encounter is regression attenuation or regression dilution. This does not depend on which variable was controlled in the experiment, nor on the direction of the causal relation. It does depend on the error made in the variables. It happens if the 'independent variable in the regression' has a large error. The underlying mathematics does not care about the direction of the causal relationship, or which variable was controlled for; it cares about the errors in the variables. $$ (y+\epsilon_y) = a + b(x + \epsilon_x) \qquad vs \qquad \frac{(y+\epsilon_y)-a}{b} = (x + \epsilon_x) $$ What we do when changing the "direction" of a regression is not really changing the direction, but much more like ignoring either $\epsilon_x$ (the left hand side situation) or $\epsilon_y$ (the right hand side situation). The variable that was controlled in the experiment, Voltage in this case, can also have a large error, even if it was "controlled". (You do not really set the voltage; you set some button or switch that controls the voltage, and eventually you measure the voltage by reading it from a voltmeter or something.) So this left hand side situation with $\epsilon_x$ ignored, $(y+\epsilon_y) = a + bx$, may be wrong and cause problems (that is attenuation). 'In this problem' you do not have to worry about the direction of regression and attenuation. In this problem there is not a large error, or at least not much noise. The curves are smooth with little jitter. If there is an error then it is a systematic error, but such errors are not linked to regression attenuation (which is about the random errors). Also, such systematic errors have little to do with other issues in the mathematics. 
Except for possibly making some inverse regression ill-posed due to crossing some asymptote or creating negative values in roots, logs, etcetera. These systematic errors are more like something that should be dealt with on the practical side (testing equipment, performing good calibration, etcetera). Much more important is to use the proper model. In the referenced question I have shown how the polynomial model is not working well in the large range https://stats.stackexchange.com/a/315546/164061 . The nonlinear model is not doing much better $$\text{Amplification}=\frac{1}{1-\left(\frac{\text{Voltage}}{p_0}\right)^{p_1}}+p_2+\epsilon$$ Or at least, I can't seem to make it converge. And I believe it is ill-posed. All this work of trying this perfect fitting is a bit of an overkill for the simple task of getting an estimate for the Voltage value at Amplification = 150. This is an interpolation problem, not a fitting problem! This can be done by getting the values of the nearest data points above and below 150 and using the line between these points to make the estimate. There is no noise that would make this method work badly. If there were noise, one could use a line through several points. This interpolation working well shows that the direction of the relationship is not really the issue here. If one does wish to fit a reasonable curve then I believe it would be better to dig deeper into the mechanics of the system and use knowledge of the device to create a good curve fit, rather than some polynomial or simplistic literature curve, which are "just" experimental relationships that have little value for generalization and provide little information on the mechanics and inner workings of the devices (which may not be the prime goal of the experiments, but would be a bonus, and at least would increase the robustness of the fitting method). 
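For the concrete task of estimating the Voltage at Amplification = 150, a sketch of the interpolation suggested above, using the two data points that bracket 150 in the 91200913 series from the code below ($A = 149.796$ at $V = 359.983$ and $A = 170.113$ at $V = 361.975$):

```python
import numpy as np

# the two data points bracketing Amplification = 150 (from the series in the code below)
A = np.array([149.796, 170.113])
V = np.array([359.983, 361.975])

# np.interp treats V as a piecewise-linear function of A
V_at_150 = np.interp(150.0, A, V)
print(V_at_150)  # ~360.0 V
```

With smooth, low-noise data like this, the linear interpolant is essentially as good as any fitted curve for this one prediction.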
It is hard to do this work by just gazing at the raw data without much information about the system; however, I was able to get a reasonable fit for a differential equation with two power terms. $$\frac{\partial A}{\partial V} = a(A-1)^b + c(A-1)^d + \epsilon$$ This is a separable equation and can be solved both numerically (easy, e.g. by deSolve in R) and analytically (although this seems to involve the hypergeometric function $_2F_1$, which is not easy to fit directly). Example of fitting with a differentiated function Upon request, the code to fit with the differentiated function. I want to stress two points:

1. It is indeed an interesting method to fit, although it is a whole different question and not necessary (overkill) to make the predictions at Amplification = 150.
2. If there is more noise then it may even be worse than a local linear or polynomial fit in the region around Amplification = 150. The fit is only empirically determined and we cannot guarantee that it helps to cancel out noise without introducing more bias in return. Some more information about the system (ab initio approach) would certainly help.

I started out with this method of differentiation, in the first place, to investigate and explore potential relationships based on differential equations. To use this method as a final fitting procedure is not always successful or advisable, because the differentiation will amplify the noise. So it is not a general method to solve the problem, and each problem has its own quirks with a different approach (this also makes the answer to the question about 'regress $x$ on $y$ instead of $y$ on $x$' not general and difficult to formulate). 
code: #### demonstrating fit based on differentiated data or differential equation #### #### note that this is no production code #### so no checks all sorts and several hard coded limits library(deSolve) library(optimr) # we make two plots side by side layout(matrix(c(1,2), 1, 2, byrow = TRUE)) # getting data (for simplicity of the example we just take the 91200913 serial number) A <- c(1.00252,1.00452,1.00537,1.0056,1.00683,1.0069,1.00847,1.00935,1.01157,1.01418,1.01914,1.0247,1.02919,1.03511,1.04545,1.07362,1.11549,1.17123,1.25019,1.36276,1.5104,1.69862,1.9518,2.26756,2.66278,3.14247,3.73163,4.46152,5.36262,6.49514,7.9227,9.73803,12.0663,15.0943,19.1004,20.0563,21.0672,22.142,23.2867,24.5037,25.8102,27.2024,28.6916,30.2968,32.0181,33.8775,35.8937,38.0569,40.4069,42.9713,45.7766,48.8312,52.2068,55.916,60.0356,64.6109,69.7152,75.4698,82.0003,89.4222,97.9493,107.807,119.441,133.2,149.796,170.113,195.89,229.058,273.481,335.96,431.682,593.091,918.112,1903.74) V <- c(24.9681,29.9591,34.9494,44.9372,49.9329,54.9625,59.9639,64.9641,69.965,74.9663,79.969,84.9719,89.974,94.9752,99.9759,109.974,119.969,129.96,139.96,149.963,159.958,169.959,179.963,189.957,199.97,209.971,219.971,229.973,239.966,249.962,259.971,269.971,279.97,289.968,299.959,301.968,303.967,305.966,307.965,309.955,311.963,313.963,315.961,317.962,319.956,321.951,323.961,325.962,327.963,329.965,331.966,333.959,335.97,337.972,339.973,341.974,343.967,345.97,347.978,349.98,351.98,353.971,355.983,357.983,359.983,361.975,363.989,365.989,367.984,369.979,371.992,373.994,375.985,377.999) # numerical differentiation (the noise in this example is not high, alternatively one could higher order methods e.g. a Savitky Golay filter) dA <- A[-1]-A[-74] dV <- V[-1]-V[-74] mA <- 0.5*(A[-1]+A[-74]) mV <- 0.5*(V[-1]+V[-74]) # plotting the derivartive dA/dv with a function of (A-1) # substracting A by 1, resulting in A-1 was done # to get an asymptote to zero instead of one. 
# this give probably more interresting relations on a log-log scale col <- hsv(0,0,0.5) plot(mA-1, dA/dV, ylim=c(0.000001,2000), xlim=c(0.001,4000), pch=21,cex=0.5, col = col, bg = col, log="xy",xlab="amplification",ylab="d amplification / d Volt") title("dA/dV as function of A \n data points and fit") # this above plot looks so much like two seperate straight lines # lets try to fit a double power law # # in the nls fit we use a scaling by the amplification # to give more weight on the lower values # (this arbitrary scaling does introduce some subjectivity, # but it is just a practical way to avoid the alternatively # plotting of logarithms which require the use of the 'port' # algorithm and limits to prevent logs of negative values. # so We can do it better, but we ar just lazy for this demonstration) fit <- nls(dA/dV ~ a*(mA-1)^b+c*(mA-1)^d, start=c(a=0.026, b=0.9, c=0.0005, d=1.88), weights = (dA/dV)^-1, control = nls.control(maxiter = 2000 , minFactor = 10^-9)) # ploting coefs <- coef(fit) x <- 10^(c(-10:16)/3) lines(x,coefs[1]*x^coefs[2]+coefs[3]*x^coefs[4],col=col) text_string <- paste0("fitted line: dA/dV =",round(coefs[1],5),"A^",round(coefs[2],2)," + ",round(coefs[3],5),"A^",round(coefs[4],2)) text(10^-3,10^-6,text_string,pos = 4) # going back to the case A vs V # - we will need to integrate the fit # # - note that the fit of dA/dV vs A was done to explore the relationship # the differentiation is not a very robust operation (noisy) and usually # we would whish to obatain a symbolic integration of the differential equation # and use the result to relate A as a function of V rather than dA/dV as a function of A # # but this involves a hypergemoetric function 2F1 which happens to be difficult to fit # also the function is not that noisy at all so the fit in the space (dA/dV, A) works well # # - in the end we do add some computation with the optim library to optimize the fit in the # space A vs V, thus without performing differentiation # (or at least, under 
the hood the optim algorithm will do some differtiation of the Loss function # to determine the convergence, but we do not do the noisy differntiation dA/dV) # plotting data points plot(V, A, xlim=c(0,400), ylim=c(1,4000), log="y", pch=21,cex=0.7, col = col, bg = col, xlab="Voltage",ylab="Amplification") title("A as function of V \n data points and two fits") legend(0,4000, c("data points","fit dA/dV vs A","fit A vs V"), col=c(8,4,2), pt.bg = c(col,0,0), lty=c(NA,2,2), pch=c(21,NA,NA)) # differential equation for use with deSolve fitje <- function(t, state, parameters) { with(as.list(c(state, parameters)), { dV <- -1 dA <- -(a*(A-1)^b+c*(A-1)^d) list(c(dV, dA)) }) } # integration using deSolve (we put this in a function for easier use with optim) integraal <- function(coefs) { parameters <- coefs state <- c(V = max(V), A = max(A)) times <- seq(0, 1900, by = 0.1) out <- ode(y = state, times = times, func = fitje, parms = parameters) out } out <- integraal(coefs) # plot deSolve result lines(out[,"V"],out[,"A"], col=4,lty=2) # solving the fit in the space V,A by putting the deSolve function into optim # (this will take potentially take some time and is only recommended if we can not # find an analytical function to put into optim/nls/nlmer/etc) f <- function(coefs) { A_o <- A model <- integraal(coefs) A_m <- approx(x = model[,"V"], y = model[,"A"], xout = V)$y sum((A_o-A_m)^2) } f(coefs) # fingers crossed that it won't diverge to NAs due to getting out of the range of the approx-function # # not that using the coefficient from the previous fit will help the convergence # performing this method sec without a good initial guess may end badly model2 <- optim(coefs, f) #plotting coefs2 <- model2$par out2 <- integraal(coefs2) lines(out2[,"V"],out2[,"A"], col=2,lty=2) Image generated by this code:
If in this problem I regress $x$ on $y$ instead of $y$ on $x$, do I need to use an error-in-variables model?

The direction of regression may be important to prevent attenuation bias. Your question about the regression $x \sim y$ versus $y \sim x$ has many angles. A problem you might encounter is regression attenuation, also called regression dilution. It does not depend on which variable was controlled in the experiment, nor on the direction of the causal relation; it depends on the errors made in the variables. Attenuation happens when the 'independent variable in the regression' has a large error. The underlying mathematics does not care about the direction of the causal relationship, or about which variable was controlled for; it cares about the errors in the variables. $$ (y+\epsilon_y) = a + b(x + \epsilon_x) \qquad \text{vs} \qquad \frac{(y+\epsilon_y)-a}{b} = (x + \epsilon_x) $$ What we do when changing the "direction" of a regression is not really changing the direction, but rather ignoring either $\epsilon_x$ (the left-hand situation) or $\epsilon_y$ (the right-hand situation). The variable that was controlled in the experiment, Voltage in this case, can also have a large error, even if it was "controlled" (you do not really set the voltage; you set some button or switch that controls the voltage, and eventually you measure the voltage by reading it from a voltmeter or something similar). So the left-hand situation with $\epsilon_x$ ignored, $(y+\epsilon_y) = a + bx$, may be wrong and cause problems (that is attenuation).

In this problem you do not have to worry about the direction of regression and attenuation. There is no large error, or at least not much noise: the curves are smooth with little jitter. If there is an error, it is a systematic error, but such errors are not linked to regression attenuation (which is about the random errors). Such systematic errors also have little to do with other issues in the mathematics.
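A small simulation (sketched here in Python with NumPy; the parameter values are arbitrary) illustrates the attenuation: noise on the regressor shrinks the fitted slope towards zero by the reliability ratio $\mathrm{var}(x)/(\mathrm{var}(x)+\mathrm{var}(\epsilon_x))$, while noise on the response leaves the slope unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
b = 2.0                                   # true slope (illustrative value)
x = rng.normal(0.0, 1.0, n)               # true regressor, var(x) = 1
y = 1.0 + b * x                           # exact linear relation

# error added to the regressor attenuates the fitted slope
x_noisy = x + rng.normal(0.0, 1.0, n)     # var(eps_x) = 1
slope_attenuated = np.polyfit(x_noisy, y, 1)[0]

# error added to the response only inflates the residual noise
y_noisy = y + rng.normal(0.0, 1.0, n)
slope_ok = np.polyfit(x, y_noisy, 1)[0]

reliability = 1.0 / (1.0 + 1.0)           # var(x) / (var(x) + var(eps_x))
print(slope_attenuated)                   # close to b * reliability = 1.0
print(slope_ok)                           # close to b = 2.0
```

The fitted slope on the noisy regressor lands near $b$ times the reliability ratio, not near $b$ itself, which is the attenuation described above.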
Except for possibly making some inverse regression ill-posed by crossing an asymptote, or by creating negative values in roots, logs, etcetera. These systematic errors are something that should be dealt with on the practical side (testing equipment, performing good calibration, etcetera).

Much more important is to use the proper model. In the referenced question I have shown how the polynomial model does not work well over the large range: https://stats.stackexchange.com/a/315546/164061 . The nonlinear model $$\text{Amplification}=\frac{1}{1-\left(\frac{\text{Voltage}}{p_0}\right)^{p_1}}+p_2+\epsilon$$ is not doing much better. Or at least, I can't seem to make it converge, and I believe it is ill-posed.

All this work towards a perfect fit is a bit of overkill for the simple task of getting an estimate of the Voltage value at Amplification = 150. This is an interpolation problem, not a fitting problem! It can be done by taking the nearest data points above and below 150 and using the line between these points to make the estimate. There is no noise that would make this method work badly; if there were noise, one could use a line through several points. That this interpolation works well shows that the direction of the relationship is not really the issue here.

If one does wish to fit a reasonable curve, then I believe it would be better to dig deeper into the mechanics of the system and use knowledge of the device to create a good curve fit, rather than some polynomial or simplistic literature curve. Those are "just" experimental relationships which have little value for generalization and provide little information on the mechanics and inner workings of the devices (which may not be the prime goal of the experiments, but would be a bonus, and would at least increase the robustness of the fitting method).
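The interpolation is a two-line computation (a Python sketch; the two bracketing (A, V) pairs are taken from the measured series listed in the code further below):

```python
# linear interpolation between the two data points that bracket Amplification = 150
# (these (A, V) pairs are taken from the measured series in the fitting code)
A_lo, V_lo = 149.796, 359.983
A_hi, V_hi = 170.113, 361.975

A_target = 150.0
V_est = V_lo + (A_target - A_lo) / (A_hi - A_lo) * (V_hi - V_lo)
print(V_est)   # roughly 360.0 Volt
```

Because the data points sit close together near the target, the local line is an excellent approximation regardless of which variable is treated as "independent".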
It is hard to do this work by just gazing at the raw data without much information about the system; however, I was able to get a reasonable fit for a differential equation with two power terms: $$\frac{\partial A}{\partial V} = a(A-1)^b + c(A-1)^d + \epsilon$$ This is a separable equation that can be solved numerically (easy, e.g. with deSolve in R) or analytically (although the latter seems to involve the hypergeometric function $_2F_1$, which is not easy to fit directly).

Example of fitting with a differentiated function

Upon request, the code to fit with the differentiated function. I want to stress two points:

- It is indeed an interesting method to fit, although it is a whole different question and not necessary (overkill) for making the prediction at Amplification = 150. If there is more noise, it may even be worse than a local linear or polynomial fit in the region around Amplification = 150.
- The fit is only empirically determined, and we cannot guarantee that it helps to cancel out noise without introducing more bias in return. Some more information about the system (an ab initio approach) would certainly help.

I started out with this method of differentiation in the first place to investigate and explore potential relationships based on differential equations. Using this method as a final fitting procedure is not always successful or advisable, because the differentiation amplifies the noise. So it is not a general method to solve the problem, and each problem has its own quirks requiring a different approach (this also makes the answer to the question about 'regress $x$ on $y$ instead of $y$ on $x$' not general and difficult to formulate).
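Separate from the full R fitting code below, the numerical route for the differential equation can be sketched in Python with scipy. The coefficients here are only the illustrative starting values that the nls fit uses, not fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative coefficients (the starting values used for the nls fit, not fitted values)
a, b, c, d = 0.026, 0.9, 0.0005, 1.88

def rhs(V, A):
    # right-hand side of the separable equation dA/dV = a(A-1)^b + c(A-1)^d
    return [a * (A[0] - 1.0) ** b + c * (A[0] - 1.0) ** d]

# integrate upward in voltage from the first data point (V = 25, A = 1.00252)
sol = solve_ivp(rhs, (25.0, 300.0), [1.00252], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])   # the amplification grows monotonically with voltage
```

With the superlinear second power term the solution eventually blows up at a finite voltage, which is the qualitative behavior the data show near the upper end of the range.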
code:

#### demonstrating fit based on differentiated data or differential equation ####
#### note that this is no production code,
#### so no checks of any sort and several hard-coded limits
library(deSolve)
library(optimr)

# we make two plots side by side
layout(matrix(c(1,2), 1, 2, byrow = TRUE))

# getting data (for simplicity of the example we just take the 91200913 serial number)
A <- c(1.00252,1.00452,1.00537,1.0056,1.00683,1.0069,1.00847,1.00935,1.01157,1.01418,1.01914,1.0247,1.02919,1.03511,1.04545,1.07362,1.11549,1.17123,1.25019,1.36276,1.5104,1.69862,1.9518,2.26756,2.66278,3.14247,3.73163,4.46152,5.36262,6.49514,7.9227,9.73803,12.0663,15.0943,19.1004,20.0563,21.0672,22.142,23.2867,24.5037,25.8102,27.2024,28.6916,30.2968,32.0181,33.8775,35.8937,38.0569,40.4069,42.9713,45.7766,48.8312,52.2068,55.916,60.0356,64.6109,69.7152,75.4698,82.0003,89.4222,97.9493,107.807,119.441,133.2,149.796,170.113,195.89,229.058,273.481,335.96,431.682,593.091,918.112,1903.74)
V <- c(24.9681,29.9591,34.9494,44.9372,49.9329,54.9625,59.9639,64.9641,69.965,74.9663,79.969,84.9719,89.974,94.9752,99.9759,109.974,119.969,129.96,139.96,149.963,159.958,169.959,179.963,189.957,199.97,209.971,219.971,229.973,239.966,249.962,259.971,269.971,279.97,289.968,299.959,301.968,303.967,305.966,307.965,309.955,311.963,313.963,315.961,317.962,319.956,321.951,323.961,325.962,327.963,329.965,331.966,333.959,335.97,337.972,339.973,341.974,343.967,345.97,347.978,349.98,351.98,353.971,355.983,357.983,359.983,361.975,363.989,365.989,367.984,369.979,371.992,373.994,375.985,377.999)

# numerical differentiation (the noise in this example is not high;
# alternatively one could use higher-order methods, e.g. a Savitzky-Golay filter)
dA <- A[-1]-A[-74]
dV <- V[-1]-V[-74]
mA <- 0.5*(A[-1]+A[-74])
mV <- 0.5*(V[-1]+V[-74])

# plotting the derivative dA/dV as a function of (A-1);
# subtracting 1 from A was done to get an asymptote at zero instead of one,
# which probably gives more interesting relations on a log-log scale
col <- hsv(0,0,0.5)
plot(mA-1, dA/dV, ylim=c(0.000001,2000), xlim=c(0.001,4000),
     pch=21, cex=0.5, col = col, bg = col,
     log="xy", xlab="amplification", ylab="d amplification / d Volt")
title("dA/dV as function of A \n data points and fit")

# the plot above looks very much like two separate straight lines,
# so let's try to fit a double power law
#
# in the nls fit we use a scaling by the amplification
# to give more weight to the lower values
# (this arbitrary scaling does introduce some subjectivity,
# but it is just a practical way to avoid the alternative of
# fitting logarithms, which requires the use of the 'port'
# algorithm and limits to prevent logs of negative values;
# we could do better, but we are just lazy for this demonstration)
fit <- nls(dA/dV ~ a*(mA-1)^b + c*(mA-1)^d,
           start = c(a=0.026, b=0.9, c=0.0005, d=1.88),
           weights = (dA/dV)^-1,
           control = nls.control(maxiter = 2000, minFactor = 10^-9))

# plotting
coefs <- coef(fit)
x <- 10^(c(-10:16)/3)
lines(x, coefs[1]*x^coefs[2] + coefs[3]*x^coefs[4], col = col)
text_string <- paste0("fitted line: dA/dV = ", round(coefs[1],5), "A^", round(coefs[2],2),
                      " + ", round(coefs[3],5), "A^", round(coefs[4],2))
text(10^-3, 10^-6, text_string, pos = 4)

# going back to the case A vs V
# - we will need to integrate the fit
#
# - note that the fit of dA/dV vs A was done to explore the relationship;
#   the differentiation is not a very robust operation (it is noisy), and usually
#   we would wish to obtain a symbolic integration of the differential equation
#   and use the result to relate A as a function of V, rather than dA/dV as a function of A;
#   but this involves a hypergeometric function 2F1, which happens to be difficult to fit;
#   also, the function is not that noisy at all, so the fit in the space (dA/dV, A) works well
#
# - in the end we add some computation with the optim library to optimize the fit in the
#   space A vs V, thus without performing differentiation
#   (or at least, under the hood the optim algorithm will do some differentiation of the loss
#   function to determine convergence, but we do not do the noisy differentiation dA/dV)

# plotting data points
plot(V, A, xlim=c(0,400), ylim=c(1,4000), log="y",
     pch=21, cex=0.7, col = col, bg = col,
     xlab="Voltage", ylab="Amplification")
title("A as function of V \n data points and two fits")
legend(0, 4000, c("data points","fit dA/dV vs A","fit A vs V"),
       col=c(8,4,2), pt.bg = c(col,0,0), lty=c(NA,2,2), pch=c(21,NA,NA))

# differential equation for use with deSolve
fitje <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dV <- -1
    dA <- -(a*(A-1)^b + c*(A-1)^d)
    list(c(dV, dA))
  })
}

# integration using deSolve (we put this in a function for easier use with optim)
integraal <- function(coefs) {
  parameters <- coefs
  state <- c(V = max(V), A = max(A))
  times <- seq(0, 1900, by = 0.1)
  out <- ode(y = state, times = times, func = fitje, parms = parameters)
  out
}
out <- integraal(coefs)

# plot the deSolve result
lines(out[,"V"], out[,"A"], col = 4, lty = 2)

# solving the fit in the space (V, A) by putting the deSolve function into optim
# (this will potentially take some time and is only recommended if we cannot
# find an analytical function to put into optim/nls/nlmer/etc.)
f <- function(coefs) {
  A_o <- A
  model <- integraal(coefs)
  A_m <- approx(x = model[,"V"], y = model[,"A"], xout = V)$y
  sum((A_o - A_m)^2)
}
f(coefs)
# fingers crossed that it won't diverge to NAs due to getting out of the range
# of the approx function;
# note that using the coefficients from the previous fit helps the convergence;
# performing this method without a good initial guess may end badly
model2 <- optim(coefs, f)

# plotting
coefs2 <- model2$par
out2 <- integraal(coefs2)
lines(out2[,"V"], out2[,"A"], col = 2, lty = 2)

Image generated by this code:
47,750
Proposal distribution in Hamiltonian Monte Carlo
The proposal distribution for the original Hamiltonian Monte Carlo algorithm is just a delta function around the final point in the numerical trajectory with the momentum negated, $$K(z' | z) = \delta \, (z' - R(\Phi_{\epsilon, L}(z))), $$ where $z = (q, p)$ is a point on phase space, $\Phi_{\epsilon, L}(z)$ is the action of the numerical integrator, and $R$ is the negation operator that flips the sign of the momentum, $$R(q, p) = (q, -p).$$ Importantly, the original Hamiltonian Monte Carlo algorithm cannot be interpreted as a Metropolis-Hastings algorithm on the target parameter space and so there is no proposal distribution $K(q'|q)$. The Metropolis-Hastings acceptance procedure has to be done on the extended phase space that includes the auxiliary momenta in addition to the target parameters. The overall procedure of sampling momenta to generate a point in phase space, numerically integrating a trajectory, accepting or rejecting the final point in phase space, and then throwing the momenta away to recover a new parameter value, however, defines a Markov kernel on the target parameter space. Tierney (https://projecteuclid.org/euclid.aoap/1027961031) discusses some of the formal details of working with delta function proposals for Metropolis-Hastings.
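A minimal sketch of one such transition (in Python, for a one-dimensional standard-normal target with unit mass matrix; the step size and number of leapfrog steps are arbitrary choices) makes the structure explicit: the accept/reject step compares the Hamiltonian at the two ends of the trajectory on phase space, and the momentum is discarded afterwards.

```python
import numpy as np

rng = np.random.default_rng(1)

def U(q):            # potential energy: minus log density of a standard-normal target
    return 0.5 * q ** 2

def grad_U(q):
    return q

def hmc_step(q, eps=0.1, L=20):
    """One HMC transition: sample momentum, run leapfrog, accept/reject on phase space."""
    p = rng.normal()                        # fresh auxiliary momentum (unit mass)
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)      # leapfrog: initial half step for momentum
    for i in range(L):
        q_new += eps * p_new                # full step for position
        if i < L - 1:
            p_new -= eps * grad_U(q_new)    # full step for momentum
    p_new -= 0.5 * eps * grad_U(q_new)      # final half step for momentum
    p_new = -p_new                          # the negation map R
    H_old = U(q) + 0.5 * p ** 2             # Hamiltonian at the start of the trajectory
    H_new = U(q_new) + 0.5 * p_new ** 2     # ... and at the proposed end point
    if rng.uniform() < np.exp(H_old - H_new):
        return q_new                        # accept; the momentum is thrown away
    return q                                # reject

q, draws = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    draws.append(q)
draws = np.array(draws)
print(draws.mean(), draws.std())   # should be near 0 and 1 for the standard normal
```

Only the position is recorded, which is exactly the "throw the momenta away" step that turns the phase-space kernel into a kernel on the target parameter space.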
47,751
Proposal distribution in Hamiltonian Monte Carlo
The proposal distribution in Hamiltonian Monte Carlo does not have an explicit form in general. Instead, samples from it are defined operationally: first sample an initial velocity and then move the position using a number of leap-frog steps. The final position is a sample from the proposal distribution.
47,752
Uniform distribution on $\mathbb{Q} \cap [0, 1]$ (sort of) [duplicate]
No, it is not possible. If such a random variable existed, we would have $\Pr(X=q)=0$ for every $q \in \mathbb{Q} \cap [0,1]$, because we can write the singleton $\{q\}$ as a decreasing intersection of intervals $[a_n, b_n]$ whose length $b_n-a_n$ goes to $0$, and then $\Pr(X = q) = \lim \Pr(X \in [a_n,b_n])=0$. Now, the set $\mathbb{Q} \cap [0,1]$ is countable; that is, it can be written as a sequence ${\{q_i\}}_{i \in \mathbb{N}}$. By countable additivity we should have $\sum_{i \geq 0} \Pr(X=q_i)=\Pr(X \in \mathbb{Q} \cap [0,1])=1$, and so we would get $0=1$.
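The first step of the argument can be made concrete (a small Python sketch, using the continuous uniform law on $[0,1]$ as the example of interval probabilities): the probability of a shrinking interval around a fixed rational tends to $0$.

```python
from fractions import Fraction

q = Fraction(1, 3)                  # a fixed rational q in [0, 1]
for n in range(1, 7):
    a = max(q - Fraction(1, 10 ** n), Fraction(0))
    b = min(q + Fraction(1, 10 ** n), Fraction(1))
    # under the uniform law on [0, 1], P(X in [a, b]) equals the interval length
    prob = b - a
    print(n, float(prob))           # 2 / 10^n, tending to 0
```

Since every singleton gets probability $0$ in the limit, countable additivity over an enumeration of the rationals forces the contradiction in the answer.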
47,753
Why in Box-Cox method we try to make x and y normally distributed, but that's not an assumption for linear regression?
In reality the Box-Cox transformation finds a transformation that homogenizes variance. And constant variance is really an important assumption! The comment of @whuber: The Box-Cox transform is a data transformation (usually for positive data) defined by $Y^{(\lambda)}= \frac{y^\lambda - 1}{\lambda}$ (when $\lambda\not=0$, and its limit $\log y$ when $\lambda=0$). This transform can be used in different ways, and the Box-Cox method usually refers to likelihood estimation of the transform parameter $\lambda$. $\lambda$ could potentially be chosen in other ways, but this post (and the question) is about this likelihood method of choosing $\lambda$.

What happens is that the Box-Cox transform maximizes a likelihood function constructed from a constant-variance normal model, and the main contribution to maximizing that likelihood comes from homogenizing the variance! ( * ) You could construct a similar likelihood function from some other location-scale family (for example, one constructed from $t_{10}$, say) together with the constant variance assumption, and it would give similar results. Or you could construct a Box-Cox-like criterion function from robust regression, again with constant variance. It would give similar results. (Eventually, I want to come back here and show this with some code.)

( * ) This shouldn't really be surprising. By drawing a few figures you can convince yourself that changing the scale of a density is a much larger change, influencing density values (that is, likelihood values) much more, than just changing the basic form a little while keeping the scale. I once built (with Xlispstat) a slider demonstration showing this convincingly, but you can simply make some simple examples and see this result for yourself. What happens is that the contribution to the likelihood function from the constant variance assumption greatly overshadows changes to the likelihood from small changes to the form of the basic density $f_0$ used to generate the location-scale family.
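This is easy to check numerically (a Python sketch with scipy; the data are simulated for illustration): for lognormal data, the likelihood-maximizing $\lambda$ comes out near $0$, i.e. the log transform, which is precisely the transform that makes the scale constant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
y = np.exp(rng.normal(0.0, 1.0, 2000))   # lognormal sample: log(y) is N(0, 1)

# scipy's boxcox maximizes the Box-Cox log-likelihood over lambda
y_bc, lam = stats.boxcox(y)
print(lam)   # close to 0, i.e. the log transform is selected
```

The same experiment with, say, squared-normal data would pick $\lambda$ near $1/2$: the estimate tracks whatever power homogenizes the scale.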
47,754
Why in Box-Cox method we try to make x and y normally distributed, but that's not an assumption for linear regression?
I'm assuming you're referring to Box-Cox normality plots by "method" in your question. It is true that the normality assumption in OLS is not required for the method to be useful; for instance, regardless of the error distribution it will produce coefficients that are unbiased under certain other conditions. With that said, the normality assumption is not useless. For instance, in small samples without the normality assumption you can't say much about the probability distribution of the coefficients beyond the variance and covariance; with the normality assumption you can estimate this probability distribution. In large samples, under certain conditions, you can do this without the normality assumption using the central limit theorem. The normality assumption also makes maximum likelihood estimation (MLE) produce the same coefficients as OLS, and the estimators share many properties in (again) small samples. Finally, many people use the Box-Cox transformation not to normalize the data, but to stabilize the variance. Sometimes variance increases at larger levels of the dependent variable; in this case the Box-Cox transformation can help make the variance uniform across the sample. This is related to the assumption of homoscedasticity in OLS.
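The last point can be sketched with simulated data (Python; the data-generating process here is invented for illustration): when the noise is multiplicative, the residual spread grows with the level on the raw scale but is constant after a log transform (Box-Cox with $\lambda = 0$).

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(1.0, 10.0, 5000)
y = np.exp(0.5 * x + rng.normal(0.0, 0.2, 5000))   # multiplicative noise around exp(0.5 x)

low, high = x < 5.5, x >= 5.5
# raw scale: residual spread depends strongly on the level
sd_raw_low = np.std(y[low] - np.exp(0.5 * x[low]))
sd_raw_high = np.std(y[high] - np.exp(0.5 * x[high]))
# log scale: the spread is the same everywhere (about 0.2)
sd_log_low = np.std(np.log(y[low]) - 0.5 * x[low])
sd_log_high = np.std(np.log(y[high]) - 0.5 * x[high])
print(sd_raw_low, sd_raw_high)   # very unequal
print(sd_log_low, sd_log_high)   # both close to 0.2
```

On the raw scale the upper half of the data has many times the residual spread of the lower half; after the transform the two halves agree, which is the homoscedasticity that OLS assumes.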
47,755
Why in Box-Cox method we try to make x and y normally distributed, but that's not an assumption for linear regression?
Sorry that my question is a bit unorganized, but one of my questions (and the most confusing part) is why we want our predictor and response variables to be symmetric or normally distributed. And after dwelling on this for two days, I think I've now got the answer. Here is what I found most useful: https://stats.stackexchange.com/a/123252/161581 The core idea is: the log-or-power-transformed, more normally distributed variables are more likely to fulfill linear regression's assumptions, particularly linearity, homoscedasticity, and normally distributed residuals. As for the reason, the quoted picture in my question can answer for the linearity part. Or as @Penguin_Knight said, a skewed independent variable would have some data points with very high leverage, potentially able to bias the regression slope. For the others, in the link above, there are two pictures (which I copied below) that show how transformation can help make the variance of the errors more like a constant and give the residual plot a better look (i.e. no discernible pattern).
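As a rough sketch of how a Box-Cox transformation pulls a right-skewed variable toward symmetry, using `scipy.stats.boxcox` (which picks the power $\lambda$ by maximum likelihood). The sample values below are made up:

```python
from scipy import stats

# A strongly right-skewed, hypothetical variable: each value doubles the last.
x = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0]
print(stats.skew(x))        # clearly positive (right-skewed)

# boxcox returns the transformed data and the MLE of lambda.
x_bc, lam = stats.boxcox(x)
print(lam)                  # near 0, i.e. close to a plain log transform
print(stats.skew(x_bc))     # much closer to 0: roughly symmetric
```

For geometrically growing data like this, the log transform makes the values evenly spaced, which is why the estimated $\lambda$ lands near zero.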
47,756
Connecting scatter plots with linear interpolation?
I'll phrase this, following your example, in terms of plotting with time on the $x$ or horizontal axis: that is pleasantly easy to imagine and discuss and (I guess) the most common example of this issue in practice. Translation to other variables on that axis, such as position in space, seems straightforward (until someone points out complications). What seems tacit in your question is that at most one value is possible at any given time. That is, I am setting aside any kind of summary or smoothing issue producing some kind of summary{$y|x$} of possibly several values of $y$ at any one $x$. There is a difference in principle between #1 measurements at all possible times (e.g. class attendance at every class) and #2 measurements at some possible times (e.g. temperature or pressure measurements at certain times), so that other values could have been observed in between; they are just not included in the data. For #1 straight-line connections are, I suggest, primarily for psychological support to aid mental grasp of a series as a whole, both general patterns (e.g. trends) and particular details (e.g. spikes). This only needs much discussion or defence if it is not clear to readers, or someone objects to it as meaningless. As values in between observed times aren't defined, there is no sense in which a connecting line has the purpose of interpolation. (Commonly line connection is a graphical option or setting and doesn't require any call to an interpolation routine, but which part of one's software is used is an implementation detail.) Personally I dislike nonlinear connection for such cases (rounded curves in terms of the question), but I imagine the defence for it as also that it is a psychological prop. I find it, uninvited, in student reports and gather that it's provided somehow in MS Excel. But wrong seems too strong a word.
If the objection is that there is no evidence to support the picture of nonlinear change between known data points, then the same objection applies to linear change as well. So, the question seems more one of aesthetic preference or an appeal to simplicity, rather than a strongly statistical or scientific argument. Whatever the grounds for the nonlinear connection, I would suggest that it is good practice to explain it in papers, reports and books, at least qualitatively, say by mentioning spline or polynomial interpolation or some other method. It is disturbing whenever researchers cannot explain how a curve was produced. In contrast, linear connections do not seem to need any comment, as being familiar from early puzzle books and schooling. For #2 interpolation makes sense, as we have defined this to be the situation where further observations between data are possible. Even if it is decided that linear interpolation is as much as you want to consider, that is as much a matter of taste as anything else. More interestingly, what also can make sense is to interpolate using more than the data points on either side, in which case the interpolation need not be linear. (Again, this can grade into smoothing.) But much depends on the details. First example: population growth of most countries can be treated as continuous in magnitude and in time (forgetting what we do know about people being discrete). As such, it can be interpolated between census or estimate dates, but even then seasonality could be a detail. A sensible interpolation will often -- indeed usually -- not be linear at all. Second example: if air or ground temperature is measured only once per day (as was customary until well into the 20th century) then finer interpolation is possible in principle because temperature is essentially continuously varying, but in practice interpolation within days would be pointless unless it reflected daily heating and cooling. 
I am troubled by possible confusion in the question on what is linear and what is monotone, as locally linear interpolation is necessarily locally monotone too, and the curves you show have turning points. There are attractive interpolation methods that combine cubic spline behaviour with respect for maxima and minima in the raw data. The results are monotone between given turning points, but not necessarily linear. In MATLAB terms, pchip (piecewise cubic Hermite interpolation), explained very well by Cleve Moler e.g. here, is one such method. I'll report that Moler's MATLAB code is very portable, even if like me you don't use MATLAB.
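A small sketch of such shape-preserving interpolation, using SciPy's `PchipInterpolator` (the same piecewise cubic Hermite method as MATLAB's pchip). The data points here are made up:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Made-up data: rising to a turning point at x = 2, then falling.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 2.0, 3.0, 1.0, 0.5])

p = PchipInterpolator(x, y)

# The interpolant passes through the data exactly ...
print(p(x))
# ... and between consecutive data points it is monotone, so on [0, 2]
# it never overshoots the local maximum of 3 the way an unconstrained
# cubic spline can.
xs = np.linspace(0.0, 2.0, 201)
print(float(p(xs).max()))
```

The result is smooth, respects the turning point in the raw data, and is monotone between given data points, but not linear.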
47,757
Strange variance weights for Poisson GLM for square root link
The weights in the glm function are $$ w_i = \left.\frac{(\partial \mu_i/\partial\eta_i)^2}{\text{var}(\mu_i)}\right|_{\mu_i=h(\eta_i) = \eta_i^2} $$ So if $\mu_i = \eta_i^2$ and you recall that $\text{var}(\mu_i)=\mu_i$ for the Poisson family, then $\partial \mu_i/\partial\eta_i = 2\eta_i$, so $w_i = (2\eta_i)^2/\eta_i^2 = 4$ for every observation. This is exactly what you get from glm > counts <- c(18,17,15,20,10,20,25,13,12) > outcome <- gl(3,1,9) > treatment <- gl(3,3) > glm.D93 <- glm(counts ~ outcome + treatment, family = poisson(link = "sqrt")) > > glm.D93$weights 1 2 3 4 5 6 7 8 9 4 4 4 4 4 4 4 4 4 On the other hand, the log-link function has $\partial \mu_i/\partial\eta_i = \exp(\eta_i) = \mu_i$, so $w_i = \mu_i^2/\mu_i = \mu_i$ and thus you get different weights for different observations.
47,758
Standard Deviation in Neural Network Regression
First of all, you want standard deviations that say something about test errors, not training errors. There are different approaches to the problem. Ensembling/bootstrapping: make multiple different splits of your training data and get out-of-bag estimates for each split. Then for each observation calculate the standard error of the prediction. For test data just calculate the standard errors across all the estimates. The mean of the estimates is, by the way, a better model than any single model, so this can also improve your prediction error. Modeling the standard error directly: make a neural network that outputs the (log of the) standard error of the prediction given an input, trained on your validation errors. This is then optimized using MLE and should be pretty straightforward. Just train a network with the following objective, with the residuals as the targets (logsigma(X) is a neural network outputting a scalar from negative infinity to infinity, interpreted as $\log\sigma$): $$\text{obj}=\sum_i\left(-\texttt{logsigma}(X_i)-\frac{\text{residual}_i^2}{2\exp\!\big(2\,\texttt{logsigma}(X_i)\big)}\right)$$ This is the Gaussian log-likelihood of the residuals, up to an additive constant. Pros and cons: ensembling/bootstrapping is well tested, but it gives you the standard deviation of the estimate, not the expected standard error of the observation. Modeling the standard error directly is not as well accepted, but it gives you unbiased estimates of the standard error of your residuals for each observation. If you have the time and the courage I would try the latter. You can of course do both: make an ensemble of models and train a neural network on the out-of-bag residuals.
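A minimal sketch of that objective as a loss function, in plain Python, with `logsigmas` standing in for the network's outputs (the numbers below are made up):

```python
import math

def gaussian_nll(residuals, logsigmas):
    """Negative of the objective above: the Gaussian negative log-likelihood
    of the residuals (up to the constant 0.5*log(2*pi) per observation),
    where the network predicts log(sigma) for each input."""
    return sum(s + r ** 2 / (2.0 * math.exp(2.0 * s))
               for r, s in zip(residuals, logsigmas))

# With residual r = 2 and logsigma = 0 (i.e. sigma = 1),
# the per-observation loss is 0 + 4/2 = 2.
print(gaussian_nll([2.0], [0.0]))   # 2.0
```

Minimizing this loss over `logsigma` for a fixed residual pushes the predicted $\sigma$ toward $|r|$, which is what makes the network's output an estimate of the residual's standard error.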
47,759
Overfitting in polynomial regression and other concerns
There is no such rule about specific order polynomials which is agnostic to your dataset. If any such rule existed, I would expect it to be a function of your data or your data generating process - without knowing something about that, it's hard to say. Without saying anything about specific order polynomials, your general statement is right - larger order polynomials are more likely to overfit. As the order of the polynomial increases, so does the variance of the estimator. Yes, this is a common issue with higher order polynomials. It is similar in spirit to Runge's phenomenon. The common solutions are to find the best order via cross-validation (grid search), or by controlling the size of the coefficients with regularization.
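A small sketch of the variance point using `numpy.polyfit`, with a made-up data-generating process (a quadratic plus noise). Because the polynomial models are nested least-squares fits, training error can only go down as the degree grows, which is exactly why training error alone cannot choose the order and cross-validation or regularization is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 12)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(0.0, 0.3, size=x.size)

def train_rmse(degree):
    """RMSE of a degree-`degree` least-squares polynomial fit, on the training data."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))

# Training error is non-increasing in the degree, so a degree-9 polynomial
# always looks at least as good *on the training data* as a degree-2 one.
print([round(train_rmse(d), 4) for d in (1, 2, 9)])
```

On held-out data the ordering typically reverses for high degrees; a grid search over the degree against cross-validated error picks the order honestly.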
47,760
How to use hyper-geometric test
You can look at Wikipedia: The hypergeometric test uses the hypergeometric distribution to measure the statistical significance of having drawn a sample consisting of a specific number of k successes (out of n total draws) from a population of size N containing K successes. In a test for over-representation of successes in the sample, the hypergeometric p-value is calculated as the probability of randomly drawing k or more successes from the population in n total draws. In a test for under-representation, the p-value is the probability of randomly drawing k or fewer successes. The null hypothesis here is that $u$, that is, the probability of the gene under radiation, is equal to $p$, the probability of the gene without radiation. While I would think that $p$ has to be considered uncertain here as well, here a shortcut is taken by estimating $p$ directly from the sample, as $\frac{A}{A+B}$. It is possible that this is equivalent. The formulas give the probability under the null hypothesis (where $p = \frac{A}{A + B}$) that you would see $C$ or more bacteria with the gene out of $C + D$ samples. So that is the $p$-value. That is the answer. Since you are thinking about $p$-values, maybe you can read a couple of blog posts by Andrew Gelman; I think it is a good idea to be a bit sceptical about this hypothesis-testing framework.
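A minimal sketch of that upper-tail p-value, computed directly from the hypergeometric probability mass function with Python's standard library (`math.comb`). The counts below are made up for illustration, not taken from the question:

```python
from math import comb

def hypergeom_pvalue_upper(k, N, K, n):
    """P(X >= k) when drawing n items without replacement from a
    population of N items of which K are 'successes'."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical 2x2 table: A = 5, B = 5 in the control (so N = 10, K = 5),
# and we observed C = 3 successes in n = 4 draws. The upper-tail p-value is
# [C(5,3)C(5,1) + C(5,4)C(5,0)] / C(10,4) = 55/210.
print(hypergeom_pvalue_upper(3, 10, 5, 4))   # 0.2619...
```

This is the same quantity Fisher's exact test computes for the over-representation alternative.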
47,761
Why does skew and heteroscedasticity lead to bias?
In this context "biased" essentially means "disproportionately influenced by". RMSE, a commonly used error metric, is the square root of the average squared error across all predictions, so large absolute errors dominate it. In this case if your model were to predict with 90% accuracy the value of a \$100,000 home and a \$1,000,000 home respectively, you would be penalized 100x more for the 10% prediction error in the \$1,000,000 home, because $$\frac{(1,000,000 - 900,000)^2}{(100,000 - 90,000)^2} = \frac{100,000^2}{10,000^2} = 100$$ Zillow gets around this problem by using a different type of error metric which penalizes errors in proportion to the magnitude of the prediction, so that in the above example both 10% errors would be penalized to the same extent. This problem can be caused by right skew alone (presence of very high-priced homes) or heteroscedasticity alone (more variability in the high-priced homes), but the two together make the problem even worse. The problem is fundamentally related to heteroscedasticity (the higher the price, the more variability there is), and significantly right-skewed response variables frequently exhibit some amount of heteroscedasticity with respect to the explanatory variable(s). EDIT in response to Ashtray's edit: I have altered my response to qualify the statement that right skew alone can bias the model. If the response variable is right-skewed but the samples are perfectly homoscedastic with respect to the independent variable(s), then this issue with RMSE does not arise, because the variance (and therefore the error) will be the same at all scales. However, in almost any real-world scenario where the response variable is dramatically right-skewed, the samples will also exhibit some amount of heteroscedasticity. As for your question about whether this is guaranteed to irreparably bias a model designed to penalize RMSE, the answer is that it won't, necessarily.
However, the more important question is: even in the case where you have enough cheap-home data to compensate for the more expensive homes, are you building a model that actually predicts what you want? By using RMSE, you're implicitly saying that you care as much about a \$10,000 difference in the price of a \$100,000 home as you do in the price of a \$1,000,000 home, or even a \$100,000,000 home, which is almost certainly not true. Hopefully I've managed to answer all of your lingering questions.
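The arithmetic above in code form (plain Python; the two hypothetical homes are the ones from the example):

```python
actual    = [100_000.0, 1_000_000.0]
predicted = [90_000.0, 900_000.0]   # both predictions are off by 10%

squared_errors  = [(a - p) ** 2 for a, p in zip(actual, predicted)]
relative_errors = [abs(a - p) / a for a, p in zip(actual, predicted)]

# Under squared error the expensive home is penalized 100x more ...
print(squared_errors[1] / squared_errors[0])   # 100.0
# ... while a relative (percentage) error treats the two mistakes equally.
print(relative_errors)                         # [0.1, 0.1]
```

This is the sense in which a metric based on squared error is "biased" toward the expensive tail of a right-skewed price distribution, while a proportional metric is not.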
47,762
When and why do we use sparse coding?
Parsimony. Sparse representations of a signal are easier to describe because they're short and highlight the essential features. This can be helpful if one wants to understand the signal, the process that generated it, or other systems that interact with it. Denoising. In this context, the measured signal is a mixture of some underlying/true signal and noise. The goal is to remove the noise. If the underlying signal is sparse in some basis (which is often the case for interesting signals) and the noise is not (e.g. white noise), then denoising can be done by constructing a sparse approximation of the measured signal. Data compression The goal here is to store the signal, transmit it over a communication channel, or perform further processing on it. These operations require memory, communication, and computational resources that scale with the size of the signal. Sparse coding can be used to compress a set of signals, reducing the resources needed. Compressed sensing The goal here is to measure signals efficiently by exploiting knowledge about their structure. This allows more efficient storage and transmission, and may also allow measurements to be made more quickly. Typically, specialized hardware is involved. If a signal is known to be sparse in some basis, it's possible to acquire it using fewer measurements than would otherwise be necessary. The original signal can then be reconstructed from the reduced set of measurements. Sometimes a class of signals is known a priori to be sparse in a particular basis. For example, natural images are sparse in the wavelet basis. In this case, the known basis can be used to design the measurement and reconstruction procedure. But, if the basis isn't known, it can be learned from a set of example signals using sparse coding (aka dictionary learning).
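A tiny sketch of the denoising idea: a made-up signal that is sparse in the Fourier basis (one sine) plus a small high-frequency component standing in for noise. Hard-thresholding the transform coefficients keeps the sparse signal and drops the rest. (This uses a fixed orthogonal basis rather than a learned dictionary, but the principle is the same.)

```python
import numpy as np

n = 128
t = np.arange(n) / n
clean = 3.0 * np.sin(2 * np.pi * 5 * t)       # sparse in the Fourier basis
noise = 0.05 * np.cos(2 * np.pi * 40 * t)     # stand-in for noise
signal = clean + noise

# Sparse approximation: keep only the large Fourier coefficients.
coeffs = np.fft.rfft(signal)
coeffs[np.abs(coeffs) < 10.0] = 0.0           # hard threshold
denoised = np.fft.irfft(coeffs, n)

print(np.count_nonzero(coeffs))               # 1 surviving coefficient
print(float(np.max(np.abs(denoised - clean))))  # tiny: the noise is gone
```

Dictionary learning generalizes this by learning the basis from example signals instead of fixing it in advance.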
When and why do we use sparse coding?
When and why do we use sparse coding?

Parsimony. Sparse representations of a signal are easier to describe because they're short and highlight the essential features. This can be helpful if one wants to understand the signal, the process that generated it, or other systems that interact with it.

Denoising. In this context, the measured signal is a mixture of some underlying/true signal and noise. The goal is to remove the noise. If the underlying signal is sparse in some basis (which is often the case for interesting signals) and the noise is not (e.g. white noise), then denoising can be done by constructing a sparse approximation of the measured signal.

Data compression. The goal here is to store the signal, transmit it over a communication channel, or perform further processing on it. These operations require memory, communication, and computational resources that scale with the size of the signal. Sparse coding can be used to compress a set of signals, reducing the resources needed.

Compressed sensing. The goal here is to measure signals efficiently by exploiting knowledge about their structure. This allows more efficient storage and transmission, and may also allow measurements to be made more quickly. Typically, specialized hardware is involved. If a signal is known to be sparse in some basis, it's possible to acquire it using fewer measurements than would otherwise be necessary. The original signal can then be reconstructed from the reduced set of measurements. Sometimes a class of signals is known a priori to be sparse in a particular basis. For example, natural images are sparse in the wavelet basis. In this case, the known basis can be used to design the measurement and reconstruction procedure. But, if the basis isn't known, it can be learned from a set of example signals using sparse coding (aka dictionary learning).
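As a rough sketch of the denoising idea (not from the original answer; the signal, noise level, and choice of k below are made up for illustration): a signal that is sparse in the Fourier basis can be denoised by keeping only its largest-magnitude transform coefficients and zeroing the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n

# Underlying signal: sparse in the Fourier basis (only 3 active frequencies).
clean = (np.sin(2 * np.pi * 7 * t)
         + 0.5 * np.sin(2 * np.pi * 31 * t)
         + 0.25 * np.sin(2 * np.pi * 101 * t))
noisy = clean + 0.5 * rng.standard_normal(n)

# Sparse approximation: keep only the k largest-magnitude coefficients.
coeffs = np.fft.rfft(noisy)
k = 6  # a few more than the 3 active frequency bins, for margin
small = np.argsort(np.abs(coeffs))[:-k]
coeffs[small] = 0.0
denoised = np.fft.irfft(coeffs, n)

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)  # the sparse approximation is closer to the true signal
```

The white noise spreads its energy over all 513 frequency bins, so discarding all but 6 bins removes most of it while keeping the (sparse) signal.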
When and why do we use sparse coding?
"In numerical analysis and computer science, a sparse matrix or sparse array is a matrix in which most of the elements are zero." (Wikipedia) Some datasets have instances with a large number of attributes. Such a dataset can be thought of as a sparse matrix if most of the recorded attributes are zero. In that scenario we could have a very large file containing the dataset without an equivalent amount of "information." One way to reduce the size of the dataset files without losing any information is to use a sparse file format. For example, an ARFF file can be stored in either dense or sparse format. From Weka's documentation, the header information is the same between the two formats; the difference is in how the instances are represented. The instances in the dense representation look like this:

0, X, 0, Y, "class A"
0, 0, W, 0, "class B"

While a sparse representation of the same instances looks like this:

{1 X, 3 Y, 4 "class A"}
{2 W, 4 "class B"}

It can be seen that the first instance, where most of the attributes are nonzero, becomes longer in the sparse representation. The second instance, however, has mostly zeros as attributes and is represented more efficiently. If most of your dataset is like the second instance - if the dataset is a sparse matrix - then a sparse file format can reduce the size of the stored dataset with no loss of information.
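This index→value idea can be mimicked in a few lines. A minimal sketch (the helper names are mine, not Weka's), representing each instance as a map of its nonzero attributes:

```python
def to_sparse(instance):
    """ARFF-style sparse form: keep only the nonzero attributes, keyed by index."""
    return {i: v for i, v in enumerate(instance) if v != 0}

def to_dense(sparse_inst, n_attrs):
    """Reconstruct the dense instance; missing indices are implicitly zero."""
    return [sparse_inst.get(i, 0) for i in range(n_attrs)]

dense_a = [0, "X", 0, "Y", "class A"]
dense_b = [0, 0, "W", 0, "class B"]

print(to_sparse(dense_a))  # {1: 'X', 3: 'Y', 4: 'class A'}
print(to_sparse(dense_b))  # {2: 'W', 4: 'class B'}

# Lossless round trip: the sparse form drops no information.
assert to_dense(to_sparse(dense_b), 5) == dense_b
```

As with the ARFF example, the mostly-zero instance gets shorter while the mostly-nonzero one gets longer.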
Does correlation correlate with causation?
The sentence "correlation does not imply causation" is usually understood much more broadly than it should be. If two variables A and B are highly correlated, then something is causing something else. You just cannot conclude that A causes B, because there are a number of other possibilities:

B causes A
A and B are both caused by C
Only if D is given does A cause B
A causes E, which in turn causes B
(A type I error occurred and the correlation is not significant, but this one is dealt with by talking about strong correlations.)
Does correlation correlate with causation?
Does correlation (any statistical association) correlate with (relate to) causation? This question (the words in parentheses are mine) is quite general, and essentially the answer seems to me to be yes. The causal-inference framework in statistics exists precisely because the answer to that question is yes. However, in order to give a more useful answer we need a better-specified question. The following question seems to me the best: Does causation imply correlation? That question already exists on this site and several interesting answers are given. Read here: Does causation imply correlation? Other related questions are: Under what conditions does correlation imply causation? Statistics and causal inference? Regression and causality in econometrics
Does correlation correlate with causation?
Correlation is a special type of association, and association is different from causation, which can only be inferred from a randomized experiment (the reason being confounders). References: 1. Association and Correlation 2. OpenIntro Statistics
Intuitive explanation of the relationship between standard error of model coefficient and residual variance
When you simulate the data, you know the population coefficients, because you chose them. But if I simulate the data and only give you the data, you don't know the population coefficients. You only have the data - just as it is with real data. When you look at data that has noise about a linear relationship, there's a variety of population lines that are consistent with the data -- lines that could reasonably have produced that data: The three marked lines are each plausible population lines -- the observed data might fairly easily have resulted from any of those lines (as well as an infinity of other lines near to those). But if we reduce the standard deviation of the error term: then the lines that could plausibly have produced that data have a much smaller range of slopes and intercepts; while all the lines consistent with the second set of data could have produced the first set of data, there are lines that could easily have produced the first set of data that would be relatively implausible for the second set of data. Literally then, for the second set of data, you have less uncertainty about where the population line might be. Or look at it this way: if I simulate 50 samples like the left hand (grey) points (all with the same coefficients and with the larger $\sigma$), then the coefficients of the fitted regression lines will vary from sample to sample. If I then do the same with the smaller $\sigma$, they vary correspondingly less. Here we plot slope vs intercept for each of 50 samples of size 100, for large and small $\sigma$: and indeed we see that the second set of points (fitted coefficients) vary much less. If you do this with many such samples, it turns out that the typical distance of the points from the center in any direction is proportional to $\sigma$. How does larger spread of $x$'s make the standard error smaller? 
Consider these two plots, where I have split my larger-noise sample into points close to the x-mean and points further away (which makes the standard deviations of the x's relatively small and large), and consider just the slope for now. Looking at the subset of points from the center half of this large-$\sigma$ data set, we can see that there's a wider range of slopes that might have produced that data than could reasonably have produced the outer half of the values -- the spread of points about the population line is relatively wider compared to the spread of the x's, so there's more "wiggle room" for the slope when the x-spread is narrow. Specifically, the two red lines are quite consistent with the middle half of the points but are not consistent with the outer half (it is relatively much less likely that either line could have produced the points in the right-side plot).
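The repeated-sampling argument can be checked numerically. A rough numpy sketch (my own coefficients, sample sizes, and x-ranges, not the ones behind the original plots): simulate many samples, fit a line to each, and compare how much the fitted slopes vary.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitted_slopes(sigma, x, n_sims=500):
    """Fit OLS to repeated simulations of y = 2 + 3x + noise; return the slopes."""
    slopes = []
    for _ in range(n_sims):
        y = 2 + 3 * x + sigma * rng.standard_normal(x.size)
        slope, _ = np.polyfit(x, y, 1)
        slopes.append(slope)
    return np.array(slopes)

x = rng.uniform(0, 10, 100)

sd_large_sigma = fitted_slopes(sigma=4.0, x=x).std()
sd_small_sigma = fitted_slopes(sigma=1.0, x=x).std()
print(sd_small_sigma < sd_large_sigma)  # smaller error sd -> slopes vary less

# A wider x-spread also shrinks the slope's sampling variability.
x_wide = rng.uniform(0, 20, 100)
sd_wide_x = fitted_slopes(sigma=4.0, x=x_wide).std()
print(sd_wide_x < sd_large_sigma)
```

Both comparisons come out as the answer describes: the slope estimates scatter in proportion to $\sigma$ and inversely with the spread of the x's.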
Difference between R² and Chi-Square
Found this after a quick google: "R^2 is used to quantify the amount of variability in the data that is explained by your model. It's useful for comparing the fits of different models. The Chi-square goodness of fit test is used to test if your data follows a particular distribution. It's more useful for testing model assumptions rather than comparing models." It sounds like Chi-square is more useful if you have a function you are trying to test (or a distribution you are trying to fit to your data), as opposed to R^2, which tells you how much of the variability in your data the model explains, and therefore how well the model fits.
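As a small worked example of the R^2 side (my own toy data, not from the quoted source), computed directly from its definition R^2 = 1 - SS_res/SS_tot:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.5 * x + 2 + rng.standard_normal(50)  # linear signal plus noise

# Fit a line and compute R^2 = 1 - SS_res / SS_tot
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(r2)  # close to 1: the line explains most of the variability
```

With the noise variance small relative to the signal's spread, R^2 comes out high; a chi-square goodness-of-fit test, by contrast, would compare observed and expected counts under a hypothesized distribution.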
Difference between R² and Chi-Square
Chi^2 provides a per-feature measurement of dependency with the target. This is useful at the feature-selection stage, for a classification model. We'd like to weed out the low-dependent features. (scikit-learn guide for additional such measurements for classification and regression models). R^2 provides a model-level measurement of the target's variance explained. This is useful at the model-evaluation stage. (scikit-learn guide for R^2)
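The per-feature dependency measurement can be illustrated without scikit-learn by computing the Pearson chi-square statistic on a feature-by-class contingency table. A minimal sketch with made-up counts:

```python
import numpy as np

def chi2_stat(observed):
    """Pearson chi-square statistic for a contingency table: sum((O-E)^2 / E)."""
    observed = np.asarray(observed, dtype=float)
    row = observed.sum(axis=1, keepdims=True)
    col = observed.sum(axis=0, keepdims=True)
    expected = row @ col / observed.sum()  # counts expected under independence
    return ((observed - expected) ** 2 / expected).sum()

# A feature strongly associated with the class vs. one that is not.
dependent = [[40, 10], [10, 40]]    # counts of (feature off/on) x (class A/B)
independent = [[25, 25], [25, 25]]

print(chi2_stat(dependent))    # large: the feature depends on the target
print(chi2_stat(independent))  # 0.0: no association
```

A feature selector would keep features with large statistics (like the first table) and weed out those near zero (like the second).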
Why accuracy gradually increase then suddenly drop with dropout
When you increase dropout beyond a certain threshold, the model can no longer fit properly. Intuitively, a higher dropout rate results in higher variance in some of the layers, which also degrades training. Dropout, like all other forms of regularization, reduces model capacity. If you reduce the capacity too much, you are sure to get bad results. The solution is not to use such high dropout. If you must, lowering the learning rate and using higher momentum may help. Furthermore, be careful where you use dropout: it is usually ineffective in the convolutional layers, and very harmful right before the softmax layer.
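The variance claim is easy to see numerically. A hedged sketch of (inverted) dropout applied to constant activations, so all of the output variance is injected by the dropout mask:

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(x, rate):
    """Zero each unit with probability `rate`; rescale survivors by 1/(1-rate)."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones(100_000)  # constant activations
var_low = inverted_dropout(x, rate=0.2).var()
var_high = inverted_dropout(x, rate=0.8).var()
print(var_low, var_high)  # higher rate -> much noisier activations
```

For a unit input the output variance is rate/(1-rate), so it grows without bound as the rate approaches 1, which is one way to see why very high dropout rates destabilize training.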
Why accuracy gradually increase then suddenly drop with dropout
I think I met the same problem, with the same dramatic drop in accuracy. I think the problem might be related to the softmax function and the cross-entropy that you defined (the loss function). My issue came exactly from this cross-entropy function, and I used one-hot-format labels, BTW. Given that the classic way of computing cross-entropy yields nan or 0 gradients if "predict_y" is all zero or nan, when the training iteration count is big enough all weights can suddenly become 0. This is exactly why we can witness a sudden and dramatic drop in training accuracy. I solved this problem by using "tf.nn.softmax_cross_entropy_with_logits" instead, which handles the extreme cases safely. The alternative loss function is defined as follows:

tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=true_y, logits=predict_y))

You can give it a shot.
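The numerical failure mode can be reproduced outside TensorFlow. A sketch in plain numpy (made-up extreme logits): the naive softmax-then-log computation overflows to nan, while working in log space with the log-sum-exp trick stays finite, which is essentially what the fused TF op does internally.

```python
import numpy as np

logits = np.array([1000.0, 0.0, -1000.0])  # extreme logits from an overconfident model
labels = np.array([1.0, 0.0, 0.0])         # one-hot target

# Naive route: exp(1000) overflows, so the probabilities and loss become nan.
with np.errstate(over="ignore", invalid="ignore"):
    naive_p = np.exp(logits) / np.exp(logits).sum()
    naive_ce = -(labels * np.log(naive_p)).sum()

# Stable route: subtract the max logit (log-sum-exp trick) and use
# log-probabilities directly instead of log(softmax(...)).
shifted = logits - logits.max()
log_p = shifted - np.log(np.exp(shifted).sum())
stable_ce = -(labels * log_p).sum()

print(naive_ce)   # nan
print(stable_ce)  # 0.0, the correct loss for a perfectly confident correct prediction
```

Once a nan appears in the loss, it propagates through the gradients and can wipe out the weights, matching the sudden accuracy collapse described above.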
H2O: Can I use the h2o for time series predictions?
You can use H2O for time series, and you would normally do some data engineering to create time-based features. In my book (Practical Machine Learning with H2O) one of the three main data sets is prediction of football match results, so that shows some of the techniques. I normally do things like arima and adf.test in R, and use the outputs as features I load into H2O. Though that is not ideal if your data set is one that won't fit in memory (one of the key advantages of H2O over R). There are two feature requests, which you could comment on or vote for: https://0xdata.atlassian.net/browse/PUBDEV-2590 and https://0xdata.atlassian.net/browse/PUBDEV-4153, but it appears no-one is working on them yet. LSTMs should be available from H2O using DeepWater (i.e. using TensorFlow or MxNet as a back-end). I'm still hunting for a tutorial specifically on this, myself.
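The time-based feature engineering mentioned above usually starts with lag columns. A minimal sketch (my own helper, not an H2O API) that turns a univariate series into a supervised table H2O could train on:

```python
import numpy as np

def lag_features(y, lags):
    """Turn a univariate series into a table of lagged predictors plus a target."""
    max_lag = max(lags)
    X = np.column_stack([y[max_lag - k : len(y) - k] for k in lags])
    target = y[max_lag:]
    return X, target

y = np.arange(10.0)  # toy series 0..9
X, target = lag_features(y, lags=[1, 2, 3])
print(X[0], target[0])  # [2. 1. 0.] 3.0 -> (y_{t-1}, y_{t-2}, y_{t-3}) predict y_t
```

Columns for rolling means, seasonality indicators, or arima residuals can be appended the same way before loading the frame into H2O.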
H2O: Can I use the h2o for time series predictions?
Methods designed especially for time series work better for such data than black-box machine learning algorithms, as shown, for example, in this blog entry. Time-series models take into consideration the time dependence of your data, while general-purpose methods do not. Of course, you can add extra columns with lags to your data, but then you would still be assuming that $Y_{t-4}$ is some distinct variable that need not have anything in common with $Y_{t-3}$ or $Y_{t-5}$... You could think of some more complicated transformation of your data to try to imitate what the time-series models do, but then, why reinvent the wheel? As for H2O, you should ask the authors. (However, as it is general-purpose machine learning software, I doubt they will be interested in implementing specialized models.)
(Nomenclature) Are there two different Weak Laws of Large Numbers?
Many results in statistics have generic names that apply to a collection of theorems asserting some result under different conditions. My understanding is that a "weak law of large numbers" can refer to any theorem that shows convergence-in-probability of the sequence of sample means to a corresponding mean. Any theorem that specifies sufficient conditions for this kind of convergence in the context of some model could legitimately be regarded as a "weak law of large numbers". Hence, there could potentially be hundreds of specific theorems under the general rubric of the "weak law of large numbers", each giving conditions for convergence in some context. (Similarly, reference to the "strong law of large numbers" can refer to any theorem showing almost-sure convergence of a sequence of means to a corresponding mean parameter, and reference to a "central limit theorem" can refer to any theorem that asserts distributional convergence of a scaled sequence of sample means. All these terms can be used generically to refer to a class of theorems or specifically to refer to particular theorems.)
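The convergence-in-probability statement common to all these theorems can be illustrated by simulation (a sketch with my own choice of distribution and tolerance): estimate $P(|\bar{X}_n - \mu| > \varepsilon)$ for small and large $n$ and watch it shrink.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, eps, reps = 0.5, 0.05, 2000  # Uniform(0,1) mean, tolerance, replications

def exceed_prob(n):
    """Estimated probability that the sample mean of n draws misses mu by > eps."""
    means = rng.random((reps, n)).mean(axis=1)
    return np.mean(np.abs(means - mu) > eps)

p_small, p_large = exceed_prob(10), exceed_prob(1000)
print(p_small, p_large)  # the large-deviation probability shrinks with n
```

Any theorem guaranteeing this limit is zero for some class of sequences is, in the generic sense above, a weak law of large numbers.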
Two-way repeated measures linear mixed model
A linear mixed model is what you want. First, make sure that Subject is a factor:

Mydata$Subject <- as.factor(Mydata$Subject)

Then, I would fit the model with saturated fixed- and random-effects structures:

mod1 <- lmer(Estimate ~ Condition + Size + Condition * Size + # Fixed effects
             (1 + Condition + Size | Subject),                # Random effects, nested within subject
             data=Mydata, REML=TRUE)                          # Specifying data and estimation

I know part of the formula is redundant, but I wanted to make it as clear as possible to anyone reading in the future. Note that the model with the random-effects structure (1 + Condition + Size + Condition*Size | Subject) is not identified and will not converge. You want Condition specified as a random effect because this allows the variance at different levels of Condition to differ, and Size as a random slope lets the slope of Size differ across people. Then I would do a top-down testing procedure: set up identical models, but take out the random slopes one by one:

mod2 <- lmer(Estimate ~ Condition + Size + Condition * Size + # Fixed effects
             (1 + Size | Subject),                            # Random effects, nested within subject
             data=Mydata, REML=TRUE)                          # Specifying data and estimation

mod3 <- lmer(Estimate ~ Condition + Size + Condition * Size + # Fixed effects
             (1 + Condition | Subject),                       # Random effects, nested within subject
             data=Mydata, REML=TRUE)                          # Specifying data and estimation

anova(mod1, mod2, refit=FALSE) # tests for significance of the Condition random effect
anova(mod1, mod3, refit=FALSE) # tests for significance of the Size random effect

Note that each of those anova tests examines a random slope and the covariance between slope and intercept at the same time, so you will want to adjust your p-values accordingly. If removing a slope yields a significant difference, leave it in; if the fit is just as good when you take it out, you can leave it out. 
Then you can do the same with fixed effects, but specify REML=FALSE for those, and there is no need to adjust the p-value. You can also use the t-tests from lmerTest; they generally give the same results. It is unclear whether Size is continuous or not. You say that it is, but then you talk about there being different "levels" of it and wanting to do post-hoc tests to compare it at different levels. If you are treating it as linear and continuous, then none of that matters; if you want to do a post-hoc test for a continuous moderator, you could look at simple slope analyses.
47,776
A Kernel Two Sample Test and Curse of Dimensionality
I happened to see this post while going over MMD papers. In the context of two-sample testing, considering test power at a given level, the answer is that it does suffer in high dimensions. The earlier experiments may be misleading because they did not select a fair alternative for testing. For example, consider $P=\mathcal N(0,I)$ and $Q=\mathcal N(\mu, I)$, so $P$ and $Q$ differ only in the mean vector. As the dimension increases, you may pick $\mu=[1,0,0,\ldots,0]$ or $\mu=[1,1,1,\ldots,1]$. The latter is easier to distinguish, and it is the one selected in their experiments. The paper below fixes the Kullback-Leibler divergence between $P$ and $Q$ across all dimensions (for example, by picking $\mu=[1,0,0,\ldots,0]$), guided by the fact that the KLD usually serves as a measure of how hard a hypothesis testing problem is in information theory. The test power is then shown to decrease as the dimension increases. Also notice that the above choice does not take the sample sizes into account; in fact, fixing the KLD is only a heuristic, as stated by the authors. We have a new result in this respect, to which I will post a link once a draft is complete. In summary, fixing the KLD is still a reasonable strategy for a fair alternative. Check this paper: "On the Decreasing Power of Kernel and Distance based Nonparametric Hypothesis Tests in High Dimensions"
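The contrast between the two choices of $\mu$ can be made concrete: for Gaussians with identity covariance, $\mathrm{KL}(P\|Q) = \|\mu\|^2/2$. A quick Python sketch of how the two alternatives scale with dimension:

```python
def kl_gaussian_shift(mu):
    # KL( N(0, I) || N(mu, I) ) = ||mu||^2 / 2 when both covariances are I.
    return 0.5 * sum(m * m for m in mu)

dims = [1, 5, 50]
kl_sparse = [kl_gaussian_shift([1.0] + [0.0] * (d - 1)) for d in dims]  # mu = [1,0,...,0]
kl_dense = [kl_gaussian_shift([1.0] * d) for d in dims]                 # mu = [1,1,...,1]
print(kl_sparse)  # [0.5, 0.5, 0.5] -- fixed across dimensions
print(kl_dense)   # [0.5, 2.5, 25.0] -- grows linearly with dimension
```

Under the KLD heuristic, only the first choice keeps the problem equally hard as the dimension grows; the second makes it easier and easier, which is why tests look deceptively dimension-robust on it.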
47,777
A Kernel Two Sample Test and Curse of Dimensionality
Oh, of course the method strongly suffers from the curse of dimensionality. It's just that they derive their results under what essentially is a parametric alternative, hidden behind the RKHS notation and not made too explicit. Indeed, motivated by exactly the same question as yours, Ery Arias-Castro, Bruno Pelletier and Venkatesh Saligrama recently wrote a paper demonstrating exactly this ("Remember the Curse of Dimensionality: The Case of Goodness-of-Fit Testing in Arbitrary Dimension", available on arXiv). You should read this for a more detailed answer.
47,778
A Kernel Two Sample Test and Curse of Dimensionality
I found the introduction of the paper "Can Shared-Neighbor Distances Defeat the Curse of Dimensionality?" by Houle et al. (2010) to be helpful. In particular, they make the distinction between dimensions bearing relevant information and irrelevant information. For example, two Gaussian clusters with separated means become easier to separate as the dimension increases. On the other hand, if "irrelevant" features (e.g. pure noise) are added as additional dimensions, the separability will not improve. As such, it makes a great deal of sense that in Figure 5B of Gretton et al. (2012), the performance of the MMD improves when separating Gaussians in higher dimensions. Intuitively, since the Gaussians are "separate" along each new dimension, each one can be thought of as additional information and is therefore helpful. If pure noise were added as additional dimensions, I would expect the performance of the MMD to decrease. Indeed, as @air's citation demonstrates, the MMD does suffer from the curse of dimensionality. However, that does not mean that it is not vastly superior to density estimation in the high-dimensional setting. Density estimation in high dimensions is a very hard problem. There is a table quoted in Wasserman et al. 2006 (section 6.5) showing the sample size necessary to ensure a relative mean squared error of less than 0.1 at 0 when the density is Gaussian and the optimal bandwidth is chosen [table not reproduced here]. The sample size increases very quickly with dimension. This is because the L2 error converges as $O(n^{-4/(4+d)})$ when the optimal bandwidth is used, where $d$ is the dimension. The point is that to do density estimation in high dimensions, you need huge samples. My intuition is that as the number of dimensions increases, the "volume" of the space increases exponentially.
And hence, if you try to do density estimation by splitting the volume up into small hypercubes and counting the number of points in each, you will need exponentially many hypercubes. The kernel MMD, on the other hand, only looks at pairwise distances between points, and does not integrate over the space in which the points lie. As such, it does not care if there is a huge volume of empty space. Of course, as the dimensionality increases, all points become increasingly equidistant, and hence the performance of any metric-based method will degrade, but apparently this effect is not as dramatic. One other point of confusion for me was that although the kernel MMD allows you to perform a two-sample test without density estimation, in section 3.3.1 they show that the L2 distance between Parzen window density estimates is a special case of the MMD. So clearly, if you choose the specific kernel they describe, then the kernel MMD is exactly doing density estimation, and hence, I presume, it will scale just as poorly with increasing dimension. Indeed, the kernel $k(x,y) =\int \kappa(x - z) \kappa(y-z)\, dz$ derived in 3.3.1, for which testing with the kernel MMD coincides with taking the L2 distance between density estimates, integrates over the underlying space, just as in density estimation, unlike, say, a Gaussian RBF kernel.
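The rate quoted above implies the required sample size explodes with dimension: ignoring constants, holding the L2 error at a fixed $\varepsilon$ requires roughly $n \sim \varepsilon^{-(4+d)/4}$ samples. A rough Python sketch (constants are dropped, so only the growth rate is meaningful, not the absolute numbers):

```python
# With the optimal bandwidth, kernel density estimation has L2 risk of order
# n^(-4/(4+d)); solving eps = n^(-4/(4+d)) for n gives n = eps^(-(4+d)/4).
eps = 0.1
required = {d: eps ** (-(4 + d) / 4) for d in [1, 2, 5, 10, 20]}
for d, n_required in required.items():
    print(d, round(n_required))
# Up to constants: d=1 needs ~18 samples, while d=20 already needs ~10^6.
```

This is the same qualitative message as the Wasserman table: the cost of density estimation grows exponentially in the dimension.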
47,779
Big O and little o notation explained?
Definition

The sequence $a_n = o(x_n)$ if $a_n/x_n \to 0$. We would read it as "$a_n$ is of smaller order than $x_n$," or "$a_n$ is little-oh of $x_n$." In your case, if some term $a_n$ is $o(1/n)$, that means that $n a_n \to 0$. A few examples of sequences that are $o(1/n)$ are $c/n^p$ where $p > 1$, $1/(n\log(n))$, and $1/n^2 + 1/n^3$. Even though writing $o(1/n)$ conveys less information than writing the specific sequence, it takes up less space than writing the whole thing out, and tells us that, in this case, the term goes to $0$ faster than $1/n$ (the notation can also describe the speed at which sequences approach infinity).

Some Other Things

On the other hand, writing $a_n = O(1/n)$ means that $|na_n| < M < \infty$ for $n$ bigger than some $n_0$. Typically (but not always) this means that $a_n = c/n$. This means that $a_n$ is of the same or smaller order than $1/n$. Another thing that might pop up is when people write $o_p(x_n)$ or $O_p(x_n)$. Replace all the definitions above with convergence in probability. It doesn't look like you're using this, however, because you aren't talking about convergence of random variables, just their mgfs.

Your Problem

Also, are you sure that those are what your mgfs look like multiplied together? Specifically the part $1 + \sum_{i=0}^{\infty} \frac{(t/n)^i}{i!}$. I suspect you mean $\sum_{i=0}^{\infty} \frac{(t/n)^i}{i!} = e^{t/n}$ because \begin{align*} M_{\bar{X}}(t) &= E[e^{t/n \sum_iX_i}] \\ &= \prod_{i} M_{X_i}(t/n) \\ &= [(1-p)+pe^{t/n}]^n \\ &= \left[(1-p) + p\left\{ \sum_{i=0}^{\infty} \frac{(t/n)^i}{i!}\right\} \right]^n \\ &= \left[ 1 - p + p\left\{ 1 + t/n + \sum_{i=2}^{\infty}\frac{ (t/n)^{i} }{i!} \right\}\right]^n \\ &= \left[1 + pt/n + \sum_{i=2}^{\infty} \frac{pt^i}{i!n^i} \right]^n \end{align*} Your professor means $\sum_{i=2}^{\infty} \frac{pt^i}{i!n^i} = o(1/n)$ because $$ n\sum_{i=2}^{\infty} \frac{pt^i}{i!n^{i}} = \sum_{i=2}^{\infty} \frac{pt^i}{i!n^{i-1}} \to 0 $$ as $n \to \infty$, since every term carries at least $i-1 \ge 1$ powers of $n$ in the denominator.
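The tail sum above has a closed form, $\sum_{i\ge 2} \frac{p(t/n)^i}{i!} = p\,(e^{t/n} - 1 - t/n)$, so the little-oh claim can be checked numerically. A quick Python sketch (the values of $p$ and $t$ are arbitrary choices for illustration):

```python
import math

def a_n(n, p=0.3, t=1.0):
    # Closed form of the tail: sum_{i>=2} p (t/n)^i / i! = p (e^{t/n} - 1 - t/n).
    return p * (math.exp(t / n) - 1 - t / n)

for n in [10, 100, 1000, 10000]:
    print(n, n * a_n(n))  # n * a_n shrinks toward 0, i.e. a_n = o(1/n)
```

Since the leading tail term is $p t^2 / (2 n^2)$, the product $n\,a_n$ decays like $1/n$, consistent with the definition.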
47,780
Vanishing gradient vs. dying ReLU? [duplicate]
ELU and ReLU both have a zero or vanishing gradient "on the left". This is still a marked departure from $\tanh$ or logistic units, because those functions are bounded above and below; for ELU and ReLU units, the gradient updates will be larger "on the right". As a demonstration, work out the derivatives for each and note that the logistic and $\tanh$ units usually have smaller gradients than ELU and PReLU for inputs in some interval around 0, such as $[-2,2]$; $\tanh$ only attains a gradient of 1 at zero, and the logistic unit never does (its maximum gradient is 1/4). By contrast, ReLU/ELU/PReLU have gradient 1 for all positive inputs. On the other hand, you're correct that PReLUs avoid having a zero gradient anywhere. I'm not aware of a study exhaustively comparing ELU, ReLU and PReLU units. There's still a long way to go between these practical innovations in neural networks and a theoretical understanding of why they work well.
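The "work out the derivatives" step can be done directly in a few lines. A small Python sketch (ELU with $\alpha = 1$ is assumed):

```python
import math

def activation_grads(x, alpha=1.0):
    # Derivatives of tanh, logistic, ReLU, and ELU at input x.
    tanh_g = 1.0 - math.tanh(x) ** 2
    sig = 1.0 / (1.0 + math.exp(-x))
    logistic_g = sig * (1.0 - sig)
    relu_g = 1.0 if x > 0 else 0.0
    elu_g = 1.0 if x > 0 else alpha * math.exp(x)  # d/dx of alpha*(e^x - 1)
    return tanh_g, logistic_g, relu_g, elu_g

print(activation_grads(2.0))   # ReLU/ELU gradient is 1; tanh/logistic have shrunk
print(activation_grads(-1.0))  # ReLU gradient is 0; ELU keeps a small gradient
```

At $x = 2$ the saturating units already pass through gradients below 0.1, while ReLU and ELU still pass gradient 1; on the left, ELU retains a nonzero gradient where ReLU is dead.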
47,781
Why not use modulus for variance? [duplicate]
Let $\mu=\operatorname{E}(X).$ The main reason for using $\sqrt{\operatorname{var}(X)} = \sqrt{\operatorname{E}((X-\mu)^2)}$ as a measure of dispersion, rather than using the mean absolute deviation $\operatorname{E}(|X-\mu|),$ is that if $X_1,\ldots,X_n$ are independent, then $$ \operatorname{var}(X_1+\cdots+X_n) = \operatorname{var}(X_1)+\cdots+\operatorname{var}(X_n). \tag 1 $$ Nothing like that works with the mean absolute deviation. For example, try it with $X_1,X_2,X_3\sim\operatorname{i.i.d.} \operatorname{Bernoulli}(1/2).$ In any problem where you use the central limit theorem, you need this. For example: What is the standard deviation of the number of heads that appear when a coin is tossed $900$ times? That's easy to find because of $(1).$
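The suggested Bernoulli example can be checked exhaustively. A short Python sketch enumerating all $2^3$ equally likely outcomes of $S = X_1 + X_2 + X_3$:

```python
from itertools import product

# Exhaustive check for X1, X2, X3 ~ i.i.d. Bernoulli(1/2): variances add
# across independent terms, but mean absolute deviations do not.
outcomes = list(product([0, 1], repeat=3))       # 8 equally likely outcomes
mean_s = sum(sum(o) for o in outcomes) / 8       # E[S] = 1.5
var_s = sum((sum(o) - mean_s) ** 2 for o in outcomes) / 8
mad_s = sum(abs(sum(o) - mean_s) for o in outcomes) / 8

print(var_s)  # 0.75 = 3 * var(X_i) = 3 * 0.25: additivity holds
print(mad_s)  # 0.75, while 3 * E|X_i - 1/2| = 3 * 0.5 = 1.5: no additivity
```

The variance of the sum equals the sum of the variances, but the mean absolute deviation of the sum (0.75) is nowhere near the sum of the individual mean absolute deviations (1.5).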
47,782
Why not use modulus for variance? [duplicate]
There are already several good answers here, including in the comments. However as the OP requested a "simpler" justification, here I will expand on my comment. To me this is a very natural distinction between root-mean-square vs. mean-absolute deviations, and why we might prefer one vs. the other when measuring dispersion. (I do not know if it is "simpler"?) Say you have some data $x_1,\ldots,x_n$, which you want to approximate by a constant $c$, i.e. $$x_i\approx c$$ for all $i$. How do you choose the constant? A common approach is to minimize some error $E[c]$. One choice for $E$ is the sum square error $$E_\text{SSE}=\sum_i\big(x_i-c\big)^2$$ the solution will then be $c_\min=\frac{1}{n}\sum x_i$. In other words, we have $$\big[c_\min,E_\min\big]_\text{SSE}=\big[\text{mean}(\mathbf{x}),n\,\text{var}(\mathbf{x})\big]$$ so if you are using the mean as your measure of central tendency, the RMS error is really the "natural" measure of dispersion. On the other hand, if we choose $E$ to be the sum absolute error $$E_\text{SAE}=\sum_i\big|x_i-c\big|$$ the solution will then be $(c_\min)_\text{SAE}=\text{median}(\mathbf{x})$. So if you want to use mean absolute deviation to measure dispersion, really the "natural" measure of central tendency would be the median. Summary: If you want to use mean absolute deviation, then arguably you should be measuring dispersion around the median. If you are already using the mean, then arguably standard deviation is the appropriate measure of dispersion. Here "arguably" is justified by optimality (minimum dispersion).
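The two minimizers can be confirmed with a brute-force grid search. A quick Python sketch (the sample values are made up for illustration; the skew is deliberate so that mean and median differ):

```python
# Brute-force check on a skewed sample: the mean minimizes the sum of
# squared errors, while the median minimizes the sum of absolute errors.
x = [1.0, 2.0, 2.0, 3.0, 10.0]               # made-up sample
grid = [i / 1000 for i in range(12001)]       # candidate constants c in [0, 12]

c_sse = min(grid, key=lambda c: sum((xi - c) ** 2 for xi in x))
c_sae = min(grid, key=lambda c: sum(abs(xi - c) for xi in x))

mean_x = sum(x) / len(x)
median_x = sorted(x)[len(x) // 2]
print(c_sse, mean_x)    # both 3.6
print(c_sae, median_x)  # both 2.0
```

Note how the outlier at 10 drags the SSE minimizer (the mean) upward, while the SAE minimizer (the median) stays put, which is the robustness trade-off behind the two dispersion measures.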
47,783
Derivative of a quadratic form wrt a parameter in the matrix
For typing convenience, define $$\eqalign{ Y &= yy^T,\,\,\,\, A=C^{-1},\,\,\,\, J = \frac{\partial C}{\partial\theta} \cr \lambda &= y^TC^{-1}y = {\rm Tr}(Y^TA)= Y:A \cr }$$ Notice that $(A,C,Y)$ are symmetric matrices. Also note that the colon in the final expression is just a convenient (Frobenius product) notation for the trace function. The cyclic properties of the trace allow the terms of a Frobenius product to be rearranged in a variety of ways. For example, all of the following expressions are equivalent $$\eqalign{ A:BC &= BC:A \cr &= A^T:(BC)^T \cr &= B^TA:C \cr &= AC^T:B \cr }$$ To find $\,\frac{\partial\lambda}{\partial\theta}\,$ start by finding its differential $$\eqalign{ d\lambda &= Y:dA \cr &= -Y:A\,dC\,A \cr &= -AYA:dC \cr &= -AYA:J\,d\theta \cr \frac{\partial\lambda}{\partial\theta} &= -AYA:J \cr &= -{\rm Tr}\Big(C^{-1}yy^TC^{-1}\frac{\partial C}{\partial\theta}\Big) \cr\cr }$$ This is consistent with what you found in the Matrix Cookbook, except you should've used the Frobenius product, instead of the regular matrix product, in the chain rule. For matrix calculus problems, I find it easier to use differentials rather than the chain rule. For many problems, the intermediate quantities required by the chain rule are 3rd and 4th order tensors, which are difficult to comprehend and even harder to calculate.
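The identity $\frac{\partial\lambda}{\partial\theta} = -{\rm Tr}\big(C^{-1}yy^TC^{-1}\frac{\partial C}{\partial\theta}\big)$ is easy to sanity-check against a finite difference. A Python sketch (the particular $C(\theta)$, linear in $\theta$, is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
y = rng.standard_normal(n)
B = rng.standard_normal((n, n))
B = B + B.T                          # symmetric direction dC/dtheta
M = rng.standard_normal((n, n))

def C(theta):
    # An arbitrary symmetric positive-definite C depending linearly on theta.
    return M @ M.T + n * np.eye(n) + theta * B

def lam(theta):
    return y @ np.linalg.solve(C(theta), y)   # y^T C(theta)^{-1} y

theta0, h = 0.1, 1e-6
fd = (lam(theta0 + h) - lam(theta0 - h)) / (2 * h)   # central finite difference
Ci = np.linalg.inv(C(theta0))
analytic = -np.trace(Ci @ np.outer(y, y) @ Ci @ B)   # -Tr(C^-1 y y^T C^-1 dC/dtheta)
print(fd, analytic)  # the two estimates agree closely
```

Here $B$ plays the role of $J = \partial C/\partial\theta$, and the agreement between the finite difference and the trace formula confirms the derivation.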
47,784
Derivative of a quadratic form wrt a parameter in the matrix
I guess the correct chain rule is $$\frac{\partial y^T C^{-1}(\theta)y}{\partial \theta_k} = \sum_{i, j} \frac{\partial y^T C^{-1}(\theta)y}{\partial C_{i,j}(\theta)} \frac{\partial C_{i,j}(\theta)}{\partial \theta_k} = Tr\Big[\Big(\frac{\partial y^T C^{-1}(\theta)y}{\partial C(\theta)}\Big)^T \Big(\frac{\partial C(\theta)}{\partial \theta_k}\Big) \Big]$$ where $Tr(A) = \sum_i a_{i,i}, A \in \Re^{n \times n}$ is a trace function.
47,785
Why Gaussian process has marginalisation/consistency property?
A Gaussian process $\{X(t)\colon t \in \mathbb T\}$ is not defined as just a collection of Gaussian random variables; there is also the requirement that for every $n \geq 1$, every finite collection $\{X(t_1), X(t_2), \cdots, X(t_n)\colon t_1, t_2, \cdots, t_n \in \mathbb T\}$ of $n$ random variables from the process enjoys a multivariate Gaussian (also called jointly Gaussian) distribution. The more facile definition of a Gaussian process used by the OP restricts $n$ to be just $1$. Now, if $\{X(t_1), X(t_2), \cdots, X(t_n)\colon t_1, t_2, \cdots, t_n \in \mathbb T\}$ are jointly Gaussian, then any nonempty subset of two or more of these variables also enjoys a jointly Gaussian distribution, and of course, each of the random variables is individually (that is, marginally) Gaussian. Furthermore, these marginal distributions are consistent: the distribution of $X(t_1)$ as obtained via marginalization from the joint distribution of $\{X(t_1), X(t_2)\}$ cannot be different from the distribution of $X(t_1)$ as obtained via marginalization from the joint distribution of $\{X(t_1), X(t_3)\}$, because both are obtained by marginalization of the jointly Gaussian trivariate distribution of $\{X(t_1), X(t_2), X(t_3)\}$. Thus, the consistency requirement is baked into the correct definition of a Gaussian process.
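To make the consistency concrete (an illustrative numpy sketch, not from the original answer; the RBF kernel here is just an arbitrary choice of covariance function): marginalizing a multivariate Gaussian amounts to keeping the relevant rows and columns of the covariance matrix, so the marginal of $X(t_1)$ is the same no matter which joint it is extracted from.

```python
import numpy as np

def rbf(s, t, ell=1.0):
    # an arbitrary covariance function for the demonstration
    return np.exp(-0.5 * (s - t) ** 2 / ell ** 2)

ts = np.array([0.0, 0.7, 2.3])          # t1, t2, t3
K = rbf(ts[:, None], ts[None, :])       # covariance of (X(t1), X(t2), X(t3))

# Marginalizing a Gaussian = keeping the relevant rows/columns.
K12 = K[np.ix_([0, 1], [0, 1])]         # joint of (X(t1), X(t2))
K13 = K[np.ix_([0, 2], [0, 2])]         # joint of (X(t1), X(t3))

# Both routes give the same marginal variance for X(t1):
print(K12[0, 0], K13[0, 0], K[0, 0])    # all equal k(t1, t1)
```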
47,786
Why Gaussian process has marginalisation/consistency property?
It is actually a good question, which shows a subtlety of the definition of a general (not necessarily Gaussian) stochastic process. And I hope it is not too late for you. In GPML, it says a stochastic process is defined as a collection of random variables with a law. Since these random variables are themselves mappings from a probability space to a measurable space, there is already a probability measure on the probability space on which the stochastic process is defined. Therefore the law of the stochastic process is already implied by the collection of random variables. This is guaranteed by the Kolmogorov extension theorem. This theorem has multiple names: 1. Kolmogorov extension theorem: focusing on the fact that the law of this stochastic process is (naturally) extended from the law of the collection of random variables. 2. Kolmogorov existence theorem: focusing on the fact that the stochastic process exists, in the sense that it really is something "random" equipped with a law (not just a plain collection of random variables). 3. Kolmogorov consistency theorem: focusing on the fact that if we assume the stochastic process exists, then its law must be consistent with the laws of its components (the random variables). Applying the theorem to this particular question: when defining a Gaussian process, we define its law through the law of any finite-dimensional subset of the collection, where it suffices to specify the covariance function (the mean function is not essential for the law, since it is just a translation). So "Can I know why this definition automatically defines the consistency requirement?" It is not automatic; the Kolmogorov extension theorem is behind it (the third aspect). "Which is also the marginalisation property?"
In the same Wiki page, consistency condition (2) says: for all measurable sets $F_{i} \subseteq \mathbb{R}^{n}, m \in \mathbb{N}$ $$ \nu_{t_{1} \ldots t_{k}}\left(F_{1} \times \cdots \times F_{k}\right)=\nu_{t_{1} \ldots t_{k}, t_{k+1}, \ldots, t_{k+m}}(F_{1} \times \cdots \times F_{k} \times \underbrace{\mathbb{R}^{n} \times \cdots \times \mathbb{R}^{n}}_{m}) $$ Basically it means that if I measure a set, it makes no difference which measure I use, whether a joint one or a marginal one, as long as the set is measurable (contained in the measurable space on which the measure is defined). I strongly recommend you read the Wiki page for more insight. That's all. @whuber, thank you for the reminder; I have corrected it.
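The consistency condition above can be checked directly on a finite space (a numpy sketch added for illustration; the joint law here is arbitrary): measuring $F_1 \times F_2$ with the bivariate marginal must agree with measuring $F_1 \times F_2 \times \mathbb{R}$ under the full joint.

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.random((3, 4, 5))
p /= p.sum()                 # a joint law nu_{t1 t2 t3} on a finite space

# The bivariate marginal nu_{t1 t2}: integrate out the third coordinate.
nu12 = p.sum(axis=2)

F1, F2 = [0, 2], [1, 3]      # arbitrary "measurable" sets
lhs = nu12[np.ix_(F1, F2)].sum()                    # marginal measure of F1 x F2
rhs = p[np.ix_(F1, F2, range(5))].sum()             # joint measure of F1 x F2 x R
print(lhs, rhs)              # equal, as consistency condition (2) requires
```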
47,787
What is Box-Cox regression?
What is box-cox regression? Is it apply box-cox power transformation then run a linear regression? It could be used to describe that but it will typically mean more than that. Consider that if you just look at $Y$ and find a Box-Cox transformation before you consider your $x$-variables, you're looking at the marginal distribution for $Y$, when the issue in regression is really (a) the shape of the relationships with those predictors and (b) its conditional distribution (especially getting things like conditional variance reasonably close to constant). As such you can't really hope to find a suitable transformation without doing it within the context of the regression itself. So typically this would be "simultaneous" with the regression, not doing one thing then the other. For example, to use the MASS::boxcox function in R you pass it a model object. If you give it the same $y$ but a different model the estimate of $\lambda$ you end up with is different. However, once you have an estimate of $\lambda$ in the context of a model, you can then transform your $y$ variable and rerun your model using regression (just as the routine to find suitable values of $\lambda$ does at each value of $\lambda$ it looks at). Is there any relationship between "box-cox regression" and "the Cox model" in survival analysis? No direct connection, outside the obvious one (Cox himself).
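The "simultaneous" estimation can be sketched numerically (a numpy illustration added here, not from the original answer; the simulated data and grid are assumptions): for each candidate $\lambda$, transform $y$, fit the regression, and evaluate the profile log-likelihood (including the Jacobian term), which is what `MASS::boxcox` does internally.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(0, 1, n)
# simulate data for which log(y) is linear in x, so lambda should be near 0
y = np.exp(1.0 + 2.0 * x + 0.2 * rng.standard_normal(n))

X = np.column_stack([np.ones(n), x])

def profile_loglik(lmbda):
    # Box-Cox transform of y for this lambda
    z = np.log(y) if abs(lmbda) < 1e-8 else (y ** lmbda - 1) / lmbda
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)     # fit the regression
    rss = np.sum((z - X @ beta) ** 2)
    # profile log-likelihood, including the Jacobian of the transformation
    return -n / 2 * np.log(rss / n) + (lmbda - 1) * np.sum(np.log(y))

grid = np.linspace(-1, 2, 61)
best = grid[np.argmax([profile_loglik(l) for l in grid])]
print(best)   # typically close to 0 for these data
```

Note that the estimate of $\lambda$ depends on the design matrix `X`: change the model and the profile likelihood, and hence $\hat\lambda$, changes too, which is the point made above.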
47,788
What is the point of Root Mean Absolute Error, RMAE, when evaluating forecasting errors?
I think it seems like a misunderstanding. AFAIK rMAE is "relative Mean Absolute Error", not "root Mean Absolute Error", and as a result it has no unit (e.g. dollars). And it might be useful for comparison of classifiers which were tested on completely different datasets (with different units etc.). See this link for more information: http://www.gepsoft.com/gxpt4kb/Chapter09/Section1/SS03/SSS5.htm
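A quick sketch of the unitless property (numpy, added for illustration; the naive one-step benchmark is just an assumed choice): relative MAE divides the model's MAE by a benchmark's MAE, so rescaling the data (e.g. dollars to cents) leaves it unchanged.

```python
import numpy as np

def rmae(actual, forecast, benchmark):
    # relative MAE: model MAE divided by the MAE of a benchmark forecast;
    # the units cancel, so the result is unitless
    return np.mean(np.abs(actual - forecast)) / np.mean(np.abs(actual - benchmark))

rng = np.random.default_rng(3)
actual = rng.normal(100, 10, 50)
forecast = actual + rng.normal(0, 2, 50)
naive = np.roll(actual, 1)            # "previous value" as the benchmark

r = rmae(actual, forecast, naive)
r_scaled = rmae(1000 * actual, 1000 * forecast, 1000 * naive)
print(r, r_scaled)                    # identical: changing units has no effect
```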
47,789
What if do not use any activation function in the neural network? [duplicate]
Consider a two layer neural network. Let $x \in \mathbb{R}^n$ be your input vector, and consider a single layer without an activation function, with weight matrix $A$ and bias $b$; it would compute $$Ax + b$$ A second layer (without activation, with weight matrix $C$ and bias $d$) would then compute $$C(Ax + b)+d$$ This is equal to \begin{equation} CAx + Cb + d \end{equation} This is equivalent to a single layer neural network with weight matrix $CA$ and bias vector $Cb+d$. It is well known that single layer neural networks cannot solve some "simple" problems; for example, they cannot solve the XOR problem. Suppose we have a single layer neural network $Ax + b$ that can solve the XOR problem. The matrix is of the form $A = (w_{1}, w_{2})$ since it takes two inputs and outputs a single value. Then \begin{equation} 0w_1 + 0w_2 + b \leq 0 \iff b \leq 0 \\ 0w_1 + 1w_2 + b > 0 \iff b > -w_2 \\ 1w_1 + 0w_2 + b > 0 \iff b > -w_1 \\ 1w_1 + 1w_2 + b \leq 0 \iff b \leq -w_1 - w_2 \end{equation} Suppose all of the left hand sides are true (which is required to solve the XOR problem). The first line gives $b \leq 0$, hence $2b \leq b$, and combined with the fourth line, $2b \leq b \leq -w_1 - w_2$. But adding the second and third lines gives $2b > -w_1 - w_2$, which is a contradiction. Hence the single layer neural network cannot solve the XOR problem. Less formally, $Ax+b$ defines a line which separates the plane such that all points are classified according to the side of the line they lie on. Try drawing a straight line such that $(0,0)$ and $(1,1)$ are on one side and $(0,1)$ and $(1,0)$ are on the other; you will not be able to. Introducing non-linear activation functions between the layers allows the network to solve a larger variety of problems. 
To be more precise the Universal approximation theorem states that a feed-forward network with a single hidden layer containing a finite number of neurons, can approximate continuous functions on compact subsets of $\mathbb{R}^n$, under mild assumptions on the activation function.
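The collapse of two activation-free layers into one can be verified directly (a numpy sketch added for illustration; the layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(2)
A, b = rng.standard_normal((3, 2)), rng.standard_normal(3)   # first layer
C, d = rng.standard_normal((1, 3)), rng.standard_normal(1)   # second layer

two_layers = C @ (A @ x + b) + d        # no activation between the layers
one_layer = (C @ A) @ x + (C @ b + d)   # equivalent single layer: CA, Cb + d
print(np.allclose(two_layers, one_layer))  # True
```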
47,790
Comparisons of circular means
You can do Watson's large sample nonparametric test or a bootstrap version of Watson's nonparametric test. Both tests are available in the R "circular" package. There is a good book named "Circular Statistics in R" written by Arthur Pewsey et al. There you will find the details on what functions to use and how to do the tests.
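To illustrate why circular-specific methods are needed in the first place (a numpy sketch added here; the R "circular" package computes means this way too), the circular mean is the direction of the mean resultant vector, which handles angles straddling $0$ correctly where the arithmetic mean fails badly:

```python
import numpy as np

def circular_mean(theta):
    # direction of the mean resultant vector
    return np.arctan2(np.mean(np.sin(theta)), np.mean(np.cos(theta)))

angles = np.deg2rad([350.0, 10.0])        # two angles straddling 0 degrees
print(np.rad2deg(circular_mean(angles)))  # ~0; the arithmetic mean is 180
```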
47,791
Comparisons of circular means
Philipp Berens' CircStat toolbox for MATLAB offers circ_cmtest, which is a non-parametric multi-sample test for equal medians. It says it is similar to a Kruskal-Wallis test for linear data. Because the test is non-parametric, comparing medians rather than means makes good sense. It's quite simple to use.
47,792
Are Neural Nets a Special Case Of Graphical Models?
If you focus on the generative part, GANs and VAEs are actually mathematically the same object (1), i.e. Gaussian latent variable models, where $z$ is a latent Gaussian random variable pointing to an observed $x$: The difference is that VAEs are prescribed models that output a random variable $x$ with a probability density, while GANs are likelihood-free implicit models (2) that directly specify a (deterministic) procedure with which to generate data. Concretely, the VAE's graphical model is implemented as the decoder/inference network, while the GAN's graphical model is implemented as the generator network; the GAN's discriminator network does not appear in the graphical model (similarly to how the VAE's encoder/recognition network doesn't show up) because it is merely an auxiliary object created to approximate the Jensen-Shannon divergence or other f-divergences (1): (Image from Lilian Weng) References (1): I've linked the relevant timestamp of the video recording of a tutorial (slides here) by Shakir Mohamed and Danilo Rezende from DeepMind at UAI 2017; Ferenc Huszar also explains the equivalence on Reddit. The VAE's graphical model is also explained in Stanford's "CS236 Deep Generative Models" notes. (2): The distinction between prescribed and implicit models is described in greater detail in "Learning in Implicit Generative Models" by Shakir et al. (2016). Extra: Another perspective with augmented graphical models On Unifying Deep Generative Models (Hu et al., 2017) illustrates augmented graphical models that encompass both the GAN generator and discriminator, and an analogous model for the VAE where we assume a perfect discriminator. Arrows with solid lines denote generative process; arrows with dashed lines denote inference; hollow arrows denote deterministic transformation leading to implicit distributions; and blue arrows denote adversarial objectives.
47,793
Are Neural Nets a Special Case Of Graphical Models?
You can view a deep neural network as a graphical model, but here the CPDs are not probabilistic but deterministic. Consider, for example, that the input to a neuron is $\vec{x}$ and the output of the neuron is $y$. In the CPD for this neuron we have $p(y\mid\vec{x})=1$, and $p(\hat{y}\mid\vec{x})=0$ for $\hat{y}\neq y$. Refer to section 10.2.3 of the Deep Learning Book for more details.
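A tiny sketch of such a deterministic CPD (numpy, added for illustration; the thresholded neuron and its weights are assumptions): for each input, the conditional distribution over the output puts all of its mass on a single value.

```python
import numpy as np

# A binary neuron y = 1[w.x + b > 0]; here w, b implement logical AND.
w, b = np.array([1.0, 1.0]), -1.5

outputs = {}
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y = int(np.dot(w, x) + b > 0)
    cpd = np.zeros(2)
    cpd[y] = 1.0        # p(y | x) = 1, p(y_hat | x) = 0 for y_hat != y
    outputs[x] = cpd
    print(x, cpd)       # each row of the CPD table is one-hot
```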
47,794
interpreting causality() in R for Granger Test
[W]hat is this Granger test for and how to interpret it? Basically, Granger causality $x \xrightarrow{Granger} y$ exists when using lags of $x$ next to the lags of $y$ for forecasting $y$ delivers better forecast accuracy than using only the lags of $y$ (without the lags of $x$). You can find definitions and details in Wikipedia and in free textbooks and lecture notes online. There are also many examples on this site, just check the threads tagged with granger-causality. It says in the results that the null hypothesis is "H0: e do not Granger-cause prod rw U", does that mean it is testing whether e Granger causes prod, rw, U all at the same time with one p-value? You are right. Note that in a 4-variable VAR(2) model, testing whether one variable does not cause the other three amounts to testing $3 \times 2$ zero restrictions (three variables times two lags), and that is also what the test summary shows: df1=6. When using grangertest() in R, one always needs to specify both a cause and the dependent variable, so it is not entirely intuitive for me how causality() works. This is because in a $K$-variate system with $K>2$ there are many possible causal links. $x_i$ may cause $x_j$; $x_i$ may cause $x_j$ and $x_k$; $x_i$ and $x_j$ may cause $x_k$; etc. So the function requires you to specify precisely which causal link you want to examine.
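The "lags of $x$ improve the forecast of $y$" idea reduces to an F-test comparing a restricted and an unrestricted regression, which can be sketched from scratch (a numpy illustration added here, not the vars::causality() implementation; the simulated bivariate process with one lag is an assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    # x genuinely helps predict y through its first lag
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

def rss(X, z):
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return np.sum((z - X @ beta) ** 2)

z = y[1:]
ones = np.ones(T - 1)
restricted = np.column_stack([ones, y[:-1]])        # lags of y only
full = np.column_stack([ones, y[:-1], x[:-1]])      # add the lag of x

rss_r, rss_f = rss(restricted, z), rss(full, z)
q, df = 1, (T - 1) - 3                              # one zero restriction
F = ((rss_r - rss_f) / q) / (rss_f / df)
print(F)   # large -> reject "x does not Granger-cause y"
```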
47,795
What is the difference between complete statistics and complete family of distributions?
Suppose $X_1,\ldots,X_n \sim \text{i.i.d. } N(\mu,\sigma^2).$ The family of distributions is $$\left\{ N_n\left(\begin{bmatrix} \mu \\ \vdots \\ \mu \end{bmatrix},\sigma^2 \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}\right) \quad : \quad \mu\in \mathbb R \right\}$$ (so $\sigma$ is fixed, so that the only difference between one distribution and another in this family is a different value of $\mu$). This is the family of $n$-dimensional normal distributions in which the expected value is that column of $\mu$s and the matrix of covariances is $\sigma^2$ times the $n\times n$ identity matrix. This family of distributions of $n$-tuples is not complete since it admits nontrivial unbiased estimators of $0$; for example $X_1-X_2$ is such an estimator. Now suppose $T = T(X_1,\ldots,X_n) = X_1+\cdots+X_n.$ It follows that $T\sim N(n\mu,n\sigma^2).$ The family of distributions of $T$ is $\{ N(n\mu,n\sigma^2) : \mu \in\mathbb R\},$ so again $\sigma$ is fixed, so the difference between two members of this family is a different value of $\mu$. This family is complete since it admits no nontrivial unbiased estimators of $0$. The fact that this family of distributions is complete is also expressed by saying that the statistic $T$ is complete. Any time you define a statistic that is a function of $(X_1,\ldots,X_n),$ having already defined a family of distributions for that $n$-tuple, that definition induces another family of distributions for the statistic you have defined. To say that that statistic is complete merely means that that induced family of distributions is complete.
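The non-completeness of the raw sample can be seen in a simulation (a numpy sketch added for illustration): $X_1 - X_2$ is a nontrivial statistic whose expectation is $0$ for every $\mu$, which is exactly what completeness forbids.

```python
import numpy as np

rng = np.random.default_rng(6)
n_rep = 200_000

# X1 - X2 has expectation 0 for EVERY mu, yet is not identically zero:
# a nontrivial unbiased estimator of zero, so the family of the raw
# n-tuple is not complete.
means = []
for mu in (-3.0, 0.0, 5.0):
    x1 = rng.normal(mu, 1.0, n_rep)
    x2 = rng.normal(mu, 1.0, n_rep)
    means.append(np.mean(x1 - x2))
print(means)   # all approximately 0, regardless of mu
```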
What is the difference between complete statistics and complete family of distributions?
Suppose $X_1,\ldots,X_n \sim \text{i.i.d. } N(\mu,\sigma^2).$ The family of distributions is $$\left\{ N_n\left(\begin{bmatrix} \mu \\ \vdots \\ \mu \end{bmatrix},\sigma^2 \begin{bmatrix} 1 & 0 & 0 &
What is the difference between complete statistics and complete family of distributions? Suppose $X_1,\ldots,X_n \sim \text{i.i.d. } N(\mu,\sigma^2).$ The family of distributions is $$\left\{ N_n\left(\begin{bmatrix} \mu \\ \vdots \\ \mu \end{bmatrix},\sigma^2 \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}\right) \quad : \quad \mu\in \mathbb R \right\}$$ (so $\sigma$ is fixed, so that the only difference between one distribution and another in this family is a different value of $\mu$). This is the family of $n$-dimensional normal distributions in which the expected value is that column of $\mu$s and the matrix of covariances is $\sigma^2$ times the $n\times n$ identity matrix. This family of distributions of $n$-tuples is not complete since it admits nontrivial unbiased estimators of $0$; for example $X_1-X_2$ is such an estimator. Now suppose $T = T(X_1,\ldots,X_n) = X_1+\cdots+X_n.$ It follows that $T\sim N(n\mu,n\sigma^2).$ The family of distributions of $T$ is $\{ N(n\mu,n\sigma^2) : \mu \in\mathbb R\},$ so again $\sigma$ is fixed, so the difference between two members of this family is a different value of $\mu$. This family is complete since it admits no nontrivial unbiased estimators of $0$. The fact that this family of distributions is complete is also expressed by saying that the statistic $T$ is complete. Any time you define a statistic that is a function of $(X_1,\ldots,X_n),$ having already defined a family of distributions for that $n$-tuple, that definition induces another family of distributions for the statistic you have defined. To say that that statistic is complete merely means that that induced family of distributions is complete.
What is the difference between complete statistics and complete family of distributions? Suppose $X_1,\ldots,X_n \sim \text{i.i.d. } N(\mu,\sigma^2).$ The family of distributions is $$\left\{ N_n\left(\begin{bmatrix} \mu \\ \vdots \\ \mu \end{bmatrix},\sigma^2 \begin{bmatrix} 1 & 0 & 0 &
47,796
Performance of the Wilcoxon-Mann-Whitney test with large sample sizes (> 100,000) from medical data warehouses
Your sample sizes are so large it would be surprising not to find differences on almost any reasonable measure of difference between the population distributions. In the medical literature one typically sees a Wilcoxon-Mann-Whitney test being used in comparing the LOS of the two groups and reports it as a test of the difference between medians. You already seem to clearly understand that it's not really testing that without additional assumptions. Usually due to unequal variance and sample size, such an approach does not conform to the so-called "pure shift model." You don't need a pure shift alternative for it to be a test of equality of medians; however that does make it considerably easier to interpret a rejection. If I am using the W-M-W test as originally intended (testing the null hypothesis Prob(X < Y) = 0.5), will the unequal sample sizes (100,000 vs. 700,000) or the unequal variances invalidate the test result? Unequal sample sizes will not be an issue. Unequal variances are not in any way a problem for the Wilcoxon-Mann-Whitney -- though they may be an issue if you want to use it to test equality of medians (in particular, if you want to insist that your alternatives may only be location-shifts). The discreteness is at least as much an issue for that as the variance. There may be challenges to this sort of approach, but they don't really come from those directions. Is there a good test for comparing the median LOS of the two groups? Have you considered a permutation test with the test statistic the difference in sample medians? It will not be possible to compute the exact permutation distribution but it could be sampled to any desired accuracy. There's also Mood's median test. It's perhaps a bit low-powered but that's probably not much of an issue with that sort of sample size. I really don't see a great need for a test here though (you'll reject); what's likely to be a bit more interesting would be to give an interval for the difference in medians.
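The sampled permutation test suggested above could look like the following minimal sketch. The function name, the Monte Carlo size, and the exponential toy data standing in for the two LOS groups are all illustrative assumptions, not part of the original answer:

```python
import numpy as np

def perm_test_median_diff(x, y, n_perm=2000, seed=0):
    """Monte Carlo (sampled) permutation test with the difference in
    sample medians as the test statistic, two-sided."""
    rng = np.random.default_rng(seed)
    observed = np.median(x) - np.median(y)
    pooled = np.concatenate([x, y])
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # relabel the pooled sample
        stat = np.median(perm[:n]) - np.median(perm[n:])
        if abs(stat) >= abs(observed):
            hits += 1
    # add-one correction keeps the Monte Carlo p-value away from exactly 0
    return observed, (hits + 1) / (n_perm + 1)

# toy skewed samples playing the role of the two LOS groups (hypothetical data)
rng = np.random.default_rng(1)
x = rng.exponential(scale=5.0, size=300)
y = rng.exponential(scale=6.0, size=400)
diff, p = perm_test_median_diff(x, y)
```

With samples of 100,000 and 700,000 the loop would be slower but still feasible, since only the permutation sampling (not the full permutation distribution) is required.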
47,797
High correlation among two variables but VIFs do not indicate collinearity
I would use condition indexes rather than either VIFs or correlations; I wrote my dissertation about this, but you can also see the work of David Belsley, e.g. this book. But if I had to choose between VIFs and correlations, I'd go with VIFs. Belsley shows that fairly high correlations are not always problematic. If you are using R, another method that seems good to me is to use the perturb package to see if the collinearity is problematic.
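A common way to compute Belsley-style condition indexes is from the singular values of the design matrix after scaling each column to unit length. This is an illustrative sketch (the function name and the nearly-collinear toy data are assumptions, not from the original answer):

```python
import numpy as np

def condition_indexes(X):
    """Belsley-style condition indexes: scale columns to unit length,
    then return s_max / s_k for each singular value s_k."""
    Xs = X / np.linalg.norm(X, axis=0)
    s = np.linalg.svd(Xs, compute_uv=False)   # descending order
    return s.max() / s

# toy design with two nearly collinear predictors
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)           # almost a copy of x1
X = np.column_stack([np.ones(n), x1, x2])     # intercept, x1, x2
ci = condition_indexes(X)
# Belsley's conventional rule of thumb flags indexes above roughly 30
```

Unlike pairwise correlations, a large condition index can also reveal near-dependencies involving three or more columns (including the intercept), which is one reason they can disagree with VIFs.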
47,798
Log-linear regression vs. Poisson regression
A Poisson regression is a regression where the outcome variable consists of non-negative integers, and it is sensible to assume that the variance and mean of the model are the same. A log-linear regression is usually a model estimated using linear regression, where the response variable is replaced by a new variable that is the natural logarithm of the original response variable. Or, if using a GLM, this is done via a logarithmic link function (essentially the same idea, but the mechanics of fitting the model are different). The Poisson regression and log-linear regression are not the same thing, but are often used for very similar problems, particularly among older statisticians (the Poisson regression model only became widely available in software in the 1980s). Most people these days prefer a Poisson regression because it can deal with 0 values, whereas you will get an error using a log-linear regression. It is possible to use a Poisson regression to model data from a contingency table, where the predictor variables are the dimensions (e.g., row and column labels) of a contingency table. This can be referred to as a log-linear model. Perhaps some people call it a log-linear regression (one of the challenges of statistics is that the language is used rather loosely, but many people act as if the language is precise).
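The contrast between the two approaches can be sketched in a few lines. Below, a Poisson regression (log link) is fitted by Newton-Raphson on simulated counts; the OLS-on-logs route is only used to get a starting value, and the comment notes why it fails on its own when zeros are present. The simulated coefficients and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.5 + 0.8 * x))   # true intercept 0.5, slope 0.8
X = np.column_stack([np.ones(n), x])

# a pure "log-linear" OLS would need log(y), which is -inf wherever y == 0;
# the ad hoc fix log(y + 1) avoids the error but changes the model being fit.
# Here it only serves as a rough starting value:
beta = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)[0]

# Poisson regression (log link) via Newton-Raphson; y == 0 poses no problem
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)                       # score
    hess = X.T @ (mu[:, None] * X)              # observed = expected information
    beta += np.linalg.solve(hess, grad)
```

The Newton iteration here is the same IRLS scheme a GLM routine would use; in practice one would call a library GLM fitter rather than hand-roll it.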
47,799
Does Fisher's Exact test for a $2\times 2$ table use the Non-central Hypergeometric or the Hypergeometric distribution?
This is what the R help says. For 2 by 2 tables, the null of conditional independence is equivalent to the hypothesis that the odds ratio equals one. ‘Exact’ inference can be based on observing that in general, given all marginal totals fixed, the first element of the contingency table has a non-central hypergeometric distribution with non-centrality parameter given by the odds ratio (Fisher, 1935). So note the two facts in that quote: the null of conditional* independence is equivalent to the hypothesis that the odds ratio equals one the first element of the contingency table has a non-central hypergeometric distribution with non-centrality parameter given by the odds ratio If you make the odds ratio 1, then that's the central hypergeometric. See, for example the Wikipedia article on Fisher's noncentral hypergeometric distribution which states it explicitly: The two distributions* are both equal to the (central) hypergeometric distribution when the odds ratio is 1. * [Fisher's and Wallenius' noncentral hypergeometrics are being discussed; they both give the ordinary hypergeometric when the odds ratio is 1] So there's no contradiction - under the null, it's the central hypergeometric. Why the R help didn't add just a few words to make that clear, I don't know. -- * it's the margins being conditioned on there
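The point that the test reduces to the central hypergeometric under the null can be made concrete: a two-sided Fisher exact p-value is just a sum of central hypergeometric probabilities given the margins. This is a minimal sketch (the function name and the example table are assumptions for illustration):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]].
    Under H0 (odds ratio = 1), the top-left cell follows the *central*
    hypergeometric distribution given all four margins."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    total = comb(n, row1)

    def pmf(k):  # central hypergeometric pmf of the top-left cell
        return comb(col1, k) * comb(n - col1, row1 - k) / total

    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    p_obs = pmf(a)
    # sum all tables at least as extreme (probability no larger than observed)
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

p = fisher_exact_2x2(1, 9, 11, 3)   # small illustrative table
```

Only when testing a null odds ratio other than 1, or computing a confidence interval for the odds ratio, does the noncentral version with a free noncentrality parameter come into play.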
47,800
Does Fisher's Exact test for a $2\times 2$ table use the Non-central Hypergeometric or the Hypergeometric distribution?
The noncentral hypergeometric distribution is a generalization of the hypergeometric distribution. The latter is used for the Fisher exact test. However, it frequently seems to be referred to as the hypergeometric distribution as if the question of noncentrality did not exist. I suppose that the thinking for this is that the noncentrality is just the introduction of weighting for the hypergeometric distribution, and people sometimes use shorthand and refer to weighted functions by the function name itself; e.g., when the weighting is neutral, it would be exactly that function. @gammer helpfully suggests that the difference amounts to independent random variables for the hypergeometric case, and the weighted (noncentral) hypergeometric distribution for the case where they are not independent. BTW, (+1) good question. Note however, that this is not the only numerical approach to problems of the Fisher exact test type, see Given the power of computers these days, is there ever a reason to do a chi-squared test rather than Fisher's exact test? for further information.
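The "weighting" view described above can be written down directly: Fisher's noncentral hypergeometric pmf is the ordinary hypergeometric pmf with each term weighted by $\omega^k$, where $\omega$ is the odds ratio, then renormalized. A minimal sketch (the function name and parameter choices are illustrative assumptions):

```python
from math import comb

def fnch_pmf(k, n1, n2, m, omega):
    """Fisher's noncentral hypergeometric pmf: probability of k successes
    from a group of size n1 (vs n2) in m total draws, odds ratio omega.
    The weight omega**k is the only change from the central case."""
    lo, hi = max(0, m - n2), min(n1, m)
    w = {j: comb(n1, j) * comb(n2, m - j) * omega ** j for j in range(lo, hi + 1)}
    return w[k] / sum(w.values())

# with omega = 1 the weights are neutral and the pmf collapses to the
# ordinary (central) hypergeometric, as the Wikipedia quote states:
central = comb(5, 2) * comb(7, 2) / comb(12, 4)
```

This also makes the shorthand in the answer concrete: the "hypergeometric distribution" used by Fisher's exact test is just this pmf at $\omega = 1$.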