idx (int64, 1–56k) | question (string, 15–155 chars) | answer (string, 2–29.2k chars, nullable) | question_cut (string, 15–100 chars) | answer_cut (string, 2–200 chars, nullable) | conversation (string, 47–29.3k chars) | conversation_cut (string, 47–301 chars)
42,601
|
Is there a "pure R" implementation for loess? (with no C code?)
|
The loess.demo function in the TeachingDemos package replicates some of the internals in plain R (it also uses the built-in C-code version). You could use that function as a starting place, depending on what you want to do.
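To make the idea concrete, here is a minimal sketch of the core loess computation (tricube-weighted local polynomial fits) in plain NumPy. This is an illustrative toy, not the TeachingDemos code, and it omits loess's robustness iterations and exact neighbourhood conventions:

```python
import numpy as np

def loess(x, y, span=0.5, degree=1):
    """Toy local regression: for each point, fit a tricube-weighted
    polynomial to its k nearest neighbours (no robustness iterations)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(degree + 1, int(np.ceil(span * n)))  # neighbourhood size
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        nbr = np.argsort(d)[:k]                # k nearest neighbours
        h = d[nbr].max() or 1.0                # local bandwidth
        w = (1 - (d[nbr] / h) ** 3) ** 3       # tricube weights
        coef = np.polyfit(x[nbr], y[nbr], degree, w=np.sqrt(w))
        fitted[i] = np.polyval(coef, x[i])
    return fitted

x = np.linspace(0.0, 1.0, 20)
y = 2 * x + 1                       # exactly linear data
assert np.allclose(loess(x, y), y)  # a local linear fit recovers a line
```

The final assertion is a sanity check: a weighted local *linear* fit reproduces exactly linear data regardless of the weights.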
|
42,602
|
Off-diagonal range for guaranteed positive definiteness?
|
The maximum is $1/(p-1)$.
To see this, note first that the eigenvalues of the matrix with all off-diagonal entries equal to a constant $x$ are $1-x$ (with multiplicity $p-1$) and $1+(p-1)x$. When $x \lt -1/(p-1)$, the smallest eigenvalue will therefore be negative implying the matrix is not positive definite. Because the smallest eigenvalue is a continuous function of the entries, we can find a positive $\epsilon$ such that when all off-diagonal entries are in the interval $[x, x+\epsilon]$ (but no longer all equal to each other), the smallest eigenvalue remains negative.
Now suppose $a \gt 1/(p-1)$. Setting $x=-a$, choose an $\epsilon$ as just described and if necessary make it even smaller, but still positive, to assure that $a - \epsilon \gt 1/(p-1)$. Assuming the off-diagonal entries are independently generated, the probability that all entries lie in the interval $[-a, -a+\epsilon]$ equals $(\epsilon / (2a))^{p(p-1)/2} \gt 0$, showing that the matrix has a positive probability of not being positive definite.
This has established $1/(p-1)$ as an upper bound for $a$. We need to show that it suffices. Consider an arbitrary symmetric $p$ by $p$ matrix $(a_{ij})$ with unit diagonal and all off-diagonal entries less than $1/p$ in absolute value. By a suitable induction on $p$, and by virtue of Sylvester's Criterion, it suffices to show this matrix has positive determinant. Row-reduction using the first row reduces this question to considering the sign of a $p-1$ by $p-1$ determinant with entries $a_{ij} / (1 + a_{1i})$. Because $-1/p \lt a_{1i} \lt 1/p$, these clearly are less than $1/(p-1)$ in absolute value, so we are done by induction. (The base case $p=2$ is trivial.)
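The eigenvalue facts used in this argument are easy to verify numerically; the following NumPy check (with arbitrary illustrative values of $p$ and $x$) confirms both the closed-form eigenvalues and the loss of definiteness just past $-1/(p-1)$:

```python
import numpy as np

p, x = 5, 0.25  # arbitrary illustrative dimension and off-diagonal value
A = np.full((p, p), x) + (1 - x) * np.eye(p)  # unit diagonal, x elsewhere
eig = np.sort(np.linalg.eigvalsh(A))
# eigenvalues should be 1 - x (multiplicity p-1) and 1 + (p-1)x
assert np.allclose(eig[:-1], 1 - x)
assert np.isclose(eig[-1], 1 + (p - 1) * x)

# just below -1/(p-1), the smallest eigenvalue goes negative
x2 = -1 / (p - 1) - 0.01
B = np.full((p, p), x2) + (1 - x2) * np.eye(p)
assert np.linalg.eigvalsh(B).min() < 0
```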
|
42,603
|
Outlier detection for generic time series
|
You are quite right that the ARIMA model you are using (first differences) may not be appropriate to detect outliers. Outliers can be Pulses, Level Shifts, Seasonal Pulses or Local Time Trends. You might want to google "INTERVENTION DETECTION IN TIME SERIES" or "AUTOMATIC INTERVENTION DETECTION" to get some reading matter on intervention detection. Note that this is not the same as INTERVENTION MODELLING, which often assumes the nature of the outlier rather than empirically identifying it. Following mpkitas's remarks, one would include the empirically identified outliers as dummy predictor series in order to accommodate their impact. A lot of work has been done on identifying outliers using a null filter and then identifying the appropriate ARIMA model. Some commercial packages assume that you identify the ARIMA model first (possibly flawed by the outliers) and then identify the outliers. More general procedures examine both strategies. Your current procedure follows the "use the up-front filter first" approach but is also flawed by the assumptions of that filter.
Some more reflections:
To detect an anomaly, you need a model which provides an expectation. Intervention detection answers the question "What is the probability of observing what I observed before I observed it?" An ARIMA model can then be used to identify the "unusual" time series observations. The problem is that you can't catch an outlier without a model (at least a mild one) for your data. Else how would you know that a point violated that model? In fact, the process of growing understanding and finding and examining outliers must be iterative. This isn't a new thought. Bacon, writing in Novum Organum about 400 years ago, said: "Errors of Nature, Sports and Monsters correct the understanding in regard to ordinary things, and reveal general forms. For whoever knows the ways of Nature will more easily notice her deviations; and, on the other hand, whoever knows her deviations will more accurately understand Nature." The single model you are imposing on all your series is clearly an inadequate way to go.
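As a toy illustration of "you need a model which provides an expectation", here is a deliberately crude sketch: fit an AR(1) by least squares and flag observations whose one-step residuals are extreme on a robust (MAD) scale. This only catches Pulses, not Level Shifts or the other intervention types discussed above, and the AR(1) filter is itself an up-front assumption of exactly the kind being criticised:

```python
import numpy as np

def flag_pulses(y, threshold=3.0):
    """Crude Pulse screen (illustrative only): AR(1) fit by least
    squares, then flag points whose one-step residuals exceed
    `threshold` robust (MAD-based) standard deviations."""
    y = np.asarray(y, float)
    # AR(1) coefficient via least squares, no intercept
    phi = np.linalg.lstsq(y[:-1, None], y[1:], rcond=None)[0][0]
    resid = y[1:] - phi * y[:-1]
    centre = np.median(resid)
    scale = 1.4826 * np.median(np.abs(resid - centre))  # robust sd
    return 1 + np.flatnonzero(np.abs(resid - centre) > threshold * scale)

rng = np.random.default_rng(0)
y = 10 + 0.1 * np.cumsum(rng.normal(size=200))  # slowly wandering series
y[120] += 8  # inject a single Pulse
print(flag_pulses(y))  # the pulse stands out at (and just after) t = 120
```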
|
42,604
|
Outlier detection for generic time series
|
Winsorization replaces extreme data values with less extreme values.
http://www.r-bloggers.com/winsorization/
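A minimal sketch of the idea (the winsorize helper below is a hypothetical illustration, not the code from the linked post):

```python
import numpy as np

def winsorize(x, lower=0.05, upper=0.95):
    """Hypothetical helper: pull values outside the given sample
    quantiles in to the quantile values themselves."""
    x = np.asarray(x, float)
    lo, hi = np.quantile(x, [lower, upper])
    return np.clip(x, lo, hi)

x = np.array([1, 2, 3, 4, 5, 100.0])   # one extreme value
w = winsorize(x)
assert w.max() < 100 and w.min() >= 1  # the extreme value is pulled in
```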
|
42,605
|
What is the difference between hazard and crude ratio?
|
Actually, it sounds like the hazard ratio you obtained was the adjusted ratio, whereas by definition the crude ratio has no adjustment for covariates.
When you did your multivariable Cox regression, you included some covariates in the model, and the crude ratio would be the hazard ratio without these covariates included in the model.
To get this, you can re-run your Cox regression without including the covariates.
I would recommend reading the following link: http://faculty.chass.ncsu.edu/garson/PA765/cox.htm
|
42,606
|
Is it necessary to perform a transformation on proportion data if it's reasonably well behaved?
|
It depends. If your goal is prediction, then you may not need to do any gymnastics to get a more theoretically sound model if the one in hand does well. But of course you should always be aware that a model that fits present data well may not perform well on new data. You can try to get a feel for that using cross-validation, although you simply might not have important aspects of the distribution represented in your sample.
If you want to make inferences using some of the parameters in the model then that model should be motivated by the problem at hand.
Anyway, a first step is to just look at the response. Is it roughly bell-shaped? Did you try the arcsine transform? Does the transformed distribution look (much) different? If the distribution of the proportions is fairly tight and located somewhere in the middle the transformation might not do much. And then, of course, does the transformation make a difference in the regression?
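A quick illustration of the last point with made-up proportions: for values that sit in the middle of $(0,1)$, the arcsine-square-root transform is nearly linear, so it changes the shape of the distribution very little:

```python
import numpy as np

p = np.array([0.30, 0.42, 0.48, 0.55, 0.61])  # made-up mid-range proportions
t = np.arcsin(np.sqrt(p))  # the arcsine (angular) transform
# near the middle of (0, 1) the transform is almost a straight-line
# rescaling, so p and t are very nearly perfectly correlated
corr = np.corrcoef(p, t)[0, 1]
assert corr > 0.999
```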
|
42,607
|
How do I ensure PROC ARIMA is performing the correct parameterization of input variables?
|
Specifying the Input Variables' ARIMA Models
The ARIMA Procedure uses the results of the first pair(s) of identify and estimate statements (i.e., the identify and estimate statements for the input variables) to create models to forecast the values of the input variable(s) (also called exogenous variable(s)) after the last point in time at which each of those input variables is observed. In other words, those statements specify the models that are used whenever values for the input variables are needed for periods not yet observed.
Thus, the model for VariableY is specified as
identify var=VariableY(PeriodsOfDifferencing);
estimate p=OrderOfAutoregression q=OrderOfMovingAverage;
where VariableY is modeled as $ARIMA(p,d,q)$ with $p$ = OrderOfAutoregression, $d$ = the order of differencing (determined from PeriodsOfDifferencing), and $q$ = OrderOfMovingAverage.
Specifying Differencing for the Main and Input Series in the ARIMAX Model
The order(s) of differencing to apply to the input variables are specified in the crosscorr option; for modeling VariableX with inputs VariableY and VariableZ, the SAS code is:
identify var=VariableX(DifferencingX) crosscorr=( VariableY(DifferencingY) VariableZ(DifferencingZ) );
where DifferencingX, DifferencingY, and DifferencingZ are the period(s) of differencing for VariableX, VariableY, and VariableZ, respectively.
Specifying the Order of Autoregression and the Order of Moving Average for the Main and Input Series in the ARIMAX Model
The number of input variable lags to include in the model is specified in the transfer function (in the input option). The beginning of the estimate line sets the orders of autoregression and moving average for the main series (i.e., the series for which a model or forecasts are ultimately being sought):
estimate p=AutoregressionX q=MovingAverageX
where VariableX is modeled as $ARIMAX(p,d,q,b)$ with $p$ = AutoregressionX and $q$ = MovingAverageX.
The input option in the same estimate statement sets the orders of autoregression and moving average for the ARIMAX model. The numerator factors for a transfer function for an input series are like the MA part of the ARMA model for the noise series. The denominator factors for a transfer function for an input series are like the AR part of the ARMA model for the noise series. (All examples below will simplify the example down to a single input series VariableY instead of showing both VariableY and VariableZ.)
When specified without any numerator or denominator terms, the input variable is treated as a pure regression term (i.e., the value of the input variable in the current period is used without any lags, whether it is forecast by the input variable's ARIMA model or already present as an observed value in the input series): estimate...input=( VariableY );.
Numerator terms are represented in parentheses before the input variable. estimate...input=( (1 2 3) VariableY ); produces a regression on VariableY, LAG(VariableY), LAG2(VariableY), and LAG3(VariableY).
Denominator terms are represented in parentheses after a slash and before the input variable. estimate...input=( / (1) VariableY ); estimates the effect of VariableY as an infinite distributed lag model with exponentially declining weights.
Initial shift is represented before a dollar sign; estimate...input=( k $ ( $\omega$-lags ) / ( $\delta$-lags ) VariableY ); represents the form $B^k \cdot \left(\frac{\omega (B)}{\delta (B)}\right) \cdot \text{VariableY}_t$. The value of k will be added to the exponent of $B$ for all numerator and denominator terms. To use an AR-like shift in the input variable without including the un-shifted (i.e., un-lagged or pure regression) term, use this operator instead of numerator terms in parentheses. For example, to set a 6, 12, and 18 month shift in the input series VariableY without the un-shifted term, the statement would be estimate...input=( 6 $ (6 12) VariableY ); (this results in shifts of 6, 6 + 6 (i.e., 12), and 6 + 12 (i.e., 18)).
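The shift arithmetic in the last example can be checked with a few lines of code. The input_lags helper below is a hypothetical illustration of the stated rule (the shift k is added to the un-shifted term and to every numerator lag); it is not part of SAS:

```python
# hypothetical helper mirroring the "$" shift arithmetic described above:
# the shift k is added to lag 0 (the un-shifted term) and to every
# listed numerator lag
def input_lags(shift, numerator_lags):
    return sorted(shift + lag for lag in [0, *numerator_lags])

# the "6 $ (6 12)" example: shifts of 6, 12 and 18
assert input_lags(6, [6, 12]) == [6, 12, 18]
# a plain "(1 2 3)" numerator with no shift: lags 0 through 3
assert input_lags(0, [1, 2, 3]) == [0, 1, 2, 3]
```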
Summary
The first pair(s) of identify and estimate statements are used to prepare any necessary forecasted values for the input variable(s).
The last pair of identify and estimate statements run the actual ARIMAX model, and use forecasted values for the input variable(s) (generated from the first pair(s) of identify and estimate statements) when necessary.
The relationship between the main variable and the input variable(s) is specified in the crosscorr option of the identify statement and the input option of the estimate statement. The relationship between the main variable and the input variable(s) can be defined as a run-of-the-mill regression relationship; or it can be defined with differencing, AR term(s), and/or MA term(s).
Attribution
Although this answer is my own, I was able to come up with the answer based on substantial help (and some quotations) from the official SAS documentation ("The ARIMA Procedure: Rational Transfer Functions and Distributed Lag Models", "The ARIMA Procedure: Specifying Inputs and Transfer Functions", "The ARIMA Procedure: Input Variables and Regression with ARMA Errors", and "The ARIMA Procedure: Differencing"), and from direction found in this answer and comments by IrishStat.
|
42,608
|
How do I ensure PROC ARIMA is performing the correct parameterization of input variables?
|
I have reviewed the output and the forecast reflects an AR(12) in the error term, which translates to a 12-period weighted forecast using the last 12 values of both your predictor series, as the AR polynomial acts as a multiplier across all series (X, Y, Z). Without getting into it in great detail, your model specification, or rather lack of specification, is in my opinion "found wanting". Unfortunately the SAS procedure assumes that the differencing operators required to make the original series stationary are the same as the differencing operators in the transfer function. Furthermore, the ARIMA component in the transfer function is the same as the ARIMA component for the univariate analysis of the dependent series. This structure should be identified from the residuals of a suitably formed transfer function that does not have an ARIMA structure. Finally, your specification (by default) of the ARIMA component in the transfer function is a "common factor". What you need to do is to identify the forms of differencing (if any) for all three series and the nature of the response (PDL/ADL/lag structure) for EACH of the two inputs. After estimating such a tentative model, verify that there are no Level Shifts, Local Time Trends, Seasonal Pulses or one-time Pulses in the tentative set of model errors via intervention detection schemes. Furthermore, one must ensure that the model errors have constant variance, that the parameters of the model haven't changed over time, that the ACF of the model errors is free of any significant structure, AND that these errors are uncorrelated with either of the two pre-whitened input series.
In summary, you are getting what you want, but you might not want what you are getting! You might consider posting the original data for the three series and have the list members (including myself) aid you in constructing a minimally sufficient model.
EDIT: I found some material on the web that might be of help to you.
For illustration, say the non-zero lags are 2 and 4. The process y might be estimated as follows using an x as an input.
estimate input=( 2$(2) x );
The input is of the form $cB^2 + dB^4 = B^2(c + dB^2)$. It is this latter factored form that gives the form of the input statement.
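The factoring step can be verified numerically by treating the backshift symbol $B$ as an ordinary variable (the weights are arbitrary illustrative values):

```python
import numpy as np

# check that c*B^2 + d*B^4 factors as B^2 * (c + d*B^2)
c, d = 0.7, -0.3              # arbitrary illustrative transfer weights
B = np.linspace(-2.0, 2.0, 41)  # backshift symbol treated as a variable
assert np.allclose(c * B**2 + d * B**4, B**2 * (c + d * B**2))
```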
|
42,609
|
How to test if the mean equals the median?
|
This is a bootstrap confidence interval for the (median - mean) difference in R:
z = function() {s = sample(women$weight, replace=TRUE); median(s)-mean(s)}
k = replicate(10000, z())
c(quantile(k, c(.025, .5, .975)), mean=mean(k), sd=sd(k), qgte0=mean(k>=0))
2.5% 50% 97.5% mean sd qgte0
-7.933333 -1.333333 5.800000 -1.218007 3.513462 0.362100
I'm still pondering whether the mean and SD of the k resampled differences could be used in a Wald(-like) test, or whether the proportion of resamples greater than or equal to 0 can be viewed as a one-sided p value under some assumptions; comments on this are welcome.
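For readers outside R, the same resampling scheme can be sketched in Python. The weights below are transcribed from R's built-in women dataset (treat the hard-coded values, and the Wald-like statistic, as illustrative assumptions rather than an endorsed test):

```python
import numpy as np

# weights transcribed from R's built-in `women` dataset (assumption: verify in R)
w = np.array([115, 117, 120, 123, 126, 129, 132, 135, 139,
              142, 146, 150, 154, 159, 164], dtype=float)

rng = np.random.default_rng(0)
k = np.array([np.median(s) - np.mean(s)
              for s in rng.choice(w, size=(10_000, w.size), replace=True)])

ci = np.quantile(k, [0.025, 0.975])   # percentile bootstrap CI for median - mean
z = k.mean() / k.std(ddof=1)          # the Wald-like statistic pondered above
print(ci, z)
```

Since the interval comfortably covers 0, the data give no evidence of a mean-median difference here.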
|
How to test if the mean equals the median?
|
This is a bootstrap confidence interval for the (median - mean) difference in R:
z = function() {s = sample(women$weight, replace=TRUE); median(s)-mean(s)}
k = replicate(10000, z())
c(quantile(k, c(.0
|
How to test if the mean equals the median?
This is a bootstrap confidence interval for the (median - mean) difference in R:
z = function() {s = sample(women$weight, replace=TRUE); median(s)-mean(s)}
k = replicate(10000, z())
c(quantile(k, c(.025, .5, .975)), mean=mean(k), sd=sd(k), qgte0=mean(k>=0))
2.5% 50% 97.5% mean sd qgte0
-7.933333 -1.333333 5.800000 -1.218007 3.513462 0.362100
I'm still pondering whether the mean and SD of the k resampled differences could be used in a Wald(-like) test, or whether the proportion of resamples greater than or equal to 0 can be viewed as a one-sided p value under some assumptions; comments on this are welcome.
|
How to test if the mean equals the median?
This is a bootstrap confidence interval for the (median - mean) difference in R:
z = function() {s = sample(women$weight, replace=TRUE); median(s)-mean(s)}
k = replicate(10000, z())
c(quantile(k, c(.0
|
42,610
|
How to test if the mean equals the median?
|
A permutation test can easily be set up to use the (mean - median) difference as its test statistic. That would give you an exact P value for the difference.
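One way such a test could be realized, assuming the null hypothesis implies symmetry about a common center (which is what makes mean = median hold), is a sign-flip permutation scheme. This Python sketch is illustrative; the centering choice and the statistic are assumptions, not a canonical recipe:

```python
import numpy as np

def mean_median_perm_test(x, n_perm=4000, seed=0):
    """Sign-flip permutation test of mean == median, assuming symmetry under H0."""
    rng = np.random.default_rng(seed)
    center = np.mean(x)
    resid = x - center
    obs = np.mean(x) - np.median(x)
    stats = np.empty(n_perm)
    for i in range(n_perm):
        # flipping signs of residuals preserves the null (symmetric) distribution
        xp = center + rng.choice([-1.0, 1.0], size=resid.size) * resid
        stats[i] = np.mean(xp) - np.median(xp)
    return np.mean(np.abs(stats) >= abs(obs))  # two-sided p value

rng = np.random.default_rng(1)
p_sym = mean_median_perm_test(rng.normal(size=50))        # symmetric data
p_skew = mean_median_perm_test(rng.exponential(size=50))  # right-skewed data
print(p_sym, p_skew)
```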
|
How to test if the mean equals the median?
|
A permutation test can easily be set up to use the (mean - median) difference as its test statistic. That would give you an exact P value for the difference.
|
How to test if the mean equals the median?
A permutation test can easily be set up to use the (mean - median) difference as its test statistic. That would give you an exact P value for the difference.
|
How to test if the mean equals the median?
A permutation test can easily be set up to use the (mean - median) difference as its test statistic. That would give you an exact P value for the difference.
|
42,611
|
How to check for bivariate Gaussianity without the use of regression?
|
I have recently come across this method that was displayed in Johnson and Wichern.
Let the data points that you want to test for bivariate normality be designated as $\{ x_{i} \}$. Next, compute the sample covariance matrix and designate it as $S$.
For each observed point calculate $d_{j}^{2} = (x_{j} - \bar{x})^{T} S^{-1} (x_{j} - \bar{x})$. Order the values of the $d_{j}^{2}$ from low to high. The last mathematical step is to plot the pairs $(q_{c,p}((j- \frac{1}{2})/n), d_{j}^{2})$, where $q_{c,p}((j- \frac{1}{2})/n)$ is the $100(j- \frac{1}{2})/n$ percentile of the chi-squared distribution with $p$ degrees of freedom ($p = 2$ in the bivariate case). The plot should be approximately a straight line if the data have a bivariate normal distribution.
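To make the recipe concrete, here is a Python sketch (my own illustration, not code from Johnson and Wichern). For $p=2$ the chi-squared quantile has the closed form $-2\ln(1-u)$, which avoids needing a special-function library:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 2.0]], size=200)

xbar = x.mean(axis=0)
S = np.cov(x, rowvar=False)                               # sample covariance matrix
Sinv = np.linalg.inv(S)
d2 = np.einsum('ij,jk,ik->i', x - xbar, Sinv, x - xbar)   # squared Mahalanobis distances
d2 = np.sort(d2)                                          # order from low to high

n = len(x)
u = (np.arange(1, n + 1) - 0.5) / n
q = -2.0 * np.log(1.0 - u)        # chi-squared (2 df) quantiles in closed form

qq_corr = np.corrcoef(q, d2)[0, 1]
print(qq_corr)   # near 1 when the (q, d2) plot is close to a straight line
```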
|
How to check for bivariate Gaussianity without the use of regression?
|
I have recently come across this method that was displayed in Johnson and Wichern.
Let the data points that you want to test for bivariate normality be designated as $\{ x_{i} \}$. Next, compute the s
|
How to check for bivariate Gaussianity without the use of regression?
I have recently come across this method that was displayed in Johnson and Wichern.
Let the data points that you want to test for bivariate normality be designated as $\{ x_{i} \}$. Next, compute the sample covariance matrix and designate it as $S$.
For each observed point calculate $d_{j}^{2} = (x_{j} - \bar{x})^{T} S^{-1} (x_{j} - \bar{x})$. Order the values of the $d_{j}^{2}$ from low to high. The last mathematical step is to plot the pairs $(q_{c,p}((j- \frac{1}{2})/n), d_{j}^{2})$, where $q_{c,p}((j- \frac{1}{2})/n)$ is the $100(j- \frac{1}{2})/n$ percentile of the chi-squared distribution with $p$ degrees of freedom ($p = 2$ in the bivariate case). The plot should be approximately a straight line if the data have a bivariate normal distribution.
|
How to check for bivariate Gaussianity without the use of regression?
I have recently come across this method that was displayed in Johnson and Wichern.
Let the data points that you want to test for bivariate normality be designated as $\{ x_{i} \}$. Next, compute the s
|
42,612
|
Operations on probability distributions of continuous random variables
|
If you have a random variable $X$ distributed with continuous distribution function $F$, and you define a random variable $Y=h(X)$, then what is its distribution function? Let's just use the definition of distribution function:
\begin{align}
G(y) &= P\{Y \le y\} \\
&= P\{h(X) \le y\}
\end{align}
If $h$ is monotonically increasing (hence invertible) and differentiable, then the next steps are easy:
\begin{align}
G(y) &= P\{X \le h^{-1}(y)\} \\
&= F(h^{-1}(y))\\
g(y) &= \frac{d}{dy}G(y) = f(h^{-1}(y))\frac{d}{dy}h^{-1}(y)
\end{align}
By considering the decreasing case, you can see that the general formula for monotonic $h$ is:
\begin{align}
g(y) &= f(h^{-1}(y))|\frac{d}{dy}h^{-1}(y)|
\end{align}
You are interested in cases where $h$ is not invertible, though, and in cases where the function $h$ takes many arguments and returns a single value but where the random variables are continuous. So, consider a bunch of random variables $X_1,\ldots,X_K$ with continuous joint distribution function $F(X_1,\ldots,X_K)$ and a random variable $Y$ defined by a differentiable function $h$ as $Y=h(X_1,\ldots,X_K)$
\begin{align}
G(y) &= P\{Y \le y\} \\
&= P\{h(X_1,\ldots,X_K) \le y\}\\
&= \int_{h(X_1,\ldots,X_K) \le y} f(X_1,\ldots,X_K) d X_1 d X_2 \ldots dX_K
\end{align}
The random variable $Y$ has density:
\begin{align}
g(y) &= \frac{d}{dy} \int_{h(X_1,\ldots,X_K) \le y} f(X_1,\ldots,X_K) d X_1 d X_2 \ldots dX_K
\end{align}
This is not that useful in practice, though. Generally, you are going to have to find a way, on a function-by-function basis, to make evaluating these two items tractable. In the case of $Y=sin(X)$, $sin$ is periodic, so you just chop up its domain into half-cycles (within which it is monotonic and invertible). You can get the density (except at the points where $Y=\pm 1$) from the infinite series (which, as a practical matter, you approximate by just leaving off the terms where $f(x)$ is very small):
\begin{align}
g(y) &= \sum_{x:sin(x)=y} f(x) \left| \frac{d}{dy} sin^{-1}(y) \right|
\end{align}
For your example of $Y=X_1X_2$:
\begin{align}
G(y) &= P\{Y \le y\} \\
&= \int_{X_1X_2 \le y} f(X_1,X_2) d X_1 d X_2
\end{align}
Because of the way sign and multiplication work, evaluating this integral is a bit annoying. Let's evaluate it for a $y\ge0$. For a $y$ like this, $X_1X_2\le y$ any time one but not both $X$s are negative, any time both are positive but not too big, and any time both are negative but not too big in absolute value:
\begin{align}
G(y) &= \int_0^\infty \int_{-\infty}^0 f(X_1,X_2) d X_2 d X_1 + \int_{-\infty}^0 \int_0^\infty f(X_1,X_2) d X_2 d X_1\\
&+ \int_0^\infty \int_0^{\frac{y}{X_1}} f(X_1,X_2) d X_2 d X_1 + \int_{-\infty}^0 \int_{\frac{y}{X_1}}^0 f(X_1,X_2) d X_2 d X_1
\end{align}
Then the density of $Y$ is going to be:
\begin{align}
g(y) &= \frac{d}{dy}G(y)\\
&= \int_0^\infty \frac{1}{X_1}f(X_1,\frac{y}{X_1}) d X_1 + \int_{-\infty}^0 -\frac{1}{X_1} f(X_1,\frac{y}{X_1}) d X_1
\end{align}
|
Operations on probability distributions of continuous random variables
|
If you have a random variable $X$ distributed with continuous distribution function $F$, and you define a random variable $Y=h(X)$, then what is its distribution function? Let's just use the definiti
|
Operations on probability distributions of continuous random variables
If you have a random variable $X$ distributed with continuous distribution function $F$, and you define a random variable $Y=h(X)$, then what is its distribution function? Let's just use the definition of distribution function:
\begin{align}
G(y) &= P\{Y \le y\} \\
&= P\{h(X) \le y\}
\end{align}
If $h$ is monotonically increasing (hence invertible) and differentiable, then the next steps are easy:
\begin{align}
G(y) &= P\{X \le h^{-1}(y)\} \\
&= F(h^{-1}(y))\\
g(y) &= \frac{d}{dy}G(y) = f(h^{-1}(y))\frac{d}{dy}h^{-1}(y)
\end{align}
By considering the decreasing case, you can see that the general formula for monotonic $h$ is:
\begin{align}
g(y) &= f(h^{-1}(y))|\frac{d}{dy}h^{-1}(y)|
\end{align}
You are interested in cases where $h$ is not invertible, though, and in cases where the function $h$ takes many arguments and returns a single value but where the random variables are continuous. So, consider a bunch of random variables $X_1,\ldots,X_K$ with continuous joint distribution function $F(X_1,\ldots,X_K)$ and a random variable $Y$ defined by a differentiable function $h$ as $Y=h(X_1,\ldots,X_K)$
\begin{align}
G(y) &= P\{Y \le y\} \\
&= P\{h(X_1,\ldots,X_K) \le y\}\\
&= \int_{h(X_1,\ldots,X_K) \le y} f(X_1,\ldots,X_K) d X_1 d X_2 \ldots dX_K
\end{align}
The random variable $Y$ has density:
\begin{align}
g(y) &= \frac{d}{dy} \int_{h(X_1,\ldots,X_K) \le y} f(X_1,\ldots,X_K) d X_1 d X_2 \ldots dX_K
\end{align}
This is not that useful in practice, though. Generally, you are going to have to find a way, on a function-by-function basis, to make evaluating these two items tractable. In the case of $Y=sin(X)$, $sin$ is periodic, so you just chop up its domain into half-cycles (within which it is monotonic and invertible). You can get the density (except at the points where $Y=\pm 1$) from the infinite series (which, as a practical matter, you approximate by just leaving off the terms where $f(x)$ is very small):
\begin{align}
g(y) &= \sum_{x:sin(x)=y} f(x) \left| \frac{d}{dy} sin^{-1}(y) \right|
\end{align}
For your example of $Y=X_1X_2$:
\begin{align}
G(y) &= P\{Y \le y\} \\
&= \int_{X_1X_2 \le y} f(X_1,X_2) d X_1 d X_2
\end{align}
Because of the way sign and multiplication work, evaluating this integral is a bit annoying. Let's evaluate it for a $y\ge0$. For a $y$ like this, $X_1X_2\le y$ any time one but not both $X$s are negative, any time both are positive but not too big, and any time both are negative but not too big in absolute value:
\begin{align}
G(y) &= \int_0^\infty \int_{-\infty}^0 f(X_1,X_2) d X_2 d X_1 + \int_{-\infty}^0 \int_0^\infty f(X_1,X_2) d X_2 d X_1\\
&+ \int_0^\infty \int_0^{\frac{y}{X_1}} f(X_1,X_2) d X_2 d X_1 + \int_{-\infty}^0 \int_{\frac{y}{X_1}}^0 f(X_1,X_2) d X_2 d X_1
\end{align}
Then the density of $Y$ is going to be:
\begin{align}
g(y) &= \frac{d}{dy}G(y)\\
&= \int_0^\infty \frac{1}{X_1}f(X_1,\frac{y}{X_1}) d X_1 + \int_{-\infty}^0 -\frac{1}{X_1} f(X_1,\frac{y}{X_1}) d X_1
\end{align}
|
Operations on probability distributions of continuous random variables
If you have a random variable $X$ distributed with continuous distribution function $F$, and you define a random variable $Y=h(X)$, then what is its distribution function? Let's just use the definiti
|
42,613
|
Operations on probability distributions of continuous random variables
|
my 2 cents:
First, locally, if $\text{pdf}_s(s)$ is the probability density function of a random variable $s$, and we apply a transform $s = s(t)$, one would have $$\text{pdf}_t(t) = \text{pdf}_s(s(t)) \frac{ds(t)}{dt} ... (1)$$
Now let's look at $x \rightarrow \sin{x}$ transform.
Locally, one could replace (1) as $t \rightarrow x$, $s \rightarrow \sin{x}$, and have
$$\text{pdf}_x(x) = \text{pdf}_{\sin{x}} (\sin{x}) \cos{x}$$
, or $$\text{pdf}_y(y) = \text{pdf}_x(x) / \cos{x}$$, where $y=\sin{x}$.
As this is only a "local" equation, for $y=\sin(x)$ the $\text{pdf}_y(y)$ will be an infinite sum over all the points $x+2k\pi$, $k \in Z$. So one has:
$$\text{pdf}_y(y) = \Sigma_{k\in Z} \text{pdf}_x(x + 2k\pi) / \cos{x}... (2)$$, where $y=\sin{x} $.
However, here is a catch: at $x=(k+\frac{1}{2})\pi$, $k \in Z$, $\sin{x} = \pm1$ and $\frac{d\sin{x}}{dx} = \cos{x} = 0$, so naturally $\text{pdf}_y(y) \rightarrow \infty$. This explains why, in your first diagram, the pdf curve in red spikes up at the points where $\sin{x}=\pm1$.
Things become more complex when one tries to transform $x, y \rightarrow z$.
Let's put it as $z = f(x,y)$
Now let's define inverse function of $z$ on $y$, given $x$ as $g(x, z) = y$. This means $f(x, g(x,z)) = z$.
Supposing $g(x,z)$ exists, we would have:
$$\text{pdf}_z(z) = \int_{-\infty}^{+\infty} \text{pdf}_{x,y} (x, g(x,z) ) \frac{\partial g(x,z)}{\partial z} dx$$
So there's no simple result for $z=xy$, $z=x+y$ or $z=\sin(x+y+3)$.
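The last displayed formula can be spot-checked numerically. Taking $z = x + y$ with independent standard normals, so that $g(x,z) = z - x$ and $\partial g/\partial z = 1$, the integral should reproduce the $N(0,2)$ density. The grid and the test point below are arbitrary illustrative choices:

```python
import numpy as np

def norm_pdf(t):
    return np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

# pdf_z(z) = int pdf_{x,y}(x, g(x,z)) * (dg/dz) dx with g(x,z) = z - x, dg/dz = 1
xs = np.linspace(-10.0, 10.0, 20_001)
z = 0.7
integrand = norm_pdf(xs) * norm_pdf(z - xs)
pdf_z = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs))  # trapezoid rule

exact = np.exp(-z**2 / 4) / np.sqrt(4 * np.pi)   # N(0, 2) density at z
print(pdf_z, exact)
```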
|
Operations on probability distributions of continuous random variables
|
my 2 cents:
First, locally, if $\text{pdf}_s(s)$ is the probability density function of a random variable $s$, and we apply a transform $s = s(t)$, one would have $$\text{pdf}_t(t) = \text{pdf}_s(s
|
Operations on probability distributions of continuous random variables
my 2 cents:
First, locally, if $\text{pdf}_s(s)$ is the probability density function of a random variable $s$, and we apply a transform $s = s(t)$, one would have $$\text{pdf}_t(t) = \text{pdf}_s(s(t)) \frac{ds(t)}{dt} ... (1)$$
Now let's look at $x \rightarrow \sin{x}$ transform.
Locally, one could replace (1) as $t \rightarrow x$, $s \rightarrow \sin{x}$, and have
$$\text{pdf}_x(x) = \text{pdf}_{\sin{x}} (\sin{x}) \cos{x}$$
, or $$\text{pdf}_y(y) = \text{pdf}_x(x) / \cos{x}$$, where $y=\sin{x}$.
As this is only a "local" equation, for $y=\sin(x)$ the $\text{pdf}_y(y)$ will be an infinite sum over all the points $x+2k\pi$, $k \in Z$. So one has:
$$\text{pdf}_y(y) = \Sigma_{k\in Z} \text{pdf}_x(x + 2k\pi) / \cos{x}... (2)$$, where $y=\sin{x} $.
However, here is a catch: at $x=(k+\frac{1}{2})\pi$, $k \in Z$, $\sin{x} = \pm1$ and $\frac{d\sin{x}}{dx} = \cos{x} = 0$, so naturally $\text{pdf}_y(y) \rightarrow \infty$. This explains why, in your first diagram, the pdf curve in red spikes up at the points where $\sin{x}=\pm1$.
Things become more complex when one tries to transform $x, y \rightarrow z$.
Let's put it as $z = f(x,y)$
Now let's define inverse function of $z$ on $y$, given $x$ as $g(x, z) = y$. This means $f(x, g(x,z)) = z$.
Supposing $g(x,z)$ exists, we would have:
$$\text{pdf}_z(z) = \int_{-\infty}^{+\infty} \text{pdf}_{x,y} (x, g(x,z) ) \frac{\partial g(x,z)}{\partial z} dx$$
So there's no simple result for $z=xy$, $z=x+y$ or $z=\sin(x+y+3)$.
|
Operations on probability distributions of continuous random variables
my 2 cents:
First, locally, if $\text{pdf}_s(s)$ is the probability density function of a random variable $s$, and we apply a transform $s = s(t)$, one would have $$\text{pdf}_t(t) = \text{pdf}_s(s
|
42,614
|
Binary classification when many binary features are missing
|
Assuming data are considered missing completely at random (cf. @whuber's comment), using an ensemble learning technique as described in the following paper might be interesting to try:
Polikar, R. et al. (2010).
Learn++.MF: A random subspace
approach for the missing feature
problem. Pattern Recognition,
43(11), 3817-3832.
The general idea is to train multiple classifiers on a subset of the variables that compose your dataset (like in Random Forests), but to use only the classifiers trained with the non-missing features when building the classification rule. Be sure to check what the authors call the "distributed redundancy" assumption (p. 3 in the preprint linked above); that is, there must be some equally balanced redundancy in your feature set.
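A minimal Python sketch of that idea (my own toy illustration: a nearest-centroid stand-in for the base learner, random feature subsets, and prediction by majority vote over the members whose features are all observed; the Learn++.MF paper uses a proper weighted ensemble):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: binary features, class 1 iff feature 0 or feature 2 is on
n, d = 300, 6
X = rng.integers(0, 2, size=(n, d)).astype(float)
y = (X[:, 0] + X[:, 2] >= 1).astype(int)

def fit_centroids(Xs, y):
    # nearest-centroid base learner on a feature subset
    return {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}

# each ensemble member sees a random subset of the features
members = []
for _ in range(50):
    feats = np.sort(rng.choice(d, size=3, replace=False))
    members.append((feats, fit_centroids(X[:, feats], y)))

def predict(x):
    # x may contain np.nan; members needing a missing feature abstain
    votes = []
    for feats, cents in members:
        if np.isnan(x[feats]).any():
            continue
        votes.append(min(cents, key=lambda c: np.sum((x[feats] - cents[c])**2)))
    return max(set(votes), key=votes.count) if votes else None

x_new = np.array([1.0, np.nan, 1.0, 0.0, np.nan, 1.0])
pred = predict(x_new)
print(pred)
```

The design point is that missing values are never imputed; members that would need a missing feature simply abstain.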
|
Binary classification when many binary features are missing
|
Assuming data are considered missing completely at random (cf. @whuber's comment), using an ensemble learning technique as described in the following paper might be interesting to try:
Polikar, R. et
|
Binary classification when many binary features are missing
Assuming data are considered missing completely at random (cf. @whuber's comment), using an ensemble learning technique as described in the following paper might be interesting to try:
Polikar, R. et al. (2010).
Learn++.MF: A random subspace
approach for the missing feature
problem. Pattern Recognition,
43(11), 3817-3832.
The general idea is to train multiple classifiers on a subset of the variables that compose your dataset (like in Random Forests), but to use only the classifiers trained with the non-missing features when building the classification rule. Be sure to check what the authors call the "distributed redundancy" assumption (p. 3 in the preprint linked above); that is, there must be some equally balanced redundancy in your feature set.
|
Binary classification when many binary features are missing
Assuming data are considered missing completely at random (cf. @whuber's comment), using an ensemble learning technique as described in the following paper might be interesting to try:
Polikar, R. et
|
42,615
|
Binary classification when many binary features are missing
|
If the features in the subset are random you can still impute values. However, if you have that much missing data, I would think twice about whether or not you really have enough valid data to do any kind of analysis.
The multiple imputation FAQ page ---->
http://www.stat.psu.edu/~jls/mifaq.html
|
Binary classification when many binary features are missing
|
If the features in the subset are random you can still impute values. However, if you have that much missing data, I would think twice about whether or not you really have enough valid data to do any
|
Binary classification when many binary features are missing
If the features in the subset are random you can still impute values. However, if you have that much missing data, I would think twice about whether or not you really have enough valid data to do any kind of analysis.
The multiple imputation FAQ page ---->
http://www.stat.psu.edu/~jls/mifaq.html
|
Binary classification when many binary features are missing
If the features in the subset are random you can still impute values. However, if you have that much missing data, I would think twice about whether or not you really have enough valid data to do any
|
42,616
|
How to compute Confidence Interval associated to a Binomial proportion's increase?
|
Following whuber's link to Wikipedia you have
Assume that $a$ and $b$ are jointly
normally distributed, and that $b$ is
not too near zero (i.e. more
specifically, that the standard error
of $b$ is small compared to $b$)
$$\operatorname{Var} \left( \frac{a}{b} \right) = \left(
\frac{a}{b} \right)^{2} \left( \frac{\operatorname{Var}(a)}{a^2} +
\frac{\operatorname{Var}(b)}{b^2}\right).$$
though in fact you want $\operatorname{Var} \left( \frac{B}{A} \right)$.
If your 95% CI is $\pm 0.002$ then your variances for $A$ and $B$ are $(0.002/1.96)^2 \approx 0.00000104$, so $\operatorname{Var} \left( \frac{B}{A} \right) \approx 0.00846$. Taking the square root and multiplying by 1.96 you get $$\frac{B}{A} \approx 1.5 \pm 0.18$$
If you must turn this into percentages (I think it confuses more than it enlightens) then it becomes
B's proportion is 50% higher than A's, plus or minus 18%, i.e. between 32% higher and 68% higher.
In R you could simulate this by something like
> n <- 1000000
> A <- 0.02 + (0.002 / qnorm(0.975)) * rnorm(n)
> B <- 0.03 + (0.002 / qnorm(0.975)) * rnorm(n)
> C <- B / A
> quantile(C, probs = c(0.025, 0.5, 0.975))
2.5% 50% 97.5%
1.333514 1.499955 1.697418
which is reasonably close.
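As a cross-check of the delta-method arithmetic (a sketch assuming independent normal $A$ and $B$, mirroring the R simulation above):

```python
import numpy as np

se = 0.002 / 1.959964          # turn the 95% CI half-width back into a standard error
rng = np.random.default_rng(0)
A = 0.02 + se * rng.normal(size=1_000_000)
B = 0.03 + se * rng.normal(size=1_000_000)

sd_mc = (B / A).std()          # Monte Carlo standard deviation of the ratio
# delta-method: Var(B/A) = (B/A)^2 * (Var(A)/A^2 + Var(B)/B^2)
sd_delta = np.sqrt((0.03 / 0.02)**2 * (se**2 / 0.02**2 + se**2 / 0.03**2))
print(sd_mc, sd_delta)
```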
|
How to compute Confidence Interval associated to a Binomial proportion's increase?
|
Following whuber's link to Wikipedia you have
Assume that $a$ and $b$ are jointly
normally distributed, and that $b$ is
not too near zero (i.e. more
specifically, that the standard error
of
|
How to compute Confidence Interval associated to a Binomial proportion's increase?
Following whuber's link to Wikipedia you have
Assume that $a$ and $b$ are jointly
normally distributed, and that $b$ is
not too near zero (i.e. more
specifically, that the standard error
of $b$ is small compared to $b$)
$$\operatorname{Var} \left( \frac{a}{b} \right) = \left(
\frac{a}{b} \right)^{2} \left( \frac{\operatorname{Var}(a)}{a^2} +
\frac{\operatorname{Var}(b)}{b^2}\right).$$
though in fact you want $\operatorname{Var} \left( \frac{B}{A} \right)$.
If your 95% CI is $\pm 0.002$ then your variances for $A$ and $B$ are $(0.002/1.96)^2 \approx 0.00000104$, so $\operatorname{Var} \left( \frac{B}{A} \right) \approx 0.00846$. Taking the square root and multiplying by 1.96 you get $$\frac{B}{A} \approx 1.5 \pm 0.18$$
If you must turn this into percentages (I think it confuses more than it enlightens) then it becomes
B's proportion is 50% higher than A's, plus or minus 18%, i.e. between 32% higher and 68% higher.
In R you could simulate this by something like
> n <- 1000000
> A <- 0.02 + (0.002 / qnorm(0.975)) * rnorm(n)
> B <- 0.03 + (0.002 / qnorm(0.975)) * rnorm(n)
> C <- B / A
> quantile(C, probs = c(0.025, 0.5, 0.975))
2.5% 50% 97.5%
1.333514 1.499955 1.697418
which is reasonably close.
|
How to compute Confidence Interval associated to a Binomial proportion's increase?
Following whuber's link to Wikipedia you have
Assume that $a$ and $b$ are jointly
normally distributed, and that $b$ is
not too near zero (i.e. more
specifically, that the standard error
of
|
42,617
|
Predicting from a simple linear model with lags in R
|
It's clear the solution I posted previously is inadequate and inelegant. Here is my second attempt, which 100% solves my problem. Please let me know if you spot any bugs! I will cross post to stack overflow, if you all think that would be a better place to get comments on my code.
#A function to iteratively predict a time series
ipredict <-function(model, newdata, interval = "none",
level = 0.95, na.action = na.pass, weights = 1) {
P<-predict(model,newdata=newdata,interval=interval,
level=level,na.action=na.action,weights=weights)
for (i in seq(1,dim(newdata)[1])) {
if (is.na(newdata[i])) {
if (interval=="none") {
P[i]<-predict(model,newdata=newdata,interval=interval,
level=level,na.action=na.action,weights=weights)[i]
newdata[i]<-P[i]
}
else{
P[i,]<-predict(model,newdata=newdata,interval=interval,
level=level,na.action=na.action,weights=weights)[i,]
newdata[i]<-P[i,1]
}
}
}
P_end<-end(P)[1]*frequency(P)+(end(P)[2]-1) #Convert (time,period) to decimal time
P<-window(P,end=P_end-1*frequency(P)) #Drop last observation, which is NA
return(P)
}
#Example usage:
library(dyn)
y<-arima.sim(model=list(ar=c(.9)),n=10) #Create AR(1) dependent variable
A<-rnorm(10) #Create independent variables
B<-rnorm(10)
C<-rnorm(10)
Error<-rnorm(10)
y<-y+.5*A+.2*B-.3*C+.1*Error #Add relationship to independent variables
data=cbind(y,A,B,C)
#Fit linear model
model.dyn<-dyn$lm(y~A+B+C+lag(y,-1),data=data)
summary(model.dyn)
#Forecast linear model
A<-c(A,rnorm(5))
B<-c(B,rnorm(5))
C<-c(C,rnorm(5))
y=window(y,end=end(y)+c(5,0),extend=TRUE)
newdata<-cbind(y,A,B,C)
P1<-ipredict(model.dyn,newdata)
P2<-ipredict(model.dyn,newdata,interval="prediction")
#Plot
plot(y)
lines(P1,col=2)
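Stripped of the R plumbing, the core of ipredict is just recursive substitution: each one-step forecast is fed back in as the lagged value for the next step. A bare-bones Python sketch, with coefficients and future regressor values invented for illustration:

```python
# forecast y_t = b0 + b1 * A_t + b2 * y_{t-1} recursively
def iforecast(coefs, A_future, y_last):
    b0, b1, b2 = coefs
    preds, y_prev = [], y_last
    for a in A_future:
        y_hat = b0 + b1 * a + b2 * y_prev   # one-step-ahead prediction
        preds.append(y_hat)
        y_prev = y_hat                      # feed the prediction back in as the lag
    return preds

preds = iforecast((0.0, 0.5, 0.9), [1.0, 0.0, 0.0], y_last=2.0)
print(preds)
```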
|
Predicting from a simple linear model with lags in R
|
It's clear the solution I posted previously is inadequate and inelegant. Here is my second attempt, which 100% solves my problem. Please let me know if you spot any bugs! I will cross post to stack
|
Predicting from a simple linear model with lags in R
It's clear the solution I posted previously is inadequate and inelegant. Here is my second attempt, which 100% solves my problem. Please let me know if you spot any bugs! I will cross post to stack overflow, if you all think that would be a better place to get comments on my code.
#A function to iteratively predict a time series
ipredict <-function(model, newdata, interval = "none",
level = 0.95, na.action = na.pass, weights = 1) {
P<-predict(model,newdata=newdata,interval=interval,
level=level,na.action=na.action,weights=weights)
for (i in seq(1,dim(newdata)[1])) {
if (is.na(newdata[i])) {
if (interval=="none") {
P[i]<-predict(model,newdata=newdata,interval=interval,
level=level,na.action=na.action,weights=weights)[i]
newdata[i]<-P[i]
}
else{
P[i,]<-predict(model,newdata=newdata,interval=interval,
level=level,na.action=na.action,weights=weights)[i,]
newdata[i]<-P[i,1]
}
}
}
P_end<-end(P)[1]*frequency(P)+(end(P)[2]-1) #Convert (time,period) to decimal time
P<-window(P,end=P_end-1*frequency(P)) #Drop last observation, which is NA
return(P)
}
#Example usage:
library(dyn)
y<-arima.sim(model=list(ar=c(.9)),n=10) #Create AR(1) dependent variable
A<-rnorm(10) #Create independent variables
B<-rnorm(10)
C<-rnorm(10)
Error<-rnorm(10)
y<-y+.5*A+.2*B-.3*C+.1*Error #Add relationship to independent variables
data=cbind(y,A,B,C)
#Fit linear model
model.dyn<-dyn$lm(y~A+B+C+lag(y,-1),data=data)
summary(model.dyn)
#Forecast linear model
A<-c(A,rnorm(5))
B<-c(B,rnorm(5))
C<-c(C,rnorm(5))
y=window(y,end=end(y)+c(5,0),extend=TRUE)
newdata<-cbind(y,A,B,C)
P1<-ipredict(model.dyn,newdata)
P2<-ipredict(model.dyn,newdata,interval="prediction")
#Plot
plot(y)
lines(P1,col=2)
|
Predicting from a simple linear model with lags in R
It's clear the solution I posted previously is inadequate and inelegant. Here is my second attempt, which 100% solves my problem. Please let me know if you spot any bugs! I will cross post to stack
|
42,618
|
Predicting from a simple linear model with lags in R
|
One more method, which has been suggested in other topics, is to just use the arima function with xregs. Arima seems to be able to make forecasts from a new set of xregs just fine.
|
Predicting from a simple linear model with lags in R
|
One more method, which has been suggested in other topics, is to just use the arima function with xregs. Arima seems to be able to make forecasts from a new set of xregs just fine.
|
Predicting from a simple linear model with lags in R
One more method, which has been suggested in other topics, is to just use the arima function with xregs. Arima seems to be able to make forecasts from a new set of xregs just fine.
|
Predicting from a simple linear model with lags in R
One more method, which has been suggested in other topics, is to just use the arima function with xregs. Arima seems to be able to make forecasts from a new set of xregs just fine.
|
42,619
|
Predicting from a simple linear model with lags in R
|
Ok, I answered my own problem, but my solution could use more testing and probably isn't perfect. Suggestions would be appreciated!
First of all I used a modified version of the parseCall function available here:
parseCall <- function(obj) {
if (class(obj) != 'call') {
stop("Must supply a 'call' object")
}
srep <- deparse(obj)
if (length(srep) >1) srep <- paste(srep,sep='',collapse='')
fname <- unlist(strsplit(srep,"\\("))[1]
func <- unlist(strsplit(srep, paste(fname,"\\(",sep='')))[2]
func <- unlist(strsplit(func,""))
func <- paste(func[-length(func)],sep='',collapse="")
func <- unlist(strsplit(func,","))
vals <- list()
nms <- c()
cnt <- 1
for (args in func) {
arg <- unlist(strsplit(args,"="))[1]
val <- unlist(strsplit(args,"="))[2]
arg <- gsub(" ", "", arg)
val <- gsub(" ", "", val)
vals[[cnt]] <- val
nms[cnt] <- arg
cnt <- cnt + 1
}
names(vals) <- nms
return(vals)
}
This function returns the dependent variable of a linear regression
getDepVar <- function(call) {
call<-parseCall(call)
formula<-call$formula
out<-unlist(strsplit(formula,"~")[1])
return(out[1])
}
And finally, this function does the magic:
ipredict <-function(model,newdata) {
Y<-getDepVar(model$call)
P<-predict(model,newdata=newdata)
for (i in seq(1,dim(newdata)[1])) {
if (is.na(newdata[i,Y])) {
newdata[i,Y]<-predict(model,newdata=newdata[1:i,])[i]
P[i]<-newdata[i,Y]
}
}
return(P)
}
Example usage (based on my question):
#A function to calculate lags
lagmatrix <- function(x,max.lag){embed(c(rep(NA,max.lag),x),max.lag)}
lag <- function(x,lag) {
out<-lagmatrix(x,lag+1)[,lag]
return(out[1:(length(out)-1)])
}
y<-arima.sim(model=list(ar=c(.9)),n=1000) #Create AR(1) dependent variable
A<-rnorm(1000) #Create independent variables
B<-rnorm(1000)
C<-rnorm(1000)
Error<-rnorm(1000)
y<-y+.5*A+.2*B-.3*C+.1*Error #Add relationship to independent variables
#Fit linear model
model<-lm(y~A+B+C+I(lag(y,1)))
summary(model)
#Forecast linear model
A<-c(A,rnorm(50)) #Assume we know 50 future values of A, B, C
B<-c(B,rnorm(50))
C<-c(C,rnorm(50))
y<-c(y,rep(NA,50))
newdata<-as.data.frame(cbind(y,A,B,C))
ipredict(model,newdata=newdata)
|
Predicting from a simple linear model with lags in R
|
Ok, I answered my own problem, but my solution could use more testing and probably isn't perfect. Suggestions would be appreciated!
First of all I used a modified version of the parseCall function av
|
Predicting from a simple linear model with lags in R
Ok, I answered my own problem, but my solution could use more testing and probably isn't perfect. Suggestions would be appreciated!
First of all I used a modified version of the parseCall function available here:
parseCall <- function(obj) {
if (class(obj) != 'call') {
stop("Must supply a 'call' object")
}
srep <- deparse(obj)
if (length(srep) >1) srep <- paste(srep,sep='',collapse='')
fname <- unlist(strsplit(srep,"\\("))[1]
func <- unlist(strsplit(srep, paste(fname,"\\(",sep='')))[2]
func <- unlist(strsplit(func,""))
func <- paste(func[-length(func)],sep='',collapse="")
func <- unlist(strsplit(func,","))
vals <- list()
nms <- c()
cnt <- 1
for (args in func) {
arg <- unlist(strsplit(args,"="))[1]
val <- unlist(strsplit(args,"="))[2]
arg <- gsub(" ", "", arg)
val <- gsub(" ", "", val)
vals[[cnt]] <- val
nms[cnt] <- arg
cnt <- cnt + 1
}
names(vals) <- nms
return(vals)
}
This function returns the dependent variable of a linear regression
getDepVar <- function(call) {
call<-parseCall(call)
formula<-call$formula
out<-unlist(strsplit(formula,"~")[1])
return(out[1])
}
And finally, this function does the magic:
ipredict <-function(model,newdata) {
Y<-getDepVar(model$call)
P<-predict(model,newdata=newdata)
for (i in seq(1,dim(newdata)[1])) {
if (is.na(newdata[i,Y])) {
newdata[i,Y]<-predict(model,newdata=newdata[1:i,])[i]
P[i]<-newdata[i,Y]
}
}
return(P)
}
Example usage (based on my question):
#A function to calculate lags
lagmatrix <- function(x,max.lag){embed(c(rep(NA,max.lag),x),max.lag)}
lag <- function(x,lag) {
out<-lagmatrix(x,lag+1)[,lag]
return(out[1:(length(out)-1)])
}
y<-arima.sim(model=list(ar=c(.9)),n=1000) #Create AR(1) dependent variable
A<-rnorm(1000) #Create independent variables
B<-rnorm(1000)
C<-rnorm(1000)
Error<-rnorm(1000)
y<-y+.5*A+.2*B-.3*C+.1*Error #Add relationship to independent variables
#Fit linear model
model<-lm(y~A+B+C+I(lag(y,1)))
summary(model)
#Forecast linear model
A<-c(A,rnorm(50)) #Assume we know 50 future values of A, B, C
B<-c(B,rnorm(50))
C<-c(C,rnorm(50))
y<-c(y,rep(NA,50))
newdata<-as.data.frame(cbind(y,A,B,C))
ipredict(model,newdata=newdata)
|
Predicting from a simple linear model with lags in R
Ok, I answered my own problem, but my solution could use more testing and probably isn't perfect. Suggestions would be appreciated!
First of all I used a modified version of the parseCall function av
|
42,620
|
Non-informative priors for the AR(1) model
|
This transformation group is discrete and finite: it contains exactly two elements, the identity and inverting $\rho$. It's simply not big enough to determine a prior. In fact, you can choose any measurable function $f$ defined on $[-1,1]$ provided (a) it is integrable and (b) $\rho^{-2}f(1/\rho)d\rho$ is integrable on $[1, \infty)$. The latter restricts $f$ only in a neighborhood of $0$.
BTW, for this model to be practical you need to introduce a nuisance parameter $\sigma$: $\epsilon_t \sim N(0, \sigma^2)$.
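For instance, the flat choice $f(\rho) \equiv 1$ on $[-1,1]$ satisfies both conditions, since $\int_1^\infty \rho^{-2}\,d\rho = 1$. A quick numerical sanity check (illustrative only):

```python
import numpy as np

# condition (b) for f = 1: int_1^R rho^(-2) d rho = 1 - 1/R -> 1 as R -> infinity
R = 1e4
rho = np.linspace(1.0, R, 2_000_001)
g = rho**-2
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(rho))   # trapezoid rule
print(integral)
```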
|
Non-informative priors for the AR(1) model
|
This transformation group is discrete and finite: it contains exactly two elements, the identity and inverting $\rho$. It's simply not big enough to determine a prior. In fact, you can choose any me
|
Non-informative priors for the AR(1) model
This transformation group is discrete and finite: it contains exactly two elements, the identity and inverting $\rho$. It's simply not big enough to determine a prior. In fact, you can choose any measurable function $f$ defined on $[-1,1]$ provided (a) it is integrable and (b) $\rho^{-2}f(1/\rho)d\rho$ is integrable on $[1, \infty)$. The latter restricts $f$ only in a neighborhood of $0$.
BTW, for this model to be practical you need to introduce a nuisance parameter $\sigma$: $\epsilon_t \sim N(0, \sigma^2)$.
|
Non-informative priors for the AR(1) model
This transformation group is discrete and finite: it contains exactly two elements, the identity and inverting $\rho$. It's simply not big enough to determine a prior. In fact, you can choose any me
|
42,621
|
When is the shrinkage applied in Friedman's stochastic gradient boosting machine?
|
Using trees, the shrinkage takes place at the update stage of the algorithm, when the new function $f(x)_k$ is created as the function from the prior step ($f(x)_{k-1}$) + the new decision tree output ($p(x)_k$). This new tree output ($p(x)_k$) is scaled by the learning rate parameter.
See for example the implementation in R GBM on page 6.
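To see exactly where the scaling enters, here is a from-scratch sketch in Python (not GBM's actual code): squared-error loss, regression stumps standing in for the trees, and nu as the learning rate multiplying each new tree's output before it is added to the running function.

```python
import numpy as np

def fit_stump(x, r):
    """Brute-force single-split regression stump fitted to residuals r."""
    best_err, best = np.inf, None
    for s in np.unique(x)[:-1]:
        left, right = r[x <= s], r[x > s]
        err = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if err < best_err:
            best_err, best = err, (s, left.mean(), right.mean())
    return best

def boost(x, y, n_trees=150, nu=0.1):
    f = np.full_like(y, y.mean())          # f_0: constant start
    for _ in range(n_trees):
        s, lv, rv = fit_stump(x, y - f)    # fit the new tree to current residuals
        p = np.where(x <= s, lv, rv)       # p(x)_k: the new tree's output
        f = f + nu * p                     # update: f_k = f_{k-1} + nu * p(x)_k
    return f

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 120)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 120)
mse = np.mean((y - boost(x, y))**2)
print(mse)  # training MSE shrinks far below var(y) after 150 shrunken updates
```

Setting nu = 1 recovers unshrunken boosting; smaller nu trades more iterations for better regularisation.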
|
When is the shrinkage applied in Friedman's stochastic gradient boosting machine?
|
Using trees, the shrinkage takes place at the update stage of the algorithm, when the new function $f(x)_k$ is created as the function prior step ($f(x)_{k-1}$) + the new decision tree output ($p(x)_k
|
When is the shrinkage applied in Friedman's stochastic gradient boosting machine?
Using trees, the shrinkage takes place at the update stage of the algorithm, when the new function $f(x)_k$ is created as the function from the prior step ($f(x)_{k-1}$) + the new decision tree output ($p(x)_k$). This new tree output ($p(x)_k$) is scaled by the learning rate parameter.
See for example the implementation in R GBM on page 6.
|
When is the shrinkage applied in Friedman's stochastic gradient boosting machine?
Using trees, the shrinkage takes place at the update stage of the algorithm, when the new function $f(x)_k$ is created as the function prior step ($f(x)_{k-1}$) + the new decision tree output ($p(x)_k
|
42,622
|
Use coefficients of thin plate regression splines in a clustering method
|
If I understand correctly, I think you want the coefficients from the $gam component:
> coef(test$gam)
(Intercept) s(x1).1 s(x1).2 s(x1).3 s(x1).4 s(x1).5
21.8323526 9.2169405 15.7504889 -3.4709907 16.9314057 -19.4909343
s(x1).6 s(x1).7 s(x1).8 s(x1).9 s(x2).1 s(x2).2
1.1124505 -3.3807996 21.7637766 -23.5791595 3.2303904 -3.0366406
s(x2).3 s(x2).4 s(x2).5 s(x2).6 s(x2).7 s(x2).8
-2.0725621 -0.6642467 0.7347857 1.7232242 -0.5078875 -7.7776700
s(x2).9
-12.0056347
Update 1: To get at the basis functions we can use predict(...., type = "lpmatrix") to get $Xp$, the linear predictor matrix:
Xp <- predict(test$gam, type = "lpmatrix")
The fitted spline (e.g. for s(x1)) can be recovered then using:
plot(Xp[,2:10] %*% coef(test$gam)[2:10], type = "l")
You can plot this ($Xp$) and see that it is similar to um[[1]]$X
layout(matrix(1:2, ncol = 2))
matplot(um[[1]]$X, type = "l")
matplot(Xp[,1:10], type = "l")
layout(1)
I pondered why these are not exactly the same. Is it because the original basis functions have been subject to the penalised regression during fitting???
Update 2: You can make them the same by including the identifiability constraints into your basis functions in um:
um2 <- smoothCon(s(x1), data=data.frame(x1=x1), knots=NULL,
absorb.cons=TRUE)
layout(matrix(1:2, ncol = 2))
matplot(um2[[1]]$X, type = "l", main = "smoothCon()")
matplot(Xp[,2:10], type = "l", main = "Xp matrix") ##!##
layout(1)
Note I have not got the intercept in the line marked ##!##.
You ought to be able to get $Xp$ directly from function PredictMat(), which is documented on the same page as smoothCon().
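Independently of mgcv's internals, the reconstruction step is just "design matrix times coefficient vector"; a minimal Python sketch with an ordinary polynomial basis (standing in for the thin-plate basis) shows the same recipe as Xp %*% coef(test$gam):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 50)

# "lpmatrix" analogue: basis functions evaluated at the data points
Xp = np.vander(x, N=4, increasing=True)        # columns 1, x, x^2, x^3
coef, *_ = np.linalg.lstsq(Xp, y, rcond=None)  # penalty-free least squares

fitted = Xp @ coef                             # cf. Xp %*% coef(test$gam)
print(coef)
```

The same matrix evaluated at new covariate values gives predictions, which is exactly what the lpmatrix is for.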
|
Use coefficients of thin plate regression splines in a clustering method
|
If I understand correctly, I think you want the coefficients from the $gam component:
> coef(test$gam)
(Intercept) s(x1).1 s(x1).2 s(x1).3 s(x1).4 s(x1).5
21.8323526 9.2169405
|
Use coefficients of thin plate regression splines in a clustering method
If I understand correctly, I think you want the coefficients from the $gam component:
> coef(test$gam)
(Intercept) s(x1).1 s(x1).2 s(x1).3 s(x1).4 s(x1).5
21.8323526 9.2169405 15.7504889 -3.4709907 16.9314057 -19.4909343
s(x1).6 s(x1).7 s(x1).8 s(x1).9 s(x2).1 s(x2).2
1.1124505 -3.3807996 21.7637766 -23.5791595 3.2303904 -3.0366406
s(x2).3 s(x2).4 s(x2).5 s(x2).6 s(x2).7 s(x2).8
-2.0725621 -0.6642467 0.7347857 1.7232242 -0.5078875 -7.7776700
s(x2).9
-12.0056347
Update 1: To get at the basis functions we can use predict(...., type = "lpmatrix") to get $Xp$, the linear predictor matrix:
Xp <- predict(test$gam, type = "lpmatrix")
The fitted spline (e.g. for s(x1)) can be recovered then using:
plot(Xp[,2:10] %*% coef(test$gam)[2:10], type = "l")
You can plot this ($Xp$) and see that it is similar to um[[1]]$X
layout(matrix(1:2, ncol = 2))
matplot(um[[1]]$X, type = "l")
matplot(Xp[,1:10], type = "l")
layout(1)
I pondered why these are not exactly the same. Is it because the original basis functions have been subject to the penalised regression during fitting???
Update 2: You can make them the same by including the identifiability constraints into your basis functions in um:
um2 <- smoothCon(s(x1), data=data.frame(x1=x1), knots=NULL,
absorb.cons=TRUE)
layout(matrix(1:2, ncol = 2))
matplot(um2[[1]]$X, type = "l", main = "smoothCon()")
matplot(Xp[,2:10], type = "l", main = "Xp matrix") ##!##
layout(1)
Note I have not got the intercept in the line marked ##!##.
You ought to be able to get $Xp$ directly from function PredictMat(), which is documented on the same page as smoothCon().
|
Use coefficients of thin plate regression splines in a clustering method
If I understand correctly, I think you want the coefficients from the $gam component:
> coef(test$gam)
(Intercept) s(x1).1 s(x1).2 s(x1).3 s(x1).4 s(x1).5
21.8323526 9.2169405
|
42,623
|
Measuring statistical significance of machine learning algorithms comparison
|
You have two biases to remove here -- the selection of the initial parameters set and the selection of train/test data.
So, I don't think it is good to compare algorithms based on the same initial parameter set; I would just run the evaluation over a few different initial sets for each of the algorithms to get a more general approximation.
The next step is something that you are probably doing already, so using some kind of cross-validation.
A t-test is a way to go (I assume that you are getting this RMS as a mean from cross validation [and evaluation over a few different starting parameter sets, supposing you decided to use my first suggestion], so you can also calculate the standard deviation); a fancier method is to use the Mann-Whitney-Wilcoxon test.
The Wikipedia article about cross validation is quite nice and has some references worth reading.
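As a concrete sketch of the t-test route (Python, made-up per-fold RMS values; Welch two-sample form on the per-fold means, with 2.101 the approximate two-sided 5% critical value at 18 degrees of freedom):

```python
import math
import numpy as np

# Hypothetical RMS errors of two algorithms over 10 CV folds each
rms_a = np.array([0.91, 0.88, 0.95, 0.90, 0.87, 0.93, 0.89, 0.92, 0.90, 0.94])
rms_b = np.array([0.97, 0.93, 0.99, 0.95, 0.94, 0.98, 0.92, 0.96, 0.95, 0.99])

se = math.sqrt(rms_a.var(ddof=1) / len(rms_a) + rms_b.var(ddof=1) / len(rms_b))
t = (rms_a.mean() - rms_b.mean()) / se          # Welch two-sample t statistic
significant = abs(t) > 2.101                    # ~ t critical value, 18 df, alpha 0.05
print(t, significant)
```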
UPDATE AFTER UPDATE: I still think that making paired test (Dikran's way) looks suspicious.
|
Measuring statistical significance of machine learning algorithms comparison
|
You have two biases to remove here -- the selection of the initial parameters set and the selection of train/test data.
So, I don't think it is good to compare algorithms based on the same initial par
|
Measuring statistical significance of machine learning algorithms comparison
You have two biases to remove here -- the selection of the initial parameters set and the selection of train/test data.
So, I don't think it is good to compare algorithms based on the same initial parameter set; I would just run the evaluation over a few different initial sets for each of the algorithms to get a more general approximation.
The next step is something that you are probably doing already, so using some kind of cross-validation.
A t-test is a way to go (I assume that you are getting this RMS as a mean from cross validation [and evaluation over a few different starting parameter sets, supposing you decided to use my first suggestion], so you can also calculate the standard deviation); a fancier method is to use the Mann-Whitney-Wilcoxon test.
The Wikipedia article about cross validation is quite nice and has some references worth reading.
UPDATE AFTER UPDATE: I still think that making paired test (Dikran's way) looks suspicious.
|
Measuring statistical significance of machine learning algorithms comparison
You have two biases to remove here -- the selection of the initial parameters set and the selection of train/test data.
So, I don't think it is good to compare algorithms based on the same initial par
|
42,624
|
Survival analysis, one cohort, two classifications
|
I'll concentrate on your example question: does class 1 of the old classification scheme have a better or worse survival than class 1 of the updated classification scheme?
We can form four mutually exclusive groups of patients:
(a) Patients who weren't in class 1 under either scheme. Clearly, they don't help us answer the question.
(b) Patients who were in class 1 under both schemes. Clearly, they don't help us answer the question either.
(c) Patients who were in class 1 under the old scheme, but aren't in class 1 under the new scheme.
(d) Patients who weren't in class 1 under the old scheme, but are in class 1 under the new scheme.
Compare survival in groups (c) and (d). If survival is better in (c), then class 1 of the old scheme has better survival than class 1 of the new scheme. If survival is better in (d), then class 1 of the new scheme has better survival than class 1 of the old scheme.
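The grouping step itself is easy to code; a sketch in Python with made-up (patient id, old class, new class) records:

```python
# Hypothetical records: (patient id, class under old scheme, class under new scheme)
patients = [(1, 1, 1), (2, 1, 2), (3, 2, 1), (4, 3, 3), (5, 1, 2), (6, 2, 1)]

groups = {"a": [], "b": [], "c": [], "d": []}
for pid, old, new in patients:
    if old != 1 and new != 1:
        groups["a"].append(pid)   # class 1 under neither scheme
    elif old == 1 and new == 1:
        groups["b"].append(pid)   # class 1 under both schemes
    elif old == 1:
        groups["c"].append(pid)   # class 1 under the old scheme only
    else:
        groups["d"].append(pid)   # class 1 under the new scheme only

print(groups)  # survival is then compared between groups "c" and "d"
```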
|
Survival analysis, one cohort, two classifications
|
I'll concentrate on your example question: does class 1 of the old classification scheme have a better or worse survival than class 1 of the updated classification scheme?
We can form four mutually ex
|
Survival analysis, one cohort, two classifications
I'll concentrate on your example question: does class 1 of the old classification scheme have a better or worse survival than class 1 of the updated classification scheme?
We can form four mutually exclusive groups of patients:
(a) Patients who weren't in class 1 under either scheme. Clearly, they don't help us answer the question.
(b) Patients who were in class 1 under both schemes. Clearly, they don't help us answer the question either.
(c) Patients who were in class 1 under the old scheme, but aren't in class 1 under the new scheme.
(d) Patients who weren't in class 1 under the old scheme, but are in class 1 under the new scheme.
Compare survival in groups (c) and (d). If survival is better in (c), then class 1 of the old scheme has better survival than class 1 of the new scheme. If survival is better in (d), then class 1 of the new scheme has better survival than class 1 of the old scheme.
|
Survival analysis, one cohort, two classifications
I'll concentrate on your example question: does class 1 of the old classification scheme have a better or worse survival than class 1 of the updated classification scheme?
We can form four mutually ex
|
42,625
|
How much can the "pyramid of evidence" be applied to economics and political sciences?
|
Way back in 1965, Sir Austin Bradford Hill wrote a great essay about something very akin to the Pyramid of Evidence, where he discussed how the piling up of evidence can increase our confidence in hypotheses of causality in Medicine.
Most of the factors he discusses can be applied to Economics and political sciences.
|
How much can the "pyramid of evidence" be applied to economics and political sciences?
|
Way back in 1965, Sir Austin Bradford Hill wrote a great essay about something very akin to the Pyramid of Evidence, where he discussed how the piling up of evidence can increase our confidence in hyp
|
How much can the "pyramid of evidence" be applied to economics and political sciences?
Way back in 1965, Sir Austin Bradford Hill wrote a great essay about something very akin to the Pyramid of Evidence, where he discussed how the piling up of evidence can increase our confidence in hypotheses of causality in Medicine.
Most of the factors he discusses can be applied to Economics and political sciences.
|
How much can the "pyramid of evidence" be applied to economics and political sciences?
Way back in 1965, Sir Austin Bradford Hill wrote a great essay about something very akin to the Pyramid of Evidence, where he discussed how the piling up of evidence can increase our confidence in hyp
|
42,626
|
Meaningful deviation measure with strongly varying datapoints
|
You need to use some paired test, maybe a paired t-test, or a sign test if the distribution is really weird.
|
Meaningful deviation measure with strongly varying datapoints
|
You need to use some paired test, maybe a paired t-test, or a sign test if the distribution is really weird.
|
Meaningful deviation measure with strongly varying datapoints
You need to use some paired test, maybe a paired t-test, or a sign test if the distribution is really weird.
|
Meaningful deviation measure with strongly varying datapoints
You need to use some paired test, maybe a paired t-test, or a sign test if the distribution is really weird.
|
42,627
|
Meaningful deviation measure with strongly varying datapoints
|
I am not at all sure if ignoring the performance spread is a good idea. Ideally, you would want a method to be both reliable (i.e., have low spread) and be valid (i.e., give a performance measure of close to 1). Consider the following two output measures:
Method 1. [0.80, 0.60]
Method 2. [0.71, 0.69].
Unlike your example, there is no method that clearly dominates and in fact both methods perform equally well on average. Thus you may want to choose the one that is more reliable (i.e., has lower spread).
If you accept the above reasoning then your null hypothesis should be:
$$\frac{\mu_1}{\sigma_1} = \frac{\mu_2}{\sigma_2}$$
The above is analogous to the Sharpe ratio from finance and I am sure there is an extensive financial literature which discusses how to test hypotheses like the above and its extensions to more than 2 groups. Unfortunately, I am not well read up on that literature to help you.
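For the two toy methods above, a quick check of the $\mu/\sigma$ ratios (Python) shows that the criterion separates them even though the means tie:

```python
import numpy as np

m1 = np.array([0.80, 0.60])      # method 1: same mean, larger spread
m2 = np.array([0.71, 0.69])      # method 2: same mean, smaller spread

r1 = m1.mean() / m1.std(ddof=1)  # Sharpe-like ratio mu / sigma
r2 = m2.mean() / m2.std(ddof=1)
print(r1, r2)                    # equal means, very different ratios
```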
|
Meaningful deviation measure with strongly varying datapoints
|
I am not at all sure if ignoring the performance spread is a good idea. Ideally, you would want a method to be both reliable (i.e., have low spread) and be valid (i.e., give a performance measure of c
|
Meaningful deviation measure with strongly varying datapoints
I am not at all sure if ignoring the performance spread is a good idea. Ideally, you would want a method to be both reliable (i.e., have low spread) and be valid (i.e., give a performance measure of close to 1). Consider the following two output measures:
Method 1. [0.80, 0.60]
Method 2. [0.71, 0.69].
Unlike your example, there is no method that clearly dominates and in fact both methods perform equally well on average. Thus you may want to choose the one that is more reliable (i.e., has lower spread).
If you accept the above reasoning then your null hypothesis should be:
$$\frac{\mu_1}{\sigma_1} = \frac{\mu_2}{\sigma_2}$$
The above is analogous to the Sharpe ratio from finance and I am sure there is an extensive financial literature which discusses how to test hypotheses like the above and its extensions to more than 2 groups. Unfortunately, I am not well read up on that literature to help you.
|
Meaningful deviation measure with strongly varying datapoints
I am not at all sure if ignoring the performance spread is a good idea. Ideally, you would want a method to be both reliable (i.e., have low spread) and be valid (i.e., give a performance measure of c
|
42,628
|
Sample problems on logit modeling and Bayesian methods
|
The UCLA Statistical Computing site has a number of examples in various languages (SAS, R, etc). In particular, see the following pages (look among the links titled logistic regression, categorical data analysis and generalized linear models):
Data Analysis Examples
Textbook Examples
|
Sample problems on logit modeling and Bayesian methods
|
The UCLA Statistical Computing site has a number of examples in various languages (SAS, R, etc). In particular, see the following pages (look among the links titled logistic regression, categorical d
|
Sample problems on logit modeling and Bayesian methods
The UCLA Statistical Computing site has a number of examples in various languages (SAS, R, etc). In particular, see the following pages (look among the links titled logistic regression, categorical data analysis and generalized linear models):
Data Analysis Examples
Textbook Examples
|
Sample problems on logit modeling and Bayesian methods
The UCLA Statistical Computing site has a number of examples in various languages (SAS, R, etc). In particular, see the following pages (look among the links titled logistic regression, categorical d
|
42,629
|
Optimizing the sample size: number of individuals versus trials per individual
|
The mean per individual will be distributed as
$$\bar{Y}_i = \frac{1}{n_i} \sum_{j = 1}^{n_i} Y_{ij} \sim N\left(\mu, \sigma^2 + \tau^2/n_i\right)$$
where $n_i \geq 1$ are the number of observations for individual $i$ (we need at least 1 measurement for a participant).
The estimate will be
$$\hat{\mu} = \sum_{i=1}^n w_i \bar{Y}_i$$
with $$w_i = \frac{(\sigma^2 + \tau^2/n_i)^{-1}}{ \sum_{l=1}^n (\sigma^2 + \tau^2/n_l)^{-1}} $$
and the variance will be
$$\text{VAR}(\hat{\mu}) = \frac{1}{\sum_{l=1}^n (\sigma^2 + \tau^2/n_l)^{-1}} \approx \frac{\sigma^2}{n }+ \frac{\tau^2}{ \sum n_i} = \frac{\sigma^2}{n }+ \frac{\tau^2}{ m} $$
The approximation is exact when the $n_i$ are all the same. And we defined $m = \sum n_i$.
The variance decreases when we increase $n$ or when we increase $m$. With the changes being
$$\frac{\partial}{\partial n} \text{VAR}(\hat{\mu}) = - \frac{\sigma^2}{n^2} \\
\frac{\partial}{\partial m} \text{VAR}(\hat{\mu}) = - \frac{\tau^2}{m^2} \\$$
and the optimum will occur when the amount of observations per individual follows the ratio
$$\frac{m}{n} = \frac{\tau\sqrt{a}}{\sigma\sqrt{b}}$$
and there is also the constraint $\frac{m}{n} \ge 1$ because we need at minimum one observation per individual.
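A numeric sanity check of that ratio in Python, assuming (as the $a$ and $b$ in the ratio suggest) a budget constraint $a\,n + b\,m = C$ with cost $a$ per recruited individual and $b$ per observation; all numbers below are made up:

```python
import numpy as np

sigma, tau = 1.0, 3.0        # square roots of the two variance components
a, b, C = 4.0, 1.0, 1200.0   # assumed costs and total budget: a*n + b*m = C

n = np.arange(10, 295)               # candidate numbers of individuals
m = (C - a * n) / b                  # observations affordable with the rest
var = sigma**2 / n + tau**2 / m      # VAR(mu_hat) from above

n_opt = int(n[np.argmin(var)])
m_opt = (C - a * n_opt) / b
print(n_opt, m_opt / n_opt)          # ratio matches tau*sqrt(a)/(sigma*sqrt(b)) = 6
```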
|
Optimizing the sample size: number of individuals versus trials per individual
|
The mean per individual will be distributed as
$$\bar{Y}_i = \frac{1}{n_i} \sum_{j = 1}^{n_i} Y_{ij} \sim N\left(\mu, \sigma^2 + \tau^2/n_i\right)$$
where $n_i \geq 1$ are the number of observations f
|
Optimizing the sample size: number of individuals versus trials per individual
The mean per individual will be distributed as
$$\bar{Y}_i = \frac{1}{n_i} \sum_{j = 1}^{n_i} Y_{ij} \sim N\left(\mu, \sigma^2 + \tau^2/n_i\right)$$
where $n_i \geq 1$ are the number of observations for individual $i$ (we need at least 1 measurement for a participant).
The estimate will be
$$\hat{\mu} = \sum_{i=1}^n w_i \bar{Y}_i$$
with $$w_i = \frac{(\sigma^2 + \tau^2/n_i)^{-1}}{ \sum_{l=1}^n (\sigma^2 + \tau^2/n_l)^{-1}} $$
and the variance will be
$$\text{VAR}(\hat{\mu}) = \frac{1}{\sum_{l=1}^n (\sigma^2 + \tau^2/n_l)^{-1}} \approx \frac{\sigma^2}{n }+ \frac{\tau^2}{ \sum n_i} = \frac{\sigma^2}{n }+ \frac{\tau^2}{ m} $$
The approximation is exact when the $n_i$ are all the same. And we defined $m = \sum n_i$.
The variance decreases when we increase $n$ or when we increase $m$. With the changes being
$$\frac{\partial}{\partial n} \text{VAR}(\hat{\mu}) = - \frac{\sigma^2}{n^2} \\
\frac{\partial}{\partial m} \text{VAR}(\hat{\mu}) = - \frac{\tau^2}{m^2} \\$$
and the optimum will occur when the amount of observations per individual follows the ratio
$$\frac{m}{n} = \frac{\tau\sqrt{a}}{\sigma\sqrt{b}}$$
and there is also the constraint $\frac{m}{n} \ge 1$ because we need at minimum one observation per individual.
|
Optimizing the sample size: number of individuals versus trials per individual
The mean per individual will be distributed as
$$\bar{Y}_i = \frac{1}{n_i} \sum_{j = 1}^{n_i} Y_{ij} \sim N\left(\mu, \sigma^2 + \tau^2/n_i\right)$$
where $n_i \geq 1$ are the number of observations f
|
42,630
|
Find UMVUE of difference of parameters of two exponential distribution random variables
|
Changing the question in two different ways allows for some answers:
If $\theta_x$ and $\theta_y$ are rate rather than scale parameters,
$$
\frac{n-1}{n} \frac{\sum_{i=1}^n (1-\Delta_i)}{\sum_{i=1}^n Z_i} - \frac{n-1}{n} \frac{\sum_{i=1}^n \Delta_i}{\sum_{i=1}^n Z_i}\tag{1}
$$
is an unbiased estimator of $1/\theta_x - 1/\theta_y$ and since it only depends on $\mathbf T$, it is the UMVUE.
If instead $Z=\max\{X,Y\}$, with $\theta_x$ and $\theta_y$ scale parameters, consider the conditional distribution of $Z$ given $\Delta=1$. Since
$$\mathbb P(\Delta=1)=\frac{\theta_x}{\theta_x+\theta_y}$$
we have
\begin{align}
\mathbb P(Z\le z|\Delta=1)&=\frac{\theta_x+\theta_y}{\theta_x}\mathbb P(Z\le z,\Delta=1)\\
&=\frac{\theta_x+\theta_y}{\theta_x}\mathbb P(X\le z,X>Y)\\
&=\frac{\theta_x+\theta_y}{\theta_x}\int_0^z\int_0^x \frac1{\theta_x\theta_y}
\exp\{-x/\theta_x-y/\theta_y\}\text dy\,\text dx\\
&=\frac{\theta_x+\theta_y}{\theta_x}\int_0^z(1-\exp\{-x/\theta_y\})\frac{\exp\{-x/\theta_x\}}{\theta_x}\text dx\\
&=\frac{\theta_x+\theta_y}{\theta_x}[1-\exp\{-z/\theta_x\}]-\\
&\qquad\frac{\theta_x+\theta_y}{\theta_x}\theta_x^{-1}(\theta_x^{-1}+\theta_y^{-1})^{-1}
[1-\exp\{-z(\theta_x^{-1}+\theta_y^{-1})\}]\\
&=\frac{\theta_x+\theta_y}{\theta_x}[1-\exp\{-z/\theta_x\}]-
\frac{\theta_y}{\theta_x}[1-\exp\{-z(\theta_x+\theta_y)/\theta_x\theta_y\}]
\end{align}
This is a signed mixture of two Exponential distributions
$$\frac{\theta_x+\theta_y}{\theta_x}\mathcal Exp(\theta_x)-\frac{\theta_y}{\theta_x}\mathcal Exp(\theta_x\theta_y/(\theta_x+\theta_y))$$ which is illustrated by the fit in the following graphs:
based on $n=10^6$ simulations from $\mathcal Exp(10)$ and $\mathcal Exp(1/10)$ samples. This distribution has mean
\begin{align}\mathbb E[Z\mid\Delta=1]
&=\frac{\theta_x+\theta_y}{\theta_x}\theta_x-\frac{\theta_y}{\theta_x}\frac{\theta_x\theta_y}{\theta_x+\theta_y}\\
&=\theta_x+\theta_y\left[1-\frac{\theta_y}{\theta_x+\theta_y}\right]\\
&=\theta_x+\frac{\theta_x\theta_y}{\theta_x+\theta_y}\end{align}
The second term above is symmetric in $(\theta_x,\theta_y)$. Therefore,
$$\mathbb E[Z\mid\Delta=1]-\mathbb E[Z\mid\Delta=0]=\theta_x-\theta_y$$
which leads immediately to an unbiased estimator based on $(\mathbf X,\boldsymbol \Delta)$:
$$\dfrac{\sum_{i=1}^n Z_i\Delta_i}{\sum_{i=1}^n\Delta_i}-
\dfrac{\sum_{i=1}^n Z_i\{1-\Delta_i\}}{\sum_{i=1}^n\{1-\Delta_i\}}\tag{2}$$
although I cannot tell about (2) being UMVUE as the $Z_i$'s are not from an exponential family.
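A quick Monte Carlo sanity check of the identity behind (2), taking $Z_i=\max\{X_i,Y_i\}$ and $\Delta_i=\mathbf 1\{X_i>Y_i\}$ as in the derivation, so that $\mathbb P(\Delta=1)=\theta_x/(\theta_x+\theta_y)$ (Python, scale parameterisation):

```python
import numpy as np

rng = np.random.default_rng(42)
theta_x, theta_y, N = 2.0, 5.0, 400_000

x = rng.exponential(theta_x, N)   # numpy uses the scale parameterisation
y = rng.exponential(theta_y, N)
z = np.maximum(x, y)
delta = x > y                     # P(delta=1) = theta_x / (theta_x + theta_y)

est = z[delta].mean() - z[~delta].mean()   # estimator (2)
print(est)  # hovers near theta_x - theta_y = -3
```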
|
Find UMVUE of difference of parameters of two exponential distribution random variables
|
Changing the question in two different ways allows for some answers:
If $\theta_x$ and $\theta_y$ are rate rather than scale parameters,
$$
\frac{n-1}{n} \frac{\sum_{i=1}^n (1-\Delta_i)}{\sum_{i=1} Z
|
Find UMVUE of difference of parameters of two exponential distribution random variables
Changing the question in two different ways allows for some answers:
If $\theta_x$ and $\theta_y$ are rate rather than scale parameters,
$$
\frac{n-1}{n} \frac{\sum_{i=1}^n (1-\Delta_i)}{\sum_{i=1}^n Z_i} - \frac{n-1}{n} \frac{\sum_{i=1}^n \Delta_i}{\sum_{i=1}^n Z_i}\tag{1}
$$
is an unbiased estimator of $1/\theta_x - 1/\theta_y$ and since it only depends on $\mathbf T$, it is the UMVUE.
If instead $Z=\max\{X,Y\}$, with $\theta_x$ and $\theta_y$ scale parameters, consider the conditional distribution of $Z$ given $\Delta=1$. Since
$$\mathbb P(\Delta=1)=\frac{\theta_x}{\theta_x+\theta_y}$$
we have
\begin{align}
\mathbb P(Z\le z|\Delta=1)&=\frac{\theta_x+\theta_y}{\theta_x}\mathbb P(Z\le z,\Delta=1)\\
&=\frac{\theta_x+\theta_y}{\theta_x}\mathbb P(X\le z,X>Y)\\
&=\frac{\theta_x+\theta_y}{\theta_x}\int_0^z\int_0^x \frac1{\theta_x\theta_y}
\exp\{-x/\theta_x-y/\theta_y\}\text dy\,\text dx\\
&=\frac{\theta_x+\theta_y}{\theta_x}\int_0^z(1-\exp\{-x/\theta_y\})\frac{\exp\{-x/\theta_x\}}{\theta_x}\text dx\\
&=\frac{\theta_x+\theta_y}{\theta_x}[1-\exp\{-z/\theta_x\}]-\\
&\qquad\frac{\theta_x+\theta_y}{\theta_x}\theta_x^{-1}(\theta_x^{-1}+\theta_y^{-1})^{-1}
[1-\exp\{-z(\theta_x^{-1}+\theta_y^{-1})\}]\\
&=\frac{\theta_x+\theta_y}{\theta_x}[1-\exp\{-z/\theta_x\}]-
\frac{\theta_y}{\theta_x}[1-\exp\{-z(\theta_x+\theta_y)/\theta_x\theta_y\}]
\end{align}
This is a signed mixture of two Exponential distributions
$$\frac{\theta_x+\theta_y}{\theta_x}\mathcal Exp(\theta_x)-\frac{\theta_y}{\theta_x}\mathcal Exp(\theta_x\theta_y/(\theta_x+\theta_y))$$ which is illustrated by the fit in the following graphs:
based on $n=10^6$ simulations from $\mathcal Exp(10)$ and $\mathcal Exp(1/10)$ samples. This distribution has mean
\begin{align}\mathbb E[Z\mid\Delta=1]
&=\frac{\theta_x+\theta_y}{\theta_x}\theta_x-\frac{\theta_y}{\theta_x}\frac{\theta_x\theta_y}{\theta_x+\theta_y}\\
&=\theta_x+\theta_y\left[1-\frac{\theta_y}{\theta_x+\theta_y}\right]\\
&=\theta_x+\frac{\theta_x\theta_y}{\theta_x+\theta_y}\end{align}
The second term above is symmetric in $(\theta_x,\theta_y)$. Therefore,
$$\mathbb E[Z\mid\Delta=1]-\mathbb E[Z\mid\Delta=0]=\theta_x-\theta_y$$
which leads immediately to an unbiased estimator based on $(\mathbf X,\boldsymbol \Delta)$:
$$\dfrac{\sum_{i=1}^n Z_i\Delta_i}{\sum_{i=1}^n\Delta_i}-
\dfrac{\sum_{i=1}^n Z_i\{1-\Delta_i\}}{\sum_{i=1}^n\{1-\Delta_i\}}\tag{2}$$
although I cannot tell about (2) being UMVUE as the $Z_i$'s are not from an exponential family.
|
Find UMVUE of difference of parameters of two exponential distribution random variables
Changing the question in two different ways allows for some answers:
If $\theta_x$ and $\theta_y$ are rate rather than scale parameters,
$$
\frac{n-1}{n} \frac{\sum_{i=1}^n (1-\Delta_i)}{\sum_{i=1} Z
|
42,631
|
How can you combine control variates with antithetic variates
|
Using antithetic variates to improve the Monte Carlo approximation of $\mathbb E^F[h(X)]$ means generating correlated realisations from $F$, $X_1,\ldots,X_n$ such that$$\text{var}(h(X_1)+\cdots+h(X_n))<\text{var}(h(X_1))+\cdots+\text{var}(h(X_n))\tag{1}$$
While the idea is appealing, it is difficult to implement in realistically complex settings since establishing the reduction of variance for a given $h$ [or a collection of $h$'s] is challenging.
Assuming such an antithetic scheme (1) has been constructed, if a control variate is available for the model, i.e. a function $h_0(\cdot)$ such that $\mathbb E^F[h_0(X)]=0$ and $\text{corr}(h(X),h_0(X))\ne 0$, the (overall) negative correlation between the $h(X_i)$'s does not automatically transfer to an (overall) negative correlation between the $h(X_i)+\alpha h_0(X_i)$'s. Hence, even if $\alpha$ is chosen such that
$$\text{var}(h(X_i)+\alpha h_0(X_i))<\text{var}(h(X_i))\tag{2}$$
it does not necessarily imply that
$$\text{var}\left\{\sum_{i=1}^n h(X_i)+\alpha h_0(X_i)\right\}<\sum_{i=1}^n \text{var}(h(X_i))$$
because the $h(X_i)+\alpha h_0(X_i)$'s may turn out to be positively correlated.
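That said, here is a toy case in Python where the combination does pay off: estimating $\mathbb E[e^U]=e-1$ for $U\sim\mathcal U(0,1)$ with antithetic pairs $(U,1-U)$ and the quadratic control $h_0(u)=u^2-1/3$. (Note a linear control such as $u-1/2$ would be annihilated exactly by the antithetic averaging, illustrating the interaction discussed above.)

```python
import numpy as np

rng = np.random.default_rng(7)
u = rng.uniform(size=100_000)

g = (np.exp(u) + np.exp(1 - u)) / 2       # antithetic average of h(u) = e^u
c = (u**2 + (1 - u)**2) / 2 - 1 / 3       # zero-mean control, same averaging

cov = np.cov(g, c)
alpha = -cov[0, 1] / cov[1, 1]            # variance-minimising coefficient
est = np.mean(g + alpha * c)
print(est, np.var(g + alpha * c) / np.var(np.exp(u)))   # estimate, variance ratio
```

Here the combined estimator's variance is far below that of plain Monte Carlo on $e^U$, but as argued above there is no general guarantee of this.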
|
How can you combine control variates with antithetic variates
|
Using antithetic variates to improve the Monte Carlo approximation of $\mathbb E^F[h(X)]$ mean generating correlated realisations from $F$, $X_1,\ldots,X_n$ such that$$\text{var}(h(X_1)+\cdots+h(X_n))
|
How can you combine control variates with antithetic variates
Using antithetic variates to improve the Monte Carlo approximation of $\mathbb E^F[h(X)]$ means generating correlated realisations from $F$, $X_1,\ldots,X_n$ such that$$\text{var}(h(X_1)+\cdots+h(X_n))<\text{var}(h(X_1))+\cdots+\text{var}(h(X_n))\tag{1}$$
While the idea is appealing, it is difficult to implement in realistically complex settings since establishing the reduction of variance for a given $h$ [or a collection of $h$'s] is challenging.
Assuming such an antithetic scheme (1) has been constructed, if a control variate is available for the model, i.e. a function $h_0(\cdot)$ such that $\mathbb E^F[h_0(X)]=0$ and $\text{corr}(h(X),h_0(X))\ne 0$, the (overall) negative correlation between the $h(X_i)$'s does not automatically transfer to an (overall) negative correlation between the $h(X_i)+\alpha h_0(X_i)$'s. Hence, even if $\alpha$ is chosen such that
$$\text{var}(h(X_i)+\alpha h_0(X_i))<\text{var}(h(X_i))\tag{2}$$
it does not necessarily imply that
$$\text{var}\left\{\sum_{i=1}^n h(X_i)+\alpha h_0(X_i)\right\}<\sum_{i=1}^n \text{var}(h(X_i))$$
because the $h(X_i)+\alpha h_0(X_i)$'s may turn out to be positively correlated.
|
How can you combine control variates with antithetic variates
Using antithetic variates to improve the Monte Carlo approximation of $\mathbb E^F[h(X)]$ mean generating correlated realisations from $F$, $X_1,\ldots,X_n$ such that$$\text{var}(h(X_1)+\cdots+h(X_n))
|
42,632
|
Bootstrap for random effects logistic regression to get CI for difference in proportions
|
More of an extended set of comments than an answer.
The bootMer() and its associated simulate.merMod() functions in lme4 contain hints as to what works in practice whether in frequentist or Bayesian modeling. (The package doesn't seem to contain functions for case resampling; more on that later.) Quotes below are from the manual page for bootMer().
The use.u parameter determines whether the random effects (u) are used at their estimated values (use.u = TRUE) or simulation/sampling is done from the "spherical" random effects (use.u = FALSE). According to the bootMer() manual page, "resampling from the estimated values of u is not good practice."* That would argue against resampling from your fitted random effects, whether in Bayesian or frequentist modeling.
The type setting determines how errors/responses are sampled. With type = "parametric", "the i.i.d. errors (or, for GLMMs, response values drawn from the appropriate distributions) are resampled." With type = "semiparametric":
the i.i.d. errors are sampled from the distribution of (response) residuals. (For GLMMs, the resulting sample will no longer have the same properties as the original sample, and the method may not make sense; a warning is generated.)
As this question is about a logistic model, the "semiparametric" setting would thus be unwise.
So with bootMer() and a logistic model you would use type = "parametric", thus sampling response values drawn from the underlying binomial distribution. Your choice is whether to use random-effect values fixed at their estimates (use.u = TRUE) or to sample from the estimated distribution of random-effect values (use.u = FALSE). The former choice would seem to make your results conditional on your sample.
With respect to case resampling, Michael Chernick notes that: "The nonparametric bootstrap has been shown to be more robust than the parametric bootstrap when the model is misspecified." That said, it's not clear to me how to deal correctly with resamples that don't seem to be fit properly. If it's just a matter of slow convergence that might be handled by altering fitting options, but if a fit to a resample is really impossible then omitting those resamples would seem to lead to a bias.
As the Bayesian model seems to work well, I suspect that you will stick with that. The warning about resampling from fitted random effects in the bootMer() manual page would suggest you should move to drawing from their distribution instead.
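Stripped of the random-effect structure entirely (so this is only a cartoon of the type = "parametric" idea, not a substitute for bootMer()), a parametric bootstrap percentile interval for a difference in proportions looks like this (made-up fitted values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed fitted probabilities and group sizes from some (simplified) model
p1_hat, p2_hat, n1, n2 = 0.62, 0.48, 120, 130

boot = []
for _ in range(5000):
    y1 = rng.binomial(n1, p1_hat)     # redraw responses from the fitted model
    y2 = rng.binomial(n2, p2_hat)
    boot.append(y1 / n1 - y2 / n2)    # "refit" = recompute the proportions

lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo, hi)                         # percentile interval for p1 - p2
```

In the mixed-model case the redraw step must also simulate (or condition on) the random effects, which is precisely what the use.u switch controls.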
*The citation on this point in the manual page is to: Morris, J. S. (2002). The BLUPs Are Not ‘best’ When It Comes to Bootstrapping. Statistics & Probability Letters 56(4): 425–430. doi:10.1016/S0167-7152(02)00041-X. I haven't read that myself.
|
Bootstrap for random effects logistic regression to get CI for difference in proportions
|
More of an extended set of comments than an answer.
The bootMer() and its associated simulate.merMod() functions in lme4 contain hints as to what works in practice whether in frequentist or Bayesian m
|
Bootstrap for random effects logistic regression to get CI for difference in proportions
More of an extended set of comments than an answer.
The bootMer() and its associated simulate.merMod() functions in lme4 contain hints as to what works in practice whether in frequentist or Bayesian modeling. (The package doesn't seem to contain functions for case resampling; more on that later.) Quotes below are from the manual page for bootMer().
The use.u parameter determines whether the random effects (u) are used at their estimated values (use.u = TRUE) or simulation/sampling is done from the "spherical" random effects (use.u = FALSE). According to the bootMer() manual page, "resampling from the estimated values of u is not good practice."* That would argue against resampling from your fitted random effects, whether in Bayesian or frequentist modeling.
The type setting determines how errors/responses are sampled. With type = "parametric", "the i.i.d. errors (or, for GLMMs, response values drawn from the appropriate distributions) are resampled." With type = "semiparametric":
the i.i.d. errors are sampled from the distribution of (response) residuals. (For GLMMs, the resulting sample will no longer have the same properties as the original sample, and the method may not make sense; a warning is generated.)
As this question is about a logistic model, the "semiparametric" setting would thus be unwise.
So with bootMer() and a logistic model you would use type = "parametric", thus sampling response values drawn from the underlying binomial distribution. Your choice is whether to use random-effect values fixed at their estimates (use.u = TRUE) or to sample from the estimated distribution of random-effect values (use.u = FALSE). The former choice would seem to make your results conditional on your sample.
With respect to case resampling, Michael Chernick notes that: "The nonparametric bootstrap has been shown to be more robust than the parametric bootstrap when the model is misspecified." That said, it's not clear to me how to deal correctly with resamples that don't seem to be fit properly. If it's just a matter of slow convergence that might be handled by altering fitting options, but if a fit to a resample is really impossible then omitting those resamples would seem to lead to a bias.
As the Bayesian model seems to work well, I suspect that you will stick with that. The warning about resampling from fitted random effects in the bootMer() manual page would suggest you should move to drawing from their distribution instead.
*The citation on this point in the manual page is to: Morris, J. S. (2002). The BLUPs Are Not ‘best’ When It Comes to Bootstrapping. Statistics & Probability Letters 56(4): 425–430. doi:10.1016/S0167-7152(02)00041-X. I haven't read that myself.
|
42,633
|
Is there something like a confusion matrix for a probabilistic score rather than categories?
|
This is only a partial answer as this comes from my personal experience of training classifiers rather than the literature.
Many classifiers output a weight (or probability) for each class simultaneously, which means the weights are paired by the example from the data set. The approach I have taken is to treat this resulting matrix (rows correspond to examples, columns to the class, and entries are the output weights) as a dataset unto itself to study.
In some cases this involves estimating conditional metaprobabilities between classes, but often pairplots and dimensionality reduction plots (PCA/MDS/etc.) reveal a lot about what is going on between classes. However, the metaprobability distributions may be what you're interested in if you wish to quantify dependence between class confidences.
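As a concrete illustration of treating the output matrix as a dataset, here is a minimal numpy sketch (the Dirichlet draws are just a stand-in for real classifier outputs, not any particular model) that projects the class-probability matrix onto its principal components via SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
proba = rng.dirichlet(np.ones(4), size=500)  # stand-in for n_samples x n_classes outputs

# Treat the probability matrix itself as a dataset: centre the columns
# and project onto principal components via SVD.
X = proba - proba.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
coords = U[:, :2] * s[:2]          # 2-D coordinates for a scatter plot
explained = s**2 / np.sum(s**2)    # variance share per component
```

Because each row of the probability matrix sums to 1, the centred rows sum to 0, so one singular value is numerically zero; the between-class structure lives in the remaining components.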
|
42,634
|
Is there something like a confusion matrix for a probabilistic score rather than categories?
|
I have never come across such a thing in the literature, but it is a very interesting idea. Firstly, I'd like to point out that normalised confusion matrices exist (I know this isn't what you are asking for, but it will illustrate a point I'm going to make, so just bear with me); for these types of confusion matrix there is some form of normalisation such that rows or columns sum to 1, the matrix has a norm of 1, or individual elements are normalised relative to the total number of samples. This means, of course, that a confusion matrix can contain entries which are in the range $[0,1]$ instead of the typical confusion matrix where entries are in range $[0, NumSamples]$; it encapsulates the same relationships as an un-normalized confusion matrix but simply with the values scaled.
My idea would be instead of creating a normalised matrix that contains TP/TN/FP/FN as entries you instead construct a matrix of the One-vs-One scores for different classes using a metric such as Average Precision which takes into account how thresholding affects prediction. Of course, this matrix would be symmetric as Dog-vs-Cat has the same AP as Cat-vs-Dog, but it would give an idea of prediction confidence based on the probabilistic scores rather than the hard predictions. AP would be my first choice, but this method would be relevant to any metric which uses prediction scores (and would even work for metrics that use hard predictions too).
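A minimal sketch of that idea in Python/numpy (the AP implementation and the toy softmax data are my own illustration, not from any particular library):

```python
import numpy as np

def average_precision(scores, labels):
    """AP = mean of precision@k taken at the rank of each positive."""
    order = np.argsort(-scores)
    labels = labels[order]
    hits = np.cumsum(labels)
    ranks = np.arange(1, len(labels) + 1)
    return np.sum((hits / ranks) * labels) / labels.sum()

def ovo_ap_matrix(proba, y):
    """Class-by-class matrix of one-vs-one average precision, computed
    from the probabilistic scores rather than the hard predictions."""
    k = proba.shape[1]
    M = np.full((k, k), np.nan)
    for i in range(k):
        for j in range(k):
            if i == j:
                continue
            mask = (y == i) | (y == j)            # restrict to the class pair
            labels = (y[mask] == i).astype(float)
            M[i, j] = average_precision(proba[mask, i], labels)
    return M

# Toy check: three well-separated classes from softmax-ed random logits.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)
logits = rng.normal(size=(300, 3))
logits[np.arange(300), y] += 2.0          # boost the true class
proba = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
M = ovo_ap_matrix(proba, y)
```

One small caveat: with multi-class scores restricted to a pair, `M[i, j]` and `M[j, i]` need not be exactly equal (the two class scores are not perfect complements), so in practice the matrix is only approximately symmetric.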
|
42,635
|
Understanding the Benjamini-Hochberg method proof
|
Let's back up and give more context to the two cases specified by equation (5), and then clarify the reasoning behind some of the details.
In equation (5), we consider the two cases $p \leq p''$ and $p > p''$.
1. $P'_{m_0} = p \leq p''$
This is a difficult (rare) case because all the $m_0$ p-values associated with true nulls are small, and they are likely mixed with false nulls that also have small p-values. By definition of $P'_{m_0} = p \leq p''$, all the true hypotheses (along with $j_0$ false ones) will get rejected by the procedure and we make the maximum error among those $m_0$ true hypotheses. The saving grace is that hopefully $m_0$ is small, or that this case is rare (it is unlikely that so many Unif(0,1) variables would all fall below the p-values associated with truly false hypotheses).
"Thus all $m_0 + j_0$ hypotheses are rejected"—why is that? By which procedure? Procedure (1) (=BH)? Or by using the cutoff declared earlier?
Under $p\leq p''$, procedure BH(1) and inequality (4) are effectively describing the same procedure when we consider all true and false hypotheses. It might help to think of the RHS of (4) $:= p''$ as an upper bound on the cutoff described in procedure BH(1) where we think about the index $i$ as $m_0 + j$. Note that at $P'_{m_0}$, the RHS of (1) and (4) are equal, and the $k$ from BH (1) will be $k = m_0 + j_0$ because the maximum of all the true hypotheses are also below this threshold $p''$. I believe the main reason to introduce the inequality like this is for the proof to go through, with the intuition that controlling FDR is limited by the unknown proportion of our tests that are actually true $\left(\frac{m_0}{m}\right)$.
2. $P'_{m_0} = p > p''$
This case is more interesting (common) since our true null p-values are not completely mixed with the false null p-values. Since the true null p-values $P'_{i} \sim Unif(0,1)$, we expect many to fall well above the threshold of rejection (well, at least one, the maximum $P'_{m_0}$), but the goal is to quantify the extent to which this happens, which is where we use the induction hypothesis (IH). I think the cleverness here is that by conditioning out the highest true null p-value $P'_{m_0}$, we can create "new" p-values for a sub-problem with $m$ p-values, allowing use of the IH.
I have no idea how they arrive that there must be a k such that $i \leq k \leq m_0 + j - 1$ for which $p_{(k)} \leq \{k / (m+1)\}q^*$ —what is that $j$? Why $−1$?
Under the condition $P'_{m_0} = p > p''$, we are certain that at least one value will not be rejected, the hypothesis associated with $P'_{m_0}$. Again, it's useful to think of $p''$ as an upper bound on the cutoff provided by BH (1). Even though the cutoff values for inequality (4) are defined only at the false nulls, no matter where $P'_{m_0}$ lands among the false p-values, $P'_{m_0} > p''$ always exceeds the upper bound of the rejected p-values under the BH condition, since the upper bound in (4) does not change at true null p-values (in the plot, the dashed line only inflects upwards at false nulls).
The $j$ here is the same $j$ preceding equation (4). The "$-1$" is here because we're creating a sub-problem by conditioning on the maximum true null p-value. The maximum value of $j$ is $m_1$, so the largest the sub-problem can be is $m_0 + m_1 - 1 = m$ p-values.
We create the associated sub-problem by dividing by the maximum $p$ to obtain new Unif(0,1) random variables, and it happens that the new selection problem has $m_0 + j - 1$ p-values, associated with the constant $\frac{m_0 + j - 1}{(m+1)p}q^*$.
From here, the proof is largely algebraic, and it seems you understand the remaining portions.
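As a quick cross-check of the machinery, the BH step-up procedure (1) is easy to state in code; here is a short Python sketch (language chosen only for illustration), applied to the same ten p-values used in the plot, where the cutoff lands at $k = 6$:

```python
import numpy as np

def bh_reject(pvals, q):
    """Benjamini-Hochberg step-up: find the largest i with
    p_(i) <= (i / m) * q and reject hypotheses (1), ..., (i)."""
    p = np.sort(np.asarray(pvals))
    m = len(p)
    below = np.nonzero(p <= np.arange(1, m + 1) / m * q)[0]
    k = int(below[-1]) + 1 if below.size else 0
    return k, p[:k]

# Same ten ordered p-values as in the plot; BH at q* = .05 rejects k = 6.
pvals = [.001, .0015, .002, .0035, .0075, .02, .037, .06, .075, .1]
k, rejected = bh_reject(pvals, q=0.05)
```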
For completeness and the curious, associated code for the plot is below!
qs <- .05 # FDR control rate
m0 <- 4 # num true H0
m1 <- 6 # num false H0
ix <- 1:10 # Index of all true/false H0
jx <- c(1, 1, 2, 2, 3, 3, 4, 5, 5, 6) # Index increases when H0 false
pval <- c(.001, .0015, .002, .0035, .0075, .02, .037, .06, .075, .1)
plot(ix, pval,
xlim = c(1, 10),
ylim = c(0, .1),
col = c(2, 3, 2, 3, 2, 3, 2, 2, 3, 2),
cex = 1.5,
pch = 19,
xlab = "Ordered Null Index",
ylab = "p-values",
xaxt = "n")
title(main =
'Comparison of inequality (4) and BH Rejection (1) when p > p"')
lines(ix, ix / 10 * qs, col = rgb(0,0,0, alpha = .5), lwd = 4) # BH cutoff
lines(ix, (m0 + jx) / (m0 + m1) * qs, lty=2) # (4) RHS
# Various labels and legends
text(9, .075, expression("P'"[m[0]]), pos = 3, cex = 1.1, col = 3)
text(7.2, .0355, expression("p"[j[0]]), pos = 1, cex = 1.1, col = 2)
text(7, .04, 'p"', pos = 3, cex = 1.1)
axis(1, at = 1:10, labels = c("1", "2", "3", "4", "5", "k = 6",
expression(7 ~ (j[0] ~"="~ 4)), "8", "9", "10"))
legend("topleft", lty = c(1, 3), lwd = c(4, 1),
legend = c("BH cutoff (1)", "Inequality (4)"))
legend(x = .64, y = .0909, col = c(3, 2), pch= c(19, 19),
legend = c("True Null", "False Null"))
|
42,636
|
What exactly is the gblinear booster in XGBoost?
|
It is just using a linear model with l1 and l2 regularization as its base learner rather than a decision tree. Here is a similar Q&A: Difference in regression coefficients of sklearn's LinearRegression and XGBRegressor .
So it will be different from other linear models because it is optimized slightly differently, but more so because you are boosting it, which provides further regularization in linear models, unlike boosting trees, which adds complexity. So it tends to shrink the linear coefficients. You can boost any model, but you typically only get major gains when you boost models which partition your data in some way, such as piecewise linear functions or trees.
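To see the shrinkage effect concretely, here is a small numpy sketch (not XGBoost's actual implementation, just the same idea: gradient boosting with an L2-regularized linear base learner and a learning rate), which ends up with smaller coefficients than a plain least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
true_coef = np.array([2.0, -1.0, 0.5])
y = X @ true_coef + rng.normal(scale=0.1, size=n)

def boosted_linear(X, y, n_rounds=10, lr=0.05, l2=1.0):
    """Each round fits a ridge-penalized linear model to the current
    residuals and adds a damped step to the running coefficients."""
    coef = np.zeros(X.shape[1])
    A = X.T @ X + l2 * np.eye(X.shape[1])
    for _ in range(n_rounds):
        resid = y - X @ coef
        coef = coef + lr * np.linalg.solve(A, X.T @ resid)
    return coef

coef_boost = boosted_linear(X, y)
coef_ols = np.linalg.lstsq(X, y, rcond=None)[0]
# With few rounds and a small learning rate, the boosted coefficients
# are shrunk toward zero relative to the unregularized fit.
```

With many rounds the boosted fit would converge toward the least-squares solution; early stopping plus the learning rate is what produces the shrinkage.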
|
42,637
|
GAM: Do shrinkage smooth splines also address for concurvity?
|
In short, no, using select = TRUE doesn't automatically drop concurved terms. You should still check the concurvity of the terms in the resultant model, and decide whether to drop terms or not for highly concurved ones, checking how the other estimated terms change when you drop a concurved term.
That said, fitting with method = "REML" (or "ML" or "fREML" depending on context) and select = TRUE is likely the best protection we have against the issues raised by concurved terms in the model if you don't want to, or can't, drop concurved terms.
|
42,638
|
How to set up a DL classification model so that it selects from an ever changing menu
|
For matching problems, there are mainly two approaches:
Single network to embed object A and object B: In natural language processing, the input would be "[CLS] SentenceA [SEP] SentenceB [SEP]". Then the neural network would measure the difference between the two sentences. In computer vision, you would need to concatenate the two images (as you do not have a sequence).
Siamese network: It is still a single network, but you would first run "object A" through the network and then "object B" through the same network. The result is two vectors of size batch_size x n.
The second approach is a bit more complicated because you have to turn the outputs "object A" and "object B" into a single number.
However, this approach is also more common in computer vision. As you have two vectors of size n, you can compute the scalar product between both objects to obtain a single value. The scalar product can be interpreted as (unnormalized) cosine similarity; normalizing the two vectors may not be necessary in deep learning. The next step is to define a loss function $L(a^Tb, y)$ between the predicted similarity $a^Tb$ and the ground truth $y$.
Note that a batch should not only consist of positive examples. So you have to sample negative examples for each positive example (noise contrastive estimation). See this question.
I described the general approach, but what works best depends on the dataset. For example, instead of using the scalar product, you could also try the mean squared error. Besides the papers I mentioned in the comments, you can also look at contrastive losses.
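A bare-bones numpy sketch of the second (siamese) approach, with a single tanh layer standing in for the real shared network (all names and shapes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))      # shared weights: the *same* network embeds A and B

def embed(x):
    """Stand-in for the shared embedding network."""
    return np.tanh(x @ W)

a = rng.normal(size=(5, 8))      # batch of "object A" inputs
b = rng.normal(size=(5, 8))      # batch of "object B" inputs

ea, eb = embed(a), embed(b)      # two batch_size x n matrices
scores = np.sum(ea * eb, axis=1) # row-wise scalar product: one match score per pair
```

A contrastive-style loss would then push `scores` up for matching pairs and down for the sampled negatives.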
|
42,639
|
How to set up a DL classification model so that it selects from an ever changing menu
|
NOTE: I'm still editing this post, need a bit more time to finish
Here's a philosophical aside on the limits of few-shot learning. It is not exactly the answer to the question, but I guess it could help set the expectations straight.
Part 1: Naive estimate of the number of samples needed for classifier convergence. Let's say we did some dimensionality reduction on the available images and extracted $N$ features. We could argue that these features are representative of the images, as finer features are weak and likely imperceptible to the human eye. For simplicity, let us assume that all of the features are orthogonal. Further, for the sake of the argument, let's assume that exactly one of these features is used by the participant to classify the images into those they like vs. those they don't. Our goal is to find which feature that is; we have to know this well, or we would not be able to make predictions. Let's assume that each feature has a standard normal distribution $\mathcal{N}(0,1)$ over the available images, and that the important feature is greater than zero if the image is liked and below zero otherwise, meaning that the participant will like approximately half of all the images. The question now is: how many trials do we require to find the correctly predicting feature? The expected number is $\log_2 N$: some features will be predictive at random, and at every trial each of them has a 50% probability of still being predictive by chance, meaning that 50% of them will drop out. A related and a bit more realistic question is: how many trials do we require to guarantee that all non-informative features have dropped off (e.g. with 95% confidence)? A little less intuitively, this value also scales as $\log_2 N$. The proof is as follows: after $t$ trials, the probability that one non-informative feature still looks informative by chance is $p_1 = 2^{-t}$. We are looking for the probability that all $N$ non-informative features have been revealed, and we want this probability to be greater than or equal to 95%. This can be written as $(1 - p_1)^N \geq 0.95$.
If we solve this inequality for $t$ and use a series expansion, we get $t \geq C + \log_2 N + O(\frac{1}{N})$ for some small constant $C$.
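The $\log_2 N$ scaling is easy to check numerically; the sketch below (my own illustration, not from the post) finds the smallest $t$ satisfying $(1 - 2^{-t})^N \geq 0.95$ and shows it growing by about one trial per doubling of $N$:

```python
def trials_needed(N, conf=0.95):
    """Smallest t with (1 - 2**-t)**N >= conf, i.e. enough trials to rule
    out all N non-informative features with probability >= conf."""
    t = 1
    while (1 - 2.0 ** -t) ** N < conf:
        t += 1
    return t

# Roughly log2(N) plus a small constant; about one extra trial per doubling of N.
counts = {N: trials_needed(N) for N in (10, 100, 1000, 10**6)}
```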
Part 2: Effect of noise. But this is the absolute lowest, unrealistic estimate. The difficulty starts when we consider that participants such as humans and mammals are known to have variance in their choices, meaning that they do not stay consistent with their general strategy on all trials. To model this, we must allow for a fraction of trials to be non-informative. Let's say that the fraction of correct trials for informative features is $p_I = 90\%$. We need a condition to refute a feature if it is non-informative. Dropping some finer details of hypothesis testing, we will find the fraction $\phi=\frac{t_{good}}{t}$ of trials for which the feature is informative, and compare it with $p_I$. The more trials we measure, the closer we expect the observed fraction to be to the true fraction. So, we would consider the feature informative if the observed fraction of trials $\phi$ is greater than or equal to some threshold $\phi_0$, which can be written as
$$\phi_0 = p_I+\frac{\sigma_I}{\sqrt{t}}K$$
where $\sigma_I$ is the standard deviation of a single trial (in our binomial case $\sigma_I = \sqrt{p_I(1-p_I)}$), and $K$ is some constant which depends on the desired confidence of the test (for the normal approximation it is $K=\sqrt{2}\,erf^{-1}(2\alpha - 1)$, where $\alpha$ is the confidence level). Next, we need to find the probability that a non-informative feature fails the test $\phi \geq \phi_0$, namely, that the result will be $\phi < \phi_0$. For a non-informative feature, the fraction of correct trials will be $p_{NI}=50\%$. By the De Moivre-Laplace approximation, the sample fraction for non-informative features will be normally distributed, namely $$\phi \sim \mathcal{N}\left(\frac{1}{2}, \frac{1}{4t}\right)$$
Part 3: Effect of feature synergy. Finally, the situation is further complicated if individual features are insufficient for good prediction and synergistic predictors are required. For example, if we require a pair of features to be simultaneously high (e.g. a person likes red images that are also very sharp), then we effectively have $N^2$ features to consider.
TL;DR: Humans have to be cheating when performing few-shot learning. The only way to learn fast is to have prior information on what features are likely to be salient (predictive of outcome).
|
How to set up a DL classification model so that it selects from an ever changing menu
|
NOTE: I'm still editing this post, need a bit more time to finish
Here's a philosophical aside on the limits of few-shot learning. It is not exactly the answer to the question, but I guess it could he
|
How to set up a DL classification model so that it selects from an ever changing menu
NOTE: I'm still editing this post, need a bit more time to finish
Here's a philosophical aside on the limits of few-shot learning. It is not exactly the answer to the question, but I guess it could help set the expectations straight.
Part 1: Naive estimate of number of samples needed for classifier convergence. Let's say we did some dimensionality reduction on the available images and extracted $N$ features. We could argue that these features are representative of the images as finer features are weak and likely imperceptible for the human eye. For simplicity, let us assume that all of the features are orthogonal. Further, for the sake of the argument, let's assume that exactly 1 of these features is used by the participant to classify the images as those they like vs those they don't. Our goal is to find which feature that is. We have to know this well, or we would not be able to make predictions. Let's assume that each feature has a standard normal distribution $\mathcal{N}(0,1)$ over the available images, and that the important feature will be greater than zero if the image is liked, otherwise below, meaning that the participant will like approximately half of all the images. The question now is: how many trials do we require to find the correctly-predicting feature. The expected number is $\log_2 N$ - there will be some features which will be predictive at random, and at every trial each of them will have 50% probability of still being predictive by chance, meaning that 50% of them will drop out. A related and a bit more realistic question is - how many trials do we require to guarantee that all non-informative features have dropped off (e.g. with 95% confidence). A little less intuitively, but this value also scales as $\log_2 N$. The proof is as follows: after $t$ trials, the probability that one non-informative feature still looks informative by chance is $p_1 = 2^{-t}$. We are looking for the probability that $N$ non-informative features have all been revealed, and we want this probability to be greater or equal to 95%. This can be written as $(1 - p_1)^N \geq 0.95$. 
If we solve this inequality for $t$ and use series expansion, we will get $t \geq C + \log_2 N + O(\frac{1}{N})$ for some small constant $C$.
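As a sanity check on the $\log_2 N$ estimate, here is a small Monte Carlo sketch in Python (the population size $N$ and the number of repetitions are arbitrary choices for illustration):

```python
# Simulate how many trials are needed until all N-1 non-informative
# features have mispredicted at least once (each survives a trial
# with probability 1/2, matching the argument above).
import math
import random

def trials_until_unique(n_features, rng):
    alive = n_features - 1  # non-informative features still consistent
    t = 0
    while alive > 0:
        # each surviving feature independently stays consistent w.p. 0.5
        alive = sum(1 for _ in range(alive) if rng.random() < 0.5)
        t += 1
    return t

rng = random.Random(0)
N = 1024
runs = [trials_until_unique(N, rng) for _ in range(200)]
avg = sum(runs) / len(runs)
print(f"log2(N) = {math.log2(N):.1f}, average trials needed = {avg:.1f}")
```

The average lands a little above $\log_2 N$, consistent with the constant-offset term in the expansion above.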
Part 2: Effect of noise. But this is the absolute lowest, unrealistic estimate. The difficulty starts when we consider that participants such as humans and other mammals are known to have variance in their choices, meaning that they do not stay consistent with their general strategy on all trials. To model this, we must allow for a fraction of trials to be non-informative. Let's say that the fraction of correct trials for informative features is $p_I = 90\%$. We need a condition to reject a feature if it is non-informative. Dropping some finer details of hypothesis testing, we will find the fraction $\phi=\frac{t_{good}}{t}$ of trials for which the feature is informative, and compare it with $p_I$. The more trials we measure, the closer we would expect the observed fraction to be to the true fraction. So, we would consider the feature informative if the observed fraction of trials $\phi$ is greater than or equal to some threshold $\phi_0$, which can be written as
$$\phi_0 = p_I+\frac{\sigma_I}{\sqrt{t}}K$$
where $\sigma_I$ is the variance of the fraction (in our binomial case it is $\sigma_I = \sqrt{p_I(1-p_I)}$), and $K$ is some constant which depends on the desired confidence of the test (for normal approximation it is $K=\sqrt{2}erf^{-1}(2\alpha - 1)$ where $\alpha$ is the confidence level). Next, we need to find the probability that a non-informative feature fails the test $\phi \geq \phi_0$, namely, that the result will be $\phi < \phi_0$. For a non-informative trial, the fraction of correct trials will be $p_{NI}=50\%$. Using DeMoivre-Laplace approximation, the sample fraction for non-informative features will be normally distributed, namely $$\phi \sim \mathcal{N}(\frac{1}{2}, \frac{1}{4t})$$
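A numeric sketch of this test in Python (here $K$ is taken with the sign that makes $\phi_0$ a lower confidence bound on $p_I$; the sign convention depends on how the confidence level is plugged into the inverse error function) shows how quickly the false-pass probability of a non-informative feature decays with $t$:

```python
# For a non-informative feature, phi ~ N(1/2, 1/(4t)); compute the
# probability that it still clears the threshold phi_0 as t grows.
import math

def norm_sf(x):
    # survival function of the standard normal
    return 0.5 * math.erfc(x / math.sqrt(2))

p_I = 0.9
sigma_I = math.sqrt(p_I * (1 - p_I))  # ~0.3
K = 1.645  # one-sided 95% normal quantile

false_pass = []
for t in (10, 30, 100):
    phi_0 = p_I - sigma_I * K / math.sqrt(t)  # lower confidence bound on p_I
    z = (phi_0 - 0.5) * 2 * math.sqrt(t)      # standardise under N(1/2, 1/(4t))
    false_pass.append(norm_sf(z))
print([f"{p:.2g}" for p in false_pass])
```

The false-pass probability drops rapidly: already by $t=100$ a non-informative feature essentially cannot masquerade as informative.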
Part 3: Effect of feature synergy. Finally, the situation is further complicated if individual features are insufficient for good prediction and synergistic predictors are required. For example, if we require a pair of features to be simultaneously high (e.g. a person likes red images that are also very sharp), then we effectively have $N^2$ features to consider.
TL;DR: Humans have to be cheating when performing few-shot learning. The only way to learn fast is to have prior information on what features are likely to be salient (predictive of outcome).
|
How to set up a DL classification model so that it selects from an ever changing menu
Here's a philosophical aside on the limits of few-shot learning. It is not exactly the answer to the question, but I guess it could he
|
42,640
|
3 Treatment Agronomic Experiment: Latin Square or Randomized Complete Block Design with 4 replicates?
|
It is more natural to compare designs with equal number of observations, so I will compare a $3\times 3$ latin square (LSQ) with a thrice replicated RCBD. The LSQ leaves 2 df (degrees of freedom) for error, while the RCBD leaves 4 df for error. So the advantage of the RCBD is more df for error, while the LSQ can possibly remove more variation and so give a lower variance. What is more important?
If you make inference with (say) a 95% confidence interval (CI) for effects of interest, those will have the form
$$ \text{estimate}\pm \hat{\sigma} t_{\nu,0.975}/\sqrt{n} $$
Compare those t quantiles: $t_{2,0.975}=4.30, t_{4,0.975}=2.78$, so the variance reduction must be large: the LSQ variance must be at most $\left( 2.78/4.30 \right)^2 \approx 0.42$ times the RCBD variance to get more effective inference.
How does this change with more replicates? Say we double the number of observations above; then the LSQ design gives 4 df for error, while the RCBD gives 10. You can redo the calculation above and draw your conclusions.
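For example, assuming SciPy is available, the calculation can be redone in a few lines for any pair of designs:

```python
# Compare CI half-width factors for the LSQ vs RCBD error df,
# as in the text: the LSQ must shrink the variance by this factor
# to match the RCBD's confidence-interval width.
from scipy.stats import t

def needed_variance_factor(df_lsq, df_rcbd, level=0.975):
    return (t.ppf(level, df_rcbd) / t.ppf(level, df_lsq)) ** 2

print(t.ppf(0.975, 2), t.ppf(0.975, 4))  # the quantiles quoted above
print(needed_variance_factor(2, 4))      # single replicate: ~0.42
print(needed_variance_factor(4, 10))     # doubled designs
```

With the doubled designs the required factor is much less extreme, illustrating how the LSQ's df penalty shrinks as replication grows.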
But the general conclusion will be that with a low $n$ (very few plots), a latin square design might not offer an advantage over a RCBD.
|
3 Treatment Agronomic Experiment: Latin Square or Randomized Complete Block Design with 4 replicates
|
It is more natural to compare designs with equal number of observations, so I will compare a $3\times 3$ latin square (LSQ) with a thrice replicated RCBD. The LSQ leaves 2 df (degrees of freedom) for
|
3 Treatment Agronomic Experiment: Latin Square or Randomized Complete Block Design with 4 replicates?
It is more natural to compare designs with equal number of observations, so I will compare a $3\times 3$ latin square (LSQ) with a thrice replicated RCBD. The LSQ leaves 2 df (degrees of freedom) for error, while the RCBD leaves 4 df for error. So the advantage of the RCBD is more df for error, while the LSQ can possibly remove more variation and so give a lower variance. What is more important?
If you make inference with (say) a 95% confidence interval (CI) for effects of interest, those will have the form
$$ \text{estimate}\pm \hat{\sigma} t_{\nu,0.975}/\sqrt{n} $$
Compare those t quantiles: $t_{2,0.975}=4.30, t_{4,0.975}=2.78$, so the variance reduction must be large: the LSQ variance must be at most $\left( 2.78/4.30 \right)^2 \approx 0.42$ times the RCBD variance to get more effective inference.
How does this change with more replicates? Say we double the number of observations above; then the LSQ design gives 4 df for error, while the RCBD gives 10. You can redo the calculation above and draw your conclusions.
But the general conclusion will be that with a low $n$ (very few plots), a latin square design might not offer an advantage over a RCBD.
|
3 Treatment Agronomic Experiment: Latin Square or Randomized Complete Block Design with 4 replicates
It is more natural to compare designs with equal number of observations, so I will compare a $3\times 3$ latin square (LSQ) with a thrice replicated RCBD. The LSQ leaves 2 df (degrees of freedom) for
|
42,641
|
What do you do after your tuned model performs badly on the test set?
|
Generally speaking, this should not be the case, and is most likely an implementation bug. The validation performance should be very close to the test performance. If this is not the case, either:
A) [Most likely] the code has one of the following mistakes:
Possibility 1: Incorrect preprocessing of the test set. E.g. applying some sort of preprocessing (zero-mean centering, normalizing, etc.) to the train and validation sets, but not the test set.
Possibility 2: Testing the model in train mode. Certain layers such as batch normalization perform differently at training and inference time.
Possibility 3: Some other implementation-related bug.
B) the validation set and test set come from very different distributions.
C) the dataset is small with an even smaller validation set.
|
What do you do after your tuned model performs badly on the test set?
|
Generally speaking, this should not be the case, and is most likely an implementation bug. The validation performance should be very close to the test performance. If this is not the case, either:
A)
|
What do you do after your tuned model performs badly on the test set?
Generally speaking, this should not be the case, and is most likely an implementation bug. The validation performance should be very close to the test performance. If this is not the case, either:
A) [Most likely] the code has one of the following mistakes:
Possibility 1: Incorrect preprocessing of the test set. E.g. applying some sort of preprocessing (zero-mean centering, normalizing, etc.) to the train and validation sets, but not the test set.
Possibility 2: Testing the model in train mode. Certain layers such as batch normalization perform differently at training and inference time.
Possibility 3: Some other implementation-related bug.
B) the validation set and test set come from very different distributions.
C) the dataset is small with an even smaller validation set.
|
What do you do after your tuned model performs badly on the test set?
Generally speaking, this should not be the case, and is most likely an implementation bug. The validation performance should be very close to the test performance. If this is not the case, either:
A)
|
42,642
|
What do you do after your tuned model performs badly on the test set?
|
You have overfitted the training set. Try again with more data, or with some form of regularization, possibly including added noise.
|
What do you do after your tuned model performs badly on the test set?
|
You have overfitted the training set. Try again with more data, or with some form of regularization, possibly including added noise.
|
What do you do after your tuned model performs badly on the test set?
You have overfitted the training set. Try again with more data, or with some form of regularization, possibly including added noise.
|
What do you do after your tuned model performs badly on the test set?
You have overfitted the training set. Try again with more data, or with some form of regularization, possibly including added noise.
|
42,643
|
What do you do after your tuned model performs badly on the test set?
|
This may be due to your dev set and test set not being identically distributed.
One way to test this is to train a classifier that discriminates between the training/dev vs the test set.
If your dataset is small you should definitely check whether the drop from dev to test metrics is consistent between splits. If the drop varies, you should do a nested cross-validation. That way you average over the (random) splits and get a better estimate of the true performance.
|
What do you do after your tuned model performs badly on the test set?
|
This may be due to your dev set and test set not being identically distributed.
One way to test this is to train a classifier that discriminates between the training/dev vs the test set.
If your datas
|
What do you do after your tuned model performs badly on the test set?
This may be due to your dev set and test set not being identically distributed.
One way to test this is to train a classifier that discriminates between the training/dev vs the test set.
If your dataset is small you should definitely check whether the drop from dev to test metrics is consistent between splits. If the drop varies, you should do a nested cross-validation. That way you average over the (random) splits and get a better estimate of the true performance.
|
What do you do after your tuned model performs badly on the test set?
This may be due to your dev set and test set not being identically distributed.
One way to test this is to train a classifier that discriminates between the training/dev vs the test set.
If your datas
|
42,644
|
DAG: no back-door paths but background information shows a need for adjusting
|
What is the difference between disjunctive cause criterion and Pearl's single door criterion?
The Single Door Criterion establishes conditions under which a causal path between two variables, say $X \rightarrow Y$, will be consistently estimated by the regression coefficient for $X$ in a multivariable regression model for the response $Y$. Briefly, it stipulates that, for a set of variables containing various paths between them and being acyclic (ie, it's a DAG), a subset of these variables, $Z$, will be sufficient provided that
$Z$ contains no descendant of $Y$, and
by removing the arrow in $X \rightarrow Y$, $X$ is then independent of $Y$ given $Z$
This leads to the familiar "rules" that we should condition on confounders (ie backdoor adjustment), but not mediators.
It also leads to "front door adjustment", where we are able to estimate the causal effect of $X$ on $Y$ in $X \rightarrow M \rightarrow Y$ even in the presence of unmeasured confounding.
The Disjunctive Cause Criterion (VanderWeele, 2019) is actually very similar to backdoor adjustment, but tries to avoid having to explicitly identify confounders, and instead seeks to adjust for variables that are causes of either the main exposure or the outcome (or indeed both), but excluding instrumental variables. However, I say "tries to", because there is still a need to include confounders:
"controlling for each covariate that is a cause of the exposure, or of the outcome, or of both; excluding from this set any variable known to be an instrumental variable; and including as a covariate any proxy for an unmeasured variable that is a common cause of both the exposure and the outcome"
VanderWeele TJ. Principles of confounder selection. Eur J Epidemiol. 2019. https://doi.org/10.1007/s10654-019-00494-6.
The problem with this approach is two-fold. First it can lead to "over-adjustment", that is, unlike Pearl's theory, it can not, except by accident, lead to a "minimally sufficient" set of covariates, so in general it will not result in a parsimonious model and could suffer from problems due to high correlations between covariates. Second, it can lead to the inclusion of mediators, which VanderWeele acknowledges would be a problem.
And if I have the aforementioned background knowledge, is it reasonable to adjust for all available six covariates?
No, I don't think this is appropriate. All 6 observed variables appear to be mediators of the causal effect of TOWN on INCOME and should not be adjusted for. This is precisely an example of the second problem with this technique, mentioned in my last paragraph. See this answer for details and examples of what can go wrong if you do adjust for mediators:
How do DAGs help to reduce bias in causal inference?
Without further details of your research question, study, and data it is difficult to advise further, but you might want to look into a multilevel structural equation model with random effects for town, although if TOWN is your main exposure this probably wouldn't be the way to go, but some kind of SEM could be worth looking at. I would suggest asking a new question about how to proceed further.
|
DAG: no back-door paths but background information shows a need for adjusting
|
What is the difference between disjunctive cause criterion and Pearl's single door criterion?
The Single Door Criterion establishes conditions under which a causal path between two variables, say $X
|
DAG: no back-door paths but background information shows a need for adjusting
What is the difference between disjunctive cause criterion and Pearl's single door criterion?
The Single Door Criterion establishes conditions under which a causal path between two variables, say $X \rightarrow Y$, will be consistently estimated by the regression coefficient for $X$ in a multivariable regression model for the response $Y$. Briefly, it stipulates that, for a set of variables containing various paths between them and being acyclic (ie, it's a DAG), a subset of these variables, $Z$, will be sufficient provided that
$Z$ contains no descendant of $Y$, and
by removing the arrow in $X \rightarrow Y$, $X$ is then independent of $Y$ given $Z$
This leads to the familiar "rules" that we should condition on confounders (ie backdoor adjustment), but not mediators.
It also leads to "front door adjustment", where we are able to estimate the causal effect of $X$ on $Y$ in $X \rightarrow M \rightarrow Y$ even in the presence of unmeasured confounding.
The Disjunctive Cause Criterion (VanderWeele, 2019) is actually very similar to backdoor adjustment, but tries to avoid having to explicitly identify confounders, and instead seeks to adjust for variables that are causes of either the main exposure or the outcome (or indeed both), but excluding instrumental variables. However, I say "tries to", because there is still a need to include confounders:
"controlling for each covariate that is a cause of the exposure, or of the outcome, or of both; excluding from this set any variable known to be an instrumental variable; and including as a covariate any proxy for an unmeasured variable that is a common cause of both the exposure and the outcome"
VanderWeele TJ. Principles of confounder selection. Eur J Epidemiol. 2019. https://doi.org/10.1007/s10654-019-00494-6.
The problem with this approach is two-fold. First it can lead to "over-adjustment", that is, unlike Pearl's theory, it can not, except by accident, lead to a "minimally sufficient" set of covariates, so in general it will not result in a parsimonious model and could suffer from problems due to high correlations between covariates. Second, it can lead to the inclusion of mediators, which VanderWeele acknowledges would be a problem.
And if I have the aforementioned background knowledge, is it reasonable to adjust for all available six covariates?
No, I don't think this is appropriate. All 6 observed variables appear to be mediators of the causal effect of TOWN on INCOME and should not be adjusted for. This is precisely an example of the second problem with this technique, mentioned in my last paragraph. See this answer for details and examples of what can go wrong if you do adjust for mediators:
How do DAGs help to reduce bias in causal inference?
Without further details of your research question, study, and data it is difficult to advise further, but you might want to look into a multilevel structural equation model with random effects for town, although if TOWN is your main exposure this probably wouldn't be the way to go, but some kind of SEM could be worth looking at. I would suggest asking a new question about how to proceed further.
|
DAG: no back-door paths but background information shows a need for adjusting
What is the difference between disjunctive cause criterion and Pearl's single door criterion?
The Single Door Criterion establishes conditions under which a causal path between two variables, say $X
|
42,645
|
How to make sure that the random sample is representative for the whole sample?
|
So long as you have no wish to incorporate covariate information into your sampling scheme (e.g., balancing tweets from males/females), the usual method is to take a simple random sample without replacement. This can be implemented in R using the sample.int function. In the code below I show you how to generate a simple random sample from $N$ population values. For convenience, the sample is sorted into ascending order, so it is a list of numbers of the tweets to include in the sample. (Remember to set your seed for reproducible randomisation.)
#Generate simple random sample of tweets
set.seed(1)
N <- 14000
p <- 0.2
n <- ceiling(p*N)
SAMPLE <- sort(sample.int(N, size = n, replace = FALSE))
#Show the sample
SAMPLE
[1] 8 13 17 18 21 25 27 42 59 64 ...
[24] 126 128 129 149 152 155 157 172 173 179 ...
[47] 237 241 244 262 267 274 277 289 308 311 ...
...
...
...
[2761] 13775 13777 13779 13780 13784 13785 13787 13788 13796 13798 ...
[2784] 13879 13880 13886 13896 13908 13918 13923 13927 13942 13944 ...
If you are looking for a randomiser that gives a "representative" sample with respect to some variables of interest (e.g., men and women, etc.) then you can use block randomisation instead of simple-random-sampling. Block randomisation allows you to ensure that known variables in your data are distributed in a representative fashion across your sample. It is a bit more complicated than the above coding but it can also be implemented in a reproducible way using scripted coding.
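As a rough sketch of the idea (in Python rather than R, with made-up strata labels), block randomisation amounts to sampling a fixed fraction within each stratum:

```python
# Stratified (block) sampling: sample `fraction` of the indices within
# each stratum so that known groups stay proportionally represented.
import math
import random

def stratified_sample(labels, fraction, seed=1):
    rng = random.Random(seed)
    by_stratum = {}
    for i, lab in enumerate(labels):
        by_stratum.setdefault(lab, []).append(i)
    chosen = []
    for idx in by_stratum.values():
        chosen += rng.sample(idx, math.ceil(fraction * len(idx)))
    return sorted(chosen)

# hypothetical example: 9000 tweets from group "M", 5000 from group "F"
labels = ["M"] * 9000 + ["F"] * 5000
picked = stratified_sample(labels, 0.2)
print(len(picked))  # 1800 + 1000 = 2800
```

Each stratum contributes exactly its 20% share, which a simple random sample would only achieve on average.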
You should note that with any sampling method, it is possible to make post hoc checks of the distributions of known variables in the sampled and non-sampled parts. However, rejection of a random sample based on post-hoc analysis is highly discouraged and can lead to serious problems in your analysis.
|
How to make sure that the random sample is representative for the whole sample?
|
So long as you have no wish to incorporate covariate information into your sampling scheme (e.g., balancing tweets from males/females), the usual method is to take a simple random sample without repla
|
How to make sure that the random sample is representative for the whole sample?
So long as you have no wish to incorporate covariate information into your sampling scheme (e.g., balancing tweets from males/females), the usual method is to take a simple random sample without replacement. This can be implemented in R using the sample.int function. In the code below I show you how to generate a simple random sample from $N$ population values. For convenience, the sample is sorted into ascending order, so it is a list of numbers of the tweets to include in the sample. (Remember to set your seed for reproducible randomisation.)
#Generate simple random sample of tweets
set.seed(1)
N <- 14000
p <- 0.2
n <- ceiling(p*N)
SAMPLE <- sort(sample.int(N, size = n, replace = FALSE))
#Show the sample
SAMPLE
[1] 8 13 17 18 21 25 27 42 59 64 ...
[24] 126 128 129 149 152 155 157 172 173 179 ...
[47] 237 241 244 262 267 274 277 289 308 311 ...
...
...
...
[2761] 13775 13777 13779 13780 13784 13785 13787 13788 13796 13798 ...
[2784] 13879 13880 13886 13896 13908 13918 13923 13927 13942 13944 ...
If you are looking for a randomiser that gives a "representative" sample with respect to some variables of interest (e.g., men and women, etc.) then you can use block randomisation instead of simple-random-sampling. Block randomisation allows you to ensure that known variables in your data are distributed in a representative fashion across your sample. It is a bit more complicated than the above coding but it can also be implemented in a reproducible way using scripted coding.
You should note that with any sampling method, it is possible to make post hoc checks of the distributions of known variables in the sampled and non-sampled parts. However, rejection of a random sample based on post-hoc analysis is highly discouraged and can lead to serious problems in your analysis.
|
How to make sure that the random sample is representative for the whole sample?
So long as you have no wish to incorporate covariate information into your sampling scheme (e.g., balancing tweets from males/females), the usual method is to take a simple random sample without repla
|
42,646
|
How to make sure that the random sample is representative for the whole sample?
|
Here I prefer the technique of Systematic Sampling where one selects every kth individual from the population. Thus, from a list of n arrived tweets, every kth tweet is chosen to construct a sample set of 's' tweets, such that k*s is close to n.
Advantages:
Simple statistically valid procedure
Accurate
Easier to implement and verify the correct tweets have been selected
Unbiased and representative, even more likely so than a Simple Random Sampling scheme in the current context, as this also sorts by time of arrival, where the latter criterion is likely material, as it spreads the sample over the day. As such, it can, for example, likely isolate workers, largely inactive during the 9 AM to 5 PM work day, versus non-workers including students active 3 PM - 8 PM (after school), and older adults active later in the evening.
Thus, the application of simple, easy to implement, unbiased and representative Systematic Sampling here likely also results in a spread of the sample over important age demographics and income classes.
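A minimal sketch of the procedure (in Python; the population size and k are illustrative):

```python
# Systematic sampling: pick every k-th tweet from the time-ordered
# list, starting from a random offset in [0, k).
import random

def systematic_sample(n, k, seed=1):
    start = random.Random(seed).randrange(k)
    return list(range(start, n, k))

sample = systematic_sample(n=14000, k=5)  # ~20% sample
print(len(sample), sample[:5])
```

The random start keeps the scheme unbiased, while the fixed stride spreads the sample evenly across the arrival order.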
Note: How one arrives at the best sample size 's' is an important topic, best discussed separately.
[EDIT] An important point that is duly noted by this educational reference, to quote:
You don't have a complete list, so simple random sampling doesn't apply...
So, technically the employment of a simple random sampling scheme, to assess characteristics of the parent population, is valid when one has a complete list of the population over which to subsample. This is NOT the case with a continuously occurring series of generated tweets constituting a subset of the tweeting universe. So, inferences on the parent population and, in particular, the very question as to whether it is representative of the 'whole sample', implying the parent population, only arguably can be answered here by a simple random sampling scheme. However, the same source does affirm the validity of systematic sampling in such a context, to quote:
Since we don't have access to the complete list, just stand at a corner and pick every 10th* person walking by.
*Of course, choosing 10 here is just an example. It would depend on the number of students typically passing by that spot and what sample size was needed.
|
How to make sure that the random sample is representative for the whole sample?
|
Here I prefer the technique of Systematic Sampling where one selects every kth individual from the population. Thus, from a list of n arrived tweets, every kth tweet is chosen to construct a sample se
|
How to make sure that the random sample is representative for the whole sample?
Here I prefer the technique of Systematic Sampling where one selects every kth individual from the population. Thus, from a list of n arrived tweets, every kth tweet is chosen to construct a sample set of 's' tweets, such that k*s is close to n.
Advantages:
Simple statistically valid procedure
Accurate
Easier to implement and verify the correct tweets have been selected
Unbiased and representative, even more likely so than a Simple Random Sampling scheme in the current context, as this also sorts by time of arrival, where the latter criterion is likely material, as it spreads the sample over the day. As such, it can, for example, likely isolate workers, largely inactive during the 9 AM to 5 PM work day, versus non-workers including students active 3 PM - 8 PM (after school), and older adults active later in the evening.
Thus, the application of simple, easy to implement, unbiased and representative Systematic Sampling here likely also results in a spread of the sample over important age demographics and income classes.
Note: How one arrives at the best sample size 's' is an important topic, best discussed separately.
[EDIT] An important point that is duly noted by this educational reference, to quote:
You don't have a complete list, so simple random sampling doesn't apply...
So, technically the employment of a simple random sampling scheme, to assess characteristics of the parent population, is valid when one has a complete list of the population over which to subsample. This is NOT the case with a continuously occurring series of generated tweets constituting a subset of the tweeting universe. So, inferences on the parent population and, in particular, the very question as to whether it is representative of the 'whole sample', implying the parent population, only arguably can be answered here by a simple random sampling scheme. However, the same source does affirm the validity of systematic sampling in such a context, to quote:
Since we don't have access to the complete list, just stand at a corner and pick every 10th* person walking by.
*Of course, choosing 10 here is just an example. It would depend on the number of students typically passing by that spot and what sample size was needed.
|
How to make sure that the random sample is representative for the whole sample?
Here I prefer the technique of Systematic Sampling where one selects every kth individual from the population. Thus, from a list of n arrived tweets, every kth tweet is chosen to construct a sample se
|
42,647
|
How to make sure that the random sample is representative for the whole sample?
|
What you want is a sample that is representative in terms of the topics you are going to manually code.
First of all, you want to be sure that your coding procedure is not biased. This is really important because a representative sample is useless if your coding procedure is biased. Thus you need at least two independent coders to code the tweets (usually just a part of the tweets you are going to code), and a test to evaluate the agreement between the coding results of the independent coders (such as Krippendorff’s alpha coefficient).
Having said that, in your case the universe is composed of 14,000 tweets and a random sample would by definition avoid systematic biases in the selection of tweets. However, you might consider a more systematic sampling to be sure that every day of the week and every hour of the day is properly represented. For instance, you could sample a certain number of tweets per hour, for every hour of the day, for all the days in your dataset. In media studies there is also a procedure consisting of creating a 'constructed week', where the data for each day are sampled for the same day across many weeks. With regards to tweets, this method has been compared to simple random sampling, finding that the latter performs better.
In general, you can find a lot of examples in the literature based on media data and also Twitter data. If you want to be really sure of the appropriateness of your sampling strategy, you might consider a sort of cross-validation approach. Instead of picking just one sample, you pick two samples. Without forgetting to code the tweets with independent coders and verify the validity of the coding, you first code one sample and then the other, and finally compare the proportions of codes in the two samples. You could also use a statistical test to be sure that the code proportions in the samples do not differ too much. However, such a detailed approach could be unusual. You should take into account the best practice in your field.
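For comparing a code's prevalence across the two samples, a two-proportion z-test is one option. A sketch (in Python, normal approximation, with made-up counts):

```python
# Two-sided p-value for H0: the code's true proportion is equal
# in the two samples (pooled two-proportion z-test).
import math

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * 0.5 * math.erfc(abs(z) / math.sqrt(2))

# hypothetical: a code appears 120/700 times in sample A, 105/700 in B
pval = two_prop_z(120, 700, 105, 700)
print(pval)
```

A large p-value here would be consistent with the two samples agreeing on that code's proportion, though it does not prove equivalence.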
You might also want to try some supervised classification methods that seem to work fine even with a limited quantity of manually coded data.
|
How to make sure that the random sample is representative for the whole sample?
|
What you want is a sample that is representative in terms of the topics you are going to manually code.
First of all, you want to be sure that your coding procedure is not biased. This is really impor
|
How to make sure that the random sample is representative for the whole sample?
What you want is a sample that is representative in terms of the topics you are going to manually code.
First of all, you want to be sure that your coding procedure is not biased. This is really important because a representative sample is useless if your coding procedure is biased. Thus you need at least two independent coders to code the tweets (usually just a part of the tweets you are going to code), and a test to evaluate the agreement between the coding results of the independent coders (such as Krippendorff’s alpha coefficient).
Having said that, in your case the universe is composed of 14,000 tweets and a random sample would by definition avoid systematic biases in the selection of tweets. However, you might consider a more systematic sampling to be sure that every day of the week and every hour of the day is properly represented. For instance, you could sample a certain number of tweets per hour, for every hour of the day, for all the days in your dataset. In media studies there is also a procedure consisting of creating a 'constructed week', where the data for each day are sampled for the same day across many weeks. With regards to tweets, this method has been compared to simple random sampling, finding that the latter performs better.
In general, you can find a lot of examples in the literature based on media data and also Twitter data. If you want to be really sure of the appropriateness of your sampling strategy, you might consider a sort of cross-validation approach. Instead of picking just one sample, you pick two samples. Without forgetting to code the tweets with independent coders and verify the validity of the coding, you first code one sample and then the other, and finally compare the proportions of codes in the two samples. You could also use a statistical test to be sure that the code proportions in the samples do not differ too much. However, such a detailed approach could be unusual. You should take into account the best practice in your field.
You might also want to try some supervised classification methods that seem to work fine even with a limited quantity of manually coded data.
|
How to make sure that the random sample is representative for the whole sample?
What you want is a sample that is representative in terms of the topics you are going to manually code.
First of all, you want to be sure that your coding procedure is not biased. This is really impor
|
42,648
|
Interpretation of mutual information
|
The mutual information measure $I(X;Y)$ is a nonparametric measure of probabilistic dependence between the variables $X$ and $Y$. As stated on Wikipedia:
"Intuitively, mutual information measures the information that $X$ and $Y$ share: It measures how much knowing one of these variables reduces uncertainty about the other. For example, if $X$ and $Y$ are independent, then knowing $X$ does not give any information about $Y$ and vice versa, so their mutual information is zero."
In general, $I(X;Y)$ is computed for $m \times 2$ grid-histograms. You can 'bin' continuously distributed variables into $m$ intervals so as to create this grid.
When it comes to the degree of covariation between a feature value distribution and a class outcome distribution, the information gain $IG(T,a)$ is widely used. Here $T$ is the variable associated with class outcomes and $a$ the attribute value. I refer you to the definition of criteria optimized by learning algorithm ID3 (its modern successor algorithm is called C4.5). $IG(T,a)$ is different to $I(T;A)$.
$I(X;Y)$ is also defined for continuous probability density functions, but you need to know the mathematical formula for the bivariate probability density in order to calculate it. Hence, histograms are practical for continuous stochastic variables $X$ and $Y$.
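To make the histogram approach concrete, here is a minimal Python sketch (the data and bin count are purely illustrative) that estimates $I(X;Y)$ in nats from a binned joint histogram of two continuous variables:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Estimate I(X;Y) in nats from a binned 2-D histogram of the samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()              # joint cell probabilities
    px = pxy.sum(axis=1, keepdims=True)    # marginal of X
    py = pxy.sum(axis=0, keepdims=True)    # marginal of Y
    nz = pxy > 0                           # skip empty cells (avoid log 0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
print(mutual_information(x, x + rng.normal(size=10_000)))  # dependent pair: clearly positive
print(mutual_information(x, rng.normal(size=10_000)))      # independent pair: near zero
```

Note that the binned estimate of an independent pair is slightly above zero due to finite-sample bias, which shrinks as the sample grows.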
|
Interpretation of mutual information
|
The mutual information measure $I(X;Y)$ is nonparametric measure of probabilistic dependence between the variables $X$ and $Y$. As follows from wikipedia:
"Intuitively, mutual information measures the
|
Interpretation of mutual information
The mutual information measure $I(X;Y)$ is a nonparametric measure of probabilistic dependence between the variables $X$ and $Y$. As stated on Wikipedia:
"Intuitively, mutual information measures the information that $X$ and $Y$ share: It measures how much knowing one of these variables reduces uncertainty about the other. For example, if $X$ and $Y$ are independent, then knowing $X$ does not give any information about $Y$ and vice versa, so their mutual information is zero."
In general, $I(X;Y)$ is computed for $m \times 2$ grid-histograms. You can 'bin' continuously distributed variables into $m$ intervals so as to create this grid.
When it comes to the degree of covariation between a feature value distribution and a class outcome distribution, the information gain $IG(T,a)$ is widely used. Here $T$ is the variable associated with class outcomes and $a$ the attribute value. I refer you to the definition of criteria optimized by learning algorithm ID3 (its modern successor algorithm is called C4.5). $IG(T,a)$ is different to $I(T;A)$.
$I(X;Y)$ is also defined for continuous probability density functions, but you need to know the mathematical formula for the bivariate probability density in order to calculate it. Hence, histograms are practical for continuous stochastic variables $X$ and $Y$.
|
Interpretation of mutual information
The mutual information measure $I(X;Y)$ is nonparametric measure of probabilistic dependence between the variables $X$ and $Y$. As follows from wikipedia:
"Intuitively, mutual information measures the
|
42,649
|
Interpretation of mutual information
|
From a Kaggle webpage:
The least possible mutual information between quantities is 0.0. When
MI is zero, the quantities are independent: neither can tell you
anything about the other. Conversely, in theory there's no upper bound
to what MI can be. In practice though values above 2.0 or so are
uncommon. (Mutual information is a logarithmic quantity, so it
increases very slowly.)
So, the answer to your question is yes when MI is "high".
However, when MI is "low", it does not necessarily mean there is no possibility of discrimination at all. The same page mentions the role of fuel type in a vehicle's price: although the fuel type's MI is low, it separates two classes of prices.
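To illustrate that last point with made-up numbers (not Kaggle's actual data): a rare binary feature that perfectly separates two price classes still has "low" MI, because MI is capped by the entropy of the feature itself.

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up data: 5% of vehicles are diesel, and diesel membership fully
# determines which of two price bands a vehicle falls into.
fuel = (rng.random(20_000) < 0.05).astype(int)  # 1 = diesel
price_band = fuel.copy()                        # perfect separation

def mi_binary(a, b):
    """I(A;B) in nats for two 0/1 arrays, from the 2x2 joint table."""
    pxy = np.array([[np.mean((a == i) & (b == j)) for j in (0, 1)]
                    for i in (0, 1)])
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# MI is "low" (about 0.2 nats, far below 2.0) even though the feature
# separates the two price classes perfectly.
print(mi_binary(fuel, price_band))
```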
|
Interpretation of mutual information
|
From a Kaggle webpage:
The least possible mutual information between quantities is 0.0. When
MI is zero, the quantities are independent: neither can tell you
anything about the other. Conversely, in
|
Interpretation of mutual information
From a Kaggle webpage:
The least possible mutual information between quantities is 0.0. When
MI is zero, the quantities are independent: neither can tell you
anything about the other. Conversely, in theory there's no upper bound
to what MI can be. In practice though values above 2.0 or so are
uncommon. (Mutual information is a logarithmic quantity, so it
increases very slowly.)
So, the answer to your question is yes when MI is "high".
However, when MI is "low", it does not necessarily mean there is no possibility of discrimination at all. The same page mentions the role of fuel type in a vehicle's price: although the fuel type's MI is low, it separates two classes of prices.
|
Interpretation of mutual information
From a Kaggle webpage:
The least possible mutual information between quantities is 0.0. When
MI is zero, the quantities are independent: neither can tell you
anything about the other. Conversely, in
|
42,650
|
How is pairwise PERMANOVA/adonis a valid non-parametric approach for pairwise comparisons
|
Am I incorrect in saying that this is equivalent to a simple series of pairwise anovas with p-values calculated according to the observed F statistics probability under the empirical null distribution that was generated through random permutations of group membership (or "location" membership in this case).
No, you are not incorrect.
If so, how can this be a valid non-parametric approach to pairwise comparisons?
It depends on what you take "non-parametric" to mean.
If you take that to be synonymous with classical rank-based tests, then no, PERMANOVA is not non-parametric.
If you take that term to be something broader, where we relax (to some extent) the distributional assumptions, then PERMANOVA is non-parametric. The distributional assumptions are relaxed because we do not use a parametric distribution to generate the null distribution of the test statistic. Instead we use a permutation test to generate the null distribution of the test statistic.
PERMANOVA used to be called NP-MANOVA and I think the new name of PERMANOVA helps clarify some of your concern. The PER bit stands for permutation and reflects the use of permutations to avoid the stricter distributional assumptions of classical ANOVA or MANOVA.
Regardless, the method is a valid method (and the pairwise part is irrelevant as we can use a pairwise-based test statistic or the omnibus test statistic for the overall model) given a set of assumptions, as with any test.
The important bit is that we can compute any reasonable test statistic for the permutation test and it just so happens that it is easy and useful to compute the F statistic in adonis(). Computing that statistic is just an exercise in math (or computation); we just do the math on the input data. This is all valid at this point.
Where we have issues is if we want to assign a p-value to the result. We could use standard parametric theory here but we'd typically get the wrong answer and biased p-values if we did. Instead we try to resolve that by using a permutation test. The computation of the test statistic itself is not invalidated by the assumptions of ANOVA/MANOVA as those assumptions apply to the theory used to justify the use of the null or reference distribution for construction of the p-value, and not to computation of the statistic itself.
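As a rough illustration of that last point (using a plain univariate one-way F statistic rather than the distance-based statistic adonis() computes; all names here are hypothetical), the F statistic is just math on the data, and the permutation null is obtained by shuffling group labels:

```python
import numpy as np

def f_statistic(values, groups):
    """One-way ANOVA F statistic: between-group MS over within-group MS."""
    grand = values.mean()
    levels = np.unique(groups)
    ss_b = sum(len(values[groups == g]) * (values[groups == g].mean() - grand) ** 2
               for g in levels)
    ss_w = sum(((values[groups == g] - values[groups == g].mean()) ** 2).sum()
               for g in levels)
    return (ss_b / (len(levels) - 1)) / (ss_w / (len(values) - len(levels)))

def permutation_p_value(values, groups, n_perm=999, seed=0):
    """P-value from the permutation null: shuffle labels, recompute F."""
    rng = np.random.default_rng(seed)
    observed = f_statistic(values, groups)
    null = [f_statistic(values, rng.permutation(groups)) for _ in range(n_perm)]
    # proportion of permuted F values at least as large as the observed one
    return (1 + sum(f >= observed for f in null)) / (n_perm + 1)

rng = np.random.default_rng(2)
values = np.concatenate([rng.normal(0, 1, 30), rng.normal(3, 1, 30)])
groups = np.repeat(["a", "b"], 30)
print(permutation_p_value(values, groups))  # clearly separated groups: small p
```

The key point is that no parametric reference distribution for F is used anywhere; the null distribution comes entirely from the permutations.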
|
How is pairwise PERMANOVA/adonis a valid non-parametric approach for pairwise comparisons
|
Am I incorrect in saying that this is equivalent to a simple series of pairwise anovas with p-values calculated according to the observed F statistics probability under the empirical null distribution
|
How is pairwise PERMANOVA/adonis a valid non-parametric approach for pairwise comparisons
Am I incorrect in saying that this is equivalent to a simple series of pairwise anovas with p-values calculated according to the observed F statistics probability under the empirical null distribution that was generated through random permutations of group membership (or "location" membership in this case).
No, you are not incorrect.
If so, how can this be a valid non-parametric approach to pairwise comparisons?
It depends on what you take "non-parametric" to mean.
If you take that to be synonymous with classical rank-based tests, then no, PERMANOVA is not non-parametric.
If you take that term to be something broader, where we relax (to some extent) the distributional assumptions, then PERMANOVA is non-parametric. The distributional assumptions are relaxed because we do not use a parametric distribution to generate the null distribution of the test statistic. Instead we use a permutation test to generate the null distribution of the test statistic.
PERMANOVA used to be called NP-MANOVA and I think the new name of PERMANOVA helps clarify some of your concern. The PER bit stands for permutation and reflects the use of permutations to avoid the stricter distributional assumptions of classical ANOVA or MANOVA.
Regardless, the method is a valid method (and the pairwise part is irrelevant as we can use a pairwise-based test statistic or the omnibus test statistic for the overall model) given a set of assumptions, as with any test.
The important bit is that we can compute any reasonable test statistic for the permutation test and it just so happens that it is easy and useful to compute the F statistic in adonis(). Computing that statistic is just an exercise in math (or computation); we just do the math on the input data. This is all valid at this point.
Where we have issues is if we want to assign a p-value to the result. We could use standard parametric theory here but we'd typically get the wrong answer and biased p-values if we did. Instead we try to resolve that by using a permutation test. The computation of the test statistic itself is not invalidated by the assumptions of ANOVA/MANOVA as those assumptions apply to the theory used to justify the use of the null or reference distribution for construction of the p-value, and not to computation of the statistic itself.
|
How is pairwise PERMANOVA/adonis a valid non-parametric approach for pairwise comparisons
Am I incorrect in saying that this is equivalent to a simple series of pairwise anovas with p-values calculated according to the observed F statistics probability under the empirical null distribution
|
42,651
|
Intuition for why the (log) partition function matters?
|
This is how Self-Normalized Importance Sampling (SNIS) works - you draw samples from a proposal distribution that is essentially a guess about where
This shows how the lack of knowledge about $\log Z$ can be solved.
But it doesn't mean that lack of knowledge of $\log Z$ is not a problem.
In fact the SNIS method shows that not knowing $\log Z$ is a problem. It is a problem and we need to use a trick in order to solve it. If we knew $\log Z$ then our sampling method would perform better.
Example
See for instance in the example below where we have a beta distributed variable
$$f_X(x) \propto x^2 \quad \qquad \qquad \text{for $\quad 0 \leq x \leq 1$}$$
And we wish to estimate the expectation value for $log(X)$.
Because this is a simple example we know that $E_X[log(X)] = -1/3$ by calculating it analytically. But here we are going to use self-normalized importance sampling, sampling with another beta distribution $f_Y(y) \propto (1-y)^2$, to illustrate the difference.
In one case we compute it with an exact normalization factor. We can do this because we know $log(Z)$, as for a beta distribution it is not so difficult.
$$E_X[log(X)] \approx \frac{\sum_{\forall y_i} log(y_i) \frac{y_i^2}{(1-y_i)^2}}{n}$$
In the other case we compute it with self-normalization
$$E_X[log(X)] \approx \frac{\sum_{\forall y_i} log(y_i) \frac{y_i^2}{(1-y_i)^2}}{\sum_{\forall y_i} \frac{y_i^2}{(1-y_i)^2}}$$
So the difference is whether this factor in the denominator is a constant based on the partition function $\log(Z)$ (or actually ratio of partition functions for X and Y), or a random variable $\sum_{\forall y_i} {y_i^2}/{(1-y_i)^2}$.
Intuitively you may guess that the latter will increase the bias and variance of the estimate.
The image below gives the histograms for estimates with samples of size 100.
ns <- 100   # sample size per estimate
nt <- 10^3  # number of replicate estimates
mt <- rep(0, nt)
zt <- rep(0, nt)
for (i in 1:nt) {
  y <- rbeta(ns, 1, 3)            # draws from the proposal f_Y(y) ∝ (1-y)^2
  t <- log(y) * y^2 / (1 - y)^2   # log(y) times the importance weight
  z <- y^2 / (1 - y)^2            # importance weights w(y) = f_X(y)/f_Y(y)
  mt[i] <- mean(t)
  zt[i] <- mean(z)
}
h1 <- hist(mt, breaks = seq(-1, 0, 0.01), main = "using known partition function")
h2 <- hist(mt / zt, breaks = seq(-1, 0, 0.01), main = "using self-normalization")
|
Intuition for why the (log) partition function matters?
|
This is how Self-Normalized Importance Sampling (SNIS) works - you draw samples from a proposal distribution that is essentially guess about where
This shows how the lack of knowledge about $\log Z$
|
Intuition for why the (log) partition function matters?
This is how Self-Normalized Importance Sampling (SNIS) works - you draw samples from a proposal distribution that is essentially a guess about where
This shows how the lack of knowledge about $\log Z$ can be solved.
But it doesn't mean that lack of knowledge of $\log Z$ is not a problem.
In fact the SNIS method shows that not knowing $\log Z$ is a problem. It is a problem and we need to use a trick in order to solve it. If we knew $\log Z$ then our sampling method would perform better.
Example
See for instance in the example below where we have a beta distributed variable
$$f_X(x) \propto x^2 \quad \qquad \qquad \text{for $\quad 0 \leq x \leq 1$}$$
And we wish to estimate the expectation value for $log(X)$.
Because this is a simple example we know that $E_X[log(X)] = -1/3$ by calculating it analytically. But here we are going to use self-normalized importance sampling, sampling with another beta distribution $f_Y(y) \propto (1-y)^2$, to illustrate the difference.
In one case we compute it with an exact normalization factor. We can do this because we know $log(Z)$, as for a beta distribution it is not so difficult.
$$E_X[log(X)] \approx \frac{\sum_{\forall y_i} log(y_i) \frac{y_i^2}{(1-y_i)^2}}{n}$$
In the other case we compute it with self-normalization
$$E_X[log(X)] \approx \frac{\sum_{\forall y_i} log(y_i) \frac{y_i^2}{(1-y_i)^2}}{\sum_{\forall y_i} \frac{y_i^2}{(1-y_i)^2}}$$
So the difference is whether this factor in the denominator is a constant based on the partition function $\log(Z)$ (or actually ratio of partition functions for X and Y), or a random variable $\sum_{\forall y_i} {y_i^2}/{(1-y_i)^2}$.
Intuitively you may guess that the latter will increase the bias and variance of the estimate.
The image below gives the histograms for estimates with samples of size 100.
ns <- 100   # sample size per estimate
nt <- 10^3  # number of replicate estimates
mt <- rep(0, nt)
zt <- rep(0, nt)
for (i in 1:nt) {
  y <- rbeta(ns, 1, 3)            # draws from the proposal f_Y(y) ∝ (1-y)^2
  t <- log(y) * y^2 / (1 - y)^2   # log(y) times the importance weight
  z <- y^2 / (1 - y)^2            # importance weights w(y) = f_X(y)/f_Y(y)
  mt[i] <- mean(t)
  zt[i] <- mean(z)
}
h1 <- hist(mt, breaks = seq(-1, 0, 0.01), main = "using known partition function")
h2 <- hist(mt / zt, breaks = seq(-1, 0, 0.01), main = "using self-normalization")
|
Intuition for why the (log) partition function matters?
This is how Self-Normalized Importance Sampling (SNIS) works - you draw samples from a proposal distribution that is essentially guess about where
This shows how the lack of knowledge about $\log Z$
|
42,652
|
Intuition for why the (log) partition function matters?
|
As a precursor: It is worth thinking about how these problems arise in statistical practice. Optimising over $x$ is rare - usually, $x$ has already been observed. It is more common to be optimising over $\boldsymbol{\theta}$, given an observation $x$, e.g. to find the maximum likelihood estimator of $\theta$, one would solve
$$\max_\boldsymbol{\theta} \left\{ \log p(\mathbf{x};\boldsymbol{\theta}) = \boldsymbol{\phi}(\mathbf{x})^\top\boldsymbol{\theta} - \log Z(\boldsymbol{\theta}) \right\}.$$
If one is aiming to optimise this function, it is clear that one needs some sort of control on $Z(\boldsymbol{\theta})$, and/or its derivatives.
To address your specific comments:
Consider this thought experiment: imagine you are given an oracle who
computes $Z(\boldsymbol{\theta})$ efficiently. What can you now do
that you could not do before? [...] can you now compute expected values more easily?
Indeed you can. If you have oracle access to $Z(\boldsymbol{\theta})$, then you can also estimate its gradient by finite differencing. This lets you compute the specific expectation
$$\nabla_\boldsymbol{\theta} \log Z(\boldsymbol{\theta}) = \mathbb{E}\left[\boldsymbol{\phi}(\mathbf{x})\right]\equiv\boldsymbol{\mu}.$$
It does not allow you to compute arbitrary expectations (unless you change to thinking about a different exponential family), but one is typically not looking for arbitrary expectations.
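As a concrete toy illustration of this identity (an example of my own choosing, not from the question): take the Bernoulli exponential family $p(x;\theta) \propto e^{\theta x}$ on $x \in \{0,1\}$, where $Z(\theta) = 1 + e^{\theta}$ is known in closed form. Finite-differencing $\log Z$ recovers $\mathbb{E}[\phi(x)] = \mathbb{E}[x]$:

```python
import math

# Toy exponential family: p(x; theta) ∝ exp(theta * x) on x ∈ {0, 1},
# so the partition function is Z(theta) = 1 + exp(theta).
def log_Z(theta):
    return math.log(1.0 + math.exp(theta))

def expected_phi(theta):
    # E[x] under p(x; theta): the logistic sigmoid of theta.
    return math.exp(theta) / (1.0 + math.exp(theta))

theta, h = 0.7, 1e-6
# Oracle access to Z lets us finite-difference log Z(theta) ...
grad_fd = (log_Z(theta + h) - log_Z(theta - h)) / (2 * h)
# ... which recovers the expectation E[phi(x)] = mu.
print(grad_fd, expected_phi(theta))
```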
Personally, I would rather have an oracle that tells me which regions
of $\mathbf{x}-$space to look in -- solve the search problem for me.
What would this mean? This seems very close to being able to sample from $p(\mathbf{x};\boldsymbol{\theta})$, which is of similar difficulty to computing $Z(\boldsymbol{\theta})$. I agree that this would be a useful oracle, but it is not an easier one.
This is how Self-Normalized Importance Sampling (SNIS) works - you
draw samples from a proposal distribution that is essentially guess
about where $\mathbf{x}$ has non-negligible mass, then plug in an
estimate of $Z(\boldsymbol{\theta})$ based on those samples.
The hard problem in SNIS is constructing a good proposal distribution
$q$, then you get $Z(\boldsymbol{\theta})$ "for free."
Yes. For many problems of interest, constructing a good $q$ is very difficult, and is usually more difficult than computing $Z(\boldsymbol{\theta})$.
One way to find the relevant regions of $\mathbf{x}$ would be to find
the mode(s) of $p$. [...] But the difficulty of this depends on
$\boldsymbol{\phi}$; the partition function is not involved.
The extent to which this is useful will depend on the problem at hand. For calculation of expectations, in high-dimensional problems of interest, modes are not as useful as one might think, unless $p$ is very well-concentrated. The difficulty is in integration over the (many) possible states.
To summarize, I see inference as having two core problems: (a) a
search problem for the relevant region of $\mathbf{x}$ (high-probability regions, modes, etc.), and (b) a normalization
problem of computing (log) $Z(\boldsymbol{\theta})$. I am puzzled why
the latter (b) receives so much attention, especially since solving
(a) can give (b) for free, but not the other way around as far as I
can tell. So, what is the intuition behind the emphasis on the log
partition function?
To recapitulate: (a) does not give (b) for free, nor does (b) give (a) for free.
(a) is a problem of optimisation over $x$, which does not depend (as much) on the value of $\boldsymbol{\theta}$.
(b) is a problem of integration over $x$, which depends intimately on the value of $\boldsymbol{\theta}$.
As stated at the top of this post: statistically, you are usually interested in inference over $\theta$, and $x$ is given already. It is thus more common to be in a situation where (b) is relevant.
|
Intuition for why the (log) partition function matters?
|
As a precursor: It is worth thinking about how these problems arise in statistical practice. Optimising over $x$ is rare - usually, $x$ has already been observed. It is more common to be optimising ov
|
Intuition for why the (log) partition function matters?
As a precursor: It is worth thinking about how these problems arise in statistical practice. Optimising over $x$ is rare - usually, $x$ has already been observed. It is more common to be optimising over $\boldsymbol{\theta}$, given an observation $x$, e.g. to find the maximum likelihood estimator of $\theta$, one would solve
$$\max_\boldsymbol{\theta} \left\{ \log p(\mathbf{x};\boldsymbol{\theta}) = \boldsymbol{\phi}(\mathbf{x})^\top\boldsymbol{\theta} - \log Z(\boldsymbol{\theta}) \right\}.$$
If one is aiming to optimise this function, it is clear that one needs some sort of control on $Z(\boldsymbol{\theta})$, and/or its derivatives.
To address your specific comments:
Consider this thought experiment: imagine you are given an oracle who
computes $Z(\boldsymbol{\theta})$ efficiently. What can you now do
that you could not do before? [...] can you now compute expected values more easily?
Indeed you can. If you have oracle access to $Z(\boldsymbol{\theta})$, then you can also estimate its gradient by finite differencing. This lets you compute the specific expectation
$$\nabla_\boldsymbol{\theta} \log Z(\boldsymbol{\theta}) = \mathbb{E}\left[\boldsymbol{\phi}(\mathbf{x})\right]\equiv\boldsymbol{\mu}.$$
It does not allow you to compute arbitrary expectations (unless you change to thinking about a different exponential family), but one is typically not looking for arbitrary expectations.
Personally, I would rather have an oracle that tells me which regions
of $\mathbf{x}-$space to look in -- solve the search problem for me.
What would this mean? This seems very close to being able to sample from $p(\mathbf{x};\boldsymbol{\theta})$, which is of similar difficulty to computing $Z(\boldsymbol{\theta})$. I agree that this would be a useful oracle, but it is not an easier one.
This is how Self-Normalized Importance Sampling (SNIS) works - you
draw samples from a proposal distribution that is essentially guess
about where $\mathbf{x}$ has non-negligible mass, then plug in an
estimate of $Z(\boldsymbol{\theta})$ based on those samples.
The hard problem in SNIS is constructing a good proposal distribution
$q$, then you get $Z(\boldsymbol{\theta})$ "for free."
Yes. For many problems of interest, constructing a good $q$ is very difficult, and is usually more difficult than computing $Z(\boldsymbol{\theta})$.
One way to find the relevant regions of $\mathbf{x}$ would be to find
the mode(s) of $p$. [...] But the difficulty of this depends on
$\boldsymbol{\phi}$; the partition function is not involved.
The extent to which this is useful will depend on the problem at hand. For calculation of expectations, in high-dimensional problems of interest, modes are not as useful as one might think, unless $p$ is very well-concentrated. The difficulty is in integration over the (many) possible states.
To summarize, I see inference as having two core problems: (a) a
search problem for the relevant region of $\mathbf{x}$ (high-probability regions, modes, etc.), and (b) a normalization
problem of computing (log) $Z(\boldsymbol{\theta})$. I am puzzled why
the latter (b) receives so much attention, especially since solving
(a) can give (b) for free, but not the other way around as far as I
can tell. So, what is the intuition behind the emphasis on the log
partition function?
To recapitulate: (a) does not give (b) for free, nor does (b) give (a) for free.
(a) is a problem of optimisation over $x$, which does not depend (as much) on the value of $\boldsymbol{\theta}$.
(b) is a problem of integration over $x$, which depends intimately on the value of $\boldsymbol{\theta}$.
As stated at the top of this post: statistically, you are usually interested in inference over $\theta$, and $x$ is given already. It is thus more common to be in a situation where (b) is relevant.
|
Intuition for why the (log) partition function matters?
As a precursor: It is worth thinking about how these problems arise in statistical practice. Optimising over $x$ is rare - usually, $x$ has already been observed. It is more common to be optimising ov
|
42,653
|
Combining Bootstrap and Cross-Validation
|
The bootstrap is certainly one way of assessing internal validation of a model. Ewout W. Steyerberg in his book Clinical Prediction Models describes how the bootstrap can be used to estimate optimism corrected performance. The procedure is as follows:
Construct a model in the original sample; determine the apparent performance on the data from the sample used to construct the model.
Draw a bootstrap sample (Sample*) with replacement from the original sample
Construct a model (Model*) in Sample*, replaying every step that was done in the original sample, especially model specification steps such as selection of predictors. Determine the bootstrap performance as the apparent performance of Model* in Sample*;
Apply Model* to the original sample without any modification to determine the test performance;
Calculate the optimism (Bootstrap performance - test performance).
Repeat steps 2-5 many times, at least 200, to obtain a stable mean estimate of the optimism.
Subtract the mean optimism estimate from the apparent performance to obtain the optimism corrected performance.
In this scheme, the apparent performance is determined on the sample from which the model was derived. In machine learning, this is often referred to as training error. If you're working with popular tools like caret or sklearn, Frank Harrell writes that 10-fold cross-validation, repeated 100 times, is an excellent competitor to this procedure.
As for an interval estimate of the prediction error, the result of the above procedure provides an approximate sampling distribution for the optimism, and so you should be able to just subtract each of the bootstrap optimism results from the apparent performance, then estimate the interval by taking appropriate quantiles or by using bias-adjusted bootstrap confidence intervals. I would search the literature on this though, because although this sounds reasonable, I am not confident it is methodologically sound.
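For concreteness, here is a minimal NumPy sketch of the steps above, using ordinary least squares and $R^2$ as a stand-in model and performance measure (illustrative choices of mine, not part of the cited procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ [1.0, 2.0, -1.0, 0.5] + rng.normal(size=n)

def fit(Xf, yf):
    return np.linalg.lstsq(Xf, yf, rcond=None)[0]

def r2(beta, Xe, ye):
    resid = ye - Xe @ beta
    return 1.0 - (resid @ resid) / ((ye - ye.mean()) @ (ye - ye.mean()))

apparent = r2(fit(X, y), X, y)               # step 1: apparent performance

optimisms = []
for _ in range(200):                         # many bootstrap repetitions
    idx = rng.integers(0, n, n)              # step 2: sample with replacement
    beta = fit(X[idx], y[idx])               # step 3: Model* built in Sample*
    boot_perf = r2(beta, X[idx], y[idx])     #         bootstrap performance
    test_perf = r2(beta, X, y)               # step 4: Model* on original sample
    optimisms.append(boot_perf - test_perf)  # step 5: optimism

corrected = apparent - np.mean(optimisms)    # step 7: optimism-corrected performance
print(apparent, corrected)
```

In a real application, the model-fitting step must replay every specification choice (e.g. variable selection) inside the loop, as step 3 of the procedure stresses.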
|
Combining Bootstrap and Cross-Validation
|
The bootstrap is certainly one way of assessing internal validation of a model. Ewout W. Steyerberg in his book Clinical Prediction Models describes how the bootstrap can be used to estimate optimism
|
Combining Bootstrap and Cross-Validation
The bootstrap is certainly one way of assessing internal validation of a model. Ewout W. Steyerberg in his book Clinical Prediction Models describes how the bootstrap can be used to estimate optimism corrected performance. The procedure is as follows:
Construct a model in the original sample; determine the apparent performance on the data from the sample used to construct the model.
Draw a bootstrap sample (Sample*) with replacement from the original sample
Construct a model (Model*) in Sample*, replaying every step that was done in the original sample, especially model specification steps such as selection of predictors. Determine the bootstrap performance as the apparent performance of Model* in Sample*;
Apply Model* to the original sample without any modification to determine the test performance;
Calculate the optimism (Bootstrap performance - test performance).
Repeat steps 2-5 many times, at least 200, to obtain a stable mean estimate of the optimism.
Subtract the mean optimism estimate from the apparent performance to obtain the optimism corrected performance.
In this scheme, the apparent performance is determined on the sample from which the model was derived. In machine learning, this is often referred to as training error. If you're working with popular tools like caret or sklearn, Frank Harrell writes that 10-fold cross-validation, repeated 100 times, is an excellent competitor to this procedure.
As for an interval estimate of the prediction error, the result of the above procedure provides an approximate sampling distribution for the optimism, and so you should be able to just subtract each of the bootstrap optimism results from the apparent performance, then estimate the interval by taking appropriate quantiles or by using bias-adjusted bootstrap confidence intervals. I would search the literature on this though, because although this sounds reasonable, I am not confident it is methodologically sound.
|
Combining Bootstrap and Cross-Validation
The bootstrap is certainly one way of assessing internal validation of a model. Ewout W. Steyerberg in his book Clinical Prediction Models describes how the bootstrap can be used to estimate optimism
|
42,654
|
Intuition behind Partial Residual Plots
|
While it is mentioned in a number of regression texts, the plot you have mentioned here does not seem particularly useful to me. A far better alternative is the added variable plot, which correctly represents the relationship between an individual explanatory variable and the response variable conditional on other explanatory variables. For the explanatory variable $x_k$ the plot shows the following variables on the vertical and horizontal axes respectively:
$$\begin{matrix}
Y_{\bullet [k]} & & & \text{Residuals from regressing } Y \text{ against } \mathbf{x}_{-k}, \\[6pt]
X_{k \bullet [k]} & & & \text{Residuals from regressing } x_k \text{ against } \mathbf{x}_{-k}. \\[6pt]
\end{matrix}$$
This latter plot has several useful properties. The line-of-best-fit in the plot will match the estimated regression coefficient for that explanatory variable, and the residuals match the residuals of the overall regression. The plot isolates the relationship between $Y$ and $x_k$ conditional on the other explanatory variables. It allows you to easily diagnose the relationship between the explanatory variable and response, and thereby diagnose errors in the model assumptions (e.g., patterns that vary from the assumed model form).
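A small numerical sketch of the first property (simulated data, illustrative names only): the slope of the line of best fit through the added variable plot for $x_2$ reproduces the coefficient of $x_2$ from the full multiple regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)           # correlated explanatory variables
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(size=n)

def residuals(target, predictors):
    """Residuals from an OLS regression of target on the given predictors."""
    X = np.column_stack([np.ones(len(target))] + predictors)
    beta = np.linalg.lstsq(X, target, rcond=None)[0]
    return target - X @ beta

# Added variable plot coordinates for x2:
y_resid = residuals(y, [x1])    # vertical axis: y regressed on the others
x_resid = residuals(x2, [x1])   # horizontal axis: x2 regressed on the others

# Slope of the line of best fit through the plot equals the estimated
# coefficient of x2 in the full multiple regression.
slope = (x_resid @ y_resid) / (x_resid @ x_resid)
full_beta = np.linalg.lstsq(np.column_stack([np.ones(n), x1, x2]), y,
                            rcond=None)[0]
print(slope, full_beta[2])
```

This equality is exact (it is the Frisch-Waugh-Lovell theorem), not merely approximate.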
|
Intuition behind Partial Residual Plots
|
While it is mentioned in a number of regression texts, the plot you have mentioned here does not seem particularly useful to me. A far better alternative is the added variable plot, which correctly r
|
Intuition behind Partial Residual Plots
While it is mentioned in a number of regression texts, the plot you have mentioned here does not seem particularly useful to me. A far better alternative is the added variable plot, which correctly represents the relationship between an individual explanatory variable and the response variable conditional on other explanatory variables. For the explanatory variable $x_k$ the plot shows the following variables on the vertical and horizontal axes respectively:
$$\begin{matrix}
Y_{\bullet [k]} & & & \text{Residuals from regressing } Y \text{ against } \mathbf{x}_{-k}, \\[6pt]
X_{k \bullet [k]} & & & \text{Residuals from regressing } x_k \text{ against } \mathbf{x}_{-k}. \\[6pt]
\end{matrix}$$
This latter plot has several useful properties. The line-of-best-fit in the plot will match the estimated regression coefficient for that explanatory variable, and the residuals match the residuals of the overall regression. The plot isolates the relationship between $Y$ and $x_k$ conditional on the other explanatory variables. It allows you to easily diagnose the relationship between the explanatory variable and response, and thereby diagnose errors in the model assumptions (e.g., patterns that vary from the assumed model form).
|
Intuition behind Partial Residual Plots
While it is mentioned in a number of regression texts, the plot you have mentioned here does not seem particularly useful to me. A far better alternative is the added variable plot, which correctly r
|
42,655
|
The "correct" way to approximate $\text{var}(f(X))$ via Taylor expansion
|
I cannot speak to the derivation of the first approximation (which looks wrong to me). However, the second equation is obtained using a second-order Taylor approximation to $f$ for the case where the underlying distribution is centred, unskewed and mesokurtic. In this case, you have $\mu=0$, $\gamma=0$ and $\kappa=3$. Using the general form of the Taylor approximation you obtain:
$$\begin{equation} \begin{aligned}
\mathbb{V}[f(X)]
&\approx ( f''(\mu)^2 \mu^2 - f'(\mu)f''(\mu) \mu + f'(\mu)^2 ) \cdot \sigma^2 \\[6pt]
&\quad - \frac{f''(\mu)(f'(\mu) + \mu f''(\mu))}{2} \cdot \gamma \sigma^3
+ \frac{f''(\mu)^2}{4} \cdot (\kappa-1) \sigma^4 \\[6pt]
&= f'(\mu)^2 \cdot \sigma^2 + \frac{f''(\mu)^2}{2} \cdot \sigma^4. \\[6pt]
\end{aligned} \end{equation}$$
The first approximation does not look correct to me, and I see no evidence that it is a "commonly reported formula". This approximation cannot be derived from the general second-order Taylor approximation for any assumed level of kurtosis, so I find it unsurprising that it performs poorly. (It would require $\kappa = 0$, which is not a valid kurtosis value.) For this reason, I would expect the second approximation to perform better than the first, except possibly when the underlying distribution is highly platykurtic.
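As a rough numerical illustration of the second approximation (a sketch only: $f(x) = e^x$ with a centred normal $X$, which satisfies $\mu = 0$, $\gamma = 0$, $\kappa = 3$):

```r
# Monte Carlo check of Var[f(X)] ~ f'(mu)^2 sigma^2 + f''(mu)^2/2 sigma^4
set.seed(42)
sigma <- 0.1
x <- rnorm(1e6, mean = 0, sd = sigma)
mc_var <- var(exp(x))            # simulated variance of f(X) = exp(X)

# f'(0) = f''(0) = 1 for f(x) = exp(x)
approx_var <- 1^2 * sigma^2 + (1^2 / 2) * sigma^4

c(monte_carlo = mc_var, taylor = approx_var)
```

For small $\sigma$ the two agree closely; the residual discrepancy is of order $\sigma^4$, coming from the neglected higher-order terms.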
|
The "correct" way to approximate $\text{var}(f(X))$ via Taylor expansion
|
I cannot speak to the derivation of the first approximation (which looks wrong to me). However, the second equation is obtained using a second-order Taylor approximation to $f$ for the case where the
|
The "correct" way to approximate $\text{var}(f(X))$ via Taylor expansion
I cannot speak to the derivation of the first approximation (which looks wrong to me). However, the second equation is obtained using a second-order Taylor approximation to $f$ for the case where the underlying distribution is centred, unskewed and mesokurtic. In this case, you have $\mu=0$, $\gamma=0$ and $\kappa=3$. Using the general form of the Taylor approximation you obtain:
$$\begin{equation} \begin{aligned}
\mathbb{V}[f(X)]
&\approx ( f''(\mu)^2 \mu^2 - f'(\mu)f''(\mu) \mu + f'(\mu)^2 ) \cdot \sigma^2 \\[6pt]
&\quad - \frac{f''(\mu)(f'(\mu) + \mu f''(\mu))}{2} \cdot \gamma \sigma^3
+ \frac{f''(\mu)^2}{4} \cdot (\kappa-1) \sigma^4 \\[6pt]
&= f'(\mu)^2 \cdot \sigma^2 + \frac{f''(\mu)^2}{2} \cdot \sigma^4. \\[6pt]
\end{aligned} \end{equation}$$
The first approximation does not look correct to me, and I see no evidence that it is a "commonly reported formula". This approximation cannot be derived from the general second-order Taylor approximation for any assumed level of kurtosis, so I find it unsurprising that it performs poorly. (It would require $\kappa = 0$ which is not a valid kurtosis value.) For this reason, I would expect the second approximation to perform better than the first, except possibly in the case where the kurtosis of the underlying distribution is highly platykurtic.
|
The "correct" way to approximate $\text{var}(f(X))$ via Taylor expansion
I cannot speak to the derivation of the first approximation (which looks wrong to me). However, the second equation is obtained using a second-order Taylor approximation to $f$ for the case where the
|
42,656
|
Interpretation of zero-truncated Poisson regression coefficients
|
Unfortunately not. There is (to the best of my knowledge) no easy ceteris paribus interpretation of the coefficients' effect on the expectation of the zero-truncated response.
Sometimes the usual multiplicative effect on $\lambda$ has a natural interpretation. Namely, if the expectation of the underlying untruncated counts is of interest.
Moreover, it is clear that the multiplicative effect of a change in $x_j$ is at most $\text{e}^{\beta_j}$. For an observation with high $\lambda$ (i.e., sufficiently far away from the truncation point 0) the effect will become closer to $\text{e}^{\beta_j}$. In contrast, for an observation with low $\lambda$ (i.e., close to or even below the truncation point 0) the effect will be almost zero. But this, of course, depends on the entire regressor vector $x$ and not just the change in one of the regressors.
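This can be illustrated numerically (a sketch with an arbitrary illustrative coefficient, not from any fitted model), using the zero-truncated mean $\mathbb{E}[Y \mid Y > 0] = \lambda / (1 - e^{-\lambda})$:

```r
# Multiplicative effect of a unit change in x_j on the truncated mean
trunc_mean <- function(lambda) lambda / (1 - exp(-lambda))

beta_j <- 0.5  # illustrative coefficient
effect <- function(lambda) trunc_mean(lambda * exp(beta_j)) / trunc_mean(lambda)

effect(0.1)  # lambda near the truncation point: effect close to 1
effect(20)   # lambda far from the truncation point: effect close to exp(beta_j)
exp(beta_j)
```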
|
42,657
|
Forecasting/predicting total sum of donations (following GLM with poisson family and log link)
|
ROUND TWO:
You asked “how do I do this with the log-link function and quasi(Poisson) errors?”. I say put aside your priors suggesting a particular fixed model and use a data-driven empirical process to identify the (possible) memory model, refining parameters and testing both necessity and sufficiency.
When you only have 29 days (4 seasons of daily data), I am normally reluctant to enable the automatic process to consider seasonal activity like day 6 as the OP has smartly viewed and pointed out ... a win for the human!
Following is the audit trail .... the ACF of the original series is here:
I suggested the possibility of a day 6 effect to the software, which then supported that hypothesis while detecting three unusual points and incorporating an ar(1) effect, shown here and here, with the companion PACF of the original series here:
The Actual/Fit and Forecast is here:
with forecasts here:
... all without assuming logarithms or any other possible unwarranted transformation.
Logs can be useful, but the suggestion of a power transform for a theoretic model should never be made based upon the original data; it should be based on the residuals from a model, which is where all the assumptions that need to be tested are placed.
When (and why) should you take the log of a distribution (of numbers)?
Notice the ACF of the residual series, suggesting that the model cannot be proven to be insufficient,
and a supporting (not quite perfect !) residual plot here:
As Isaac Asimov said “the only education is self-education” and your question is certainly in that spirit.
EDITED AFTER OP REQUESTED A LONGER PERIOD OF FORECASTS (A 149-PERIOD FORECAST WAS USED)
Here is the Actual/Fit & Forecast graph with forecasts here
Simulation is performed using the residuals from the model here
I selected not to allow for future anomalies and report here the simulation (see Bootstrap prediction interval for an introductory discussion) for a few select periods ahead:
period 30 ... 1 day ahead
period 31 ... 2 days ahead
period 34 ... 5 days ahead (this is day 6 of the week)
period 178 ... 149 days ahead
And the sum for the next 149 periods Q.E.D. here
This example shows why prediction limits shouldn't be assumed to be symmetrical: errors from a useful model may not be normally distributed BUT are what they are.
Should you wish to extend the forecast period to 335 days to give you a 364-day expectation, simply prorate the 149-day prediction to 335 and add the actuals for the first 29 days (335 + 29 = 364) to get your desired expectation for the first year.
Additionally, you had queried about "the correlation of the errors". Here is the ACF of the model's errors, suggesting sufficiency and no need to worry about this possible effect. This is due to extracting the ar(1) effect and the day 6 effect.
After adding the level shift indicator to the model, here it is, along with the sum of the 149-day simulated predictions: much lower due to the level shift down at period 20.
If I further assumed logs, I would expect the prediction to be even lower.
|
42,658
|
Forecasting/predicting total sum of donations (following GLM with poisson family and log link)
|
I took your 29 days (oldest to newest) and found that there were 3 unusual days thus the following equation with Actual/Fit and Forecast here
All models are wrong ... but some are useful. The series is fundamentally an autoregressive process of order 1 after one has adjusted for the three "unusual data points"; see below for clear support for the anomaly identification.
The plot of the residuals from the above model clearly suggests reduced variability. It is reasonable to suggest that there has been a break-point in the model error variance, suggesting GLS or a weighted model (this was not investigated here due to the sample size!).
Here is the plot of the original data
While the variability of the series is higher at higher values, suggesting to some that there is a need for logarithms (http://stats.stackexchange.com/questions/18844/when-and-why-to-take-the-log-of-a-distribution-of-numbers), it is truer still that the error variance distribution is better characterized as having a deterministic change point at or about day 11.
|
42,659
|
Forecasting/predicting total sum of donations (following GLM with poisson family and log link)
|
For this type of problem, it should be possible to make a prediction of the total donations by predicting the infinite tail of donations, and adding this to the observed donations. To facilitate our analysis, suppose we let $M_t$ denote the donation received on day $t$, and let $U$ denote the total remaining donations, and $V$ denote the total donations (including the observed donations).
If we have observations for days $t = 0,1,...,T$ then we are making predictions for the infinite sequence of days $t = T+1, T+2, T+3, ...$. Under a GLM with a log-link function, the predictions will be of the form:
$$\hat{M}_t = \exp(\hat{\beta}_0 + \hat{\beta}_1 t).$$
It follows that the predicted value of the total remaining donations is:
$$\begin{equation} \begin{aligned}
\hat{U} \equiv \sum_{t=T+1}^\infty \hat{M}_t
&= \sum_{t=T+1}^\infty \exp(\hat{\beta}_0 + \hat{\beta}_1 t) \\[6pt]
&= \exp(\hat{\beta}_0) \sum_{t=T+1}^\infty \exp(\hat{\beta}_1)^t \\[6pt]
&= \exp(\hat{\beta}_0 + \hat{\beta}_1 (T+1)) \sum_{t=0}^\infty \exp(\hat{\beta}_1)^t \\[6pt]
&= \frac{\exp(\hat{\beta}_0 + \hat{\beta}_1 (T+1))}{1-\exp(\hat{\beta}_1)}. \\[6pt]
\end{aligned} \end{equation}$$
Note that the geometric series converges only when $\hat{\beta}_1 < 0$, i.e., when the fitted donation stream decays over time. Thus, the predicted total donations (including the observed donations) is:
$$\begin{equation} \begin{aligned}
\hat{V} \equiv \sum_{t=0}^T m_t + \sum_{t=T+1}^\infty \hat{M}_t
&= \sum_{t=0}^T m_t + \frac{\exp(\hat{\beta}_0 + \hat{\beta}_1 (T+1))}{1-\exp(\hat{\beta}_1)}. \\[6pt]
\end{aligned} \end{equation}$$
This value is the MLE prediction for the total donations (due to the invariance property of the MLE).
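A quick numerical sanity check of the geometric-tail formula (with illustrative coefficient values, not the fitted ones):

```r
# Closed-form infinite tail vs a long finite sum of per-day predictions
B0 <- 7; B1 <- -0.1; T_obs <- 28

tail_sum <- sum(exp(B0 + B1 * ((T_obs + 1):10000)))
closed   <- exp(B0 + B1 * (T_obs + 1)) / (1 - exp(B1))

all.equal(tail_sum, closed)
```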
Implementation in R: I am going to implement this method using a negative-binomial GLM instead of a quasi-Poisson GLM. The advantage of the negative binomial model is that you actually have a fully specified distribution, which makes it easier to obtain prediction intervals (if you so desire). In the code below I create the data frame, fit the model, and then generate the total predicted donations. (Due to your update, I have generated a variable for the day of the week, but I have not incorporated this into the model. It is there if you decide you want to add it.)
#Generate the variables
Donations <- c(6085, 3207, 885, 1279, 1483, 75, 421, 335, 1176,
504, 430, 110, 36, 299, 314, 215, 417, 1712,
2141, 35, 235, 80, 330, 70, 70, 105, 65, 15, 180);
Time <- c(0:28);
DAYS <- c('Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun', 'Mon');
Day <- rep(DAYS, 5)[1:29];
#Create the data frame
DATA <- data.frame(Donations = Donations, Time = Time, Day = factor(Day));
#Fit the model and extract the estimated coefficients
library(MASS);
MODEL <- glm.nb(Donations ~ Time, data = DATA);
COEFS <- summary(MODEL)$coefficient;
B0 <- COEFS[1,1];
B1 <- COEFS[2,1];
#Predict the remaining donations
UHAT <- exp(B0 + B1*nrow(DATA))/(1 - exp(B1));
#Predict the total donations
VHAT <- sum(DATA$Donations) + UHAT;
This particular model has a McFadden pseudo-$R^2$ of 38.89%, which can be improved if you add the day variable into the GLM. The predicted remaining donations and predicted total donations are shown below.
UHAT;
[1] 1109.464
VHAT;
[1] 23418.46
As you can see, under this method, we predict an additional \$1109.46 worth of donations, bringing the predicted total to \$23,418.46.
|
42,660
|
Bootstrapped confidence intervals for performance metrics of predictive models
|
the goal is to build a predictive model.
I read this as: a model that is then actually used for prediction, and we need to know the performance of exactly that model*
Independent Test Set or Hold Out Testing
Now, suppose your setup is that you have a training set that is used to build your model, and once that model is finalized its performance is evaluated with a properly independent test set.
In that case, as the model is fixed, we need to account only for the variance uncertainty due to the limited number of tested cases - as always, if the performance estimate is based on measuring more cases, the uncertainty will be lower.
Thus, bootstrap your figure of merit from the test results.
Figures of merit that are proportions (0/1 loss, e.g. accuracy, precision, recall, sensitivity, ...) follow a binomial distribution, so you can also directly calculate confidence intervals this way. This is particularly useful as you can do it beforehand as a back-of-the-envelope calculation to check whether your experiment can possibly result in a sufficiently narrow confidence interval for your figure of merit to be of practical use.
We've outlined such approaches in: Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33.
DOI: 10.1016/j.aca.2012.11.007
accepted manuscript on arXiv: 1211.1323
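A minimal sketch of the hold-out case (simulated 0/1 correctness values stand in for real test-set predictions of a fixed model):

```r
set.seed(7)
n_test  <- 150
correct <- rbinom(n_test, 1, 0.8)  # 1 = prediction was correct

# Percentile bootstrap CI for the accuracy of the fixed model
boot_acc <- replicate(2000, mean(sample(correct, replace = TRUE)))
quantile(boot_acc, c(0.025, 0.975))

# For proportion-type figures of merit, a binomial CI is available directly
binom.test(sum(correct), n_test)$conf.int
```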
Resampling validation: Cross Validation, Set Validation, Out-Of-Bootstrap & Co.
Resampling validation takes so-called surrogate models trained on a subset of the data at hand and tests them with the respective cases not used for that surrogate model's training. This is typically done for many surrogate models, and the test results are pooled and used as an approximation of the performance of the final model, which is trained with the same algorithm but on the whole data set.
In this case, the situation is more complex as we have to take into account:
Bias: as the surrogate models are trained on smaller subsets, they are usually a bit worse than the final model: this is the root cause of the slight pessimistic bias of resampling validation. Your confidence interval will be off due to this bias.
$k$-fold CV with not too small $k$ usually has low bias, while the bias of out-of-bootstrap can be more substantial and I've seen .632-bootstrap having optimistic bias.
However, depending on the application question behind this, this bias may not be too bad: I've been working a lot developing models for clinical diagnostic questions. In that case, I use cross validation (low but pessimistic bias) and can then say that my confidence interval will be a bit too conservative - which in this case is far more acceptable than a possibly overoptimistic estimate.
With some experience, you may be able to get an idea of the order of magnitude for your data.
We've studied this for small n large p situations that are typical in my field: Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G. Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005).
Variance uncertainty due to the limited number of tested cases: this is a bit more tricky now than above: as the results for all cases are pooled, you'd bootstrap test results from all cases. But that includes test cases from multiple surrogate models
and there is also variance uncertainty due to possible model (in)stability, which is caused by the limited number of training cases and possibly by non-determinism in the training algorithm ("variance source 2b").
As this is important information in its own right, you may want to directly measure this with repeated/iterated cross validation or bootstrap-based resampling; see e.g. Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 2008, 390, 1261-1271.
DOI: 10.1007/s00216-007-1818-6
With such a repeated cross validation or out-of-bootstrap or any of its variants (.632, .632+), your raw test results include both relevant sources of variance. But what we want is the distribution of the figure of merit that pools both sources: $n_t$ tested independent cases and $n_b$ surrogate models.
While I've not quite finished thinking through this, at the moment I bootstrap both $n_b$ out of $n_b$ surrogate models and $n_t$ out of $n_t$ test cases to construct my distribution for the figure of merit.
(I've presented a poster "C. Beleites & A. Krähmer: Cross-Validation Revisited: using Uncertainty Estimates to Improve Model Autotuning" about this a few weeks ago; please do not hesitate to email me [see profile] if you'd like to have a copy.)
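One possible reading of this two-way resampling, sketched on a simulated matrix of 0/1 correctness values (rows are surrogate models, columns are test cases; all names and sizes are illustrative):

```r
set.seed(3)
n_b <- 20    # surrogate models
n_t <- 100   # test cases
res <- matrix(rbinom(n_b * n_t, 1, 0.75), nrow = n_b)

# Resample models and cases simultaneously, then pool the figure of merit
boot_fom <- replicate(2000, {
  mods  <- sample(n_b, replace = TRUE)
  cases <- sample(n_t, replace = TRUE)
  mean(res[mods, cases])
})
quantile(boot_fom, c(0.025, 0.975))
```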
* As opposed to: a model trained with this training algorithm on a data set (not this data set) of size $n$ of this general population => in that case, a proper estimate of the variance will need multiple data sets, resampling validation cannot estimate it, see
Bengio, Y. and Grandvalet, Y.: No Unbiased Estimator of the Variance of K-Fold Cross-Validation Journal of Machine Learning Research, 2004, 5, 1089-1105.
|
Bootstrapped confidence intervals for performance metrics of predictive models
|
the goal is to build a predictive model.
I read this as: a model that is then actually used for prediction, and we need to know the performance of exactly that model*
Independent Test Set or Hold Out
|
Bootstrapped confidence intervals for performance metrics of predictive models
the goal is to build a predictive model.
I read this as: a model that is then actually used for prediction, and we need to know the performance of exactly that model*
Independent Test Set or Hold Out Testing
Now, if you set up is that you have a training set that is used to build your model, and once that model is finalized its performance is evaluated with with a properly independent test set.
In that case, as the model is fixed, we need to account only for the variance uncertainty due to the limited number of tested cases - as always, if the performance estimate is based on measuring more cases, the uncertainty will be lower.
Thus, bootstrap your figure of merit from the test results.
Figures of merit that are proportions (0/1 loss, e.g. accuracy, precision, recall, sensitivity, ...) follow a binomial distribution, so you can also directly calculate confidence intervals this way. This is particularly useful as you can do that beforehand as back-of-the-envelope calculation to check whether your experiment can possibly result in a sufficiently narrow confidence interval for your figure of merit to be of practical use.
We've outlined such approaches in: Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33.
DOI: 10.1016/j.aca.2012.11.007
accepted manuscript on arXiv: 1211.1323
Resampling validation: Cross Validation, Set Validation, Out-Of-Bootstrap & Co.
Resampling validation takes so-called surrogate models trained on a subset of the data at hand and tests them with the respective cases not used for that surrogate model's training. This is typically done for many surrogate models, and the test results are pooled and used as approximation for the performance of the final model which is trained with the same algorithm but on the whole data set.
In this case, the situation is more complex as we have to take into account:
Bias: as the surrogate models are trained on smaller subsets, they are usually a bit worse than the final model: this is the root cause of the slight pessimistic bias of resampling validation. Your confidence interval will be off due to this bias.
$k$-fold CV with not too small $k$ usually has low bias, while the bias of out-of-bootstrap can be more substantial and I've seen .632-bootstrap having optimistic bias.
However, depending on the application question behind this, this bias may not be too bad: I've been working a lot developing models for clinical diagnostic questions. In that case, I use cross validation (low but pessimistic bias) and can then say that my confidence interval will be a bit too conservative - which in this case is far more acceptable than a possibly overoptimistic estimate.
With some experience, you may be able to get an idea of the order of magnitude for your data.
We've studied this for small n large p situations that are typical in my field: Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G. Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005).
Variance uncertainty due to the limited number of tested cases: this is a bit more tricky now than above: as the results for all cases are pooled, you'd bootstrap test results from all cases. But that includes test cases from multiple surrogate models
and there is also variance uncertainty due to possible model (in)stability, which is caused by the limited number of training cases and possibly by non-determinism in the training algorithm ("variance source 2b").
As this is important information on it's own, you may want to directly measure this with repeated/iterated cross validation or bootstrap-based resampling, see e.g. Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations Anal Bioanal Chem, 2008, 390, 1261-1271.
DOI: 10.1007/s00216-007-1818-6
With such a repeated cross validation or out-of-bootstrap or any of its variants (.632, .632+), your raw test results include both relevant sources of variance. But what we want is the distribution of the figure of merit that pools both sources: $n_t$ tested independent cases and $n_b$ surrogate models.
While I've not quite finished thinking though this, at the moment I bootstrap both $n_b$ out of $n_b$ surrogate models and $n_t$ out of $n_t$ test cases to construct my distribution for the figure of merit.
(I've presented a poster "C. Beleites & A. Krähmer: Cross-Validation Revisited:
using Uncertainty Estimates to Improve Model Autotuning" about this a few weeks ago, please do not hesitate to email me [see profile] if you'd like to have a copy)
* As opposed to: a model trained with this training algorithm on a data set (not this data set) of size $n$ of this general population => in that case, a proper estimate of the variance will need multiple data sets, resampling validation cannot estimate it, see
Bengio, Y. and Grandvalet, Y.: No Unbiased Estimator of the Variance of K-Fold Cross-Validation Journal of Machine Learning Research, 2004, 5, 1089-1105.
|
42,661
|
Bootstrapped confidence intervals for performance metrics of predictive models
|
If you are just evaluating predictions of a model on one test set, so no CV, then this is a simple problem and you can just treat your model predictions as you would any other variable in order to get some estimates and their CI.
So if you want to get a CI using the bootstrap, you don't need to refit the model many times; you just bootstrap the errors on the test set. But you don't even need to bootstrap - you can use standard methods. For example, if you want a CI for your accuracy, you can get it from a binomial test on the proportion of correctly predicted samples: the CI for accuracy is the same as the CI for a proportion.
This works because you are testing predictions of one fixed model. If you don't have one fixed model, such as if you want to evaluate your CV performance, then this will not give you correct intervals, and I don't know what will.
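For example, the proportion CI for accuracy can be computed directly. Here is a Python sketch using the Wilson score interval (one of several standard choices), for a hypothetical 85 correct predictions out of 100 test cases:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion, e.g. k correctly
    predicted samples out of n test cases (default: ~95% confidence)."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

lo, hi = wilson_ci(85, 100)   # 85 correct out of 100 test cases
```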
|
42,662
|
Simulate a Continuous Joint pdf in R Using Known Distributions
|
There are many ways you can simulate this bivariate random vector. Probably the most efficient way is to derive the marginal distribution of one variable, and the conditional distribution of the other, and then simulate the variables individually using these distributions.
An alternative method, which is less efficient, but does not require you to derive the marginal and conditional distributions, is to use rejection sampling. The simplest method in this case is to use a uniform generating distribution over the unit square. We have a bivariate continuous random vector $(X,Y)$ with a bounded density $f$ over the support $\mathcal{S} \subset [0,1]^2$. Thus, we can generate $X_*,Y_* \sim \text{IID U}[0,1]$ and then accept the generated value with acceptance probability:
$$A(x_*,y_*) \equiv \frac{f(x_*,y_*)}{\sup_{(x,y) \in \mathcal{S}} f(x,y)} = \frac{24 x_* (1-y_*) \cdot \mathbb{I}(x_* \leqslant y_*)}{24} = x_* (1-y_*) \cdot \mathbb{I}(x_* \leqslant y_*).$$
It is fairly simple to program this method into R. In the code below we create a function SIMULATE that takes an input n and produces a matrix with this many outputs of the bivariate random vector in question. (The matrix has two columns for the two variables; each row is one simulated value of the random vector.)
#Create function to simulate vectors from specified distribution
SIMULATE <- function(n) {
#Set output matrix
OUT <- matrix(NA, nrow = n, ncol = 2);
colnames(OUT) <- c('X','Y');
#Undertake rejection sampling
for (i in 1:n) {
ACCEPT <- FALSE;
while (!ACCEPT) {
#Simulate proposed values
X <- runif(1);
Y <- runif(1);
#Determine acceptance
AA <- X*(1-Y)*(X <= Y);
ACCEPT <- (runif(1) <= AA); }
OUT[i,] <- c(X,Y); }
OUT; }
We can use this function to simulate any number of bivariate outputs from this distribution. Below we simulate $n=10^3$ outputs of the random vector.
#Set the seed
set.seed(75375211);
#Generate simulations and show the first few values
SIMULATIONS <- SIMULATE(1000);
head(SIMULATIONS);
X Y
[1,] 0.4124875 0.4681140
[2,] 0.1345465 0.1565690
[3,] 0.4703997 0.4810464
[4,] 0.6532923 0.8114625
[5,] 0.5971606 0.6286653
[6,] 0.6476007 0.8088133
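For comparison, the more efficient marginal/conditional approach mentioned at the top can be made explicit for this density: from $f(x,y) = 24 x (1-y)$ on $0 \leqslant x \leqslant y \leqslant 1$ one gets the marginal $f_Y(y) = 12 y^2 (1-y)$, which is a $\text{Beta}(3,2)$ density, and $X|Y=y$ has density $2x/y^2$ on $[0,y]$, so $X = y\sqrt{U}$ by inversion. A sketch of the same idea (in Python rather than R):

```python
import numpy as np

rng = np.random.default_rng(75375211)

def simulate_direct(n, rng=rng):
    """Sample (X, Y) with density f(x, y) = 24 x (1 - y) on 0 <= x <= y <= 1:
    Y has marginal 12 y^2 (1 - y), i.e. Beta(3, 2); given Y = y, X has density
    2 x / y^2 on [0, y], with inverse CDF transform x = y * sqrt(u)."""
    y = rng.beta(3, 2, size=n)
    x = y * np.sqrt(rng.uniform(size=n))
    return np.column_stack((x, y))

sims = simulate_direct(100_000)
```

No rejections are needed, so every draw is used.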
|
42,663
|
Why need wald test ( a squared version of t test ) when we already have t test?
|
There are similarities between a Wald test and a t-test, as this page describes.* They both are used to determine whether a coefficient value is significantly different from 0 (or from any value specified in a null hypothesis). They typically arise in different contexts that show why we need both.
Consider the situation for testing whether a coefficient is significantly different from 0 in two scenarios: one for the mean value of a set of numbers examined with a t-test, and another for a coefficient in a Cox regression examined with a Wald test. In both cases you use the ratio of the observed difference from 0 to a measure of the error in that estimate. The error term, however, is determined in different ways.
For testing whether a mean value of a set of numbers is different from 0, the t-test takes into account the fact that you are estimating both the mean value and the variance from a particular number of observations. It assumes that the underlying population is normally distributed and uses the properties of normal distributions to calculate the error estimate. This gives an exact answer, under that assumption, for the probability that the observed difference of the mean value from 0 could have arisen by chance if the true population mean were 0.
Error estimates for coefficients determined by maximizing likelihoods (partial likelihoods for Cox regressions) are obtained in a different way. This answer describes how the matrix of second derivatives of the log-likelihood, calculated at the maximum likelihood, can be transformed into an estimate of the variance-covariance matrix for all coefficients in a multiple regression setting.
For the case of a single coefficient that matrix is a single value, the variance of the coefficient estimate. The Wald test for a single coefficient assumes that the coefficient estimate is normally distributed. That's different from the assumptions for the t-test; in this case you aren't sampling a specified number of times from a normal distribution but rather using your estimate of the variance directly. Note that with small sample sizes that assumption of normally distributed coefficient estimates might not hold.
This page shows how the t-test and the Wald test become equivalent as sample size increases.
*The form of the Wald statistic that you have in mind, for testing against a chi-square distribution, is the square of that presented on the linked page.
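A quick numerical sketch of that convergence, for a hypothetical observed statistic of 2: the t-test p-value (with $n-1$ degrees of freedom) approaches the Wald p-value (the squared statistic referred to a chi-square with 1 df, which equals the two-sided normal p-value) as $n$ grows:

```python
from scipy import stats

# For a fixed observed t statistic, compare the t-test p-value (df = n - 1)
# with the Wald p-value (t^2 against chi-square with 1 df).
t_stat = 2.0
for n in (5, 30, 1000):
    p_t = 2 * stats.t.sf(abs(t_stat), df=n - 1)
    p_wald = stats.chi2.sf(t_stat ** 2, df=1)   # the same for every n
    print(n, round(p_t, 4), round(p_wald, 4))
```

At $n=5$ the two p-values differ substantially; by $n=1000$ they agree to three decimal places.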
|
42,664
|
What is the relation between ELBO and SGVB?
|
The evidence lower bound is a bound on the log probability of the data. But there is no straightforward way to compute the ELBO, since it requires taking an expectation over the variational posterior. Therefore we need a procedure to estimate the ELBO (more specifically we need some way to estimate the gradient of the ELBO so we can optimize it).
The straightforward method is simply to estimate the expectation by sampling from the variational posterior, and then to compute the gradient of the estimator using the score function gradient estimator. However the variance of this method is too high for practical use, which is why the authors introduce their "SGVB" estimator.
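A toy sketch of why: for the made-up objective $E_{z\sim N(\mu,1)}[z^2]$, whose gradient with respect to $\mu$ is $2\mu$, both the score-function estimator and the reparameterized estimator are unbiased, but their variances differ by an order of magnitude:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gradient of E_{z ~ N(mu, 1)}[z^2] with respect to mu; the true value is 2*mu.
mu, n = 1.5, 100_000
eps = rng.standard_normal(n)
z = mu + eps                      # reparameterization: z = mu + eps

score_grads = z ** 2 * (z - mu)   # score-function (REINFORCE) estimator
reparam_grads = 2 * (mu + eps)    # reparameterized ("SGVB"-style) estimator

# Both sample means are close to 2*mu = 3, but the reparameterized
# estimator's variance is far smaller, which is what makes it practical.
```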
|
42,665
|
conditional and interventional expectation
|
Yes, you can consider $X$ and $Z$ to be arbitrary vectors of variables. The identification problem of expressions of the type $E[Y|do(X)]$ and $E[Y|do(X), Z]$ for arbitrary vectors of variables $X$ and $Z$ has been solved for nonparametric models using the do-calculus (via the ID-algorithm).
For instance, in the model below, suppose you are interested in identifying $E[Y|do(X_1, X_2)]$:
This is given by (here you can just use the truncated factorization formula):
$$
E[Y|do(X_1, X_2)] = \sum_{Z_1, Z_2} P(Y|X_1, X_2, Z_2) P(Z_2|X_1,Z_1) P(Z_1)
$$
Or equivalently, using inverse probability weights:
$$
E[Y|do(X_1, X_2)] = \sum_{Z_1, Z_2} \frac{P(Y, X_1, X_2, Z_1, Z_2)}{P(X_2|X_1, Z_1, Z_2)P(X_1|Z_1)}
$$
The R package causaleffect has several of the existing identification algorithms implemented.
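As a sanity check, the two identification expressions above can be verified numerically on a toy model with binary variables and randomly chosen conditional probability tables (all numbers below are made up for illustration; the DAG is the one implied by the factorization: $Z_1$; $X_1 \leftarrow Z_1$; $Z_2 \leftarrow X_1, Z_1$; $X_2 \leftarrow X_1, Z_1, Z_2$; $Y \leftarrow X_1, X_2, Z_2$):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Random CPTs for binary variables.
p_z1 = rng.uniform(0.2, 0.8)
p_x1 = rng.uniform(0.2, 0.8, size=2)            # P(X1=1 | Z1)
p_z2 = rng.uniform(0.2, 0.8, size=(2, 2))       # P(Z2=1 | X1, Z1)
p_x2 = rng.uniform(0.2, 0.8, size=(2, 2, 2))    # P(X2=1 | X1, Z1, Z2)
p_y  = rng.uniform(0.2, 0.8, size=(2, 2, 2))    # P(Y=1  | X1, X2, Z2)

def bern(p, v):                                  # P(V = v) for Bernoulli(p)
    return p if v == 1 else 1 - p

def joint(z1, x1, z2, x2, y):
    return (bern(p_z1, z1) * bern(p_x1[z1], x1) * bern(p_z2[x1, z1], z2)
            * bern(p_x2[x1, z1, z2], x2) * bern(p_y[x1, x2, z2], y))

x1, x2 = 1, 0                                    # intervention do(X1=1, X2=0)

# Truncated factorization: sum_{z1,z2} P(Y=1|x1,x2,z2) P(z2|x1,z1) P(z1)
trunc = sum(bern(p_y[x1, x2, z2], 1) * bern(p_z2[x1, z1], z2) * bern(p_z1, z1)
            for z1, z2 in itertools.product((0, 1), repeat=2))

# Inverse probability weighting:
# sum_{z1,z2} P(Y=1, x1, x2, z1, z2) / (P(x2|x1,z1,z2) P(x1|z1))
ipw = sum(joint(z1, x1, z2, x2, 1)
          / (bern(p_x2[x1, z1, z2], x2) * bern(p_x1[z1], x1))
          for z1, z2 in itertools.product((0, 1), repeat=2))
```

The two expressions agree exactly, as they must.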
|
42,666
|
Pairwise comparisons of regression coefficients [duplicate]
|
Use the emmeans package, specifically pairs(emtrends(m, ~grp, var="var")) ... where grp is the categorical (grouping) variable, "var" is the slope variable.
library(emmeans)
m <- lm(Sepal.Width ~ Species*Sepal.Length, iris)
pairs(emtrends(m, ~Species, var="Sepal.Length"))
## contrast estimate SE df t.ratio p.value
## setosa - versicolor 0.4788 0.1337 144 3.582 0.0013
## setosa - virginica 0.5666 0.1262 144 4.490 <.0001
## versicolor - virginica 0.0878 0.0971 144 0.905 0.6382
## P value adjustment: tukey method for comparing a family of 3 estimates
|
42,667
|
How to improve this time series model?
|
You politely asked "My question is about how to improve this model further . Because you will see that residuals are not normally distribute" . I have answered .....
I took your 290 monthly values (1987/1) and introduced them to AUTOBOX which automatically identified the following model.
1) ARIMA (1,1,0)(0,0,0)12 WITH AR COEFFICIENT=.865 THUS THIS IS VERY CLOSE TO SECOND DIFFERENCES WHICH YOU HAD CONSIDERED
2) 1 SEASONAL PULSE (POSITIVE) STARTING IN NOVEMBER 1996
3) 12 UNUSUAL VALUES (PULSES)
4) TWO MODEL ERROR VARIANCE CHANGES A) DOWNWARDS AT PERIOD 60 (1991/12) AND UPWARDS AT PERIOD 174 (2001/6) ... BOTH VISUALLY OBVIOUS FROM YOUR GRAPHS
This is the model
The Actual, Fit and Forecast plot is here, with forecasts (and 95% Monte Carlo based limits) here
The procedure following Tsay https://pdfs.semanticscholar.org/09c4/ba8dd3cc88289caf18d71e8985bdd11ad21c.pdf to identify variance change points yielded
The model residuals are here
The reason that your software's attempt to model this data failed is that the original data is compromised by complications or opportunities.
|
42,668
|
Advantages of using t-value as test statistic in permutation tests?
|
The main reason to use a t-value (and, indeed, any approximately pivotal test statistic) in a permutation test is to give the test asymptotic validity in the case of unequal variances. Given that you always want this property, you should always base permutation tests on β/(SE β) rather than just β alone.
This property was first described in Janssen, 1997. As many textbooks and papers note, the ordinary permutation test is only "exact" for the test of identical distributions. Typically, however, we want to test for equality of the parameter of interest, not that the distributions are identical. More importantly, we also generally want to make directional conclusions about the results of the test. Janssen (and later, Chung and Romano) pointed out that in order to do this you have to use a pivotal test statistic (which is related to why the bootstrap-t functions better than the ordinary bootstrap).
In order to make an approximately pivotal test statistic, you can divide a comparison of interest by an estimate of its standard error (called Studentization). The t-value is the classic example of this procedure. Given that the null hypothesis of "equal distributions" is rarely interesting, you should ALWAYS be using an approximately pivotal test statistic. Note, however, that it is sometimes difficult to estimate the standard error of a comparison (though you can always nest a bootstrap inside of your permutation test).
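A minimal sketch of a Studentized permutation test for two toy samples with unequal variances; the Welch standard error is one reasonable choice of Studentization here:

```python
import numpy as np

rng = np.random.default_rng(7)

def welch_t(x, y):
    """Studentized statistic: mean difference over its (Welch) standard error."""
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    return (x.mean() - y.mean()) / se

def studentized_perm_test(x, y, n_perm=4999, rng=rng):
    """Two-sided permutation p-value based on the Studentized statistic."""
    obs = abs(welch_t(x, y))
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        hits += abs(welch_t(perm[:len(x)], perm[len(x):])) >= obs
    return (hits + 1) / (n_perm + 1)

# Toy data: the groups differ in mean AND in variance.
x = rng.normal(0.0, 1.0, 30)
y = rng.normal(2.0, 3.0, 50)
p = studentized_perm_test(x, y)
```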
|
42,669
|
Cross validation : hyper-parameter tuning ? or model validation?
|
I'd say mostly the first (i.e. Hyper-parameter tuning).
If you have a sufficiently large hold-out test set you can evaluate the models pretty reliably. When wanting to select hyperparameters, having a validation set could cause your model to overfit on that. CV makes it much harder to do so.
|
42,670
|
Cross validation : hyper-parameter tuning ? or model validation?
|
Model Selection and Model Hyperparams Tuning are conceptually equivalent.
If a model has a single Hyperparam, which in turn can have 3 possible values, it's like you have 3 different models. Very similar models maybe, but for the purposes of optimization they are treated as 3 completely different and independent models.
So whether your hyperparam is saying "use N=1/2/3 neurons per layer" or "use model 1=linear regression/2=lasso/3=ridge", you indeed have 3 different models, all the same.
Cross-validation would then be a way to compare these 3 competing models and select the one that is "most adept" at learning the behavior you want to resemble. You can see that it is the "most adept" because every time you train it on a subset of your data, it then performs well on the remaining data.
|
42,671
|
Cross validation : hyper-parameter tuning ? or model validation?
|
Suppose you have split your data into train-validation-test sets. You do not usually split it into train-test unless your models have no hyperparameters.
Validation set is always used to tune hyperparameters of your models. Test set is used to assess the final performance of your model, and compare different classes of models (e.g. random forest vs neural network vs svm) by their performance.
Cross-validation is tightly connected to the validation set and hyperparameter selection. In general, cross-validation splits your large mass of (train + validation) into training and validation sets repeatedly. This should provide you with an out-of-sample performance approximation, and based on it you choose your hyperparameters (model). However, you can treat your model class (random forest, svm, neural network) as hyperparameter. In this way, you can choose your final model class using cross-validation.
Test set is used purely for reporting (how well your chosen model performs). According to some authors, you cannot use test set for ANY model selection, even if it is model class selection (svm vs random forest vs etc.).
However, I would not compare broad model classes using cross-validation. It is best to decide which model to use based on your test set. By using your test set purely for reporting as statisticians suggest, you are losing a lot of data. However, using it to choose a model class as the last step does not influence the out-of-sample overfitting much, and this is what all scientific papers do when they claim that their state-of-the-art method outperformed some other method on some large dataset.
As for #1) You are making an understandable error by confusing the cross-validation division procedure and train-validation-test sets. Train-validation-test is a data set division, while cross-validation is a specific way HOW you can divide your mass of data into training and validation sets (while the test set has nothing to do with cross-validation, and is allocated separately).
As for #2) Yes, you can do a Lasso and Gradient Boosted Regression Tree comparison using the validation set (and the cross-validation split method), but it would be better to compare them on the test set, while cross-validation (the validation set) is used to find the hyperparameters of your GBRT and Lasso regression separately.
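A hand-rolled sketch of this workflow (toy data, ridge regression, a made-up grid of penalties): hyperparameters are chosen by cross-validation on the training data only, and the untouched test set is used once, for reporting:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy regression data: linear signal plus unit-variance noise.
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 0.0]) + rng.normal(size=200)

# Hold out the test set FIRST; it takes no part in cross-validation.
X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

def ridge_fit(X, y, lam):
    """Closed-form ridge regression coefficients."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(X, y, lam, k=5):
    """k-fold cross-validation MSE, computed on the training data only."""
    errs = []
    for fold in np.array_split(np.arange(len(y)), k):
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False
        w = ridge_fit(X[mask], y[mask], lam)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return np.mean(errs)

# Hyperparameter tuning: choose the penalty by CV on the training set...
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(lams, key=lambda lam: cv_mse(X_train, y_train, lam))

# ...then refit on all training data and report ONCE on the untouched test set.
w = ridge_fit(X_train, y_train, best_lam)
test_mse = np.mean((X_test @ w - y_test) ** 2)
```

The same pattern extends to comparing model classes: treat the class as one more "hyperparameter" inside the CV loop, or compare the finalists on the test set as discussed above.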
|
42,672
|
MLE of $f(x\mid\theta) = \theta x^{\theta−1}e^{−x^{\theta}}I_{(0,\infty)}(x)$
|
If you're not sure whether or not your answer is correct, a useful check is to plot a graph of the log-likelihood function and see if your purported MLE looks visually to give the maximising value. I will do this below, but I include the mathematics for deriving the MLE in the general case.
MLE in the general case: For IID data from this distribution, you have log-likelihood:
$$\ell_\mathbf{x}(\theta) = n \ln \theta + (\theta-1) \sum_{i=1}^n \ln x_i - \sum_{i=1}^n x_i^\theta \quad \quad \text{for } \theta>0.$$
The corresponding score function is:
$$s_\mathbf{x}(\theta) = \frac{d\ell_\mathbf{x}}{d\theta}(\theta) = \frac{n}{\theta} + \sum_{i=1}^n (1-x_i^\theta) \ln x_i,$$
and the observed information is:
$$I_\mathbf{x}(\theta) = - \frac{d^2\ell_\mathbf{x}}{d\theta^2}(\theta) = \frac{n}{\theta^2} + \sum_{i=1}^n x_i^\theta (\ln x_i)^2 > 0.$$
We can see from the positive observed information that the log-likelihood is strictly concave, which means that the MLE will be at the unique critical point (unless the score is monotone, in which case the maximum is approached at the boundary of the parameter range, and there is no MLE). The critical point is given implicitly by solving the score equation:
$$0 = s_\mathbf{x}(\hat{\theta}) = \frac{n}{\hat{\theta}} + \sum_{i=1}^n (1 - x_i^{\hat{\theta}}) \ln x_i.$$
There is no closed-form expression for the MLE in this case, so we need to find it numerically.
Iterative algorithm for MLE: Applying Newton's method, with your chosen starting-point, gives:
$$\hat{\theta}_0 = 1 \quad \quad \quad \hat{\theta}_{k+1} = \hat{\theta}_{k} + \frac{s_\mathbf{x}(\hat{\theta}_k)}{I_\mathbf{x}(\hat{\theta}_k)} = \hat{\theta}_{k} + \frac{n \hat{\theta}_k + \hat{\theta}_k^2 \sum_{i=1}^n (1 - x_i^{\hat{\theta}_k}) \ln x_i}{n + \hat{\theta}_k^2 \sum_{i=1}^n x_i^{\hat{\theta}_k} (\ln x_i)^2}.$$
(Note: The starting point you have chosen is a reasonable one. With some calculus, it is possible to show that $\mathbb{E}(X) = \Gamma(1 + 1/\theta)$, so we could approximate $\bar{x} \approx \Gamma(1 + 1/\theta)$ as a starting point for the iteration. However, the problem is that this already requires numerical solution, so it is not a great starting point. The value you have chosen is reasonable, and the iteration should converge quite rapidly in any case.) We can implement this iteration algorithm in the following R code:
#Create function to find the MLE via iteration
#The input m is the number of iterations to perform (default is five iterations)
MLE_ITERATION <- function(x, m = 5) {
  n <- length(x)
  theta <- rep(1, m + 1)
  for (k in 1:m) {
    NUMERATOR   <- n * theta[k] + theta[k]^2 * sum((1 - x^theta[k]) * log(x))
    DENOMINATOR <- n + theta[k]^2 * sum(x^theta[k] * (log(x))^2)
    theta[k + 1] <- theta[k] + NUMERATOR / DENOMINATOR
  }
  theta
}
Application to your data set: You have the data vector $\mathbf{x} = (0.60, 5.17, 0.23)$. With $m=10$ iterations (which is more than you need) you get the MLE $\hat{\theta} = 0.6771516$. Here is the R code used to generate the MLE and the plot of the log-likelihood:
#Enter your data
x <- c(0.60, 5.17, 0.23);
#Choose number of iterations
m <- 10;
#Generate the iterations, and display the last value
THETA_ITER <- MLE_ITERATION(x, m);
THETA_ITER[m+1];
[1] 0.6771516
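As a cross-check (my own sketch, not part of the original answer), the same Newton scheme can be reproduced in plain Python. It also computes the moment-based starting point from the note above by solving $\Gamma(1+1/\theta)=\bar{x}$ with bisection; since $\bar{x}=2$ and $\Gamma(3)=2$, this happens to give exactly $\theta_0 = 0.5$:

```python
import math

x = [0.60, 5.17, 0.23]
n = len(x)

def score(t):
    """First derivative of the log-likelihood."""
    return n / t + sum((1 - xi**t) * math.log(xi) for xi in x)

def info(t):
    """Observed information (negative second derivative)."""
    return n / t**2 + sum(xi**t * math.log(xi)**2 for xi in x)

# Moment-based starting point: solve Gamma(1 + 1/theta) = mean(x) by bisection.
# Gamma is increasing on [2, 10], and Gamma(3) = 2 = mean(x) here.
xbar = sum(x) / n
lo, hi = 2.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if math.gamma(mid) < xbar:
        lo = mid
    else:
        hi = mid
theta0 = 1 / (lo - 1)

# Newton-Raphson from theta = 1, exactly as in the R iteration above
theta = 1.0
for _ in range(10):
    theta += score(theta) / info(theta)
```

Ten Newton steps from $\theta = 1$ reproduce the value reported above, $\hat{\theta} \approx 0.6771516$.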
#Generate vectorised log-likelihood function
LOG_LIKE <- function(theta) {
  LL <- rep(0, length(theta))
  for (i in seq_along(theta)) {
    LL[i] <- length(x) * log(theta[i]) + (theta[i] - 1) * sum(log(x)) - sum(x^theta[i])
  }
  LL
}
DATA <- data.frame(Theta = 1:200/100,
Log_Like = LOG_LIKE(1:200/100));
#Plot the log-likelihood function with MLE
library(ggplot2);
ggplot(data = DATA, aes(x = Theta, y = Log_Like)) +
geom_line(size = 1.2) +
geom_vline(xintercept = THETA_ITER[m+1],
size = 1.2, linetype = 'dashed', colour = 'red') +
theme(plot.title = element_text(hjust = 0.5, face = 'bold'),
plot.subtitle = element_text(hjust = 0.5)) +
ggtitle('Plot of Log-Likelihood Function') +
labs(subtitle = '(Red line shows MLE - obtained via iteration)') +
xlab(expression(theta)) + ylab('Log-Likelihood');
|
42,673
|
What is this parameter estimation strategy called?
|
Your second estimator is the "plug-in" estimator; by the invariance property of MLEs, it is the maximum-likelihood estimator (under the normality assumption). The first estimator could be called a moments estimator, but could also be seen as non-parametric, as it is unbiased without any need for a normality assumption.
So you could try to find a better unbiased estimator using Rao-Blackwell theorem.
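To make the distinction concrete, here is a standard textbook example (my own addition, since the specific estimand is not restated here): for a normal variance, the plug-in MLE divides the squared deviations by $n$, while the unbiased moments-style estimator divides by $n-1$:

```python
data = [2.0, 4.0, 6.0, 8.0]
n = len(data)
mean = sum(data) / n

ss = sum((d - mean) ** 2 for d in data)  # sum of squared deviations
var_mle = ss / n                         # plug-in / MLE (biased downward)
var_unbiased = ss / (n - 1)              # moments-style unbiased estimate
```

The plug-in estimate is systematically smaller; the unbiased one corrects for the degree of freedom used by estimating the mean.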
|
42,674
|
Convolution for uniform distribution and standard normal distribution
|
You're making the substitution $x = z - u$ to transform the integral. The differential of this is:
$$ dx = 0 - du = - du $$
So the calculation finishes up like this:
$$=\int_{0}^{1}f_X(z-u)du = - \int_{z}^{z-1}f_X(x)dx = \int_{z-1}^{z}f_X(x)dx$$
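A quick numeric check of the result (my own sketch): here $f_X$ is the standard normal density, so $f_Z(z)=\int_{z-1}^{z} f_X(x)\,dx = \Phi(z)-\Phi(z-1)$, and a trapezoidal approximation of the integral agrees with the closed form:

```python
import math

def phi(x):
    """Standard normal pdf, f_X."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f_Z(z, steps=10000):
    """Trapezoidal approximation of the integral of f_X over [z-1, z]."""
    h = 1.0 / steps
    total = 0.5 * (phi(z - 1) + phi(z))
    for i in range(1, steps):
        total += phi(z - 1 + i * h)
    return total * h

z = 0.7
closed_form = Phi(z) - Phi(z - 1)
```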
|
42,675
|
Is convolution neural network (CNN) a special case of multilayer perceptron (MLP)? And why not use MLP for everything?
|
A convolution can be expressed as matrix multiplication, but the matrix is multiplied with a patch around every position in the image separately. So you go to position (1,1), extract a patch and multiply it with an MLP. Then you do the same thing at position (1,2), and so forth. So obviously there are fewer degrees of freedom than applying an MLP directly. Most people regard an MLP as a special case of a convolution where the spatial dimensions are 1x1.
Edit Start
Regarding MLP as special case of CNN, some comments do not share this opinion. Yann LeCun, who can be counted as one of the inventors of CNNs, made a similar comment before on FB: https://www.facebook.com/yann.lecun/posts/10152820758292143
He said that in CNNs there is no such thing as a "fully connected" layer, there is only a layer with 1x1 spatial extent and a kernel with 1x1 spatial extent. If one can "convert" FC layers, which are the single layers of MLPs, into convolutional layers, then one can obviously also convert an entire MLP into a CNN by interpreting the input as a vector with only channel dimensions.
An example: If I have an image of size $H\times W\times C$ ($C$ channels) and I apply a single layer of an MLP to it, then I will transform the input into a vector $x$ of size $V=HWC$. I will then apply a matrix $W\in \mathbb{R}^{U\times V}$ to it, thereby creating $U$ hidden activations. I could interpret the input vector $x$ as an image with only one pixel but $V$ "channels": $x\in\mathbb{R}^{1\times 1\times V}$ and the weight matrix as a Kernel with only one pixel area but $U$ filters taking in $V$ channels each: $W\in\mathbb{R}^{U\times 1\times 1\times V}$. I can then call some Conv2D function that carries out the operation and computes exactly the same as the MLP.
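A tiny pure-Python sketch (mine, not from the post) makes both points checkable: on a $1\times 1\times C$ input the dense layer and the $1\times 1$ convolution are literally the same channel-wise dot products, and for a larger image the shared $1\times 1$ kernel has vastly fewer parameters than a fully connected layer on the flattened input:

```python
import random

random.seed(0)
C, U = 5, 3   # C input channels, U output filters / hidden units

x = [random.gauss(0, 1) for _ in range(C)]                      # 1x1xC input
W = [[random.gauss(0, 1) for _ in range(C)] for _ in range(U)]  # U x (1x1xC) kernel

# Dense (MLP) layer on the flattened input: out_f = sum_c W[f][c] * x[c]
dense_out = [sum(W[f][c] * x[c] for c in range(C)) for f in range(U)]

# 1x1 convolution over the 1x1 "image": the single pixel gets the same
# channel-wise dot product, so the two outputs coincide exactly
conv_out = [sum(W[f][c] * x[c] for c in range(C)) for f in range(U)]

# Parameter counts for a 32x32x3 image with 16 output maps/units:
conv1x1_params = 16 * 3                        # one 1x1xC filter per map, shared
dense_params = (32 * 32 * 16) * (32 * 32 * 3)  # one weight per input-output pair
```

The weight-sharing is what makes the convolution the more constrained (fewer degrees of freedom) model.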
Edit End
If yes, why people do not use a big enough MLP for everything, that let the computer to learn to use the convolution by self?
That is a nice idea (and probably worth doing research on) but it's simply not practical:
The MLP has too many degrees of freedom; it is likely to overfit.
In addition to learning the weights, you would have to learn their dependency structure.
As most Deep Learning research is closely related to NLP/speech processing/computer vision, people are eager to solve their problems and maybe less eager to investigate how a function space more general than a CNN could constrain itself to that particular function space. Though imho it's certainly interesting to think about that.
|
42,676
|
Puzzling predicted values in generalized multilevel model
|
The answer might at least partly be hidden in your answer:
The predicted random effects are all non-negative. Adding them to the linear predictor of the fixed effects will push up the average prediction to the desired level.
So what could be the reason for the non-centered distribution of the eBLUPs? I think it is hidden in the extremely unbalanced data situation: 604 of the 1000 ids provide just one single measurement, which can easily lead to numeric conflicts between random and fixed effects.
In such situations, I often run a normal (linear) mixed model as a sensitivity analysis. If we do this with your data, we get:
mod_2 <- lmer(y ~ x + (1 | id), dat)
summary(mod_2)
# Output
Random effects:
Groups Name Variance Std.Dev.
id (Intercept) 0.06961 0.2638
Residual 0.10825 0.3290
Number of obs: 1491, groups: id, 1000
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.228315 0.022716 10.051
xt 0.007224 0.026922 0.268
Correlation of Fixed Effects:
(Intr)
xt -0.844
The fixed effects now look as expected from the descriptive analysis and very different from what the GLMM (glmer) found. Now, the eBLUPs are even centered (though of course still far from normally distributed):
summary(unlist(ranef(mod_2, drop = TRUE)))
# Output
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.15513 -0.12844 -0.09218 0.00000 0.14878 0.50347
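A side note (my own calculation, not part of the original answer): the variance components reported by lmer above imply an intraclass correlation of about 0.39, in the same ballpark as the exchangeable working correlation (≈0.31) that the GEE fit reports:

```python
# Intraclass correlation implied by the lmer variance components above
var_id = 0.06961     # between-id (random intercept) variance
var_resid = 0.10825  # residual variance

icc = var_id / (var_id + var_resid)   # about 0.39
```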
Depending on your exact research questions and the goals of your analysis, you could try generalized estimating equations (GEE); in R:
library(gee)
fit_gee <- gee(y ~ x,
id = id,
data = dat %>% arrange(id),
family = binomial,
corstr = "exchangeable")
summary(fit_gee)
# Output
Coefficients:
Estimate Naive S.E. Naive z Robust S.E. Robust z
(Intercept) -1.22561668 0.1257257 -9.7483411 0.1235148 -9.9228358
xt 0.04325299 0.1484306 0.2914022 0.1479156 0.2924167
Estimated Scale Parameter: 0.9859774
Number of Iterations: 3
Working Correlation
[,1] [,2] [,3]
[1,] 1.0000000 0.3133096 0.3133096
[2,] 0.3133096 1.0000000 0.3133096
[3,] 0.3133096 0.3133096 1.0000000
# Distribution of predictions
summary(fitted(fit_gee))
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.2269 0.2269 0.2346 0.2324 0.2346 0.2346
|
42,677
|
Which method is correct? (generalized additive model, mgcv)
|
I would first try an ANOVA and look at the F-test. If it rejects, there is evidence of variation between treatments. After that, since your outcome is count data, I would fit a Poisson regression. Unlike GAMs, where it can be hard to test the importance of effects, Poisson regression gives you the usual p-values, which makes it simple to see which variables are driving the differences in your data.
https://stats.idre.ucla.edu/r/dae/poisson-regression/
If there is a significant difference between your mean and variance (Poisson assumes they are equal), replace the Poisson regression with a Poisson-Gamma (negative binomial) model.
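The mean-variance check in the last sentence can be done before fitting anything; here is a small Python sketch on made-up counts (illustrative only):

```python
counts = [0, 1, 1, 2, 3, 3, 4, 8, 12, 20]   # hypothetical per-plot counts

n = len(counts)
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)

dispersion = var / mean   # about 1 under Poisson; much larger signals overdispersion
```

Here the ratio is well above 1, which would point toward a negative binomial (Poisson-Gamma) model.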
|
42,678
|
Which method is correct? (generalized additive model, mgcv)
|
I'd guess you want the by parameter for s. The resulting set of plots then has one for each level of the by factor. In this example from ?gam.models, there are three levels of fac, so there are three plots for x2 and one for x0.
dat <- gamSim(4)
b <- gam(y ~ fac + s(x2, by=fac) + s(x0), data=dat)
plot(b, pages=1)
|
42,679
|
Which method is correct? (generalized additive model, mgcv)
|
You have some problems with your data; you can't have a random effect of Plant as a factor and get the plot you showed in the link in comment to @Aaron's Answer. Are you sure that Plant is a factor with more than 1 level? If not, you need to code your data correctly to get a random intercept per plant. Also, can you include both RIL and Plant level effects? Once you've accounted for the separate Plant effects (intercepts), won't that also logically account for the genetic line effects also?
Second, if you are using bs = 'fs', you need to pass in a continuous variable and a factor; so far you only pass in DateNum. At the moment you have
s(DateNum, k = 9, bs = "fs")
and I think you wanted
s(DateNum, Trt, k = 9, bs = "fs")
This model is similar to the one proposed by @Aaron, but is somewhat different in detail. The full model might be
gam(Total ~ Trt * RIL + s(DateNum, Trt, k = 9, bs = "fs") + s(Plant, bs = "re"),
data=ce1230)
but I suspect even that is wrong? (For fs smooths you don't need the parametric Trt.) So I think,
gam(Total ~ RIL + Trt:RIL + s(DateNum, Trt, k = 9, bs = "fs") +
s(Plant, bs = "re"), data=ce1230)
Where the main effect of Trt is actually contained in the fs smooth, so we don't specify it parametrically.
The main difference between this model and @Aaron's is that here, there is a single smoothness parameter for the smooths of DateNum by Trt, whereas in @Aaron's answer each of the smooths gets its own smoothness parameter. This boils down to a choice: do you expect similar wiggliness (the shapes of the smooths can still differ) for each smooth, or do you expect some of the four smooths to be a lot wigglier than others?
What you seem to want is interactions between RIL and Trt plus separate smooths for each Trt. But do you want separate smooths for each RIL and Trt combination? That would require a separate variable formed by the combinations of RIL and Trt available in your data:
ce1230 <- transform(ce1230,
RILTrt = interaction(RIL, Trt, drop = TRUE))
And then you could fit
gam(Total ~ s(DateNum, RILTrt, k = 9, bs = "fs") +
s(Plant, bs = "re"), data=ce1230)
or
gam(Total ~ s(DateNum, k = 9, by = RILTrt) +
s(Plant, bs = "re"), data=ce1230)
depending on whether you want similar wiggliness (use bs = 'fs') or different wigglinesses (use by) for each estimated smooth of DateNum.
The model also needs to respect the non-negative nature of the response; you have counts and you can't have negative counts of anything. Using family = poisson or family = nb would be reasonable starting points.
|
Which method is correct? (generalized additive model, mgcv)
|
You have some problems with your data; you can't have a random effect of Plant as a factor and get the plot you showed in the link in comment to @Aaron's Answer. Are you sure that Plant is a factor wi
|
Which method is correct? (generalized additive model, mgcv)
You have some problems with your data; you can't have a random effect of Plant as a factor and get the plot you showed in the link in comment to @Aaron's Answer. Are you sure that Plant is a factor with more than 1 level? If not, you need to code your data correctly to get a random intercept per plant. Also, can you include both RIL and Plant level effects? Once you've accounted for the separate Plant effects (intercepts), won't that also logically account for the genetic line effects also?
Second, if you are using bs = 'fs', you need to pass in a continuous variable and a factor; so far you only pass in DateNum. At the moment you have
s(DateNum, k = 9, bs = "fs")
and I think you wanted
s(DateNum, Trt, k = 9, bs = "fs")
This model is similar to the one proposed by @Aaron, but is somewhat different in detail. The full model might be
gam(Total ~ Trt * RIL + s(DateNum, Trt, k = 9, bs = "fs") + s(Plant, bs = "re"),
data=ce1230)
but I suspect even that is not quite right (for fs smooths you don't need the parametric Trt), so I think
gam(Total ~ RIL + Trt:RIL + s(DateNum, Trt, k = 9, bs = "fs") +
s(Plant, bs = "re"), data=ce1230)
Where the main effect of Trt is actually contained in the fs smooth, so we don't specify it parametrically.
The main difference between this model and @Aaron's is that here there is a single smoothness parameter for the smooths of DateNum by Trt, whereas in @Aaron's answer each of the smooths gets its own smoothness parameter. This boils down to a choice: do you expect similar wiggliness (the shapes of the smooths can still differ) for each smooth, or do you expect some of the four smooths to be a lot wigglier than others?
What you seem to want is interactions between RIL and Trt plus separate smooths for each Trt. But do you want separate smooths for each RIL and Trt combination? That would require a separate variable formed by the combinations of RIL and Trt available in your data:
ce1230 <- transform(ce1230,
RILTrt = interaction(RIL, Trt, drop = TRUE))
And then you could fit
gam(Total ~ s(DateNum, RILTrt, k = 9, bs = "fs") +
s(Plant, bs = "re"), data=ce1230)
or
gam(Total ~ s(DateNum, k = 9, by = RILTrt) +
s(Plant, bs = "re"), data=ce1230)
depending on whether you wanted similar wiggliness (use bs = 'fs') or different wigglinesses (use by) for each estimated smooth of DateNum.
The model also needs to respect the non-negative nature of the response; you have counts and you can't have negative counts of anything. Using family = poisson or family = nb would be reasonable starting points.
|
Which method is correct? (generalized additive model, mgcv)
You have some problems with your data; you can't have a random effect of Plant as a factor and get the plot you showed in the link in comment to @Aaron's Answer. Are you sure that Plant is a factor wi
|
42,680
|
Principal component analysis on time series : meaning?
|
If I understand correctly, your question is about the reason to use MSSA for a system of time series, if one can apply PCA (or SVD) to this system.
The general answer is that the result of PCA is mostly an unstructured approximation (I mean from the viewpoint of the temporal structure), while SSA takes into consideration the temporal structure. Note that SSA is related to so-called SLRA (structured low-rank approximation).
The other answer (although there is little point in it) is that if you have m time series of length N, m < N, then PCA provides only m components. For m=1, it is senseless to apply PCA; for m=2, two components can be insufficient even to try to decompose into trend, oscillations and noise.
A more clever example is related to the decomposition into signal and noise when the signal is described by a few SSA components (this holds if the time series is well approximated by a finite sum of products of polynomials, exponentials and sinusoids).
For example, let time series from the system consist of noisy sinusoids with some small Signal-to-Noise Ratio (SNR). PCA does not help to extract the signal for any time series length N.
SSA applies SVD (PCA without centering/standardizing) to the trajectory matrix, which consists of lagged subseries of length L. For sufficiently large L and N, SSA is able to approximately extract the signal; for any SNR!
The same effect occurs when time series consist of temporal components like trend and oscillations. Direct approximation by PCA does not help to extract one of the components; SSA is able to do it due to the bi-orthogonality of the SVD. See the SSA literature for a description of the 'separability' notion.
Thus, for time series, PCA usually does not work. The other important question is whether it is better to apply MSSA to the system of time series or to apply SSA to each time series separately.
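To make the trajectory-matrix point concrete, here is a minimal NumPy sketch (an illustration of the rank argument only, not an SSA or MSSA implementation): a noiseless sinusoid gives a rank-2 trajectory matrix, which is why two SVD components suffice to recover it.

```python
import numpy as np

def trajectory_matrix(x, L):
    """Stack the K = N - L + 1 lagged subseries of length L as columns."""
    K = len(x) - L + 1
    return np.column_stack([x[i:i + L] for i in range(K)])

N, L = 200, 50
signal = np.sin(2 * np.pi * np.arange(N) / 20)   # pure sinusoid, no noise

X = trajectory_matrix(signal, L)
s = np.linalg.svd(X, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))              # numerical rank
print(X.shape, rank)   # the sinusoid's trajectory matrix has rank 2
```

Plain PCA on the raw m series never sees this structure; the lagged embedding is what exposes it.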
|
Principal component analysis on time series : meaning?
|
If I understand correctly, your question is about the reason to use MSSA for a system of time series, if one can apply PCA (or SVD) to this system.
The general answer is that the result of PCA is most
|
Principal component analysis on time series : meaning?
If I understand correctly, your question is about the reason to use MSSA for a system of time series, if one can apply PCA (or SVD) to this system.
The general answer is that the result of PCA is mostly an unstructured approximation (I mean from the viewpoint of the temporal structure), while SSA takes into consideration the temporal structure. Note that SSA is related to so-called SLRA (structured low-rank approximation).
The other answer (although there is little point in it) is that if you have m time series of length N, m < N, then PCA provides only m components. For m=1, it is senseless to apply PCA; for m=2, two components can be insufficient even to try to decompose into trend, oscillations and noise.
A more clever example is related to the decomposition into signal and noise when the signal is described by a few SSA components (this holds if the time series is well approximated by a finite sum of products of polynomials, exponentials and sinusoids).
For example, let time series from the system consist of noisy sinusoids with some small Signal-to-Noise Ratio (SNR). PCA does not help to extract the signal for any time series length N.
SSA applies SVD (PCA without centering/standardizing) to the trajectory matrix, which consists of lagged subseries of length L. For sufficiently large L and N, SSA is able to approximately extract the signal; for any SNR!
The same effect occurs when time series consist of temporal components like trend and oscillations. Direct approximation by PCA does not help to extract one of the components; SSA is able to do it due to the bi-orthogonality of the SVD. See the SSA literature for a description of the 'separability' notion.
Thus, for time series, PCA usually does not work. The other important question is whether it is better to apply MSSA to the system of time series or to apply SSA to each time series separately.
|
Principal component analysis on time series : meaning?
If I understand correctly, your question is about the reason to use MSSA for a system of time series, if one can apply PCA (or SVD) to this system.
The general answer is that the result of PCA is most
|
42,681
|
Confidence bounds for an ECDF
|
In Matlab's console type:
edit ecdf
It opens the source code in the editor.
Go to line 194:
if nargout>2 || (nargout==0 && isequal(bounds,'on'))
This is the start of the code block that calculates the lower and upper (confidence) bounds: [Flo, Fup]. The code block is 30 lines long and pretty straightforward. Posted below for your convenience:
if nargout>2 || (nargout==0 && isequal(bounds,'on'))
% Get standard error of requested function
if cdf_sf % 'cdf' or 'survivor'
se = NaN(size(D));
if N(end)==D(end)
t = 1:length(N)-1;
else
t = 1:length(N);
end
se(t) = S(t) .* sqrt(cumsum(D(t) ./ (N(t) .* (N(t)-D(t))))); % <--- line 203
else % 'cumhazard'
se = sqrt(cumsum(D ./ (N .* N)));
end
% Get confidence limits
zalpha = -norminv(alpha/2);
halfwidth = zalpha*se;
Flo = max(0, Func - halfwidth);
Flo(isnan(halfwidth)) = NaN; % max drops NaNs, put them back
if cdf_sf % 'cdf' or 'survivor'
Fup = min(1, Func + halfwidth);
Fup(isnan(halfwidth)) = NaN; % max drops NaNs
else % 'cumhazard'
Fup = Func + halfwidth; % no restriction on upper limit
end
Flo = [NaN; Flo];
Fup = [NaN; Fup];
else
Flo = [];
Fup = [];
end
The square root of Greenwood's formula, i.e.
$$ S(t) \sqrt{\sum_{t_i < T} \frac{d_i}{r_i(r_i - d_i)}} \,, $$
is implemented in line 203 as:
se(t) = S(t) .* sqrt(cumsum(D(t) ./ (N(t) .* (N(t)-D(t)))));
Can you take it from here? Let me know.
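If MATLAB isn't at hand, the same Greenwood computation is a few lines of NumPy. This is a hedged transcription of the 'survivor' branch above (d and r are assumed to be pre-tabulated event and at-risk counts; no censoring bookkeeping and no N(end)==D(end) guard, so treat it as an illustration of line 203, not a drop-in replacement for ecdf):

```python
import numpy as np
from statistics import NormalDist

def greenwood_band(d, r, alpha=0.05):
    """Kaplan-Meier survivor estimate with Greenwood pointwise bounds.

    d: events at each distinct event time; r: numbers at risk.
    Mirrors the MATLAB block: se = S .* sqrt(cumsum(D ./ (N .* (N - D)))).
    """
    d = np.asarray(d, dtype=float)
    r = np.asarray(r, dtype=float)
    S = np.cumprod(1.0 - d / r)                     # survivor function S(t)
    se = S * np.sqrt(np.cumsum(d / (r * (r - d))))  # Greenwood's formula
    z = -NormalDist().inv_cdf(alpha / 2)            # zalpha = -norminv(alpha/2)
    lo = np.maximum(0.0, S - z * se)                # Flo, clipped at 0
    up = np.minimum(1.0, S + z * se)                # Fup, clipped at 1
    return S, lo, up

S, lo, up = greenwood_band(d=[1, 1, 1], r=[10, 9, 8])
print(S, lo, up)
```

Note the clipping to [0, 1], matching the max/min calls in the MATLAB source.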
|
Confidence bounds for an ECDF
|
In Matlab's console type:
edit ecdf
It opens the source code in the editor.
Go to line 194:
if nargout>2 || (nargout==0 && isequal(bounds,'on'))
This is the start of the code block that calculates t
|
Confidence bounds for an ECDF
In Matlab's console type:
edit ecdf
It opens the source code in the editor.
Go to line 194:
if nargout>2 || (nargout==0 && isequal(bounds,'on'))
This is the start of the code block that calculates the lower and upper (confidence) bounds: [Flo, Fup]. The code block is 30 lines long and pretty straightforward. Posted below for your convenience:
if nargout>2 || (nargout==0 && isequal(bounds,'on'))
% Get standard error of requested function
if cdf_sf % 'cdf' or 'survivor'
se = NaN(size(D));
if N(end)==D(end)
t = 1:length(N)-1;
else
t = 1:length(N);
end
se(t) = S(t) .* sqrt(cumsum(D(t) ./ (N(t) .* (N(t)-D(t))))); % <--- line 203
else % 'cumhazard'
se = sqrt(cumsum(D ./ (N .* N)));
end
% Get confidence limits
zalpha = -norminv(alpha/2);
halfwidth = zalpha*se;
Flo = max(0, Func - halfwidth);
Flo(isnan(halfwidth)) = NaN; % max drops NaNs, put them back
if cdf_sf % 'cdf' or 'survivor'
Fup = min(1, Func + halfwidth);
Fup(isnan(halfwidth)) = NaN; % max drops NaNs
else % 'cumhazard'
Fup = Func + halfwidth; % no restriction on upper limit
end
Flo = [NaN; Flo];
Fup = [NaN; Fup];
else
Flo = [];
Fup = [];
end
The square root of Greenwood's formula, i.e.
$$ S(t) \sqrt{\sum_{t_i < T} \frac{d_i}{r_i(r_i - d_i)}} \,, $$
is implemented in line 203 as:
se(t) = S(t) .* sqrt(cumsum(D(t) ./ (N(t) .* (N(t)-D(t)))));
Can you take it from here? Let me know.
|
Confidence bounds for an ECDF
In Matlab's console type:
edit ecdf
It opens the source code in the editor.
Go to line 194:
if nargout>2 || (nargout==0 && isequal(bounds,'on'))
This is the start of the code block that calculates t
|
42,682
|
Correct feature aggregation for this tricky buying problem
|
0) and 2) Moving Average Models. Suppose we are given nothing but the following time series data
time y
1: 0 -12.070657
2: 1 4.658008
3: 2 14.604409
4: 3 -17.835538
5: 4 11.751944
...
which looks like this:
for the purpose of this toy example it is essentially sin(time) + disturbance (you can find the R code below). What do I mean by this weird moving average, the simplest time series model? I mean adding the past values of y as new columns. Let's take $k=3$ for example; then at every time $t$ we add $y_{t-1}, y_{t-2}, y_{t-3}$ as new columns:
time y y_past_1 y_past_2 y_past_3
1: 0 -12.070657 NA NA NA
2: 1 4.658008 -12.070657 NA NA
3: 2 14.604409 4.658008 -12.070657 NA
4: 3 -17.835538 14.604409 4.658008 -12.070657
5: 4 11.751944 -17.835538 14.604409 4.658008
6: 5 14.331069 11.751944 -17.835538 14.604409
Consider $t=3$. For this, $t-1 = 2$ and the value of y_past_1 (the value of y at the point in time just before the current $t=3$) is $y_{t-1} = y_2 = 14.604409$. Analogously, $t-2=1$ and the value of y_past_2 (the value of y two timesteps before the current $t=3$) is $y_{t-2} = y_1 = 4.658008$.
Now what people do as a first shot is to compute a (linear) model with $y$ as a target variable and y_past_1, ..., y_past_k as input features. These are also called 'lagged' variables because they are the same as the target variable, just with a little lag in the time component.
Now let us compute a linear model. What I get is essentially
y ~ 0.4320*y_past_1 + 0.2457*y_past_2 + y_past_3*0.2361 + 0.3070
Huh, how can it be that we computed a linear model but the outcome is not linear? This happens because the function time -> y_time is not linear: the linear model is applied to the (nonlinear) value triples (y_past_1, y_past_2, y_past_3) but nevertheless adds these up linearly.
That is what I mean by simple time series model: Take the past of some variable as input for the prediction of the new state.
NB: We did not discuss the role of $K$. This parameter works as a smoothing factor: in time series terms it determines how much the prediction acts as a so-called high pass filter. With K small, sudden movements (i.e. high frequencies) are not filtered out; with K big, the prediction follows the sin function more smoothly and is not 'fooled' so much by sudden movements of the target variable:
K=1:
K=10:
)
1) I mean the following. Let us say that we consider the product PIZZA. We have two users, A and B. During the last year we have sent 20 advertising emails for pizza to each of these users. User A responded 15 times by buying a pizza and user B did not respond at all. Now let us say that today we see user A and user B again and we have the trigger to send them an ad email. We iterate through all our products and we arrive at the product pizza. Should we send A an ad for pizza? How about B? [Of course we should send A the email because A had a high response rate, but we should probably not send B a pizza ad because apparently he/she simply does not like our pizza advertisements, or pizza itself, or has some other reason not to respond.] In that way, for every point in time $t$ we should include the past $t-1, ..., t-K$ for each request and each user. That means that we do not have a single 'past' per user for training; instead, every request in the training set has its own unique 'past'... as in the example above: for every $t$, y_past_1 has a unique value, namely $y_{t-1}$. However, in your example, we do not simply take $y_{t-1}, ..., y_{t-K}$ into account but some function of them, like so:
For every request given by user $u$ at week $t$ we iterate over every product $p$, and for each product $p$ we go back $K=52$ weeks and check how often we sent the user $u$ an ad email for product $p$ (the count sent), count how often $u$ responded positively by buying the advertised product within one week after receiving the email (the count positiveResponses), and then compute affinity = positiveResponses/sent and include that as a column for the current request. In that way the model should come up with a rule like 'if affinity for this product is high then I should send an ad for this product'.
In that sense: you do not use a column for each week in the past but for each product you go back 52 weeks.
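Sketching that affinity feature in pandas (the toy log and all column names are invented for illustration):

```python
import pandas as pd

# Hypothetical email log for the pizza example above.
log = pd.DataFrame({
    "user":      ["A"] * 20 + ["B"] * 20,
    "product":   ["pizza"] * 40,
    "week":      list(range(1, 21)) * 2,
    "responded": [1] * 15 + [0] * 5 + [0] * 20,
})

def affinity(log, user, product, week, K=52):
    """Response rate of `user` to `product` ads in the K weeks before `week`."""
    past = log[(log["user"] == user) &
               (log["product"] == product) &
               (log["week"] < week) &
               (log["week"] >= week - K)]
    sent = len(past)
    return past["responded"].sum() / sent if sent else 0.0

print(affinity(log, "A", "pizza", week=53))  # 0.75
print(affinity(log, "B", "pizza", week=53))  # 0.0
```

Each request gets its own affinity value because the lookback window moves with the request's week.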
3) You seemed to be worried that the model could not figure out a certain rule like 'only if the values of column X and of column Y are high then predict TRUE, else predict FALSE'. However, whichever model you choose from the "first league of complexity" (i.e. anything other than linear models: neural nets, tree boosting methods like random forest and gradient boosting, geometric methods like SVM, ...), these models can figure out arbitrarily complicated regions provided the data tells them to do so (provably!!!). For example: for NNs with just ONE HIDDEN LAYER(!) this [I believe] is the celebrated Stone Weierstrass theorem (https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem; NNs with just one hidden layer form an algebra).
Example: SVM. Navigate your web browser to https://www.csie.ntu.edu.tw/~cjlin/libsvm/. Scroll down to the java applet, place two different sets of colored points, play around with the hyperparameters C and g a little bit, and you will see results like this:
That means that even (much more complicated!) rules like the ones you formulated above will eventually be captured by the model. That's the reason I would not worry about that too much.
EDIT: R code:
library(data.table)
set.seed(1234)
dt = data.table(time = 0:250)
dt = dt[, y := sin(time/100*2*pi)*30 + rnorm(dt[, .N], mean=0, sd = 10)]
plot(dt$time, dt$y, type="l")
lag = function(x, k, fillUp = NA) {
if (length(x) > k) {
fillUpVector = x[1:k]
fillUpVector[1:k] = fillUp
return(c(fillUpVector, head(x, length(x)-k)))
} else {
if (length(x) > 0) {
x[1:length(x)] = fillUp
return(x)
} else {
return(x)
}
}
}
K = 3
for (k in 1:K) {
eval(parse(text=paste0("dt = dt[, y_past_", k, " := lag(y, k)]")))
}
train = copy(dt)
train = train[K:dt[, .N]]
train = train[, time := NULL]
model = lm(y ~ ., data = train)
pred = predict.lm(object = model, newdata = train, se.fit = F)
train = train[, PREDICTION := pred]
plot(dt$time, dt$y, type="l")
train = train[, time := K:dt[, .N]]
lines(train$time, train$PREDICTION, col="red", lwd=2)
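The same construction without data.table, as a rough pandas/NumPy sketch (same idea and column names as the R code above; the random draws differ, so the fitted coefficients will too):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1234)
t = np.arange(251)
y = np.sin(t / 100 * 2 * np.pi) * 30 + rng.normal(0, 10, size=t.size)
dt = pd.DataFrame({"time": t, "y": y})

K = 3
for k in range(1, K + 1):
    dt[f"y_past_{k}"] = dt["y"].shift(k)   # lagged copies of y

train = dt.dropna()                        # drop the first K rows (NAs)
X = np.column_stack(
    [np.ones(len(train))] + [train[f"y_past_{k}"] for k in range(1, K + 1)]
)
coef, *_ = np.linalg.lstsq(X, train["y"].to_numpy(), rcond=None)
pred = X @ coef                            # in-sample predictions
print(coef)                                # intercept plus the three lag weights
```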
Regards,
FW
|
Correct feature aggregation for this tricky buying problem
|
0) and 2) Moving Average Models. Suppose we are given nothing else than the following time series data
time y
1: 0 -12.070657
2: 1 4.658008
3: 2 14.604409
4: 3 -17
|
Correct feature aggregation for this tricky buying problem
0) and 2) Moving Average Models. Suppose we are given nothing but the following time series data
time y
1: 0 -12.070657
2: 1 4.658008
3: 2 14.604409
4: 3 -17.835538
5: 4 11.751944
...
which looks like this:
for the purpose of this toy example it is essentially sin(time) + disturbance (you can find the R code below). What do I mean by this weird moving average, the simplest time series model? I mean adding the past values of y as new columns. Let's take $k=3$ for example; then at every time $t$ we add $y_{t-1}, y_{t-2}, y_{t-3}$ as new columns:
time y y_past_1 y_past_2 y_past_3
1: 0 -12.070657 NA NA NA
2: 1 4.658008 -12.070657 NA NA
3: 2 14.604409 4.658008 -12.070657 NA
4: 3 -17.835538 14.604409 4.658008 -12.070657
5: 4 11.751944 -17.835538 14.604409 4.658008
6: 5 14.331069 11.751944 -17.835538 14.604409
Consider $t=3$. For this, $t-1 = 2$ and the value of y_past_1 (the value of y at the point in time just before the current $t=3$) is $y_{t-1} = y_2 = 14.604409$. Analogously, $t-2=1$ and the value of y_past_2 (the value of y two timesteps before the current $t=3$) is $y_{t-2} = y_1 = 4.658008$.
Now what people do as a first shot is to compute a (linear) model with $y$ as a target variable and y_past_1, ..., y_past_k as input features. These are also called 'lagged' variables because they are the same as the target variable, just with a little lag in the time component.
Now let us compute a linear model. What I get is essentially
y ~ 0.4320*y_past_1 + 0.2457*y_past_2 + y_past_3*0.2361 + 0.3070
Huh, how can it be that we computed a linear model but the outcome is not linear? This happens because the function time -> y_time is not linear: the linear model is applied to the (nonlinear) value triples (y_past_1, y_past_2, y_past_3) but nevertheless adds these up linearly.
That is what I mean by simple time series model: Take the past of some variable as input for the prediction of the new state.
NB: We did not discuss the role of $K$. This parameter works as a smoothing factor: in time series terms it determines how much the prediction acts as a so-called high pass filter. With K small, sudden movements (i.e. high frequencies) are not filtered out; with K big, the prediction follows the sin function more smoothly and is not 'fooled' so much by sudden movements of the target variable:
K=1:
K=10:
)
1) I mean the following. Let us say that we consider the product PIZZA. We have two users, A and B. During the last year we have sent 20 advertising emails for pizza to each of these users. User A responded 15 times by buying a pizza and user B did not respond at all. Now let us say that today we see user A and user B again and we have the trigger to send them an ad email. We iterate through all our products and we arrive at the product pizza. Should we send A an ad for pizza? How about B? [Of course we should send A the email because A had a high response rate, but we should probably not send B a pizza ad because apparently he/she simply does not like our pizza advertisements, or pizza itself, or has some other reason not to respond.] In that way, for every point in time $t$ we should include the past $t-1, ..., t-K$ for each request and each user. That means that we do not have a single 'past' per user for training; instead, every request in the training set has its own unique 'past'... as in the example above: for every $t$, y_past_1 has a unique value, namely $y_{t-1}$. However, in your example, we do not simply take $y_{t-1}, ..., y_{t-K}$ into account but some function of them, like so:
For every request given by user $u$ at week $t$ we iterate over every product $p$, and for each product $p$ we go back $K=52$ weeks and check how often we sent the user $u$ an ad email for product $p$ (the count sent), count how often $u$ responded positively by buying the advertised product within one week after receiving the email (the count positiveResponses), and then compute affinity = positiveResponses/sent and include that as a column for the current request. In that way the model should come up with a rule like 'if affinity for this product is high then I should send an ad for this product'.
In that sense: you do not use a column for each week in the past but for each product you go back 52 weeks.
3) You seemed to be worried that the model could not figure out a certain rule like 'only if the values of column X and of column Y are high then predict TRUE, else predict FALSE'. However, whichever model you choose from the "first league of complexity" (i.e. anything other than linear models: neural nets, tree boosting methods like random forest and gradient boosting, geometric methods like SVM, ...), these models can figure out arbitrarily complicated regions provided the data tells them to do so (provably!!!). For example: for NNs with just ONE HIDDEN LAYER(!) this [I believe] is the celebrated Stone Weierstrass theorem (https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem; NNs with just one hidden layer form an algebra).
Example: SVM. Navigate your web browser to https://www.csie.ntu.edu.tw/~cjlin/libsvm/. Scroll down to the java applet, place two different sets of colored points, play around with the hyperparameters C and g a little bit, and you will see results like this:
That means that even (much more complicated!) rules like the ones you formulated above will eventually be captured by the model. That's the reason I would not worry about that too much.
EDIT: R code:
library(data.table)
set.seed(1234)
dt = data.table(time = 0:250)
dt = dt[, y := sin(time/100*2*pi)*30 + rnorm(dt[, .N], mean=0, sd = 10)]
plot(dt$time, dt$y, type="l")
lag = function(x, k, fillUp = NA) {
if (length(x) > k) {
fillUpVector = x[1:k]
fillUpVector[1:k] = fillUp
return(c(fillUpVector, head(x, length(x)-k)))
} else {
if (length(x) > 0) {
x[1:length(x)] = fillUp
return(x)
} else {
return(x)
}
}
}
K = 3
for (k in 1:K) {
eval(parse(text=paste0("dt = dt[, y_past_", k, " := lag(y, k)]")))
}
train = copy(dt)
train = train[K:dt[, .N]]
train = train[, time := NULL]
model = lm(y ~ ., data = train)
pred = predict.lm(object = model, newdata = train, se.fit = F)
train = train[, PREDICTION := pred]
plot(dt$time, dt$y, type="l")
train = train[, time := K:dt[, .N]]
lines(train$time, train$PREDICTION, col="red", lwd=2)
Regards,
FW
|
Correct feature aggregation for this tricky buying problem
0) and 2) Moving Average Models. Suppose we are given nothing else than the following time series data
time y
1: 0 -12.070657
2: 1 4.658008
3: 2 14.604409
4: 3 -17
|
42,683
|
Train/Test Splitting for Time Series
|
Product demand data usually has a yearly seasonality.
Training on the first year is not sufficient, as your model won't be able to capture any yearly seasonality or any long term trends. Most algorithms require at least 2 years of data for this reason (more would be better - but that's not always available for retail demand forecasting data).
At the same time you want to make sure that all of the seasonalities are present in your test set as well - so the optimal split in your case is 2 years training and 1 year testing.
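That split is purely chronological, never random; a minimal pandas sketch (dates and column names invented for illustration):

```python
import pandas as pd

# Three years of weekly demand; 'units' is a placeholder series.
weeks = pd.date_range("2017-01-01", "2019-12-31", freq="W")
demand = pd.DataFrame({"week": weeks, "units": range(len(weeks))})

cutoff = pd.Timestamp("2019-01-01")
train = demand[demand["week"] < cutoff]   # first two years
test = demand[demand["week"] >= cutoff]   # final year

print(len(train), len(test))
```

Every training observation precedes every test observation, so both the training set and the test set contain at least one full seasonal cycle.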
|
Train/Test Splitting for Time Series
|
Product demand data usually has a yearly seasonality.
Training on the first year is not sufficient, as your model won't be able to capture any yearly seasonality or any long term trends. Most algorit
|
Train/Test Splitting for Time Series
Product demand data usually has a yearly seasonality.
Training on the first year is not sufficient, as your model won't be able to capture any yearly seasonality or any long term trends. Most algorithms require at least 2 years of data for this reason (more would be better - but that's not always available for retail demand forecasting data).
At the same time you want to make sure that all of the seasonalities are present in your test set as well - so the optimal split in your case is 2 years training and 1 year testing.
|
Train/Test Splitting for Time Series
Product demand data usually has a yearly seasonality.
Training on the first year is not sufficient, as your model won't be able to capture any yearly seasonality or any long term trends. Most algorit
|
42,684
|
Train/Test Splitting for Time Series
|
Do you believe there is significant year to year variation?
If yes, it probably doesn't make sense to fit the model with only 1 year of data & a source of variation removed.
If no, which is generally unlikely due to seasonal trends, you might try it with 1 year of data. Using this small of a train set is a bit uncommon and may not be a reliable evaluation of parsimony.
In the end, if building & evaluating the model is computationally cheap, you can experiment with your idea and get the same validation results (plus an extra year) by starting at year one. If validation error is large in the first year relative to the second, it suggests that either the model does not account for a large source of variation (perhaps your yearly effect), or that the model is overfit. If the validation error is similar between years one and two, it suggests that there may not be large yearly variation and that the model is not overfit.
|
Train/Test Splitting for Time Series
|
Do you believe there is significant year to year variation?
If yes, it probably doesn't make sense to fit the model with only 1 year of data & a source of variation removed.
If no, which is generally
|
Train/Test Splitting for Time Series
Do you believe there is significant year to year variation?
If yes, it probably doesn't make sense to fit the model with only 1 year of data & a source of variation removed.
If no, which is generally unlikely due to seasonal trends, you might try it with 1 year of data. Using this small of a train set is a bit uncommon and may not be a reliable evaluation of parsimony.
In the end, if building & evaluating the model is computationally cheap, you can experiment with your idea and get the same validation results (plus an extra year) by starting at year one. If validation error is large in the first year relative to the second, it suggests that either the model does not account for a large source of variation (perhaps your yearly effect), or that the model is overfit. If the validation error is similar between years one and two, it suggests that there may not be large yearly variation and that the model is not overfit.
|
Train/Test Splitting for Time Series
Do you believe there is significant year to year variation?
If yes, it probably doesn't make sense to fit the model with only 1 year of data & a source of variation removed.
If no, which is generally
|
42,685
|
Train/Test Splitting for Time Series
|
Daily data is frequently heavily dependent on daily habits, i.e. it is more important to take into account deterministic structure while dealing with memory effects (same day last week, for example). Daily data is also dependent on holiday effects (lead, contemporaneous and lagged). In addition there are often monthly effects, level shift effects and trend effects. Often we have found that particular days of the month are important, and even week-of-the-month effects. We suggest a 3-4 year history to be able to tease out the regular while being robust to the irregular.
Forecast customer's spending is an interesting study that you might benefit from; see here for more discussions: https://stats.stackexchange.com/search?q=user%3A3382+daily+data.
In terms of splitting the data I would probably initially use an 80/20 split and measure accuracies from many origins, not just a sample of 1 origin, to ensure a comprehensive/objective estimate of model performance, as "one swallow does not a summer make".
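The many-origins idea (rolling-origin evaluation) can be sketched like this; `fit_forecast` is a placeholder for whatever model you refit at each origin, not a real library call:

```python
import numpy as np

def rolling_origin_mae(y, fit_forecast, min_train=100, horizon=1):
    """Mean absolute error measured from many forecast origins.

    fit_forecast(history) is a placeholder for refitting your model on
    `history` and returning its `horizon`-step-ahead forecast.
    """
    errors = []
    for origin in range(min_train, len(y) - horizon + 1):
        pred = fit_forecast(y[:origin])
        errors.append(abs(y[origin + horizon - 1] - pred))
    return float(np.mean(errors))

# Sanity check: a naive last-value forecaster on a series that rises by 1
# per step is always off by exactly 1.
y = np.arange(120, dtype=float)
mae = rolling_origin_mae(y, lambda history: history[-1])
print(mae)
```

Averaging over many origins keeps one lucky (or unlucky) holdout period from dominating the accuracy estimate.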
|
Train/Test Splitting for Time Series
|
Daily data is frequently heavily dependent on daily habits i.e. it is more important to take into account deterministic structure while dealing with memory effect (same day last week for example ) . D
|
Train/Test Splitting for Time Series
Daily data is frequently heavily dependent on daily habits i.e. it is more important to take into account deterministic structure while dealing with memory effect (same day last week for example ) . Daily data is also dependent on holiday effects (lead , contemporaneous and lags). In addition there are often monthly effects and level shift effects and trend effects. Often we have found that particular days of the month are important and even week-of-the month effects. We suggest a 3-4 year history to be able to tease out the regular while being robust to the irregular.
Forecast customer's spending is an interesting study that you might benefit from and here for more discussions https://stats.stackexchange.com/search?q=user%3A3382+daily+data.
In terms of splitting the data I wold probably initally use an 80/20 split and measure accuracies from many origins not just a sample of 1 origin to ensure a comprehensive/objective estimate of model performance as "one swallow does not a summer make "..
|
Train/Test Splitting for Time Series
Daily data is frequently heavily dependent on daily habits i.e. it is more important to take into account deterministic structure while dealing with memory effect (same day last week for example ) . D
|
42,686
|
Linear regression model is under-predicting
|
I think it can be one of two things (I would have to take a look at your data to say for sure):
either your data shows strong heteroskedasticity
or your data is strongly auto-correlated (a typical characteristic of time series)
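The autocorrelation possibility is easy to check on the regression residuals; here is a small NumPy sketch using the Durbin-Watson statistic (my addition, not part of the original answer):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: ~2 means no lag-1 autocorrelation;
    values well below 2 indicate positive autocorrelation."""
    r = np.asarray(residuals, dtype=float)
    return float(np.sum(np.diff(r) ** 2) / np.sum(r ** 2))

t = np.arange(500)
dw_auto = durbin_watson(np.sin(t / 50))                              # smooth, autocorrelated
dw_noise = durbin_watson(np.random.default_rng(0).normal(size=500))  # white noise
print(dw_auto, dw_noise)
```

A value far below 2 on your residuals would point at the autocorrelation explanation.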
|
Linear regression model is under-predicting
|
I think it can be one of two things (I would have to take a look at your data to say for sure):
either your data shows strong heteroskedasticity
or your data is strongly auto-correlated (a typical characte
|
Linear regression model is under-predicting
I think it can be one of two things (I would have to take a look at your data to say for sure):
either your data shows strong heteroskedasticity
or your data is strongly auto-correlated (a typical characteristic of time series)
|
Linear regression model is under-predicting
I think it can be one of two things (I would have to take a look at your data to say for sure):
either your data shows strong heteroskedasticity
or your data is strongly auto-correlated (a typical characte
|
42,687
|
Turkish speech recognition (speech->text) in Google Speech API? [closed]
|
What is used in production is often not disclosed. I'm not aware of Google disclosing how the automated speech recognition (ASR) system they currently use in production works. One way to approximate it would be to scan ICASSP/Interspeech/etc. proceedings for Google publications.
Anyway, putting Google aside: the question can be generalized as "How to perform ASR in languages with large or open-ended dictionaries?".
One way to do so is to use sub-word language modeling, e.g. from {1}:
Abstract:
In this study, some solutions for out of vocabulary (OOV) word problem of automatic speech recognition (ASR) systems which are developed for agglutinative languages like Turkish, are examined and an improvement to this problem is proposed. It has been shown that using sub-word language models outperforms word based models by reducing the OOV word ratio in languages with complex morphology.
or from {2}:
Abstract: Turkish speech recognition studies have been accelerated recently. With these efforts, not only available speech and text corpus which can be used in recognition experiments but also proposed new methods to improve accuracy has increased. Agglutinative nature of Turkish causes out of vocabulary (OOV) problem in Large Vocabulary Continuous Speech Recognition (LVCSR) tasks. In order to overcome OOV problem, usage of sub-word units has been proposed. In addition to LVCSR experiments, there have been some efforts to implement a speech recognizer in limited domains such as radiology. In this paper, we will present Turkish speech recognition software, which has been developed by utilizing recent studies. Both interface of software and recognition accuracies in two different test sets will be summarized. The performance of software has been evaluated using radiology and large vocabulary test sets. In order to solve OOV problem practically, we propose to adapt language models using frequent words or sentences. In recognition experiments, 90% and 44% word accuracies have been achieved in radiology and large vocabulary test sets respectively.
References:
{1} Akın, Ahmet Afşın, Cemil Demir, and Mehmet Uğur Doğan. "Improving sub-word language modeling for Turkish speech recognition." In Signal Processing and Communications Applications Conference (SIU), 2012 20th, pp. 1-4. IEEE, 2012. https://scholar.google.com/scholar?cluster=8818380122461969221&hl=en&as_sdt=0,5 ; http://ieeexplore.ieee.org/abstract/document/6204752/
{2} Buyuk, Osman, Ali Haznedaroglu, and Levent M. Arslan. "Turkish speech recognition software with adaptable language model." In Signal Processing and Communications Applications, 2007. SIU 2007. IEEE 15th, pp. 1-4. IEEE, 2007. https://scholar.google.com/scholar?cluster=17945910226656308345&hl=en&as_sdt=0,5 ; http://ieeexplore.ieee.org/abstract/document/4298561/
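To make the sub-word idea concrete, here is a minimal byte-pair-style merge learner (a sketch in Python, not the method of either cited paper; the toy "corpus" of Turkish-looking word forms and all frequencies are invented):

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Learn byte-pair-style merges from a tiny word-frequency corpus.

    `words` maps space-separated symbol sequences (one per word form) to counts.
    Returns the learned merges, most frequent first. The naive string replace
    below is only safe for single-character symbols, which suffices for a toy.
    """
    vocab = dict(words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged, joined = " ".join(best), "".join(best)
        vocab = {w.replace(merged, joined): f for w, f in vocab.items()}
    return merges

# Toy "corpus": agglutinative-looking word forms with frequencies.
corpus = {"e v l e r": 5, "e v d e": 3, "e v l e r d e": 2}
print(bpe_merges(corpus, 3))  # the first learned merge is ('e', 'v')
```

Frequent sub-word pieces learned this way shrink the effective vocabulary, which is exactly how sub-word language models reduce the OOV rate in morphologically rich languages.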
|
42,688
|
definition of "hidden unit" in a ConvNet
|
Generally speaking, I think for conv layers we tend not to focus on the concept of 'hidden unit', but to get it out of the way, when I think 'hidden unit', I think of the concepts of 'hidden' and 'unit'. For me, 'hidden' means it's neither in the input layer (the inputs to the network) nor the output layer (the outputs from the network). A 'unit' to me is a single output from a single layer. So if you have a conv layer, and it's not the output layer of the network, and let's say it has 16 feature planes (otherwise known as 'channels'), and the kernel is 3 by 3; and the input images to that layer are 128x128, and the conv layer has padding so the output images are also 128x128. Then the outputs from that conv layer will be a cube of 16 planes times 128x128 images. To me, independent of the kernel size, there are 16x128x128 units in that layer's output.
However, typically, I think we tend to use language such as 'neurons' and 'units' for linear, otherwise known as fully-connected layers.
For conv layers, I feel that we specify them in terms of:
- feature planes, otherwise known as channels
- kernel size, eg 3 by 3
- padding, eg 1, at each edge
- stride, eg stride 1, in both directions
(there's also some other stuff like dilation...)
And then we refer to things within this such as:
- an output
- an input
- a 'feature plane'
- a 'weight'
|
|
42,689
|
definition of "hidden unit" in a ConvNet
|
I think @stephen & @hugh have made it over-complicated,
let's make it simple.
A hidden unit, in general, computes the operation Activation(W*X + b). Therefore, if you think carefully:
A hidden unit in a CONV layer is an operation that uses a "filter volume", i.e. a volume of randomly initialized weights. More loosely, you can say a filter/filter volume (f * f * n_c_prev) corresponds to a single neuron/hidden unit in a CONV layer.
Particularly, in your example, you have a (3*3*3) filter volume that you will convolve (element-wise multiply & add --> bias --> activation) over your (9*9*3) input.
(f * f * n_c_prev) is a filter in general, with n_c_prev as the number of input channels.
You slide each filter 49 (7*7) times over the input; since you have 5 filters of this kind, you will do the same convolve operation 4 more times;
therefore, 49*5 = 245 is the total number of convolution operations you are
going to perform on the input using your 5 differently initialized filter volumes!
Therefore, the number of hidden units will be just 5, each of which uses (f * f * n_c_prev) weights. Thinking more abstractly, a hidden unit in layer 1 will see only a relatively small portion of the input image. And so if you visualize what maximizes that unit's activation, it makes sense to plot just small image patches, because that's all of the image that particular unit sees.
Now you pick a different hidden unit in layer 1 and do the same thing.
Therefore, now you have 9 different representative neurons and each of them finds the nine (3*3) image patches that maximize the unit's activation.
Now, if you go deeper into the network, a hidden unit in a hidden layer there sees a larger patch/region of the image (a larger receptive field!) and is able to detect many more complex patterns.
You can read more about it in "Visualizing and Understanding Convolutional Networks".
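The arithmetic in this answer (7x7 output, 245 convolve operations) can be verified with a plain-Python sketch; the random values are only placeholders, the shapes and the operation count are what matter:

```python
import random

n, f, n_filters = 9, 3, 5   # 9x9 input (per channel), 3x3 kernel, 5 filters
n_c = 3                     # input channels, so each filter volume is 3x3x3
random.seed(0)

def rand_tensor(*dims):
    """Nested lists of random values with the given dimensions."""
    if len(dims) == 1:
        return [random.random() for _ in range(dims[0])]
    return [rand_tensor(*dims[1:]) for _ in range(dims[0])]

x = rand_tensor(n, n, n_c)                    # 9x9x3 input volume
filters = rand_tensor(n_filters, f, f, n_c)   # five 3x3x3 filter volumes

out_n = n - f + 1                             # 7: valid-convolution output size
ops = 0
out = [[[0.0] * n_filters for _ in range(out_n)] for _ in range(out_n)]
for k in range(n_filters):
    for i in range(out_n):
        for j in range(out_n):
            # one "convolve" op: element-wise multiply the 3x3x3 patch, then sum
            s = 0.0
            for di in range(f):
                for dj in range(f):
                    for c in range(n_c):
                        s += x[i + di][j + dj][c] * filters[k][di][dj][c]
            out[i][j][k] = s
            ops += 1

print(out_n, ops)   # 7 positions per side, 7*7*5 = 245 convolve ops in total
```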
|
42,690
|
definition of "hidden unit" in a ConvNet
|
I don't think either of the answers provides a clear definition, so I will attempt to answer it because I stumbled into the same problem finding a clear definition of a hidden unit in the context of a Convolutional Neural Network.
Hidden units in this context are the feature maps or filters. So for TensorFlow/Keras it would be the first argument of the convolutional layer, e.g.
tf.keras.layers.Conv2D(hidden_units, ...)
Hidden Units based on the definition provided by http://www.cs.toronto.edu/~asamir/papers/icassp13_cnn.pdf
A typical convolutional network architecture is shown in Figure 1. In a fully-connected network like DNNs, each hidden activation hi is computed by multiplying the entire input V by weights W in that layer. However, in a CNN, each hidden activation is computed by multiplying a small local input (i.e. [v1, v2, v3]) against the weights W. The weights W are then shared across the entire input space, as indicated in the figure. After computing the hidden units, a maxpooling layer helps to remove variability in the hidden units (i.e. convolutional band activations)
|
42,691
|
definition of "hidden unit" in a ConvNet
|
There are so many complexities in this topic which may confuse one so let's break it down.
Here is what we have,
(Layerl1) (9 * 9 * 3) ---->Conv with(3 * 3 * 3),5 filters with s=1 p=0-----> (layerl2) (7 * 7 * 5)
now,
unit:- A unit in a layer is one whose receptive field covers a patch of the previous layer
and,
no of hidden units in layerl2 = no of channels in layerl2
reason>
each filter detects a patch/region from the previous layer layerl1, and each
such patch is called a unit of layerl2.
and we know that no of channels in layerl2 = no of filters
units can share filters, i.e. 2 patches can have the same filter
reason>
let filter no 1 detect vertical edges and filter no 2 detect vertical bars;
then if our image has some grid shape, both filter1 and filter2 will
detect this grid. This grid is a patch which makes a unit, and this patch/unit
is shared by 2 filters.
To Sum up:-
Each channel of layerl2 is a hidden unit of layerl2 (here, 5 hidden units)
|
42,692
|
What are the relation and differences between time series and linear regression?
|
In the context of statistics, linear regression is solved by maximizing the likelihood under the assumption that the errors of a model linear in its basis are drawn from a zero-mean Normal distribution. During maximization we assume the observations are independently and identically distributed, which is clearly not a reasonable assumption for time series data.
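A small simulation illustrates the point (a sketch; the AR(1) coefficient 0.9 and the linear trend are invented for illustration): fitting OLS to a time trend whose errors are autocorrelated leaves residuals that are far from independent.

```python
import random

random.seed(1)

# Simulate y_t = 2 + 0.5*t + e_t with AR(1) errors e_t = 0.9*e_{t-1} + u_t.
T, phi = 500, 0.9
e, errs = 0.0, []
for _ in range(T):
    e = phi * e + random.gauss(0, 1)
    errs.append(e)
y = [2 + 0.5 * t + errs[t] for t in range(T)]
t = list(range(T))

# OLS fit of y on t (closed form for simple regression).
tbar, ybar = sum(t) / T, sum(y) / T
b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
    sum((ti - tbar) ** 2 for ti in t)
a = ybar - b * tbar
resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]

# Lag-1 autocorrelation of the residuals: close to phi, far from the
# zero correlation that the iid assumption implies.
rbar = sum(resid) / T
num = sum((resid[i] - rbar) * (resid[i - 1] - rbar) for i in range(1, T))
den = sum((r - rbar) ** 2 for r in resid)
print(round(num / den, 2))
```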
|
42,693
|
What are the relation and differences between time series and linear regression?
|
From Ordinary Regression to Time Series Regression:
The time series regression model is an extension of the ordinary regression model in which the following conditions exist:
1. Variables are observed in time.
2. Autocorrelation is allowed.
3. The target variable can be influenced by past values of inputs.
Source: DePaul University lecture slides for CSC 425
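The last condition (the target can be influenced by past values of inputs) can be sketched in a few lines: a hypothetical series where y_t depends on the previous value of the input x (all numbers invented):

```python
import random

random.seed(2)

# Toy distributed-lag model: y_t = 3 * x_{t-1} + noise.
T = 400
x = [random.gauss(0, 1) for _ in range(T)]
y = [3 * x[t - 1] + random.gauss(0, 0.1) for t in range(1, T)]

# Regress y_t on the lagged input x_{t-1} (simple OLS, closed form).
xl = x[:-1]   # x_0 .. x_{T-2} pairs with y_1 .. y_{T-1}
xbar, ybar = sum(xl) / len(xl), sum(y) / len(y)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xl, y)) / \
    sum((xi - xbar) ** 2 for xi in xl)
print(round(b, 1))   # close to the true lag coefficient 3
```

Mechanically this is still ordinary regression; what makes it a time series regression is that the regressor is a lag of the input, which ordinary cross-sectional data has no notion of.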
I think this answer is lacking in complete details, but is not wrong. @IrishStat gave a link to a document that covers the differences well. Together, these answer the first part of the original question.
I am still looking for answers to the latter half: does time series analysis share the assumptions of linear regression, plus some? For example, linear regression has multiple assumptions about X regressors such as no multicollinearity, linear relationship (correlation) to Y, the X regressors and model residuals are uncorrelated, etc. Do all of these still apply in time series analysis? If we could make a complete list of assumptions that these two methods share, that would be extremely helpful. Thanks everyone!
|
42,694
|
Step-wise Bayesian updating as a prior selection strategy
|
"Yesterday’s posterior is today’s prior" is the best Bayesian learning strategy if you know with absolute certainty that "Today's parameter is yesterday's parameter".
is it possible to use a wide prior for the oldest study, and then use
its posterior as prior for the study after it and so on to arrive at
one final posterior for the most recent study?
Yes as long as you know you are making inferences about the same (unknown) parameter:
- same model
- same experimental conditions (including sampling from the same population)
- no deviation due to time-dependent or local phenomena
Note that if the lines in each dataset are considered to be independent of each other, the final posterior you get is the same as when considering all studies as a whole: merging all datasets into one (just basic copy/paste).
An interesting case in which such a simplifying assumption may hold or not is the Kalman Filter (or more generally Bayes filters): you acquire information at each observation making the prior dynamically evolve as $prior_{t+1}=posterior_t$.
But if at the same time, some random process is disturbing the parameter (known as "state" in Kalman filters), then the prior must be updated too due to this process. Your prior narrows down at each observation, but between two observations it broadens due to random changes.
In that case the prior you would use in the next study would be a broadened version of the posterior of the previous study. How much depends on the random dynamics and is very complicated, thus rarely done in practice.
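For a conjugate model, the equivalence between step-wise updating and merging all studies into one dataset can be shown in a few lines (a sketch with invented counts, assuming a binomial endpoint and a Beta prior):

```python
# Beta(a, b) prior for a binomial proportion; conjugate update per study.
def update(a, b, successes, failures):
    return a + successes, b + failures

studies = [(12, 8), (30, 25), (7, 3)]   # (successes, failures) per study

# Step-wise: yesterday's posterior is today's prior.
a, b = 1, 1                              # wide (uniform) prior for the oldest study
for s, f in studies:
    a, b = update(a, b, s, f)

# Pooled: merge all studies into one dataset, single update.
S = sum(s for s, _ in studies)
F = sum(f for _, f in studies)
a2, b2 = update(1, 1, S, F)

print((a, b), (a2, b2))   # identical posteriors: (50, 37) (50, 37)
```

If the parameter drifts between studies (the Kalman-filter case above), the step-wise prior would have to be broadened between updates, and this exact equivalence no longer holds.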
|
42,695
|
Step-wise Bayesian updating as a prior selection strategy
|
There has been some work on deriving priors from previous studies. Two relevant papers may be "Summarizing historical information on controls in clinical trials" by Neuenschwander and colleagues available from Clinical Trials and "Robust meta-analytic-predictive priors in clinical trials with historical control information" by Schmidli and colleagues available from Biometrics. It is difficult to summarise them so I give below the abstract of the Schmidli one.
Summary. Historical information is always relevant for clinical trial design. Additionally, if incorporated in the analysis of a new trial, historical data allow to reduce the number of subjects. This decreases costs and trial duration, facilitates recruitment, and may be more ethical. Yet, under prior-data conflict, a too optimistic use of historical data may be inappropriate. We address this challenge by deriving a Bayesian meta-analytic-predictive prior from historical data, which is then combined with the new data. This prospective approach is equivalent to a meta-analytic-combined analysis of historical and new data if parameters are exchangeable across trials. The prospective Bayesian version requires a good approximation of the meta-analytic-predictive prior, which is not available analytically. We propose two- or three-component mixtures of standard priors, which allow for good approximations and, for the one-parameter exponential family, straightforward posterior calculations. Moreover, since one of the mixture components is usually vague, mixture priors will often be heavy-tailed and therefore robust. Further robustness and a more rapid reaction to prior-data conflicts can be achieved by adding an extra weakly-informative mixture component. Use of historical prior information is particularly attractive for adaptive trials, as the randomization ratio can then be changed in case of prior-data conflict. Both frequentist operating characteristics and posterior summaries for various data scenarios show that these designs have desirable properties. We illustrate the methodology for a phase II proof-of-concept trial with historical controls from four studies. Robust meta-analytic-predictive priors alleviate prior-data conflicts - they should encourage better and more frequent use of historical data in clinical trials.
There is currently an R package, RBesT, available to compute the meta-analytic-predictive prior. It is on CRAN.
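The mixture-prior update the abstract describes is conjugate and easy to sketch for a binomial endpoint (this is an illustrative Python sketch, not RBesT's implementation; all component weights, Beta parameters, and counts are invented):

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_update(components, s, f):
    """Posterior of a Beta-mixture prior after s successes / f failures.

    components: list of (weight, a, b). Returns the updated list.
    The binomial coefficient is common to all components and cancels
    in the weight update, so it is omitted.
    """
    new = []
    for w, a, b in components:
        # Marginal likelihood of the data under this component (up to n-choose-s).
        log_m = log_beta(a + s, b + f) - log_beta(a, b)
        new.append((w * exp(log_m), a + s, b + f))
    total = sum(w for w, _, _ in new)
    return [(w / total, a, b) for w, a, b in new]

# Informative component from historical data plus a vague "robust" component:
prior = [(0.8, 20, 80), (0.2, 1, 1)]
# New data in conflict with history (50% successes instead of ~20%):
posterior = mixture_update(prior, 30, 30)
print([round(w, 2) for w, _, _ in posterior])
```

Under prior-data conflict the vague component's posterior weight grows, which is the heavy-tailed robustness the paper relies on.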
|
42,696
|
Should missing observations be included in the number of observations if correcting for multiple testing
|
If the missing value causes the observation not to be included in the calculations of parameter estimates, it's contributing nothing to the end result (for better or worse) and should not be included in the p-value or adjusted p-value calculation. Its effect is the same as if it hadn't been included in the data set at all.
However, in some cases, missing values are not excluded from the calculations. They may be imputed, or, in the case of values that are censored (e.g., $x_1$ is not observed, but we know that $x_1 \geq 10$), included in the calculations but in a different way than if they had been observed. This is a murkier area. Clearly we wouldn't want either extreme - counted as if the observation was fully informative or counted as if the observation didn't exist at all - as the basis for p-value calculations, but it's not clear (and, indeed, problem-specific) how much "weight" between 0 and 1 the observation should get. Providing the ability to calculate adjusted p-values using the full observation count enables us to get a bound on the adjusted p-values that we'd have liked to calculate. If a particular value for a statistic isn't significant with a "sample size" = 100, it's not going to be significant with a "sample size" of less than 100 either, so the calculation with the sample size equal to the full number of observations does contain information useful for testing and evaluation.
To summarize: both calculations are useful, depending on the circumstances of the testing problem and how the estimation procedure treats missing values.
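As a concrete (if simplistic) illustration of the bounding idea, here is a sketch in Python. This is my own illustration, not part of the original answer, and it assumes plain Bonferroni as the adjustment since no particular method is named: adjusted p-values can only grow with the assumed count, so the full observation count gives a conservative bound.

```python
def bonferroni(p_values, m=None):
    """Bonferroni-adjusted p-values: min(1, m * p).
    m defaults to the number of p-values supplied."""
    if m is None:
        m = len(p_values)
    return [min(1.0, m * p) for p in p_values]

raw = [0.0004, 0.012, 0.03]
adj_full = bonferroni(raw, m=100)  # count every observation, missing or not
adj_part = bonferroni(raw, m=60)   # count only the fully observed ones
# The adjustment is monotone in m, so the full count is conservative:
# anything significant under adj_full is also significant under adj_part.
assert all(f >= p for f, p in zip(adj_full, adj_part))
```

The same monotonicity in the count holds for step procedures such as Holm, so the full-count calculation bounds those as well.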
|
Should missing observations be included in the number of observations if correcting for multiple tes
|
If the missing value causes the observation not to be included in the calculations of parameter estimates, it's contributing nothing to the end result (for better or worse) and should not be included
|
Should missing observations be included in the number of observations if correcting for multiple testing
If the missing value causes the observation not to be included in the calculations of parameter estimates, it's contributing nothing to the end result (for better or worse) and should not be included in the p-value or adjusted p-value calculation. Its effect is the same as if it hadn't been included in the data set at all.
However, in some cases, missing values are not excluded from the calculations. They may be imputed, or, in the case of values that are censored (e.g., $x_1$ is not observed, but we know that $x_1 \geq 10$), included in the calculations but in a different way than if they had been observed. This is a murkier area. Clearly we wouldn't want either extreme - counted as if the observation was fully informative or counted as if the observation didn't exist at all - as the basis for p-value calculations, but it's not clear (and, indeed, problem-specific) how much "weight" between 0 and 1 the observation should get. Providing the ability to calculate adjusted p-values using the full observation count enables us to get a bound on the adjusted p-values that we'd have liked to calculate. If a particular value for a statistic isn't significant with a "sample size" = 100, it's not going to be significant with a "sample size" of less than 100 either, so the calculation with the sample size equal to the full number of observations does contain information useful for testing and evaluation.
To summarize: both calculations are useful, depending on the circumstances of the testing problem and how the estimation procedure treats missing values.
|
Should missing observations be included in the number of observations if correcting for multiple tes
If the missing value causes the observation not to be included in the calculations of parameter estimates, it's contributing nothing to the end result (for better or worse) and should not be included
|
42,697
|
Frequentist Predictive Distribution for a Cauchy variable
|
The general solution to your problem is Maximum Likelihood Estimation (MLE) of your parameters $\theta$. Once they are obtained as $\hat{\theta}$, you substitute them into your pdf for the unknown parameters, i.e. you estimate the pdf of your random variable as $\hat{f}(x_i) = f(x_i|\hat{\theta})$. This allows you to construct the predictive distribution of your Cauchy random variable.
For the univariate case, this paper is an excellent resource. For the univariate Cauchy with center $\mu$ and scale $\sigma$, one has a closed form if you have $3-4$ observations. If you have $n>4$ observations, the MLE exists$^{\ast}$, but you will have to solve two equations that are easily derived by setting the first derivative of the log-likelihood to zero, see here for their exact form. (In their notation, $x_0 = \mu$ and $\sigma = \gamma$.) Solving this problem numerically has an implementation in the R language, see here.
For the multivariate case, all you need to note is that the multivariate Cauchy distribution is simply a multivariate $t$-distribution where the degree-of-freedom parameter is set to $1$, as was already pointed out in the comments. For the multivariate $t$, you can do MLE inference as explained excellently in this answer, which is based on the paper that eric_kernfeld has pointed out. I did not find a ready-to-roll implementation of this algorithm, but as you will see when you take a look at the supplied answer in the post, it should be easy to implement yourself.
Difference to Bayesian prediction: In the Bayesian setting, you would put a prior on the parameters $\mu$ and $\sigma$, modelling your uncertainty about them as a random variable. Thus, you will get posterior distributions for both parameters, which indicate the relative certainty you have about them given your data. If you have the posterior $q(\mu, \sigma|x_1,\dots,x_n)$, you then obtain your predictive distribution as $\int f(x|\mu, \sigma)q(\mu, \sigma|x_1,\dots,x_n)d\mu d\sigma$, integrating out your uncertainty. In contrast, the MLE-setting will give you point estimates of $\mu$ and $\sigma$ that you plug into your pdf's functional form. Equivalently, you could say that MLE leads to a posterior with point mass $1$ at the tuple $(\hat{\mu}, \hat{\sigma})$ and $0$ probability at any other value. Thus, you ignore all parameter uncertainty in this case, and you rely on the fact that $\hat{\theta}$ is asymptotically equivalent to $\theta$, meaning that $\hat{f}(x) \to f(x)$ (uniformly over $x$).
$^\ast$Well, that is unless you are in the exotic case where $n$ is even and $n/2$ of your observations take value $x_1$ while the other half takes value $x_2$, which happens with probability zero because the Cauchy distribution is continuous.
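Since the linked implementation is in R, here is a self-contained Python sketch of the same idea. It is my own illustration, not the paper's algorithm: the Cauchy negative log-likelihood, minimized by a crude coordinate search started at the sample median and half the interquartile range.

```python
import math
import random

def cauchy_nll(x, mu, sigma):
    """Negative log-likelihood of an i.i.d. Cauchy(mu, sigma) sample."""
    return (len(x) * math.log(math.pi * sigma)
            + sum(math.log(1.0 + ((xi - mu) / sigma) ** 2) for xi in x))

def cauchy_mle(x, iters=200):
    """Crude coordinate search for (mu, sigma); a sketch, not a
    production optimizer."""
    xs = sorted(x)
    n = len(xs)
    mu = xs[n // 2]                              # start at the median
    sigma = (xs[3 * n // 4] - xs[n // 4]) / 2.0  # start at half the IQR
    step = max(sigma, 1.0)
    for _ in range(iters):
        best = cauchy_nll(x, mu, sigma)
        improved = False
        for dmu, dsigma in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            cand_mu, cand_sigma = mu + dmu, sigma + dsigma
            if cand_sigma <= 0:
                continue
            val = cauchy_nll(x, cand_mu, cand_sigma)
            if val < best:
                mu, sigma, best = cand_mu, cand_sigma, val
                improved = True
        if not improved:
            step /= 2.0  # shrink the search once no axis move helps
    return mu, sigma

random.seed(0)
# Inverse-CDF draws from Cauchy(mu = 2, sigma = 1)
sample = [2.0 + math.tan(math.pi * (random.random() - 0.5)) for _ in range(2000)]
mu_hat, sigma_hat = cauchy_mle(sample)
# With n = 2000 the estimates land close to (2, 1).
```

For serious use one would instead solve the two score equations from the linked reference or call a general-purpose optimizer, but the structure of the problem is the same.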
|
Frequentist Predictive Distribution for a Cauchy variable
|
The general solution to your problem is Maximum Likelihood Estimation (MLE) of your parameters $\theta$. Once they are obtained as $\hat{\theta}$, you substitute them into your pdf for the unknown pa
|
Frequentist Predictive Distribution for a Cauchy variable
The general solution to your problem is Maximum Likelihood Estimation (MLE) of your parameters $\theta$. Once they are obtained as $\hat{\theta}$, you substitute them into your pdf for the unknown parameters, i.e. you estimate the pdf of your random variable as $\hat{f}(x_i) = f(x_i|\hat{\theta})$. This allows you to construct the predictive distribution of your Cauchy random variable.
For the univariate case, this paper is an excellent resource. For the univariate Cauchy with center $\mu$ and scale $\sigma$, one has a closed form if you have $3-4$ observations. If you have $n>4$ observations, the MLE exists$^{\ast}$, but you will have to solve two equations that are easily derived by setting the first derivative of the log-likelihood to zero, see here for their exact form. (In their notation, $x_0 = \mu$ and $\sigma = \gamma$.) Solving this problem numerically has an implementation in the R language, see here.
For the multivariate case, all you need to note is that the multivariate Cauchy distribution is simply a multivariate $t$-distribution where the degree-of-freedom parameter is set to $1$, as was already pointed out in the comments. For the multivariate $t$, you can do MLE inference as explained excellently in this answer, which is based on the paper that eric_kernfeld has pointed out. I did not find a ready-to-roll implementation of this algorithm, but as you will see when you take a look at the supplied answer in the post, it should be easy to implement yourself.
Difference to Bayesian prediction: In the Bayesian setting, you would put a prior on the parameters $\mu$ and $\sigma$, modelling your uncertainty about them as a random variable. Thus, you will get posterior distributions for both parameters, which indicate the relative certainty you have about them given your data. If you have the posterior $q(\mu, \sigma|x_1,\dots,x_n)$, you then obtain your predictive distribution as $\int f(x|\mu, \sigma)q(\mu, \sigma|x_1,\dots,x_n)d\mu d\sigma$, integrating out your uncertainty. In contrast, the MLE-setting will give you point estimates of $\mu$ and $\sigma$ that you plug into your pdf's functional form. Equivalently, you could say that MLE leads to a posterior with point mass $1$ at the tuple $(\hat{\mu}, \hat{\sigma})$ and $0$ probability at any other value. Thus, you ignore all parameter uncertainty in this case, and you rely on the fact that $\hat{\theta}$ is asymptotically equivalent to $\theta$, meaning that $\hat{f}(x) \to f(x)$ (uniformly over $x$).
$^\ast$Well, that is unless you are in the exotic case where $n$ is even and $n/2$ of your observations take value $x_1$ while the other half takes value $x_2$, which happens with probability zero because the Cauchy distribution is continuous.
|
Frequentist Predictive Distribution for a Cauchy variable
The general solution to your problem is Maximum Likelihood Estimation (MLE) of your parameters $\theta$. Once they are obtained as $\hat{\theta}$, you substitute them into your pdf for the unknown pa
|
42,698
|
Frequentist Predictive Distribution for a Cauchy variable
|
One could use a Monte Carlo method to obtain empirical estimates for relationships between the $x_1....x_i$ and the prediction interval for $x_{i+n}$.
Motivation: If we estimate the prediction interval based on the quartiles/CDF of a distribution that follows from maximum likelihood estimates (or other type of parameter estimates), then we underestimate the size of the interval. Effectively, in practice, the point $x_{i+n}$ will fall out of the range more often than predicted.
The figure below demonstrates by how much we underestimate the size of the interval, by expressing how many more times a new measurement $x_i$ is outside the predictive range based on parameter estimates. (based on computations with 2000 repetitions for the prediction)
For instance, if we use a prediction interval of 99% (thus expecting 1% errors), then we get about 5 times more errors when the sample size is 3.
These types of computations can be used to derive empirical relationships for correcting the range; they also show that for large $n$ the difference becomes smaller (and at some point one may consider it irrelevant).
set.seed(1)
# likelihood calculation
like<-function(par, x){
scale = abs(par[2])
pos = par[1]
n <- length(x)
like <- -n*log(scale*pi) - sum(log(1+((x-pos)/scale)^2))
-like
}
# obtain effective predictive failure rate
tryf <- function(pos, scale, perc, n) {
# random distribution
draw <- rcauchy(n, pos, scale)
# estimating distribution parameters based on median and interquartile range
first_est <- c(median(draw), 0.5*IQR(draw))
# estimating distribution parameters based on likelihood
out <- optim(par=first_est, like, method='CG', x=draw)
# making scale parameter positive (we used an absolute value in the optim function)
out$par[2] <- abs(out$par[2])
# calculate predictive interval
ql <- qcauchy(perc/2, out$par[1], out$par[2])
qh <- qcauchy(1-perc/2, out$par[1], out$par[2])
# calculate effective percentage outside predicted predictive interval
pl <- pcauchy(ql, pos, scale)
ph <- pcauchy(qh, pos, scale)
error <- pl+1-ph
error
}
# obtain mean of predictive interval in 2000 runs
meanf <- function(pos,scale,perc,n) {
trueval <- sapply(1:2000,FUN <- function(x) tryf(pos,scale,perc,n))
mean(trueval)
}
#################### generate image
# x-axis chosen desired interval percentage
percentages <- 0.2/1.2^c(0:30)
# desired sample sizes n
ns <- c(3,4,5,6,7,8,9,10,20,30)
# computations
y <- matrix(rep(percentages, length(ns)), length(percentages))
for (i in which(ns>0)) {
y[,i] <- sapply(percentages, FUN <- function(x) meanf(0,1,x,ns[i]))
}
# plotting
plot(NULL,
xlim=c(0.0008,1), ylim=c(0,10),
log="x",
xlab="aimed error rate",
ylab="effective error rate / aimed error rate",
yaxt="n",xaxt="n",axes=FALSE)
axis(1,las=2,tck=-0.0,cex.axis=1,labels=rep("",2),at=c(0.0008,1),pos=0.0008)
axis(1,las=2,tck=-0.005,cex.axis=1,at=c(0.001*c(1:9),0.01*c(1:9),0.1*c(1:9)),labels=rep("",27),mgp=c(1.5,1,0),pos=0.0008)
axis(1,las=2,tck=-0.01,cex.axis=1,labels=c(0.001,0.01,0.1,1), at=c(0.001,0.01,0.1,1),mgp=c(1.5,1,0),pos=0.000)
#axis(2,las=1,tck=-0.0,cex.axis=1,labels=rep("",2),at=c(0.0008,1),pos=0.0008)
#axis(2,las=1,tck=-0.005,cex.axis=1,at=c(0.001*c(1:9),0.01*c(1:9),0.1*c(1:9)),labels=rep("",27),mgp=c(1.5,1,0),pos=0.0008)
#axis(2,las=1,tck=-0.01,cex.axis=1,labels=c(0.001,0.01,0.1,1), at=c(0.001,0.01,0.1,1),mgp=c(1.5,1,0),pos=0.0008)
axis(2,las=2,tck=-0.01,cex.axis=1,labels=0:15, at=0:15,mgp=c(1.5,1,0),pos=0.0008)
colours <- hsv(c(1:10)/20,1,1-c(1:10)/15)
for (i in which(ns>0)) {
points(percentages,y[,i]/percentages,pch=21,cex=0.5,col=colours[i],bg=colours[i])
}
legend(x=0.4,y=4.5,pch=21,legend=ns,col=colours,pt.bg=colours,title="sample size")
title("difference between confidence interval and effective confidence interval")
plot(ns,y[31,]/percentages[31],log="")
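For readers without R, the core of a single replication can be sketched in Python. This is a deliberate simplification of the code above: it keeps only the crude median/half-IQR plug-in and skips the MLE refinement, so the numbers will differ somewhat from the figure. The Cauchy CDF and quantile function have closed forms, so the effective error rate of a nominal interval follows directly.

```python
import math
import random

def cauchy_cdf(x, pos, scale):
    return 0.5 + math.atan((x - pos) / scale) / math.pi

def cauchy_quantile(p, pos, scale):
    return pos + scale * math.tan(math.pi * (p - 0.5))

def effective_error(pos, scale, nominal, n, rng):
    """Error rate actually incurred when a nominal interval is built from
    crude order-statistic plug-in estimates on a sample of size n."""
    draw = sorted(cauchy_quantile(rng.random(), pos, scale) for _ in range(n))
    pos_hat = draw[n // 2]                             # sample median
    scale_hat = (draw[3 * n // 4] - draw[n // 4]) / 2  # crude spread estimate
    ql = cauchy_quantile(nominal / 2, pos_hat, scale_hat)
    qh = cauchy_quantile(1 - nominal / 2, pos_hat, scale_hat)
    # true probability that a new draw lands outside [ql, qh]
    return cauchy_cdf(ql, pos, scale) + 1 - cauchy_cdf(qh, pos, scale)

rng = random.Random(1)
errs = [effective_error(0.0, 1.0, 0.10, 3, rng) for _ in range(2000)]
mean_err = sum(errs) / len(errs)  # compare with the nominal 0.10
```

Averaging `effective_error` over many replications, as the R code above does with `meanf`, reproduces the kind of effective-versus-aimed error comparison shown in the figure.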
|
Frequentist Predictive Distribution for a Cauchy variable
|
One could use a Monte Carlo method to obtain empirical estimates for relationships between the $x_1....x_i$ and the prediction interval for $x_{i+n}$.
Motivation: If we estimate the prediction interva
|
Frequentist Predictive Distribution for a Cauchy variable
One could use a Monte Carlo method to obtain empirical estimates for relationships between the $x_1....x_i$ and the prediction interval for $x_{i+n}$.
Motivation: If we estimate the prediction interval based on the quartiles/CDF of a distribution that follows from maximum likelihood estimates (or other type of parameter estimates), then we underestimate the size of the interval. Effectively, in practice, the point $x_{i+n}$ will fall out of the range more often than predicted.
The figure below demonstrates by how much we underestimate the size of the interval, by expressing how many more times a new measurement $x_i$ is outside the predictive range based on parameter estimates. (based on computations with 2000 repetitions for the prediction)
For instance, if we use a prediction interval of 99% (thus expecting 1% errors), then we get about 5 times more errors when the sample size is 3.
These types of computations can be used to derive empirical relationships for correcting the range; they also show that for large $n$ the difference becomes smaller (and at some point one may consider it irrelevant).
set.seed(1)
# likelihood calculation
like<-function(par, x){
scale = abs(par[2])
pos = par[1]
n <- length(x)
like <- -n*log(scale*pi) - sum(log(1+((x-pos)/scale)^2))
-like
}
# obtain effective predictive failure rate
tryf <- function(pos, scale, perc, n) {
# random distribution
draw <- rcauchy(n, pos, scale)
# estimating distribution parameters based on median and interquartile range
first_est <- c(median(draw), 0.5*IQR(draw))
# estimating distribution parameters based on likelihood
out <- optim(par=first_est, like, method='CG', x=draw)
# making scale parameter positive (we used an absolute value in the optim function)
out$par[2] <- abs(out$par[2])
# calculate predictive interval
ql <- qcauchy(perc/2, out$par[1], out$par[2])
qh <- qcauchy(1-perc/2, out$par[1], out$par[2])
# calculate effective percentage outside predicted predictive interval
pl <- pcauchy(ql, pos, scale)
ph <- pcauchy(qh, pos, scale)
error <- pl+1-ph
error
}
# obtain mean of predictive interval in 2000 runs
meanf <- function(pos,scale,perc,n) {
trueval <- sapply(1:2000,FUN <- function(x) tryf(pos,scale,perc,n))
mean(trueval)
}
#################### generate image
# x-axis chosen desired interval percentage
percentages <- 0.2/1.2^c(0:30)
# desired sample sizes n
ns <- c(3,4,5,6,7,8,9,10,20,30)
# computations
y <- matrix(rep(percentages, length(ns)), length(percentages))
for (i in which(ns>0)) {
y[,i] <- sapply(percentages, FUN <- function(x) meanf(0,1,x,ns[i]))
}
# plotting
plot(NULL,
xlim=c(0.0008,1), ylim=c(0,10),
log="x",
xlab="aimed error rate",
ylab="effective error rate / aimed error rate",
yaxt="n",xaxt="n",axes=FALSE)
axis(1,las=2,tck=-0.0,cex.axis=1,labels=rep("",2),at=c(0.0008,1),pos=0.0008)
axis(1,las=2,tck=-0.005,cex.axis=1,at=c(0.001*c(1:9),0.01*c(1:9),0.1*c(1:9)),labels=rep("",27),mgp=c(1.5,1,0),pos=0.0008)
axis(1,las=2,tck=-0.01,cex.axis=1,labels=c(0.001,0.01,0.1,1), at=c(0.001,0.01,0.1,1),mgp=c(1.5,1,0),pos=0.000)
#axis(2,las=1,tck=-0.0,cex.axis=1,labels=rep("",2),at=c(0.0008,1),pos=0.0008)
#axis(2,las=1,tck=-0.005,cex.axis=1,at=c(0.001*c(1:9),0.01*c(1:9),0.1*c(1:9)),labels=rep("",27),mgp=c(1.5,1,0),pos=0.0008)
#axis(2,las=1,tck=-0.01,cex.axis=1,labels=c(0.001,0.01,0.1,1), at=c(0.001,0.01,0.1,1),mgp=c(1.5,1,0),pos=0.0008)
axis(2,las=2,tck=-0.01,cex.axis=1,labels=0:15, at=0:15,mgp=c(1.5,1,0),pos=0.0008)
colours <- hsv(c(1:10)/20,1,1-c(1:10)/15)
for (i in which(ns>0)) {
points(percentages,y[,i]/percentages,pch=21,cex=0.5,col=colours[i],bg=colours[i])
}
legend(x=0.4,y=4.5,pch=21,legend=ns,col=colours,pt.bg=colours,title="sample size")
title("difference between confidence interval and effective confidence interval")
plot(ns,y[31,]/percentages[31],log="")
|
Frequentist Predictive Distribution for a Cauchy variable
One could use a Monte Carlo method to obtain empirical estimates for relationships between the $x_1....x_i$ and the prediction interval for $x_{i+n}$.
Motivation: If we estimate the prediction interva
|
42,699
|
Frequentist Predictive Distribution for a Cauchy variable
|
It seems that all you need is to estimate the parameters of the Cauchy distribution from the dataset $x_i$. Here's what Stephens proposes; it's not MLE, and the author claims the method is consistent and more stable than MLE, though you have to take into account that this was written in the last century.
where Cauchy is parameterized as follows:
Once you have the distribution, your point forecast will be $\hat\alpha$. Note that since the Cauchy distribution doesn't have moments, you won't be able to show that your forecast is optimal in the usual sense, such as minimizing expected squared cost.
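Stephens' formulas referenced above are not reproduced in this text. As a stand-in (my own illustration, not his estimator), simple order statistics already give consistent estimates: the sample median for the center $\hat\alpha$ and half the interquartile range for the scale.

```python
import math
import random

def cauchy_point_estimates(x):
    """Crude consistent estimates for a Cauchy(center, scale) sample:
    the sample median and half the interquartile range. A stand-in
    illustration, not the estimator from Stephens' paper."""
    xs = sorted(x)
    n = len(xs)
    return xs[n // 2], (xs[3 * n // 4] - xs[n // 4]) / 2

rng = random.Random(7)
# inverse-CDF draws from Cauchy(center = 5, scale = 2)
sample = [5 + 2 * math.tan(math.pi * (rng.random() - 0.5)) for _ in range(4000)]
alpha_hat, scale_hat = cauchy_point_estimates(sample)
# alpha_hat then serves as the point forecast for the next observation
```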
|
Frequentist Predictive Distribution for a Cauchy variable
|
It seems that all you need is to estimate the parameters of Cauchy distribution from the dataset $x_i$. Here's what Stephens proposes, it's not MLE, and author claims this method is consistent and mor
|
Frequentist Predictive Distribution for a Cauchy variable
It seems that all you need is to estimate the parameters of the Cauchy distribution from the dataset $x_i$. Here's what Stephens proposes; it's not MLE, and the author claims the method is consistent and more stable than MLE, though you have to take into account that this was written in the last century.
where Cauchy is parameterized as follows:
Once you have the distribution, your point forecast will be $\hat\alpha$. Note that since the Cauchy distribution doesn't have moments, you won't be able to show that your forecast is optimal in the usual sense, such as minimizing expected squared cost.
|
Frequentist Predictive Distribution for a Cauchy variable
It seems that all you need is to estimate the parameters of Cauchy distribution from the dataset $x_i$. Here's what Stephens proposes, it's not MLE, and author claims this method is consistent and mor
|
42,700
|
Why auto.arima does not differentiate when there is xreg?
|
There is a test inside the forecast function for whether the series should be differenced:
if (is.na(d)) {
d <- ndiffs(dx, test = test, max.d = max.d)
if (d > 0 & !is.null(xregg)) {
diffxreg <- diff(diffxreg, differences = d, lag = 1)
if (any(apply(diffxreg, 2, is.constant)))
d <- d - 1
}
}
where d is the order of differencing specified in the function call (defaulting to NA). Another test - nsdiffs - is applied for seasonal differencing. If the test does not indicate the presence of a unit root, models with differencing are not even considered, which, as one might imagine, can save considerable runtime.
With respect to the example in the OP - the forecast function runs the regression lm(xx~exogenous) and applies ARIMA modeling to the residuals. In the case of this example, the ACF / PACF plots make it clear that the residuals are stationary, at least to my eye.
To see that auto.arima can in fact consider differenced residuals, we construct the following example where $y$ is clearly nonstationary and, as $x$ and $y$ are independent, the residuals from the regression of $y$ on $x$ will also be nonstationary (unless a very low probability event occurs).
> y <- rnorm(100, 1:100, 25)
> x <- rnorm(100)
> auto.arima(y, xreg=x, trace=TRUE)
Regression with ARIMA(2,1,2) errors : Inf
Regression with ARIMA(0,1,0) errors : 974.5948
Regression with ARIMA(1,1,0) errors : 953.8159
... more models, removed to save space ...
ARIMA(2,1,1) : 920.2894
ARIMA(2,1,2) : 922.1489
ARIMA(3,1,2) : 923.2468
ARIMA(1,1,1) : 922.377
Best model: Regression with ARIMA(2,1,1) errors
Series: y
Regression with ARIMA(2,1,1) errors
EDIT: Update in response to a comment
I copied and pasted the data from the example above, and ran:
> length(test$xx)
[1] 111
> length(test$exogenous)
[1] 111
> ndiffs(residuals(lm(xx~exogenous)), max.d=2)
[1] 0
to confirm that the ndiffs function is in fact returning 0 for this data.
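The quoted branch can be paraphrased in Python (a sketch of the logic only, not the actual forecast-package internals): difference each regressor d times and, if any differenced column becomes constant, reduce the order by one.

```python
def is_constant(col):
    """A column is constant when it holds at most one distinct value."""
    return len(set(col)) <= 1

def adjust_d(d, xreg_columns):
    """Mirror the quoted R branch: difference each regressor d times and
    drop one order of differencing if any column becomes constant."""
    if d > 0 and xreg_columns:
        diffed = xreg_columns
        for _ in range(d):
            diffed = [[b - a for a, b in zip(col, col[1:])] for col in diffed]
        if any(is_constant(col) for col in diffed):
            d -= 1
    return d

trend = list(range(10))          # a deterministic-trend regressor
assert adjust_d(1, [trend]) == 0 # first difference of a trend is constant
noise = [0, 2, 1, 5, 3, 4, 2, 6, 1, 3]
assert adjust_d(1, [noise]) == 1 # a non-degenerate regressor keeps d
```

This shows why a deterministic-trend regressor, which would become constant after differencing, causes the differencing order to be reduced.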
|
Why auto.arima does not differentiate when there is xreg?
|
There is a test inside the forecast function for whether the series should be differenced:
if (is.na(d)) {
d <- ndiffs(dx, test = test, max.d = max.d)
if (d > 0 & !is.null(xregg)) {
di
|
Why auto.arima does not differentiate when there is xreg?
There is a test inside the forecast function for whether the series should be differenced:
if (is.na(d)) {
d <- ndiffs(dx, test = test, max.d = max.d)
if (d > 0 & !is.null(xregg)) {
diffxreg <- diff(diffxreg, differences = d, lag = 1)
if (any(apply(diffxreg, 2, is.constant)))
d <- d - 1
}
}
where d is the order of differencing specified in the function call (defaulting to NA). Another test - nsdiffs - is applied for seasonal differencing. If the test does not indicate the presence of a unit root, models with differencing are not even considered, which, as one might imagine, can save considerable runtime.
With respect to the example in the OP - the forecast function runs the regression lm(xx~exogenous) and applies ARIMA modeling to the residuals. In the case of this example, the ACF / PACF plots make it clear that the residuals are stationary, at least to my eye.
To see that auto.arima can in fact consider differenced residuals, we construct the following example where $y$ is clearly nonstationary and, as $x$ and $y$ are independent, the residuals from the regression of $y$ on $x$ will also be nonstationary (unless a very low probability event occurs).
> y <- rnorm(100, 1:100, 25)
> x <- rnorm(100)
> auto.arima(y, xreg=x, trace=TRUE)
Regression with ARIMA(2,1,2) errors : Inf
Regression with ARIMA(0,1,0) errors : 974.5948
Regression with ARIMA(1,1,0) errors : 953.8159
... more models, removed to save space ...
ARIMA(2,1,1) : 920.2894
ARIMA(2,1,2) : 922.1489
ARIMA(3,1,2) : 923.2468
ARIMA(1,1,1) : 922.377
Best model: Regression with ARIMA(2,1,1) errors
Series: y
Regression with ARIMA(2,1,1) errors
EDIT: Update in response to a comment
I copied and pasted the data from the example above, and ran:
> length(test$xx)
[1] 111
> length(test$exogenous)
[1] 111
> ndiffs(residuals(lm(xx~exogenous)), max.d=2)
[1] 0
to confirm that the ndiffs function is in fact returning 0 for this data.
|
Why auto.arima does not differentiate when there is xreg?
There is a test inside the forecast function for whether the series should be differenced:
if (is.na(d)) {
d <- ndiffs(dx, test = test, max.d = max.d)
if (d > 0 & !is.null(xregg)) {
di
|