Dataset schema (column: type, value/length range):

idx: int64, 1 to 56k
question: string, lengths 15 to 155
answer: string, lengths 2 to 29.2k
question_cut: string, lengths 15 to 100
answer_cut: string, lengths 2 to 200
conversation: string, lengths 47 to 29.3k
conversation_cut: string, lengths 47 to 301
13,401
When can we speak of collinearity
A common way to evaluate collinearity is with variance inflation factors (VIFs). This can be achieved in R using the 'vif' function within the 'car' package. This has an advantage over looking at only the correlations between two variables, as it simultaneously evaluates the correlation between one variable and the res...
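The answer uses R's car::vif; the same quantity, VIF_j = 1/(1 - R_j^2) from regressing predictor j on the remaining predictors, can be sketched in a few lines of numpy (the function name and simulated data here are mine, not from the answer):

```python
import numpy as np

def vif(X):
    """VIF for each column of X: 1 / (1 - R^2) from regressing it on the rest."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])  # intercept + other columns
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)
print(vif(np.column_stack([x1, x2, x3])))  # first two large, third near 1
```

A common rule of thumb flags VIFs above 5 or 10 as problematic, which the first two columns here exceed by construction.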
13,402
Gaussian RBF vs. Gaussian kernel
The only real difference is in the regularisation that is applied. A regularised RBF network typically uses a penalty based on the squared norm of the weights. For the kernel version, the penalty is typically on the squared norm of the weights of the linear model implicitly constructed in the feature space induced by...
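To make the "penalty on the implicit linear model in the feature space" concrete, here is a minimal kernel ridge regression sketch with a Gaussian kernel (illustrative names and data, mine; penalising the squared RKHS norm gives the closed form alpha = (K + lam I)^{-1} y):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=0.1, gamma=1.0):
    # penalising the squared norm of the implicit linear model in feature
    # space leads to the dual coefficients alpha = (K + lam I)^{-1} y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
alpha = kernel_ridge_fit(X, y, lam=0.1, gamma=0.5)
pred = kernel_ridge_predict(X, alpha, X, gamma=0.5)
```

An RBF network would instead fix a set of centres and penalise the output-layer weights directly; the kernel version implicitly uses every training point as a centre.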
13,403
When is distance covariance less appropriate than linear covariance?
I have tried to collect a few remarks on distance covariance based on my impressions from reading the references listed below. However, I do not consider myself an expert on this topic. Comments, corrections, suggestions, etc. are welcome. The remarks are (strongly) biased towards potential drawbacks, as requested in t...
13,404
When is distance covariance less appropriate than linear covariance?
I could well be missing something, but just having a quantification of the nonlinear dependence between two variables doesn't seem to have much of a payoff. It won't tell you the shape of the relationship. It won't give you any means to predict one variable from the other. By analogy, when doing exploratory data ana...
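To make the quantity under discussion concrete, here is a short numpy sketch of the sample (squared) distance covariance via double-centred distance matrices (function name and data are mine): on y = x^2 the Pearson correlation is near zero while the distance covariance is clearly positive, which illustrates both what it detects and the answer's point that it does not tell you the shape of the relationship.

```python
import numpy as np

def dcov2(x, y):
    """Squared sample distance covariance of two 1-D samples."""
    def centred(a):
        d = np.abs(a[:, None] - a[None, :])          # pairwise distance matrix
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    A, B = centred(np.asarray(x, float)), centred(np.asarray(y, float))
    return (A * B).mean()

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = x ** 2                       # purely nonlinear dependence
print(np.corrcoef(x, y)[0, 1])   # near zero: linear covariance misses it
print(dcov2(x, y))               # clearly positive: dCov detects it
```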
13,405
Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavioral research?
A quick response to the bulleted content: 1) Power / Type 1 error in a Bayesian analysis vs. a frequentist analysis Asking about Type 1 and power (i.e. one minus the probability of Type 2 error) implies that you can put your inference problem into a repeated sampling framework. Can you? If you can't then there isn't ...
13,406
Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavioral research?
Bayesian statistics can be derived from a few logical principles. Try searching "probability as extended logic" and you will find more in-depth analysis of the fundamentals. But basically, Bayesian statistics rests on three basic "desiderata" or normative principles: The plausibility of a proposition is to be repres...
13,407
Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavioral research?
I am not familiar with Bayesian statistics myself, but I do know that Skeptics Guide to the Universe Episode 294 has an interview with Eric-Jan Wagenmakers where they discuss Bayesian statistics. Here is a link to the podcast: http://www.theskepticsguide.org/archive/podcastinfo.aspx?mid=1&pid=294
13,408
Is my weatherman accurate?
In effect you are thinking of a model in which the true chance of rain, p, is a function of the predicted chance q: p = p(q). Each time a prediction is made, you observe one realization of a Bernoulli variate having probability p(q) of success. This is a classic logistic regression setup if you are willing to model t...
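A minimal version of that logistic-regression check (my own illustration, not the answer's code): fit the rain outcome on the logit of the forecast probability by Newton-Raphson; for a well-calibrated forecaster the intercept should be near 0 and the slope near 1.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson; X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)                      # IRLS weights
        H = X.T @ (W[:, None] * X)           # Hessian of the log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(0)
q = rng.uniform(0.05, 0.95, 2000)            # forecast probabilities
rain = rng.uniform(size=2000) < q            # a perfectly calibrated forecaster
X = np.column_stack([np.ones_like(q), np.log(q / (1 - q))])
a, b = fit_logistic(X, rain.astype(float))
# for a calibrated forecaster, a is near 0 and b is near 1
```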
13,409
Is my weatherman accurate?
Comparison of probability forecasts for binary events (or discrete random variables) can be based on the Brier score, but you can also use a ROC curve, since any probability forecast of this type can be transformed into a discrimination procedure with a varying threshold. Indeed you can say "it will rain" if your probabilit...
13,410
Is my weatherman accurate?
When the forecast says "X percent chance of rain in (area)", it means that the numerical weather model has indicated rain in X percent of the area, for the time interval in question. For example, it would normally be accurate to predict "100 percent chance of rain in North America". Bear in mind that the models are g...
13,411
Is my weatherman accurate?
The Brier score approach is very simple and the most directly applicable way to verify the accuracy of a predicted outcome versus a binary event. Don't rely on just formulas ... plot the scores for different periods of time, data, errors, [weighted] rolling averages of data and errors ... it's tough to say what visual analysis mig...
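For reference, the Brier score mentioned here is just the mean squared difference between the forecast probability and the 0/1 outcome (illustrative numbers, mine):

```python
import numpy as np

def brier(forecast, outcome):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    forecast = np.asarray(forecast, float)
    outcome = np.asarray(outcome, float)
    return np.mean((forecast - outcome) ** 2)

# forecaster said 70% twice and 10% once; it rained on the first two days
print(brier([0.7, 0.7, 0.1], [1, 1, 0]))  # (0.09 + 0.09 + 0.01) / 3 = 0.0633...
```

Lower is better: 0 is a perfect forecaster and 0.25 is what a constant 50% forecast earns.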
13,412
Is my weatherman accurate?
How about just binning the given predictions and taking the observed fractions as your estimate for each bin? You can generalise this to a continuous model by weighting all the observations around your value of interest (say the prediction for tomorrow) by a Gaussian and seeing what the weighted average is. You can guess...
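The binning idea described here is essentially a reliability diagram; a minimal sketch (function name and simulated data are mine):

```python
import numpy as np

def binned_calibration(pred, outcome, n_bins=10):
    """Observed event frequency within equal-width bins of the forecast."""
    pred, outcome = np.asarray(pred, float), np.asarray(outcome, float)
    bins = np.linspace(0, 1, n_bins + 1)
    idx = np.clip(np.digitize(pred, bins) - 1, 0, n_bins - 1)
    freq = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            freq[b] = outcome[mask].mean()
    return freq

rng = np.random.default_rng(0)
pred = rng.uniform(size=5000)
outcome = rng.uniform(size=5000) < pred      # a calibrated forecaster
freq = binned_calibration(pred, outcome)
# each bin's observed frequency should sit close to the bin midpoint
```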
13,413
Is my weatherman accurate?
Do you want to know if his forecast is more accurate than another forecast? If so, you can look at basic accuracy metrics for probabilistic classification like cross-entropy, precision/recall, ROC curves, and the f1-score. Determining if the forecast is objectively good is a different matter. One option is to look ...
13,414
RMSE vs Standard deviation in population
TLDR; While the formulas may be similar, RMSE and standard deviation have different uses. You are right that standard deviation and RMSE are similar because both are square roots of squared differences between some values. Nonetheless, they are not the same. Standard deviation is used to measure the spread of dat...
13,415
RMSE vs Standard deviation in population
This will make it a bit clearer. RMSE is calculated between two sets, e.g. an observed set and a predicted set, to quantify the error, e.g. price vs. predicted price:

price   predicted price
10      12
12      10
13      17

$$ {RMSE}=\sqrt{\frac{\sum_{i=1}^N{(F_i - O_i)^2}}{N}} $$ f = forecasts (expected values or unknown results), o = observed values (know...
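Assuming the flattened numbers above pair up row-wise as price vs. predicted price, the arithmetic works out as follows:

```python
import numpy as np

observed = np.array([10, 12, 13])
predicted = np.array([12, 10, 17])
rmse = np.sqrt(np.mean((predicted - observed) ** 2))
print(rmse)   # sqrt((4 + 4 + 16) / 3) = sqrt(8), about 2.83
```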
13,416
Interpretation of .L & .Q output from a negative binomial GLM with categorical data
Your variables aren't just coded as factors (to make them categorical), they are coded as ordered factors. Then, by default, R fits a series of polynomial functions to the levels of the variable. The first is linear (.L), the second is quadratic (.Q), the third (if you had enough levels) would be cubic, etc. R will ...
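The .L and .Q columns are orthogonal polynomial contrasts (what R's contr.poly produces). You can reconstruct them, e.g. for a 4-level ordered factor, by orthonormalising powers of the level index with a QR decomposition; this numpy sketch is my own illustration, not R's actual code:

```python
import numpy as np

levels = np.arange(1, 5)                       # a 4-level ordered factor
V = np.vander(levels, 4, increasing=True)      # columns: 1, x, x^2, x^3
Q, _ = np.linalg.qr(V)                         # orthonormalise column by column
contrasts = Q[:, 1:]                           # .L, .Q, .C (up to sign)
# the linear (.L) contrast is proportional to (-3, -1, 1, 3),
# the quadratic (.Q) contrast to (1, -1, -1, 1)
```

A significant .L coefficient therefore indicates a linear component across the ordered levels, .Q a quadratic one, and so on.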
13,417
Interpretation of LASSO regression coefficients
Are the LASSO coefficients interpreted in the same method as logistic regression? Let me rephrase: Are the LASSO coefficients interpreted in the same way as, for example, OLS maximum likelihood coefficients in a logistic regression? LASSO (a penalized estimation method) aims at estimating the same quantities (model co...
13,418
Making sense of independent component analysis
Here's my attempt. Background Consider the following two cases. You are a private eye at a party. Suddenly, you see one of your old clients talking to someone, and you can hear some of the words but not quite, because you also hear someone else who's next to him, participating in an unrelated discussion about sports. ...
13,419
Making sense of independent component analysis
Very simple. Imagine you, your grandma and the family members are gathered around the table. Larger groups of people tend to break up into subgroups, where the chat topic is specific to that subgroup. Your grandma sits there and hears the noise of all the people speaking, what appears to be just a cacophony. If she turns to one group, ...
13,420
Sampling from von Mises-Fisher distribution in Python?
Finally, I got it. Here is my answer. I finally put my hands on Directional Statistics (Mardia and Jupp, 1999) and on the Ulrich-Wood algorithm for sampling. I post here what I understood from it, i.e. my code (in Python). The rejection sampling scheme:

def rW(n, kappa, m):
    dim = m-1
    b = dim / (np.sqrt(4*kapp...
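Since the code above is truncated, here is a self-contained sketch of the full Ulrich-Wood sampler along the lines the answer describes (my reconstruction, not the answer's original code; the tangent direction is drawn orthogonal to mu, as the follow-up answer points out it must be):

```python
import numpy as np

def sample_vmf(mu, kappa, size=1, rng=None):
    """Draw samples from a von Mises-Fisher distribution on the unit sphere in R^m."""
    rng = np.random.default_rng() if rng is None else rng
    mu = np.asarray(mu, dtype=float)
    mu = mu / np.linalg.norm(mu)
    m = mu.size
    dim = m - 1
    # Wood (1994) rejection sampler for w, the cosine of the angle to mu
    b = dim / (np.sqrt(4 * kappa**2 + dim**2) + 2 * kappa)
    x0 = (1 - b) / (1 + b)
    c = kappa * x0 + dim * np.log(1 - x0**2)
    samples = np.empty((size, m))
    for i in range(size):
        while True:
            z = rng.beta(dim / 2, dim / 2)
            w = (1 - (1 + b) * z) / (1 - (1 - b) * z)
            if kappa * w + dim * np.log(1 - x0 * w) - c >= np.log(rng.uniform()):
                break
        # uniform direction in the tangent space (orthogonal to mu)
        v = rng.normal(size=m)
        v -= v.dot(mu) * mu
        v /= np.linalg.norm(v)
        samples[i] = w * mu + np.sqrt(1 - w**2) * v
    return samples

mu = np.array([0.0, 0.0, 1.0])
samples = sample_vmf(mu, kappa=50, size=300, rng=np.random.default_rng(0))
# with a large kappa, the samples concentrate tightly around mu
```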
13,421
Sampling from von Mises-Fisher distribution in Python?
(I apologize for the formatting here, I created an account just to reply to this question, since I was also trying to figure this out recently). The answer of mic isn't quite right, the vector $v$ needs to come from $S^{p-2}$ in the tangent space to $\mu$, that is, $v$ should be a unit vector orthogonal to $\mu$. Other...
13,422
Why $\sqrt{n}$ in the definition of asymptotic normality?
We don't get to choose here. The "normalizing" factor, in essence, is a "variance-stabilizing to something finite" factor, chosen so that the expression neither goes to zero nor to infinity as the sample size goes to infinity, but maintains a distribution at the limit. So it has to be whatever it has to be in each case. Of course ...
13,423
Why $\sqrt{n}$ in the definition of asymptotic normality?
You were on the right track with a sample mean variance intuition. Re-arrange the condition: $$\sqrt{n}(U_n - \theta) \to N(0,v)$$ $$(U_n - \theta) \to \frac{N(0,v)}{\sqrt{n}}$$ $$U_n \to N(\theta,\frac{v}{n})$$ The last equation is informal. However, it's in some way more intuitive: you say that the deviation of $U_...
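That informal rearrangement can be checked by simulation: for the sample mean, $\sqrt{n}(\bar X_n - \mu)$ keeps a stable spread as $n$ grows, while $(\bar X_n - \mu)$ alone collapses to zero (a quick Monte Carlo sketch with simulated data):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, reps = 0.0, 1.0, 1000
spread = {}
for n in (50, 5000):
    # reps independent sample means, each from a sample of size n
    means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    spread[n] = np.std(np.sqrt(n) * (means - mu))
print(spread)   # both values stay close to sigma = 1, regardless of n
```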
13,424
stochastic vs deterministic trend/seasonality in time series forecasting
1) As regards your first question, some test statistics have been developed and discussed in the literature to test the null of stationarity and the null of a unit root. Some of the many papers that were written on this issue are the following: Related to the trend: Dickey, D. and Fuller, W. (1979a), Distribution of th...
13,425
stochastic vs deterministic trend/seasonality in time series forecasting
With respect to your non-seasonal data ... trends can be of two forms: $y_t = y_{t-1} + \theta_0$ (A, a stochastic trend) or $y_t = a + b x_1 + c x_2$ (B, a deterministic trend), where $x_1 = 1, 2, 3, 4, \dots, t$ and $x_2 = 0, 0, 0, 0, 0, 1, 2, 3, 4, \dots$; thus one trend applies to observations $1$ to $t$ and a second trend applies to observations $6$ to $t$. Your non-seas...
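The two trend forms can be simulated to see the practical difference (illustrative numbers, mine): differencing removes a stochastic trend, while regressing on time removes a deterministic one.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
theta0, a, b = 0.5, 2.0, 0.5

# (A) stochastic trend: random walk with drift theta0
y_stoch = np.cumsum(theta0 + rng.normal(size=200))
# (B) deterministic trend: linear function of time plus noise
y_det = a + b * t + rng.normal(size=200)

# differencing removes the stochastic trend ...
d = np.diff(y_stoch)               # fluctuates around theta0, stationary
# ... while detrending (regression on t) removes the deterministic one
coef = np.polyfit(t, y_det, 1)
resid = y_det - np.polyval(coef, t)  # stationary noise around zero
```

Applying the wrong remedy (detrending a random walk, or differencing a trend-stationary series) leaves residuals with awkward dynamics, which is why the tests cited above matter for forecasting.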
13,426
What are some well known improvements over textbook MCMC algorithms that people use for bayesian inference?
I'm not an expert in any of these, but I thought I'd put them out there anyway to see what the community thought. Corrections are welcome. One increasingly popular method, which is not terribly straightforward to implement, is called Hamiltonian Monte Carlo (or sometimes Hybrid Monte Carlo). It uses a physical model ...
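A bare-bones Hamiltonian Monte Carlo sketch for a standard normal target, just to make the leapfrog-plus-accept/reject structure concrete (toy code with my own names, not a production sampler):

```python
import numpy as np

def hmc(logp_grad, logp, x0, n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Toy HMC: leapfrog integration of the Hamiltonian, then Metropolis correction."""
    rng = np.random.default_rng(seed)
    x, out = np.asarray(x0, dtype=float), []
    for _ in range(n_samples):
        p = rng.normal(size=x.shape)            # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # leapfrog: half momentum step, alternating full steps, half momentum step
        p_new += 0.5 * eps * logp_grad(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new += eps * logp_grad(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * logp_grad(x_new)
        # Metropolis accept/reject on the total energy
        h_old = -logp(x) + 0.5 * p @ p
        h_new = -logp(x_new) + 0.5 * p_new @ p_new
        if np.log(rng.uniform()) < h_old - h_new:
            x = x_new
        out.append(x.copy())
    return np.array(out)

# standard normal target: log density -x^2/2 up to a constant, gradient -x
samples = hmc(lambda x: -x, lambda x: -0.5 * x @ x, np.zeros(1))
```

Because the leapfrog integrator nearly conserves the Hamiltonian, acceptance rates stay high even for long trajectories, which is the practical advantage over a random-walk proposal.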
13,427
Dealing with ties, weights and voting in kNN
When doing kNN you need to keep one thing in mind, namely that it's not a strictly, mathematically derived algorithm, but rather a simple classifier / regressor based on one intuition - the underlying function doesn't change much when the arguments don't change much. Or in other words the underlying function is locally...
Dealing with ties, weights and voting in kNN
When doing kNN you need to keep one thing in mind, namely that it's not a strictly, mathematically derived algorithm, but rather a simple classifier / regressor based on one intuition - the underlying
Dealing with ties, weights and voting in kNN When doing kNN you need to keep one thing in mind, namely that it's not a strictly, mathematically derived algorithm, but rather a simple classifier / regressor based on one intuition - the underlying function doesn't change much when the arguments don't change much. Or in o...
Dealing with ties, weights and voting in kNN When doing kNN you need to keep one thing in mind, namely that it's not a strictly, mathematically derived algorithm, but rather a simple classifier / regressor based on one intuition - the underlying
13,428
Dealing with ties, weights and voting in kNN
The ideal way to break a tie for a k nearest neighbor in my view would be to decrease k by 1 until you have broken the tie. This will always work regardless of the vote weighting scheme, since a tie is impossible when k = 1. If you were to increase k, depending on your weighting scheme and number of categories, you would no...
Dealing with ties, weights and voting in kNN
The ideal way to break a tie for a k nearest neighbor in my view would be to decrease k by 1 until you have broken the tie. This will always work regardless of the vote weighting scheme, since a tie i
Dealing with ties, weights and voting in kNN The ideal way to break a tie for a k nearest neighbor in my view would be to decrease k by 1 until you have broken the tie. This will always work regardless of the vote weighting scheme, since a tie is impossible when k = 1. If you were to increase k, depending on your weighting ...
Dealing with ties, weights and voting in kNN The ideal way to break a tie for a k nearest neighbor in my view would be to decrease k by 1 until you have broken the tie. This will always work regardless of the vote weighting scheme, since a tie i
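The decrease-k-until-the-tie-breaks rule can be sketched directly. A hypothetical Python illustration — the data and function names are invented for the example:

```python
from collections import Counter

def knn_predict(dist_label_pairs, k):
    """Vote among the k nearest neighbors; on a tie, shrink k by 1 and
    revote (a tie is impossible once k = 1)."""
    neighbors = sorted(dist_label_pairs)[:k]
    while k >= 1:
        votes = Counter(label for _, label in neighbors[:k])
        top = votes.most_common()
        if len(top) == 1 or top[0][1] > top[1][1]:
            return top[0][0]
        k -= 1  # tie: drop the farthest remaining neighbor and revote

# Hypothetical neighbors as (distance, class): k = 4 ties 2-2,
# and dropping the farthest vote breaks it in favor of "a"
neighbors = [(0.1, "a"), (0.2, "b"), (0.3, "a"), (0.4, "b"), (0.9, "b")]
print(knn_predict(neighbors, 4))  # -> a
```

At k = 5 the same data gives "b" (3 votes to 2), which shows why the tie-break direction matters: shrinking k trusts the closest points, while growing k lets farther points outvote them.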
13,429
Dealing with ties, weights and voting in kNN
About this tie part, the best baseline idea for ties is usually random breaking: selecting a random class from among those winning the vote, and randomly selecting a subset of the tied objects large enough to fill k. Such a solution stresses the fact that those are pathological cases that simply don't provide enough informatio...
Dealing with ties, weights and voting in kNN
About this tie part, the best baseline idea for ties is usually random breaking: selecting a random class from among those winning the vote, and randomly selecting a subset of the tied objects large enough to fill
Dealing with ties, weights and voting in kNN About this tie part, the best baseline idea for ties is usually random breaking: selecting a random class from among those winning the vote, and randomly selecting a subset of the tied objects large enough to fill k. Such a solution stresses the fact that those are pathological cases...
Dealing with ties, weights and voting in kNN About this tie part, the best baseline idea for ties is usually random breaking: selecting a random class from among those winning the vote, and randomly selecting a subset of the tied objects large enough to fill
13,430
Dealing with ties, weights and voting in kNN
One possible way is to have the algorithm automatically increase or decrease k until you get a clear winner.
Dealing with ties, weights and voting in kNN
One possible way is to have the algorithm automatically increase or decrease k until you get a clear winner.
Dealing with ties, weights and voting in kNN One possible way is to have the algorithm automatically increase or decrease k until you get a clear winner.
Dealing with ties, weights and voting in kNN One possible way is to have the algorithm automatically increase or decrease k until you get a clear winner.
13,431
How to do a generalized linear model with multiple dependent variables in R?
The short answer is that glm doesn't work like that. The lm will create mlm objects if you give it a matrix, but this is not widely supported in the generics and anyway couldn't easily generalize to glm because users need to be able to specify dual column dependent variables for logistic regression models. The solutio...
How to do a generalized linear model with multiple dependent variables in R?
The short answer is that glm doesn't work like that. The lm will create mlm objects if you give it a matrix, but this is not widely supported in the generics and anyway couldn't easily generalize to
How to do a generalized linear model with multiple dependent variables in R? The short answer is that glm doesn't work like that. The lm will create mlm objects if you give it a matrix, but this is not widely supported in the generics and anyway couldn't easily generalize to glm because users need to be able to specif...
How to do a generalized linear model with multiple dependent variables in R? The short answer is that glm doesn't work like that. The lm will create mlm objects if you give it a matrix, but this is not widely supported in the generics and anyway couldn't easily generalize to
13,432
How to do a generalized linear model with multiple dependent variables in R?
I was told Multivariate Generalized Linear (Mixed) Models exist that address your problem. I'm not an expert on them, but I would have a look at the SABRE documentation and this book on multivariate GLMs. Maybe they help...
How to do a generalized linear model with multiple dependent variables in R?
I was told Multivariate Generalized Linear (Mixed) Models exist that address your problem. I'm not an expert on them, but I would have a look at the SABRE documentation and this book on multivariate GLMs
How to do a generalized linear model with multiple dependent variables in R? I was told Multivariate Generalized Linear (Mixed) Models exist that address your problem. I'm not an expert on them, but I would have a look at the SABRE documentation and this book on multivariate GLMs. Maybe they help...
How to do a generalized linear model with multiple dependent variables in R? I was told Multivariate Generalized Linear (Mixed) Models exist that address your problem. I'm not an expert on them, but I would have a look at the SABRE documentation and this book on multivariate GLMs
13,433
Examples of hidden Markov models problems?
I've used HMM in a demand / inventory level estimation scenario, where we had goods being purchased from many stores that might or might not be out of inventory of the goods. The sequence of daily demands for these items thus contained zeroes that were legitimate zero demand days and also zeroes that were because the ...
Examples of hidden Markov models problems?
I've used HMM in a demand / inventory level estimation scenario, where we had goods being purchased from many stores that might or might not be out of inventory of the goods. The sequence of daily de
Examples of hidden Markov models problems? I've used HMM in a demand / inventory level estimation scenario, where we had goods being purchased from many stores that might or might not be out of inventory of the goods. The sequence of daily demands for these items thus contained zeroes that were legitimate zero demand ...
Examples of hidden Markov models problems? I've used HMM in a demand / inventory level estimation scenario, where we had goods being purchased from many stores that might or might not be out of inventory of the goods. The sequence of daily de
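A toy version of the demand/stockout setup can be decoded with the Viterbi algorithm. All the numbers below are hypothetical, and this is a minimal Python sketch of the idea rather than the model actually used:

```python
import numpy as np

# Hypothetical two-state model: state 0 = in stock, state 1 = stocked out.
# Observations: 0 = zero demand recorded that day, 1 = positive demand.
start = np.array([0.8, 0.2])
trans = np.array([[0.9, 0.1],   # rows: from-state, columns: to-state
                  [0.3, 0.7]])
emit = np.array([[0.3, 0.7],    # in stock: a zero is just a slow day
                 [1.0, 0.0]])   # stocked out: demand can only show up as zero

def viterbi(obs):
    """Most likely hidden state sequence (log-space Viterbi)."""
    with np.errstate(divide="ignore"):
        ls, lt, le = np.log(start), np.log(trans), np.log(emit)
    delta = ls + le[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + lt        # scores[i, j]: best path i -> j
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + le[:, o]
    path = [int(delta.argmax())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]

# A run of zeroes right after real sales is best explained as a stockout
print(viterbi([1, 1, 0, 0, 0, 0, 1]))  # -> [0, 0, 1, 1, 1, 1, 0]
```

The decoded path labels the zero-demand days as "stocked out" rather than "no demand", which is exactly the distinction the answer's censored-demand problem needs.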
13,434
Examples of hidden Markov models problems?
I pretty much experienced the same thing and didn't find much beyond the weather. Areas that come to mind include: speech recognition, change point detection, tagging parts of speech in text, aligning overlapping items/text, and recognizing sign language. One example I found and did some exploration of was in Section 8...
Examples of hidden Markov models problems?
I pretty much experienced the same thing and didn't find much beyond the weather. Areas that come to mind include: speech recognition, change point detection, tagging parts of speech in text, aligning
Examples of hidden Markov models problems? I pretty much experienced the same thing and didn't find much beyond the weather. Areas that come to mind include: speech recognition, change point detection, tagging parts of speech in text, aligning overlapping items/text, and recognizing sign language. One example I found a...
Examples of hidden Markov models problems? I pretty much experienced the same thing and didn't find much beyond the weather. Areas that come to mind include: speech recognition, change point detection, tagging parts of speech in text, aligning
13,435
Examples of hidden Markov models problems?
Most speech recognition software uses Hidden Markov Models. You can experiment with natural language processing if you want to get a feel for HMM applications. Here's a good source: Probabilistic Graphical Models, by Koller and Friedman.
Examples of hidden Markov models problems?
Most speech recognition software uses Hidden Markov Models. You can experiment with natural language processing if you want to get a feel for HMM applications. Here's a good source: Probabilistic Grap
Examples of hidden Markov models problems? Most speech recognition software uses Hidden Markov Models. You can experiment with natural language processing if you want to get a feel for HMM applications. Here's a good source: Probabilistic Graphical Models, by Koller and Friedman.
Examples of hidden Markov models problems? Most speech recognition software uses Hidden Markov Models. You can experiment with natural language processing if you want to get a feel for HMM applications. Here's a good source: Probabilistic Grap
13,436
Examples of hidden Markov models problems?
Hidden Markov models are very useful in monitoring HIV. HIV enters the blood stream and looks for the immune response cells. It then sits on the protein content of the cell and gets into the core of the cell and changes the DNA content of the cell and starts proliferation of virions until they burst out of the cells. ...
Examples of hidden Markov models problems?
Hidden Markov models are very useful in monitoring HIV. HIV enters the blood stream and looks for the immune response cells. It then sits on the protein content of the cell and gets into the core of
Examples of hidden Markov models problems? Hidden Markov models are very useful in monitoring HIV. HIV enters the blood stream and looks for the immune response cells. It then sits on the protein content of the cell and gets into the core of the cell and changes the DNA content of the cell and starts proliferation of...
Examples of hidden Markov models problems? Hidden Markov models are very useful in monitoring HIV. HIV enters the blood stream and looks for the immune response cells. It then sits on the protein content of the cell and gets into the core of
13,437
Examples of hidden Markov models problems?
For me, a very nice application of HMM is chord identification in musical composition. See for example this lecture.
Examples of hidden Markov models problems?
For me, a very nice application of HMM is chord identification in musical composition. See for example this lecture.
Examples of hidden Markov models problems? For me, a very nice application of HMM is chord identification in musical composition. See for example this lecture.
Examples of hidden Markov models problems? For me, a very nice application of HMM is chord identification in musical composition. See for example this lecture.
13,438
Examples of hidden Markov models problems?
Markov models may be useful in analyzing the interactions of a user with a website - for example on Amazon.com, where figuring out what series of interactions leads to a checkout can be used to give recommendations in the future. A fun example showing the use of a Markov model is the following: http://freakonometrics.blog.free.fr/inde...
Examples of hidden Markov models problems?
Markov models may be useful in analyzing the interactions of a user with a website - for example on Amazon.com, where figuring out what series of interactions leads to a checkout can be used to give recommendations
Examples of hidden Markov models problems? Markov models may be useful in analyzing the interactions of a user with a website - for example on Amazon.com, where figuring out what series of interactions leads to a checkout can be used to give recommendations in the future. A fun example showing the use of a Markov model is the following...
Examples of hidden Markov models problems? Markov models may be useful in analyzing the interactions of a user with a website - for example on Amazon.com, where figuring out what series of interactions leads to a checkout can be used to give recommendations
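Estimating a first-order Markov model from clickstream sessions amounts to counting page-to-page moves and normalizing each row. A hypothetical Python sketch (the session data is invented for illustration):

```python
from collections import defaultdict

def transition_matrix(sessions):
    """First-order Markov transition probabilities, estimated by counting
    page-to-page moves across user sessions and normalizing each row."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    return {page: {nxt: n / sum(outgoing.values())
                   for nxt, n in outgoing.items()}
            for page, outgoing in counts.items()}

# Invented sessions for a hypothetical shop
sessions = [
    ["home", "product", "cart", "checkout"],
    ["home", "product", "home"],
    ["home", "product", "cart", "checkout"],
]
probs = transition_matrix(sessions)
print(probs["product"])  # cart: 2/3, home: 1/3
```

For a hidden Markov variant one would instead treat the user's intent (browsing vs. buying, say) as the hidden state and the observed clicks as emissions.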
13,439
Proportion of explained variance in a mixed-effects model
I can provide some references: Xu, R. (2003). Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22, 3527-3541. DOI:10.1002/sim.1572 Edwards, L. J., Muller, K. E., Wolfinger, R. D., Qaqish, B. F., & Schabenberger, O. (2008). An $R^2$ statistic for fixed effects in the linear mixed mod...
Proportion of explained variance in a mixed-effects model
I can provide some references: Xu, R. (2003). Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22, 3527-3541. DOI:10.1002/sim.1572 Edwards, L. J., Muller, K. E., W
Proportion of explained variance in a mixed-effects model I can provide some references: Xu, R. (2003). Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22, 3527-3541. DOI:10.1002/sim.1572 Edwards, L. J., Muller, K. E., Wolfinger, R. D., Qaqish, B. F., & Schabenberger, O. (2008). An...
Proportion of explained variance in a mixed-effects model I can provide some references: Xu, R. (2003). Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22, 3527-3541. DOI:10.1002/sim.1572 Edwards, L. J., Muller, K. E., W
13,440
Proportion of explained variance in a mixed-effects model
According to this blog post from 2013, the MuMIn package in R can provide R$^2$ values for mixed models ala an approach developed by Nakagawa & Schielzeth 2013$^1$ (which was mentioned in a previous answer). #load packages library(lme4) library(MuMIn) #Fit Model m <- lmer(mpg ~ gear + disp + (1|cyl), data = mtcars) #...
Proportion of explained variance in a mixed-effects model
According to this blog post from 2013, the MuMIn package in R can provide R$^2$ values for mixed models ala an approach developed by Nakagawa & Schielzeth 2013$^1$ (which was mentioned in a previous a
Proportion of explained variance in a mixed-effects model According to this blog post from 2013, the MuMIn package in R can provide R$^2$ values for mixed models ala an approach developed by Nakagawa & Schielzeth 2013$^1$ (which was mentioned in a previous answer). #load packages library(lme4) library(MuMIn) #Fit Mode...
Proportion of explained variance in a mixed-effects model According to this blog post from 2013, the MuMIn package in R can provide R$^2$ values for mixed models ala an approach developed by Nakagawa & Schielzeth 2013$^1$ (which was mentioned in a previous a
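For reference, the two quantities this approach reports can be written, in Nakagawa & Schielzeth's (2013) notation for a random-intercept model, as

```latex
R^2_{\text{marginal}}
  = \frac{\sigma^2_f}{\sigma^2_f + \sigma^2_\alpha + \sigma^2_\varepsilon},
\qquad
R^2_{\text{conditional}}
  = \frac{\sigma^2_f + \sigma^2_\alpha}{\sigma^2_f + \sigma^2_\alpha + \sigma^2_\varepsilon}
```

where $\sigma^2_f$ is the variance of the fixed-effect predictions, $\sigma^2_\alpha$ the random-intercept variance, and $\sigma^2_\varepsilon$ the residual variance. The marginal version credits only the fixed effects; the conditional version credits fixed and random effects together.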
13,441
What's the history of box plots, and how did the "box and whiskers" design evolve?
Chief Executive Officer summary The history is much longer and more complicated than many people think it is. Executive summary The history of what Tukey called box plots is tangled up with that of what are now often called dot or strip plots (dozens of other names) and with representations of the empirical quantile ...
What's the history of box plots, and how did the "box and whiskers" design evolve?
Chief Executive Officer summary The history is much longer and more complicated than many people think it is. Executive summary The history of what Tukey called box plots is tangled up with that of
What's the history of box plots, and how did the "box and whiskers" design evolve? Chief Executive Officer summary The history is much longer and more complicated than many people think it is. Executive summary The history of what Tukey called box plots is tangled up with that of what are now often called dot or stri...
What's the history of box plots, and how did the "box and whiskers" design evolve? Chief Executive Officer summary The history is much longer and more complicated than many people think it is. Executive summary The history of what Tukey called box plots is tangled up with that of
13,442
Evaluate Random Forest: OOB vs CV
Note: While I feel that my answer is probably correct, I also feel doubtful due to the fact that I made all this up by thinking about this problem only after reading this question for about 30-60 minutes. So you better be sceptical and scrutinize this and not get fooled by my possibly overly confident writing style (me...
Evaluate Random Forest: OOB vs CV
Note: While I feel that my answer is probably correct, I also feel doubtful due to the fact that I made all this up by thinking about this problem only after reading this question for about 30-60 minu
Evaluate Random Forest: OOB vs CV Note: While I feel that my answer is probably correct, I also feel doubtful due to the fact that I made all this up by thinking about this problem only after reading this question for about 30-60 minutes. So you better be sceptical and scrutinize this and not get fooled by my possibly ...
Evaluate Random Forest: OOB vs CV Note: While I feel that my answer is probably correct, I also feel doubtful due to the fact that I made all this up by thinking about this problem only after reading this question for about 30-60 minu
13,443
Evaluate Random Forest: OOB vs CV
The motivation : Therefore in my view the only reason why OOBE is a pessimistic estimation of forest's error is only because it usually trains by a smaller number of samples than usually done with k-fold cross-validation (where 10 folds is common). does not seem correct. It is not true that OOBE error estimate being ...
Evaluate Random Forest: OOB vs CV
The motivation : Therefore in my view the only reason why OOBE is a pessimistic estimation of forest's error is only because it usually trains by a smaller number of samples than usually done with k-
Evaluate Random Forest: OOB vs CV The motivation : Therefore in my view the only reason why OOBE is a pessimistic estimation of forest's error is only because it usually trains by a smaller number of samples than usually done with k-fold cross-validation (where 10 folds is common). does not seem correct. It is not tr...
Evaluate Random Forest: OOB vs CV The motivation : Therefore in my view the only reason why OOBE is a pessimistic estimation of forest's error is only because it usually trains by a smaller number of samples than usually done with k-
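One number worth keeping in mind for this comparison: a bootstrap sample of size n leaves roughly a fraction (1 - 1/n)^n ≈ 1/e ≈ 36.8% of the data out of bag, so each tree trains on about 63.2% unique samples versus the 90% seen in each fold of 10-fold CV. A quick Python check (the sample size and seed are arbitrary):

```python
import random

rng = random.Random(42)
n = 10_000

boot = [rng.randrange(n) for _ in range(n)]   # one bootstrap draw
oob_fraction = 1 - len(set(boot)) / n
print(oob_fraction)  # close to 1 - (1 - 1/n)**n, i.e. about 1/e ~ 0.368
```

Note also that the OOB prediction for a given point aggregates only the ~36.8% of trees for which it was out of bag, which is the ensemble-size effect discussed in the answer.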
13,444
Are Random Forest and Boosting parametric or non-parametric?
Parametric models have parameters (inferring them) or assumptions regarding the data distribution, whereas RF, neural nets or boosting trees have parameters related to the algorithm itself, but they don't need assumptions about your data distribution or to classify your data into a theoretical distribution. In fact almo...
Are Random Forest and Boosting parametric or non-parametric?
Parametric models have parameters (inferring them) or assumptions regarding the data distribution, whereas RF, neural nets or boosting trees have parameters related to the algorithm itself, but they
Are Random Forest and Boosting parametric or non-parametric? Parametric models have parameters (inferring them) or assumptions regarding the data distribution, whereas RF, neural nets or boosting trees have parameters related to the algorithm itself, but they don't need assumptions about your data distribution or to cla...
Are Random Forest and Boosting parametric or non-parametric? Parametric models have parameters (inferring them) or assumptions regarding the data distribution, whereas RF, neural nets or boosting trees have parameters related to the algorithm itself, but they
13,445
Are Random Forest and Boosting parametric or non-parametric?
I think the criterion for parametric and non-parametric is this: whether the number of parameters grows with the number of training samples. For logistic regression and svm, when you select the features, you won't get more parameters by adding more training data. But for RF and so on, the details of the model will change (...
Are Random Forest and Boosting parametric or non-parametric?
I think the criterion for parametric and non-parametric is this: whether the number of parameters grows with the number of training samples. For logistic regression and svm, when you select the featur
Are Random Forest and Boosting parametric or non-parametric? I think the criterion for parametric and non-parametric is this: whether the number of parameters grows with the number of training samples. For logistic regression and svm, when you select the features, you won't get more parameters by adding more training d...
Are Random Forest and Boosting parametric or non-parametric? I think the criterion for parametric and non-parametric is this: whether the number of parameters grows with the number of training samples. For logistic regression and svm, when you select the featur
13,446
Are Random Forest and Boosting parametric or non-parametric?
The term "non-parametric" is a bit of a misnomer, as generally these models/algorithms are defined as having a number of parameters which increases as the sample size increases. Whether an RF does this or not depends on how the tree splitting/pruning algorithm works. If no pruning is done, and splitting is based on sam...
Are Random Forest and Boosting parametric or non-parametric?
The term "non-parametric" is a bit of a misnomer, as generally these models/algorithms are defined as having a number of parameters which increases as the sample size increases. Whether an RF does this
Are Random Forest and Boosting parametric or non-parametric? The term "non-parametric" is a bit of a misnomer, as generally these models/algorithms are defined as having a number of parameters which increases as the sample size increases. Whether an RF does this or not depends on how the tree splitting/pruning algorith...
Are Random Forest and Boosting parametric or non-parametric? The term "non-parametric" is a bit of a misnomer, as generally these models/algorithms are defined as having a number of parameters which increases as the sample size increases. Whether an RF does this
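The "parameter count grows with the sample size" point can be made concrete by fully growing a tiny 1-D regression tree and counting its leaves for two sample sizes. This is purely illustrative Python (all names and data are invented); with no pruning and singleton leaves allowed, the leaf count scales directly with n:

```python
import random

def count_leaves(points, min_leaf=1):
    """Leaves of a fully grown 1-D regression tree: keep taking the best
    squared-error split until no leaf holds more than min_leaf points."""
    points = sorted(points)
    if len(points) <= min_leaf or points[0][0] == points[-1][0]:
        return 1
    best_sse, best_i = float("inf"), None
    for i in range(1, len(points)):
        if points[i][0] == points[i - 1][0]:
            continue  # cannot split between identical x values
        sse = 0.0
        for side in (points[:i], points[i:]):
            mean = sum(y for _, y in side) / len(side)
            sse += sum((y - mean) ** 2 for _, y in side)
        if sse < best_sse:
            best_sse, best_i = sse, i
    if best_i is None:
        return 1
    return (count_leaves(points[:best_i], min_leaf)
            + count_leaves(points[best_i:], min_leaf))

rng = random.Random(0)
def noisy_sample(n):
    return [(rng.random(), rng.random()) for _ in range(n)]

print(count_leaves(noisy_sample(20)), count_leaves(noisy_sample(200)))  # -> 20 200
```

With pruning or a minimum leaf size, the growth slows down or stops, which is the answer's point: whether the model behaves parametrically depends on the splitting/pruning rule, not on trees per se.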
13,447
Are Random Forest and Boosting parametric or non-parametric?
In the statistical sense, a model is parametric if its parameters are learned or inferred from the data. A tree in this sense is nonparametric. Of course the tree depth is a parameter of the algorithm, but it is not inherently derived from the data, but rather an input parameter that has to be provided by the user.
Are Random Forest and Boosting parametric or non-parametric?
In the statistical sense, a model is parametric if its parameters are learned or inferred from the data. A tree in this sense is nonparametric. Of course the tree depth is a parameter of the algorithm,
Are Random Forest and Boosting parametric or non-parametric? In the statistical sense, a model is parametric if its parameters are learned or inferred from the data. A tree in this sense is nonparametric. Of course the tree depth is a parameter of the algorithm, but it is not inherently derived from the data, but rather...
Are Random Forest and Boosting parametric or non-parametric? In the statistical sense, a model is parametric if its parameters are learned or inferred from the data. A tree in this sense is nonparametric. Of course the tree depth is a parameter of the algorithm,
13,448
Are Random Forest and Boosting parametric or non-parametric?
I would have thought that whether a given training set yields only one possible set of computed parameters would also determine if the model is parametric. This is the case in boosting, logistic regression, linear regression and models of this sort, which would mostly be considered parametric, whereas the parameters e...
Are Random Forest and Boosting parametric or non-parametric?
I would have thought that whether a given training set yields only one possible set of computed parameters would also determine if the model is parametric. This is the case in boosting, logistic regression
Are Random Forest and Boosting parametric or non-parametric? I would have thought that whether a given training set yields only one possible set of computed parameters would also determine if the model is parametric. This is the case in boosting, logistic regression, linear regression and models of this sort, which would...
Are Random Forest and Boosting parametric or non-parametric? I would have thought that whether a given training set yields only one possible set of computed parameters would also determine if the model is parametric. This is the case in boosting, logistic regression
13,449
What is Recurrent Reinforcement Learning
What is a "recurrent reinforcement learning"? Recurrent reinforcement learning (RRL) was first introduced for training neural network trading systems in 1996. "Recurrent" means that previous output is fed into the model as a part of input. It was soon extended to trading in a FX market. The RRL technique has been foun...
What is Recurrent Reinforcement Learning
What is a "recurrent reinforcement learning"? Recurrent reinforcement learning (RRL) was first introduced for training neural network trading systems in 1996. "Recurrent" means that previous output i
What is Recurrent Reinforcement Learning What is a "recurrent reinforcement learning"? Recurrent reinforcement learning (RRL) was first introduced for training neural network trading systems in 1996. "Recurrent" means that previous output is fed into the model as a part of input. It was soon extended to trading in a F...
What is Recurrent Reinforcement Learning What is a "recurrent reinforcement learning"? Recurrent reinforcement learning (RRL) was first introduced for training neural network trading systems in 1996. "Recurrent" means that previous output i
13,450
What is Recurrent Reinforcement Learning
The distinction of (Deep) Recurrent RL is that the function mapping the agent's observations to its output action is a Recurrent Neural Network. A Recurrent Neural Network is a type of neural network that processes each observation sequentially, in the same way for each time step. Original paper: Deep Recurrent Q-Lear...
What is Recurrent Reinforcement Learning
The distinction of (Deep) Recurrent RL is that the function mapping the agent's observations to its output action is a Recurrent Neural Network. A Recurrent Neural Network is a type of neural network
What is Recurrent Reinforcement Learning The distinction of (Deep) Recurrent RL is that the function mapping the agent's observations to its output action is a Recurrent Neural Network. A Recurrent Neural Network is a type of neural network that processes each observation sequentially, in the same way for each time st...
What is Recurrent Reinforcement Learning The distinction of (Deep) Recurrent RL is that the function mapping the agent's observations to its output action is a Recurrent Neural Network. A Recurrent Neural Network is a type of neural network
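A minimal Elman-style recurrent cell makes "processes each observation sequentially, in the same way for each time step" concrete. This is a hypothetical NumPy sketch (sizes and initialization are arbitrary; real deep recurrent RL agents typically use LSTM cells plus a policy or Q-value head):

```python
import numpy as np

rng = np.random.default_rng(1)

class RNNCell:
    """Minimal Elman cell: h_t = tanh(W_x @ x_t + W_h @ h_{t-1} + b)."""
    def __init__(self, n_in, n_hidden):
        self.W_x = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_h = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)

    def step(self, x, h):
        # Same weights applied at every time step; h carries the history
        return np.tanh(self.W_x @ x + self.W_h @ h + self.b)

cell = RNNCell(n_in=4, n_hidden=8)
h = np.zeros(8)
for obs in rng.normal(size=(5, 4)):   # five time steps of 4-dim observations
    h = cell.step(obs, h)
print(h.shape)  # -> (8,)
```

The hidden state h is what summarizes the observation history in a partially observable environment; the agent's action head would read h rather than the raw current observation.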
13,451
Variance-covariance matrix in lmer
Mixed models are (generalized versions of) variance components models. You write down the fixed effects part, add error terms that may be common for some groups of observations, add link function if needed, and put this into a likelihood maximizer. The various variance structures you are describing, however, are the wo...
Variance-covariance matrix in lmer
Mixed models are (generalized versions of) variance components models. You write down the fixed effects part, add error terms that may be common for some groups of observations, add link function if n
Variance-covariance matrix in lmer Mixed models are (generalized versions of) variance components models. You write down the fixed effects part, add error terms that may be common for some groups of observations, add link function if needed, and put this into a likelihood maximizer. The various variance structures you ...
Variance-covariance matrix in lmer Mixed models are (generalized versions of) variance components models. You write down the fixed effects part, add error terms that may be common for some groups of observations, add link function if n
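In the variance-components view this answer describes, the model and the role of the "variance structure" can be written as

```latex
y = X\beta + Zb + \varepsilon,
\qquad b \sim N(0,\, \Sigma_b),
\quad \varepsilon \sim N(0,\, \sigma^2 I),
\qquad\text{so}\qquad
\operatorname{Var}(y) = Z\,\Sigma_b Z^\top + \sigma^2 I
```

Choosing diagonal, compound-symmetric, AR(1), or unstructured forms amounts to different parameterizations of $\Sigma_b$ (and, in software that allows it, of the residual covariance in place of $\sigma^2 I$).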
13,452
Variance-covariance matrix in lmer
The flexLambda branch of lmer provides such functionality. See https://github.com/lme4/lme4/issues/224 for examples of how to implement a specific structure of errors or random effects.
Variance-covariance matrix in lmer
The flexLambda branch of lmer provides such functionality. See https://github.com/lme4/lme4/issues/224 for examples of how to implement a specific structure of errors or random effects.
Variance-covariance matrix in lmer The flexLambda branch of lmer provides such functionality. See https://github.com/lme4/lme4/issues/224 for examples of how to implement a specific structure of errors or random effects.
Variance-covariance matrix in lmer The flexLambda branch of lmer provides such functionality. See https://github.com/lme4/lme4/issues/224 for examples of how to implement a specific structure of errors or random effects.
13,453
Variance-covariance matrix in lmer
To my knowledge lmer does not have an "easy" way to do this. Also, given that in most cases lmer makes heavy use of sparse matrices for Cholesky factorization, I would find it unlikely that it allows for totally unstructured VCVs. To address your question on "default structure": there is not a concept of defa...
Variance-covariance matrix in lmer
To my knowledge lmer does not have an "easy" way to do this. Also, given that in most cases lmer makes heavy use of sparse matrices for Cholesky factorization, I would find it unlikely that it allows
Variance-covariance matrix in lmer To my knowledge lmer does not have an "easy" way to do this. Also, given that in most cases lmer makes heavy use of sparse matrices for Cholesky factorization, I would find it unlikely that it allows for totally unstructured VCVs. To address your question on "default structure...
Variance-covariance matrix in lmer To my knowledge lmer does not have an "easy" way to do this. Also, given that in most cases lmer makes heavy use of sparse matrices for Cholesky factorization, I would find it unlikely that it allows
13,454
Is there an unbiased estimator of the Hellinger distance between two distributions?
No unbiased estimator either of $\mathfrak{H}$ or of $\mathfrak{H}^2$ exists for $f$ from any reasonably broad nonparametric class of distributions. We can show this with the beautifully simple argument of Bickel and Lehmann (1969). Unbiased estimation in convex families. The Annals of Mathematical Statistics, 40 (5) ...
Is there an unbiased estimator of the Hellinger distance between two distributions?
No unbiased estimator either of $\mathfrak{H}$ or of $\mathfrak{H}^2$ exists for $f$ from any reasonably broad nonparametric class of distributions. We can show this with the beautifully simple argume
Is there an unbiased estimator of the Hellinger distance between two distributions? No unbiased estimator either of $\mathfrak{H}$ or of $\mathfrak{H}^2$ exists for $f$ from any reasonably broad nonparametric class of distributions. We can show this with the beautifully simple argument of Bickel and Lehmann (1969). Un...
Is there an unbiased estimator of the Hellinger distance between two distributions? No unbiased estimator either of $\mathfrak{H}$ or of $\mathfrak{H}^2$ exists for $f$ from any reasonably broad nonparametric class of distributions. We can show this with the beautifully simple argume
13,455
Is there an unbiased estimator of the Hellinger distance between two distributions?
I don't know how to construct (if it exists) an unbiased estimator of the Hellinger distance. It seems possible to construct a consistent estimator. We have some fixed known density $f_0$, and a random sample $X_1,\dots,X_n$ from a density $f>0$. We want to estimate $$ H(f,f_0) = \sqrt{1 - \int_\mathscr{X} \sqrt{f(x)f_...
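As a numerical companion to the definition of $H(f,f_0)$ in the answer above: a Python sketch (hypothetical helper names, simple quadrature in place of any serious integrator) that evaluates the Hellinger distance and checks it against the known closed form for two unit-variance normals, $H^2 = 1 - e^{-(\mu_1-\mu_2)^2/8}$.

```python
import numpy as np

def hellinger(f, g, lo, hi, m=200_001):
    # H(f, g) = sqrt(1 - integral of sqrt(f * g)), integral by a simple
    # Riemann sum (fine here because the tails are negligible at lo/hi)
    x = np.linspace(lo, hi, m)
    dx = x[1] - x[0]
    bc = np.sum(np.sqrt(f(x) * g(x))) * dx  # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

def norm_pdf(mu, s):
    return lambda x: np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Check against the closed form for N(0, 1) vs N(1, 1): H^2 = 1 - exp(-1/8)
h = hellinger(norm_pdf(0.0, 1.0), norm_pdf(1.0, 1.0), -10.0, 11.0)
print(np.isclose(h, np.sqrt(1.0 - np.exp(-1.0 / 8.0))))  # -> True
```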
13,456
How can I predict values from new inputs of a linear model in R?
If you want the predicted values for train_x = 1, 2, and 3, use predict(mod, data.frame(train_x = c(1, 2, 3))).
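A Python analogue of the R one-liner above, with made-up noise-free training data so the predictions are easy to verify: fit a straight line by least squares, then evaluate it at the new inputs 1, 2, 3 (the counterpart of `predict(mod, data.frame(train_x = c(1, 2, 3)))`).

```python
import numpy as np

# Hypothetical training data: y = 2*x + 1 exactly
train_x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
train_y = 2.0 * train_x + 1.0

# Least-squares line, the analogue of mod <- lm(train_y ~ train_x)
b, a = np.polyfit(train_x, train_y, deg=1)  # slope, intercept

# Predict at the new inputs, the analogue of predict(mod, newdata)
new_x = np.array([1.0, 2.0, 3.0])
pred = a + b * new_x
print(pred)  # ~ [3, 5, 7]
```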
13,457
How to use weights in function lm in R?
I think R help page of lm answers your question pretty well. The only requirement for weights is that the vector supplied must be the same length as the data. You can even supply only the name of the variable in the data set, R will take care of the rest, NA management, etc. You can also use formulas in the weight argu...
13,458
How to use weights in function lm in R?
What you suggest should work. See if this makes sense: lm(c(8000, 50000, 116000) ~ c(6, 7, 8)) lm(c(8000, 50000, 116000) ~ c(6, 7, 8), weight = c(123, 123, 246)) lm(c(8000, 50000, 116000, 116000) ~ c(6, 7, 8, 8)) The second line produces the same intercept and slope as the third line (distinct from the first line's r...
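The equivalence the answer demonstrates (a case weight of 2 behaves like duplicating the observation, and only relative weights matter) can be checked outside R. A minimal Python sketch of weighted least squares, implemented the standard way `lm(..., weights = w)` behaves: minimize $\sum_i w_i (y_i - a - b x_i)^2$ by scaling rows with $\sqrt{w_i}$.

```python
import numpy as np

def wls(x, y, w):
    # Weighted least squares: scale each row by sqrt(w_i), then solve OLS
    X = np.column_stack([np.ones_like(x), x])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef  # (intercept, slope)

x = np.array([6.0, 7.0, 8.0])
y = np.array([8000.0, 50000.0, 116000.0])

# Weighting the third point twice as heavily (123, 123, 246) ...
c1 = wls(x, y, np.array([123.0, 123.0, 246.0]))
# ... gives the same fit as literally duplicating that observation:
c2 = wls(np.append(x, 8.0), np.append(y, 116000.0), np.ones(4))
print(np.allclose(c1, c2))  # -> True
```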
13,459
Why is LogLoss preferred over other proper scoring rules?
Arguments for the log score On the one hand, as kjetil b halvorsen writes, the log loss is just a reformulation of the log likelihood, which statisticians are very used to maximizing, so it is simply very natural as a KPI. (A somewhat more common convention is to minimize the score, in which case, one takes the negati...
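One concrete property worth seeing numerically: the log score punishes a confident wrong forecast far more harshly than the Brier score does. A Python sketch (hypothetical function names, not from the answer) for binary outcomes:

```python
import numpy as np

def log_score(y, p):
    # Mean negative log-likelihood of binary outcomes (lower is better);
    # clipping avoids log(0) for degenerate forecasts
    p = np.clip(np.asarray(p, float), 1e-15, 1 - 1e-15)
    y = np.asarray(y, float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def brier_score(y, p):
    y, p = np.asarray(y, float), np.asarray(p, float)
    return np.mean((y - p) ** 2)

# Event happened (y = 1) but the model said p = 0.01:
# log score ~ 4.61, Brier score ~ 0.98 -- the log score explodes
print(log_score([1], [0.01]), brier_score([1], [0.01]))
```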
13,460
At What Level is a $\chi^2$ test Mathematically Identical to a $z$-test of Proportions?
Let us have a 2x2 frequency table where columns are two groups of respondents and rows are the two responses "Yes" and "No". And we've turned the frequencies into the proportions within group, i.e. into the vertical profiles: Gr1 Gr2 Total Yes p1 p2 p No q1 q2 q -------------- 100...
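The identity behind the question (the squared two-sample z statistic with pooled variance equals the Pearson chi-square statistic, without continuity correction, on the 2x2 table) is easy to verify numerically. A Python sketch with made-up counts:

```python
import numpy as np

def ztest_prop(y1, n1, y2, n2):
    # Two-sample z statistic for proportions with the pooled variance estimate
    p1, p2 = y1 / n1, y2 / n2
    p = (y1 + y2) / (n1 + n2)
    return (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

def chi2_2x2(y1, n1, y2, n2):
    # Pearson chi-square (no continuity correction) on the 2x2 table
    obs = np.array([[y1, y2], [n1 - y1, n2 - y2]], dtype=float)
    exp = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return ((obs - exp) ** 2 / exp).sum()

# Hypothetical counts: 30/100 "Yes" in group 1, 45/120 "Yes" in group 2
z = ztest_prop(30, 100, 45, 120)
x2 = chi2_2x2(30, 100, 45, 120)
print(np.isclose(z * z, x2))  # -> True: z^2 equals the chi-square statistic
```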
13,461
How to achieve strictly positive forecasts?
With the forecast package for R, simply set lambda=0 when fitting a model. For example: fit <- auto.arima(x, lambda=0) forecast(fit) Many of the functions in the package allow the lambda argument. When the lambda argument is specified, a Box-Cox transformation is used. The value $\lambda=0$ specifies a log transformat...
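The mechanism behind `lambda=0` is just "model the log, back-transform the forecasts". A Python sketch of that idea with made-up data and a linear trend standing in for whatever model you actually fit: because the forecasts come out of `exp(.)`, they are strictly positive by construction.

```python
import numpy as np

# Hypothetical strictly positive series
y = np.array([12.0, 15.0, 11.0, 18.0, 22.0, 19.0, 25.0])
t = np.arange(y.size)

# Fit any model on the log scale (here a linear trend as a stand-in)...
b, a = np.polyfit(t, np.log(y), 1)

# ...then back-transform: exp(.) guarantees positive forecasts
future = np.arange(y.size, y.size + 5)
fc = np.exp(a + b * future)
print((fc > 0).all())  # -> True
```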
13,462
Can I convert a covariance matrix into uncertainties for variables?
There is no single number that encompasses all of the covariance information - there are 6 pieces of information, so you'd always need 6 numbers. However there are a number of things you could consider doing. Firstly, the error (variance) in any particular direction $i$, is given by $\sigma_i^2 = \mathbf{e}_i ^ \top \S...
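The formula $\sigma_i^2 = \mathbf{e}_i^\top \Sigma \mathbf{e}_i$ is a one-liner to check: for a standard basis vector it picks out a diagonal entry, and for any unit direction it mixes variances and covariances. A Python sketch with a made-up 3x3 covariance matrix:

```python
import numpy as np

# Hypothetical 3x3 covariance matrix for errors in (x, y, z)
Sigma = np.array([[4.0, 1.2, 0.5],
                  [1.2, 9.0, 0.3],
                  [0.5, 0.3, 1.0]])

# Variance along coordinate axis i is e_i^T Sigma e_i = Sigma[i, i]
e_y = np.array([0.0, 1.0, 0.0])
var_y = e_y @ Sigma @ e_y
print(var_y == Sigma[1, 1])  # -> True

# The same quadratic form gives the variance along any unit direction
d = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
print(d @ Sigma @ d)  # ~ (4 + 9)/2 + 1.2 = 7.7, covariance included
```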
13,463
Survival analysis: continuous vs discrete time
The choice of the survival model should be guided by the underlying phenomenon. In this case it appears to be continuous, even if the data is collected in a somewhat discrete manner. A resolution of one month would be just fine over a 5-year period. However, the large number of ties at 6 and 12 months makes one wonder ...
13,464
Survival analysis: continuous vs discrete time
I suspect if you use continuous time models you will want to use interval censoring, reflecting the fact that you don't know the exact time of failure, just an interval in which the failure occurred. If you fit parametric regression models with interval censoring using maximum likelihood the tied survival times is not ...
13,465
Survival analysis: continuous vs discrete time
There will be tied survival times in most analyses, but big, clear chunks of ties at particular events are troubling. I would think long and hard about the study itself, how it's collecting data, etc. Because, outside of some methodological needs to use one type of time or the other, how you model survival should depend ...
13,466
Survival analysis: continuous vs discrete time
If you have covariates that vary over time for some individuals (e.g. family income may vary in your example over the lifetime of a child), survival models (parametric and the Cox model) require you to slice up the data into discrete intervals defined by the varying covariates. I found this pdf of lecture notes by ...
13,467
Interpreting the drop1 output in R
drop1 gives you a comparison of models based on the AIC criterion, and when using the option test="F" you add a "type II ANOVA" to it, as explained in the help files. As long as you only have continuous variables, this table is exactly equivalent to summary(lm1), as the F-values are just those T-values squared. P-valu...
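The claim that the drop1 F values are just the squared t values (for single-df continuous terms) can be verified by hand. A Python sketch with simulated data: fit the full model, compute the t statistic for one coefficient, then recompute the drop1-style F from the change in residual sum of squares.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def sse(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r, beta

X_full = np.column_stack([np.ones(n), x1, x2])
X_red = X_full[:, :2]                      # model with x2 dropped
sse_f, beta = sse(X_full, y)
sse_r, _ = sse(X_red, y)

# t statistic for x2 in the full model
dof = n - X_full.shape[1]
cov = np.linalg.inv(X_full.T @ X_full) * sse_f / dof
t = beta[2] / np.sqrt(cov[2, 2])

# F statistic from the 1-df model comparison, as drop1(..., test="F") does
F = (sse_r - sse_f) / (sse_f / dof)
print(np.isclose(F, t * t))  # -> True
```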
13,468
Interpreting the drop1 output in R
For reference, these are the values that are included in the table: Df refers to Degrees of freedom, "the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary." The Sum of Sq column refers to the sum of squares (or more precisely sum of squared deviations)...
13,469
Brier Score and extreme class imbalance
If there is extreme class imbalance (e.g. 5 positive cases vs 1,000 negative cases), how does the Brier score ensure that we select the model that gives us the best performance regarding high probability forecasts for the 5 positive cases? As we do not care if the negative cases have predictions near 0 or 0.5 as long a...
13,470
Brier Score and extreme class imbalance
The paper "Class Probability Estimates are Unreliable for Imbalanced Data (and How to Fix Them)" (Wallace & Dahabreh 2012) argues that the Brier score as is fails to account for poor calibrations in minority classes. They propose a stratified Brier score: $$BS^+ = \frac{\sum_{y_i=1}\left(y_i- \hat{P}\left\{y_i|x_i\righ...
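The stratified score from Wallace & Dahabreh (2012) is simple to compute: it is just the Brier score evaluated separately within each class. A Python sketch with toy imbalanced data, where the majority class keeps the overall Brier score small while $BS^+$ exposes the badly calibrated positives:

```python
import numpy as np

def stratified_brier(y, p):
    # Brier score computed separately within each class
    y, p = np.asarray(y, float), np.asarray(p, float)
    bs_pos = np.mean((y[y == 1] - p[y == 1]) ** 2)  # BS+ over positives
    bs_neg = np.mean((y[y == 0] - p[y == 0]) ** 2)  # BS- over negatives
    return bs_pos, bs_neg

# Toy data: positives get p = 0.2 (badly calibrated), negatives p = 0.1
y = [1, 1, 0, 0, 0, 0, 0, 0]
p = [0.2, 0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
bs_pos, bs_neg = stratified_brier(y, p)
print(round(bs_pos, 4), round(bs_neg, 4))  # -> 0.64 0.01
```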
13,471
Brier Score and extreme class imbalance
If there is extreme class imbalance (e.g. 5 positive cases vs 1,000 negative cases), how does the Brier score ensure that we select the model that gives us the best performance regarding high probability forecasts for the 5 positive cases? As we do not care if the negative cases have predictions near 0 or 0.5 as long a...
13,472
How to interpret PCA on time-series data?
Q1: What is the connection between PC time series and "maximum variance"? The data that they are analyzing are $\hat t$ data points for each of the $n$ neurons, so one can think about that as $\hat t$ data points in the $n$-dimensional space $\mathbb R^n$. It is "a cloud of points", so performing PCA amounts to finding...
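The "cloud of $\hat t$ points in $\mathbb R^n$" picture can be made concrete with simulated data standing in for the neural recordings. A Python sketch via SVD: each principal component yields one time series of projections, and the variance of the PC1 time series equals the largest eigenvalue of the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_times = 20, 200
X = rng.normal(size=(n_times, n_neurons))  # rows = time points in R^n

Xc = X - X.mean(axis=0)        # center each neuron's time series
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_timecourses = Xc @ Vt.T     # one projection time series per PC
print(pc_timecourses.shape)    # -> (200, 20)

# "Maximum variance": the PC1 time series carries the largest
# eigenvalue of the sample covariance matrix
var_pc1 = pc_timecourses[:, 0].var(ddof=1)
print(np.isclose(var_pc1, S[0] ** 2 / (n_times - 1)))  # -> True
```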
13,473
How to interpret PCA on time-series data?
With respect to the first question. Consider the whole time series through a particular voxel to be a single draw from a multivariate distribution. We can now think of this as a multivariate vector much like any other that we might apply PCA to. The first $p$ columns of $\bf V$ are then the eigen-timecourses which, whe...
13,474
Calculating standard error after a log-transform
Your main problem with the initial calculation is there's no good reason why $e^{\text{sd}(\log(Y))}$ should be like $\text{sd}(Y)$. It's generally quite different. In some situations, you can compute a rough approximation of $\text{sd}(Y)$ from $\text{sd}(\log(Y))$ via Taylor expansion. $$\text{Var}(g(X))\approx \left...
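The Taylor (delta-method) approximation in the answer, applied with $g = \exp$, gives $\text{sd}(Y) \approx e^{\mu}\,\text{sd}(\log Y)$ when $\text{sd}(\log Y)$ is small. A Python simulation sketch (made-up parameters) showing the approximation is close, while $e^{\text{sd}(\log Y)}$ would be nowhere near:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated log-normal data with a small sd on the log scale,
# where the first-order Taylor approximation works well
logy = rng.normal(loc=2.0, scale=0.1, size=200_000)
y = np.exp(logy)

# Var(g(X)) ~ g'(mu)^2 Var(X) with g = exp:
# sd(Y) ~ exp(mean(log Y)) * sd(log Y)
approx_sd = np.exp(logy.mean()) * logy.std()
print(approx_sd, y.std())  # the two agree to within roughly 1%
```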
13,475
Calculating standard error after a log-transform
It sounds like you effectively want the geometric standard error, akin to the geometric mean exp(mean(log(x))). While it might seem reasonable to compute that as: exp(sd(log(x)/sqrt(n-1))) You and others have already pointed out that that isn't correct for a few reasons. Instead, use: exp(mean(log(x))) * (sd(log(x))/s...
13,476
Are "random sample" and "iid random variable" synonyms?
You don't say what the other statistics book is, but I'd guess that it is a book (or section) about finite population sampling. When you sample random variables, i.e. when you consider a set $X_1,\dots,X_n$ of $n$ random variables, you know that if they are independent, $f(x_1,\dots,x_n)=f(x_1)\cdots f(x_n)$, and ident...
13,477
Are "random sample" and "iid random variable" synonyms?
I will not bore you with probabilistic definitions and formulas, which you may easily pick up in any textbook (or here is a good place to start). Just think of this intuitively: a random sample is a set of random values. In general, each one of the values may either be identically or differently distributed. $i.i.d.$ sam...
13,478
Are "random sample" and "iid random variable" synonyms?
A random variable, usually written X, is a variable whose possible values are numerical outcomes of a random phenomenon. The random phenomenon may produce outcomes that have numerical values captured by the random variable -- e.g. number of heads in 10 tosses of a coin or incomes/heights etc. in a sample -- but that is no...
13,479
Are "random sample" and "iid random variable" synonyms?
A random sample is a realization of a sequence of random variables. Those random variables may be i.i.d or not...
13,480
Error in normal approximation to a uniform sum distribution
Let $U_1, U_2,\dots$ be iid $\mathcal U(-b,b)$ random variables and consider the normalized sum $$ S_n = \frac{\sqrt{3} \sum_{i=1}^n U_i}{b \sqrt{n}} \>, $$ and the associated $\sup$ norm $$ \delta_n = \sup_{x\in\mathbb R} |F_n(x) - \Phi(x)| \>, $$ where $F_n$ is the distribution of $S_n$. Lemma 1 (Uspensky): The follo...
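A quick Monte Carlo sketch in Python (simulation parameters made up; this only illustrates the setup, not the lemma's bound) of the quantities defined above: draw many normalized sums $S_n$ and estimate the sup-norm distance $\delta_n$ to the standard normal CDF.

```python
import math
import numpy as np

def phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = np.random.default_rng(1)
b, n = 1.0, 12
# S_n = sqrt(3) * sum(U_i) / (b * sqrt(n)) with U_i ~ Uniform(-b, b)
s = np.sort(math.sqrt(3.0) * rng.uniform(-b, b, size=(100_000, n)).sum(axis=1)
            / (b * math.sqrt(n)))

# Empirical sup-norm distance to Phi (a KS-type statistic; includes
# Monte Carlo noise on top of the true delta_n)
ecdf = np.arange(1, s.size + 1) / s.size
delta = np.max(np.abs(ecdf - np.array([phi(v) for v in s])))
print(delta < 0.02)  # already tiny at n = 12
```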
13,481
How to calculate perplexity of a holdout with Latent Dirichlet Allocation?
This is indeed something often glossed over. Some people are doing something a bit cheeky: holding out a proportion of the words in each document, and using predictive probabilities of these held-out words given the document-topic mixtures as well as the topic-word mixtures. This is obviously not ideal as it doe...
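Once you have a held-out log-likelihood (however it was approximated), the perplexity itself is a one-liner: $\text{perplexity} = \exp(-\log p(\text{held-out words}) / N)$ with $N$ the number of held-out tokens. A Python sketch with a sanity check:

```python
import math

def perplexity(total_loglik, n_tokens):
    # perplexity = exp( - log p(held-out words) / N )
    return math.exp(-total_loglik / n_tokens)

# Sanity check: if each of 1000 held-out tokens had probability 1/50
# under the model, the perplexity is 50
print(round(perplexity(1000 * math.log(1.0 / 50.0), 1000), 6))  # -> 50.0
```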
13,482
How to calculate perplexity of a holdout with Latent Dirichlet Allocation?
We know that parameters of LDA are estimated through Variational Inference. So $\log p(w|\alpha, \beta) = E[\log p(\theta,z,w|\alpha,\beta)]-E[\log q(\theta,z)] + D(q(\theta,z)||p(\theta,z))$. If your variational distribution is enough equal to the original distribution, then $D(q(\theta,z)||p(\theta,z)) = 0$. So, $\...
How to calculate perplexity of a holdout with Latent Dirichlet Allocation?
We know that parameters of LDA are estimated through Variational Inference. So $\log p(w|\alpha, \beta) = E[\log p(\theta,z,w|\alpha,\beta)]-E[\log q(\theta,z)] + D(q(\theta,z)||p(\theta,z))$. If yo
How to calculate perplexity of a holdout with Latent Dirichlet Allocation? We know that parameters of LDA are estimated through Variational Inference. So $\log p(w|\alpha, \beta) = E[\log p(\theta,z,w|\alpha,\beta)]-E[\log q(\theta,z)] + D(q(\theta,z)||p(\theta,z))$. If your variational distribution is close enough t...
How to calculate perplexity of a holdout with Latent Dirichlet Allocation? We know that parameters of LDA are estimated through Variational Inference. So $\log p(w|\alpha, \beta) = E[\log p(\theta,z,w|\alpha,\beta)]-E[\log q(\theta,z)] + D(q(\theta,z)||p(\theta,z))$. If yo
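When the KL term is negligible, the standard held-out perplexity (as in Blei, Ng & Jordan, 2003) follows by substituting the variational lower bound for each test document's log-likelihood:

$$\text{perplexity}(D_{\text{test}}) = \exp\left(-\frac{\sum_{d=1}^{M} \log p(\mathbf{w}_d \mid \alpha, \beta)}{\sum_{d=1}^{M} N_d}\right)$$

where $M$ is the number of test documents, $N_d$ is the token count of document $d$, and each $\log p(\mathbf{w}_d \mid \alpha, \beta)$ is approximated by its variational bound.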
13,483
How to combine confidence intervals for a variance component of a mixed-effects model when using multiple imputation
This is a great question! Not sure this is a full answer, however, I drop these few lines in case it helps. It seems that Yucel and Demirtas (2010) refer to an older paper published in the JCGS, Computational strategies for multivariate linear mixed-effects models with missing values, which uses a hybrid EM/Fisher sco...
How to combine confidence intervals for a variance component of a mixed-effects model when using mul
This is a great question! Not sure this is a full answer, however, I drop these few lines in case it helps. It seems that Yucel and Demirtas (2010) refer to an older paper published in the JCGS, Compu
How to combine confidence intervals for a variance component of a mixed-effects model when using multiple imputation This is a great question! Not sure this is a full answer, however, I drop these few lines in case it helps. It seems that Yucel and Demirtas (2010) refer to an older paper published in the JCGS, Computat...
How to combine confidence intervals for a variance component of a mixed-effects model when using mul This is a great question! Not sure this is a full answer, however, I drop these few lines in case it helps. It seems that Yucel and Demirtas (2010) refer to an older paper published in the JCGS, Compu
13,484
How to combine confidence intervals for a variance component of a mixed-effects model when using multiple imputation
Repeated comment from above: I'm not sure that a proper analytical solution to this problem even exists. I've looked at some additional literature, but this problem is elegantly overlooked everywhere. I've also noticed that Yucel & Demirtas (in the article I mentioned, page 798) write: These multiply imputed datasets ...
How to combine confidence intervals for a variance component of a mixed-effects model when using mul
Repeated comment from above: I'm not sure that a proper analytical solution to this problem even exists. I've looked at some additional literature, but this problem is elegantly overlooked everywhere.
How to combine confidence intervals for a variance component of a mixed-effects model when using multiple imputation Repeated comment from above: I'm not sure that a proper analytical solution to this problem even exists. I've looked at some additional literature, but this problem is elegantly overlooked everywhere. I'...
How to combine confidence intervals for a variance component of a mixed-effects model when using mul Repeated comment from above: I'm not sure that a proper analytical solution to this problem even exists. I've looked at some additional literature, but this problem is elegantly overlooked everywhere.
13,485
How to combine confidence intervals for a variance component of a mixed-effects model when using multiple imputation
Disclaimer: This idea might be foolish & I'm not going to pretend to understand the theoretical implications of what I'm proposing. "Suggestion": Why don't you simply impute 100 (I know you normally do 5) datasets, run lme4 or nlme, get the confidence intervals (you have 100 of them) and then: Using a small int...
How to combine confidence intervals for a variance component of a mixed-effects model when using mul
Disclaimer: This idea might be foolish & I'm not going to pretend to understand the theoretical implications of what I'm proposing. "Suggestion" : Why don't you simply impute 100 (I know you normal
How to combine confidence intervals for a variance component of a mixed-effects model when using multiple imputation Disclaimer: This idea might be foolish & I'm not going to pretend to understand the theoretical implications of what I'm proposing. "Suggestion" : Why don't you simply impute 100 (I know you normally ...
How to combine confidence intervals for a variance component of a mixed-effects model when using mul Disclaimer: This idea might be foolish & I'm not going to pretend to understand the theoretical implications of what I'm proposing. "Suggestion" : Why don't you simply impute 100 (I know you normal
13,486
What is the difference between Markov chains and Markov processes?
From the preface to the first edition of "Markov Chains and Stochastic Stability" by Meyn and Tweedie: We deal here with Markov Chains. Despite the initial attempts by Doob and Chung [99,71] to reserve this term for systems evolving on countable spaces with both discrete and continuous time parameters, usage seems to ...
What is the difference between Markov chains and Markov processes?
From the preface to the first edition of "Markov Chains and Stochastic Stability" by Meyn and Tweedie: We deal here with Markov Chains. Despite the initial attempts by Doob and Chung [99,71] to reser
What is the difference between Markov chains and Markov processes? From the preface to the first edition of "Markov Chains and Stochastic Stability" by Meyn and Tweedie: We deal here with Markov Chains. Despite the initial attempts by Doob and Chung [99,71] to reserve this term for systems evolving on countable spaces...
What is the difference between Markov chains and Markov processes? From the preface to the first edition of "Markov Chains and Stochastic Stability" by Meyn and Tweedie: We deal here with Markov Chains. Despite the initial attempts by Doob and Chung [99,71] to reser
13,487
What is the difference between Markov chains and Markov processes?
One method of classification of stochastic processes is based on the nature of the time parameter (discrete or continuous) and state space (discrete or continuous). This leads to four categories of stochastic processes. If the state space of a stochastic process is discrete, whether the time parameter is discrete or contin...
What is the difference between Markov chains and Markov processes?
One method of classification of stochastic processes is based on the nature of the time parameter (discrete or continuous) and state space (discrete or continuous). This leads to four categories of stoc
What is the difference between Markov chains and Markov processes? One method of classification of stochastic processes is based on the nature of the time parameter (discrete or continuous) and state space (discrete or continuous). This leads to four categories of stochastic processes. If the state space of a stochastic pr...
What is the difference between Markov chains and Markov processes? One method of classification of stochastic processes is based on the nature of the time parameter (discrete or continuous) and state space (discrete or continuous). This leads to four categories of stoc
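The discrete-time, discrete-state cell of that fourfold classification (a Markov chain in the narrow sense) can be illustrated with a small simulation; the two-state transition matrix here is an arbitrary example:

```python
import numpy as np

# A discrete-time Markov chain on the discrete state space {0, 1}.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])  # transition matrix; each row sums to 1

rng = np.random.default_rng(1)
state, visits = 0, np.zeros(2)
for _ in range(50_000):
    state = rng.choice(2, p=P[state])  # next state depends only on the current one
    visits[state] += 1

# The empirical occupation frequencies approach the stationary distribution
# pi = (5/6, 1/6), the solution of pi P = pi.
print(np.round(visits / visits.sum(), 2))
```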
13,488
What is the origin of the autoencoder neural networks?
According to the history provided in Schmidhuber, "Deep learning in neural networks: an overview," Neural Networks (2015), auto-encoders were proposed as a method for unsupervised pre-training in Ballard, "Modular learning in neural networks," Proceedings AAAI (1987). It's not clear if that's the first time auto-encode...
What is the origin of the autoencoder neural networks?
According to the history provided in Schmidhuber, "Deep learning in neural networks: an overview," Neural Networks (2015), auto-encoders were proposed as a method for unsupervised pre-training in Ball
What is the origin of the autoencoder neural networks? According to the history provided in Schmidhuber, "Deep learning in neural networks: an overview," Neural Networks (2015), auto-encoders were proposed as a method for unsupervised pre-training in Ballard, "Modular learning in neural networks," Proceedings AAAI (198...
What is the origin of the autoencoder neural networks? According to the history provided in Schmidhuber, "Deep learning in neural networks: an overview," Neural Networks (2015), auto-encoders were proposed as a method for unsupervised pre-training in Ball
13,489
What is the origin of the autoencoder neural networks?
The paper below talks about autoencoders indirectly and dates back to 1986 (a year earlier than the 1987 paper by Ballard): D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning internal representations by error propagation," Parallel Distributed Processing. Vol 1: Foundations. MIT Press, Cambridge, MA,...
What is the origin of the autoencoder neural networks?
The paper below talks about autoencoders indirectly and dates back to 1986 (a year earlier than the 1987 paper by Ballard): D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning internal
What is the origin of the autoencoder neural networks? The paper below talks about autoencoders indirectly and dates back to 1986 (a year earlier than the 1987 paper by Ballard): D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning internal representations by error propagation," Parallel Distributed Pr...
What is the origin of the autoencoder neural networks? The paper below talks about autoencoders indirectly and dates back to 1986 (a year earlier than the 1987 paper by Ballard): D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning internal
13,490
What is the origin of the autoencoder neural networks?
Reviving this thread - In "Neurocomputing" by Robert Hecht-Nielsen @ 1990 there is reference to a 1986 paper by Cottrell/Munro/Zipser that outlines use of a neural network that has the architecture of an autoencoder, and is trained on the identity function, for compression and reconstruction of image data. The term "a...
What is the origin of the autoencoder neural networks?
Reviving this thread - In "Neurocomputing" by Robert Hecht-Nielsen @ 1990 there is reference to a 1986 paper by Cottrell/Munro/Zipser that outlines use of a neural network that has the architecture of
What is the origin of the autoencoder neural networks? Reviving this thread - In "Neurocomputing" by Robert Hecht-Nielsen @ 1990 there is reference to a 1986 paper by Cottrell/Munro/Zipser that outlines use of a neural network that has the architecture of an autoencoder, and is trained on the identity function, for com...
What is the origin of the autoencoder neural networks? Reviving this thread - In "Neurocomputing" by Robert Hecht-Nielsen @ 1990 there is reference to a 1986 paper by Cottrell/Munro/Zipser that outlines use of a neural network that has the architecture of
13,491
What is the origin of the autoencoder neural networks?
The first clear autoencoder presentation featuring a feedforward, multilayer neural network with a bottleneck layer was presented by Kramer in 1991 (full text at https://people.engr.tamu.edu/rgutier/web_courses/cpsc636_s10/kramer1991nonlinearPCA.pdf). He discusses dimensionality reduction and feature extraction and app...
What is the origin of the autoencoder neural networks?
The first clear autoencoder presentation featuring a feedforward, multilayer neural network with a bottleneck layer was presented by Kramer in 1991 (full text at https://people.engr.tamu.edu/rgutier/w
What is the origin of the autoencoder neural networks? The first clear autoencoder presentation featuring a feedforward, multilayer neural network with a bottleneck layer was presented by Kramer in 1991 (full text at https://people.engr.tamu.edu/rgutier/web_courses/cpsc636_s10/kramer1991nonlinearPCA.pdf). He discusses ...
What is the origin of the autoencoder neural networks? The first clear autoencoder presentation featuring a feedforward, multilayer neural network with a bottleneck layer was presented by Kramer in 1991 (full text at https://people.engr.tamu.edu/rgutier/w
13,492
optimizing auc vs logloss in binary classification problems
As you mention, AUC is a rank statistic (i.e. scale invariant) & log loss is a calibration statistic. One may trivially construct a model which has the same AUC but fails to minimize log loss w.r.t. some other model by scaling the predicted values. Consider: auc <- function(prediction, actual) { mann_whit <- wilcox....
optimizing auc vs logloss in binary classification problems
As you mention, AUC is a rank statistic (i.e. scale invariant) & log loss is a calibration statistic. One may trivially construct a model which has the same AUC but fails to minimize log loss w.r.t. s
optimizing auc vs logloss in binary classification problems As you mention, AUC is a rank statistic (i.e. scale invariant) & log loss is a calibration statistic. One may trivially construct a model which has the same AUC but fails to minimize log loss w.r.t. some other model by scaling the predicted values. Consider: a...
optimizing auc vs logloss in binary classification problems As you mention, AUC is a rank statistic (i.e. scale invariant) & log loss is a calibration statistic. One may trivially construct a model which has the same AUC but fails to minimize log loss w.r.t. s
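The scaling argument can be made concrete independently of the (truncated) R snippet above; here is a hedged Python sketch with a hand-rolled Mann-Whitney AUC and log loss, on made-up labels and predictions:

```python
import numpy as np

def auc(y, p):
    # Mann-Whitney form: fraction of (positive, negative) pairs ranked correctly
    pos, neg = p[y == 1], p[y == 0]
    return np.mean([(a > b) + 0.5 * (a == b) for a in pos for b in neg])

def logloss(y, p):
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([0, 0, 1, 1, 1, 0])
p = np.array([0.1, 0.3, 0.7, 0.9, 0.6, 0.4])
p_shrunk = 0.5 + 0.1 * (p - 0.5)   # same ranking, pulled toward 0.5

print(auc(y, p) == auc(y, p_shrunk))        # True: AUC only sees the ranking
print(logloss(y, p), logloss(y, p_shrunk))  # different: log loss sees calibration
```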
13,493
optimizing auc vs logloss in binary classification problems
For imbalanced labels, area under precision-recall curve is preferable to AUC (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4349800/ or python scikit-learn docs) Also, if your goal is to maximize precision, you can consider doing cross-validation to select the best model (algorithm + hyperparameters) using "precision" ...
optimizing auc vs logloss in binary classification problems
For imbalanced labels, area under precision-recall curve is preferable to AUC (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4349800/ or python scikit-learn docs) Also, if your goal is to maximize prec
optimizing auc vs logloss in binary classification problems For imbalanced labels, area under precision-recall curve is preferable to AUC (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4349800/ or python scikit-learn docs) Also, if your goal is to maximize precision, you can consider doing cross-validation to select the...
optimizing auc vs logloss in binary classification problems For imbalanced labels, area under precision-recall curve is preferable to AUC (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4349800/ or python scikit-learn docs) Also, if your goal is to maximize prec
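For reference, the area under the precision-recall curve in its common "average precision" form can be computed directly; this toy sketch uses made-up labels and scores for an imbalanced set (20% positives) and avoids any particular library:

```python
import numpy as np

# Hedged sketch: average precision = mean of the precision values at the
# ranks where true positives are recovered, scanning scores in decreasing order.
def average_precision(y, scores):
    order = np.argsort(-np.asarray(scores))   # rank by decreasing score
    y = np.asarray(y)[order]
    precision = np.cumsum(y) / np.arange(1, len(y) + 1)
    return float(np.sum(precision * y) / y.sum())

y = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
scores = [0.1, 0.2, 0.15, 0.05, 0.3, 0.25, 0.1, 0.2, 0.9, 0.22]
print(average_precision(y, scores))  # 0.75: one positive at rank 1, one at rank 4
```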
13,494
What is the connection between regularization and the method of lagrange multipliers ?
Say we are optimizing a model with parameters $\vec{\theta}$, by minimizing some criterion $f(\vec{\theta})$ subject to a constraint on the magnitude of the parameter vector (for instance to implement a structural risk minimization approach by constructing a nested set of models of increasing complexity), we would need...
What is the connection between regularization and the method of lagrange multipliers ?
Say we are optimizing a model with parameters $\vec{\theta}$, by minimizing some criterion $f(\vec{\theta})$ subject to a constraint on the magnitude of the parameter vector (for instance to implement
What is the connection between regularization and the method of lagrange multipliers ? Say we are optimizing a model with parameters $\vec{\theta}$, by minimizing some criterion $f(\vec{\theta})$ subject to a constraint on the magnitude of the parameter vector (for instance to implement a structural risk minimization a...
What is the connection between regularization and the method of lagrange multipliers ? Say we are optimizing a model with parameters $\vec{\theta}$, by minimizing some criterion $f(\vec{\theta})$ subject to a constraint on the magnitude of the parameter vector (for instance to implement
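The equivalence can be written out explicitly. Constraining the parameter norm gives the problem

$$\min_{\vec{\theta}} f(\vec{\theta}) \quad \text{subject to} \quad \|\vec{\theta}\|^2 \le C,$$

with Lagrangian

$$L(\vec{\theta}, \lambda) = f(\vec{\theta}) + \lambda\left(\|\vec{\theta}\|^2 - C\right), \qquad \lambda \ge 0.$$

For a fixed multiplier $\lambda$, the term $-\lambda C$ does not depend on $\vec{\theta}$, so minimizing $L$ over $\vec{\theta}$ is exactly the regularized objective $f(\vec{\theta}) + \lambda\|\vec{\theta}\|^2$: every constraint radius $C$ corresponds to some penalty weight $\lambda$, and vice versa.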
13,495
How do CNN's avoid the vanishing gradient problem
The vanishing gradient problem requires us to use small learning rates with gradient descent which then needs many small steps to converge. This is a problem if you have a slow computer which takes a long time for each step. If you have a fast GPU which can perform many more steps in a day, this is less of a problem. T...
How do CNN's avoid the vanishing gradient problem
The vanishing gradient problem requires us to use small learning rates with gradient descent which then needs many small steps to converge. This is a problem if you have a slow computer which takes a
How do CNN's avoid the vanishing gradient problem The vanishing gradient problem requires us to use small learning rates with gradient descent which then needs many small steps to converge. This is a problem if you have a slow computer which takes a long time for each step. If you have a fast GPU which can perform many...
How do CNN's avoid the vanishing gradient problem The vanishing gradient problem requires us to use small learning rates with gradient descent which then needs many small steps to converge. This is a problem if you have a slow computer which takes a
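To make the small-learning-rate point concrete: for sigmoid activations the derivative is at most $1/4$, so the factor that backpropagation multiplies in per layer shrinks geometrically with depth. A minimal, assumption-laden illustration (ignoring the weight terms, i.e. taking the best case everywhere):

```python
import math

# sigma'(x) = sigma(x) * (1 - sigma(x)) <= 0.25, so a product of n such
# derivative factors shrinks at least like 0.25 ** n.
def sigmoid_grad(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

grad = 1.0
for layer in range(20):
    grad *= sigmoid_grad(0.0)  # best case: derivative at its maximum, 0.25

print(grad)  # 0.25 ** 20, about 9.1e-13: why each gradient step moves so little
```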
13,496
Do I need to drop variables that are correlated/collinear before running kmeans?
Don't drop any variables, but do consider using PCA. Here's why. Firstly, as pointed out by Anony-mousse, k-means is not badly affected by collinearity/correlations. You don't need to throw away information because of that. Secondly, if you drop your variables in the wrong way, you'll artificially bring some samples cl...
Do I need to drop variables that are correlated/collinear before running kmeans?
Don't drop any variables, but do consider using PCA. Here's why. Firstly, as pointed out by Anony-mousse, k-means is not badly affected by collinearity/correlations. You don't need to throw away infor
Do I need to drop variables that are correlated/collinear before running kmeans? Don't drop any variables, but do consider using PCA. Here's why. Firstly, as pointed out by Anony-mousse, k-means is not badly affected by collinearity/correlations. You don't need to throw away information because of that. Secondly, if yo...
Do I need to drop variables that are correlated/collinear before running kmeans? Don't drop any variables, but do consider using PCA. Here's why. Firstly, as pointed out by Anony-mousse, k-means is not badly affected by collinearity/correlations. You don't need to throw away infor
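A hedged sketch of the PCA step suggested above, using plain NumPy SVD rather than any particular library; the synthetic collinear data is made up for illustration:

```python
import numpy as np

# Two strongly collinear features; decorrelate (and optionally whiten)
# them with PCA via SVD before clustering.
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
X = np.column_stack([x1, x1 + 0.1 * rng.normal(size=500)])

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                       # principal-component scores, uncorrelated
whitened = scores / scores.std(axis=0)   # optional: equal variance per component

corr = np.corrcoef(scores.T)[0, 1]
print(abs(corr) < 1e-8)  # True: the components carry no redundant correlation
```

One could then run k-means on `scores` (or `whitened`) instead of on the raw, correlated columns.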
13,497
Do I need to drop variables that are correlated/collinear before running kmeans?
It's advisable to remove variables if they are highly correlated. Irrespective of the clustering algorithm or linkage method, one thing you generally do is compute the distance between points. Keeping variables which are highly correlated all but gives them double the weight in computing the distance...
Do I need to drop variables that are correlated/collinear before running kmeans?
It's advisable to remove variables if they are highly correlated. Irrespective of the clustering algorithm or linkage method, one thing you generally do is compute the distance between points
Do I need to drop variables that are correlated/collinear before running kmeans? It's advisable to remove variables if they are highly correlated. Irrespective of the clustering algorithm or linkage method, one thing you generally do is compute the distance between points. Keeping variables which are highly co...
Do I need to drop variables that are correlated/collinear before running kmeans? It's advisable to remove variables if they are highly correlated. Irrespective of the clustering algorithm or linkage method, one thing you generally do is compute the distance between points
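The double-weight claim is easy to verify: duplicating a perfectly correlated feature doubles its contribution to the squared Euclidean distance that k-means uses:

```python
import numpy as np

a = np.array([1.0, 5.0])
b = np.array([2.0, 5.0])           # the two points differ only in feature 0

d2 = np.sum((a - b) ** 2)
a_dup = np.append(a, a[0])          # add a perfect copy of feature 0
b_dup = np.append(b, b[0])
d2_dup = np.sum((a_dup - b_dup) ** 2)

print(d2, d2_dup)  # 1.0 2.0: the duplicated feature counts twice
```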
13,498
Do I need to drop variables that are correlated/collinear before running kmeans?
On a toy example in 2d or 3d, it shouldn't make much of a difference, it just adds some redundancy to your data: all your points are on an odd, (d-1) dimensional hyperplane. So are the cluster means. And distance in this (d-1) dimensional hyperplane is a linear multiple of the same distance, so it doesn't change anythi...
Do I need to drop variables that are correlated/collinear before running kmeans?
On a toy example in 2d or 3d, it shouldn't make much of a difference, it just adds some redundancy to your data: all your points are on an odd, (d-1) dimensional hyperplane. So are the cluster means.
Do I need to drop variables that are correlated/collinear before running kmeans? On a toy example in 2d or 3d, it shouldn't make much of a difference, it just adds some redundancy to your data: all your points are on an odd, (d-1) dimensional hyperplane. So are the cluster means. And distance in this (d-1) dimensional ...
Do I need to drop variables that are correlated/collinear before running kmeans? On a toy example in 2d or 3d, it shouldn't make much of a difference, it just adds some redundancy to your data: all your points are on an odd, (d-1) dimensional hyperplane. So are the cluster means.
13,499
Is building a multiclass classifier better than several binary ones?
First of all, you must ask yourself if your problem is multilabel (i.e. a single URL can belong to several classes) or not (i.e. a single URL can belong to only one class). If you are in the former situation, go with a battery of binary classifiers, because this is a default way of doing multilabel problems. If the lat...
Is building a multiclass classifier better than several binary ones?
First of all, you must ask yourself if your problem is multilabel (i.e. a single URL can belong to several classes) or not (i.e. a single URL can belong to only one class). If you are in the former si
Is building a multiclass classifier better than several binary ones? First of all, you must ask yourself if your problem is multilabel (i.e. a single URL can belong to several classes) or not (i.e. a single URL can belong to only one class). If you are in the former situation, go with a battery of binary classifiers, b...
Is building a multiclass classifier better than several binary ones? First of all, you must ask yourself if your problem is multilabel (i.e. a single URL can belong to several classes) or not (i.e. a single URL can belong to only one class). If you are in the former si
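A battery-of-binary-classifiers setup for the multilabel case can be sketched as follows; the nearest-centroid "classifier" and the threshold are hypothetical stand-ins for whatever real binary model you would train per class:

```python
import numpy as np

# One independent score-and-threshold decision per class, so a single
# sample may receive zero, one, or several labels.
X = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.2], [0.6, 0.6]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [1, 1]])  # multilabel targets

centroids = [X[Y[:, k] == 1].mean(axis=0) for k in range(Y.shape[1])]

def predict(x, threshold=0.8):
    dists = np.array([np.linalg.norm(x - c) for c in centroids])
    return (dists < threshold).astype(int)  # each class decided on its own

print(predict(np.array([0.05, 0.95])))  # → [1 0]: close to class-0 centroid only
print(predict(np.array([0.6, 0.6])))    # → [1 1]: both thresholds pass
```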
13,500
Is building a multiclass classifier better than several binary ones?
This will depend on how your data is dispersed. There is a beautiful example that was given recently to a similar question where the OP wanted to know if a single linear discriminant function would be a better classifier for deciding population A vs B or C or one based on multiple linear discriminant functions that se...
Is building a multiclass classifier better than several binary ones?
This will depend on how your data is dispersed. There is a beautiful example that was given recently to a similar question where the OP wanted to know if a single linear discriminant function would b
Is building a multiclass classifier better than several binary ones? This will depend on how your data is dispersed. There is a beautiful example that was given recently to a similar question where the OP wanted to know if a single linear discriminant function would be a better classifier for deciding population A vs ...
Is building a multiclass classifier better than several binary ones? This will depend on how your data is dispersed. There is a beautiful example that was given recently to a similar question where the OP wanted to know if a single linear discriminant function would b