idx: int64 (1 – 56k)
question: string (lengths 15 – 155)
answer: string (lengths 2 – 29.2k)
question_cut: string (lengths 15 – 100)
answer_cut: string (lengths 2 – 200)
conversation: string (lengths 47 – 29.3k)
conversation_cut: string (lengths 47 – 301)
13,201
Computing the decision boundary of a linear SVM model
It's a linear combination of the support vectors where the coefficients are given by the Lagrange multipliers corresponding to these support vectors.
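The stated decomposition can be checked numerically with scikit-learn (a sketch, assuming sklearn is available): for a linear-kernel SVC, `dual_coef_` stores the signed Lagrange multipliers $y_i \alpha_i$ of the support vectors, so combining them with `support_vectors_` reproduces the fitted weight vector `coef_`.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two linearly separable blobs (illustrative data, not from the answer).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# dual_coef_ holds y_i * alpha_i for each support vector, so the decision
# boundary's normal is exactly the stated linear combination.
w = clf.dual_coef_ @ clf.support_vectors_

assert np.allclose(w, clf.coef_)
```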
13,202
How do survival models "account for censoring"? (Do they?)
Censoring is built into survival models by incorporating it into the likelihood function underlying the analysis. The most common form of censoring occurs when we observe an item for a finite period of time $T$ and it does not fail in that time. Below I will show you how the censoring is built into the likelihood fun...
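The likelihood construction the answer describes can be sketched for the simplest parametric case, an exponential model (the data and rate below are made up for illustration): failures observed before the cutoff $T$ contribute the density $f(t)$, while items still alive at $T$ contribute the survival probability $S(T)$.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.5
t = rng.exponential(1 / true_rate, 200)   # latent failure times
censor_at = 3.0                           # fixed follow-up period T
observed = np.minimum(t, censor_at)       # what we actually record
event = t <= censor_at                    # True = failure seen, False = censored

def log_lik(rate):
    # Failures contribute log f(t) = log(rate) - rate * t; censored items
    # contribute log S(T) = -rate * T. Both terms share -rate * observed.
    return np.log(rate) * event.sum() - rate * observed.sum()

# For this model the MLE has a closed form: events / total exposure time.
mle = event.sum() / observed.sum()
assert log_lik(mle) >= log_lik(mle * 1.1)
assert log_lik(mle) >= log_lik(mle * 0.9)
```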
13,203
How do survival models "account for censoring"? (Do they?)
Some students might benefit from the following way to represent the partial likelihood for a Cox model, as an example of Ben's answer (+1) about censoring in general. Display the partial likelihood under the proportional-hazards assumption (without tied event times) as follows: $$\prod_{i=1}^{n}\frac{h_0(t_i)\text{exp}...
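The partial-likelihood display the answer begins (a product over event times of $h_0(t_i)\exp(x_i\beta)$ divided by the same quantity summed over the risk set) can be sketched numerically; note how censored subjects contribute no factor of their own yet still appear in risk sets, and how $h_0$ cancels. The toy data here are invented.

```python
import numpy as np

# Toy data: times, event indicator (1 = death, 0 = censored), one covariate x.
times = np.array([2.0, 3.0, 5.0, 7.0, 11.0])
event = np.array([1, 0, 1, 1, 0])
x = np.array([0.5, 1.0, -0.3, 0.8, 0.1])

def cox_partial_loglik(beta):
    ll = 0.0
    for i in np.flatnonzero(event):      # censored subjects add no factor
        risk_set = times >= times[i]     # everyone still at risk at t_i
        # h0(t_i) cancels from numerator and denominator, so the baseline
        # hazard drops out of the partial likelihood entirely.
        ll += x[i] * beta - np.log(np.exp(x[risk_set] * beta).sum())
    return ll

# At beta = 0 each event term is -log(|risk set|): sizes 5, 3, 2 here.
assert abs(cox_partial_loglik(0.0) + np.log(30.0)) < 1e-9
```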
13,204
How do survival models "account for censoring"? (Do they?)
I thought I’d directly answer the specific questions above: this is intended to address the top-level conceptual issues with some thoughts on teaching to this kind of audience, rather than a technical explanation. The Questions 1. Is it true, strictly speaking, that something like a Kaplan-Meier estimator or a Cox p...
13,205
How do survival models "account for censoring"? (Do they?)
Update: @James Stanley addresses the same point in his answer; see point #4. This is not an answer but an extended comment to clarify some terminology and highlight an important assumption. You write: if you don't know what happened after Mr. Smith dropped out of your study (i.e. was lost to follow-up, i.e. was censore...
13,206
Intuition behind Box-Cox transform
The design goals of the family of Box-Cox transformations of non-negative data were these: The formulas should be simple, straightforward, well understood, and easy to calculate. They should not change the middle of the data much, but affect the tails more. The family should be rich enough to induce large changes in...
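These design goals can be seen in action with scipy's implementation (a sketch, assuming scipy is available): for right-skewed log-normal data the maximum-likelihood $\lambda$ lands near 0, i.e. near the log transform, and the tails are pulled in while the middle moves little.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=500)  # positive, right-skewed

# scipy picks the lambda maximizing the profile log-likelihood of normality.
transformed, lam = stats.boxcox(skewed)

# For log-normal data the chosen lambda should sit near 0 (the log case),
# and the skewness should drop substantially after transforming.
assert abs(lam) < 0.3
assert abs(stats.skew(transformed)) < abs(stats.skew(skewed))
```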
13,207
Intuition behind Box-Cox transform
Adding something to the great answer by whuber. Let's say you have $k$ independent random variables $X_1, X_2,..., X_k$ normally distributed with mean $m_i$ and variance $\sigma_i^2$ for $i=1,...,k$. Now, let's assume that $\sigma_i = f(m_i)$ and $f$ is some known function. In simple situations we can guess this functi...
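The setup above ($\sigma_i = f(m_i)$) can be illustrated numerically for the simplest guessable case, $f(m) = c\,m$: the log transform, which is the $\lambda \to 0$ limit of Box-Cox, then roughly equalizes the group standard deviations. The group means below are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
means = np.array([10.0, 50.0, 250.0])
# sigma_i = f(m_i) = 0.2 * m_i: spread grows proportionally with the mean.
samples = [rng.normal(m, 0.2 * m, 2000) for m in means]

raw_sds = np.array([s.std() for s in samples])
log_sds = np.array([np.log(s).std() for s in samples])

# Before the transform the SDs differ by a factor of ~25; after the log
# they are nearly equal (delta method: sd(log X) ~ sigma/m = 0.2 for all).
assert raw_sds.max() / raw_sds.min() > 10
assert log_sds.max() / log_sds.min() < 1.5
```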
13,208
Intuition behind Box-Cox transform
The answers provided here are already useful. I just wanted to drop a couple more resources for learning about Box-Cox. There is actually an excellent episode from Quantitude dedicated to this very subject, which explains both the history and intuition behind this transformation (including the amusing history that the ...
13,209
k-means implementation with custom distance matrix in input
Since k-means needs to be able to find the means of different subsets of the points you want to cluster, it does not really make sense to ask for a version of k-means that takes a distance matrix as input. You could try k-medoids instead. There are some MATLAB implementations available.
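The suggestion works because a medoid, unlike a mean, is one of the original points, so only pairwise distances are ever needed. A minimal alternating k-medoids sketch in plain numpy (not any particular package's implementation; it assumes D is symmetric with a zero diagonal):

```python
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    """Basic alternating k-medoids on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)   # assign to nearest medoid
        new = np.array([
            # The new medoid minimizes total distance within its cluster --
            # this needs only D, never the coordinates of a "mean".
            np.flatnonzero(labels == j)[np.argmin(
                D[np.ix_(labels == j, labels == j)].sum(axis=0))]
            for j in range(k)
        ])
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# Two well-separated 1-D blobs, described only by their distance matrix.
pts = np.array([0., 1., 2., 3., 4., 100., 101., 102., 103., 104.])
D = np.abs(pts[:, None] - pts[None, :])
medoids, labels = k_medoids(D, 2)
assert labels[0] != labels[5]
```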
13,210
k-means implementation with custom distance matrix in input
You could turn your matrix of distances into raw data and input these to K-Means clustering. The steps would be as follows: Distances between your N points must be squared euclidean ones. Perform "double centering" of the matrix: From each element, subtract its row mean of elements, subtract its column mean of eleme...
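The double-centering procedure described above is the classical (Torgerson) MDS step, and it can be sketched in numpy: centering $-\tfrac12 J D^2 J$ with $J = I - \mathbf{1}\mathbf{1}'/n$ turns squared distances into inner products, whose eigendecomposition yields coordinates with exactly the original pairwise distances, ready for ordinary k-means. The toy data are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
# Points in the plane; pretend we only ever see their distances.
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared Euclidean

# Double centering: subtract row means, column means, add the grand mean,
# then multiply by -1/2 -- compactly, B = -0.5 * J @ D2 @ J.
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J

# Rebuild coordinates from the positive eigenvalues of B.
vals, vecs = np.linalg.eigh(B)
keep = vals > 1e-9
Y = vecs[:, keep] * np.sqrt(vals[keep])

# The recovered points reproduce the squared distances, so plain k-means
# can now be run on Y as if it were the raw data.
D2_rec = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
assert np.allclose(D2, D2_rec, atol=1e-6)
```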
13,211
k-means implementation with custom distance matrix in input
Please see this article, written by one of my acquaintances ;) http://arxiv.org/abs/1304.6899 It is about a generalized k-means implementation, which takes an arbitrary distance matrix as input. It can be any symmetrical nonnegative matrix with a zero diagonal. Note that it may not give sensible results for weird dista...
13,212
k-means implementation with custom distance matrix in input
You can use the Java Machine Learning Library. They have a K-Means implementation. One of the constructors accepts three arguments: the K value, an instance of the DistanceMeasure class, and the number of iterations. One can easily extend the DistanceMeasure class to achieve the desired result. The idea is to re...
13,213
Why would one suppress the intercept in linear regression?
If for some reason you know the intercept (particularly if it is zero), you can avoid wasting the variance in your data for estimating something you already know, and have more confidence in the values you do have to estimate. A somewhat oversimplified example is if you already know (from domain knowledge) that one var...
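The variance saving can be demonstrated by simulation (a sketch with made-up numbers): when the true intercept really is zero, the slope estimated through the origin uses $\sum x_i^2$ rather than $\sum (x_i-\bar x)^2$ in its denominator, so it varies less across repeated samples.

```python
import numpy as np

rng = np.random.default_rng(5)
slopes_free, slopes_zero = [], []
for _ in range(2000):
    x = rng.uniform(0, 1, 15)
    y = 2.0 * x + rng.normal(0, 1, 15)   # true intercept is exactly 0
    X1 = np.column_stack([np.ones_like(x), x])
    slopes_free.append(np.linalg.lstsq(X1, y, rcond=None)[0][1])
    slopes_zero.append(np.linalg.lstsq(x[:, None], y, rcond=None)[0][0])

# Forcing the known zero intercept gives a less variable slope estimate.
assert np.std(slopes_zero) < np.std(slopes_free)
```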
13,214
Why would one suppress the intercept in linear regression?
Consider the case of a 3-level categorical covariate. If one has an intercept, that would require 2 indicator variables. Using the usual coding for indicator variables, the coefficient for either indicator variable is the mean difference compared to the reference group. By suppressing the intercept, you would have 3...
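This reparameterization is easy to verify numerically (a sketch with invented group means): with three indicator columns and no intercept, each least-squares coefficient comes out as its group's sample mean rather than a difference from a reference group.

```python
import numpy as np

rng = np.random.default_rng(6)
group = rng.integers(0, 3, 300)               # 3-level categorical covariate
means = np.array([10.0, 12.0, 15.0])
y = means[group] + rng.normal(0, 1, 300)

# Three indicator columns, no intercept column.
X = (group[:, None] == np.arange(3)).astype(float)
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Each coefficient equals its group's mean exactly (columns are orthogonal).
for j in range(3):
    assert np.isclose(beta[j], y[group == j].mean())
```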
13,215
Why would one suppress the intercept in linear regression?
To illustrate @Nick Sabbe's point with a specific example. I once saw a researcher present a model of the age of a tree as a function of its width. It can be assumed that when the tree is at age zero, it effectively has a width of zero. Thus, an intercept is not required.
13,216
Calculating required sample size, precision of variance estimate?
For i.i.d. random variables $X_1, \dotsc, X_n$, the unbiased estimator for the variance $s^2$ (the one with denominator $n-1$) has variance: $$\mathrm{Var}(s^2) = \sigma^4 \left(\frac{2}{n-1} + \frac{\kappa}{n}\right)$$ where $\kappa$ is the excess kurtosis of the distribution (reference: Wikipedia). So now you need t...
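The formula can be checked by Monte Carlo (a sketch with an exponential example, where $\sigma^2 = 1$ and the excess kurtosis is known to be $\kappa = 6$): the simulated variance of $s^2$ should match $\sigma^4(2/(n-1) + \kappa/n)$ closely.

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 30, 100_000
# Exponential(1): sigma^2 = 1 and excess kurtosis kappa = 6.
samples = rng.exponential(1.0, (reps, n))
s2 = samples.var(axis=1, ddof=1)

sigma4, kappa = 1.0, 6.0
predicted = sigma4 * (2 / (n - 1) + kappa / n)

# The Monte Carlo variance of s^2 should match the formula to a few percent.
assert abs(s2.var() / predicted - 1) < 0.1
```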
13,217
Calculating required sample size, precision of variance estimate?
Learning a variance is hard. It takes a (perhaps surprisingly) large number of samples to estimate a variance well in many cases. Below, I'll show the development for the "canonical" case of an i.i.d. normal sample. Suppose $Y_i$, $i=1,\ldots,n$ are independent $\mathcal{N}(\mu, \sigma^2)$ random variables. We seek a $...
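For the canonical normal case, the key fact is $(n-1)s^2/\sigma^2 \sim \chi^2_{n-1}$, which inverts directly to a confidence interval for $\sigma^2$ (and, by square roots, for $\sigma$). A sketch with made-up data, assuming scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
y = rng.normal(10.0, 2.0, 50)                 # illustrative sample, sigma = 2
n = len(y)
s2 = y.var(ddof=1)

# Invert (n-1) s^2 / sigma^2 ~ chi^2_{n-1} into a 95% CI for sigma^2.
alpha = 0.05
lo = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1)
hi = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1)
ci_sd = (np.sqrt(lo), np.sqrt(hi))            # CI for sigma itself

# The point estimate always lies inside its own interval.
assert lo < s2 < hi
```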
13,218
Calculating required sample size, precision of variance estimate?
I would focus on the SD rather than the variance, since it's on a scale that is more easily interpreted. People do sometimes look at confidence intervals for SDs or variances, but the focus is generally on means. The results you give for the distribution of $s^2/\sigma^2$ can be used to get a confidence interval for $\...
13,219
Calculating required sample size, precision of variance estimate?
The following solution was given by Greenwood and Sandomire in a 1950 JASA paper. Let $X_1,\dots,X_n$ be a random sample from a $\mathrm{N}(\mu,\sigma^2)$ distribution. You will make inferences about $\sigma$ using as (biased) estimator the sample standard deviation $$ S=\sqrt{\sum_{i=1}^n\frac{(X_i-\bar{X})^2}{n-1}}...
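The derivation is cut off above, but the general recipe can be sketched: since $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$, one can search for the smallest $n$ such that $S/\sigma$ falls within $\pm 100p\%$ of 1 with the desired probability. The function below illustrates that idea and is not the paper's exact procedure; it assumes scipy.

```python
from scipy import stats

def n_for_sd_precision(p, conf=0.95):
    """Smallest n with P(|S/sigma - 1| <= p) >= conf for a normal sample."""
    for n in range(2, 100_000):
        df = n - 1
        # P((1-p) <= S/sigma <= (1+p)) via the chi-square CDF of (n-1)S^2/sigma^2.
        cover = (stats.chi2.cdf(df * (1 + p) ** 2, df)
                 - stats.chi2.cdf(df * (1 - p) ** 2, df))
        if cover >= conf:
            return n
    return None

# Halving the tolerated relative error roughly quadruples the required n.
n20 = n_for_sd_precision(0.2)
n10 = n_for_sd_precision(0.1)
assert n20 < n10
```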
13,220
Removing outliers based on cook's distance in R Language
This post has around 6000 views in 2 years so I guess an answer is much needed. Although I borrowed a lot of ideas from the reference, I made some modifications. We will be using the cars data in base R. library(tidyverse) # Inject outliers into data. cars1 <- cars[1:30, ] # original data cars_outliers <- data.frame(...
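The R snippet above is truncated; the same inject-then-flag idea can be sketched in plain numpy (the injected outlier values and the common $4/n$ cutoff here are illustrative, not taken from the answer):

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.uniform(0, 10, 30)
y = 3.0 * x + rng.normal(0, 1, 30)
x = np.append(x, 5.0)      # inject one gross outlier, as the answer does
y = np.append(y, 200.0)

X = np.column_stack([np.ones_like(x), x])
n, p = X.shape
H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix of the OLS fit
h = np.diag(H)                         # leverages
e = y - H @ y                          # residuals
s2 = (e @ e) / (n - p)

# Cook's distance: D_i = e_i^2 * h_i / (p * s^2 * (1 - h_i)^2)
cooks_d = e**2 * h / (p * s2 * (1 - h) ** 2)

flagged = np.flatnonzero(cooks_d > 4 / n)  # rule-of-thumb cutoff
assert 30 in flagged                       # the injected outlier is caught
```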
13,221
What algorithm does ward.D in hclust() implement if it is not Ward's criterion?
The relevant manuscript is here. The difference between ward.D and ward.D2 is the difference between the two clustering criteria that in the manuscript are called Ward1 and Ward2. It basically boils down to the fact that the Ward algorithm is correctly implemented only in Ward2 (ward.D2), but Ward1 (ward.D) ca...
13,222
What algorithm does ward.D in hclust() implement if it is not Ward's criterion?
The only difference between ward.D & ward.D2 is the input parameter. hclust(dist(x)^2,method="ward.D") ~ hclust(dist(x)^2,method="ward") which are equivalent to: hclust(dist(x),method="ward.D2") You can find the research paper: Ward’s Hierarchical Clustering Method: Clustering Criterion and Agglomerative Algorithm ...
13,223
What algorithm does ward.D in hclust() implement if it is not Ward's criterion?
I came across the research paper that corresponds to the objective function that is being optimized by "Ward1 (ward.D)": Hierarchical Clustering via Joint Between-Within Distances: Extending Ward's Minimum Variance Method. It turns out that R's implementation of "Ward1 (ward.D)" is equivalent to minimizing the energy d...
13,224
SVM for unbalanced data
Many SVM implementations address this by assigning different weights to positive and negative instances. Essentially you weigh the samples so that the sum of the weights for the positives will be equal to that of the negatives. Of course, in your evaluation of the SVM you have to remember that if 95% of the data is neg...
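In scikit-learn this weighting is the `class_weight="balanced"` option of `SVC`, which scales each class's penalty by `n_samples / (n_classes * n_class)` so the classes carry equal total weight, matching the scheme described above. A sketch on made-up imbalanced data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(10)
# 95% negatives, 5% positives, with overlapping classes.
X_neg = rng.normal(0, 1, (950, 2))
X_pos = rng.normal(1.5, 1, (50, 2))
X = np.vstack([X_neg, X_pos])
y = np.array([0] * 950 + [1] * 50)

plain = SVC(kernel="linear").fit(X, y)
# "balanced" equalizes the summed weights of positives and negatives.
weighted = SVC(kernel="linear", class_weight="balanced").fit(X, y)

# The weighted SVM recovers far more of the minority class.
recall_plain = plain.predict(X_pos).mean()
recall_weighted = weighted.predict(X_pos).mean()
assert recall_weighted > recall_plain
```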
13,225
SVM for unbalanced data
SVMs work fine on sparse and unbalanced data. Class-weighted SVM is designed to deal with unbalanced data by assigning higher misclassification penalties to training instances of the minority class.
13,226
SVM for unbalanced data
In the case of sparse data like that, an SVM will work well. As stated by @Bitwise you should not use accuracy to measure the performance of the algorithm. Instead you should calculate the precision, recall and F-score of the algorithm.
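The point about accuracy being misleading is easy to demonstrate with scikit-learn's metrics (a sketch with invented labels): a degenerate classifier that always predicts the majority class scores 95% accuracy while its recall and F-score on the minority class are zero.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

# 95% negatives: "always predict negative" looks great on accuracy alone.
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros(100, dtype=int)          # degenerate classifier

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[1], zero_division=0)

assert abs(acc - 0.95) < 1e-12             # accuracy hides the failure
assert rec[0] == 0.0 and f1[0] == 0.0      # recall and F-score expose it
```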
13,227
What is the difference between descriptive and inferential statistics?
Coming from a behavioural sciences background, I associate this terminology particularly with introductory statistics textbooks. In this context the distinction is that : Descriptive statistics are functions of the sample data that are intrinsically interesting in describing some feature of the data. Classic descrip...
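The split described above can be made concrete with a small sketch (all numbers invented): the same sample feeds a descriptive summary (mean, standard deviation) and an inferential statement (an approximate large-sample 95% confidence interval for the population mean).

```python
# Hedged illustration: descriptive vs inferential statistics on one sample.
import math

sample = [4.1, 5.0, 4.8, 5.6, 4.4, 5.2, 4.9, 5.3]   # made-up measurements
n = len(sample)

mean = sum(sample) / n                               # descriptive: location
var = sum((x - mean) ** 2 for x in sample) / (n - 1)
sd = math.sqrt(var)                                  # descriptive: spread

# Inferential: approximate 95% CI for the *population* mean (z = 1.96).
se = sd / math.sqrt(n)
ci = (mean - 1.96 * se, mean + 1.96 * se)
```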
13,228
What is the difference between descriptive and inferential statistics?
One form of inference is based on the random assignment of experimental treatments, & not on random sampling from a population (even hypothetically). Oscar Kempthorne was a proponent. The first example in Edgington (1995), Randomization Tests illustrates the approach well. A researcher obtains ten subjects, divides the...
13,229
What is the difference between descriptive and inferential statistics?
I'm not sure that classification necessarily makes a statement about the population(s) from which the data points are drawn. Classification, as you probably know, uses training data consisting of some "feature" vectors, each labelled with a specific class, to predict the class labels belonging to other unlabeled featu...
13,230
What is the difference between descriptive and inferential statistics?
In one line, given the data, descriptive statistics try to summarize the content of your data with minimum loss of information (depending on what measure you use). You get to see the geography of the data (something like seeing the performance graph of the class and saying who is on top, the bottom and so on). In one li...
13,231
What is the difference between descriptive and inferential statistics?
In short, descriptive statistics is the analysis of data that describes, shows or summarizes the data in a meaningful way; it is simply a way to describe the data at hand. Examples are measures of central tendency and measures of dispersion. Inferential statistics is a technique that allows us to use samples...
13,232
What is the relationship between R-squared and p-value in a regression?
The answer is no, there is no such regular relationship between $R^2$ and the overall regression p-value, because $R^2$ depends as much on the variance of the independent variables as it does on the variance of the residuals (to which it is inversely proportional), and you are free to change the variance of the indepen...
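The dependence of $R^2$ on the predictor's variance, which drives the answer above, can be shown with a deterministic sketch (all numbers invented): with the *same* noise vector, stretching the spread of X raises $R^2$ even though nothing about the noise has changed.

```python
# Hedged illustration: R^2 rises with the spread of the predictor,
# holding the "residual" noise fixed.
def fit_and_r2(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    ss_res = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

noise = [0.5, -0.3, 0.2, -0.4, 0.1, -0.1, 0.3, -0.3]   # fixed fake residuals
x_narrow = [0.2 * i for i in range(8)]
x_wide = [2.0 * i for i in range(8)]
r2_narrow = fit_and_r2(x_narrow, [xi + e for xi, e in zip(x_narrow, noise)])
r2_wide = fit_and_r2(x_wide, [xi + e for xi, e in zip(x_wide, noise)])
```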
13,233
What is the relationship between R-squared and p-value in a regression?
This answer doesn't directly deal with the central question; it's nothing more than some additional information that's too long for a comment. I point this out because econometricstatsquestion will no doubt encounter this information, or something like it at some point (stating that $F$ and $R^2$ are related) and wonde...
13,234
What is the relationship between R-squared and p-value in a regression?
"for OLS regression, does a higher R-squared also imply a higher P-value? Specifically for a single explanatory variable (Y = a + bX + e) " Specifically for a single explanatory variable, given the sample size, the answer is yes. As Glen_b has explained, there is a direct relationship between $R^2$ and the test stat...
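The exact link invoked above for a single explanatory variable is $F = t^2 = R^2 (n-2) / (1 - R^2)$, which can be checked numerically. A small sketch with invented data (the identity itself is algebraically exact, so it holds to floating-point precision):

```python
# Verify t^2 = R^2 (n-2) / (1 - R^2) for simple OLS, using made-up data.
import math

x = list(range(10))
noise = [0.3, -0.2, 0.1, -0.4, 0.2, 0.0, -0.1, 0.3, -0.2, 0.0]
y = [2 * xi + e for xi, e in zip(x, noise)]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

ss_res = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - ybar) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot

t = b1 / math.sqrt((ss_res / (n - 2)) / sxx)   # usual t statistic for the slope
f_from_r2 = r2 * (n - 2) / (1 - r2)            # F implied by R^2 alone
```

Because the p-value is a monotone function of $t^2$, fixing the sample size ties the p-value to $R^2$, exactly as the answer states.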
13,235
How to model bounded target variable?
You don't necessarily have to do anything. It's possible the predictor will work fine. Even if the predictor extrapolates to values outside the range, possibly clamping the predictions to the range (that is, use $\max(0, \min(70, \hat{y}))$ instead of $\hat{y}$) will do well. Cross-validate the model to see whether ...
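The clamping rule $\max(0, \min(70, \hat{y}))$ quoted in the answer is a one-liner; a sketch with the 0-70 bounds from the question:

```python
# Clamp model predictions into the known range of the target.
def clamp(y_hat, lo=0.0, hi=70.0):
    return max(lo, min(hi, y_hat))
```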
13,236
How to model bounded target variable?
It is important to consider why your values are bounded in the 0-70 range. For example, if they are the number of correct answers on a 70-question test, then you should consider models for "number of successes" variables, such as overdispersed binomial regression. Other reasons might lead you to other solutions.
13,237
How to model bounded target variable?
Data transformation: rescale your data to lie in $[0,1]$ and model it using a glm with a logit link. Edit: when you re-scale a vector (i.e. divide all the elements by the largest entry), as a rule screen the data (by eye) for outliers before doing so. UPDATE Assuming you have access to R, I would carry the modeling...
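A rough stand-in for the glm-with-logit-link idea, sketched in Python rather than R: rescale the bounded target to the unit interval, fit ordinary least squares on the logit scale, and back-transform, so predictions always land inside the bounds. The 70 bound, the toy data, and the eps clipping at the endpoints are all invented for the demo.

```python
# Hedged sketch: logit-transform regression for a 0-70 bounded target.
import math

def to_unit(y, upper=70.0, eps=1e-3):
    p = y / upper
    return min(max(p, eps), 1 - eps)   # keep strictly inside (0, 1)

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + math.exp(-z))

x = [0, 1, 2, 3, 4, 5]                       # toy predictor
y = [5.0, 14.0, 28.0, 42.0, 56.0, 65.0]      # toy bounded response
z = [logit(to_unit(yi)) for yi in y]

# ordinary least squares on the transformed scale
n = len(x)
xbar, zbar = sum(x) / n, sum(z) / n
b1 = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = zbar - b1 * xbar

def predict(xi, upper=70.0):
    return upper * inv_logit(b0 + b1 * xi)   # always inside (0, 70)

preds = [predict(xi) for xi in range(-5, 15)]  # even when extrapolating
```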
13,238
ARIMA vs Kalman filter - how are they related
ARIMA is a class of models. These are stochastic processes that you can use to model some time series data. There is another class of models called linear Gaussian state space models, sometimes just state space models. This is a strictly larger class (every ARIMA model is a state space model). A state space model invol...
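The "every ARIMA model is a state space model" point can be sketched with the simplest case: an AR(1) written as a one-dimensional linear Gaussian state space model and filtered with the scalar Kalman recursions. All numbers below are invented for the demo.

```python
# Scalar Kalman filter for an AR(1) state space model (invented numbers).
#   state: x_t = phi * x_{t-1} + w_t,  w ~ N(0, q)
#   obs:   y_t = x_t + v_t,            v ~ N(0, r)
phi, q, r = 0.8, 1.0, 0.5

ys = [0.3, 1.1, 0.2, -0.5, 0.9, 1.4, 0.1, -0.2]   # fake observed series

m, p = 0.0, 5.0            # prior mean and variance of the state
variances = []
for y in ys:
    # predict step
    m_pred = phi * m
    p_pred = phi * phi * p + q
    # update step
    k = p_pred / (p_pred + r)          # Kalman gain
    m = m_pred + k * (y - m_pred)
    p = (1 - k) * p_pred
    variances.append(p)
```

Running the recursion, the posterior variance settles quickly to the steady-state value of the scalar Riccati equation, which is the sense in which the filter "forgets" its prior.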
13,239
In using the cbind() function in R for a logistic regression on a $2 \times 2$ table, what is the explicit functional form of the regression equation?
First I show how you can specify a formula using aggregated data with proportions and weights. Then I show how you could specify a formula after dis-aggregating your data to individual observations. Documentation in glm indicates that: "For a binomial GLM prior weights are used to give the number of trials when th...
13,240
In using the cbind() function in R for a logistic regression on a $2 \times 2$ table, what is the explicit functional form of the regression equation?
Binomial distribution versus Bernoulli distribution The binomial distribution can be viewed as multiple Bernoulli distributions with the same probability parameter $p$. Their probability distributions differ up to a constant. For observations $x_i \in \lbrace 0,1 \rbrace$, where $k$ is the number of observations that a...
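The "differ up to a constant" claim above can be checked numerically: the binomial log-likelihood of $k$ successes in $n$ trials and the summed Bernoulli log-likelihood of the individual 0/1 outcomes differ by $\log \binom{n}{k}$, which does not involve $p$, so both are maximised at the same $p$. A sketch with invented data:

```python
# Numeric check: binomial vs summed-Bernoulli log-likelihoods differ
# by a constant that does not depend on p.
import math

outcomes = [1, 0, 1, 1, 0, 1, 0, 1]   # made-up 0/1 data
n, k = len(outcomes), sum(outcomes)

def ll_bernoulli(p):
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in outcomes)

def ll_binomial(p):
    return math.log(math.comb(n, k)) + k * math.log(p) + (n - k) * math.log(1 - p)

gap1 = ll_binomial(0.3) - ll_bernoulli(0.3)
gap2 = ll_binomial(0.7) - ll_bernoulli(0.7)   # same gap, log C(n, k)
```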
13,241
In using the cbind() function in R for a logistic regression on a $2 \times 2$ table, what is the explicit functional form of the regression equation?
I found this discussion very helpful for the analysis I need to conduct for my thesis. I am not sure if I understand it right, that the multiple trials you are talking about are with the same sample? In my case I have a control and a treatment group, and each respondent goes through 4 questions where he/she has to choo...
13,242
How are the Error Function and Standard Normal distribution function related?
Because this comes up often in some systems (for instance, Mathematica insists on expressing the Normal CDF in terms of $\text{Erf}$), it's good to have a thread like this that documents the relationship. By definition, the Error Function is $$\text{Erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2} \mathrm{d}t.$$ Writin...
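The resulting identity, $\Phi(x) = \tfrac{1}{2}\left(1 + \text{Erf}(x/\sqrt{2})\right)$, is easy to verify with the standard-library erf:

```python
# Standard normal CDF via the error function.
import math

def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p0 = phi(0.0)        # 0.5 by symmetry
p196 = phi(1.959964) # ~0.975, the familiar two-sided 5% point
```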
13,243
Idea and intuition behind quasi maximum likelihood estimation (QMLE)
"What makes the estimator work when the actual error distribution does not match the assumed error distribution?" In principle the QMLE does not "work", in the sense of being a "good" estimator. The theory developed around the QMLE is useful because it has led to misspecification tests. What the QMLE certainly does i...
13,244
Idea and intuition behind quasi maximum likelihood estimation (QMLE)
The originating paper from Wedderburn in 74 is an excellent read regarding the subject of quasilikelihood. In particular he observed that for regular exponential families, the solutions to likelihood equations were obtained by solving a general score equation of the form: $$ 0 = \sum_{i=1}^n \mathbf{S}(\beta, X_i, Y_i)...
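A tiny sketch of the quasi-score idea (toy data, not from the text): solve the Poisson quasi-score $\sum_i (y_i/\lambda - 1) = 0$ for an intercept-only model by bisection. The data below are deliberately over-dispersed, so they are certainly not Poisson, yet the quasi-MLE of the mean still lands on the sample mean, illustrating why the quasi-score can estimate a mean parameter without the full likelihood being right.

```python
# Poisson quasi-score for an intercept-only model, solved by bisection
# on over-dispersed (non-Poisson) counts.  Data are invented.
ys = [0, 0, 1, 7, 2, 0, 9, 1, 0, 5]    # variance far exceeds the mean

def score(lam):
    return sum(y / lam - 1 for y in ys)  # d/d lam of the Poisson log-lik

lo, hi = 1e-6, 100.0
for _ in range(200):                     # score is monotone decreasing in lam
    mid = (lo + hi) / 2
    if score(mid) > 0:
        lo = mid
    else:
        hi = mid
lam_hat = (lo + hi) / 2
sample_mean = sum(ys) / len(ys)
```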
13,245
Idea and intuition behind quasi maximum likelihood estimation (QMLE)
I had a similar question as the original one posted here from Richard Hardy. My confusion was that the parameters estimated from quasi-ML may not exist in the unknown "true" distribution. In this case, what does "consistency" exactly mean? What do the estimated parameters converge to? After checking some references (Wh...
13,246
How to interpret the intercept term in a GLM?
The intercept term is the intercept in the linear part of the GLM equation, so your model for the mean is $E[Y] = g^{-1}(\mathbf{X \beta})$, where $g$ is your link function and $\mathbf{X\beta}$ is your linear model. This linear model contains an "intercept term", i.e.: $\mathbf{X\beta} = c + X_1\beta_1+X_2\beta_2+\cd...
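A hedged illustration of "the intercept passes through the inverse link": with all predictors at zero, the fitted mean is $g^{-1}(c)$. The numbers are invented; the two links shown are the logit (logistic regression) and the log (Poisson regression).

```python
# Baseline mean implied by a GLM intercept, for two common links.
import math

def inv_logit(c):
    return 1 / (1 + math.exp(-c))    # logistic regression: baseline probability

def inv_log(c):
    return math.exp(c)               # Poisson regression: baseline rate

base_prob = inv_logit(0.0)   # intercept 0 on the logit scale -> probability 0.5
base_rate = inv_log(0.0)     # intercept 0 on the log scale   -> rate 1.0
prob_neg2 = inv_logit(-2.0)  # intercept -2 -> probability ~0.119
```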
13,247
How to interpret the intercept term in a GLM?
It looks to me like there may be some problem with the data. It is odd that the parameter estimate for the coefficient would be 0.000. It looks like both your DV and your IV are dichotomous and that the proportions of your DV do not vary at all with your IV. Is this right? The intercept, as I noted in my comment (and a...
13,248
How to interpret the intercept term in a GLM?
In your case, the intercept is the grand mean of attacked_excluding_app, calculated for all data regardless of treatment. The significance test in the table of coefficients is testing whether it is significantly different from zero. Whether this is relevant depends on whether you have some a priori reason to expect i...
13,249
In regression analysis what's the difference between data-generation process and model?
We all have a good sense of what "model" might mean, although its technical definition will vary among disciplines. To compare this to DGP, I began by looking at the top five hits (counting two hits with the same author as one) in Googling "data generation process". A paper on how the US Air Force actually creates da...
13,250
In regression analysis what's the difference between data-generation process and model?
Whuber's answer is excellent, but it is worth adding emphasis to the fact that a statistical model need not resemble the data generating model in every respect to be an appropriate model for inferential exploration of data. Liu and Meng explain that point with great clarity in their recent arXived paper (http://arxiv.o...
13,251
In regression analysis what's the difference between data-generation process and model?
The DGP is the true model. The model is what we have tried to, using our best skills, to represent the true state of nature. The DGP is influenced by "noise". Noise can be of many kinds: One time interventions Level shifts Trends Changes in Seasonality Changes in Model Parameters Changes in Variance If you don't c...
13,252
In regression analysis what's the difference between data-generation process and model?
DGP is the virtual reality and a unique recipe for simulation. A model is a collection of DGP or possible ways that the data could have been generated. Read the first page of this mini course by Russell Davidson: http://russell-davidson.arts.mcgill.ca/Aarhus/bootstrap_course.pdf
13,253
When are Monte Carlo methods preferred over temporal difference ones?
The main problem with TD learning and DP is that their step updates are biased by the initial conditions of the learning parameters. The bootstrapping process typically updates a function or lookup Q(s,a) on a successor value Q(s',a') using whatever the current estimates are in the latter. Clearly at the very start of ...
When are Monte Carlo methods preferred over temporal difference ones?
The main problem with TD learning and DP is that their step updates are biased by the initial conditions of the learning parameters. The bootstrapping process typically updates a function or lookup Q(
When are Monte Carlo methods preferred over temporal difference ones? The main problem with TD learning and DP is that their step updates are biased by the initial conditions of the learning parameters. The bootstrapping process typically updates a function or lookup Q(s,a) on a successor value Q(s',a') using whatever ...
When are Monte Carlo methods preferred over temporal difference ones? The main problem with TD learning and DP is that their step updates are biased by the initial conditions of the learning parameters. The bootstrapping process typically updates a function or lookup Q(
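The bias-from-bootstrapping point in the answer above can be checked numerically. Below is a minimal Python sketch (not from the answer; the chain, episode count and step size are arbitrary choices for the demo) comparing a TD(0) update, which bootstraps on the current estimate of the successor value, with a Monte Carlo update toward the full observed return, on the classic 5-state random walk whose true values are s/6:

```python
import random

# Random walk on states 1..5, terminals at 0 and 6; reward 1 only on
# reaching state 6.  True V(s) = s/6 for the interior states.
def episode():
    s, path = 3, []
    while 0 < s < 6:
        s2 = s + random.choice((-1, 1))
        path.append((s, 1.0 if s2 == 6 else 0.0, s2))
        s = s2
    return path

def run(n_episodes=10_000, alpha=0.02, seed=0):
    random.seed(seed)
    v_td = [0.0] * 7          # TD(0): bootstraps on the current estimate of V(s')
    v_mc = [0.0] * 7          # MC: averages full returns, no bootstrapping
    n_mc = [0] * 7
    for _ in range(n_episodes):
        path = episode()
        g = path[-1][1]       # undiscounted return = terminal reward here
        for s, r, s2 in path:
            # the TD target depends on the stored value v_td[s2], so early
            # updates are biased toward the (zero) initial conditions
            v_td[s] += alpha * (r + v_td[s2] - v_td[s])
            # the MC target is the observed return for the whole episode
            n_mc[s] += 1
            v_mc[s] += (g - v_mc[s]) / n_mc[s]
    return v_td, v_mc

v_td, v_mc = run()
# both estimates approach the true values 1/6, 2/6, ..., 5/6
```

With enough episodes both methods recover the true values; the difference shows up early on, where the TD estimates are still pulled toward their initialization.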
13,254
When are Monte Carlo methods preferred over temporal difference ones?
Essentially it depends on your environment. TD exploits the Markov property, i.e. the future states of a process rely only upon the current state, and therefore it's usually more efficient to use TD in Markov environments. MC does not exploit the Markov property as it bases rewards on the entire learning process, whic...
When are Monte Carlo methods preferred over temporal difference ones?
Essentially it depends on your environment. TD exploits the Markov property, i.e. the future states of a process rely only upon the current state, and therefore it's usually more efficient to use TD i
When are Monte Carlo methods preferred over temporal difference ones? Essentially it depends on your environment. TD exploits the Markov property, i.e. the future states of a process rely only upon the current state, and therefore it's usually more efficient to use TD in Markov environments. MC does not exploit the Mar...
When are Monte Carlo methods preferred over temporal difference ones? Essentially it depends on your environment. TD exploits the Markov property, i.e. the future states of a process rely only upon the current state, and therefore it's usually more efficient to use TD i
13,255
Understanding the Bayes risk
[Here is an excerpt from my own textbook, The Bayesian Choice (2007), that argues in favour of a decision-theoretic approach to Bayesian analysis, hence of using the Bayes risk.] Except for the most trivial settings, it is generally impossible to uniformly minimize (in $d$) the loss function $\text{L}(\theta,d)$ when...
Understanding the Bayes risk
[Here is an excerpt from my own textbook, The Bayesian Choice (2007), that argues in favour of a decision-theoretic approach to Bayesian analysis, hence of using the Bayes risk.] Except for the most t
Understanding the Bayes risk [Here is an excerpt from my own textbook, The Bayesian Choice (2007), that argues in favour of a decision-theoretic approach to Bayesian analysis, hence of using the Bayes risk.] Except for the most trivial settings, it is generally impossible to uniformly minimize (in $d$) the loss funct...
Understanding the Bayes risk [Here is an excerpt from my own textbook, The Bayesian Choice (2007), that argues in favour of a decision-theoretic approach to Bayesian analysis, hence of using the Bayes risk.] Except for the most t
13,256
Understanding the Bayes risk
Quoting the classical Statistical Decision Theory by James O. Berger: [...] We have already stated that decision rules will be evaluated in terms of their risk functions $R(\theta, \delta)$. [...] The problem, as pointed out earlier, is that different admissible decision rules will have risks which are better fo...
Understanding the Bayes risk
Quoting the classical Statistical Decision Theory by James O. Berger: [...] We have already stated that decision rules will be evaluated in terms of their risk functions $R(\theta, \delta)$. [...]
Understanding the Bayes risk Quoting the classical Statistical Decision Theory by James O. Berger: [...] We have already stated that decision rules will be evaluated in terms of their risk functions $R(\theta, \delta)$. [...] The problem, as pointed out earlier, is that different admissible decision rules will h...
Understanding the Bayes risk Quoting the classical Statistical Decision Theory by James O. Berger: [...] We have already stated that decision rules will be evaluated in terms of their risk functions $R(\theta, \delta)$. [...]
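Both excerpts lean on the same standard decision-theoretic definitions; for reference (notation as in the quotes: loss $\text{L}(\theta,d)$, decision rule $\delta$, prior $\pi$):

```latex
% Frequentist risk: expected loss over the data, for a fixed parameter
R(\theta, \delta) = \mathbb{E}_{\theta}\!\left[\text{L}\big(\theta, \delta(X)\big)\right]
                  = \int_{\mathcal{X}} \text{L}\big(\theta, \delta(x)\big)\, f(x \mid \theta)\,\mathrm{d}x

% Bayes risk: the frequentist risk averaged once more, over the prior
r(\pi, \delta) = \mathbb{E}^{\pi}\!\left[R(\theta, \delta)\right]
               = \int_{\Theta} R(\theta, \delta)\, \pi(\theta)\,\mathrm{d}\theta

% A Bayes rule minimizes the Bayes risk
\delta^{\pi} = \arg\min_{\delta}\; r(\pi, \delta)
```

Because $r(\pi,\delta)$ is a single number rather than a function of $\theta$, it induces a total ordering on decision rules; that is precisely the point both excerpts make about comparing rules whose risk functions cross.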
13,257
How to fit a mixed model with response variable between 0 and 1?
It makes sense to start with a simpler case of no random effects. There are four ways to deal with a continuous zero-to-one response variable that behaves like a fraction or a probability (this is our most canonical/upvoted/viewed thread on this topic, but unfortunately not all four options are discussed there): If it i...
How to fit a mixed model with response variable between 0 and 1?
It makes sense to start with a simpler case of no random effects. There are four ways to deal with a continuous zero-to-one response variable that behaves like a fraction or a probability (this is our m
How to fit a mixed model with response variable between 0 and 1? It makes sense to start with a simpler case of no random effects. There are four ways to deal with a continuous zero-to-one response variable that behaves like a fraction or a probability (this is our most canonical/upvoted/viewed thread on this topic, but ...
How to fit a mixed model with response variable between 0 and 1? It makes sense to start with a simpler case of no random effects. There are four ways to deal with a continuous zero-to-one response variable that behaves like a fraction or a probability (this is our m
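One lightweight option along the lines of the answer above (one of the classic ways to handle a 0-1 response before any mixed-model machinery) is to squeeze the response off the boundaries, in the style of the Smithson and Verkuilen adjustment, and logit-transform it; after that, ordinary linear mixed models (lme4::lmer, statsmodels MixedLM) apply as usual. A minimal numpy sketch of just the transform, with made-up data:

```python
import numpy as np

# Squeeze y from [0, 1] into (0, 1) so the logit is finite, then transform.
# The five response values are invented purely for illustration.
y = np.array([0.0, 0.02, 0.35, 0.80, 1.0])
n = len(y)
y_sq = (y * (n - 1) + 0.5) / n           # boundary squeeze: maps [0,1] into (0,1)
y_logit = np.log(y_sq / (1 - y_sq))      # now unbounded, usable as a linear-model response
print(y_sq.min() > 0 and y_sq.max() < 1)  # -> True
```

The transform is monotone, so the ordering of the responses is preserved; the price is that effects are estimated on the logit scale and have to be back-transformed for interpretation.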
13,258
Compute and graph the LDA decision boundary
This particular figure in Hastie et al. was produced without computing equations of class boundaries. Instead, the algorithm outlined by @ttnphns in the comments was used; see footnote 2 in section 4.3, page 110: For this figure and many similar figures in the book we compute the decision boundaries by an exhaustive conto...
Compute and graph the LDA decision boundary
This particular figure in Hastie et al. was produced without computing equations of class boundaries. Instead, the algorithm outlined by @ttnphns in the comments was used; see footnote 2 in section 4.3, p
Compute and graph the LDA decision boundary This particular figure in Hastie et al. was produced without computing equations of class boundaries. Instead, the algorithm outlined by @ttnphns in the comments was used; see footnote 2 in section 4.3, page 110: For this figure and many similar figures in the book we compute th...
Compute and graph the LDA decision boundary This particular figure in Hastie et al. was produced without computing equations of class boundaries. Instead, the algorithm outlined by @ttnphns in the comments was used; see footnote 2 in section 4.3, p
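The exhaustive-contour trick described in that footnote is easy to reproduce: classify every point of a dense grid and mark where the predicted class changes. A hedged plain-numpy sketch (the classifier here is nearest class mean, i.e. LDA with a spherical covariance, and the three Gaussian classes are simulated just for the demo):

```python
import numpy as np

# Simulate three Gaussian classes and fit class means.
rng = np.random.default_rng(0)
means = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
X = np.vstack([rng.normal(m, 1.0, size=(100, 2)) for m in means])
y = np.repeat([0, 1, 2], 100)
fitted = np.array([X[y == k].mean(axis=0) for k in range(3)])

# Classify every point of a fine lattice of the plot region.
xs, ys = np.linspace(-5, 5, 200), np.linspace(-4, 5, 200)
gx, gy = np.meshgrid(xs, ys)
grid = np.column_stack([gx.ravel(), gy.ravel()])
d = ((grid[:, None, :] - fitted[None, :, :]) ** 2).sum(-1)   # squared distances
z = d.argmin(axis=1).reshape(gx.shape)                       # predicted class per point

# Boundary = grid cells whose right or upper neighbour has a different class.
boundary = (z[:, 1:] != z[:, :-1])[:-1, :] | (z[1:, :] != z[:-1, :])[:, :-1]
bx, by = gx[:-1, :-1][boundary], gy[:-1, :-1][boundary]
# bx, by can then be drawn, e.g. plt.scatter(bx, by, s=1), to trace the boundaries
```

Nothing here depends on the classifier being LDA; any `predict`-style function can be swapped in, which is why the book uses this method for "many similar figures".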
13,259
Compute and graph the LDA decision boundary
I want to start off by thanking @amoeba says Reinstate Monica & ttnphns for their contributions that have greatly helped me! I'm so grateful, in fact, I'd want to buy them a drink or be able to return the favor somehow. The only thing I'm going to add to their reply is my Python implementation of drawing these Decision b...
Compute and graph the LDA decision boundary
I want to start off by thanking @amoeba says Reinstate Monica & ttnphns for their contributions that have greatly helped me! I'm so grateful, in fact, I'd want to buy them a drink or be able to return t
Compute and graph the LDA decision boundary I want to start off by thanking @amoeba says Reinstate Monica & ttnphns for their contributions that have greatly helped me! I'm so grateful, in fact, I'd want to buy them a drink or be able to return the favor somehow. The only thing I'm going to add to their reply is my Pytho...
Compute and graph the LDA decision boundary I want to start off by thanking @amoeba says Reinstate Monica & ttnphns for their contributions that have greatly helped me! I'm so grateful, in fact, I'd want to buy them a drink or be able to return t
13,260
How to understand effect of RBF SVM
You can possibly start by looking at one of my answers here: Non-linear SVM classification with RBF kernel In that answer, I attempt to explain what a kernel function is attempting to do. Once you get a grasp of what it attempts to do, as a follow-up, you can read my answer to a question on Quora: https://www.quora.c...
How to understand effect of RBF SVM
You can possibly start by looking at one of my answers here: Non-linear SVM classification with RBF kernel In that answer, I attempt to explain what a kernel function is attempting to do. Once you ge
How to understand effect of RBF SVM You can possibly start by looking at one of my answers here: Non-linear SVM classification with RBF kernel In that answer, I attempt to explain what a kernel function is attempting to do. Once you get a grasp of what it attempts to do, as a follow-up, you can read my answer to a que...
How to understand effect of RBF SVM You can possibly start by looking at one of my answers here: Non-linear SVM classification with RBF kernel In that answer, I attempt to explain what a kernel function is attempting to do. Once you ge
13,261
How to predict outcome with only positive cases as training?
This is called learning from positive and unlabeled data, or PU learning for short, and is an active niche of semi-supervised learning. Briefly, it is important to use the unlabeled data in the learning process as it yields significantly improved models over so-called single-class classifiers that are trained exclusive...
How to predict outcome with only positive cases as training?
This is called learning from positive and unlabeled data, or PU learning for short, and is an active niche of semi-supervised learning. Briefly, it is important to use the unlabeled data in the learni
How to predict outcome with only positive cases as training? This is called learning from positive and unlabeled data, or PU learning for short, and is an active niche of semi-supervised learning. Briefly, it is important to use the unlabeled data in the learning process as it yields significantly improved models over ...
How to predict outcome with only positive cases as training? This is called learning from positive and unlabeled data, or PU learning for short, and is an active niche of semi-supervised learning. Briefly, it is important to use the unlabeled data in the learni
13,262
How to predict outcome with only positive cases as training?
I am assuming there aren't as many spam cases in your 18000 cases. To use a supervised learning approach to this, you need to have more than 1 category/class in your data. Since you know 2000 cases are spam, you can label the remaining 18000 cases as 'unknown category' and train any supervised learning model to predict...
How to predict outcome with only positive cases as training?
I am assuming there aren't as many spam cases in your 18000 cases. To use a supervised learning approach to this, you need to have more than 1 category/class in your data. Since you know 2000 cases ar
How to predict outcome with only positive cases as training? I am assuming there aren't as many spam cases in your 18000 cases. To use a supervised learning approach to this, you need to have more than 1 category/class in your data. Since you know 2000 cases are spam, you can label the remaining 18000 cases as 'unknown...
How to predict outcome with only positive cases as training? I am assuming there aren't as many spam cases in your 18000 cases. To use a supervised learning approach to this, you need to have more than 1 category/class in your data. Since you know 2000 cases ar
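The recipe in the answer above (known positives get label 1, every unlabeled case gets a tentative label 0, then rank the unlabeled cases by predicted probability) can be sketched in a few lines. Everything below is invented for illustration: the data, the cluster locations, and the tiny hand-rolled logistic regression; any real classifier with probability outputs would do.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.normal(2.0, 1.0, size=(200, 2))               # known positives (e.g. known spam)
unl = np.vstack([rng.normal(2.0, 1.0, size=(50, 2)),    # hidden positives among the unlabeled
                 rng.normal(-2.0, 1.0, size=(350, 2))]) # true negatives
X = np.vstack([pos, unl])
y = np.r_[np.ones(len(pos)), np.zeros(len(unl))]        # unlabeled -> tentative class 0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# minimal logistic regression fitted by gradient descent
Xb = np.hstack([X, np.ones((len(X), 1))])
w = np.zeros(3)
for _ in range(2000):
    w -= 0.1 * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)

# rank the unlabeled cases; the hidden positives (first 50 rows) score highest
scores = sigmoid(np.hstack([unl, np.ones((len(unl), 1))]) @ w)
print(scores[:50].mean() > scores[50:].mean())  # -> True
```

Despite the label noise (the hidden positives are trained on as negatives), the decision direction still points toward the positive cluster, so the probability ranking of the unlabeled cases remains informative; that is what the answer's manual inspection step exploits.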
13,263
How to predict outcome with only positive cases as training?
What the OP is talking about is a one-class classification task, which is a very challenging one. There are many papers on this task across different research fields. I also wrote one, An Efficient Intrinsic Authorship Verification Scheme Based on Ensemble Learning. It is very easy to adapt it in order to classify spam...
How to predict outcome with only positive cases as training?
What the OP is talking about is a one-class classification task, which is a very challenging one. There are many papers on this task across different research fields. I also wrote one, An Efficient In
How to predict outcome with only positive cases as training? What the OP is talking about is a one-class classification task, which is a very challenging one. There are many papers on this task across different research fields. I also wrote one, An Efficient Intrinsic Authorship Verification Scheme Based on Ensemble Le...
How to predict outcome with only positive cases as training? What the OP is talking about is a one-class classification task, which is a very challenging one. There are many papers on this task across different research fields. I also wrote one, An Efficient In
13,264
Regression tree algorithm with linear regression models in each leaf
While they work differently than your algorithm, I believe you'll find mob() and FTtree interesting. For Zeileis' mob, see http://cran.r-project.org/web/packages/party/vignettes/MOB.pdf For FTtree, Gama's functional trees, an implementation is available in Weka and thus RWeka. See http://cran.r-project.org/web/packages/RW...
Regression tree algorithm with linear regression models in each leaf
While they work differently than your algorithm, I believe you'll find mob() and FTtree interesting. For Zeileis' mob, see http://cran.r-project.org/web/packages/party/vignettes/MOB.pdf For FTtree, Gama
Regression tree algorithm with linear regression models in each leaf While they work differently than your algorithm, I believe you'll find mob() and FTtree interesting. For Zeileis' mob, see http://cran.r-project.org/web/packages/party/vignettes/MOB.pdf For FTtree, Gama's functional trees, an implementation is available ...
Regression tree algorithm with linear regression models in each leaf While they work differently than your algorithm, I believe you'll find mob() and FTtree interesting. For Zeileis' mob, see http://cran.r-project.org/web/packages/party/vignettes/MOB.pdf For FTtree, Gama
13,265
Regression tree algorithm with linear regression models in each leaf
The RWeka package offers many regression methods. Among them, you can find M5P (M5 Prime), which is exactly a tree-based regression model with linear equations in its leaves. For further details about the M5 method, see the publication. An example code would be: library(RWeka) M5_model = M5P (Dep_var ~ ., data = train, control = Weka_c...
Regression tree algorithm with linear regression models in each leaf
The RWeka package offers many regression methods. Among them, you can find M5P (M5 Prime), which is exactly a tree-based regression model with linear equations in its leaves. For further details about the M5 method,
Regression tree algorithm with linear regression models in each leaf The RWeka package offers many regression methods. Among them, you can find M5P (M5 Prime), which is exactly a tree-based regression model with linear equations in its leaves. For further details about the M5 method, see the publication. An example code would be: libra...
Regression tree algorithm with linear regression models in each leaf The RWeka package offers many regression methods. Among them, you can find M5P (M5 Prime), which is exactly a tree-based regression model with linear equations in its leaves. For further details about the M5 method,
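To make the "linear equations in the leaves" idea concrete, here is a heavily simplified Python sketch of it (not M5 itself, which also prunes and smooths): a single binary split chosen to minimize the summed squared error of an ordinary least-squares line in each leaf, rather than a constant. The piecewise-linear data are simulated just for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 400)
y = np.where(x < 5, 1.0 + 2.0 * x, 20.0 - 1.5 * x) + rng.normal(0, 0.3, 400)

def fit_line(xs, ys):
    """Least-squares line; returns (slope, intercept)."""
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

def split_sse(t):
    """Total squared error with a line fitted in each side of the split at t."""
    total = 0.0
    for mask in (x < t, x >= t):
        a, b = fit_line(x[mask], y[mask])
        total += ((y[mask] - (a * x[mask] + b)) ** 2).sum()
    return total

best_t = min(np.linspace(1, 9, 81), key=split_sse)
left = fit_line(x[x < best_t], y[x < best_t])
right = fit_line(x[x >= best_t], y[x >= best_t])
# best_t lands near the true breakpoint at 5, and the leaf models recover
# slopes close to 2 and -1.5
```

A full M5-style tree simply recurses this split search and adds pruning; the key difference from CART is only the leaf model.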
13,266
Regression tree algorithm with linear regression models in each leaf
I think this answers the short version of your question: The Cubist package fits rule-based models (similar to trees) with linear regression models in the terminal leaves, instance-based corrections and boosting. From CRAN Task Views: Machine Learning
Regression tree algorithm with linear regression models in each leaf
I think this answers the short version of your question: The Cubist package fits rule-based models (similar to trees) with linear regression models in the terminal leaves, instance-based correct
Regression tree algorithm with linear regression models in each leaf I think this answers the short version of your question: The Cubist package fits rule-based models (similar to trees) with linear regression models in the terminal leaves, instance-based corrections and boosting. From CRAN Task Views: Machine L...
Regression tree algorithm with linear regression models in each leaf I think this answers the short version of your question: The Cubist package fits rule-based models (similar to trees) with linear regression models in the terminal leaves, instance-based correct
13,267
Stacking/ensembling models with caret
It looks like Max Kuhn actually started working on a package for ensembling caret models, but hasn't had time to finish it yet. This is exactly what I was looking for. I hope the project gets finished one day! edit: I wrote my own package to do this: caretEnsemble
Stacking/ensembling models with caret
It looks like Max Kuhn actually started working on a package for ensembling caret models, but hasn't had time to finish it yet. This is exactly what I was looking for. I hope the project gets fini
Stacking/ensembling models with caret It looks like Max Kuhn actually started working on a package for ensembling caret models, but hasn't had time to finish it yet. This is exactly what I was looking for. I hope the project gets finished one day! edit: I wrote my own package to do this: caretEnsemble
Stacking/ensembling models with caret It looks like Max Kuhn actually started working on a package for ensembling caret models, but hasn't had time to finish it yet. This is exactly what I was looking for. I hope the project gets fini
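The mechanics that caretEnsemble (and, in Python, sklearn's StackingRegressor) automate can be sketched by hand: generate out-of-fold predictions from each base model, use them as features for a meta-model, then refit the base models on all data. Everything below (the two toy base learners, the simulated data, least squares as the meta-model) is an assumption for the demo, not how any of those packages work internally.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, 300)
y = np.sin(x) + 0.1 * rng.normal(size=300)

def lin_fit(xtr, ytr):                      # base model 1: straight line
    a, b = np.polyfit(xtr, ytr, 1)
    return lambda xn: a * xn + b

def knn_fit(xtr, ytr, k=10):                # base model 2: k-nearest-neighbour mean
    def pred(xn):
        xn = np.atleast_1d(xn)
        idx = np.abs(xtr[None, :] - xn[:, None]).argsort(axis=1)[:, :k]
        return ytr[idx].mean(axis=1)
    return pred

# out-of-fold predictions of each base model become the meta-features
folds = np.arange(300) % 5
meta = np.zeros((300, 2))
for f in range(5):
    tr, te = folds != f, folds == f
    meta[te, 0] = lin_fit(x[tr], y[tr])(x[te])
    meta[te, 1] = knn_fit(x[tr], y[tr])(x[te])

# meta-model: least squares on the meta-features (plus an intercept)
w, *_ = np.linalg.lstsq(np.column_stack([meta, np.ones(300)]), y, rcond=None)
lin, knn = lin_fit(x, y), knn_fit(x, y)     # refit base models on all data
blend = lambda xn: w[0] * lin(xn) + w[1] * knn(xn) + w[2]
```

Using out-of-fold rather than in-sample predictions for the meta-features is the important detail; otherwise the meta-model rewards whichever base model overfits hardest.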
13,268
Stacking/ensembling models with caret
What you are looking for is called "model ensembling". A simple introductory tutorial with R code can be found here: http://viksalgorithms.blogspot.jp/2012/01/intro-to-ensemble-learning-in-r.html
Stacking/ensembling models with caret
What you are looking for is called "model ensembling". A simple introductory tutorial with R code can be found here: http://viksalgorithms.blogspot.jp/2012/01/intro-to-ensemble-learning-in-r.html
Stacking/ensembling models with caret What you are looking for is called "model ensembling". A simple introductory tutorial with R code can be found here: http://viksalgorithms.blogspot.jp/2012/01/intro-to-ensemble-learning-in-r.html
Stacking/ensembling models with caret What you are looking for is called "model ensembling". A simple introductory tutorial with R code can be found here: http://viksalgorithms.blogspot.jp/2012/01/intro-to-ensemble-learning-in-r.html
13,269
Stacking/ensembling models with caret
I'm not quite sure what you are looking for, but this might help: http://www.jstatsoft.org/v28/i05/paper It shows how to use multiple models in caret. The part you might be interested in is section 5 on pg. 13.
Stacking/ensembling models with caret
I'm not quite sure what you are looking for, but this might help: http://www.jstatsoft.org/v28/i05/paper It shows how to use multiple models in caret. The part you might be interested in is section 5 on pg.
Stacking/ensembling models with caret I'm not quite sure what you are looking for, but this might help: http://www.jstatsoft.org/v28/i05/paper It shows how to use multiple models in caret. The part you might be interested in is section 5 on pg. 13.
Stacking/ensembling models with caret I'm not quite sure what you are looking for, but this might help: http://www.jstatsoft.org/v28/i05/paper It shows how to use multiple models in caret. The part you might be interested in is section 5 on pg.
13,270
Calculating AUPR in R [closed]
As of July 2016, the package PRROC works great for computing both ROC AUC and PR AUC. Assuming you already have a vector of probabilities (called probs) computed with your model and the true class labels are in your data frame as df$label (0 and 1) this code should work: install.packages("PRROC") require(PRROC) fg <- ...
Calculating AUPR in R [closed]
As of July 2016, the package PRROC works great for computing both ROC AUC and PR AUC. Assuming you already have a vector of probabilities (called probs) computed with your model and the true class lab
Calculating AUPR in R [closed] As of July 2016, the package PRROC works great for computing both ROC AUC and PR AUC. Assuming you already have a vector of probabilities (called probs) computed with your model and the true class labels are in your data frame as df$label (0 and 1) this code should work: install.packages(...
Calculating AUPR in R [closed] As of July 2016, the package PRROC works great for computing both ROC AUC and PR AUC. Assuming you already have a vector of probabilities (called probs) computed with your model and the true class lab
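For readers outside R, the same quantity can be sketched in plain numpy: sort by score, sweep thresholds, and sum precision over the recall increments (the "average precision" estimator of the PR AUC; PRROC's interpolated curve can differ slightly). The toy labels and scores below are made up.

```python
import numpy as np

def pr_auc(labels, scores):
    """Average-precision estimate of the area under the precision-recall curve."""
    order = np.argsort(-np.asarray(scores, dtype=float))   # descending by score
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                                 # true positives at each cut
    precision = tp / np.arange(1, len(labels) + 1)
    recall = tp / labels.sum()
    d_recall = np.diff(np.r_[0.0, recall])                 # recall increments
    return float((precision * d_recall).sum())

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(round(pr_auc(labels, scores), 4))  # -> 0.9167
```

A perfect ranking (all positives scored above all negatives) gives exactly 1.0, which is a quick sanity check for any implementation.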
13,271
Calculating AUPR in R [closed]
AUPRC() is a function in the PerfMeas package, which is much better than the pr.curve() function in the PRROC package when the data is very large. pr.curve() is a nightmare and takes forever to finish when you have vectors with millions of entries. PerfMeas takes seconds in comparison. PRROC is written in R and PerfMeas is ...
Calculating AUPR in R [closed]
AUPRC() is a function in the PerfMeas package, which is much better than the pr.curve() function in the PRROC package when the data is very large. pr.curve() is a nightmare and takes forever to finish when
Calculating AUPR in R [closed] AUPRC() is a function in the PerfMeas package, which is much better than the pr.curve() function in the PRROC package when the data is very large. pr.curve() is a nightmare and takes forever to finish when you have vectors with millions of entries. PerfMeas takes seconds in comparison. PRROC i...
Calculating AUPR in R [closed] AUPRC() is a function in the PerfMeas package, which is much better than the pr.curve() function in the PRROC package when the data is very large. pr.curve() is a nightmare and takes forever to finish when
13,272
Calculating AUPR in R [closed]
A little googling returns one bioc package, qpgraph (qpPrecisionRecall), and a cran one, minet (auc.pr). I have no experience with them, though. Both have been devised to deal with biological networks.
Calculating AUPR in R [closed]
A little googling returns one bioc package, qpgraph (qpPrecisionRecall), and a cran one, minet (auc.pr). I have no experience with them, though. Both have been devised to deal with biological networks
Calculating AUPR in R [closed] A little googling returns one bioc package, qpgraph (qpPrecisionRecall), and a cran one, minet (auc.pr). I have no experience with them, though. Both have been devised to deal with biological networks.
Calculating AUPR in R [closed] A little googling returns one bioc package, qpgraph (qpPrecisionRecall), and a cran one, minet (auc.pr). I have no experience with them, though. Both have been devised to deal with biological networks
13,273
Calculating AUPR in R [closed]
Once you've got a precision recall curve from qpPrecisionRecall, e.g.: pr <- qpPrecisionRecall(measurements, goldstandard) you can calculate its AUC by doing this: f <- approxfun(pr[, 1:2]) auc <- integrate(f, 0, 1)$value The help page of qpPrecisionRecall gives you details on what data structure it expects in its argum...
Calculating AUPR in R [closed]
Once you've got a precision recall curve from qpPrecisionRecall, e.g.: pr <- qpPrecisionRecall(measurements, goldstandard) you can calculate its AUC by doing this: f <- approxfun(pr[, 1:2]) auc <- in
Calculating AUPR in R [closed] Once you've got a precision recall curve from qpPrecisionRecall, e.g.: pr <- qpPrecisionRecall(measurements, goldstandard) you can calculate its AUC by doing this: f <- approxfun(pr[, 1:2]) auc <- integrate(f, 0, 1)$value the help page of qpPrecisionRecall gives you details on what data...
Calculating AUPR in R [closed] Once you've got a precision recall curve from qpPrecisionRecall, e.g.: pr <- qpPrecisionRecall(measurements, goldstandard) you can calculate its AUC by doing this: f <- approxfun(pr[, 1:2]) auc <- in
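The R snippet in that answer builds a linear interpolator of the (recall, precision) points (approxfun) and integrates it over [0, 1] (integrate). For linear interpolation that integral has a closed form: the trapezoidal rule applied directly to the points. A Python sketch, with a made-up curve and assuming the recall grid spans the full [0, 1] range like the integrate(f, 0, 1) call does:

```python
import numpy as np

# Invented precision-recall points, sorted by increasing recall.
recall    = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
precision = np.array([1.0, 0.9,  0.8, 0.6,  0.5])

# Trapezoidal rule = exact integral of the linear interpolant of the points.
auc = float((np.diff(recall) * (precision[1:] + precision[:-1]) / 2).sum())
print(round(auc, 4))  # -> 0.7625
```

If the observed recalls do not reach 0 and 1, the curve has to be extended (or the integration limits narrowed) first, exactly as with the approxfun/integrate pair.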
13,274
What are the differences between the Baum-Welch algorithm and Viterbi training?
The Baum-Welch algorithm and the Viterbi algorithm calculate different things. If you know the transition probabilities for the hidden part of your model, and the emission probabilities for the visible outputs of your model, then the Viterbi algorithm gives you the most likely complete sequence of hidden states conditi...
What are the differences between the Baum-Welch algorithm and Viterbi training?
The Baum-Welch algorithm and the Viterbi algorithm calculate different things. If you know the transition probabilities for the hidden part of your model, and the emission probabilities for the visibl
What are the differences between the Baum-Welch algorithm and Viterbi training? The Baum-Welch algorithm and the Viterbi algorithm calculate different things. If you know the transition probabilities for the hidden part of your model, and the emission probabilities for the visible outputs of your model, then the Viterb...
What are the differences between the Baum-Welch algorithm and Viterbi training? The Baum-Welch algorithm and the Viterbi algorithm calculate different things. If you know the transition probabilities for the hidden part of your model, and the emission probabilities for the visibl
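The "most likely complete sequence of hidden states" computation the answer describes fits in a few lines. Below is a minimal Viterbi sketch on the classic rainy/sunny toy HMM (the probabilities are the standard textbook example, not taken from the answer); Baum-Welch would instead estimate `start`, `trans` and `emit` from unlabeled observation sequences.

```python
import numpy as np

# hidden states: 0 = rainy, 1 = sunny; observations: 0 = walk, 1 = shop, 2 = clean
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
emit  = np.array([[0.1, 0.4, 0.5],
                  [0.6, 0.3, 0.1]])

def viterbi(obs):
    """Most likely hidden-state path for a known-parameter HMM."""
    n, k = len(obs), len(start)
    logp = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, n):
        for j in range(k):
            cand = logp[t - 1] + np.log(trans[:, j])   # best way to arrive in j
            back[t, j] = cand.argmax()
            logp[t, j] = cand.max() + np.log(emit[j, obs[t]])
    path = [int(logp[-1].argmax())]                    # backtrack the pointers
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 2]))  # -> [1, 0, 0]  (sunny, rainy, rainy)
```

Note the max in the recursion: replacing it with a sum gives the forward algorithm, which is the building block Baum-Welch uses to compute the expected counts the answer mentions.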
13,275
What are the differences between the Baum-Welch algorithm and Viterbi training?
Forward-backward is used when you want to count 'invisible things'. For example, when using E-M to improve a model via unsupervised data. I think that Petrov's paper is an example. In the technique I'm thinking of, you first train a model with annotated data with fairly coarse annotations (e.g. a tag for 'Verb'). Then ...
What are the differences between the Baum-Welch algorithm and Viterbi training?
Forward-backward is used when you want to count 'invisible things'. For example, when using E-M to improve a model via unsupervised data. I think that Petrov's paper is an example. In the technique I'
What are the differences between the Baum-Welch algorithm and Viterbi training? Forward-backward is used when you want to count 'invisible things'. For example, when using E-M to improve a model via unsupervised data. I think that Petrov's paper is an example. In the technique I'm thinking of, you first train a model w...
What are the differences between the Baum-Welch algorithm and Viterbi training? Forward-backward is used when you want to count 'invisible things'. For example, when using E-M to improve a model via unsupervised data. I think that Petrov's paper is an example. In the technique I'
13,276
Are there any circumstances where stepwise regression should be used?
I am not aware of situations in which stepwise regression would be the preferred approach. It may be okay (particularly in its step-down version starting from the full model) with bootstrapping of the whole stepwise process on extremely large datasets with $n>>p$. Here $n$ is the number of observations in a continuou...
Are there any circumstances where stepwise regression should be used?
I am not aware of situations in which stepwise regression would be the preferred approach. It may be okay (particularly in its step-down version starting from the full model) with bootstrapping of th
Are there any circumstances where stepwise regression should be used? I am not aware of situations in which stepwise regression would be the preferred approach. It may be okay (particularly in its step-down version starting from the full model) with bootstrapping of the whole stepwise process on extremely large datase...
Are there any circumstances where stepwise regression should be used? I am not aware of situations in which stepwise regression would be the preferred approach. It may be okay (particularly in its step-down version starting from the full model) with bootstrapping of th
13,277
Are there any circumstances where stepwise regression should be used?
Two cases in which I would not object to seeing step-wise regression are exploratory data analysis and predictive models. In both these very important use cases, you are not so concerned about traditional statistical inference, so the fact that p-values, etc., are no longer valid is of little concern. For example, if a re...
Are there any circumstances where stepwise regression should be used?
Two cases in which I would not object to seeing step-wise regression are exploratory data analysis and predictive models. In both these very important use cases, you are not so concerned about traditiona
Are there any circumstances where stepwise regression should be used? Two cases in which I would not object to seeing step-wise regression are exploratory data analysis and predictive models. In both these very important use cases, you are not so concerned about traditional statistical inference, so the fact that p-values...
Are there any circumstances where stepwise regression should be used? Two cases in which I would not object to seeing step-wise regression are exploratory data analysis and predictive models. In both these very important use cases, you are not so concerned about traditiona
13,278
Advantages of Particle Swarm Optimization over Bayesian Optimization for hyperparameter tuning?
As the lead developer of Optunity I'll add my two cents. We have done extensive benchmarks comparing Optunity with the most popular Bayesian solvers (e.g., hyperopt, SMAC, bayesopt) on real-world problems, and the results indicate that PSO is in fact not less efficient in many practical cases. In our benchmark, which ...
Advantages of Particle Swarm Optimization over Bayesian Optimization for hyperparameter tuning?
As the lead developer of Optunity I'll add my two cents. We have done extensive benchmarks comparing Optunity with the most popular Bayesian solvers (e.g., hyperopt, SMAC, bayesopt) on real-world pro
Advantages of Particle Swarm Optimization over Bayesian Optimization for hyperparameter tuning? As the lead developer of Optunity I'll add my two cents. We have done extensive benchmarks comparing Optunity with the most popular Bayesian solvers (e.g., hyperopt, SMAC, bayesopt) on real-world problems, and the results i...
Advantages of Particle Swarm Optimization over Bayesian Optimization for hyperparameter tuning? As the lead developer of Optunity I'll add my two cents. We have done extensive benchmarks comparing Optunity with the most popular Bayesian solvers (e.g., hyperopt, SMAC, bayesopt) on real-world pro
13,279
Advantages of Particle Swarm Optimization over Bayesian Optimization for hyperparameter tuning?
The answer is problem-dependent and cannot be given without additional context. Typically, the answer would go as follows. Bayesian Optimization is more suitable for low-dimensional problems with the computational budget up to say 10x-100x the number of variables. PSO can be quite efficient for much larger budgets but...
Advantages of Particle Swarm Optimization over Bayesian Optimization for hyperparameter tuning?
The answer is problem-dependent and cannot be given without additional context. Typically, the answer would go as follows. Bayesian Optimization is more suitable for low-dimensional problems with the
Advantages of Particle Swarm Optimization over Bayesian Optimization for hyperparameter tuning? The answer is problem-dependent and cannot be given without additional context. Typically, the answer would go as follows. Bayesian Optimization is more suitable for low-dimensional problems with the computational budget up...
Advantages of Particle Swarm Optimization over Bayesian Optimization for hyperparameter tuning? The answer is problem-dependent and cannot be given without additional context. Typically, the answer would go as follows. Bayesian Optimization is more suitable for low-dimensional problems with the
13,280
Attainable correlations for lognormal random variables
I'll start by providing the definition of comonotonicity and countermonotonicity. Then, I'll mention why this is relevant to compute the minimum and maximum possible correlation coefficient between two random variables. And finally, I'll compute these bounds for the lognormal random variables $X_1$ and $X_2$. Comonoton...
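As a numerical sketch of the bounds discussed above (a hedged illustration: the log-scale standard deviations $\sigma_1 = 0.5$, $\sigma_2 = 1$ are my own choice, not taken from the answer), the comonotonic and countermonotonic constructions can be checked against the closed-form attainable-correlation bounds for lognormals:

```python
import numpy as np

def lognormal_corr_bounds(s1, s2):
    """Min/max attainable correlation for X1 = exp(s1*Z1), X2 = exp(s2*Z2)."""
    denom = np.sqrt((np.exp(s1**2) - 1.0) * (np.exp(s2**2) - 1.0))
    rho_max = (np.exp(s1 * s2) - 1.0) / denom
    rho_min = (np.exp(-s1 * s2) - 1.0) / denom
    return rho_min, rho_max

rng = np.random.default_rng(0)
s1, s2 = 0.5, 1.0          # hypothetical log-scale standard deviations
z = rng.standard_normal(2_000_000)

# Comonotonic pair (same underlying normal) attains the upper bound;
# countermonotonic pair (opposite normals) attains the lower bound.
r_hi = np.corrcoef(np.exp(s1 * z), np.exp(s2 * z))[0, 1]
r_lo = np.corrcoef(np.exp(s1 * z), np.exp(-s2 * z))[0, 1]

lo, hi = lognormal_corr_bounds(s1, s2)
print(lo, hi)      # closed-form bounds; note hi < 1 even under perfect dependence
print(r_lo, r_hi)  # Monte Carlo estimates, close to the bounds
```

Note how the attainable interval is strictly inside $[-1, 1]$: lognormal marginals cannot reach correlation $\pm 1$ unless the two log-scales coincide.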
13,281
How is $\theta$, the polar coordinate, distributed when $(x,y) \sim U(-1,1) \times U(-1,1)$ vs. when $(x,y) \sim N(0,1)\times N(0,1)$?
You're referring to a transformation from a pair of independent variates $(X,Y)$ to the polar representation $(R,\theta)$ (radius and angle), and then looking at the marginal distribution of $\theta$. I'm going to offer a somewhat intuitive explanation (though a mathematical derivation of the density does essentially w...
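A quick Monte Carlo check of the rotational-symmetry intuition (a sketch of my own, not from the answer): for independent standard normals, the angle should be uniform on $(-\pi, \pi]$.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

theta = np.arctan2(y, x)  # angle of each (x, y) point, in (-pi, pi]

# The joint density exp(-(x^2 + y^2)/2) / (2*pi) depends only on the radius,
# so every equal angular bin should receive about the same share of points.
counts, _ = np.histogram(theta, bins=8, range=(-np.pi, np.pi))
print(counts / n)  # each entry close to 1/8
```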
13,282
How is $\theta$, the polar coordinate, distributed when $(x,y) \sim U(-1,1) \times U(-1,1)$ vs. when $(x,y) \sim N(0,1)\times N(0,1)$?
To complete the fairly good answers given by Glen and Michael, I'll just compute the density of $\theta$ when the distribution of $(X,Y)$ is uniform on the square $[-1,1]\times[-1,1]$. This uniform density is $1 \over 4$ on this square, $0$ elsewhere -- that is, the probability of sampling a point in a given region of ...
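The density computed in the answer can be sanity-checked by simulation. Assuming I have reproduced the calculation correctly, integrating the uniform density $1/4$ along each ray gives $f(\theta) = \sec^2(\theta)/8$ for $|\theta| < \pi/4$, with the same shape repeating every quarter turn by symmetry; the sketch below compares that closed form with an empirical estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
theta = np.arctan2(y, x)

# For |theta| < pi/4 the ray exits through the right edge at r = 1/cos(theta),
# giving f(theta) = r^2 / 8 = sec(theta)^2 / 8; symmetry repeats this shape
# every quarter turn.
def f(t):
    d = np.abs(((t + np.pi / 4) % (np.pi / 2)) - np.pi / 4)  # fold to [0, pi/4]
    return 1.0 / (8.0 * np.cos(d) ** 2)

# Empirical density in narrow windows around theta = 0 and theta = pi/4
w = 0.05
p0 = np.mean(np.abs(theta) < w / 2) / w
p45 = np.mean(np.abs(theta - np.pi / 4) < w / 2) / w
print(f(0.0), f(np.pi / 4))  # 0.125 and 0.25
print(p0, p45)               # empirical values, close to the above
```

The density is lowest toward the edge midpoints and twice as high toward the corners, which matches the intuition that the corners of the square put more mass at the diagonal angles.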
13,283
How is $\theta$, the polar coordinate, distributed when $(x,y) \sim U(-1,1) \times U(-1,1)$ vs. when $(x,y) \sim N(0,1)\times N(0,1)$?
I will answer the question about the normal case leading to the uniform distribution. It is well known that if $X$ and $Y$ are independent and normally distributed, the contours of constant probability density are circles in the $x$-$y$ plane. The radius $R = \sqrt{X^2 + Y^2}$ has the Rayleigh distribution. For a good dis...
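A minimal simulation of the Rayleigh claim (a sketch, not from the answer): for independent standard normals, $R = \sqrt{X^2 + Y^2}$ is Rayleigh with $\sigma = 1$, so its mean should be $\sqrt{\pi/2}$ and $E[R^2] = 2$.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)
r = np.sqrt(x**2 + y**2)

# Rayleigh(sigma=1) facts: E[R] = sqrt(pi/2) ~ 1.2533, E[R^2] = 2
print(r.mean())       # close to sqrt(pi/2)
print((r**2).mean())  # close to 2 (R^2 is chi-squared with 2 df)
```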
13,284
Is Correlation Transitive? [duplicate]
We may prove that if the correlations are sufficiently close to 1, then $X$ and $Z$ must be positively correlated. Let $C(x,y)$ denote the correlation coefficient between $x$ and $y$; likewise we have $C(x,z)$ and $C(y,z)$. Here is an equation that follows from working through the correlation algebra: $$C(x,y...
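The point can be made concrete with a small construction (my own example, not from the answer): take independent $U, V$ and set $Y = U$, $X = U + 2V$, $Z = U - 2V$. Both $X$ and $Z$ correlate positively with $Y$ yet negatively with each other, while still satisfying the bound implied by positive semidefiniteness of the $3\times 3$ correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
u = rng.standard_normal(n)
v = rng.standard_normal(n)

y = u            # corr(X, Y) and corr(Y, Z) are both positive...
x = u + 2 * v
z = u - 2 * v    # ...yet corr(X, Z) is strongly negative

c = np.corrcoef([x, y, z])
cxy, cyz, cxz = c[0, 1], c[1, 2], c[0, 2]
print(cxy, cyz, cxz)  # about 0.447, 0.447, -0.6

# Positive semidefiniteness of the correlation matrix forces cxz into:
half_width = np.sqrt((1 - cxy**2) * (1 - cyz**2))
lower, upper = cxy * cyz - half_width, cxy * cyz + half_width
print(lower, upper)  # this example sits essentially at the lower bound
```

Transitivity only kicks in once $C(x,y)^2 + C(y,z)^2 > 1$, which forces the lower bound above zero.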
13,285
Is Correlation Transitive? [duplicate]
Here is a great post by Terence Tao on the topic. Words from the man himself: I came across the (important) point that correlation is not necessarily transitive: if $X$ correlates with $Y$, and $Y$ correlates with $Z$, then this does not imply that $X$ correlates with $Z$.
13,286
Why are “time series” called such?
Why is it "Time Series", not "Time Sequence"? This inconsistency bugged me too the first time I saw it! But note that outside mathematics, people often use "series" to refer to what mathematicians might call a sequence. For example, the Oxford English dictionary online gives the main definition of "series" as a "numbe...
13,287
Why are “time series” called such?
"Series" is: a group or a number of related or similar things (http://dictionary.reference.com/browse/series) a number of things or events that are arranged or happen one after the other (http://www.merriam-webster.com/dictionary/series) A number of objects or events arranged or coming one after the other in successio...
13,288
Why are “time series” called such?
The accepted answer is informative (I upvoted it myself), but it assumes that the "series" term in Time Series is really a misnomer and should be "sequence" instead. For the first few decades in the development of time series analysis, 1920s and 1930s, time series was synonymous with ARMA time series. An MA time series...
13,289
Calculating $R^2$ in mixed models using Nakagawa & Schielzeth's (2013) R2glmm method
I am answering by pasting Douglas Bates's reply in the R-Sig-ME mailing list, on 17 Dec 2014, on the question of how to calculate an $R^2$ statistic for generalized linear mixed models, which I believe is required reading for anyone interested in such a thing. Bates is the original author of the lme4 package for R and co...
13,290
Calculating $R^2$ in mixed models using Nakagawa & Schielzeth's (2013) R2glmm method
After browsing the literature I came across the following paper, which compares several different methods for calculating $R^2$ values for mixed models, where the $R^2$(MVP) method is equivalent to the method proposed by Nakagawa and Schielzeth. Lahuis, D et al (2014) Explained Variance Measures for Multilevel Models...
13,291
Ways to reduce high dimensional data for visualization
I had some seven-dimensional data myself. Although I finally settled for a small selection of 3-dimensional slice-throughs, one option is the Parallel Coordinates Plot. This works for an arbitrary number of dimensions! From Wikipedia: Parallel coordinates is a common way of visualizing high-dimensional geometry and an...
13,292
Ways to reduce high dimensional data for visualization
Pairs plots: This is not a method of dimensionality reduction, but it is a really good way to get a quick overview of where some meaningful relationships might lie. In R, the base package contains the pairs() function, which is good for continuous data (it converts everything to continuous). A better function is ggpair...
13,293
Ways to reduce high dimensional data for visualization
Principal Component Analysis is generally a good choice for dimension reduction. I am not sure it will suit your particular problem, but it will find the orthogonal dimensions along which most of the variation in the data samples is captured. If you develop in R, you can use prcomp() to simply convert your orig...
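For readers outside R, here is a minimal PCA-to-2-D sketch in Python/NumPy that does the same job as prcomp() (the synthetic data set and its dimensions are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 200 samples in 7 dimensions, concentrated near a 2-D plane
latent = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 7))
X = latent + 0.05 * rng.standard_normal((200, 7))

# PCA via SVD of the centered data matrix (what prcomp does under the hood)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T           # 2-D coordinates suitable for a scatter plot
explained = s**2 / np.sum(s**2)  # proportion of variance per component
print(scores.shape)              # (200, 2)
print(explained[:2].sum())       # close to 1 for this nearly planar cloud
```

The `explained` vector is the usual screeplot quantity: when the first two entries account for most of the variance, a 2-D scatter of `scores` is a faithful picture of the cloud.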
13,294
Ways to reduce high dimensional data for visualization
Here are a couple of ways of portraying 3-D data with ggplot2. You can combine approaches (facet grids, colors, shapes, etc.) to increase the dimensionality of your graphic. doInstall <- TRUE # Change to FALSE if you don't want packages installed. toInstall <- c("ggplot2") if(doInstall){install.packages(toInstall, rep...
13,295
Ways to reduce high dimensional data for visualization
For the two-dimensional problem, I wonder if you could plot a map of your trace points with some symbol at the (x,y) coordinates. Then this symbol would either change color or oscillate around its fixed position (corresponding to $p=p_{mean}$ for example). I can see both being relatively easy to do in matplotlib. The o...
13,296
Constrained linear regression through a specified point
If $(x_0,y_0)$ is the point through which the regression line must pass, fit the model $y - y_0 = \beta (x - x_0) + \varepsilon$, i.e., a linear regression with "no intercept" on a translated data set. In $R$, this might look like lm( I(y-y0) ~ I(x-x0) + 0). Note the + 0 at the end which indicates to lm that no intercept term ...
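The same translate-then-fit-without-intercept trick, sketched in Python/NumPy (the data and the anchor point $(x_0, y_0)$ here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)  # true line passes through (4, 9)

x0, y0 = 4.0, 9.0  # hypothetical point the fitted line is forced through

# No-intercept least squares on the translated data: (y - y0) = beta * (x - x0)
beta = np.sum((x - x0) * (y - y0)) / np.sum((x - x0) ** 2)

def predict(xs):
    return y0 + beta * (xs - x0)

print(beta)          # close to the true slope 2
print(predict(x0))   # exactly y0 -- the constraint holds by construction
```

Because the intercept is dropped after translation, the fitted line satisfies the constraint exactly rather than approximately.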
13,297
Changing null hypothesis in linear regression
set.seed(20); y = rnorm(20); x = y + rnorm(20, 0, 0.2) # generate correlated data summary(lm(y ~ x)) # original model summary(lm(y ~ x, offset= 1.00*x)) # testing against slope=1 summary(lm(y-x ~ x)) # testing against slope=1 Outputs: Estimate Std. Error t value Pr(>|t|) ...
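The equivalence between offsetting and regressing $y-x$ on $x$ can be checked outside R as well. Here is a NumPy sketch mirroring the simulated data above (the random draws will differ from R's, so the numbers won't match exactly):

```python
import numpy as np

rng = np.random.default_rng(20)
n = 20
y = rng.standard_normal(n)
x = y + rng.normal(0.0, 0.2, n)  # correlated data, true slope near 1

# Ordinary fit y = a + b*x
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b = coef[1]
resid = y - X @ coef
se_b = np.sqrt(resid @ resid / (n - 2) / np.sum((x - x.mean()) ** 2))

# t statistic against H0: b = 1 (rather than the default b = 0)
t_vs_1 = (b - 1.0) / se_b

# Translation trick: regress (y - x) on x; its slope is exactly b - 1,
# so testing it against 0 is the same test.
coef2, *_ = np.linalg.lstsq(X, y - x, rcond=None)
print(t_vs_1)
print(np.isclose(coef2[1], b - 1.0))  # True
```

The residuals, and hence the standard error, are identical in both parameterizations, which is why the two t statistics agree.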
13,298
Changing null hypothesis in linear regression
Your hypothesis can be expressed as $R\beta=r$, where $\beta$ is the vector of regression coefficients, $R$ is the restriction matrix, and $r$ holds the restriction values. If our model is $$y=\beta_0+\beta_1x+u$$ then for the hypothesis $\beta_1=1$ we have $R=[0,1]$ and $r=1$. For these types of hypotheses you can use the linearHypothesis function from pa...
13,299
Changing null hypothesis in linear regression
It seems you're still trying to reject a null hypothesis. There are loads of problems with that, not the least of which is that it's possible that you don't have enough power to see that you're different from 1. It sounds like you don't care that the slope is 0.07 different from 1. But what if you can't really tell?...
13,300
Changing null hypothesis in linear regression
The point of testing is that you want to reject your null hypothesis, not confirm it. The fact that there is no significant difference, is in no way a proof of the absence of a significant difference. For that, you'll have to define what effect size you deem reasonable to reject the null. Testing whether your slope is ...