Equations in the news: Translating a multi-level model to a general audience
"Your teaching score depends on how well your students did compared to a prediction made based on: what they knew beforehand, as measured by a pretest; how well we think the students can learn based on what we know about them individually (their "characteristics"); and how well students do on average in your district,...
Equations in the news: Translating a multi-level model to a general audience
There is just nothing to understand here. Well, ok, it is just a standard linear regression model. It assumes that the score of a student can be described as a linear function of several factors, including school and teacher efficiency coefficients -- thus it shares all the standard problems of linear models, mainly th...
How does one interpret histograms given by TensorFlow in TensorBoard?
Currently the name "histogram" is a misnomer. You can find evidence of that in the README. The meaning of the histogram interface might change some day as they said there. However, this is what it currently means. The graphs in your question mix different runs of TensorFlow. Instead, look at the following graphs that d...
Similarity measures between curves?
You are comparing trajectories, or curves. This is a studied topic. Procrustes analysis and dynamic time warping, as EMS says, are tools of the trade. Once you've aligned the curves you'll want to measure the distance, say the Fréchet distance. If you want to share some of your data we could take a crack at it ourselve...
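Dynamic time warping, mentioned above, is short enough to sketch directly. This is an illustrative numpy implementation (not from the answer): a quadratic dynamic program that finds the cheapest monotone alignment between two ordered point sequences, using Euclidean point-to-point cost.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two ordered sequences of points.

    a, b: arrays of shape (n, d) and (m, d) (1-D inputs are treated as
    d=1). Returns the cumulative cost of the optimal monotone alignment.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    if a.ndim == 1:
        a = a[:, None]
    if b.ndim == 1:
        b = b[:, None]
    n, m = len(a), len(b)
    # cost[i, j]: best cumulative cost aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # stretch a
                                 cost[i, j - 1],       # stretch b
                                 cost[i - 1, j - 1])   # advance both
    return cost[n, m]
```

Because the alignment may stretch either curve, sequences that trace the same shape at different speeds (e.g. [0, 0, 1] vs [0, 1, 1]) get distance 0, which a pointwise comparison would not.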
Similarity measures between curves?
You might consider Procrustes distance, or some distance based on dynamic time warping (even if one of your dimensions is not "time" per se, you can still use this transformation idea). See this recent work on Tracklets for an illustrative use case of dynamic time warping for measuring similarity between 3D space curve...
Similarity measures between curves?
The original post asked for a metric between ORDERED points in 3D. The only such metric is the Fréchet distance. There was no mention of time as one of the dimensions, so I would assume that all the dimensions have units of distance (i.e. the units are not mixed). This can be done by modifying a function recently upl...
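A minimal sketch of the discrete Fréchet distance (the standard approximation on sampled points; function name and structure are illustrative, not the function the answer refers to). Unlike Hausdorff distance, the ordering of the points matters.

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between polygonal curves p and q.

    p, q: arrays of shape (n, d) and (m, d) of ORDERED points
    (1-D inputs are treated as d=1).
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if p.ndim == 1:
        p = p[:, None]
    if q.ndim == 1:
        q = q[:, None]
    n, m = len(p), len(q)
    ca = np.empty((n, m))
    ca[0, 0] = np.linalg.norm(p[0] - q[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            d = np.linalg.norm(p[i] - q[j])
            if i == 0:                       # can only advance along q
                ca[i, j] = max(ca[0, j - 1], d)
            elif j == 0:                     # can only advance along p
                ca[i, j] = max(ca[i - 1, 0], d)
            else:                            # min over the three predecessors
                ca[i, j] = max(min(ca[i - 1, j],
                                   ca[i - 1, j - 1],
                                   ca[i, j - 1]), d)
    return ca[n - 1, m - 1]
```

For two parallel segments one unit apart, the result is exactly 1: the optimal traversal keeps the "leash" at its shortest possible length throughout.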
Similarity measures between curves?
Hausdorff Distance might be what you are looking for. Hausdorff Distance between two point sets $X$ and $Y$ is defined as, $d_H(X, Y) = \max \{\sup_{x \in X} \inf_{y \in Y} ||x - y||, \sup_{y \in Y} \inf_{x \in X} ||x - y||\}$.
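For finite point sets the sup/inf pairs in the definition become max/min, so the distance can be computed directly from the pairwise distance matrix. A small illustrative numpy sketch:

```python
import numpy as np

def hausdorff(X, Y):
    """Hausdorff distance between two finite point sets X (n, d), Y (m, d)."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    if X.ndim == 1:
        X = X[:, None]
    if Y.ndim == 1:
        Y = Y[:, None]
    # D[i, j] = ||x_i - y_j||
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # max over x of min over y, and max over y of min over x
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

Note that this treats the curves as unordered point sets: two curves tracing the same points in opposite directions get distance 0, which is exactly why the answers above reach for Fréchet distance when ordering matters.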
Similarity measures between curves?
Similarity is a quantity that reflects the strength of the relationship between two objects or two features. It usually ranges either from -1 to +1 or is normalized into 0 to 1. Then you need to calculate the distance between two features by one of the methods below: Simple Matching distance, Jaccard's distance, Ham...
Visualizing the calibration of predicted probability of a model
Your thinking is good. John Tukey recommended binning by halves: split the data into upper and lower halves, then split those halves, then split the extreme halves recursively. Compared to equal-width binning, this allows visual inspection of tail behavior without devoting too many graphical elements to the bulk of th...
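One way to read the halving scheme (this is an illustrative interpretation, not Tukey's exact procedure): put bin edges at the quantile levels 1/2, then 1/4 and 3/4, then 1/8 and 7/8, and so on, so that bins shrink in probability toward the tails.

```python
import numpy as np

def tukey_half_bins(x, depth=4):
    """Bin edges from recursive halving of the extreme halves.

    Split the data at the median, then repeatedly split only the two
    extreme pieces, giving finer resolution in the tails. Returns
    quantile-based bin edges.
    """
    probs = {0.0, 0.5, 1.0}
    p = 0.5
    for _ in range(depth - 1):
        p /= 2
        probs.update({p, 1 - p})          # split the two extreme halves again
    return np.quantile(x, sorted(probs))
```

With depth 3 on the integers 0..100, the edges land at the 0, 12.5, 25, 50, 75, 87.5 and 100 percent points: two wide central bins and progressively narrower tail bins.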
Visualizing the calibration of predicted probability of a model
Another option is isotonic regression. It is similar to whuber's answer except the bins are generated dynamically instead of by splitting in halves, with a requirement that outputs are non-decreasing. The primary usage of isotonic regression is to recalibrate your probabilities if they are shown to be poorly cali...
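The dynamic binning behind isotonic regression is the pool-adjacent-violators algorithm; here is a minimal illustrative numpy sketch (not library code). Each constant run in the fitted values is one dynamically chosen bin.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: monotone (non-decreasing) least-squares fit.

    y: observed outcomes, ordered by predicted probability. Whenever a
    block mean drops below its left neighbour, the two blocks are pooled
    and replaced by their weighted mean.
    """
    y = np.asarray(y, dtype=float)
    means, weights = [], []
    for v in y:
        means.append(v)
        weights.append(1.0)
        # merge while the last two blocks violate monotonicity
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            m = (means[-2] * weights[-2] + means[-1] * weights[-1]) / w
            means[-2:] = [m]
            weights[-2:] = [w]
    return np.repeat(means, np.array(weights, dtype=int))
```

For example, the outcome sequence [0, 1, 1, 0] is pooled into [0, 2/3, 2/3, 2/3]: the trailing 0 violates monotonicity, so the last three observations are merged into a single bin with mean 2/3.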
Visualizing the calibration of predicted probability of a model
You might also want to look at package "verification": http://cran.r-project.org/web/packages/verification/index.html There are plots in the vignette that might be useful: http://cran.r-project.org/web/packages/verification/vignettes/verification.pdf
Non-parametric test if two samples are drawn from the same distribution
The Kolmogorov-Smirnov test is the most common way to do this, but there are also some other options. The tests are based on the empirical cumulative distribution functions. The basic procedure is: Choose a way to measure the distance between the ECDFs. Since ECDFs are functions, the obvious candidates are the $L^p$ n...
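For the $L^\infty$ choice of distance, the recipe above gives exactly the two-sample KS statistic, which can be computed by evaluating both ECDFs on the pooled sample. An illustrative numpy sketch (in practice one would use a library routine that also returns a p-value):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample KS statistic: sup |F_x - F_y|, the L-infinity
    distance between the two empirical CDFs."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])        # the sup is attained at a data point
    Fx = np.searchsorted(x, grid, side="right") / len(x)
    Fy = np.searchsorted(y, grid, side="right") / len(y)
    return np.abs(Fx - Fy).max()
```

Completely separated samples give the maximum value 1, identical samples give 0.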
Why is pruning not needed for random forest trees?
Roughly speaking, some of the potential over-fitting that might happen in a single tree (which is a reason you do pruning generally) is mitigated by two things in a Random Forest: The fact that the samples used to train the individual trees are "bootstrapped". The fact that you have a multitude of random trees using r...
Why is pruning not needed for random forest trees?
A decision tree that is very deep or grown to full depth tends to learn the noise in the data; it overfits, leading to low bias but high variance. Pruning is a suitable approach used in decision trees to reduce overfitting. However, random forests generally give good performance with full depth. As random fore...
Standard errors for multiple regression coefficients?
When doing least squares estimation (assuming a normal random component) the regression parameter estimates are normally distributed with mean equal to the true regression parameter and covariance matrix $\Sigma = s^2\cdot(X^TX)^{-1}$ where $s^2$ is the residual variance and $X^TX$ is the design matrix. $X^T$ is the tr...
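A numpy sketch of the formula on simulated data (all names and the simulated design are illustrative): the standard errors are the square roots of the diagonal of $\Sigma = s^2 (X^TX)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
# Design matrix: intercept column plus two simulated predictors
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(size=n)       # unit-variance normal errors

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)                 # residual variance, df = n - p
Sigma = s2 * np.linalg.inv(X.T @ X)          # covariance matrix of beta_hat
se = np.sqrt(np.diag(Sigma))                 # standard errors of the coefficients
```

Statistical software reports exactly these values in the "Std. Error" column of a regression summary.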
How to use delta method for standard errors of marginal effects?
The delta method simply says that if an auxiliary variable can be represented as a function of normally distributed random variables, it is itself approximately normally distributed, with variance corresponding to how much the auxiliary varies with respect to the normal variables. (EDIT: a...
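A one-dimensional sketch with hypothetical numbers: if $\hat b$ is approximately $N(\beta, \mathrm{se}_b^2)$, the delta method gives $\mathrm{se}(g(\hat b)) \approx |g'(\hat b)|\,\mathrm{se}_b$. Here $g = \exp$, as when turning a logit coefficient into an odds ratio.

```python
import numpy as np

# Hypothetical coefficient and its standard error
b_hat, se_b = 0.8, 0.25
# g(b) = exp(b) has g'(b) = exp(b), so the delta-method SE of exp(b_hat) is:
se_g = np.exp(b_hat) * se_b

# Monte Carlo sanity check: simulate b, transform, compare spreads.
rng = np.random.default_rng(1)
draws = np.exp(rng.normal(b_hat, se_b, size=200_000))
mc_se = draws.std()   # close to se_g; the linearization is off by a few percent
```

The small discrepancy between `mc_se` and `se_g` is the cost of linearizing $\exp$; it shrinks as $\mathrm{se}_b$ does.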
What sense does it make to compare p-values to each other?
Many people would argue that a $p$-value can either be significant ($p< \alpha$) or not, and so it does not (ever) make sense to compare two $p$-values between each other. This is wrong; in some cases it does. In your particular case there is absolutely no doubt that you can directly compare the $p$-values. If the samp...
What sense does it make to compare p-values to each other?
Thanks to whoever just downvoted me, as I now have a completely different answer to this question. I have accordingly deleted my original answer as it is incorrect from this perspective. In the context of this question, which is only dealing with the question "was A or B a better discriminator in my study", we are deali...
What sense does it make to compare p-values to each other?
You get a difference in p, but it is unclear what that difference means (is it large, small, significant?). Maybe use bootstrapping: select (with replacement) from your data, redo your tests, compute the difference of p's (p_a - p_b), repeat 100-200 times, and check what fraction of your delta p's is < 0 (meaning p of A is bel...
What sense does it make to compare p-values to each other?
Added an answer as it was too long for a comment! Michelle has a good response, but the many comments show some common discussions that come up about p-values. The basic ideas are the following: 1) A smaller p-value doesn't mean a result is more or less significant. It just means that the chances of getting a result at...
Compute approximate quantiles for a stream of integers using moments?
You don't state this explicitly, but from your description of the problem it seems likely that you're after a high-biased set of quantiles (e.g., 50th, 90th, 95th and 99th percentiles). If that's the case, I've had a lot of success with the method described in "Effective Computation of Biased Quantiles over Data Stream...
Compute approximate quantiles for a stream of integers using moments?
There is a more recent and much simpler algorithm for this that provides very good estimates of the extreme quantiles. The basic idea is that smaller bins are used at the extremes in a way that both bounds the size of the data structure and guarantees higher accuracy for small or large $q$. The algorithm is available i...
Why is the null hypothesis always a point value rather than a range in hypothesis testing?
First, it is not always the case. There might be a composite null. Most standard tests have a simple null because in the framework of Neyman and Pearson the aim is to provide a decision rule that permits you to control the error of rejecting the null when it is true. To control this error you need to specify one distr...
Why is the null hypothesis always a point value rather than a range in hypothesis testing?
I don't think that the null hypothesis should always be something like correlation=0.5. At least in the problems which I have come across that wasn't the case. For example in information theoretic statistics the following problem is considered. Suppose that $X_1, X_2, \cdots, X_n$ are coming from an unknown distributio...
Why can't a single ReLU learn a ReLU?
There's a hint in your plots of the loss as a function of $w$. These plots have a "kink" near $w=0$: that's because on the left of 0, the gradient of the loss is vanishing to 0 (however, $w=0$ is a suboptimal solution because the loss is higher there than it is for $w=1$). Moreover, this plot shows that the loss functi...
Weighted Variance, one more time
Yes, you should expect both examples (unweighted vs weighted) to give you the same results. I have implemented the two algorithms from the Wikipedia article. This one works: If all of the $x_i$ are drawn from the same distribution and the integer weights $w_i$ indicate frequency of occurrence in the sample, then the u...
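The frequency-weight case can be checked numerically: with integer weights, the unbiased weighted variance (dividing by $\sum_i w_i - 1$) must equal the ordinary unbiased variance of the expanded sample. An illustrative numpy check:

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0])
w = np.array([3, 1, 2])                  # integer frequency weights
expanded = np.repeat(x, w)               # the sample the weights describe: [2,2,2,3,5,5]

# Unbiased weighted variance for frequency weights: divide by sum(w) - 1,
# exactly as for an ordinary sample of size sum(w).
mean_w = np.average(x, weights=w)
var_w = np.sum(w * (x - mean_w) ** 2) / (w.sum() - 1)

var_u = expanded.var(ddof=1)             # ordinary unbiased variance
```

The two results agree exactly. The agreement breaks down when the weights are "reliability" weights (non-integer, normalized), for which a different denominator is needed.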
What is the practical difference between association rules and decision trees in data mining?
Basically, Decision Trees are pure classification techniques. These techniques aim at labelling records of unknown class making use of their features. They basically map the set of record features $\mathcal{F} = \{F_1 , \dots, F_m \}$ (attributes, variables) into the class attribute $C$ (target variable), the object of...
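A toy illustration of that mapping from features to the class attribute; the rules, thresholds, and labels below are entirely made up, but they show how a fitted tree is just a nested sequence of feature tests ending in a class label.

```python
# A hand-rolled two-level decision tree: features (temp, humidity) -> class.
def classify(temp, humidity):
    if temp > 25:                       # first split on temperature
        if humidity > 70:               # second split on humidity
            return "no-play"
        return "play"
    return "play"

assert classify(30, 80) == "no-play"    # hot and humid
assert classify(30, 50) == "play"       # hot but dry
assert classify(20, 90) == "play"       # cool, humidity irrelevant
```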
11,228
What is the practical difference between association rules and decision trees in data mining?
"Association rules aim to find all rules above the given thresholds involving overlapping subsets of records, whereas decision trees find regions in space where most records belong to the same class. On the other hand, decision trees can miss many predictive rules found by association rules because they successively pa...
11,229
What is the practical difference between association rules and decision trees in data mining?
We may argue that both association rules and decision trees suggest a set of rules to the user and hence both are similar, but we must understand the theoretical difference between decision trees and association rules, and further how rules suggested by both are different in meaning or in use. Firstly, decision tree i...
11,230
Why is maximum likelihood estimation considered to be a frequentist technique
You apply a relatively narrow definition of frequentism and MLE - if we are a bit more generous and define
Frequentism: goal of consistency, (asymptotic) optimality, unbiasedness, and controlled error rates under repeated sampling, independent of the true parameters
MLE = point estimate + confidence intervals (CIs) t...
11,231
Why is maximum likelihood estimation considered to be a frequentist technique
Basically, for two reasons: Maximum likelihood is a point estimate of the model parameters; we Bayesians like posterior distributions. Maximum likelihood assumes no prior distribution; we Bayesians need our priors: they can be informative or uninformative, but they need to exist.
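A tiny worked contrast of the two points above, using a coin-flip example (the data and prior are illustrative): the MLE is a single number, while the Bayesian answer is a whole posterior, of which the mean is just one summary.

```python
from fractions import Fraction

n, k = 10, 7  # hypothetical data: 7 heads in 10 tosses

# Frequentist: maximum likelihood gives a single point estimate k/n.
p_mle = Fraction(k, n)

# Bayesian: with a uniform Beta(1, 1) prior the posterior is
# Beta(k + 1, n - k + 1); its mean (k + 1) / (n + 2) summarises it.
post_mean = Fraction(k + 1, n + 2)

assert p_mle == Fraction(7, 10)
assert post_mean == Fraction(2, 3)
```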
11,232
Why is Pearson parametric and Spearman non-parametric
The problem is that "nonparametric" really has two distinct meanings these days. The definition in Wikipedia applies to things like nonparametric curve fitting, e.g. via splines or local regression. The other meaning, which is older, is more along the lines of "distribution-free" -- that is, techniques that can be applie...
11,233
Why is Pearson parametric and Spearman non-parametric
I think the only reason why Pearson's correlation coefficient would be called parametric is because you can use it to estimate the parameters of the multivariate normal distribution. For instance, the bivariate normal distribution has 5 parameters: two means, two variances and the correlation coefficient. The latter can be...
11,234
Why is Pearson parametric and Spearman non-parametric
Simplest answer I think is that Spearman's rho test uses ordinal data (numbers that can be ranked but that don't tell you anything about the interval between them, e.g. 3 flavours of ice cream ranked 1, 2 and 3 tell you only which flavour was preferred, not by how much). Ordinal data cannot be used in pa...
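A quick numerical check of the relationship behind these answers, on synthetic data: Spearman's rho is exactly Pearson's correlation computed on the ranks of the data, which is why it only uses the ordinal information.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, rankdata

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = x ** 3 + rng.normal(scale=0.1, size=50)   # monotone but nonlinear relation

rho, _ = spearmanr(x, y)                       # Spearman's rho on raw data
r_on_ranks, _ = pearsonr(rankdata(x), rankdata(y))  # Pearson's r on the ranks
assert abs(rho - r_on_ranks) < 1e-8            # the two coincide
```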
11,235
Why does a Cumulative Distribution Function (CDF) uniquely define a distribution?
Let us recall some things. Let $(\Omega,A,P)$ be a probability space, $\Omega$ is our sample set, $A$ is our $\sigma$-algebra, and $P$ is a probability function defined on $A$. A random variable is a measurable function $X:\Omega \to \mathbb{R}$ i.e. $X^{-1}(S) \in A$ for any Lebesgue measurable subset in $\mathbb{R}$....
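As a concrete check of how the CDF pins down the probability of intervals, here is a small sketch using the standard normal distribution (the interval endpoints are arbitrary): $P(a < X \le b) = F(b) - F(a)$, recovered from the CDF alone, agrees with integrating the density.

```python
from scipy.stats import norm
from scipy.integrate import quad

a, b = -0.5, 1.25
# Probability of (a, b] from the CDF alone...
p_from_cdf = norm.cdf(b) - norm.cdf(a)
# ...agrees with integrating the density over the interval.
p_from_pdf, _ = quad(norm.pdf, a, b)
assert abs(p_from_cdf - p_from_pdf) < 1e-10
```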
11,236
Why does a Cumulative Distribution Function (CDF) uniquely define a distribution?
To answer the request for an example of two densities with the same integral (i.e. that have the same distribution function), consider these functions defined on the real numbers: $f(x) = 1$ when $x$ is an odd integer, $f(x) = e^{-x^2}$ elsewhere; and then $f_2(x) = 1$ when $x$ is an even integer, $f_2(x) = e^{-x^2}$ elsewhere ...
11,237
Why does a Cumulative Distribution Function (CDF) uniquely define a distribution?
I disagree with the statement, "the probability distribution function does not uniquely determine a probability measure", that you make in your opening question. It does uniquely determine it. Let $f_1,f_2:\mathbb{R}\to [0,\infty)$ be two probability density functions. If $$ \int_E f_1 = \int_E f_2 $$ for any measurabl...
11,238
Why aren't power or log transformations taught much in machine learning?
The book Applied Predictive Modeling by Kuhn and Johnson is a highly-regarded practical machine learning book with a large section on variable transformation including Box-Cox. The authors claim that many machine learning algorithms work better if the features have symmetric and unimodal distributions. Transforming the...
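A small demonstration of the point made by Kuhn and Johnson, on synthetic data: applying a Box-Cox transform (with the $\lambda$ chosen by maximum likelihood) to a heavily right-skewed feature makes it far more symmetric.

```python
import numpy as np
from scipy.stats import boxcox, skew

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=2000)  # strongly right-skewed feature

x_bc, lam = boxcox(x)   # scipy picks lambda by maximum likelihood
assert skew(x) > 1.0                      # raw feature is heavily skewed
assert abs(skew(x_bc)) < abs(skew(x))     # transformed feature is much more symmetric
```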
11,239
Why aren't power or log transformations taught much in machine learning?
Well from my own perspective, quite often I am interested in the predictive distribution of the response variable, rather than just the conditional mean, and in that case it is better to use a likelihood that more correctly represents the target distribution. For instance, I like to use kernelised linear models rather...
11,240
Why aren't power or log transformations taught much in machine learning?
Here are my afterthoughts. I think it's because ML largely deals with classification, and classification has no need to transform y (y is categorical). ML usually deals with large numbers of independent variables (e.g. thousands in NLP) and logistic regression doesn't require normality; I think that's why they don't us...
11,241
How to choose an optimal number of latent factors in non-negative matrix factorization?
To choose an optimal number of latent factors in non-negative matrix factorization, use cross-validation. As you wrote, the aim of NMF is to find low-dimensional $\mathbf W$ and $\mathbf H$ with all non-negative elements minimizing reconstruction error $\|\mathbf V-\mathbf W\mathbf H\|^2$. Imagine that we leave out one...
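A minimal numpy sketch of the leave-out idea, under simplifying assumptions (exact-rank non-negative data, plain multiplicative updates restricted to the observed entries via a binary mask): fit $\mathbf W\mathbf H$ on the observed cells only, then score the held-out cells.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 30, 20, 3
V = rng.random((m, k)) @ rng.random((k, n))   # non-negative, exactly rank 3
mask = rng.random((m, n)) > 0.1               # True = observed; ~10% held out
eps = 1e-9

W = rng.random((m, k)) + eps                  # random non-negative init
H = rng.random((k, n)) + eps

def masked_error(W, H):
    return np.linalg.norm((V - W @ H) * mask)

err_start = masked_error(W, H)
Vm = V * mask
for _ in range(200):
    # Multiplicative updates, evaluated on the observed entries only.
    H *= (W.T @ Vm) / (W.T @ ((W @ H) * mask) + eps)
    W *= (Vm @ H.T) / (((W @ H) * mask) @ H.T + eps)

assert masked_error(W, H) < err_start              # fit on observed entries improved
heldout_err = np.linalg.norm((V - W @ H) * ~mask)  # the CV score for this k
assert np.isfinite(heldout_err)
```

Repeating this for a range of $k$ and picking the $k$ with the lowest held-out error is the cross-validation recipe described above.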
11,242
How to choose an optimal number of latent factors in non-negative matrix factorization?
To my knowledge, there are two good criteria: 1) the cophenetic correlation coefficient and 2) comparing the residual sum of squares against randomized data for a set of ranks (maybe there is a name for that, but I don't remember). Cophenetic correlation coefficient: You repeat NMF several times per rank and you calculat...
11,243
How to choose an optimal number of latent factors in non-negative matrix factorization?
In the NMF factorization, the parameter $k$ (noted $r$ in most literature) is the rank of the approximation of $V$ and is chosen such that $k < \text{min}(m, n)$. The choice of the parameter determines the representation of your data $V$ in an over-complete basis composed of the columns of $W$; the $w_i \text{ , } i = ...
11,244
Boosting AND Bagging Trees (XGBoost, LightGBM)
Bagging: Take N random samples of x% of the samples and y% of the features. Instances are repeatedly sub-sampled in Bagging, but not features. (RandomForests, XGBoost and CatBoost do both):
Given dataset D of size N.
For m in n_models:
    Create new dataset D_i of size N by sampling with replacement from D.
    Tra...
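The sampling step of that pseudocode can be sketched in numpy as follows (dataset, model count, and feature fraction are all made-up values): each bootstrap dataset keeps N rows sampled with replacement, plus an optional random feature subset.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 1000, 10
X = rng.normal(size=(N, P))               # hypothetical dataset D of size N

n_models, feat_frac = 5, 0.6
datasets = []
for _ in range(n_models):
    rows = rng.integers(0, N, size=N)     # sample N rows WITH replacement
    feats = rng.choice(P, size=int(feat_frac * P), replace=False)  # feature subset
    datasets.append(X[np.ix_(rows, feats)])

for D_i in datasets:
    assert D_i.shape == (N, 6)            # same N, 60% of the features

# A bootstrap sample of size N contains ~63.2% of the unique rows on average.
uniq_frac = len(np.unique(rng.integers(0, N, size=N))) / N
assert 0.55 < uniq_frac < 0.72
```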
11,245
How to train LSTM model on multiple time series data?
Make the identity of the agent one of the features, and train on all data. Probably train on a mini-batch of eg 128 agents at a time: run through the time-series from start to finish for those 128 agents, then select a new mini-batch of agents. For each mini-batch, run a slice of say 50 timesteps, then backprop. Keep t...
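The data-preparation side of this recipe can be sketched in numpy (hypothetical agents and feature counts; the LSTM itself is omitted): slice each agent's series into fixed-length windows for truncated backprop, appending the agent's identity as an extra input feature.

```python
import numpy as np

def make_windows(series_by_agent, window=50):
    """Cut each agent's series into length-`window` slices for truncated BPTT,
    appending the agent id as one more input feature (a sketch)."""
    batches = []
    for agent_id, series in enumerate(series_by_agent):
        for start in range(0, len(series) - window + 1, window):
            chunk = series[start:start + window]              # (window, n_feat)
            ids = np.full((window, 1), agent_id, dtype=float) # agent identity feature
            batches.append(np.hstack([chunk, ids]))
    return np.stack(batches)

rng = np.random.default_rng(0)
agents = [rng.normal(size=(120, 3)) for _ in range(4)]  # 4 agents, 3 features each
batch = make_windows(agents, window=50)
assert batch.shape == (8, 50, 4)        # 2 windows per agent; 3 features + id
assert set(batch[:, 0, -1]) == {0.0, 1.0, 2.0, 3.0}
```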
11,246
Notation of estimators (tilde vs. hat)
Hats and tildes The convention in (my end of) applied statistics is that $\hat{\beta}$ is an estimate of the true parameter value $\beta$ and that $\tilde{\beta}$ is another, possibly competing estimate. Following the Wolfram example, these can both be distinguished from a statistic (function of the data) that als...
11,247
Data augmentation techniques for general datasets?
I understand this question as involving both feature construction and dealing with the wealth of features you already have + will construct, relative to your observations (N << P).
Feature Construction
Expanding upon @yasin.yazici's comment, some possible ways to augment the data would be:
PCA
Auto-encoding
Transform'...
11,248
Data augmentation techniques for general datasets?
I faced a similar problem wherein I wanted to augment unlabelled numeric data. I augmented the data in the following way: (Say I have a data set of size 100*10.) Create a list by randomly sampling values from {0,1}, such that the number of zeros is less than the number of 1s; say the proportion of 0s is 20% in this case....
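The masking recipe above can be sketched in a few lines of numpy (matrix size and zero proportion follow the example in the answer; the data itself is made up): each row is multiplied elementwise by a random 0/1 mask with roughly 20% zeros.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # hypothetical unlabelled numeric data

def augment(X, zero_prop=0.2):
    """Multiply each entry by a random 0/1 mask; ~zero_prop of entries zeroed."""
    mask = (rng.random(X.shape) > zero_prop).astype(float)
    return X * mask

X_aug = augment(X)
zeroed = np.mean(X_aug == 0.0)
assert X_aug.shape == X.shape
assert 0.15 < zeroed < 0.25      # about 20% of entries were knocked out
```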
11,249
Testing significance of peaks in spectral density
You should be aware that estimating power spectra using a periodogram is not recommended, and in fact has been bad practice since ~ 1896. It is an inconsistent estimator for anything less than millions of data samples (and even then ...), and generally biased. The exact same thing applies to using standard estimates of...
11,250
Testing significance of peaks in spectral density
Ronald Fisher proposed an exact test of the maximum periodogram coordinate in R.A. Fisher, Proc. R. Soc. A (1929) 125:54. The test is based on the g-statistic. Specifically, the null hypothesis of Gaussian white noise is rejected if g is significantly large, that is, if one of the values of $f(\omega_k)$ is significan...
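A self-contained sketch of Fisher's g-test (my own implementation from the 1929 formula, not a library routine): compute the periodogram, take $g = \max_k I(\omega_k) / \sum_k I(\omega_k)$, and use the exact null tail probability $P(g > x) = \sum_{j=1}^{\lfloor 1/x \rfloor} (-1)^{j-1} \binom{m}{j} (1 - jx)^{m-1}$, where $m$ is the number of ordinates used.

```python
import numpy as np
from math import comb, floor

def fisher_g_test(x):
    """Fisher's exact test for the largest periodogram ordinate (white-noise null)."""
    n = len(x)
    fft = np.fft.rfft(x - np.mean(x))
    # Ordinates at positive Fourier frequencies, excluding 0 and Nyquist.
    I = (np.abs(fft) ** 2)[1:(n - 1) // 2 + 1]
    g = I.max() / I.sum()
    m = len(I)
    p = sum((-1) ** (j - 1) * comb(m, j) * (1 - j * g) ** (m - 1)
            for j in range(1, floor(1 / g) + 1))
    return g, min(max(p, 0.0), 1.0)

t = np.arange(200)
g, p = fisher_g_test(np.sin(2 * np.pi * 10 * t / 200))  # pure sinusoid, 10 cycles
assert g > 0.99 and p < 1e-6   # the spectral peak is overwhelmingly significant
```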
11,251
Testing significance of peaks in spectral density
We have attempted to address this issue with a wavelet transform of a spectral-based test recently in this paper. Essentially, you need to consider the distribution of the periodogram ordinates, similarly to the article of Fisher mentioned in the earlier answers. Another paper, from Koen, is this. We have recently publis...
11,252
Testing significance of peaks in spectral density
Use the spectrum.test function in the ts.extend package You can conduct a "permutation spectrum test" on your data using the ts.extend package. This is a permutation-based variant of the classic Fisher test that looks at the maximum spectral intensity of the data and compares it to its null distribution under the null...
11,253
Autoencoders can't learn meaningful features
Debugging neural networks usually involves tweaking hyperparameters, visualizing the learned filters, and plotting important metrics. Could you share what hyperparameters you've been using? What's your batch size? What's your learning rate? What type of autoencoder are you using? Have you tried using a Denoising Au...
Autoencoders can't learn meaningful features
I don't have enough rep to comment, so I will put this into an answer. I don't know the exact reason, however: the pattern in the bottom-left region looks similar to your second example, and the pattern in the bottom-right corner looks very like your first example when inspected closely. The question is, how much variety is in y...
Rules for selecting convolutional neural network hyperparameters
To some degree, yes - a recent paper from Google researchers describes how to choose good Inception architectures. Inception nets achieve very high performance on a constrained parameter budget, so this is as good a place to start as any, and it's recent. Here's the link: Rethinking the Inception Architecture for Comp...
Rules for selecting convolutional neural network hyperparameters
I haven't come across any literature on choosing these hyper-parameters as a function of the problem specifications. But, it's my understanding that most are adopting Bayesian optimization methods to zero in on effective values. You specify a reasonable range, and by testing various combinations, you learn a model of h...
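Bayesian optimization itself needs a library, but the loop it improves on is plain random search over a declared range, which is easy to sketch. All names, ranges, and the objective below are purely illustrative stand-ins (a real objective would train the network and return validation accuracy):

```python
import random

# Hypothetical search space for a small CNN; ranges are purely illustrative.
space = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),
    "filters": lambda: random.choice([16, 32, 64, 128]),
    "kernel_size": lambda: random.choice([3, 5, 7]),
}

def evaluate(config):
    """Stand-in objective: a real one would train the network and
    return validation accuracy. Peaked near lr=1e-2, 64 filters, 3x3."""
    return (-abs(config["learning_rate"] - 1e-2)
            - abs(config["filters"] - 64) / 100
            - abs(config["kernel_size"] - 3))

random.seed(0)
trials = [{name: draw() for name, draw in space.items()} for _ in range(100)]
best = max(trials, key=evaluate)
print(best)
```

A Bayesian optimizer replaces the blind `random.seed`/draw loop with a surrogate model that proposes the next configuration based on the results so far.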
Is R viable for production (deployed) code
Yes it is. Look for example at this page for the wonderful headless Rserve R server (by R Core member Simon Urbanek), which lists these deployments:

Some projects using Rserve:
- The Dataverse Network Project
- Phenyx
- "J" interface
- Nexus BPM
- Taverna
...
Is R viable for production (deployed) code
Typically not, as R is an interpreted language, which on average is many times slower than equivalent compiled code. While converting your program to C, Fortran, or Java takes a significant investment, the code can literally run 10-100x faster than an equivalent R version. Additionally, R has very limited tools to man...
Is R viable for production (deployed) code
I believe (but this is based on anecdote) that R tends to be used more as a prototyping language by the companies you name above. R excels in the task of developing and testing multiple models quickly and effectively. However, it is not a good fit for personalisation tasks as these often need to take place as a user in...
Structural Equation Models (SEMs) versus Bayesian Networks (BNs)
As far as I can tell, Bayesian Networks do not claim to be able to estimate causal effects in non-directed acyclic graphs, whereas SEM does. That's a generalization in favor of SEM... if you believe it. An example of this might be measuring cognitive decline among people where cognition is a latent effect estimated usi...
Structural Equation Models (SEMs) versus Bayesian Networks (BNs)
I don't really understand this, but see here: Structural equation models and Bayesian networks appear so intimately connected that it could be easy to forget the differences. The structural equation model is an algebraic object. As long as the causal graph remains acyclic, algebraic manipulations are interpreted...
How to decompose a time series with multiple seasonal components?
The bats() and tbats() functions in R's forecast package can fit BATS and TBATS models to the data. The functions return lists with a class attribute of either "bats" or "tbats". One of the elements of this list is a time series of state vectors, $x(t)$, for each time, $t$. See http://robjhyndman.com/papers/complex-seasonality/...
How to decompose a time series with multiple seasonal components?
R's forecast package now has a function mstl() to handle multiple seasonal time series decomposition. This page has more details on how to use it: https://pkg.robjhyndman.com/forecast/reference/mstl.html
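mstl() is R-only, but the essence of a multi-seasonal decomposition, separating components at several known periods, can be sketched in Python with a harmonic regression (a simplified analogue for illustration, not what mstl() actually does internally):

```python
import numpy as np

# Toy series with daily (period 24) and weekly (period 168) cycles plus trend.
rng = np.random.default_rng(0)
t = np.arange(24 * 7 * 4, dtype=float)  # four weeks of hourly data
y = (0.01 * t + 3 * np.sin(2 * np.pi * t / 24)
     + 1.5 * np.sin(2 * np.pi * t / 168)
     + 0.3 * rng.standard_normal(t.size))

# Regression analogue of a multi-seasonal decomposition: one sine/cosine
# pair per period plus a linear trend, fitted jointly by least squares.
def harmonics(t, period):
    return np.column_stack([np.sin(2 * np.pi * t / period),
                            np.cos(2 * np.pi * t / period)])

X = np.column_stack([np.ones_like(t), t, harmonics(t, 24), harmonics(t, 168)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
trend = X[:, :2] @ beta[:2]
daily = X[:, 2:4] @ beta[2:4]
weekly = X[:, 4:6] @ beta[4:6]
print(beta[2], beta[4])  # recovered sine amplitudes near 3 and 1.5
```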
How to decompose a time series with multiple seasonal components?
The Facebook prophet package supports multiple seasonalities. Yearly, weekly, and daily seasonalities are built in, but custom seasonalities can be specified. Here is a custom monthly seasonality:

df <- ...      # data to build model on or decompose
future <- ...  # data to make forecasts on
m <- prophet(weekly.seasonalit...
Is Joel Spolsky's "Hunting of the Snark" post valid statistical content analysis?
Percentage agreement (with tolerance = 0): 0.0143
Percentage agreement (with tolerance = 1): 11.8
Krippendorff's alpha: 0.1529467

These agreement measures state that there is virtually no categorical agreement - each coder has his or her own internal cutoff point for judging comments as "friendly" or "unfriendly". If w...
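Percentage agreement with a tolerance is simple enough to compute by hand; here is a small Python sketch with made-up coder scores (Krippendorff's alpha, by contrast, is best left to a dedicated library):

```python
def percentage_agreement(ratings_a, ratings_b, tolerance=0):
    """Share of items on which two coders' scores differ by at most
    `tolerance` points (hypothetical helper)."""
    assert len(ratings_a) == len(ratings_b)
    hits = sum(abs(a - b) <= tolerance for a, b in zip(ratings_a, ratings_b))
    return hits / len(ratings_a)

# Toy scores from two coders rating the same comments on a 0-10 scale.
coder1 = [2, 5, 7, 3, 9, 4, 6]
coder2 = [3, 5, 4, 3, 6, 5, 8]
print(percentage_agreement(coder1, coder2))               # exact agreement
print(percentage_agreement(coder1, coder2, tolerance=1))  # within one point
```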
Is Joel Spolsky's "Hunting of the Snark" post valid statistical content analysis?
Reliability of scores is frequently interpreted in terms of Classical Test Theory. Here one has a true score, X, but what you observe at any particular outcome is not only the true score, but the true score with some error (i.e. Observed = X + error). In theory, by taking multiple observed measures of the same underlyi...
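The Observed = X + error model is easy to simulate: averaging k independent measurements shrinks the error standard deviation by a factor of sqrt(k), which is the mechanism behind reliability gains from multiple measures (the numbers below are made up):

```python
import numpy as np

# Classical Test Theory sketch: Observed = true score + error.
rng = np.random.default_rng(0)
true_score = 50.0
observed = true_score + rng.normal(0, 10, size=(10000, 16))  # error sd = 10

single = observed[:, 0]          # one observed measure per "person"
averaged = observed.mean(axis=1)  # mean of 16 observed measures
# Averaging 16 measures cuts the error sd by sqrt(16) = 4.
print(single.std(), averaged.std())  # roughly 10 vs 2.5
```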
Correlation between two time series
Macro's point is correct: the proper way to compare relationships between time series is the cross-correlation function (assuming stationarity). Having the same length is not essential. The cross-correlation at lag 0 just computes a correlation, like doing the Pearson correlation estimate pairing the data at the...
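The lag-0 point of the cross-correlation function is exactly the Pearson correlation of the paired data; shifting one series before correlating gives the other lags. A small numpy sketch (the helper is hypothetical, not from any package):

```python
import numpy as np

def cross_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag] (hypothetical helper)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 3) + 0.1 * rng.standard_normal(500)  # y lags x by 3 steps

# Lag 0 is just the ordinary Pearson correlation of the paired data;
# scanning over lags reveals the true lead-lag relationship at lag 3.
ccf = {k: cross_correlation(x, y, k) for k in range(-5, 6)}
print(max(ccf, key=lambda k: abs(ccf[k])))  # strongest correlation at lag 3
```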
Correlation between two time series
You might want to look at a similar question and my answer, Correlating volume timeseries, which suggests that you can compute cross-correlations BUT testing them is a horse of a different color (an equine of a different hue) due to autoregressive or deterministic structure within either series.
Correlation between two time series
There is some interesting stuff here https://stackoverflow.com/questions/3949226/calculating-pearson-correlation-and-significance-in-python This was actually what I needed. Simple to implement and explain.
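For reference, the scipy route discussed in that thread boils down to a one-liner: scipy.stats.pearsonr returns both the correlation coefficient and a two-sided p-value (toy data below):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = 2 * x + rng.standard_normal(100)  # linearly related, with noise

# pearsonr returns (correlation coefficient, two-sided p-value)
r, p = pearsonr(x, y)
print(r, p)  # strong positive correlation, tiny p-value
```

Note the earlier caveat still applies: for autocorrelated series, the p-value from an ordinary Pearson test is not trustworthy.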
What is the architecture of a stacked convolutional autoencoder?
I am currently exploring stacked convolutional autoencoders. I will try to answer some of your questions to the best of my knowledge. Mind you, I might be wrong, so take it with a grain of salt. Yes, you have to "reverse" pool and then convolve with a set of filters to recover your output image. A standard neural netw...
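The shape bookkeeping of pooling and "reverse" pooling can be seen in a tiny numpy sketch. Nearest-neighbour upsampling is just one possible scheme here; switch-based unpooling, which places values back at the remembered argmax positions, is another:

```python
import numpy as np

def max_pool_2x2(a):
    """2x2 max pooling on a square array with even side lengths."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def unpool_nearest(a):
    """A simple 'reverse pool': upsample by repeating each value 2x2."""
    return a.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool_2x2(x)        # (4, 4) -> (2, 2)
restored = unpool_nearest(pooled)  # (2, 2) -> (4, 4)
print(pooled.shape, restored.shape)
```

In a convolutional autoencoder the decoder would follow this upsampling with a convolution to recover the output image.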
What is the architecture of a stacked convolutional autoencoder?
I have also been searching for a fully explained model of stacked convolutional autoencoders. I came across three different architectures. I am still studying them, and I thought these might help others who are also starting to explore CAEs. Any further references to papers or implementations would greatly help. The on...
What is the architecture of a stacked convolutional autoencoder?
I don't think the layer-wise training method is correct. For example, the architecture of a convolutional auto-encoder is: input->conv->max_pool->de_max_pool->de_conv->output. This is an auto-encoder, and should be trained with the entire architecture. Furthermore, there is no strict criterion for whether one convolutional au...
Understanding Simpson's paradox: Andrew Gelman's example with regressing income on sex and height
I'm not totally sure of your question, but I can remark on his claims and your confusion in the example model. Andrew is not quite clear on whether scientific interest lies in the height-adjusted sex-income association or the sex-adjusted height-income association. In a causal-model framework, sex causes height but height does no...
Understanding Simpson's paradox: Andrew Gelman's example with regressing income on sex and height
"why compare a man and a woman who are both 66 inches tall, for example? That would be a comparison of a short man with a tall woman" The model assumes that income depends on gender and height. However, the way in which height generates higher income may not be the same for men and women. Women may be considered tall "...
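The adjustment at issue can be simulated: if income depends only on sex, and sex also raises height, then height looks predictive of income marginally, but only because it proxies sex, and the "effect" vanishes once sex enters the model (all coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
sex = rng.integers(0, 2, n)                    # 1 = male, 0 = female
height = 64 + 5 * sex + rng.normal(0, 2, n)    # sex raises height
income = 30 + 10 * sex + rng.normal(0, 3, n)   # income depends on sex only

def ols(X, y):
    """Least-squares fit with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Marginal model: height looks predictive because it proxies sex.
b_marginal = ols(height, income)
# Adjusted model: conditioning on sex removes the height "effect".
b_adjusted = ols(np.column_stack([height, sex]), income)
print(b_marginal[1], b_adjusted[1])  # height slope shrinks toward zero
```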
Understanding Simpson's paradox: Andrew Gelman's example with regressing income on sex and height
Are you saying (in plainer words) that the typical gender debate, claiming men have more chances than women because their income is p% higher, would be paradoxically biased? Maybe that's the point. We tend to see things as they appear and not to analyze the underlying implications. To go above Simpson's paradox we wou...
Did I just invent a Bayesian method for analysis of ROC curves?
First off, there is no accepted way to "analyze" a ROC curve: it is merely a graphic that portrays the predictive ability of a classification model. You can certainly summarize a ROC curve using a c-statistic or the AUC, but calculating confidence intervals and performing inference using $c$-statistics is well understo...
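The c-statistic mentioned here has a simple rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counting half), which is directly computable (toy scores below):

```python
def c_statistic(scores_pos, scores_neg):
    """AUC as the probability that a random positive outranks a random
    negative, ties counting half -- the c-statistic."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy classifier scores for diseased (positive) and healthy (negative) cases.
pos = [0.9, 0.8, 0.7, 0.6, 0.55]
neg = [0.5, 0.4, 0.6, 0.3, 0.2]
print(c_statistic(pos, neg))
```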
Did I just invent a Bayesian method for analysis of ROC curves?
This post seems similar to an existing paper. I could be misunderstanding the nuances though. Hellmich, Martin, et al. "A Bayesian approach to a general regression model for ROC curves." Medical Decision Making 18.4 (1998): 436-443.
Is a log transformation a valid technique for t-testing non-normal data?
It is common to try some kind of transformation to normality (using e.g. logarithms, square roots, ...) when faced with data that isn't normal. While the logarithm yields good results for skewed data reasonably often, there is no guarantee that it will work in this particular case. One should also bear @...
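A quick illustration of the transform-then-test route with simulated log-normal data (on the log scale the samples are exactly normal by construction, so the t-test there compares means of the logs, i.e. geometric means of the originals):

```python
import numpy as np
from scipy import stats

# Two right-skewed (log-normal) samples; the t-test's normality
# assumption is restored on the log scale by construction.
rng = np.random.default_rng(0)
a = rng.lognormal(mean=0.0, sigma=0.5, size=60)
b = rng.lognormal(mean=0.4, sigma=0.5, size=60)

t_raw, p_raw = stats.ttest_ind(a, b)                  # raw scale
t_log, p_log = stats.ttest_ind(np.log(a), np.log(b))  # log scale
print(p_raw, p_log)
```

With real data the log transform may not fully restore normality, which is the caveat above: check the transformed data before trusting the test.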
Sites for predictive modeling competitions
Isabelle Guyon (working with many colleagues) has organised a series of machine learning challenges, see the website for details of previous challenges. The competitions are usually part of the programme of a conference, but attendance at the event is optional, and they are a good test of the tools in ones toolbox!
Sites for predictive modeling competitions
Isabelle Guyon (working with many colleagues) has organised a series of machine learning challenges, see the website for details of previous challenges. The competitions are usually part of the progr
Sites for predictive modeling competitions Isabelle Guyon (working with many colleagues) has organised a series of machine learning challenges, see the website for details of previous challenges. The competitions are usually part of the programme of a conference, but attendance at the event is optional, and they are a...
Sites for predictive modeling competitions Isabelle Guyon (working with many colleagues) has organised a series of machine learning challenges, see the website for details of previous challenges. The competitions are usually part of the progr
11,280
Sites for predictive modeling competitions
Here are some nice datasets: http://archive.ics.uci.edu/ml/ Update: The question has changed since I gave this answer.
Sites for predictive modeling competitions
Here are some nice datasets: http://archive.ics.uci.edu/ml/ Update: The question has changed since I gave this answer.
Sites for predictive modeling competitions Here are some nice datasets: http://archive.ics.uci.edu/ml/ Update: The question has changed since I gave this answer.
Sites for predictive modeling competitions Here are some nice datasets: http://archive.ics.uci.edu/ml/ Update: The question has changed since I gave this answer.
11,281
Sites for predictive modeling competitions
Adding Numerai! http://www.numer.ai Numerai is an AI competition where you model the underlying fund's data - in real time - and make predictions. Download the fund's data - protected by homomorphic encryption. Upload predictions and get featured on the leaderboard. You may even get assigned profits.
Sites for predictive modeling competitions
Adding Numerai! http://www.numer.ai Numerai is an AI competition where you model the underlying fund's data - in real time - and make predictions. Download the fund's data - protected by homomorphic e
Sites for predictive modeling competitions Adding Numerai! http://www.numer.ai Numerai is an AI competition where you model the underlying fund's data - in real time - and make predictions. Download the fund's data - protected by homomorphic encryption. Upload predictions and get featured on the leaderboard. You may ev...
Sites for predictive modeling competitions Adding Numerai! http://www.numer.ai Numerai is an AI competition where you model the underlying fund's data - in real time - and make predictions. Download the fund's data - protected by homomorphic e
11,282
Bonferroni or Tukey? When does the number of comparisons become large?
In addition to the useful link mentioned in the comments by @schenectady. I would also add the point that Bonferroni correction applies to a broader class of problems. As far as I'm aware Tukey's HSD is only applied to situations where you want to examine all possible pairwise comparisons, whereas Bonferroni correctio...
Bonferroni or Tukey? When does the number of comparisons become large?
In addition to the useful link mentioned in the comments by @schenectady. I would also add the point that Bonferroni correction applies to a broader class of problems. As far as I'm aware Tukey's HSD
Bonferroni or Tukey? When does the number of comparisons become large? In addition to the useful link mentioned in the comments by @schenectady. I would also add the point that Bonferroni correction applies to a broader class of problems. As far as I'm aware Tukey's HSD is only applied to situations where you want to ...
Bonferroni or Tukey? When does the number of comparisons become large? In addition to the useful link mentioned in the comments by @schenectady. I would also add the point that Bonferroni correction applies to a broader class of problems. As far as I'm aware Tukey's HSD
11,283
Unbiased estimation of covariance matrix for multiply censored data
I have not full internalized the issue of matrix interference but here is one approach. Let: $Y$ be a vector that represents the concentration of all the target compounds in the undiluted sample. $Z$ be the corresponding vector in the diluted sample. $d$ be the dilution factor i.e., the sample is diluted $d$:1. Our mod...
Unbiased estimation of covariance matrix for multiply censored data
I have not full internalized the issue of matrix interference but here is one approach. Let: $Y$ be a vector that represents the concentration of all the target compounds in the undiluted sample. $Z$
Unbiased estimation of covariance matrix for multiply censored data I have not full internalized the issue of matrix interference but here is one approach. Let: $Y$ be a vector that represents the concentration of all the target compounds in the undiluted sample. $Z$ be the corresponding vector in the diluted sample. $...
Unbiased estimation of covariance matrix for multiply censored data I have not full internalized the issue of matrix interference but here is one approach. Let: $Y$ be a vector that represents the concentration of all the target compounds in the undiluted sample. $Z$
11,284
Unbiased estimation of covariance matrix for multiply censored data
Another more computationally efficient option would be to fit the covariance matrix by moment matching using a model that has been called the "dichomized Gaussian", really just a Gaussian copula model. A recent paper from Macke et al 2010 describes a closed form procedure for fitting this model which involves only th...
Unbiased estimation of covariance matrix for multiply censored data
Another more computationally efficient option would be to fit the covariance matrix by moment matching using a model that has been called the "dichomized Gaussian", really just a Gaussian copula model
Unbiased estimation of covariance matrix for multiply censored data Another more computationally efficient option would be to fit the covariance matrix by moment matching using a model that has been called the "dichomized Gaussian", really just a Gaussian copula model. A recent paper from Macke et al 2010 describes a...
Unbiased estimation of covariance matrix for multiply censored data Another more computationally efficient option would be to fit the covariance matrix by moment matching using a model that has been called the "dichomized Gaussian", really just a Gaussian copula model
11,285
Unbiased estimation of covariance matrix for multiply censored data
How many compounds are in your sample? (Or, how big is the covariance matrix in question?). Alan Genz has some very nice code in a variety of languages (R, Matlab, Fortran; see here) for computing integrals of multivariate normal densities over hyper-rectangles (i.e., the kinds of integrals you need to evaluate the ...
Unbiased estimation of covariance matrix for multiply censored data
How many compounds are in your sample? (Or, how big is the covariance matrix in question?). Alan Genz has some very nice code in a variety of languages (R, Matlab, Fortran; see here) for computing
Unbiased estimation of covariance matrix for multiply censored data How many compounds are in your sample? (Or, how big is the covariance matrix in question?). Alan Genz has some very nice code in a variety of languages (R, Matlab, Fortran; see here) for computing integrals of multivariate normal densities over hype...
Unbiased estimation of covariance matrix for multiply censored data How many compounds are in your sample? (Or, how big is the covariance matrix in question?). Alan Genz has some very nice code in a variety of languages (R, Matlab, Fortran; see here) for computing
11,286
Two ways of using bootstrap to estimate the confidence interval of coefficients in regression
If the response-predictor pairs have been obtained from a population by random sample, it is safe to use case/random-x/your-first resampling scheme. If predictors were controlled for, or the values of the predictors were set by the experimenter, you may consider using residual/model-based/fixed-x/your-second resampling...
Two ways of using bootstrap to estimate the confidence interval of coefficients in regression
If the response-predictor pairs have been obtained from a population by random sample, it is safe to use case/random-x/your-first resampling scheme. If predictors were controlled for, or the values of
Two ways of using bootstrap to estimate the confidence interval of coefficients in regression If the response-predictor pairs have been obtained from a population by random sample, it is safe to use case/random-x/your-first resampling scheme. If predictors were controlled for, or the values of the predictors were set b...
Two ways of using bootstrap to estimate the confidence interval of coefficients in regression If the response-predictor pairs have been obtained from a population by random sample, it is safe to use case/random-x/your-first resampling scheme. If predictors were controlled for, or the values of
11,287
What is the probability that $n$ random points in $d$ dimensions are linearly separable?
Assuming no duplicates exist in the data. If $n\leq d+1$, the probability is $\text{Pr}=1$. For other combinations of $(n,d)$, see the following plot: I generated this plot simulating input and output data as specified in the OP. Linear separability was defined as failure of convergence in a logistic regression model,...
What is the probability that $n$ random points in $d$ dimensions are linearly separable?
Assuming no duplicates exist in the data. If $n\leq d+1$, the probability is $\text{Pr}=1$. For other combinations of $(n,d)$, see the following plot: I generated this plot simulating input and outpu
What is the probability that $n$ random points in $d$ dimensions are linearly separable? Assuming no duplicates exist in the data. If $n\leq d+1$, the probability is $\text{Pr}=1$. For other combinations of $(n,d)$, see the following plot: I generated this plot simulating input and output data as specified in the OP. ...
What is the probability that $n$ random points in $d$ dimensions are linearly separable? Assuming no duplicates exist in the data. If $n\leq d+1$, the probability is $\text{Pr}=1$. For other combinations of $(n,d)$, see the following plot: I generated this plot simulating input and outpu
11,288
What is the probability that $n$ random points in $d$ dimensions are linearly separable?
This is related to Cover's theorem. A nice summary by Emin Orhan is given here. Ps: I would post this in a comment but don't have enough reputation.
What is the probability that $n$ random points in $d$ dimensions are linearly separable?
This is related to Cover's theorem. A nice summary by Emin Orhan is given here. Ps: I would post this in a comment but don't have enough reputation.
What is the probability that $n$ random points in $d$ dimensions are linearly separable? This is related to Cover's theorem. A nice summary by Emin Orhan is given here. Ps: I would post this in a comment but don't have enough reputation.
What is the probability that $n$ random points in $d$ dimensions are linearly separable? This is related to Cover's theorem. A nice summary by Emin Orhan is given here. Ps: I would post this in a comment but don't have enough reputation.
11,289
Fitting custom distributions by MLE
This answer assumes $\mu$ is known. One very flexible way to get MLE's in R is to use STAN via rstan. STAN has a reputation for being an MCMC tool, but it also can estimate parameters by variational inference or MAP. And you're free to not specify the priors. In this case, what you're doing is very similar to their hur...
Fitting custom distributions by MLE
This answer assumes $\mu$ is known. One very flexible way to get MLE's in R is to use STAN via rstan. STAN has a reputation for being an MCMC tool, but it also can estimate parameters by variational i
Fitting custom distributions by MLE This answer assumes $\mu$ is known. One very flexible way to get MLE's in R is to use STAN via rstan. STAN has a reputation for being an MCMC tool, but it also can estimate parameters by variational inference or MAP. And you're free to not specify the priors. In this case, what you'r...
Fitting custom distributions by MLE This answer assumes $\mu$ is known. One very flexible way to get MLE's in R is to use STAN via rstan. STAN has a reputation for being an MCMC tool, but it also can estimate parameters by variational i
11,290
Fitting custom distributions by MLE
My STANswer is so complex that it's just begging for something to go wrong. Here's a simpler way: do all of your inference conditional on the (known) facts of whether each datum exceeds 0 and whether each datum exceeds $\mu$. In other words, reduce the data to: The set of observations $S_1 \equiv \{y_i: 0<x_i<\mu\}$. ...
Fitting custom distributions by MLE
My STANswer is so complex that it's just begging for something to go wrong. Here's a simpler way: do all of your inference conditional on the (known) facts of whether each datum exceeds 0 and whether
Fitting custom distributions by MLE My STANswer is so complex that it's just begging for something to go wrong. Here's a simpler way: do all of your inference conditional on the (known) facts of whether each datum exceeds 0 and whether each datum exceeds $\mu$. In other words, reduce the data to: The set of observatio...
Fitting custom distributions by MLE My STANswer is so complex that it's just begging for something to go wrong. Here's a simpler way: do all of your inference conditional on the (known) facts of whether each datum exceeds 0 and whether
11,291
Do you have a global vision on those analysis techniques?
In terms of batch versus on-line , my experience tells me that sometimes you combine both. What I mean is that you let the heavy-lifting i.e compute intensive stuff relating to model formulation be done off-line and then employ quick/adaptive procedures to use these models. We have found that "new data" can be used in ...
Do you have a global vision on those analysis techniques?
In terms of batch versus on-line , my experience tells me that sometimes you combine both. What I mean is that you let the heavy-lifting i.e compute intensive stuff relating to model formulation be do
Do you have a global vision on those analysis techniques? In terms of batch versus on-line , my experience tells me that sometimes you combine both. What I mean is that you let the heavy-lifting i.e compute intensive stuff relating to model formulation be done off-line and then employ quick/adaptive procedures to use t...
Do you have a global vision on those analysis techniques? In terms of batch versus on-line , my experience tells me that sometimes you combine both. What I mean is that you let the heavy-lifting i.e compute intensive stuff relating to model formulation be do
11,292
Do you have a global vision on those analysis techniques?
Breiman address this issue in "Statistical Modeling: Two Cultures". A first response to an excellent question.
Do you have a global vision on those analysis techniques?
Breiman address this issue in "Statistical Modeling: Two Cultures". A first response to an excellent question.
Do you have a global vision on those analysis techniques? Breiman address this issue in "Statistical Modeling: Two Cultures". A first response to an excellent question.
Do you have a global vision on those analysis techniques? Breiman address this issue in "Statistical Modeling: Two Cultures". A first response to an excellent question.
11,293
Do you have a global vision on those analysis techniques?
I suspect the answer to this question is something along the lines of "there is no free lunch." Perhaps the reason statisticians, computer scientists, and electrical engineers have developed different algorithms is that they're interest in solving different sorts of problems.
Do you have a global vision on those analysis techniques?
I suspect the answer to this question is something along the lines of "there is no free lunch." Perhaps the reason statisticians, computer scientists, and electrical engineers have developed different
Do you have a global vision on those analysis techniques? I suspect the answer to this question is something along the lines of "there is no free lunch." Perhaps the reason statisticians, computer scientists, and electrical engineers have developed different algorithms is that they're interest in solving different sort...
Do you have a global vision on those analysis techniques? I suspect the answer to this question is something along the lines of "there is no free lunch." Perhaps the reason statisticians, computer scientists, and electrical engineers have developed different
11,294
Do you have a global vision on those analysis techniques?
I would say that these three group you indicated are indeed only two groups: Statistics Machine learning, artificial intelligence and pattern recognition. All the branches related to signal filtering are based on two aspects: feature extraction (wavelets, Gabor and Fourier) which belongs to pattern recognition and Di...
Do you have a global vision on those analysis techniques?
I would say that these three group you indicated are indeed only two groups: Statistics Machine learning, artificial intelligence and pattern recognition. All the branches related to signal filterin
Do you have a global vision on those analysis techniques? I would say that these three group you indicated are indeed only two groups: Statistics Machine learning, artificial intelligence and pattern recognition. All the branches related to signal filtering are based on two aspects: feature extraction (wavelets, Gabo...
Do you have a global vision on those analysis techniques? I would say that these three group you indicated are indeed only two groups: Statistics Machine learning, artificial intelligence and pattern recognition. All the branches related to signal filterin
11,295
Random matrices with constraints on row and column length
As @cardinal said in a comment: Actually, after a little thought, I think you algorithm is exactly the Sinkhorn-Knopp algorithm with a very minor modification. Let $X$ be your original matrix and let $Y$ be a matrix of the same size such that $Y_{ij}=X^2_{ij}$. Then, your algorithm is equivalent to applying Sinkhorn-K...
Random matrices with constraints on row and column length
As @cardinal said in a comment: Actually, after a little thought, I think you algorithm is exactly the Sinkhorn-Knopp algorithm with a very minor modification. Let $X$ be your original matrix and let
Random matrices with constraints on row and column length As @cardinal said in a comment: Actually, after a little thought, I think you algorithm is exactly the Sinkhorn-Knopp algorithm with a very minor modification. Let $X$ be your original matrix and let $Y$ be a matrix of the same size such that $Y_{ij}=X^2_{ij}$....
Random matrices with constraints on row and column length As @cardinal said in a comment: Actually, after a little thought, I think you algorithm is exactly the Sinkhorn-Knopp algorithm with a very minor modification. Let $X$ be your original matrix and let
11,296
The cross validation (CV) and the generalized cross validation (GCV) statistics
I believe the comments are pointing at the answer, but not stating it bluntly. So I'll be blunt. The V formula cited here is specific to linear ridge regression. They don't say it is the same as PRESS, they say it is a rotation-invariant version of PRESS. The "rotation-invariant" part is what makes this generalized....
The cross validation (CV) and the generalized cross validation (GCV) statistics
I believe the comments are pointing at the answer, but not stating it bluntly. So I'll be blunt. The V formula cited here is specific to linear ridge regression. They don't say it is the same as PRE
The cross validation (CV) and the generalized cross validation (GCV) statistics I believe the comments are pointing at the answer, but not stating it bluntly. So I'll be blunt. The V formula cited here is specific to linear ridge regression. They don't say it is the same as PRESS, they say it is a rotation-invariant ...
The cross validation (CV) and the generalized cross validation (GCV) statistics I believe the comments are pointing at the answer, but not stating it bluntly. So I'll be blunt. The V formula cited here is specific to linear ridge regression. They don't say it is the same as PRE
11,297
When are zero-correlation mixed models theoretically sound?
The answer to this question turns out to be rather definitional. If one shifted the coordinates of the independent variables of a ZCP model and allowed correlations to develop in an unconstrained manner, predictions would not change, because linear mixed effects models with unconstrained correlations are translation i...
When are zero-correlation mixed models theoretically sound?
The answer to this question turns out to be rather definitional. If one shifted the coordinates of the independent variables of a ZCP model and allowed correlations to develop in an unconstrained man
When are zero-correlation mixed models theoretically sound? The answer to this question turns out to be rather definitional. If one shifted the coordinates of the independent variables of a ZCP model and allowed correlations to develop in an unconstrained manner, predictions would not change, because linear mixed effe...
When are zero-correlation mixed models theoretically sound? The answer to this question turns out to be rather definitional. If one shifted the coordinates of the independent variables of a ZCP model and allowed correlations to develop in an unconstrained man
11,298
State of art streaming learning
A rigorous survey of multiple algorithms similar to the Delgado paper you linked is not available as far as I know, but there have been efforts to gather results for families of algorithms. Here are some sources I find useful (disclaimer: I publish in the area, so it's likely I'm biased in my selection): A survey on E...
State of art streaming learning
A rigorous survey of multiple algorithms similar to the Delgado paper you linked is not available as far as I know, but there have been efforts to gather results for families of algorithms. Here are s
State of art streaming learning A rigorous survey of multiple algorithms similar to the Delgado paper you linked is not available as far as I know, but there have been efforts to gather results for families of algorithms. Here are some sources I find useful (disclaimer: I publish in the area, so it's likely I'm biased ...
State of art streaming learning A rigorous survey of multiple algorithms similar to the Delgado paper you linked is not available as far as I know, but there have been efforts to gather results for families of algorithms. Here are s
11,299
How to create a multivariate Brownian Bridge?
As you already pointed out in the comments, the question reduces to simulating a Brownian sheet. This can be done by generalizing simulation of Brownian motion in a straightforward way. To simulating the Brownian motion, one can take an i.i.d. mean-0 variance-1 time series $W_i$, $i = 1, 2, \cdots$, and construct the n...
How to create a multivariate Brownian Bridge?
As you already pointed out in the comments, the question reduces to simulating a Brownian sheet. This can be done by generalizing simulation of Brownian motion in a straightforward way. To simulating
How to create a multivariate Brownian Bridge? As you already pointed out in the comments, the question reduces to simulating a Brownian sheet. This can be done by generalizing simulation of Brownian motion in a straightforward way. To simulating the Brownian motion, one can take an i.i.d. mean-0 variance-1 time series ...
How to create a multivariate Brownian Bridge? As you already pointed out in the comments, the question reduces to simulating a Brownian sheet. This can be done by generalizing simulation of Brownian motion in a straightforward way. To simulating
11,300
Wavelet-domain Gaussian processes: what is the covariance?
The driving process, white noise η(t), is independent of the choice of basis. In a CWT (unlike DWT jumping in octaves) there is some redundancy, narrow wavebands do overlap. The "feature" being tested for significance is a variance (power) observed in a narrow frequency over a short time. This clearly does depend mat...
Wavelet-domain Gaussian processes: what is the covariance?
The driving process, white noise η(t), is independent of the choice of basis. In a CWT (unlike DWT jumping in octaves) there is some redundancy, narrow wavebands do overlap. The "feature" being teste
Wavelet-domain Gaussian processes: what is the covariance? The driving process, white noise η(t), is independent of the choice of basis. In a CWT (unlike DWT jumping in octaves) there is some redundancy, narrow wavebands do overlap. The "feature" being tested for significance is a variance (power) observed in a narrow...
Wavelet-domain Gaussian processes: what is the covariance? The driving process, white noise η(t), is independent of the choice of basis. In a CWT (unlike DWT jumping in octaves) there is some redundancy, narrow wavebands do overlap. The "feature" being teste