15,501
What is the "partial" in partial least squares methods?
I would like to answer this question largely from the historical perspective, which is quite interesting. Herman Wold, who invented the partial least squares (PLS) approach, did not start using the term PLS (or even mentioning the term "partial") right away. During the initial period (1966-1969), he referred to this approach a...
15,502
What is the "partial" in partial least squares methods?
In modern expositions of PLS there is nothing "partial": PLS looks for linear combinations among variables in $X$ and among variables in $Y$ that have maximal covariance. It is an easy eigenvector problem. That's it. See The Elements of Statistical Learning, Section 3.5.2, or e.g. Rosipal & Krämer, 2005, Overview and R...
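The "easy eigenvector problem" can be shown in a few lines: for centered data blocks, the first pair of PLS weight vectors are the leading singular vectors of the cross-covariance matrix. The data below are simulated for illustration; this follows the modern formulation the answer describes, not any particular author's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 5))
# Y shares latent structure with X, plus noise
Y = X @ rng.standard_normal((5, 3)) * 0.5 + rng.standard_normal((n, 3))

# Center both blocks
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)

# The first pair of PLS weight vectors (u, v) maximizes Cov(Xc u, Yc v);
# they are the leading singular vectors of the cross-covariance matrix.
C = Xc.T @ Yc / (n - 1)
U, s, Vt = np.linalg.svd(C)
u, v = U[:, 0], Vt[0, :]

cov = (Xc @ u) @ (Yc @ v) / (n - 1)
print(cov, s[0])  # the achieved covariance equals the top singular value
```

The equality holds because Cov(Xc u, Yc v) = uᵀCv, which the top singular-vector pair maximizes over unit vectors, attaining the largest singular value.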
15,503
Why isn't Bayesian statistics more popular for statistical process control?
WARNING I wrote this answer a long time ago with very little idea what I was talking about. I can't delete it because it's been accepted, but I can't stand behind most of the content. This is a very long answer and I hope it'll be helpful in some way. SPC isn't my area, but I think these comments are general enough th...
15,504
Why isn't Bayesian statistics more popular for statistical process control?
In my humble opinion, Bayesian statistics suffers from some drawbacks that hinder its widespread use (in SPC but in other research sectors as well): It is more difficult to get estimates vs. its frequentist counterpart (most classes on statistics adopt the frequentist approach. By the way, it woul...
15,505
Why isn't Bayesian statistics more popular for statistical process control?
One reason is that Bayesian statistics was frozen out of the mainstream until around 1990. When I was studying statistics in the 1970s it was almost heresy (not everywhere, but in most graduate programs). It didn't help that most of the interesting problems were intractable. As a result, nearly everyone who is teachi...
15,506
Qualitatively, what is cross entropy?
To encode an event occurring with probability $p$ you need at least $\log_2(1/p)$ bits (why? see my answer on "What is the role of the logarithm in Shannon's entropy?"). So under the optimal encoding, the average length of an encoded message is $$ \sum_i p_i \log_2(\tfrac{1}{p_i}), $$ that is, the Shannon entropy of the original prob...
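The bit-counting argument can be checked numerically: entropy is the average code length under the optimal code for p, and cross-entropy is the average length when events follow p but the code was built for a mismatched q. The two distributions below are arbitrary examples.

```python
import numpy as np

def entropy(p):
    """Average bits per symbol under the optimal code for p."""
    return -np.sum(p * np.log2(p))

def cross_entropy(p, q):
    """Average bits per symbol when events follow p but the code is built for q."""
    return -np.sum(p * np.log2(q))

p = np.array([0.5, 0.25, 0.25])   # true distribution
q = np.array([0.8, 0.1, 0.1])     # mismatched coding distribution

H = entropy(p)        # 1.5 bits
Hpq = cross_entropy(p, q)
print(H, Hpq)         # cross-entropy is never below entropy
print(Hpq - H)        # the gap is the KL divergence D(p || q)
```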
15,507
Conditional expectation of exponential random variable
$\ldots$ by the memoryless property the distribution of $X|X > x$ is the same as that of $X$ but shifted to the right by $x$. Let $f_X(t)$ denote the probability density function (pdf) of $X$. Then, the mathematical formulation for what you correctly state $-$ namely, the conditional pdf of $X$ given that $\{X > x\}...
15,508
Conditional expectation of exponential random variable
For $x>0$, the event $\{X>x\}$ has probability $P\{X>x\}=1-F_X(x)=e^{-\lambda x} > 0$. Hence, $$ \newcommand{\E}{\mathbb{E}} \E[X\mid X> x] = \frac{\E[X\,I_{\{X>x\}}]}{P\{X>x\}} \, , $$ but $$ \E[X\,I_{\{X>x\}}] = \int_x^\infty t\,\lambda\,e^{-\lambda t}\,dt = (*) $$ (using Feynman's trick, vindicated by the Domin...
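The conclusion both answers reach, $\E[X \mid X > x] = x + 1/\lambda$, is easy to check by Monte Carlo (the values of λ and x below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, x = 2.0, 0.7
samples = rng.exponential(scale=1/lam, size=2_000_000)

# Condition on the event {X > x} by keeping only those draws
tail = samples[samples > x]
print(tail.mean())        # ≈ x + 1/lam by memorylessness
print(x + 1/lam)          # 1.2
```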
15,509
Grid search on k-fold cross validation
Yes, this would be a violation as the test data for folds 2-10 of the outer cross-validation would have been part of the training data for fold 1 which were used to determine the values of the kernel and regularisation parameters. This means that some information about the test data has potentially leaked into the des...
15,510
Grid search on k-fold cross validation
After doing the grid search for each surrogate model, you can and should check a few things: variation of the optimized parameters (here $\gamma$ and $C$). Are the optimal parameters stable? If not, you're very likely in trouble. Compare the reported performance of the inner and outer cross validation. If the inner (...
15,511
Grid search on k-fold cross validation
You should fix $\gamma$ and $C$ initially. Then do $k$-fold cross validation to get a single test error estimate, $terr(\gamma,C)$. Then do a two-dimensional grid search, varying $\gamma$ and $C$ separately to generate a test error matrix. To speed things up people typically use a logarithmic grid, $\gamma,C \in \{ 2^{...
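The recipe in this answer — get a k-fold CV error estimate $terr(\gamma, C)$ for fixed parameters, then sweep a logarithmic 2-D grid — can be sketched as follows. As an assumption, the sketch substitutes closed-form kernel ridge regression for the SVM (with λ playing roughly the role of 1/C) so it stays dependency-free; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(120)

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cv_error(gamma, lam, k=5):
    """k-fold CV estimate of test error for one (gamma, lam) setting."""
    idx = np.arange(len(X))
    errs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        K = rbf(X[tr], X[tr], gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(tr)), y[tr])
        pred = rbf(X[fold], X[tr], gamma) @ alpha
        errs.append(np.mean((pred - y[fold]) ** 2))
    return np.mean(errs)

# Logarithmic 2-D grid, as in the answer: powers of 2 for both parameters
grid = [2.0 ** p for p in range(-6, 7, 2)]
terr = np.array([[cv_error(g, l) for l in grid] for g in grid])
gi, li = np.unravel_index(terr.argmin(), terr.shape)
print(f"best gamma={grid[gi]}, lambda={grid[li]}, cv error={terr[gi, li]:.4f}")
```

Each grid cell costs a full k-fold run, which is why the coarse log grid (rather than a fine linear one) is the usual first pass.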
15,512
What is the procedure for "bootstrap validation" (a.k.a. "resampling cross-validation")?
Short answer: Both validation techniques involve training and testing a number of models. Long answer about how to do it best: That of course depends. But here are some thoughts that I use to guide my decisions about resampling validation. I'm a chemometrician, so these strategies and also the terms are more or less clos...
15,513
What is the procedure for "bootstrap validation" (a.k.a. "resampling cross-validation")?
I don't know about "best" (which probably depends on what you use it for), but I use bootstrap validation to estimate error on new data the following way (a third way, if you like): Draw a training set of N observations from the original data (of size N) with replacement. Fit the model to the training data. Evaluate th...
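The steps in this answer — draw N observations with replacement, fit, then evaluate on the observations that were never drawn (the "out-of-bag" set) — can be sketched as follows. The model (OLS) and simulated data are placeholder assumptions, not the author's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = rng.standard_normal((n, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.standard_normal(n)

def fit(Xt, yt):
    # placeholder model: ordinary least squares
    return np.linalg.lstsq(Xt, yt, rcond=None)[0]

oob_errors = []
for _ in range(200):
    boot = rng.integers(0, n, size=n)          # draw n observations with replacement
    oob = np.setdiff1d(np.arange(n), boot)     # observations never drawn (~36.8% of them)
    b = fit(X[boot], y[boot])
    oob_errors.append(np.mean((X[oob] @ b - y[oob]) ** 2))

print(np.mean(oob_errors))   # out-of-bag estimate of prediction error on new data
```

Since each bootstrap sample leaves out about a third of the data, every iteration yields a genuine held-out test set "for free".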
15,514
What distribution is most commonly used to model server response time?
The log-normal distribution is the one I find best at describing latencies of server response times across the whole user base over a period of time. You may see some examples at the aptly-named site lognormal.com, which is in the business of measuring site latency distribution over time and more. I have no affiliation with ...
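One practical reason the log-normal is convenient: fitting it reduces to fitting a normal to the log of the data, and tail percentiles then follow in closed form. The latencies below are simulated, not real measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated response times (ms): log-normal with median ~100 ms
latencies = rng.lognormal(mean=np.log(100), sigma=0.5, size=50_000)

# Fitting a log-normal is just fitting a normal to the log of the data
logs = np.log(latencies)
mu, sigma = logs.mean(), logs.std()
print(np.exp(mu), sigma)   # recovered median ≈ 100, shape ≈ 0.5

# The fitted distribution reproduces tail percentiles well
p99_model = np.exp(mu + sigma * 2.326)   # 99th percentile; z_0.99 ≈ 2.326
p99_data = np.percentile(latencies, 99)
print(p99_model, p99_data)
```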
15,515
What distribution is most commonly used to model server response time?
My research shows the best model is determined by a few things: 1) Are you concerned with the body, the tail, or both? If not "both", modeling a filtered dataset can be more useful. 2) Do you want a very simple or a very accurate one? i.e. how many parameters? If the answer to 1 was "both" and 2 was "simple", Pareto se...
15,516
Properties of logistic regressions
The behavior you are observing is the "typical" case in logistic regression, but is not always true. It also holds in much more generality (see below). It is the consequence of the confluence of three separate facts. The choice of modeling the log-odds as a linear function of the predictors, The use of maximum likelih...
15,517
Robust outlier detection in financial timeseries
The problem is definitely hard. Mechanical rules like +/- N1 times the standard deviation, +/- N2 times the MAD, or +/- N3 times the IQR, ... will fail because there are always some series that behave differently, for example: fixings like interbank rates may be constant for some time and then jump all of a sudden; similarly for ...
15,518
Robust outlier detection in financial timeseries
I'll add some paper references when I'm back at a computer, but here are some simple suggestions: Definitely start by working with returns. This is critical to deal with the irregular spacing where you can naturally get big price gaps (especially around weekends). Then you can apply a simple filter to remove returns we...
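The suggestion above — work with returns, then filter — can be sketched with a median/MAD rule. This is an unconditional rule of the kind the first answer warns about, so treat it as a first pass only; the simulated prices and the 5-MAD threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated daily prices with one bad tick spliced in
log_p = np.cumsum(0.01 * rng.standard_normal(500))
prices = 100 * np.exp(log_p)
prices[250] *= 1.5          # a spurious 50% spike

returns = np.diff(np.log(prices))

# Robust z-scores: the median and MAD are barely affected by the outlier itself
med = np.median(returns)
mad = np.median(np.abs(returns - med))
robust_z = 0.6745 * (returns - med) / mad   # 0.6745 rescales MAD to a std-like unit

flagged = np.where(np.abs(robust_z) > 5)[0]
print(flagged)   # the returns into and out of the spike are flagged
```

A bad tick in a price series always contaminates two consecutive returns (in and out of the spike), which is worth remembering when cleaning.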
15,519
Robust outlier detection in financial timeseries
I have (with some delay) changed my answer to reflect your concern about the lack of 'adaptability' of the unconditional mad/median. You can address the problem of time varying volatility with the robust statistics framework. This is done by using a robust estimator of the conditional variance (instead of the robust es...
15,520
What is the role, if any, of the Central Limit Theorem in Bayesian Inference?
The Frequentist needs asymptotics because the things they are interested in, like intervals which cover the true value 95% of the time or tests which have a false positive rate of less than 5% when the null hypothesis is true, typically do not exist. If the model is linear and the errors Gaussian, we can get exact conf...
15,521
What is the role, if any, of the Central Limit Theorem in Bayesian Inference?
The limit theorem is 'central' The central limit theorem (CLT) has a central role in all of statistics. That is why it is called central! It is not specific to frequentist or Bayesian statistics. Note the early (and possibly first ever) use of the term 'central limit theorem' by George Pólya who used it in the article ...
15,522
What is the role, if any, of the Central Limit Theorem in Bayesian Inference?
Reproduced verbatim from the Wikipedia page: In Bayesian inference, the Bernstein-von Mises theorem provides the basis for using Bayesian credible sets for confidence statements in parametric models. It states that under some conditions, a posterior distribution converges in the limit of infinite data to a multivariat...
15,523
Is supervised learning a subset of reinforcement learning?
It's true that any supervised learning problem can be cast as an equivalent reinforcement learning problem: Let states correspond to the input data. Let actions correspond to predictions of the output. Define reward as the negative of the loss function used for supervised learning. Maximize expected reward. In contrast...
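The mapping in this answer (states = inputs, actions = predictions, reward = negative loss) can be made concrete in a few lines. The data and the policy below are toy assumptions purely to illustrate the correspondence.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy supervised problem: predict the label (0 or 1) from x
X = rng.standard_normal(500)
Y = (X > 0).astype(int)

# Cast as a one-step RL problem: state = x, action = predicted label,
# reward = negative 0-1 loss. A policy maximizing expected reward is
# exactly a classifier minimizing expected loss.
def reward(action, label):
    return -float(action != label)

# Evaluate the "policy" that thresholds at 0 (it matches the labeling rule)
total = sum(reward(int(x > 0), y) for x, y in zip(X, Y))
print(total / len(X))   # expected reward 0.0, i.e. zero classification error
```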
15,524
Is bootstrapping standard errors and confidence intervals appropriate in regressions where homoscedasticity assumption is violated?
There are at least three (may be more) approaches to perform the bootstrap for linear regression with independent, but not identically distributed data. (If you have other violations of the "standard" assumptions, e.g., due to autocorrelations with time series data, or clustering due to sampling design, things get even...
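One of the approaches this answer alludes to is the pairs (case) bootstrap: resample whole (x, y) rows, so each draw keeps the error variance that belongs to its x value, which stays valid under heteroscedasticity. A minimal sketch on simulated data (my reading of one of the "at least three" approaches, not the author's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0, 2, n)
# Heteroscedastic errors: the noise standard deviation grows with x
y = 1.0 + 2.0 * x + rng.standard_normal(n) * (0.2 + x)

X = np.column_stack([np.ones(n), x])

def ols_slope(Xd, yd):
    return np.linalg.lstsq(Xd, yd, rcond=None)[0][1]

# Pairs bootstrap: resample whole (x, y) rows together
slopes = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    slopes.append(ols_slope(X[i], y[i]))

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"95% percentile CI for the slope: [{lo:.3f}, {hi:.3f}]")
```

Because pairs are drawn together, no assumption of identically distributed residuals is needed, unlike the naive residual bootstrap.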
15,525
Fast method for finding best metaparameters of SVM (that is faster than grid search)
The downside of grid search is that the runtime grows as the product of the number of options for each parameter. Here is an entry in Alex Smola's blog related to your question. Here is a quote: [...] pick, say 1000 pairs (x,x’) at random from your dataset, compute the distance of all such pairs and ta...
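The quoted "median trick" can be sketched directly; the dataset below and the exact mapping from the median distance to an RBF γ are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 10))   # stand-in dataset

# The heuristic from the quote: sample random pairs, take the
# median pairwise distance as the kernel length scale
i = rng.integers(0, len(X), 1000)
j = rng.integers(0, len(X), 1000)
d = np.linalg.norm(X[i] - X[j], axis=1)
sigma = np.median(d)

gamma = 1.0 / (2 * sigma ** 2)        # for k(x, x') = exp(-gamma * ||x - x'||^2)
print(sigma, gamma)
```

This gives a data-driven starting point for γ at the cost of 1000 distance computations, instead of one full grid dimension.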
15,526
Fast method for finding best metaparameters of SVM (that is faster than grid search)
If you make the assumption that there is a relatively smooth function underlying the grid of parameters, then there are certain things that you can do. For example, one simple heuristic is to start with a very coarse grid of parameters, and then use a finer grid around the best of the parameter settings from the coarse...
15,527
Fast method for finding best metaparameters of SVM (that is faster than grid search)
I use simulated annealing for searching parameters. The behavior is governed by a few parameters: k is Boltzmann's constant. T_max is your starting temperature. T_min is your ending threshold. mu_T (μ) is how much you lower the temperature (T->T/μ) i is the number of iterations at each temperature z is a step size - y...
15,528
Fast method for finding best metaparameters of SVM (that is faster than grid search)
If anyone is interested, here are some of my thoughts on the subject: As @tdc suggested, I'm doing coarse/fine grid search. This introduces two problems: In most cases I will get a set of good metaparameter sets that have wildly different parameters --- I'm interpreting it in this way: that these parameters are optimal so...
15,529
Fast method for finding best metaparameters of SVM (that is faster than grid search)
If the kernel is radial, you can use this heuristic to get a proper $\sigma$ -- C optimisation is way easier then.
15,530
"Moderation" versus "interaction"?
You should consider the two terms to be synonymous. Although they are used in slightly different ways, and come from different traditions within statistics ('interaction' is associated more with ANOVA, and 'moderator variable' is more associated with regression), there is no real difference in the underlying meaning. ...
15,531
"Moderation" versus "interaction"?
I think you have things mostly correct except for the part about "in interaction, M (which is gender in this case) affects other the IV." In an interaction (a true synonym for a moderator effect--not something different), there is no need for one predictor to influence the other or even be correlated with the other. ...
15,532
"Moderation" versus "interaction"?
Moderation Vs Interaction Both moderation and interaction effects are very similar to each other. Mathematically, they both can be modelled by using a product term in the regression equation. Often researchers use the two terms as synonyms, but there is a thin line between interaction and moderation. The difference be...
15,533
"Moderation" versus "interaction"?
I think the most general model one can write regarding moderation of a variable z "in a relationship between y and x" is: y = f(x) + g(z) + h(x)z The marginal effect of x is f'(x)+h'(x)z, so the moderation effect is h'(x). Mike
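This identity is easy to verify numerically with finite differences. The concrete choices of f, g and h below are assumptions for illustration only:

```python
# Numerical check: for y = f(x) + g(z) + h(x)*z, dy/dx = f'(x) + h'(x)*z,
# so the moderation effect (how z changes the slope in x) is h'(x).
# Illustrative choices (assumed): f(x) = x**2, g(z) = 3z, h(x) = 2x.
def y(x, z):
    return x**2 + 3*z + (2*x)*z

def dydx(x, z, eps=1e-6):
    # central finite difference approximation of the marginal effect of x
    return (y(x + eps, z) - y(x - eps, z)) / (2 * eps)

x0, z0 = 1.5, 0.7
analytic = 2*x0 + 2*z0   # f'(x) + h'(x)*z with f'(x) = 2x, h'(x) = 2
print(dydx(x0, z0), analytic)
```

The slope in x depends on z exactly through h'(x)·z, which is the moderation term.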
15,534
Social network datasets
Check out the Stanford large network dataset collection: SNAP.
15,535
Social network datasets
A huge twitter dataset that includes followers, not just tweets. There is also a large collection of twitter datasets here.
15,536
Social network datasets
A large index of facebook pages was created and is available as a torrent (It is ~2.8Gb) http://btjunkie.org/torrent/Facebook-directory-personal-details-for-100-million-users/3979e54c73099d291605e7579b90838c2cd86a8e9575 Twitter datasets are tagged on Infochimps: http://infochimps.com/tags/twitter A lastfm dataset is av...
15,537
Social network datasets
visit the Max Planck institute. They have also collected several datasets for OSNs.
15,538
Social network datasets
Just found this: 476 million Twitter tweets (via @yarapavan).
15,539
Social network datasets
We have curated a Twitter dataset for friends of users in 2009 and then in 2009. You can find more information here: http://strict.dista.uninsubria.it/?p=364
15,540
Social network datasets
Check out kaggle.com; they have some contests about social networks and they give out datasets. Also, Stanford's SNAP is a great resource, and it has research works to boot.
15,541
Social network datasets
Facebook social graph, application installations and Last.fm users, events, groups at http://odysseas.calit2.uci.edu/research/ Two datasets (collected April-May 2009) which contain representative samples of ~1 million users Facebook-wide, with a few annotated properties: for each sampled user, the friend list, privacy...
15,542
Social network datasets
A good resource for finding datasets is: /r/datasets on Reddit. A quick glance at that page reveals this source, which might contain something useful for you.
15,543
Social network datasets
This paper uses a facebook dataset that is available here. Here is the description from the authors: The data includes the complete set of nodes and links (and some demographic information) from 100 US colleges and universities from a single-time snapshot in September 2005.
15,544
How to visualize 3D contingency matrix?
I would try some kind of 3D heatmap, mosaic plot or a sieve plot (available in the vcd package). Isn't the base mosaicplot() function working with three-way table? (at least mosaic3d() in the vcdExtra package should work, see e.g. http://datavis.ca/R/) Here's an example (including a conditional plot): A <- sample(c(T,F...
15,545
How to visualize 3D contingency matrix?
I recently came across a paper by Hadley Wickham and I was reminded of this question (I must spend too much time on the site!) Wickham, Hadley and Heike Hofmann. 2011. Product plots. IEEE Transactions on Visualization and Computer Graphics (Proc. Infovis `11). Pre-print PDF Abstract We propose a new framework fo...
15,546
Hyper parameters tuning: Random search vs Bayesian optimization
I think that the answer here is the same as everywhere in data science: it depends on the data :-) It might happen that one method outperforms another (here https://arimo.com/data-science/2016/bayesian-optimization-hyperparameter-tuning/ people compare Bayesian hyperparameter optimization and achieve a better result on...
15,547
Hyper parameters tuning: Random search vs Bayesian optimization
Bayesian optimization is better, because it makes smarter decisions. You can check this article in order to learn more: Hyperparameter optimization for neural networks. This article also has info about pros and cons for both methods + some extra techniques like grid search and Tree-structured Parzen estimators. Even t...
Hyper parameters tuning: Random search vs Bayesian optimization
Bayesian optimization is better, because it makes smarter decisions. You can check this article in order to learn more: Hyperparameter optimization for neural networks. This articles also has info abo
Hyper parameters tuning: Random search vs Bayesian optimization Bayesian optimization is better, because it makes smarter decisions. You can check this article in order to learn more: Hyperparameter optimization for neural networks. This articles also has info about pros and cons for both methods + some extra technique...
Hyper parameters tuning: Random search vs Bayesian optimization Bayesian optimization is better, because it makes smarter decisions. You can check this article in order to learn more: Hyperparameter optimization for neural networks. This articles also has info abo
15,548
Hyper parameters tuning: Random search vs Bayesian optimization
Of note, Bayesian hyperparameter optimization is a seq...
15,549
How to run linear regression in a parallel/distributed way for big data setting?
Short Answer: Yes, running linear regression in parallel has been done. For example, Xiangrui Meng et al. (2016) for Machine Learning in Apache Spark. The way it works is using stochastic gradient descent (SGD). In section 3, core features, the author mentioned: Generalized linear models are learned via optimization a...
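A single-machine sketch of the mini-batch SGD idea follows. To be clear, this is not MLlib's actual code; the point it illustrates is that each gradient step touches only one slice of the data, which is what makes the computation distributable across partitions:

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.1, epochs=200, batch=32, seed=0):
    """Mini-batch SGD for least squares: each gradient is computed from one
    small batch (one data partition, conceptually)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=1000)
w = sgd_linear_regression(X, y)
print(w)
```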
15,550
How to run linear regression in a parallel/distributed way for big data setting?
As @hxd1011 mentioned, one approach is to formulate linear regression as an optimization problem, then solve it using an iterative algorithm (e.g. stochastic gradient descent). This approach can be parallelized, but there are a couple of important questions: 1) How should the problem be broken into subproblems? 2) Given tha...
15,551
How to run linear regression in a parallel/distributed way for big data setting?
Long, long before map-reduce I solved this. Below is a reference to an old paper of mine in the Journal of Econometrics (1980). It was for parallel nonlinear maximum likelihood and would also work for M-estimation. The method is exact for regressions. Split the data into k subsets on k processors/units (could be done sequentially as ...
15,552
How to run linear regression in a parallel/distributed way for big data setting?
As far as I understand, the formulas for the intercept and the slope in a simple linear regression model can be rewritten as expressions on various sums that don't contain the mean value, so that these sums can be calculated in parallel during a map phase or similar: sum(x) sum(y) sum(x*x) sum(x*y) In a final phase t...
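A runnable sketch of these map/reduce steps for simple linear regression (the chunk splitting below just simulates workers on one machine; the helper names are my own):

```python
import numpy as np

def partial_sums(x, y):
    """'Map' step: each worker returns the sufficient statistics of its chunk."""
    return np.array([len(x), x.sum(), y.sum(), (x * x).sum(), (x * y).sum()])

def combine(stats):
    """'Reduce' step: the sums are additive across chunks, so we can add them
    up and plug them into the closed-form slope/intercept formulas."""
    n, sx, sy, sxx, sxy = np.sum(stats, axis=0)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 2.0 + 3.0 * x + 0.1 * rng.normal(size=10_000)

# Split into 4 "workers", map each chunk, then reduce
chunks = [partial_sums(xc, yc)
          for xc, yc in zip(np.array_split(x, 4), np.array_split(y, 4))]
print(combine(chunks))
```

Because only five numbers leave each worker, communication cost is independent of the chunk size.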
15,553
Are PCA components of multivariate Gaussian data statistically independent?
I will start with an intuitive demonstration. I generated $n=100$ observations (a) from a strongly non-Gaussian 2D distribution, and (b) from a 2D Gaussian distribution. In both cases I centered the data and performed the singular value decomposition $\mathbf X=\mathbf{USV}^\top$. Then for each case I made a scatter pl...
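A quick sketch of the "uncorrelated by construction" part of such a demonstration, for the Gaussian case only (the specific covariance matrix below is an assumption for illustration; in the non-Gaussian case the scores would still be uncorrelated yet could remain dependent):

```python
import numpy as np

# For centered data X with SVD X = U S V^T, the PC scores U*S have exactly
# zero sample correlation; for Gaussian data they are also (asymptotically)
# independent, while for non-Gaussian data they need not be.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[3.0, 1.2], [1.2, 1.0]], size=5000)
X -= X.mean(axis=0)                       # center the data
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S                            # principal component scores
print(np.corrcoef(scores.T)[0, 1])        # zero up to floating-point error
```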
15,554
Understanding bootstrapping for validation and model selection
First you have to decide if you really need model selection, or you just need to model. In the majority of situations, depending on dimensionality, fitting a flexible comprehensive model is preferred. The bootstrap is a great way to estimate the performance of a model. The simplest thing to estimate is variance. Mor...
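As a concrete example of the simplest use mentioned (estimating variance), here is a minimal bootstrap standard-error sketch; the function name and toy data are my own assumptions:

```python
import numpy as np

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    """Bootstrap standard error: resample the data with replacement,
    recompute the statistic on each resample, and take the SD of the
    replicates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([statistic(data[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

rng = np.random.default_rng(1)
sample = rng.normal(loc=5.0, scale=2.0, size=400)
se = bootstrap_se(sample, np.mean)
print(se)  # should be close to the theoretical 2 / sqrt(400) = 0.1
```

The same resampling loop works for any statistic (median, regression coefficient, a model's validation error), which is what makes the bootstrap a general tool for estimating model performance variability.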
15,555
Understanding bootstrapping for validation and model selection
Consider using the bootstrap for model averaging. The paper below could help, as it compares a bootstrap model averaging approach to (the more commonly used?) Bayesian model averaging, and lays out a recipe for performing the model averaging. Bootstrap model averaging in time series studies of particulate matter air...
15,556
When to check model assumptions
Some general observations: First, we tend to spend a lot of time using non-robust models when there are many robust options that have equal or greater statistical power. Semiparametric (ordinal) regression for continuous Y is one class of model that is not used often enough. Such models (e.g., the proportional odds m...
15,557
When to check model assumptions
Some remarks: Model assumptions are never fulfilled in reality, and there is therefore no way to make sure or even check that they are fulfilled. The correct question is not whether model assumptions are fulfilled, but rather whether violations of the model assumptions can be suspected that mislead the interpretation ...
15,558
When to check model assumptions
It's useful to distinguish different scenarios. In the first scenario, strict type I error control, as well as pre-specification (otherwise the first is hard to imagine), are required. E.g. think of a randomized Phase III trial for a new drug. In this scenario, you would check your assumptions before you even run your ...
15,559
Universal approximation theorem for convolutional networks
It seems this question has been answered in the affirmative in this recent article by Dmitry Yarotsky: Universal approximations of invariant maps by neural networks. The article shows that any translation equivariant function can be approximated arbitrarily well by a convolutional neural network given that it is suffic...
15,560
Universal approximation theorem for convolutional networks
This is an interesting question; however, it lacks a proper clarification of what is considered a convolutional neural network. Is the only requirement that the network include a convolution operation? Does it have to include only convolution operations? Are pooling operations admitted? Convolutional networks u...
15,561
Universal approximation theorem for convolutional networks
See the paper Universality of Deep Convolutional Neural Networks by Ding-Xuan Zhou, who shows that convolutional neural networks are universal, that is, they can approximate any continuous function to an arbitrary accuracy when the depth of the neural network is large enough.
15,562
Which multiple comparison method to use for a lmer model: lsmeans or glht?
Not a complete answer... The difference between glht(myfit, mcp(myfactor="Tukey")) and the two other methods is that this way uses a "z" statistic (normal distribution), whereas the other ones use a "t" statistic (Student distribution). The "z" statistic is the same as a "t" statistic with infinite degrees of freedom...
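The practical gap between the "z" and "t" statistics that this answer describes can be seen with a quick simulation. The sketch below is my own illustration (not part of the answer): it estimates the 97.5% critical value of a t statistic with 5 degrees of freedom by Monte Carlo and compares it with the normal-based value 1.96.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 6  # small groups -> t statistic with n - 1 = 5 degrees of freedom
sims = rng.standard_normal((200_000, n))
t_stats = sims.mean(axis=1) / (sims.std(axis=1, ddof=1) / np.sqrt(n))

q_t = np.quantile(t_stats, 0.975)  # empirical t critical value, about 2.57
q_z = 1.96                         # normal ("z") critical value
# With few degrees of freedom, a z-based test is anti-conservative:
# it uses 1.96 where roughly 2.57 is needed.
print(round(q_t, 2), q_z)
```

With many observations the two critical values converge, which is why the distinction only matters for small samples.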
15,563
Which multiple comparison method to use for a lmer model: lsmeans or glht?
(This answer highlights some information already present in one comment by @RussLenth above) In situations where the difference due to "z" vs. "t" statistic is negligible (i.e. in the limit of an infinite number of observations), calling lsmeans and multcomp with the default treatment of multiple comparisons would also...
15,564
How to deal with overdispersion in Poisson regression: quasi-likelihood, negative binomial GLM, or subject-level random effect?
Poisson regression is just a GLM: People often speak of the parametric rationale for applying Poisson regression. In fact, Poisson regression is just a GLM. That means Poisson regression is justified for any type of data (counts, ratings, exam scores, binary events, etc.) when two assumptions are met: 1) the log of the...
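The overdispersion being discussed is easy to demonstrate numerically. This sketch is my own illustration (parameters are made up): equidispersed Poisson counts have a variance-to-mean ratio near 1, while gamma-mixed Poisson counts (a negative binomial marginal) have a ratio well above 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

pois = rng.poisson(5.0, N)  # plain Poisson: variance equals the mean

# Gamma-distributed rates give a negative binomial marginal: overdispersed
rates = rng.gamma(2.0, 2.5, size=N)  # E[rate] = 5
nb = rng.poisson(rates)

print(pois.var() / pois.mean())  # close to 1
print(nb.var() / nb.mean())      # well above 1 (theory: 3.5 here)
```

The second ratio is what a quasi-Poisson dispersion parameter or a negative binomial model would be absorbing.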
15,565
centering and scaling dummy variables
When constructing dummy variables for use in regression analyses, each category in a categorical variable except for one should get a binary variable. So you should have e.g. A_level2, A_level3 etc. One of the categories should not have a binary variable, and this category will serve as the reference category. If you d...
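In Python, the same scheme described above — one binary column per category except a reference level — can be produced with pandas. This is an illustrative sketch; the column names mirror the A_level2/A_level3 example in the answer.

```python
import pandas as pd

df = pd.DataFrame({"A": ["level1", "level2", "level3", "level1"]})

# drop_first=True omits the first level, which serves as the reference category
dummies = pd.get_dummies(df["A"], prefix="A", drop_first=True)
print(list(dummies.columns))  # ['A_level2', 'A_level3']; level1 is the reference
```

A row with zeros in every dummy column then corresponds to the reference category, level1.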
15,566
centering and scaling dummy variables
If you are using R and scaling to a range between 0 and 1 only, then dummy variables (which already take only the values 0 and 1) are left unchanged; the rest of the columns will be rescaled. maxs <- apply(data, 2, max) mins <- apply(data, 2, min) data.scaled <- as.data.frame(scale(data, center = ...
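The claim is easy to verify outside R as well. A minimal numpy sketch (my own illustration, with made-up data) applies the same min-max formula and shows that the 0/1 column comes back unchanged:

```python
import numpy as np

X = np.array([[0.0, 10.0],
              [1.0, 30.0],
              [0.0, 20.0],
              [1.0, 40.0]])  # column 0 is a dummy, column 1 is continuous

mins, maxs = X.min(axis=0), X.max(axis=0)
X_scaled = (X - mins) / (maxs - mins)  # min-max scaling to [0, 1]

# The dummy already has min 0 and max 1, so (x - 0) / (1 - 0) = x
print(np.array_equal(X_scaled[:, 0], X[:, 0]))  # True
```

Only the continuous column is actually transformed; it ends up spanning exactly [0, 1].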
15,567
centering and scaling dummy variables
The point of mean centering in regression is to make the intercept more interpretable. That is, if you mean center all the variables in your regression model, then the intercept (called Constant in SPSS output) equals the overall grand mean for your outcome variable, which can be convenient when interpreting the final ...
15,568
Explanation for non-integer degrees of freedom in t test with unequal variances
The Welch-Satterthwaite d.f. can be shown to be a scaled weighted harmonic mean of the two degrees of freedom, with weights in proportion to the corresponding standard deviations. The original expression reads: $$\nu_{_W} = \frac{\left(\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}\right)^2}{\frac{s_1^4}{n_1^2\nu_1}+\frac{s_2^4}{...
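The Welch-Satterthwaite formula is straightforward to evaluate directly. Here is an illustrative Python helper (the sample values are made up), showing that the resulting degrees of freedom are generally non-integer and fall between min(ν1, ν2) and ν1 + ν2, with νi = ni − 1:

```python
def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite degrees of freedom for a two-sample t test."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

df = welch_df(s1=2.0, n1=10, s2=5.0, n2=15)
print(round(df, 2))  # about 19.76, between min(9, 14) = 9 and 9 + 14 = 23
```

When the two sample variances per observation are very unequal, the result is pulled toward the smaller of the two component degrees of freedom.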
15,569
Explanation for non-integer degrees of freedom in t test with unequal variances
What you are referring to is the Welch-Satterthwaite correction to the degrees of freedom. The $t$-test when the WS correction is applied is often called Welch's $t$-test. (Incidentally, this has nothing to do with SPSS, all statistical software will be able to conduct Welch's $t$-test, they just don't usually report...
15,570
Why does one have to use REML (instead of ML) for choosing among nested var-covar models?
A very short answer: the REML is a ML, so the test based on REML is correct anyway. As the estimation of the variance parameters with REML is better, it is natural to use it. Why is REML a ML? Consider e.g. a model $$Y = X\beta + Zu + e \def\R{\mathbb{R}}$$ with $X\in\R^{n\times p}$, $Z\in\R^{n\times q}$, and $\beta ...
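The intuition that REML gives better variance estimates than ML shows up already in the simplest case: estimating a variance from an i.i.d. normal sample, where ML divides the residual sum of squares by n and REML by n − 1. The simulation below is my own illustration, not part of the answer above:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 5, 4.0
samples = rng.normal(0.0, np.sqrt(sigma2), size=(50_000, n))

resid2 = (samples - samples.mean(axis=1, keepdims=True)) ** 2
ml_est = resid2.sum(axis=1) / n          # ML: biased, E = (n-1)/n * sigma2 = 3.2
reml_est = resid2.sum(axis=1) / (n - 1)  # REML: unbiased, E = sigma2 = 4.0

# Average over many replications: ML underestimates, REML is on target
print(round(ml_est.mean(), 1), round(reml_est.mean(), 1))
```

The bias comes from estimating the mean from the same data; REML removes it by working with contrasts that do not involve the mean, which is exactly the general construction sketched above.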
15,571
Why does one have to use REML (instead of ML) for choosing among nested var-covar models?
Likelihood ratio tests are statistical hypothesis tests that are based on a ratio of two likelihoods. Their properties are linked to maximum likelihood estimation (MLE). (see e.g. Maximum Likelihood Estimation (MLE) in layman terms). In your case (see question) you want to ''choose'' among two nested var-covar model...
15,572
Why does one have to use REML (instead of ML) for choosing among nested var-covar models?
I have an answer that has more to do with common sense than with Statistics. If you take a look at PROC MIXED in SAS, the estimation can be performed with six methods: http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_mixed_sect008.htm but REML is the default. Why? Apparently, the ...
15,573
Log-linked Gamma GLM vs log-linked Gaussian GLM vs log-transformed LM
Well, quite clearly the log-linear fit to the Gaussian is unsuitable; there's strong heteroskedasticity in the residuals. So let's take that out of consideration. What's left is lognormal vs gamma. Note that the histogram of $T$ is of no direct use, since the marginal distribution will be a mixture of variates (each co...
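One way to separate lognormal from gamma, consistent with the reasoning above, is to look at skewness on the log scale: a lognormal becomes exactly normal after logging, while a log-gamma stays left-skewed. An illustrative numpy sketch (my own, with made-up parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
g = rng.gamma(2.0, 1.0, size=N)                   # gamma draws
ln = rng.lognormal(mean=0.0, sigma=0.7, size=N)   # lognormal draws

def skew(x):
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

print(round(skew(np.log(ln)), 2))  # near 0: log of a lognormal is normal
print(round(skew(np.log(g)), 2))   # clearly negative: log-gamma is left-skewed
```

In practice one would apply the same idea to residuals on the log scale rather than to the marginal distribution, for the mixture reason given in the answer.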
15,574
Finding the fitted and predicted values for a statistical model
You have to be a bit careful with model objects in R. For example, whilst the fitted values and the predictions of the training data should be the same in the glm() model case, they are not the same when you use the correct extractor functions: R> fitted(md2) 1 2 3 4 5 6 ...
15,575
What are good datasets to illustrate particular aspects of statistical analysis?
The low birth weight study This is one of the datasets in Hosmer and Lemeshow's textbook on Applied Logistic Regression (2000, Wiley, 2nd ed.). The goal of this prospective study was to identify risk factors associated with giving birth to a low birth weight baby (weighing less than 2,500 grams). Data were collected on...
15,576
What are good datasets to illustrate particular aspects of statistical analysis?
Of course, the Anscombe 4 datasets are very good for teaching - they look very different, yet have identical simple statistical properties. I also suggest KDD Cup datasets http://www.kdd.org/kddcup/ because they have been well studied and there are many solutions, so students can compare their results and see how they...
15,577
What are good datasets to illustrate particular aspects of statistical analysis?
A lot of my Statistical Analysis courses at Cal Poly have used the "Iris" dataset, which is already in R. It has categorical variables, and highly correlated variables.
15,578
What are good datasets to illustrate particular aspects of statistical analysis?
The Titanic dataset used by Harrell in "Regression Modeling Strategies". I use a simplified version of his analysis when explaining logistic regression, explaining survival using sex, class and age. The Loyn dataset discussed in "Experimental Design and Data Analysis for Biologists" by Gerry Quinn and Mick Keough conta...
15,579
Hidden Markov models with Baum-Welch algorithm using python
The scikit-learn has an HMM implementation. It was until recently considered as unmaintained and its usage was discouraged. However it has improved in the development version. I cannot vouch for its quality, though, as I know nothing of HMMs. Disclaimer: I am a scikit-learn developer. Edit: we have moved the HMMs outsi...
15,580
Hidden Markov models with Baum-Welch algorithm using python
You can find Python implementations on: Hidden Markov Models in Python - CS440: Introduction to Artifical Intelligence - CSU Baum-Welch algorithm: Finding parameters for our HMM | Does this make sense? BTW: See Example of implementation of Baum-Welch on Stack Overflow - the answer turns out to be in Python.
15,581
Hidden Markov models with Baum-Welch algorithm using python
Have you seen NLTK? http://www.nltk.org/ It has some classes that are suitable for this sort of thing, but somewhat application dependent. http://www.nltk.org/api/nltk.tag.html#nltk.tag.hmm.HiddenMarkovModelTrainer If you are looking for something more 'education oriented', I wrote a toy trainer a while ago: http://paste...
15,582
Hidden Markov models with Baum-Welch algorithm using python
Some implementations of basic algorithms (including Baum-Welch in Python) are available here: http://ai.cs.umbc.edu/icgi2012/challenge/Pautomac/baseline.php
15,583
Hidden Markov models with Baum-Welch algorithm using python
The General Hidden Markov Model library has python bindings and uses the Baum-Welch algorithm.
15,584
Hidden Markov models with Baum-Welch algorithm using python
Following is a Python implementation of the Baum-Welch Alg...
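The code in the original answer is cut off above. Purely as an illustration (the function names and toy formulation are my own, not the original poster's), here is a minimal pure-Python sketch of one Baum-Welch re-estimation (EM) step for a discrete HMM:

```python
def forward(obs, pi, A, B):
    """Forward pass: alpha[t][i] = P(o_1..o_t, state_t = i)."""
    N = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for t in range(1, len(obs)):
        alpha.append([
            B[j][obs[t]] * sum(alpha[t - 1][i] * A[i][j] for i in range(N))
            for j in range(N)
        ])
    return alpha

def backward(obs, A, B):
    """Backward pass: beta[t][i] = P(o_{t+1}..o_T | state_t = i)."""
    N, T = len(A), len(obs)
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(N):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(N))
    return beta

def baum_welch_step(obs, pi, A, B):
    """One EM re-estimation step; returns updated (pi, A, B) plus the
    likelihood of `obs` under the *input* parameters."""
    N, T, M = len(pi), len(obs), len(B[0])
    alpha, beta = forward(obs, pi, A, B), backward(obs, A, B)
    likelihood = sum(alpha[T - 1][i] for i in range(N))
    # gamma[t][i] = P(state_t = i | obs);
    # xi[t][i][j] = P(state_t = i, state_{t+1} = j | obs)
    gamma = [[alpha[t][i] * beta[t][i] / likelihood for i in range(N)]
             for t in range(T)]
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / likelihood
            for j in range(N)] for i in range(N)] for t in range(T - 1)]
    new_pi = gamma[0][:]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1))
              for j in range(N)] for i in range(N)]
    new_B = [[sum(gamma[t][i] for t in range(T) if obs[t] == k) /
              sum(gamma[t][i] for t in range(T))
              for k in range(M)] for i in range(N)]
    return new_pi, new_A, new_B, likelihood
```

Iterating `baum_welch_step` keeps the rows of `A` and `B` normalized and never decreases the likelihood of the observations, which is the EM guarantee the full algorithm relies on.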
15,585
Correcting for outliers in a running average
If that example graph you have is typical, then any of the criteria you list will work. Most of those statistical methods are for riding the edge of errors right at the fuzzy level of "is this really an error?" But your problem looks wildly simple.. your errors are not just a couple standard deviations from the norm, ...
15,586
Correcting for outliers in a running average
The definition of what constitutes an abnormal value must scale to the data itself. The classic method of doing this is to calculate the z score of each of the data points and throw out any values greater than 3 z scores from the average. The z score can be found by taking the difference between the data point and...
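As a sketch of that rule (the function name and the 3-sigma default are just for illustration), using only the standard library:

```python
from statistics import mean, stdev

def remove_outliers(values, threshold=3.0):
    """Drop points whose z score -- distance from the mean, measured
    in standard deviations -- exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return list(values)  # no spread, nothing to reject
    return [v for v in values if abs(v - mu) / sigma <= threshold]
```

Note that with very few points, a single outlier inflates the standard deviation so much that its own z score can never reach 3, so this rule needs a reasonably large sample (or a robust median/MAD variant as suggested in another answer).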
15,587
Correcting for outliers in a running average
I would compute a running median (robust alternative to the mean) and a running MAD (robust alternative to the sd), and remove everything that is more than 5 MADs away from the median. https://ec.europa.eu/eurostat/documents/1001617/4398385/S4P1-MIRROROUTLIERDETECTION-LIAPIS.pdf
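A rough sketch of that filter (the window size and names are my own choices for illustration, not taken from the linked slides):

```python
from statistics import median

def mad_filter(values, window=15, cutoff=5.0):
    """Keep a point only if it lies within `cutoff` running MADs of
    the running median over a window around it."""
    kept = []
    for i, v in enumerate(values):
        lo = max(0, i - window // 2)
        win = values[lo:lo + window]
        med = median(win)
        mad = median(abs(x - med) for x in win)
        if mad == 0:
            # degenerate window (constant values): keep only exact matches
            if v == med:
                kept.append(v)
        elif abs(v - med) <= cutoff * mad:
            kept.append(v)
    return kept
```

Because both the median and the MAD ignore extreme values, a single huge spike in the window does not mask itself the way it would with a mean/sd filter.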
15,588
Correcting for outliers in a running average
Another solution is to use the harmonic mean. Your case is very similar to the example discussed in http://economistatlarge.com/finance/applied-finance/differences-arithmetic-geometric-harmonic-means
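For example (illustrative numbers), Python's standard library exposes both means directly:

```python
from statistics import harmonic_mean, mean

# A few response times with one large spike. The spike dominates the
# arithmetic mean but barely moves the harmonic mean, which
# down-weights large values.
times = [10.0, 11.0, 9.0, 10.0, 500.0]
print(mean(times))           # 108.0
print(harmonic_mean(times))  # roughly 12.4
```

Whether the harmonic mean is the right summary depends on what the numbers represent (it is most natural for rates); for plain outlier rejection, the filtering approaches in the other answers are safer.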
15,589
Correcting for outliers in a running average
You need to have some idea of expected variation or distribution, if you want to be able to exclude certain (higher) instances of variation as erroneous. For instance, if you can approximate the distribution of the "average times" result to a normal (Gaussian) distribution, then you can do what ojblass suggested and ex...
15,590
Correcting for outliers in a running average
Maybe a good method would be to ignore any results that are more than some defined value outside the current running average?
15,591
Correcting for outliers in a running average
The naive (and possibly best) answer to the bootstrapping question is "Accept the first N values without filtering." Choose N to be as large as possible while still allowing the setup time to be "short" in your application. In this case, you might consider using the window width (64 samples) for N. Then I would go with...
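A sketch of that scheme (the 3-sigma cutoff and the behavior on rejected samples are my own choices; the answer leaves the exact filter open):

```python
from collections import deque
from statistics import mean, stdev

def filtered_running_average(samples, window=64, cutoff=3.0):
    """Accept the first `window` samples unfiltered (the bootstrap
    phase), then ignore any sample more than `cutoff` standard
    deviations from the current window's mean. Returns the running
    average reported after each incoming sample."""
    buf = deque(maxlen=window)
    averages = []
    for x in samples:
        if len(buf) >= window:
            mu, sigma = mean(buf), stdev(buf)
            if sigma > 0 and abs(x - mu) / sigma > cutoff:
                averages.append(mu)  # outlier: keep reporting the old average
                continue
        buf.append(x)
        averages.append(mean(buf))
    return averages
```

One caveat: if the bootstrap samples are all identical, the window's standard deviation is zero and the filter is effectively disabled, so the bootstrap data should contain some natural variation.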
15,592
Finding most likely permutation
Provided the measurement errors are independent and identically Normally distributed for each instrument, the solution is to match the two sets of measurements in sorted order. Although this is intuitively obvious (comments posted shortly after the question was posted state this solution), it remains to prove it. To t...
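That matching rule is only a few lines of code (names are illustrative): sort both series and pair equal ranks.

```python
def most_likely_matching(first, second):
    """Match the two measurement sets in sorted order: pair each value
    in `first` with the value in `second` of the same rank. Returns,
    for each index into `first`, the matched index into `second`."""
    rank1 = sorted(range(len(first)), key=first.__getitem__)
    rank2 = sorted(range(len(second)), key=second.__getitem__)
    match = [None] * len(first)
    for i, j in zip(rank1, rank2):
        match[i] = j
    return match
```

For example, `most_likely_matching([3.1, 1.2, 2.0], [2.1, 0.9, 3.3])` pairs the largest with the largest, the smallest with the smallest, and so on.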
15,593
Finding most likely permutation
@whuber (+1) has answered the question in your title about finding the most likely permutation. My purpose here is to explore briefly by simulation whether you can expect that most likely permutation to be exactly correct. Roughly speaking, the order for the second weighing is most likely to be correct if the minimum d...
15,594
Finding most likely permutation
Inspired by BruceET's answer, here is a simulation that computes the distribution of the difference in the rank. While whuber's answer shows that identity is the most likely permutation, it is not the most likely difference in rank (there is only one permutation, the identity, with no change in rank, but there are many ...
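A standard-library version of such a simulation (the parameters are arbitrary choices for illustration): it estimates how often sorted-order matching recovers the true pairing exactly, which happens precisely when both noisy series preserve the true order.

```python
import random

def rank_order_match_probability(n=5, sigma=0.5, trials=2000, seed=1):
    """Estimate how often sorted-order matching recovers the true
    pairing exactly, when both instruments add N(0, sigma) noise to
    true values 1..n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        true = list(range(1, n + 1))
        a = [t + rng.gauss(0, sigma) for t in true]
        b = [t + rng.gauss(0, sigma) for t in true]
        # the matching is exactly right iff both noisy series induce
        # the same ordering of the underlying items
        if sorted(range(n), key=a.__getitem__) == sorted(range(n), key=b.__getitem__):
            hits += 1
    return hits / trials
```

With noise comparable to the spacing between true values, the estimated probability is well below 1, echoing BruceET's point that the most likely permutation is often not exactly correct.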
15,595
Fine Tuning vs. Transferlearning vs. Learning from scratch
Transfer learning is when a model developed for one task is reused to work on a second task. Fine-tuning is one approach to transfer learning where you change the model output to fit the new task and train only the output model. In Transfer Learning or Domain Adaptation, we train the model with a dataset. Then, we trai...
15,596
Fine Tuning vs. Transferlearning vs. Learning from scratch
Fine tuning, transfer learning, and learning from scratch are similar in that they are approaches to training a model on some data. But there are important differences. Both fine tuning and transfer learning build on knowledge (parameters) an existing model has learned from previous data, while training from scratch do...
15,597
In what kind of real-life situations can we use a multi-arm bandit algorithm?
When you play the original Pokemon games (Red or Blue and Yellow) and you get to Celadon City, the Team Rocket slot machines have different odds. Multi-Arm Bandit right there if you want to optimize getting that Porygon really fast. In all seriousness, people talk about the problem with choosing tuning variables in ...
15,598
In what kind of real-life situations can we use a multi-arm bandit algorithm?
They can be used in a biomedical treatment / research design setting. For example, I believe q-learning algorithms are used in Sequential, Multiple Assignment, Randomized Trial (SMART trials). Loosely, the idea is that the treatment regime adapts optimally to the progress the patient is making. It is clear how this ...
15,599
In what kind of real-life situations can we use a multi-arm bandit algorithm?
They are used in A/B testing of online advertising, where different ads are displayed to different users, and based on the outcomes decisions are made about what ads to show in the future. This is described in a nice paper by Google researcher Steven L. Scott (2010); there was also a page that is currently offline, but av...
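As a toy illustration of the ad-selection idea (epsilon-greedy is just one simple bandit policy, not necessarily the one used in the cited paper; all numbers here are made up):

```python
import random

def epsilon_greedy(true_rates, rounds=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit for ad selection: with probability epsilon
    show a random ad (explore), otherwise show the ad with the best
    observed click-through rate so far (exploit)."""
    rng = random.Random(seed)
    n = len(true_rates)
    shows = [0] * n
    clicks = [0] * n
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore
        else:
            # exploit; untried arms get +inf so each is shown at least once
            arm = max(range(n),
                      key=lambda a: clicks[a] / shows[a] if shows[a] else float("inf"))
        shows[arm] += 1
        clicks[arm] += 1 if rng.random() < true_rates[arm] else 0
    return shows, clicks
```

Run on three hypothetical ads with click-through rates 2%, 5%, and 15%, the policy ends up showing the 15% ad far more often than the others while still spending a small exploration budget on the rest.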
15,600
In what kind of real-life situations can we use a multi-arm bandit algorithm?
I asked the same question on Quora. Here's the answer: allocation of funding for different departments of an organization; picking the best performing athletes out of a group of students, given limited time and an arbitrary selection threshold; maximizing website earnings while simultaneously testing new features (in lieu o...