Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8800 | 1 | 9035 | null | 3 | 403 | i'm making experiments using app. 5000 labeled dataset.i'm trying different supervised ML algorithm to evaluate the results.The vector size is 13 with the labels (totally 12 features+1 label) and i have 15 vector of labeled "flower" class. experiments consist of all data set using 10k cross validation. All features are continuous.
```
1. Experiments using the "pure" features of the whole dataset.
2. Experiments where only one feature (out of 12) of the flower class is changed.
```
I applied naive Bayes and C4.5, and all results of experiments 1 and 2 were the same; logistic regression, however, gave different results and took longer to run.
1. In your experience, what causes the difference between naive Bayes, C4.5, and logistic regression, and how should I evaluate the results so that the audience is satisfied?
2. If performance is an important metric and the classifier is used for IDS systems, which ML algorithms would you suggest?
Edit: More explanation to make the question clear:
We have 8 different class labels: flower plus 7 others. In experiment 2 we change only one attribute (out of 12) of the flower class; the 15 flower-labeled vectors otherwise stay the same, as does the rest of the dataset. So we run logistic regression, naive Bayes, and C4.5 separately on the two datasets (1. the original 5000-instance dataset; 2. the same dataset with one attribute of the flower class changed, all other classes unchanged).
Comparison: comparing the results of situations 1 and 2, nothing changes for C4.5 and naive Bayes, neither FP nor FN, but logistic regression gives interesting results.
```
12 0.4 0.4 0.5 2.333 434 12.2 10 2 10 12 12 flower
........................................... flower
...........................................
........................................... flower (total 15 flower class.)
// the 2nd feature changed:
12 0.8 0.4 0.5 2.333 434 12.2 10 2 10 12 12 flower
........................................... flower
...........................................
........................................... flower (total 15 flower class,2nd feature all changed.)
```
> For example, can I make a comment like this: because C4.5 assigns the majority class in its leaf nodes, the change of one feature in the flower class will not affect the leaf-node classes; logistic regression, however, uses ..., so we observe this kind of difference?
| Interpretation of "one" feature change in a supervised classifier | CC BY-SA 2.5 | null | 2011-03-26T16:37:30.610 | 2011-04-01T11:26:39.950 | 2011-03-31T17:16:29.933 | 3270 | 3270 | [
"machine-learning",
"classification",
"feature-selection",
"dataset"
] |
8802 | 2 | null | 8695 | 4 | null | In case richiemorrisroe's response doesn't give you quite enough, I suggest...
- right-clicking your SPSS factor analysis output and choosing Results Coach to clarify the contents of the Variance Explained table
- searching the Help files or Tutorial for Reliability Analysis. I'm thinking that by "composite reliability" you mean internal consistency reliability (Cronbach's alpha).
| null | CC BY-SA 2.5 | null | 2011-03-26T17:51:56.140 | 2011-03-26T17:51:56.140 | null | null | 2669 | null |
8803 | 2 | null | 8797 | 3 | null | IME, the confidence interval is useful as it gives a sense of how uncertain your estimates are. Its a good way to check how variable your results might be, and to give others a sense for how likely the results are to be within a particular range.
That being said, the typical interpretation of one is that 95 (for a 95% interval) times of 100 if this experiment were repeated, the true value of the mean (or whatever you've calculated the interval for) would lie in this range. So it does not tell you that it is 95% certain that the true value lies within that range. On the other hand, a bayesian credible interval will tell you this, but these are not as widely used.
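A quick simulation (my own sketch, not tied to any particular study) illustrates the repeated-sampling interpretation: roughly 95 of every 100 intervals constructed this way cover the true mean.

```python
import numpy as np

# Simulate repeated experiments and count how often the 95% interval
# for a normal mean actually covers the true value.
rng = np.random.default_rng(1)
mu, n, reps = 0.0, 50, 2000
covered = 0
for _ in range(reps):
    x = rng.normal(mu, 1.0, size=n)
    half = 1.96 * x.std(ddof=1) / np.sqrt(n)   # normal-approximation interval
    if x.mean() - half <= mu <= x.mean() + half:
        covered += 1
coverage = covered / reps                       # close to 0.95
```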
| null | CC BY-SA 2.5 | null | 2011-03-26T18:49:30.687 | 2011-03-26T18:49:30.687 | null | null | 656 | null |
8804 | 2 | null | 8375 | 2 | null | As I see it there are two survival analysis paradigms that could be used. The Cox regression framework allows time varying covariates and would produce an estimate for the risk of cancellation conditioned on any particular set of covariates relative to the mean level of cancellation. The glm framework with Poisson errors is also a proportional hazards model and is particularly suited to discrete intervals. JVM has pointed out that there is potential error in using incomplete data in the current month, but the sense I get is that you want an estimate that is conditional on the latest value of a co-variate or set of covariates. Better description of the data situation could yield better worked examples....
| null | CC BY-SA 2.5 | null | 2011-03-26T19:03:42.100 | 2011-03-26T19:03:42.100 | null | null | 2129 | null |
8805 | 2 | null | 8799 | 6 | null | There are a number of ways you can approach this problem (as chl has noted) and you should definitely read the links he gives to other questions.
That being said, here is some advice which you may find useful.
The psych package is quite good for simple analysis of questionnaires.
Install it with install.packages("psych") from a local mirror.
There is a useful pairs.panels() function which will show you the correlations between your variables and their distributions, and plot regression lines through the points for you. It's a great graphic, but not one to use if you have more than 10 variables.
Your next step should probably be to run a factor analysis.
This can be done with either the factanal function in base R, or with the fa function in psych.
Note that this is likely to produce misleading results if you have a small sample size. You can test how many factors to extract using parallel analysis (fa.parallel in the psych package) or Minimum average partial (VSS in the psych package). This could give you some good ideas of how many factors to retain.
Your question suggests that you have no prior hypotheses about the structure of the instrument, which may suggest factoring the questions a number of times and finding the solution that makes the most sense.
You can also assess Cronbach's alpha, which is calculated as the mean of all possible split-half reliabilities. The reason I suggested doing the factor analysis first is that Cronbach's alpha tends to give weird results when applied to a questionnaire with multiple factors. The alpha function in the psych package can be used for this computation.
If you wish to formally test which model is best, you could look into confirmatory factor analysis, but that might be overkill right now. If you are interested, the sem, lavaan, and OpenMx packages for R can all carry out this kind of analysis.
| null | CC BY-SA 2.5 | null | 2011-03-26T19:04:16.593 | 2011-03-26T19:04:16.593 | null | null | 656 | null |
8806 | 2 | null | 8375 | 1 | null | Thank you for the clarification, B_Miner. I don't do a lot of forecasting myself, so take what follows with a pinch of salt. Here is what I would do as at least a first cut at the data.
- First, formulate and estimate a model that explains your TVCs. Do all of the cross-validation, error checking, etc., to make sure you have a decent model for the data.
- Second, formulate and estimate a survival model (of whatever flavor). Do all of the cross-validation, error checking, to make sure this model is reasonable as well.
- Third, settle on a method of using the forecasts from the TVCs model as the basis of forecasting risks of churn and whatever else you want. Once again, verify that the predictions are reasonable using your sample.
Once you have a model that you think is reasonable, I would suggest bootstrapping the data as a way to incorporate the error in the first TVC model into the second model. Basically, apply steps 1-3 N times, each time taking a bootstrap sample from the data and producing a set of forecasts. When you have a reasonable number of forecasts, summarize them in any way you think is appropriate for your task; e.g., provide mean risk of churn for each individual or covariate profile of interest as well as 95% confidence intervals.
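The bootstrap wrapper around steps 1-3 might be sketched like this (the fitting and forecasting functions are placeholders of my own invention, standing in for whatever TVC and survival models you actually use):

```python
import random

def bootstrap_forecasts(data, fit_tvc, fit_surv, forecast, n_boot=200, seed=0):
    """Apply steps 1-3 to n_boot bootstrap resamples and collect the forecasts."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]      # sample rows with replacement
        tvc_model = fit_tvc(resample)                    # step 1: model for the TVCs
        surv_model = fit_surv(resample)                  # step 2: survival model
        results.append(forecast(tvc_model, surv_model))  # step 3: forecast from both
    return results
```

Summarizing `results` (mean, 2.5% and 97.5% quantiles) then gives the point forecast and interval described above.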
| null | CC BY-SA 2.5 | null | 2011-03-26T20:40:48.077 | 2011-03-26T20:40:48.077 | null | null | 3265 | null |
8807 | 1 | 8832 | null | 40 | 12823 | I've been using the [caret package](http://cran.r-project.org/web/packages/caret/index.html) in R to build predictive models for classification and regression. Caret provides a unified interface to tune model hyper-parameters by cross validation or boot strapping. For example, if you are building a simple 'nearest neighbors' model for classification, how many neighbors should you use? 2? 10? 100? Caret helps you answer this question by re-sampling your data, trying different parameters, and then aggregating the results to decide which yield the best predictive accuracy.
I like this approach because it is provides a robust methodology for choosing model hyper-parameters, and once you've chosen the final hyper-parameters it provides a cross-validated estimate of how 'good' the model is, using accuracy for classification models and RMSE for regression models.
I now have some time-series data that I want to build a regression model for, probably using a random forest. What is a good technique to assess the predictive accuracy of my model, given the nature of the data? If random forests don't really apply to time series data, what's the best way to build an accurate ensemble model for time series analysis?
| Cross-validating time-series analysis | CC BY-SA 2.5 | null | 2011-03-26T20:50:33.563 | 2022-07-12T14:24:45.640 | 2011-03-27T17:59:44.317 | 2817 | 2817 | [
"r",
"time-series",
"cross-validation"
] |
8808 | 1 | null | null | 1 | 455 | How do you derive the expression for the $100(1-\alpha)$% bayesian confidence interval when working with the uniform distribution in the interval $[-\theta,\theta]$?
| Derivation of bayesian confidence interval | CC BY-SA 2.5 | null | 2011-03-26T20:51:44.430 | 2011-04-29T01:06:26.093 | 2011-04-29T01:06:26.093 | 3911 | null | [
"bayesian",
"self-study"
] |
8809 | 2 | null | 8795 | 5 | null | I'm pretty sure the answer is yes, the standard binomial 'fair coin' test is still valid: if you wish to test whether two of the three probabilities of a [multinomial distribution](http://en.wikipedia.org/wiki/Multinomial_distribution) are the same but you're not interested in any hypotheses about the third probability, you can analyse the numbers of the corresponding two outcomes as if they were drawn from a [binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution).
In fact this seems to make quite a nice exercise about sufficient statistics and conditional likelihood:
You can think of this as a multinomial distribution with three possible outcomes and hence two estimable parameters (as the three probabilities must sum to 1). But you're not interested in the probability of the 'middle' outcome, so you can take this to be the [nuisance parameter](http://en.wikipedia.org/wiki/Nuisance_parameter), and the difference between the number of 'top' and 'bottom' outcomes to be the parameter of interest.
It's straightforward to show (using the [Fisher–Neyman factorization theorem](http://en.wikipedia.org/wiki/Sufficient_statistic#Fisher.E2.80.93Neyman_factorization_theorem)) that the numbers of 'top' and 'bottom' outcomes together form a (two-dimensional) [sufficient statistic](http://en.wikipedia.org/wiki/Sufficient_statistic) for the parameter of interest, i.e. the number of 'middle' outcomes doesn't provide any additional information about the value of the parameter of interest. The number of 'middle' outcomes is clearly a sufficient statistic for the nuisance parameter. If we condition on the latter, I think (I haven't checked properly) that the resulting [conditional likelihood](http://en.wikipedia.org/wiki/Conditional_likelihood#Conditional_likelihood) will end up the same as the likelihood for the binomial distribution, i.e. the coin-tossing problem.
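A quick simulation supports this (a sketch of my own, not a proof): conditioning on the number of 'middle' outcomes, the 'top' count behaves like a binomial draw over the remaining trials with success probability $p_{top}/(p_{top}+p_{bottom})$.

```python
import numpy as np

# Multinomial with probabilities (top, middle, bottom) = (0.3, 0.4, 0.3).
rng = np.random.default_rng(0)
draws = rng.multinomial(30, [0.3, 0.4, 0.3], size=200_000)

# Condition on the middle count; the top count among the remaining 18
# outcomes should look Binomial(18, 0.3/(0.3+0.3)) = Binomial(18, 0.5).
mask = draws[:, 1] == 12
frac_top = draws[mask, 0].mean() / 18          # close to 0.5
```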
| null | CC BY-SA 2.5 | null | 2011-03-26T22:13:15.587 | 2011-03-26T22:13:15.587 | null | null | 449 | null |
8811 | 2 | null | 7321 | 1 | null | A null hypothesis of your significance test could be that the number of fields overlapping an $x$ point of the axis ($fields(x)$) comes from the same $k$ distribution no matter which point of the axis we take, i.e. $fields(x) \sim K$, where $K$ may take the values $0, 1, 2, ... \infty$. The alternative hypothesis could be that the number of fields at the point of interest ($fields(x_A)$) comes from an other distribution. You have already defined the test statistic: the number of fields overlapping ($fields(x)$), and the wording “significantly more” suggests you are thinking in terms of a one tailed test.
When you say “elsewhere” does it mean anywhere along this continuous axis? Or does it mean a number of points along the axis? I will assume the second, and that you have an $(x_1, fields(x_1)), (x_2, fields(x_2)), ... (x_N, fields(x_N))$ sample. I will also assume that the $x_1, x_2, ... X_N$ points are “not too close to each other” - your above figure suggests that the width of the fields is not negligible, thus $fields(x_1) = fields(x_2)$ if $x_1$ and $x_2$ are close to each other.
The significance value of the test is the probability of observing a number of fields equal to or more than $fields(x_A)$ under the null hypothesis. Under the above assumptions this is $p \approx 1 - ECDF_K(fields(x_A)-1))$, where $ECDF_K$ is the empirical cumulative distribution function of $K$ estimated using the sample of $N$ points (not including $x_A$).
If you use R all you have to do is use the `ecdf()` function to perform this calculation.
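The same calculation is a one-liner in other languages too; for instance, a Python sketch with made-up counts (the numbers here are illustrative only):

```python
import numpy as np

# fields(x_i) at N sample points along the axis (illustrative values)
counts = np.array([3, 1, 4, 2, 2, 5, 3, 1, 2, 4])
fields_xA = 5                                  # the count at the point of interest

# p ≈ 1 - ECDF_K(fields(x_A) - 1): fraction of sample counts >= fields(x_A)
p_value = np.mean(counts >= fields_xA)
```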
| null | CC BY-SA 2.5 | null | 2011-03-26T23:42:45.120 | 2011-03-26T23:55:53.190 | 2011-03-26T23:55:53.190 | 3911 | 3911 | null |
8812 | 1 | 8826 | null | 12 | 5177 | What is the Jeffreys prior for the geometric distribution?
| Jeffreys prior for geometric distribution? | CC BY-SA 2.5 | null | 2011-03-27T01:37:04.537 | 2015-11-04T13:39:21.083 | 2011-04-29T00:56:16.210 | 3911 | 3911 | [
"distributions",
"bayesian",
"prior"
] |
8813 | 2 | null | 8807 | 5 | null | If you have time series data then you might have a "degrees of freedom problem" . For example if you have 4 observations taken at hourly intervals and then decide to use 241 observations at 1minute intervals, you have 241 observations but they are not necessarily independent. When you submit these 241 values/measurements to an analytical package, the package might expect that these are 241 independent values as it proceeds to perform it's particular magic. If you have time series data you might have to upgrade your analytics. I don't know the program you refer to but it is a reasonable guess on my part ( I could be wrong ! ) that it's tests ( F tests / T tests ...etc ) probably don't apply to your problem set.
| null | CC BY-SA 2.5 | null | 2011-03-27T01:49:23.647 | 2011-03-27T18:40:07.790 | 2011-03-27T18:40:07.790 | 2116 | 3382 | null |
8814 | 2 | null | 8791 | 1 | null | There might be a bit of confusion here with some imprecise statistical jargon. If you have data points that have been measured/reported with different precision/reliability/variability then one turns naturally to Generalized Least Squares where one transforms/weights the data by adjusting for the relative variability . Search for Weighted Least Squares for example. Now given that one has weighted/transformed observed data one might we faced with another weighting issue. When you have correlated observations over space and/or time (taken at fixed intervals either time or space) one is advised to form an adaptive/autoregressive/auto-projected model called an ARIMA Model. Please review my answer to [Seeking certain type of ARIMA explanation](https://stats.stackexchange.com/questions/6498/seeking-certain-type-of-arima-explanation/8599#8599) which suggests that an ARIMA is simply a weighted average of previous values. For example y(t)=.5*y(t-1)+.25*y(t-2)+.125*y(t-3) +.... or y(t)=.5*y(t-1)+.5y(t-12)
These are two totally different "weighting solutions" . From your very vivid example it might be that you might have both opportunities to investigate. For more on time series you might review some of my postings and review what others might have said.
| null | CC BY-SA 2.5 | null | 2011-03-27T02:11:59.317 | 2011-03-27T11:15:02.570 | 2017-04-13T12:44:29.013 | -1 | 3382 | null |
8816 | 2 | null | 8795 | 3 | null | If you frame this as a binomial problem (p, 1-p), not a multinomial problem, you'll only be able to describe the past. You won't be able to say anything about the future. Why? Your removal of the middle "edge flips" is implied in your regrouping of the data.
In other words, your "data described" probability "p" of a positive result and probability "1-p" of a negative result will not apply on the next "binomial flip of the coin", because in the future you really have probabilities "x", "y", and "(1-x-y)".
Edit (03/27/2011) ===============================
I added the following diagram to help explain my comments below.

| null | CC BY-SA 2.5 | null | 2011-03-27T03:21:38.070 | 2011-03-27T22:54:06.670 | 2011-03-27T22:54:06.670 | 2775 | 2775 | null |
8817 | 1 | 8833 | null | 54 | 15155 | I am considering using Python libraries for doing my Machine Learning experiments. Thus far, I had been relying on WEKA, but have been pretty dissatisfied on the whole. This is primarily because I have found WEKA to be not so well supported (very few examples, documentation is sparse and community support is less than desirable in my experience), and have found myself in sticky situations with no help forthcoming. Another reason I am contemplating this move is because I am really liking Python (I am new to Python), and don't want to go back to coding in Java.
So my question is, what are the more
- comprehensive
- scalable (100k features, 10k examples) and
- well supported libraries for doing ML in Python out there?
I am particularly interested in doing text classification, and so would like to use a library that has a good collection of classifiers, feature selection methods (Information Gain, Chi-squared etc.), and text pre-processing capabilities (stemming, stopword removal, tf-idf etc.).
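For concreteness, this is the kind of pipeline I have in mind; I've sketched it with scikits-learn, whose API I haven't verified in detail myself:

```python
# Sketch of a tf-idf + naive Bayes text classifier with scikit-learn
# (class and function names taken from its documentation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["spam spam offer", "meeting tomorrow morning",
        "cheap offer spam", "project meeting notes"]
labels = [1, 0, 1, 0]                            # 1 = spam, 0 = not spam

vec = TfidfVectorizer(stop_words="english")      # stopword removal + tf-idf
X = vec.fit_transform(docs)

clf = MultinomialNB().fit(X, labels)
pred = clf.predict(vec.transform(["spam offer today"]))
```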
Based on the past e-mail threads here and elsewhere, I have been looking at PyML, scikits-learn and Orange so far. How have people's experiences been with respect to the above 3 metrics that I mention?
Any other suggestions?
| Machine Learning using Python | CC BY-SA 4.0 | null | 2011-03-27T04:00:59.400 | 2018-10-30T07:06:54.283 | 2018-10-30T07:06:54.283 | 128677 | 3301 | [
"machine-learning",
"python"
] |
8818 | 1 | 8825 | null | 12 | 9202 | What is the relation between a dimension and a component in a Gaussian Mixture Model? And what are the meanings of dimension and component? Thank you.
Please correct me if I'm wrong: my understanding is that the observed data have many dimensions. Each dimension represents a feature/aspect of the collected data and has its own Gaussian distribution. I don't know where "component" fits into this picture or what it means.
| What's a component in gaussian mixture model? | CC BY-SA 2.5 | null | 2011-03-27T04:17:46.557 | 2011-04-08T20:43:53.850 | 2011-04-08T20:43:53.850 | 919 | 2729 | [
"multivariate-analysis",
"normal-distribution",
"mixture-distribution"
] |
8819 | 2 | null | 8817 | 14 | null | In terms of working with text, have a look at NLTK. Very, very well supported & documented (there's even a book online, or in paper if you prefer) and will do the preprocesing you require. You might find Gensim useful as well; the emphasis is on vector space modeling and it's got scalable implementations of LSI and LDA (pLSI too I think) if those are of interest. It will also do selection by tf-idf - I'm not sure that NLTK does. I've used pieces of these on corpora of ~50k without much difficulty.
NLTK:
[http://www.nltk.org/](http://www.nltk.org/)
Gensim:
[http://nlp.fi.muni.cz/projekty/gensim/](http://nlp.fi.muni.cz/projekty/gensim/)
Unfortunately, as to the main thrust of your question I'm not familiar with the specific libraries you reference (although I've used bits of scikits-learn before).
| null | CC BY-SA 2.5 | null | 2011-03-27T04:41:29.610 | 2011-03-27T04:41:29.610 | null | null | 26 | null |
8820 | 1 | 8831 | null | 3 | 1181 | I have a data sample (in this case an EEG data sample, but my question refers to any type of data samples of prior unknown distributions).
I would like to make a nonparametric estimate of the expected value for my sample. From the research I did, I understood I can do this using bootstrap sampling. I found a pdf [here](http://www.scss.tcd.ie/Rozenn.Dahyot/453Bootstrap/2007Bootstrap02.pdf) giving a formula for the bootstrap expected value; hopefully it's correct.
In case it's not, can someone please let me know how to do it once I have generated the samples by bootstrapping?
Another possibility seems to be MCMC, but I would need to know the distribution from what I understood. I could do a kernel density estimation probably, but I think using bootstrapping might be less complex?
I can use python, Matlab or R, in case you do this kind of thing often and have code at hand to share, I'd really appreciate it.
Any other methods/suggestions are more than welcome.
| Nonparametric expected value estimation of sample from unknown distribution | CC BY-SA 2.5 | null | 2011-03-27T04:42:00.160 | 2011-03-27T10:59:28.920 | null | null | null | [
"estimation",
"sampling",
"nonparametric",
"markov-chain-montecarlo",
"bootstrap"
] |
8821 | 2 | null | 8817 | 3 | null | Check out [libsvm](http://www.csie.ntu.edu.tw/~cjlin/libsvm/).
| null | CC BY-SA 2.5 | null | 2011-03-27T05:23:07.833 | 2011-03-27T05:23:07.833 | null | null | 364 | null |
8822 | 2 | null | 2547 | 3 | null | I don't think there is a good descriptive reason for choosing median over mean for age distributions. There is one of practicality when comparing reported data.
Many countries report their population in 5-year age intervals with the top band open-ended. This causes some difficulty in calculating the mean from the intervals, especially for the youngest interval (affected by infant mortality rates), the top "interval" (what is the mean of an 80+ "interval"?), and the intervals near the top (the mean of each interval is usually below its midpoint).
It is far easier to estimate the median by interpolating within the median interval, often approximating by assuming a flat or trapezium age distribution in that interval (death rates in many countries are relatively low around the median age, making this a more reasonable approximation than it is for the young or old).
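The interpolation within the median interval can be sketched as follows (a flat within-interval distribution is assumed, and the counts are made up):

```python
def grouped_median(lower_bounds, width, counts):
    """Median of grouped data, interpolating linearly within the median interval."""
    total = sum(counts)
    cum = 0
    for lb, c in zip(lower_bounds, counts):
        if cum + c >= total / 2:
            # fraction of the interval needed to reach the half-way point
            return lb + width * (total / 2 - cum) / c
        cum += c

# five 5-year intervals starting at ages 0, 5, 10, 15, 20 (illustrative counts)
med = grouped_median([0, 5, 10, 15, 20], 5, [10, 12, 14, 10, 4])
```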
| null | CC BY-SA 3.0 | null | 2011-03-27T05:39:30.573 | 2012-01-02T22:59:09.940 | 2012-01-02T22:59:09.940 | 930 | 2958 | null |
8823 | 1 | 436376 | null | 23 | 1603 | This an exercise given in Probability Theory: The Logic of Science by Edwin Jaynes, 2003. There is a partial solution [here](http://ksvanhorn.com/bayes/Papers/sanders.pdf). I have worked out a more general partial solution, and was wondering if anyone else has solved it. I will wait a bit before posting my answer, to give others a go.
Okay, so suppose we have $n$ mutually exclusive and exhaustive hypotheses, denoted by $H_i \;\;(i=1,\dots,n)$. Further suppose we have $m$ data sets, denoted by $D_j \;\;(j=1,\dots,m)$. The likelihood ratio for the $i$th hypothesis is given by:
$$LR(H_{i})=\frac{P(D_{1}D_{2}\dots,D_{m}|H_{i})}{P(D_{1}D_{2}\dots,D_{m}|\overline{H}_{i})}$$
Note that these are conditional probabilities. Now suppose that given the ith hypothesis $H_{i}$ the $m$ data sets are independent, so we have:
$$P(D_{1}D_{2}\dots,D_{m}|H_{i})=\prod_{j=1}^{m}P(D_{j}|H_{i}) \;\;\;\; (i=1,\dots,n)\;\;\;\text{Condition 1}$$
Now it would be quite convenient if the denominator also factored in this situation, so that we have:
$$P(D_{1}D_{2}\dots,D_{m}|\overline{H}_{i})=\prod_{j=1}^{m}P(D_{j}|\overline{H}_{i}) \;\;\;\; (i=1,\dots,n)\;\;\;\text{Condition 2}$$
For in this case the likelihood ratio will split into a product of smaller factors for each data set, so that we have:
$$LR(H_i)=\prod_{j=1}^{m}\frac{P(D_{j}|H_{i})}{P(D_{j}|\overline{H}_{i})}$$
So in this case, each data set will "vote for $H_i$" or "vote against $H_i$" independently of any other data set.
The exercise is to prove that if $n>2$ (more than two hypotheses), there is no non-trivial way in which this factoring can occur. That is, if you assume that conditions 1 and 2 hold, then at most one of the factors:
$$\frac{P(D_{1}|H_{i})}{P(D_{1}|\overline{H}_{i})}\frac{P(D_{2}|H_{i})}{P(D_{2}|\overline{H}_{i})}\dots\frac{P(D_{m}|H_{i})}{P(D_{m}|\overline{H}_{i})}$$
is different from 1, and thus only one data set will contribute to the likelihood ratio.
I personally found this result quite fascinating, because it basically shows that multiple hypothesis testing is nothing but a series of binary hypothesis tests.
| Has anyone solved PTLOS exercise 4.1? | CC BY-SA 2.5 | null | 2011-03-27T08:36:41.173 | 2021-12-08T09:59:14.283 | null | null | 2392 | [
"independence",
"likelihood-ratio",
"hypothesis-testing",
"multiple-comparisons"
] |
8824 | 2 | null | 8817 | 9 | null | Python has a wide range of ML libraries (check out mloss.org as well). However, I always have the feeling that it's more of use for ml researchers than for ml practitioners.
[Numpy/SciPy](http://scipy.org) and [matplotlib](http://matplotlib.sf.net) are excellent tools for scientific work with Python. If you are not afraid to hack in most of the math formulas yourself, you will not be disappointed. Also, it is very easy to use the GPU with [cudamat](http://code.google.com/p/cudamat/) or [gnumpy](http://www.cs.toronto.edu/~tijmen/gnumpy.html) - experiments that took days before are now completed in hours or even minutes.
The latest kid on the block is probably [Theano](http://www.google.de/search?sourceid=chrome&ie=UTF-8&q=theano%20python). It is a symbolic language for mathematical expressions that comes with optimizations, GPU implementations, and the über-feature of automatic differentiation, which is nothing short of awesome for gradient-based methods.
Also, as far as I know the NLTK mentioned by JMS is basically the number one open source natural language library out there.
Python is the right tool for machine learning.
| null | CC BY-SA 2.5 | null | 2011-03-27T08:55:32.053 | 2011-03-27T08:55:32.053 | null | null | 2860 | null |
8825 | 2 | null | 8818 | 11 | null | A mixture of Gaussians is defined as a linear combination of multiple Gaussian distributions. Thus it has multiple modes. The dimension refers to the data (e.g. the color, length, width, height and material of a shoe) while the number of components refers to the model. Each Gaussian in your mixture is one component. Thus each component will correspond to one mode, in most of the cases.
I suggest you read up on [mixture models on wikipedia](http://en.wikipedia.org/wiki/Mixture_of_gaussians).
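To make the component/dimension distinction concrete, here is a sketch of sampling from a one-dimensional mixture with two components (a toy example of my own):

```python
import numpy as np

# 0.5 * N(-3, 1) + 0.5 * N(3, 1): one dimension, two components.
rng = np.random.default_rng(0)
n = 10_000
component = rng.integers(0, 2, size=n)   # which component generated each point
means = np.array([-3.0, 3.0])
x = rng.normal(means[component], 1.0)    # bimodal: one mode per component

frac_below_zero = np.mean(x < 0)         # about half the mass in each mode
```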
| null | CC BY-SA 2.5 | null | 2011-03-27T08:59:37.747 | 2011-03-27T08:59:37.747 | null | null | 2860 | null |
8826 | 2 | null | 8812 | 16 | null | The geometric distribution is given by:
$$p(X|\theta)=(1-\theta)^{X-1}\theta \;\;\; X=1,2,3,\dots$$
The log likelihood is thus given by:
$$\log[p(X|\theta)]=L=(X-1)\log(1-\theta)+\log(\theta)$$
Differentiate once:
$$\frac{\partial L}{\partial \theta}=\frac{1}{\theta}-\frac{X-1}{1-\theta}$$
And again:
$$\frac{\partial^{2} L}{\partial \theta^{2}}=-\frac{1}{\theta^{2}}-\frac{X-1}{(1-\theta)^{2}}$$
Take the negative expectation of this conditional on $\theta$ (this is the Fisher information), noting that $E(X|\theta)=\frac{1}{\theta}$:
And so we have:
$$I(\theta)=\frac{1}{\theta^{2}}+\frac{\theta^{-1}-1}{(1-\theta)^{2}}=\theta^{-2}\left(1+\frac{\theta}{1-\theta}\right)=\theta^{-2}(1-\theta)^{-1}$$
The Jeffreys prior is given by the square root of this:
$$p(\theta|I) \propto \sqrt{I(\theta)}=\theta^{-1}(1-\theta)^{-\frac{1}{2}}$$
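As a numerical sanity check on the Fisher information (my addition, not part of the derivation), a Monte Carlo estimate agrees with $\theta^{-2}(1-\theta)^{-1}$:

```python
import numpy as np

# Estimate E[-d^2 log p / d theta^2] by simulation for theta = 0.3.
rng = np.random.default_rng(0)
theta = 0.3
X = rng.geometric(theta, size=500_000)   # support 1, 2, 3, ...

info_mc = np.mean(1 / theta**2 + (X - 1) / (1 - theta)**2)
info_exact = 1 / (theta**2 * (1 - theta))   # about 15.87
```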
| null | CC BY-SA 3.0 | null | 2011-03-27T08:59:51.443 | 2013-10-25T08:14:19.340 | 2013-10-25T08:14:19.340 | 17230 | 2392 | null |
8827 | 2 | null | 7430 | 2 | null | @JMS answer is adequate for the nuts and bolts of changing variables. However, [This question](https://stats.stackexchange.com/questions/8104/why-is-a-p-sigma2-sim-textig0-001-0-001-prior-on-variance-considered-we) may help you a bit with why it is uniform on that scale.
My answer to [this question](https://stats.stackexchange.com/questions/8477/expression-for-conditional-density-for-arch-processes/8489#8489) goes through a slightly longer derivation of the "jacobian rule" result given in @JMS's answer. It may help with understanding why the rule applies.
| null | CC BY-SA 2.5 | null | 2011-03-27T09:09:00.470 | 2011-03-27T09:09:00.470 | 2017-04-13T12:44:49.683 | -1 | 2392 | null |
8828 | 2 | null | 8817 | 4 | null | Not sure if this is particularly useful, but there's a guide for programmers to learn statistics in Python available online. [http://www.greenteapress.com/thinkstats/](http://www.greenteapress.com/thinkstats/)
It seems pretty good from my brief scan, and it appears to talk about some machine learning methods, so it might be a good place to start.
| null | CC BY-SA 2.5 | null | 2011-03-27T09:10:04.543 | 2011-03-27T09:10:04.543 | null | null | 656 | null |
8829 | 2 | null | 8808 | 3 | null | So you have a density of:
$$p(X_i|\theta)=\frac{1}{2\theta}\;\;\;\; X_i\in[-\theta,\theta]$$
Now this is what is called a scale density, and $\theta$ is a scale parameter, just like the standard deviation in a normal distribution.
Now to do a Bayesian CI you require a prior distribution for $\theta$. Because $\theta$ is a scale parameter, the prior describing complete initial ignorance is given by:
$$p(\theta|\theta_L,\theta_U) = \frac{1}{\theta [log(\frac{\theta_U}{\theta_L})]} \;\;\; \theta\in[\theta_L,\theta_U]$$
In your comment, you state that "the improper prior is to be used", so this means that you take the upper limit $\theta_U\rightarrow \infty$ and the lower limit $\theta_L\rightarrow 0$. But this is, in principle, to be done at the end of the calculation, not at the start. This ensures that you don't have an "infinity" floating around in your results, making them arbitrary. If the limit does not exist, then "probability theory" is "telling you" that the actual bounds are important to your conclusion.
I assume you want to work this out for yourself, so the remainder just goes:
- calculate the posterior (Note: I have used $s$ as a dummy variable to indicate that the denominator is independent of $\theta$). $$p(\theta|D,\theta_L,\theta_U)=\frac{p(\theta|\theta_L,\theta_U)\prod_{i=1}^{n}p(X_i|\theta)}{\int_{\theta_L}^{\theta_U}p(s|\theta_L,\theta_U)\prod_{i=1}^{n}p(X_i|s)ds}$$
- Calculate the lower bound $C_L$ and upper bound $C_U$ such that there is a $100(1-\alpha)$% probability that $C_L<\theta<C_U$. You can do this by solving for the general limits
$$\int_{C_L}^{C_U}p(\theta|D,\theta_L,\theta_U)d\theta=1-\alpha$$
This should give you enough of the "machinery" to go and solve the problem. However, if you need more details, I can post them.
| null | CC BY-SA 2.5 | null | 2011-03-27T09:43:27.663 | 2011-03-27T09:43:27.663 | null | null | 2392 | null |
8830 | 2 | null | 8817 | 8 | null | Let me suggest [Orange](http://orange.biolab.si/)
>
comprehensive
Yes
>
scalable (100k features, 10k examples)
Yes
>
well supported libraries for doing ML in Python out there?
Yes
>
library that has a good collection of classifiers, feature selection methods (Information Gain, Chi-Squared etc.),
All of these work out of the box in Orange
>
and text pre-processing capabilities (stemming, stopword removal, tf-idf etc.).
I have never used Orange for text processing, though
| null | CC BY-SA 2.5 | null | 2011-03-27T10:03:22.980 | 2011-03-27T10:03:22.980 | null | null | 1496 | null |
8831 | 2 | null | 8820 | 2 | null | It is quite simple; you make a subsample by sampling with replacement:
```
sample(x, replace = TRUE)
```
calculate the statistic you want on it:
```
mean(sample(x, replace = TRUE))
```
finally average it over many repetitions:
```
mean(replicate(1000, mean(sample(x, replace = TRUE))))
```
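Putting the three pieces together, a minimal self-contained sketch (with `x` standing in for your observed sample, and a percentile 95% interval added as an illustration):
```
set.seed(42)
x <- rnorm(100, mean = 5, sd = 2)         # stand-in for your observed data
# bootstrap distribution of the mean
boot.means <- replicate(1000, mean(sample(x, replace = TRUE)))
mean(boot.means)                          # bootstrap estimate of the mean
quantile(boot.means, c(0.025, 0.975))     # percentile 95% confidence interval
```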
| null | CC BY-SA 2.5 | null | 2011-03-27T10:51:10.953 | 2011-03-27T10:59:28.920 | 2011-03-27T10:59:28.920 | null | null | null |
8832 | 2 | null | 8807 | 11 | null | The "classical" k-fold cross-validation technique is based on the fact that each sample in the available data set is used (k-1) times to train a model and once to test it. Since it is very important to validate time series models on "future" data, this approach will not contribute to the stability of the model.
One important property of many (most?) time series is the correlation between adjacent values. As pointed out by IrishStat, if you use previous readings as the independent variables of your model candidate, this correlation (or lack of independence) plays a significant role and is another reason why k-fold cross-validation isn't a good idea.
One way to overcome this problem is to "oversample" the data and decorrelate it. If the decorrelation process is successful, then using cross-validation on time series becomes less problematic. It will not, however, solve the issue of validating the model using future data.
Clarifications
By validating the model on future data I mean constructing the model, waiting for new data that wasn't available during model construction, testing, fine-tuning, etc., and validating it on that new data.
By oversampling the data I mean collecting time series data at a frequency much higher than practically needed. For example: sampling stock prices every 5 seconds when you are really interested in hourly changes. Here, when I say "sampling" I don't mean "interpolating", "estimating", etc. If the data cannot be measured at a higher frequency, this technique is meaningless.
| null | CC BY-SA 2.5 | null | 2011-03-27T11:19:52.957 | 2011-03-28T14:49:33.693 | 2011-03-28T14:49:33.693 | 1496 | 1496 | null |
8833 | 2 | null | 8817 | 40 | null | About the scikit-learn option: 100k (sparse) features and 10k samples is reasonably small enough to fit in memory hence perfectly doable with scikit-learn (same size as the 20 newsgroups dataset).
Here is a tutorial I gave at PyCon 2011 with a chapter on text classification with exercises and solutions:
- http://scikit-learn.github.com/scikit-learn-tutorial/ (online HTML version)
- https://github.com/downloads/scikit-learn/scikit-learn-tutorial/scikit_learn_tutorial.pdf (PDF version)
- https://github.com/scikit-learn/scikit-learn-tutorial (source code + exercises)
I also gave a talk on the topic which is an updated version of the version I gave at PyCon FR. Here are the slides (and the embedded video in the comments):
- http://www.slideshare.net/ogrisel/statistical-machine-learning-for-text-classification-with-scikitlearn-and-nltk
As for feature selection, have a look at this answer on quora where all the examples are based on the scikit-learn documentation:
- http://www.quora.com/What-are-some-feature-selection-methods/answer/Olivier-Grisel
We don't have collocation feature extraction in scikit-learn yet. Use nltk and nltk-trainer to do this in the meantime:
- https://github.com/japerk/nltk-trainer
| null | CC BY-SA 2.5 | null | 2011-03-27T11:20:59.597 | 2011-03-27T11:35:10.863 | 2011-03-27T11:35:10.863 | 2150 | 2150 | null |
8834 | 2 | null | 8732 | 0 | null | Whether or not you wish to forecast has nothing whatsoever to do with correct time series analysis. Time series methods can develop a robust model that can be used simply to characterize the relationship between a dependent series, a set of user-suggested inputs (a.k.a. user-specified predictor series), and empirically identified omitted variables, be they deterministic or stochastic. Users can then, at their option, extend the "signal" into the future, i.e. forecast, with uncertainties based upon the uncertainty in the coefficients and the uncertainty in the future values of the predictors. These two kinds of empirically identified "omitted series" can be classified as 1) deterministic and 2) stochastic. The first type consists simply of Pulses, Level Shifts, Seasonal Pulses and Local Time Trends, whereas the second type is represented by the ARIMA portion of your final model. When one omits one or more stochastic series from the list of possible predictors, the omission is characterized by the ARIMA component in your final model. Time series modelers refer to ARIMA models as a "Poor Man's Regression Model" because the past of the series is being used as a proxy for omitted stochastic input series.
| null | CC BY-SA 2.5 | null | 2011-03-27T13:06:33.083 | 2011-03-27T13:06:33.083 | null | null | 3382 | null |
8837 | 2 | null | 8744 | 6 | null | I suggest a two-step approach:
- get good initial estimates of the cluster centers, e.g. using hard or fuzzy K-means.
- Use Global Nearest Neighbor assignment to associate points with cluster centers: Calculate a distance matrix between each point and each cluster center (you can make the problem a bit smaller by only calculating reasonable distances), replicate each cluster center X times, and solve the linear assignment problem. You'll get, for each cluster center, exactly X matches to data points, so that, globally, the distance between data points and cluster centers is minimized.
Note that you can update cluster centers after step 2 and repeat step 2 to basically run K-means with fixed number of points per cluster. Still, it will be a good idea to get a good initial guess first.
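A sketch of step 2 in R, assuming the `clue` package (whose `solve_LSAP()` solves the linear assignment problem) and made-up data with equal cluster sizes:
```
library(clue)                                   # for solve_LSAP()
set.seed(1)
pts     <- matrix(rnorm(40), ncol = 2)          # 20 points in 2D
centers <- matrix(rnorm(8), ncol = 2)           # 4 cluster centers
X <- nrow(pts) / nrow(centers)                  # 5 points per cluster
big <- centers[rep(1:nrow(centers), each = X), ]  # replicate each center X times
# square distance matrix between points and replicated centers (20 x 20)
D <- outer(1:nrow(pts), 1:nrow(big),
           Vectorize(function(i, j) sqrt(sum((pts[i, ] - big[j, ])^2))))
assignment <- solve_LSAP(D)                     # minimize total point-center distance
cluster <- rep(1:nrow(centers), each = X)[assignment]
table(cluster)                                  # X points in each cluster by construction
```
Because each replicated center can be matched to only one point, the assignment is guaranteed to put exactly X points in every cluster.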
| null | CC BY-SA 2.5 | null | 2011-03-27T14:00:14.600 | 2011-03-27T14:28:15.977 | 2011-03-27T14:28:15.977 | 198 | 198 | null |
8838 | 2 | null | 8568 | 1 | null | You could rewrite your model in Bayesian software (OpenBUGS, PyMC).
When new information becomes available, add it to the model and re-estimate the posterior.
| null | CC BY-SA 2.5 | null | 2011-03-27T14:26:36.253 | 2011-03-27T14:26:36.253 | null | null | 3911 | null |
8839 | 2 | null | 7197 | 1 | null | You could set up a model that predicts LOS using YEAR, DG and other variables available (hospital datasets usually include age, gender and many other potential predictors).
One way of comparing your hospital to the comparison set is joining the two datasets and adding a hospital column (either 'my hospital' or 'comparison hospital'). If the conditions (linearity, independence, normality, homoscedasticity) of a linear model are met, you could use a simple ANCOVA. It looks reasonable to assume that LOS differences between your hospital and the comparison data may depend on DG, so you should probably include the DG by hospital interaction term. Depending on the details, more sophisticated models may be needed. R code sample:
```
model = lm(LOS ~ YEAR + DG * hospital, data=theJoinedTable)
summary(model)
```
| null | CC BY-SA 2.5 | null | 2011-03-27T14:57:41.187 | 2011-03-27T14:57:41.187 | null | null | 3911 | null |
8840 | 2 | null | 8817 | 3 | null | SHOGUN ([将軍](http://www.shogun-toolbox.org/)) is a large scale machine learning toolbox, which seems promising.
| null | CC BY-SA 2.5 | null | 2011-03-27T15:16:27.763 | 2011-03-27T15:16:27.763 | null | null | 1351 | null |
8841 | 5 | null | null | 0 | null | Mixture models arise in attempts to characterize complicated probability distributions, especially those with two or more modes, in terms of distributions with mathematically simple descriptions.
### Disambiguation
- Do not confuse a "mixture model" with a "mixed model"! The former concerns distributions, typically multi-modal, that will be analyzed as positive linear combinations of other distributions. The latter occurs in a regression setting where some of the independent variables are viewed as fixed and others are viewed as realizations of random variables.
- Note that although the density of a mixture is, by definition, a linear combination of densities, it is not in general the same as the density of a linear combination of random variables. For example, the average of two normal random variables is normal (and therefore has a single mode), but a 50:50 mixture of two different normal densities often has two modes and is never normal.
- Compound distributions are also known as "mixtures". Please use the compound-distributions tag in such cases. See the meta thread on The “mixture” vs. the “compound-distributions” tags for details.
| null | CC BY-SA 4.0 | null | 2011-03-27T16:20:20.390 | 2020-05-02T09:09:50.780 | 2020-05-02T09:09:50.780 | 1352 | 919 | null |
8842 | 4 | null | null | 0 | null | A mixture distribution is one that is written as a convex combination of other distributions. Use the "compound-distributions" tag for "concatenations" of distributions (where a parameter of a distribution is itself a random variable). | null | CC BY-SA 4.0 | null | 2011-03-27T16:20:20.390 | 2020-05-02T09:01:39.847 | 2020-05-02T09:01:39.847 | 1352 | 919 | null |
8844 | 1 | 8860 | null | 10 | 3462 | How to calculate discrete interval coverage?
What I know how to do:
If I had a continuous model, I could define a 95% confidence interval for each of my predicted values, and then see how often the actual values were within the confidence interval. I might find that only 88% of the time did my 95% confidence interval cover the actual values.
What I don't know how to do:
How do I do this for a discrete model, such as Poisson or gamma-Poisson? What I have for this model is as follows, taking a single observation (out of the more than 100,000 I plan to generate):
Observation #: (arbitrary)
Predicted value: 1.5
Predicted probability of 0: .223
Predicted probability of 1: .335
Predicted probability of 2: .251
Predicted probability of 3: .126
Predicted probability of 4: .048
Predicted probability of 5: .014 [and 5 or more is .019]
...(etc)
Predicted probability of 100 (or to some otherwise unrealistic figure): .000
Actual value (an integer such as "4")
Note that while I've given poisson values above, in the actual model a predicted value of 1.5 may have different predicted probabilities of 0,1,...100 across observations.
I'm confused by the discreteness of the values. A "5" is obviously outside the 95% interval, since there's only .019 at 5 and above, which is less than .025. But there will be a lot of 4's -- individually they are within, but how do I jointly evaluate the number of 4's more appropriately?
Why do I care?
The models I'm looking at have been criticized for being accurate at the aggregate level but giving poor individual predictions. I want to see how much worse the poor individual predictions are than the inherently wide confidence intervals predicted by the model. I'm expecting the empirical coverage to be worse (e.g. I might find 88% of the values lie within the 95% confidence interval), but I hope only a bit worse.
| Discrete functions: Confidence interval coverage? | CC BY-SA 2.5 | null | 2011-03-27T17:31:46.470 | 2011-12-12T13:59:11.507 | null | null | 3919 | [
"confidence-interval",
"discrete-data"
] |
8845 | 1 | 8861 | null | 6 | 241 | [Samiuddin, (1976)](http://www.jstor.org/stable/2285344) states:

or, typeset with $\LaTeX$ as originally posted
>
We start with the usual noninformative
prior distribution of $\mu_i$ and
$\sigma_i (i = 1,2,\ldots, k)$
$$\pi(\mu_1, \mu_2, \ldots, \mu_k;
\sigma_1, \sigma_2, \ldots, \sigma_k)
\propto d\mu_1d\mu_2\ldots d\mu_k
\frac{d\sigma_1d\sigma_2\ldots d\sigma_k}
{\sigma_1\sigma_2\ldots \sigma_k}$$
What does this notation mean?
---
Samiuddin, M. 1976. Bayesian Test of Homogeneity of Variance. Journal of the American Statistical Assoc. Vol. 71, No. 354
| What does $d$ mean in this notation of the "usual noninformative prior of $\mu_i$ and $\sigma_i$?" | CC BY-SA 3.0 | null | 2011-03-27T17:46:18.863 | 2011-05-18T13:45:14.577 | 2011-05-18T13:45:14.577 | 1381 | 1381 | [
"probability",
"bayesian",
"prior",
"notation"
] |
8846 | 1 | 8863 | null | 6 | 4852 | I'm about to apply the Kruskal-Wallis test (non-parametric ANOVA) on three groups of unequal size. I was taught/advised to apply Kruskal-Wallis only if:
- dependent variable is at least at ordinal level of measurement
- group's $ n > 5 $ (otherwise H statistic is not $ \chi ^2 $ distributed, so exact p-value cannot be calculated, etc.)
- Wikipedia page says that it requires an identically-shaped and scaled distribution for each group
Apart from that, what are the general considerations when applying the Kruskal-Wallis test? Should I check homoscedasticity with the Fligner-Killeen test, and what should I do in case of small $ n $ or unequal group sizes?
There are `exactRankTests::wilcox.exact` and `PASWR::wilcoxE.test` functions in R, but I can't seem to find an analogous one for Kruskal-Wallis...
Advice, anyone?
| Kruskal-Wallis test data considerations | CC BY-SA 2.5 | null | 2011-03-27T19:12:32.223 | 2011-03-28T02:25:16.240 | 2011-03-27T21:11:14.717 | 1356 | 1356 | [
"r",
"nonparametric",
"kruskal-wallis-test"
] |
8847 | 2 | null | 8807 | 11 | null | [http://robjhyndman.com/researchtips/crossvalidation/](http://robjhyndman.com/researchtips/crossvalidation/) contains a quick tip for cross validation of time series. Regarding using random forest for time series data....not sure although it seems like an odd choice given that the model is fitted using bootstrap samples. There are classic time series methods of course (e.g. ARIMA) that can be used, as can ML techniques like Neural Nets (example [example pdf](http://www.neural-forecasting.com/Downloads/EVIC05_tutorial%20EVIC%2705%20Slides%20-%20Forecasting%20with%20Neural%20Networks%20Tutorial%20SFCrone.pdf)). Perhaps some of the time series experts can comment on how well ML techniques work compared to time series specific algorithms.
| null | CC BY-SA 2.5 | null | 2011-03-27T19:20:57.207 | 2011-03-27T19:20:57.207 | null | null | 2040 | null |
8849 | 2 | null | 8846 | 0 | null |
- You need not check homoscedasticity. Kruskal and Wallis stated in their original paper that the “test may be fairly insensitive to differences in variability”.
- If there is no exact test available, you can use bootstrap.
| null | CC BY-SA 2.5 | null | 2011-03-27T19:34:23.447 | 2011-03-27T23:12:57.357 | 2011-03-27T23:12:57.357 | 3911 | 3911 | null |
8851 | 2 | null | 8664 | 1 | null | When you include subject as a random effect in ANOVA you assume that the subject effect and the product effect are additive. Have you thought about negative covariances? Maybe the more one likes red socks the less they like green ones...
| null | CC BY-SA 2.5 | null | 2011-03-27T20:16:46.323 | 2011-03-27T20:16:46.323 | null | null | 3911 | null |
8852 | 2 | null | 4150 | 1 | null | Write down the complete likelihood, take the derivative and do a gradient based optimization.
You can do this online very easily (that is, process one point after the other) and this might result in far faster convergence than EM if the redundancy in your data is high.
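For a two-component Gaussian mixture, a minimal batch sketch of this approach in R using `optim()` (the reparameterization, an illustrative choice, keeps the weight in $(0,1)$ and the standard deviations positive):
```
set.seed(1)
x <- c(rnorm(150, 0, 1), rnorm(50, 4, 1))      # made-up data from a 2-component mixture
negll <- function(p) {
  w <- plogis(p[1])                            # mixing weight, mapped into (0, 1)
  -sum(log(w       * dnorm(x, p[2], exp(p[4])) +
           (1 - w) * dnorm(x, p[3], exp(p[5]))))
}
fit <- optim(c(0, -1, 3, 0, 0), negll, method = "BFGS")
c(weight = plogis(fit$par[1]), mu1 = fit$par[2], mu2 = fit$par[3])
```
An online (one point at a time) version would instead take a stochastic gradient step on the per-observation negative log-likelihood.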
| null | CC BY-SA 2.5 | null | 2011-03-27T20:36:29.907 | 2011-03-27T20:36:29.907 | null | null | 2860 | null |
8853 | 2 | null | 8653 | 2 | null | I understand you have
- many brain imaging datasets
- classified into 2 groups, study and control
- image processing methods
- parameters of the image processing to tune
- a collection of processed images with various parameter settings
and that you will
- run a new study recruiting similar subjects and controls
- pick a single pixel of each dataset
- compare pixels of study subjects with controls
- use a two-sample t-test
and you want
- a sample size formula that includes the parameters of image processing.
I believe you need to explore and understand how the difference between groups and the standard deviation depend on the parameters of image processing. In a second step you can understand how the required sample size depends on the parameters. (You mentioned a log-log plot: the relationship may only be linear after double log transformation if very special conditions are fulfilled; however, a linear approximation may be satisfactory in the parameter range you find practical.)
I suggest to perform visualization of the dependence of effect size and SD on the parameters of image processing, and other explorative statistics. After these you can set up a model that predicts the inputs of the sample size formula using the image processing parameters.
You may find that even your hundreds of parameter settings already evaluated do not give sufficient insight (especially if there are many parameters), in which case you may need to evaluate further parameter combinations. Most image processing methods may be automated, automation saves a lot of time when tweaking the parameters.
| null | CC BY-SA 2.5 | null | 2011-03-27T20:51:45.013 | 2011-03-27T20:51:45.013 | null | null | 3911 | null |
8854 | 1 | 8857 | null | 2 | 6453 | I'm interested in determining whether two or more groups of data share the same mean, and it seems like the ANOVA framework is a good way to approach this. However, ANOVA assumes residuals are normally distributed, while each of my data points is a number between 0 and 100 (a percentage). Because normal distributions have support on the whole real line, my data cannot adhere to the normality assumption.
My question: Is there an ANOVA-like test which is suited to predicting percentages (i.e., a variable that is bounded)?
| ANOVA-like test for bounded variables (percentages) | CC BY-SA 3.0 | null | 2011-03-27T21:15:26.893 | 2012-06-25T06:16:16.217 | 2012-06-25T06:16:16.217 | 183 | 3921 | [
"hypothesis-testing",
"anova"
] |
8856 | 2 | null | 8633 | 2 | null | Your covariate is not only different across subjects, but also changes across the multiple measurements on the same subject. This has implications for the study design and the analysis method as well.
(I'm not sure if you really meant the effect of the covariate in your second sentence.)
Study design: if your multiple measurements on the same subject are of various natures it's important that you run the measurements in various orders so as to be able to separate the effect of increasing fatigue from the effect of the natures of measurement.
Analysis: some statistical software require the dataset of repeated measure ANCOVA be formatted in the wide format, some other software require the long format ([example for wide/long](http://www.ats.ucla.edu/stat/Spss/modules/reshapel115.htm), [permalink](http://www.webcitation.org/5xVQrvOJ9)). Only in the long format will you be able to specify multiple RTs per subject. You will need a statistical software that supports the long format for repeated measures ANCOVA.
| null | CC BY-SA 2.5 | null | 2011-03-27T21:28:27.603 | 2011-03-27T21:28:27.603 | null | null | 3911 | null |
8857 | 2 | null | 8854 | 4 | null | Is the percentage the most raw data you have, or did you compute the percentage from some sort of binomial count data? If the latter, then you should submit the raw 1s and 0s to a logistic regression. In R, check out glm:
```
glm(
response ~ group
, family = binomial
)
```
If each individual of each group contributes multiple 1s and 0s, then use lmer from the lme4 package to model the individuals as random effects:
```
lmer(
response ~ (1|individual) + group
, family = binomial
)
```
If all you have are the percentages, then maybe consider either bootstrapping confidence interval on the group means and differences, or employ a permutation test for differences.
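A sketch of the permutation-test route in R, with made-up percentage data in `pct` and group labels in `group`:
```
set.seed(1)
pct   <- c(rbeta(15, 2, 5), rbeta(15, 3, 4)) * 100   # made-up percentages
group <- rep(c("A", "B"), each = 15)
obs <- diff(tapply(pct, group, mean))                # observed difference in group means
perm <- replicate(5000, {
  diff(tapply(pct, sample(group), mean))             # shuffle labels under H0
})
mean(abs(perm) >= abs(obs))                          # two-sided permutation p-value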
| null | CC BY-SA 2.5 | null | 2011-03-27T21:36:20.393 | 2011-03-27T21:36:20.393 | null | null | 364 | null |
8858 | 2 | null | 8854 | 2 | null | Percentage values may have normal distributions, e.g. cholesterol levels across humans are approximately normally distributed, and they remain normal even if expressed as a percentage of the maximal cholesterol level seen. In such cases you need not worry about the fact that your data cover only a narrow interval.
However, there may be a very different mechanism which leads to your percentage. Often percentages summarize many binary measurements on the same subject (e.g. proportion of malignant cells in the tissue sample from the subject). In such cases it's best to use a model that takes the appropriate distribution into consideration (e.g. binomial distribution). To get more specific advice on such a model, tell more about your percentage values!
Other generic methods include [transforming](http://en.wikipedia.org/wiki/Data_transformation_%28statistics%29) the percentage variable, or using a non-parametric test (e.g. [Kruskal-Wallis](http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_one-way_analysis_of_variance)).
| null | CC BY-SA 2.5 | null | 2011-03-27T21:56:14.757 | 2011-03-27T21:56:14.757 | null | null | 3911 | null |
8859 | 1 | null | null | 0 | 2322 | I am currently implementing a text classification program with Naive Bayes. I produce two multinomial models in my training function, p(w|nonSPAM) and p(w|SPAM), as well as a prior probability P(S).
In my testing function I go through each test document, and for each test document, I go through all the terms and compute logP(nonSPAM|D) and logP(SPAM|D). Then I make a classification decision by comparing these two quantities (SPAM = 0 or nonSPAM =1).
My problem is: I want to return a score (e.g. 0.52121) rather than a hard classification (0 or 1), so I can use different thresholds in my program.
Is it possible to calculate a score only by using logP(nonSPAM|D) and logP(SPAM|D) and prior probability P(S)?
I asked a similar question [here](https://stackoverflow.com/questions/5451004/log-likelihood-to-implement-naive-bayes-for-text-classification?tab=votes#tab-top) but my current question is more related with statistics.
| How to convert log likelihoods into scores in Naive Bayes? | CC BY-SA 2.5 | null | 2011-03-27T22:13:20.827 | 2011-04-08T20:43:40.633 | 2017-05-23T12:39:26.167 | -1 | null | [
"machine-learning",
"maximum-likelihood",
"naive-bayes"
] |
8860 | 2 | null | 8844 | 7 | null | Neyman's confidence intervals make no attempt to provide coverage of the parameter in the case of any particular interval. Instead they provide coverage over all possible parameter values in the long run. In a sense they attempt to be globally accurate at the expense of local accuracy.
Confidence intervals for binomial proportions offer a clear illustration of this issue. Neymanian assessment of intervals yields irregular coverage plots like this one, which is for 95% Clopper-Pearson intervals for n=10 binomial trials:

There is an alternative way to do coverage, one that I personally think is much more intuitively approachable and (thus) useful. The coverage by intervals can be specified conditional on the observed result. That coverage would be local coverage. Here is a plot showing local coverage for three different methods of calculating confidence intervals for binomial proportions: Clopper-Pearson, Wilson's scores, and a conditional exact method that yields intervals identical to Bayesian intervals with a uniform prior:

Notice that the 95% Clopper-Pearson method gives over 98% local coverage but the exact conditional intervals are, well, exact.
A way to think of the difference between the global and local intervals is to consider the global to be inversions of Neyman-Pearson hypothesis tests where the outcome is a decision that is made on the basis of consideration of long-term error rates for the current experiment as a member of the global set of all experiments that might be run. The local intervals are more akin to inversion of Fisherian significance tests which yield a P value representing evidence against the null from this particular experiment.
(As far as I know, the distinction between global and local statistics was first made in an unpublished Master’s thesis by Claire F Leslie (1998) Lack of confidence : a study of the suppression of certain counter-examples to the Neyman-Pearson theory of statistical inference with particular reference to the theory of confidence intervals. That thesis is held by the Baillieu library at The University of Melbourne.)
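The first (global coverage) plot can be reproduced with a short R sketch: `binom.test()` returns Clopper-Pearson intervals, and the coverage at each true $p$ is the probability mass of the outcomes whose interval contains $p$:
```
n <- 10
p.grid <- seq(0.01, 0.99, by = 0.01)
coverage <- sapply(p.grid, function(p) {
  ci <- sapply(0:n, function(k) binom.test(k, n)$conf.int)  # 95% CP interval per outcome
  covered <- ci[1, ] <= p & p <= ci[2, ]
  sum(dbinom(0:n, n, p)[covered])      # mass of outcomes whose interval covers p
})
plot(p.grid, coverage, type = "l", ylim = c(0.85, 1),
     xlab = "true proportion", ylab = "coverage")
abline(h = 0.95, lty = 2)              # nominal 95% level
```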
| null | CC BY-SA 3.0 | null | 2011-03-27T22:45:57.677 | 2011-12-12T13:59:11.507 | 2011-12-12T13:59:11.507 | 1036 | 1679 | null |
8861 | 2 | null | 8845 | 7 | null | This is shorthand notation for a "differential" of the mean and variance parameters. The longhand version goes:
$$p(\mu\in[\mu_1,\mu_1+d\mu_1)|I)\propto d\mu_1$$
This indicates a uniform probability with respect to $\mu$. A more familiar notation is:
$$p(\mu|I)\propto 1$$
It comes from the "proper" derivation of a PDF from a CDF.
$$\lim_{dy\rightarrow 0}\frac{P(Y\in[y,y+dy))}{dy}=f(y)$$
EDIT: I initially wrote this answer in a hasty fashion, and so had a bit of unclear notation myself. In my example, I only had a one-dimensional variable $\mu_1$, and all of the above relate to a one-dimensional random variable. I think the statistical physics literature ("maxent" people) uses this notation (but I'm not entirely sure) - Edwin Jaynes, Larry Bretthorst, Stephen Gull, and others. I've never seen it explained in any more detail than what I have given.
And second, $I$ stands for "prior information", not an identity matrix. It is just a good habit to express $I$ explicitly as part of your assumptions, so that you don't forget that 1) they are there, and 2) your answer depends on the prior information just as much as it depends on the data.
| null | CC BY-SA 2.5 | null | 2011-03-27T22:58:10.103 | 2011-03-28T09:27:14.167 | 2011-03-28T09:27:14.167 | 2392 | 2392 | null |
8862 | 2 | null | 8818 | 3 | null | A mixture of Gaussians algorithm is a probabilistic generalization of the $k$-means algorithm. Each mean vector in $k$-means is a component. The number of elements in each of the $k$ vectors is the dimension of the model. Thus, if you have $n$ dimensions, you have a $k\times n$ matrix of mean vectors.
It is no different in a mixture of Gaussians except that now you have to deal with covariance matrices in your model.
| null | CC BY-SA 2.5 | null | 2011-03-28T02:16:21.630 | 2011-03-28T19:37:08.190 | 2011-03-28T19:37:08.190 | 2660 | 2660 | null |
8863 | 2 | null | 8846 | 9 | null | With small, and possibly unequal group sizes, I'd go with chl's and onestop's suggestion and do a Monte-Carlo permutation test. For the permutation test to be valid, you need exchangeability under $H_{0}$. If all distributions have the same shape (and are therefore identical under $H_{0}$), this is true.
Here's a first try at looking at the case of 3 groups and no ties. First, let's compare the asymptotic $\chi^{2}$ distribution function against a MC-permutation one for given group sizes (this implementation will break for larger group sizes).
```
P <- 3 # number of groups
Nj <- c(4, 8, 6) # group sizes
N <- sum(Nj) # total number of subjects
IV <- factor(rep(1:P, Nj)) # grouping factor
alpha <- 0.05 # alpha-level
# there are N! permutations of ranks within the total sample, but we only want 5000
nPerms <- min(factorial(N), 5000)
# random sample of all N! permutations
# sample(1:factorial(N), nPerms) doesn't work for N! >= .Machine$integer.max
permIdx <- unique(round(runif(nPerms) * (factorial(N)-1)))
nPerms <- length(permIdx)
H <- numeric(nPerms) # vector to later contain the test statistics
# function to calculate test statistic from a given rank permutation
getH <- function(ranks) {
Rj <- tapply(ranks, IV, sum)
(12 / (N*(N+1))) * sum((1/Nj) * (Rj-(Nj*(N+1) / 2))^2)
}
# all test statistics for the random sample of rank permutations (breaks for larger N)
# numperm() internally orders all N! permutations and returns the one with a desired index
library(sna) # for numperm()
for(i in seq(along=permIdx)) { H[i] <- getH(numperm(N, permIdx[i]-1)) }
# cumulative relative frequencies of test statistic from random permutations
pKWH <- cumsum(table(round(H, 4)) / nPerms)
qPerm <- quantile(H, probs=1-alpha) # critical value for level alpha from permutations
qAsymp <- qchisq(1-alpha, P-1) # critical value for level alpha from chi^2
# illustration of cumRelFreq vs. chi^2 distribution function and resp. critical values
plot(names(pKWH), pKWH, main="Kruskal-Wallis: permutation vs. asymptotic",
type="n", xlab="h", ylab="P(H <= h)", cex.lab=1.4)
points(names(pKWH), pKWH, pch=16, col="red")
curve(pchisq(x, P-1), lwd=2, n=200, add=TRUE)
abline(h=0.95, col="blue") # level alpha
abline(v=c(qPerm, qAsymp), col=c("red", "black")) # critical values
legend(x="bottomright", legend=c("permutation", "asymptotic"),
pch=c(16, NA), col=c("red", "black"), lty=c(NA, 1), lwd=c(NA, 2))
```

Now for an actual MC-permutation test. This compares the asymptotic $\chi^{2}$-derived p-value with the result from `coin`'s `oneway_test()` and the cumulative relative frequency distribution from the MC-permutation sample above.
```
> DV1 <- round(rnorm(Nj[1], 100, 15), 2) # data group 1
> DV2 <- round(rnorm(Nj[2], 110, 15), 2) # data group 2
> DV3 <- round(rnorm(Nj[3], 120, 15), 2) # data group 3
> DV <- c(DV1, DV2, DV3) # all data
> kruskal.test(DV ~ IV) # asymptotic p-value
Kruskal-Wallis rank sum test
data: DV by IV
Kruskal-Wallis chi-squared = 7.6506, df = 2, p-value = 0.02181
> library(coin) # for oneway_test()
> oneway_test(DV ~ IV, distribution=approximate(B=9999))
Approximative K-Sample Permutation Test
data: DV by IV (1, 2, 3)
maxT = 2.5463, p-value = 0.0191
> Hobs <- getH(rank(DV)) # observed test statistic
# proportion of test statistics at least as extreme as observed one (+1)
> (pPerm <- (sum(H >= Hobs) + 1) / (length(H) + 1))
[1] 0.0139972
```
| null | CC BY-SA 2.5 | null | 2011-03-28T02:17:44.203 | 2011-03-28T02:25:16.240 | 2011-03-28T02:25:16.240 | 1909 | 1909 | null |
8864 | 1 | null | null | 5 | 743 | I'm new to predictive models and I have a problem at hand that I need some advice with. Basically, for a clinical application we want to predict the outcome of a rating scale with a model built on top of outcomes of our new measurement device. My dependent variable, a clinical rating scale, is an integer between 0 and 10 (inclusive). Unfortunately I don't have a large sample ($n \approx 100$) and I have a lot of features to select from ($p \approx 120$). Also, many of these features are correlated. Nearly all of the features are continuous variables. I have a separate sample for validation ($ n \approx 40$). There are several issues I'd like to have your advice about:
- Should I go for regression or tree based methods?
- Should I try ensemble learning methods or I'd better stick with a single model? Which methods should I try and why?
- If it's better to go for a single model, how should I handle the model selection problem? Should I e.g. limit the number of predictors and go for methods like LEAPS with AIC or should I go for methods like LASSO?
- If ensemble methods are suggested, which methods can handle cases with small $n$ and large $p$ better?
- Discussing selected/influential features is important for me. Depending on the answers to previous questions, how should I go about it?
I have some understanding of regression modeling and model selection problems. I have used the bestglm package in the past. Currently I'm looking at the caret package, as it brings a large number of methods under the same interface. References get technical about the details of the models, but so far I haven't found a good one that goes over the practical issues for problems with small n and big p. I appreciate your suggestions and help.
Thanks,
AlefSin
| Looking for ideas to build a predictive model | CC BY-SA 2.5 | null | 2011-03-28T02:44:24.290 | 2011-03-29T19:28:12.003 | null | null | 2020 | [
"r",
"regression",
"predictive-models"
] |
8866 | 2 | null | 8405 | 1 | null | As an alternative to the `Hmisc` package, you can use `table` or `xtabs`:
```
table(cut(X$Age, c(0, 27, 37, 47, 999)), X$Outcome)
xtabs(~ cut(Age, c(0, 27, 37, 47, 999)) + Outcome, data=X)
```
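For instance, with a small made-up data frame (the column names `Age` and `Outcome` are assumed from the question; the values here are invented for illustration):

```
X <- data.frame(Age = c(21, 30, 35, 42, 55, 61),
                Outcome = c("A", "B", "A", "B", "A", "B"))
table(cut(X$Age, c(0, 27, 37, 47, 999)), X$Outcome)          # 4 x 2 table
xtabs(~ cut(Age, c(0, 27, 37, 47, 999)) + Outcome, data = X) # same table
```

Both calls cross-tabulate the age bins `(0,27]`, `(27,37]`, `(37,47]`, `(47,999]` against `Outcome`.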
| null | CC BY-SA 2.5 | null | 2011-03-28T06:07:52.767 | 2011-03-28T06:07:52.767 | null | null | 1569 | null |
8867 | 1 | null | null | 21 | 771 | Given the following hierarchical model,
$$
X \sim {\mathcal N}(\mu,1),
$$
and,
$$
\mu \sim {\rm Laplace}(0, c)
$$
where $\mathcal{N}(\cdot,\cdot)$ is a normal distribution. Is there a way to get an exact expression for the Fisher information of the marginal distribution of $X$ given $c$? That is, what is the Fisher information of:
$$
p(x | c) = \int p(x|\mu) p(\mu|c) d\mu
$$
I can get an expression for the marginal distribution of $X$ given $c$, but differentiating w.r.t. $c$ and then taking expectations seems very difficult. Am I missing something obvious? Any help would be appreciated.
| Fisher information in a hierarchical model | CC BY-SA 3.0 | null | 2011-03-28T06:33:55.567 | 2020-08-07T16:47:16.740 | 2020-08-07T16:47:16.740 | 7290 | 530 | [
"multilevel-analysis",
"fisher-information"
] |
8868 | 1 | 8870 | null | 28 | 59160 | When doing time series research in R, I found that `arima` provides only the coefficient values and their standard errors for the fitted model. However, I also want to get the p-values of the coefficients.
I did not find any function that provides the significance of the coefficients.
So I wish to calculate it by myself, but I don't know the degrees of freedom of the t or chi-squared distribution of the coefficients. So my question is: how do I get the p-values for the coefficients of a fitted ARIMA model in R?
| How to calculate the p-value of parameters for ARIMA model in R? | CC BY-SA 2.5 | null | 2011-03-28T09:19:01.760 | 2020-09-27T06:24:55.793 | 2020-09-27T06:24:55.793 | 7290 | 3926 | [
"r",
"time-series",
"chi-squared-test",
"arima"
] |
8869 | 1 | 8871 | null | 6 | 858 | After learning how LSA works, I went on to read about pLSA, but couldn't really make sense of the mathematical formula. This is what I get from [wikipedia](http://en.wikipedia.org/wiki/Probabilistic_latent_semantic_analysis) (other academic papers/tutorials show a similar form)
\begin{align}
P(w,d) & = \sum_{c} P(c) P(d|c) P(w|c)\\
& = P(d) \sum_{c} P(c|d) P(w|c)\\
\end{align}
I gave up trying to derive it, and found [this](http://www.hongliangjie.com/2010/01/04/notes-on-probabilistic-latent-semantic-analysis-plsa/) instead
\begin{align}
P(c|d) & = \frac{P(d|c)P(c)}{P(d)}\\
P(c|d)P(d) & = P(d|c)P(c)\\
P(w|c)P(c|d)P(d) & = P(w|c)P(d|c)P(c)\\
P(d) \sum_{c} P(w|c)P(c|d) & = \sum_{c} P(w|c)P(d|c)P(c)
\end{align}
How does the summation appear in the last line? I am currently reading through some tutorials on [Bayesian inference](http://en.wikipedia.org/wiki/Bayesian_inference) (I learnt the basic probability rules and Bayes' theorem before, but can't really see how they are useful here).
| Deriving mathematical model of pLSA | CC BY-SA 3.0 | null | 2011-03-28T09:21:13.490 | 2015-09-25T06:48:48.613 | 2015-09-25T06:48:48.613 | 3837 | 3837 | [
"machine-learning",
"probability",
"bayesian",
"multilevel-analysis",
"latent-semantic-analysis"
] |
8870 | 2 | null | 8868 | 5 | null | The "t value" is the ratio of the coefficient to the standard error. The degrees of freedom (ndf) would be the number of observations minus the max order of difference in the model minus the number of estimated coefficients. The "F value" would be the square of the "t value". In order to compute the probability exactly, you would have to call a non-central chi-square function and pass in the F value and the degrees of freedom (1, ndf), or perhaps simply call an F-distribution lookup.
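As a sketch of that computation in R (using the `lh` series from the `arima` help page as a stand-in, with no differencing so the order of difference is 0, and a two-sided t lookup rather than the F-table route):

```
fit  <- arima(lh, order = c(1, 0, 0))
tval <- fit$coef / sqrt(diag(fit$var.coef))   # "t values": coefficient / s.e.
ndf  <- length(lh) - 0 - length(fit$coef)     # observations - differences - coefficients
2 * pt(-abs(tval), df = ndf)                  # two-sided p-values
```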
| null | CC BY-SA 2.5 | null | 2011-03-28T09:36:22.263 | 2011-03-28T09:36:22.263 | null | null | 3382 | null |
8871 | 2 | null | 8869 | 6 | null | I am assuming you want to derive:
\begin{align*}
P(w,d) = \sum_{c} P(c) P(d|c) P(w|c)
&= P(d) \sum_{c} P(c|d) P(w|c)
\end{align*}
Further, this is similar to Probabilistic latent semantic indexing (cf. Blei, Jordan, and Ng (2003) Latent Dirichlet Allocation. JMLR section 4.3). PLSI posits that a document label $d$ and a word $w$ are conditionally independent given an unobserved topic $z$.
If this is true, your formula is a simple consequence of [Bayes theorem](http://en.wikipedia.org/wiki/Bayes%27_theorem). Here are the steps:
\begin{align*}
P(w, d) &= \displaystyle \sum_z P(w, z, d)\\
& = \displaystyle \sum_z P(w, d | z) p(z)\\
&= \displaystyle \sum_z P(w | z) p(d|z) p(z),
\end{align*}
where factorization into products is because of conditional independence.
Now use Bayes theorem again to get
\begin{align}
\displaystyle \sum_z P(w | z) p(d|z) p(z) &= \displaystyle \sum_z P(w | z) p(z,d)\\
&= \displaystyle \sum_z P(w | z) p(z|d)p(d)\\
&= p(d)\displaystyle \sum_z P(w | z) p(z|d)
\end{align}
| null | CC BY-SA 2.5 | null | 2011-03-28T11:24:36.273 | 2011-03-28T11:53:06.290 | 2011-03-28T11:53:06.290 | 2116 | 1307 | null |
8872 | 2 | null | 8868 | 27 | null | Since `arima` uses maximum likelihood for estimation, the coefficients are asymptotically normal. Hence divide the coefficients by their standard errors to get the z-statistics and then calculate the p-values. Here is an example in R, using the first example from the `arima` help page:
```
> aa <- arima(lh, order = c(1,0,0))
> aa
Call:
arima(x = lh, order = c(1, 0, 0))
Coefficients:
ar1 intercept
0.5739 2.4133
s.e. 0.1161 0.1466
sigma^2 estimated as 0.1975: log likelihood = -29.38, aic = 64.76
> (1-pnorm(abs(aa$coef)/sqrt(diag(aa$var.coef))))*2
ar1 intercept
1.935776e-07 0.000000e+00
```
The last line gives the p-values.
| null | CC BY-SA 3.0 | null | 2011-03-28T11:41:48.280 | 2013-05-06T10:22:17.483 | 2013-05-06T10:22:17.483 | 2116 | 2116 | null |
8873 | 1 | null | null | 3 | 600 | How do I calculate the sample size needed for a GWAS for a given MAF, power, $p$-value and frequency of the disease?
| Sample size in genome-wide studies | CC BY-SA 2.5 | null | 2011-03-28T13:24:08.807 | 2011-03-28T20:05:23.343 | 2011-03-28T14:17:07.833 | 930 | 3870 | [
"genetics",
"statistical-power"
] |
8874 | 2 | null | 8797 | 4 | null | Based on my calculations, it seems that you had about 16 or 17 participants. The typical methods for calculating confidence intervals of means assume that sample means are normally distributed. In the case of very skewed distributions, that assumption is only valid for large samples, which rules of thumb define as at least 20 or 30 (depending on whose thumb you talk to).
Also, your data are ordinal data but not necessarily interval data; the difference between a 3 and a 4 is not necessarily the same as the difference between a 1 and a 2. This also makes the typical methods less valid.
If you want to develop some quantitative measure of variability, I suggest that you use a binomial test to estimate a confidence interval of the median. But I probably wouldn't even do that; the exact numbers aren't particularly meaningful unless you have a random sample and you've tested the questionnaire for validity and reliability or if you're comparing it to a similar question or by some blocking factor.
Considering all of this, I don't trust statistics to be particularly meaningful on individual questions from questionnaires like this. When I run questionnaires like this, I generally just plot histograms. I think they tell you more than the numbers.
| null | CC BY-SA 2.5 | null | 2011-03-28T13:54:58.437 | 2011-03-28T13:54:58.437 | null | null | 3874 | null |
8875 | 2 | null | 8869 | 3 | null | The line $P(c|d)P(c) = P(d|c)P(c)$ (your eq 2) should be $P(c|d)P(d) = P(d|c)P(c)$.
I'm not sure why you don't think Bayes theorem and basic probability rules are useful:
Eq 1 is Bayes theorem (ie recognizing that $P(d|c)P(c) = P(c,d)$ and plugging in to the definition of conditional probability)
Eq 2 follows immediately from eq 1
Eq 3 is just eq 2 multiplied through by $P(w|c)$.
Since eq 3 holds for all $c$ the sums are equal. Then since $w$ is independent of $d$ given $c$ (an assumption from the model), $P(w|c)P(c|d) = P(w|c, d)P(c|d) = P(w,c|d)$ and so $\sum_c\ P(w,c|d) = P(w|d)$ by the law of total probability, giving you $P(w|d)P(d)$.
Finally, $P(w|d)P(d)=P(w,d)$ from the definition of conditional probability.
So basic probability is in fact both necessary and sufficient for the derivation!
| null | CC BY-SA 2.5 | null | 2011-03-28T13:56:59.857 | 2011-03-28T13:56:59.857 | null | null | 26 | null |
8876 | 2 | null | 8572 | 19 | null | So the simple answer is yes: Metropolis-Hastings and its special case, Gibbs sampling :) They are general and powerful; whether or not they scale depends on the problem at hand.
I'm not sure why you think sampling an arbitrary discrete distribution is more difficult than an arbitrary continuous distribution. If you can calculate the discrete distribution and the sample space isn't huge then it's much, much easier (unless the continuous distribution is standard, perhaps). Calculate the likelihood $f(k)$ for each category, then normalise to get the probabilities $P(\tilde k = k) = f(k)/\sum f(k)$ and use inverse transform sampling (imposing an arbitrary order on $k$).
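A minimal sketch of that recipe in R (the likelihood values `f` below are made up):

```
f <- c(0.2, 1.5, 0.8, 0.5)          # unnormalised f(k) over four categories
p <- f / sum(f)                     # normalise to probabilities
u <- runif(1)                       # inverse transform: invert the CDF at u
k <- findInterval(u, cumsum(p)) + 1
# or equivalently, let R do the discrete sampling directly:
k2 <- sample(seq_along(p), size = 1, prob = p)
```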
Have you got a particular model in mind? There are all sorts of MCMC approaches to fitting mixture models, for example, where the latent component assignments are discrete parameters. These range from very simple (Gibbs) to quite complex.
How big is the parameter space? Is it potentially enormous (eg in the mixture model case, it's N by the number of mixture components)? You might not need anything more than a Gibbs sampler, since conjugacy is no longer an issue (you can get the normalizing constant directly so you can compute the full conditionals). In fact griddy Gibbs used to be popular for these cases, where a continuous prior is discretized to ease computation.
I don't think there is a particular "best" for all problems having a discrete parameter space any more than there is for the continuous case. But if you tell us more about the models you're interested in perhaps we can make some recommendations.
Edit: OK, I can give a little more information in re: your examples.
Your first example has a pretty long history, as you might imagine. A recent-ish review is in [1]; see also [2]. I'll try to give some details here: A relevant example is stochastic search variable selection. The initial formulation was to use absolutely continuous priors like $p(\beta)\sim \pi N(\beta; 0, \tau) + (1-\pi) N(\beta, 0, 1000\tau)$. That actually turns out to work poorly compared to priors like $p(\beta)\sim \pi \delta_0 (\beta) + (1-\pi) N(\beta, 0, \tau)$ where $\delta_0$ is a point mass at 0. Note that both fit into your original formulation; an MCMC approach would usually proceed by augmenting $\beta$ with a (discrete) model indicator (say $Z$). This is equivalent to a model index; if you have $Z_1\dots, Z_p$ then obviously you can remap the $2^p$ possible configurations to numbers in $1:2^p$.
So how can you improve the MCMC? In a lot of these models you can sample from $p(Z, \beta|y)$ by composition, ie using that $p(Z, \beta|y) = p(\beta | Y, Z)p(Z|Y)$. Block updates like this can tremendously improve mixing since the correlation between $Z$ and $\beta$ is now irrelevant to the sampler.
SSVS embeds the whole model space in one big model. Often this is easy to implement but works poorly. Reversible jump MCMC is a different kind of approach which lets the dimension of the parameter space vary explicitly; see [3] for a review and some practical notes. You can find more detailed notes on implementation in different models in the literature, I'm sure.
Oftentimes a complete MCMC approach is infeasible; say you have a linear regression with $p=1000$ variables and you're using an approach like SSVS. You can't hope for your sampler to converge; there's not enough time or computing power to visit all those model configurations, and you're especially hosed if some of your variables are even moderately correlated. You should be especially skeptical of people trying to estimate things like variable inclusion probabilities in this way. Various stochastic search algorithms used in conjunction with MCMC have been proposed for such cases. One example is BAS [4], another is in [5] (Sylvia Richardson has other relevant work too); most of the others I'm aware of are geared toward a particular model.
A different approach which is gaining in popularity is to use absolutely continuous shrinkage priors that mimic model averaged results. Typically these are formulated as scale mixtures of normals. The Bayesian lasso is one example, which is a special case of normal-gamma priors and a limiting case of normal-exponential-gamma priors. Other choices include the horseshoe and the general class of normal distributions with inverted beta priors on their variance. For more on these, I'd suggest starting with [6] and walking back through the references (too many for me to replicate here :) )
I'll add more about outlier models later if I get a chance; the classic reference is [7]. They're very similar in spirit to shrinkage priors. Usually they're pretty easy to do with Gibbs sampling.
Perhaps not as practical as you were hoping for; model selection in particular is a hard problem and the more elaborate the model the worse it gets. Block updates wherever possible are the only piece of general advice I have. When sampling from a mixture of distributions, you will often have the problem that membership indicators and component parameters are highly correlated. I also haven't touched on label switching issues (or lack of label switching); there is quite a bit of literature there but it's a little out of my wheelhouse.
Anyway, I think it's useful to start with some of the references here, to get a feeling for the different ways that others are approaching similar problems.
[1] Merlise Clyde and E. I. George. Model Uncertainty Statistical Science 19 (2004): 81--94.
[http://www.isds.duke.edu/~clyde/papers/statsci.pdf](http://www.isds.duke.edu/~clyde/papers/statsci.pdf)
[2] [http://www-personal.umich.edu/~bnyhan/montgomery-nyhan-bma.pdf](http://www-personal.umich.edu/~bnyhan/montgomery-nyhan-bma.pdf)
[3] Green & Hastie Reversible jump MCMC (2009)
[http://www.stats.bris.ac.uk/~mapjg/papers/rjmcmc_20090613.pdf](http://www.stats.bris.ac.uk/~mapjg/papers/rjmcmc_20090613.pdf)
[4] [http://www.stat.duke.edu/~clyde/BAS/](http://www.stat.duke.edu/~clyde/BAS/)
[5] [http://ba.stat.cmu.edu/journal/2010/vol05/issue03/bottolo.pdf](http://ba.stat.cmu.edu/journal/2010/vol05/issue03/bottolo.pdf)
[6] [http://www.uv.es/bernardo/Polson.pdf](http://www.uv.es/bernardo/Polson.pdf)
[7] Mike West Outlier models and prior distributions in Bayesian linear regression (1984) JRSS-B
| null | CC BY-SA 2.5 | null | 2011-03-28T14:24:43.273 | 2011-03-31T16:45:44.557 | 2011-03-31T16:45:44.557 | 26 | 26 | null |
8877 | 1 | null | null | 7 | 2928 | My [struggle](https://stats.stackexchange.com/questions/8846/kruskal-wallis-test-data-considerations) with non-parametric methods continues... I'd like to apply a median polish instead of two-way ANOVA (normality and homoscedasticity assumptions are violated, and $ n_{ij} $ are small, so I can't use the CLT as an excuse). I've never used median polish so far, and our course in statistics taught us to worship ANOVA and forget about robust methods if basic assumptions are not met. I saw [this post](https://stats.stackexchange.com/questions/3634/multi-way-nonparametric-anova) and it seems that median polish can be applied for a two-way factorial design. Which technique do you find appropriate in case of violation of the ANOVA assumptions?
Now, what are the basic data considerations for median polish (or any other technique you find appropriate in this case)? Same shape, homoscedasticity? Any resource (link/reference) is appreciated.
---
P.S.
Note that I'm aware of `medpolish` function in R.
| Two-way robust ANOVA | CC BY-SA 2.5 | null | 2011-03-28T15:37:13.107 | 2011-03-28T22:54:48.707 | 2017-04-13T12:44:33.310 | -1 | 1356 | [
"r",
"anova",
"nonparametric",
"median",
"robust"
] |
8878 | 2 | null | 8877 | 2 | null | How is normality violated? Medians are more sensitive to skew than means as n gets low. Be careful of that. It would be very problematic if small n's varied in a systematic way.
How much is homoscedasticity violated? If the n's are about equal it won't matter much, even for quite large differences.
| null | CC BY-SA 2.5 | null | 2011-03-28T16:13:24.807 | 2011-03-28T16:13:24.807 | null | null | 601 | null |
8880 | 2 | null | 8808 | 3 | null | The question is not totally clear, but I am going to assume that you have an improper prior for $\theta$ proportional to $1/\theta$ and $n$ observed points $\{X_i\}$.
A sufficient statistic is $Y=\max_i |X_i|$, and you will have
$$Pr(Y \le y|\theta) = (y/\theta)^n \textrm{ if } 0 \le y \le \theta $$
so with density $p(y|\theta) = ny^{n-1}/\theta^n$.
So using Bayes' theorem you get
$$p(\theta|y) = \frac{ny^{n-1}/\theta^{n+1}}{\int_{\phi=y}^{\infty}ny^{n-1}/\phi^{n+1} \; d\phi} =\frac{n y^{n}}{\theta^{n+1}} \textrm{ if } \theta \ge y.$$
This is a decreasing function of $\theta$ and has an integral $Pr(\theta \le \theta_0|Y=y) = 1-(y/\theta_0)^{n}$.
So there are several credible intervals for $\theta$ given $Y=\max_i |X_i|$. The narrowest (highest density) $1-\alpha$ interval is $$\left[Y, \frac{Y}{\sqrt[n]{\alpha}}\right]$$ while the quantile centred interval is $$\left[\frac{Y}{\sqrt[n]{1-\frac{\alpha}{2}}}, \frac{Y}{\sqrt[n]{\frac{\alpha}{2}}}\right] .$$
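For concreteness, the two intervals can be evaluated directly in R (the numbers below are made up for illustration):

```
n <- 10; alpha <- 0.05
y <- 2.3                                          # suppose the observed max_i |X_i|
c(y, y / alpha^(1/n))                             # narrowest (highest density) 95% interval
c(y / (1 - alpha/2)^(1/n), y / (alpha/2)^(1/n))   # quantile-centred 95% interval
```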
| null | CC BY-SA 2.5 | null | 2011-03-28T18:36:09.553 | 2011-03-29T21:02:58.523 | 2011-03-29T21:02:58.523 | 2958 | 2958 | null |
8881 | 1 | null | null | 1 | 1762 | I have a dataset with the following types of predictors:
- binary (e.g., gender),
- nominal with 3 categories,
- ordinal, and
- continuous
### Question:
What is the best way to set up a regression model that includes these different types of variable?
| How to perform a regression model with a mix of binary, nominal, ordinal, and continuous predictors? | CC BY-SA 2.5 | null | 2011-03-28T18:53:05.850 | 2016-07-24T11:34:16.580 | 2011-04-05T13:28:12.420 | 183 | 3472 | [
"regression"
] |
8882 | 1 | 8888 | null | 5 | 1729 | If one has a dataset with a single outlier, such as in the following graph taken from Vanni-Mercer et al. (2009), is there a statistical test one can use that accounts for the single outlier, rather than having to throw it out or declare significance because of a single data point?
RT is reaction time. Trial rank is essentially the trial number.

| Alternative to Tukey's HSD | CC BY-SA 2.5 | null | 2011-03-28T19:45:31.427 | 2011-03-29T07:25:09.323 | null | null | 2660 | [
"post-hoc"
] |
8883 | 1 | 8921 | null | 4 | 2868 | I have a process which consists of a number of events, and what is known is the timing between the events. What I'm trying to determine is a distribution that allows me to compute the likelihood that a new sample fits the distribution.
The issue is mainly that if you have lots of samples you can approximate the result using a Gaussian with the sample mean and standard deviation. But if you only have a handful of samples, the Gaussian does not accurately represent the situation.
From what I've read it is common to model waiting times using the gamma distribution. Looking at how the process evolves it looks like it matches well. The unknown is the scale parameter, since the shape parameter I think should be the number of samples. What I've worked out so far is that given the timings $X_1 ... X_N$ you can say:
$$ \sum_{n=1}^N X_n \sim \Gamma(N,\theta) $$
($N$ is known and fixed)
However, $\theta$ is unknown, but the maximum likelihood parameter is the average of the $X_i$ (according to wikipedia anyway).
My question is, can I use this to estimate a distribution for $X_i$, that is, since the $X_i$ are independent:
$$ N X_i | \sum_n X_n \sim \Gamma(N, \tfrac{1}{N}\sum_n X_n) $$
Something else I've wondered about. Suppose I do have information about $\theta$, say a distribution. How can I incorporate this into the model?
Edit: Clarified that N is fixed.
| Conditional expection of gamma distribution on sum | CC BY-SA 2.5 | null | 2011-03-28T19:53:17.083 | 2011-03-30T04:22:56.787 | 2011-03-29T16:18:23.670 | 3932 | 3932 | [
"conditional-probability",
"gamma-distribution"
] |
8884 | 1 | 8887 | null | 8 | 2694 | The central limit theorem as I am familiar with it applies to the limiting (rescaled) distribution of $n$ convolutions of a single probability distribution as $n$ goes to infinity, or equivalently, to the distribution one gets from taking a sum of $n$ random variables, each with a single fixed distribution. That is, it is a theorem about the (limiting as $n\to \infty$) probability distribution of
$A_1 + A_2 + ... + A_n$ where each term has a fixed distribution $P$.
I am asking about a theorem about the limiting probability distribution of
$A_1 + A_2 + ... + A_n$
where $A_1$ has probability distribution $P_1$, $A_2$ has probability distribution $P_2$, $A_3$ has probability distribution $P_3$, etc.
Also, is there a theorem for the case where each distribution isn't fixed, but is selected at random with probability determined by a measure $\mu$?
Is there such a general theorem, where the limit isn't necessarily gaussian, the limit can be reconstructed from $\mu$, and the convergence is pretty strong?
| Central limit theorem for sum from varied distributions | CC BY-SA 2.5 | null | 2011-03-28T19:59:07.047 | 2011-03-30T03:34:38.853 | 2011-03-28T20:24:39.990 | 2116 | 2912 | [
"central-limit-theorem"
] |
8885 | 1 | 17070 | null | 5 | 195 | How can we order a set of vectors $W$ if we are given a training set $V$ consisting of $k$ $n$-dimensional vectors and a partial order on them? It is not a total order, so some vectors might not be comparable with some others. The answer will depend on assumptions, so feel free to make any reasonable assumptions.
Example
Let: $k=4$ and $n=2$
$v_{1}=(1,2)$
$v_{2}=(5,8)$
$v_{3}=(4,3)$
$v_{4}=(9,6)$
We know that $v_{1}<v_{3}$, $v_{2}<v_{4}$ and $v_{3}<v_{4}$
The vectors that we want to order are the following:
$w_{1} = (2,6)$
$w_{2} = (7,4)$
$w_{3} = (5,5)$
The most intuitive order is $w_{1}<w_{3}<w_{2}$, because it seems that the first attribute is the most important.
| Prediction of an order of vectors using partially ordered set | CC BY-SA 2.5 | null | 2011-03-28T20:01:44.653 | 2011-10-16T12:32:23.090 | null | null | 1643 | [
"ordinal-data"
] |
8886 | 2 | null | 8873 | 2 | null | Haven't tried it myself, but you might like to try the [GWApower](http://www.stats.ox.ac.uk/~marchini/software.html#GWApower) R package. See [Spencer et al. 2009](http://dx.doi.org/10.1371/journal.pgen.1000477).
| null | CC BY-SA 2.5 | null | 2011-03-28T20:05:23.343 | 2011-03-28T20:05:23.343 | null | null | 449 | null |
8887 | 2 | null | 8884 | 7 | null | The [theorem 3.1](http://books.google.com/books?id=4LkdSaI4xXMC&lpg=PP1&dq=inauthor%3a%22Valentin%20Vladimirovich%20Petrov%22&hl=fr&pg=PA91#v=onepage&q&f=false) in this [book](http://books.google.com/books?id=4LkdSaI4xXMC&lpg=PP1&dq=inauthor%3A%22Valentin%20Vladimirovich%20Petrov%22&hl=fr&pg=PP1#v=onepage&q&f=false) answers your first question. The key restriction in central limit theorem is not different distributions but the independence. The result is a very nice one, since it says that for interesting sums of independent random variables the limiting distribution has to have certain property, namely [infinite divisibility](http://en.wikipedia.org/wiki/Infinite_divisibility_%28probability%29). The classical central limit theorem (with iid variables with finite variances) is then only a very special case of this theorem.
Note that this is a very general answer to very general question. Given the nature of your distributions more precise answer can be given. For example if the distributions satisfy [Lindeberg's condition](http://en.wikipedia.org/wiki/Lindeberg%27s_condition) then the limiting distribution is necessary normal (if we exclude let us say non-interesting cases).
| null | CC BY-SA 2.5 | null | 2011-03-28T20:18:16.797 | 2011-03-28T20:18:16.797 | null | null | 2116 | null |
8888 | 2 | null | 8882 | 4 | null | Typically in RT studies there's good reason to believe that the first trials are different qualitatively from the rest and the long RT is merely an indicator of that. Why would you want to bother keeping them?
| null | CC BY-SA 2.5 | null | 2011-03-28T20:20:14.667 | 2011-03-28T20:20:14.667 | null | null | 601 | null |
8890 | 2 | null | 8864 | 5 | null | If you use a regression model you may start with ordinal logistic regression since your dependent variable has an ordinal scale of 11 levels. Then you may want to look at the threshold values as you may find that they are equidistant (after some transformation), in which case you may go for linear regression.
Tree based methods are able to capture some non-linearities, interactions and they are very good at finding thresholds in the explanatory variables. You may be able to explain some of these by adding transformed versions of the explanatory variables to the feature set of the regression analysis. Playing with [ACE or AVAS](http://cran.r-project.org/web/packages/acepack/acepack.pdf) may help finding suitable transformations.
As it is important for you to discuss the influential features, I recommend that you do extensive exploration with trees, regression models and graphs to understand the biology behind the data and models. I would start with your last question: understand the biology first, and then formulate a conformable model.
| null | CC BY-SA 2.5 | null | 2011-03-28T21:43:57.283 | 2011-03-29T19:28:12.003 | 2011-03-29T19:28:12.003 | 3911 | 3911 | null |
8891 | 1 | null | null | 8 | 10826 | I am trying to predict real estate sales prices.
- In my dataset there are independent variables that are both nominal and numeric (square meters, prices etc.)
- Before feeding the data to any regression algorithm I'd like to preprocess it correctly (binning, normalizing mean / std deviation, discretization etc.)
- I am overwhelmed by the many methods listed in various textbooks and am trying to find out what works well in practice
Although the most reasonable answer to this question is probably 'it depends', could you maybe give me some rules of thumb / war stories / general advice?
- How do you usually preprocess data for regression?
- What methods do you usually apply?
- Which regression algorithms need a special treatment?
As my tools I am using weka and R.
Many Thanks!
| Data preparation for regression | CC BY-SA 2.5 | null | 2011-03-28T21:53:25.267 | 2011-03-29T14:11:32.650 | null | null | 3933 | [
"r",
"regression",
"predictive-models",
"standardization"
] |
8893 | 2 | null | 8882 | 2 | null | You might consider checking out the [gamm4](http://cran.r-project.org/web/packages/gamm4/index.html) package in R, which basically finds a non-linear function that fits the data while auto-penalizing complexity. I recently used it to fit a similar data set, then obtained the residuals and used these to bootstrap pretty confidence ribbons for the fit.
| null | CC BY-SA 2.5 | null | 2011-03-28T23:11:36.710 | 2011-03-28T23:11:36.710 | null | null | 364 | null |
8894 | 2 | null | 8885 | 0 | null | In the example if we order the vectors according to the first attribute the three pairwise comparisons will be satisfied. This is the same solution as what you suggested in your last sentence. Why aren't you satisfied with this solution? Do you have any further information on the problem?
| null | CC BY-SA 2.5 | null | 2011-03-28T23:39:05.167 | 2011-03-28T23:39:05.167 | null | null | 3911 | null |
8895 | 2 | null | 8891 | 1 | null | The real estate prices that you are trying to predict: are they consecutive/chronological values, i.e. time series data, or are they prices for different classes, e.g. this year's prices for different classes for the same time frame? You might want to read something I wrote on these two kinds of problems, as it warns that if you are dealing with longitudinal data (time series) then the tools of ordinary cross-sectional regression will not ordinarily apply. It is entitled "Regression vs Box-Jenkins" [http://www.autobox.com/pdfs/regvsbox.pdf](http://www.autobox.com/pdfs/regvsbox.pdf) .
| null | CC BY-SA 2.5 | null | 2011-03-28T23:57:31.773 | 2011-03-28T23:57:31.773 | null | null | 3382 | null |
8897 | 2 | null | 8891 | 1 | null | Binning your data is usually a bad idea because it will cause you to lose information, which will likely result in loss of power. Also, I would rarely standardise variables before doing regression, although some people may like to.
A really good book to read, if you can get it, is "Regression Modeling Strategies" by Frank Harrell.
| null | CC BY-SA 2.5 | null | 2011-03-29T06:38:54.180 | 2011-03-29T06:38:54.180 | null | null | 3835 | null |
8898 | 1 | null | null | 6 | 857 | A pathologist friend came to me for help with the following question for a research project. The goal is to compare the effectiveness of three different diagnostic techniques. The data set is as follows: there are 50 different specimens, and each specimen was evaluated by 4 pathologists using 3 different instruments (i.e. 600 total diagnoses). Each case has a possible diagnosis of positive or negative, and the true results are known, as they have been independently determined. The success rate depends on both the quality of the instrument and the skill of the pathologist, and we cannot assume that the four pathologists have the same proficiency. Finally, even though each person measured the same specimen 3 times, the measurements can be treated as independent.
What are the appropriate tests for comparing effectiveness among the instruments?
Thanks.
ADDED:
Lots of good info in the answers, thanks to both. Any thoughts on how ROC and randomized block compare/contrast?
I'm not sure I've digested it enough to know which method is "better". Since the results need to be communicated to a certain audience, it probably depends on which is more widely used among that audience.
| How to compare the effectiveness of medical diagnostic techniques? | CC BY-SA 3.0 | null | 2011-03-29T06:56:06.277 | 2011-04-24T17:14:47.580 | 2011-04-14T02:01:51.063 | 3939 | 3939 | [
"hypothesis-testing",
"statistical-significance"
] |
8899 | 1 | 8906 | null | 0 | 239 | Suppose I have 2 variables
$A$:
$P(A) =$ 0.01
$P( \lnot A) =$ 0.99
And $B$ that depends on $A$:
$P(B|A) =$ 0.05
$P( \lnot B|A) =$ 0.95
$P(B| \lnot A) =$ 0.01
$P( \lnot B| \lnot A) =$ 0.99
Applying:
$$P(B)=\sum_{A}^{ } P(B|A)P(A)$$
we get
$P(B)=(0.01)(0.05)+(0.99)(0.01)=0.0104$
Ok, my question is the following:
If I set the probability of $P(B)=1$
How do I get the values of $P(A)$?
As $B$ depends on $A$, how are all the probabilities affected?
How to compute
$P(A)$ ?
$P(B|A)$?
| Several questions about conditional probability | CC BY-SA 2.5 | null | 2011-03-29T07:06:51.563 | 2011-03-29T14:41:30.803 | 2011-03-29T14:41:30.803 | 2958 | 3681 | [
"conditional-probability"
] |
8901 | 2 | null | 1564 | 1 | null | Check this [wikipedia](http://en.wikipedia.org/wiki/Bayes%27_theorem#Generalizations) page under the sub-section named "Generalizations"; it shows how to derive conditional probabilities involving more than 2 events.
| null | CC BY-SA 4.0 | null | 2011-03-29T08:44:46.397 | 2022-05-12T13:52:16.437 | 2022-05-12T13:52:16.437 | 256587 | 3837 | null |
8903 | 1 | null | null | 6 | 5917 | I'm computing cosine similarities between 2 vectors.
These vectors are information retrieval query and document representations respectively.
They have been computed using [tf-idf](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) weights.
Since my documents have different lengths, the tf-idf weights are theoretically unbounded.
The question is: is cosine similarity still a valid measure? Can I compare several cosine similarities for each doc?
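For reference, this is the kind of computation I mean — a minimal pure-Python sketch with made-up weights, not my actual pipeline. Note that scaling a vector by a positive constant leaves the cosine unchanged, since cosine depends only on direction:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

query = [0.2, 0.0, 0.7]             # hypothetical tf-idf weights
doc = [0.5, 0.1, 0.9]
scaled_doc = [10 * w for w in doc]  # same direction, larger magnitude

print(cosine(query, doc))
print(cosine(query, scaled_doc))    # equal to the first, up to rounding
```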
| Comparing cosine similarities for tf-idf vectors for documents with different length | CC BY-SA 2.5 | null | 2011-03-29T09:06:24.640 | 2012-06-17T16:23:19.430 | 2011-03-29T11:55:26.907 | 449 | 3941 | [
"text-mining",
"information-retrieval"
] |
8904 | 1 | null | null | 1 | 5301 | I need to compute Newey–West t-statistics in SAS 9.2. I have already run the regression, White's test, the Breusch–Godfrey test and the Jarque–Bera normality test.
The regression is simple. Number of observations = 522
Dependent variable name: rtest1
Independent variable name: rtest2
The data are time series data.
I found somewhere that I should start with:
```
PROC model;
PARMS B0 B1;
rtest1=B0+B1*rtest2;
```
I do not understand this part:
```
FIT rtest1/GMM Kernel=(BART,1,0.605);
RUN;
```
How do I get the numbers in parentheses? What is the meaning of those numbers?
What does the `FIT` command do to the data?
Do I need to add more commands?
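For my own understanding I sketched, in plain Python (not SAS — the variable names and lag choice are just illustrative), what I believe the Newey–West estimator computes for a simple regression: a Bartlett-kernel-weighted sum of residual autocovariances sandwiched between $(X'X)^{-1}$ terms. Please correct me if this is not what `FIT ... GMM KERNEL=(BART,...)` is doing:

```python
import random

def newey_west_slope_se(x, y, lags):
    """Newey-West (HAC) standard error for the slope in y = b0 + b1*x,
    using Bartlett-kernel weights w_j = 1 - j/(lags + 1)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    # Score contributions v_t = (x_t - xbar) * residual_t
    v = [(xi - xbar) * (yi - b0 - b1 * xi) for xi, yi in zip(x, y)]
    s = sum(vt * vt for vt in v)          # lag-0 term (White's estimator)
    for j in range(1, lags + 1):
        w = 1 - j / (lags + 1)            # Bartlett kernel weight
        s += 2 * w * sum(v[t] * v[t - j] for t in range(j, n))
    return (s / sxx ** 2) ** 0.5          # sandwich: Sxx^-1 * S * Sxx^-1

# Fake data shaped like the question: 522 observations, one regressor.
random.seed(0)
x = [random.gauss(0, 1) for _ in range(522)]
y = [1.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]
print(newey_west_slope_se(x, y, lags=4))
```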
| How to calculate Newey West t-statistic in SAS 9.2? | CC BY-SA 2.5 | null | 2011-03-29T09:09:00.380 | 2015-07-29T12:25:19.890 | 2011-03-29T11:27:01.840 | 2116 | null | [
"sas"
] |
8905 | 2 | null | 8899 | 0 | null | With subjective probabilities, adding the information "$P(B)=1$" to the model as an equation is no different from adding the data "$B$ is observed to be true". So you can use Bayes' theorem:
$P_{prior} = \begin{pmatrix} 0.05 \cdot 0.01 & 0.01 \cdot 0.99
\\ 0.95 \cdot 0.01 & 0.99 \cdot 0.99
\end{pmatrix}$
$P_{posterior} = \begin{pmatrix} 0.05 \cdot 0.01 & 0.01 \cdot 0.99
\\ 0 & 0
\end{pmatrix} \cdot \text{constant} = \begin{pmatrix}
0.0480769230769 & 0.9519230769231
\\ 0 & 0
\end{pmatrix}$
(rows correspond to $B$ and $\lnot B$, columns to $A$ and $\lnot A$)
Thus
$P_{posterior}(A) = \frac{0.05 \cdot 0.01}{0.05 \cdot 0.01 + 0.01 \cdot 0.99} = 0.0480769230769$
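A numerical check of the same conditioning step (a quick Python sketch; the table layout mirrors the matrices above):

```python
# Joint prior over (B or not-B) x (A or not-A), from the given numbers.
p_a = 0.01
joint = {
    ("B", "A"): 0.05 * p_a,
    ("B", "notA"): 0.01 * (1 - p_a),
    ("notB", "A"): 0.95 * p_a,
    ("notB", "notA"): 0.99 * (1 - p_a),
}

# Setting P(B)=1 (observing B) zeroes the not-B row and renormalizes.
evidence = joint[("B", "A")] + joint[("B", "notA")]
posterior_a = joint[("B", "A")] / evidence
print(posterior_a)  # ~0.0480769...
```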
| null | CC BY-SA 2.5 | null | 2011-03-29T09:23:21.043 | 2011-03-29T09:37:12.443 | 2011-03-29T09:37:12.443 | 3911 | 3911 | null |
8906 | 2 | null | 8899 | 1 | null | When you set $Pr(B)=1$ other things will change, though some can remain the same. So you have to decide what is remaining the same.
For example, in the first part, you could have worked out $Pr(A|B)$, $Pr( \lnot A|B)$, $Pr(A| \lnot B) $ and $Pr( \lnot A| \lnot B)$. So $Pr(A|B) = \frac{Pr(B|A)Pr(A)}{Pr(B)} = \frac{0.0005}{0.0104} \approx 0.0480769\ldots$
If you assume $Pr(A|B)$ stays the same into the second part of the question, then $Pr(B)=1$ would give $Pr(A) \approx 0.0480769\ldots$
| null | CC BY-SA 2.5 | null | 2011-03-29T09:39:14.657 | 2011-03-29T09:39:14.657 | null | null | 2958 | null |
8907 | 1 | 8910 | null | 5 | 5073 | I have downloaded the Gaussian Processes for Machine Learning (GPML) package (gpml-matlab-v3.1-2010-09-27.zip) from the website,
and I can run the regression example ([demoRegression](http://www.gaussianprocess.org/gpml/code/matlab/doc/index.html)) in [Octave](http://www.gnu.org/software/octave/). It works just fine.
Now I have my own data for regression where the x (input) matrix is a 54x10 matrix (54 samples, 10 input vars), and the y (target) vector is 54x1.
The problem is that I do not understand how to set up the meanfunc and the covfunc; the code provided for the regression example does not work for multi-dimensional input data.
I do not understand enough Octave to decode the specific code used to calculate this.
Is there anybody who has tried this, and can maybe show an example?
| How do I use the GPML package for multi dimensional input? | CC BY-SA 3.0 | null | 2011-03-29T09:47:18.770 | 2014-10-14T17:22:21.210 | 2013-07-24T16:41:33.833 | 12786 | 3943 | [
"regression",
"machine-learning",
"matlab",
"stochastic-processes",
"nonparametric-bayes"
] |
8908 | 1 | 8913 | null | 3 | 283 | I'm working up a taxonomy showing different methods used in pattern recognition and I'd be curious to hear about how it could be improved. The Mind Map groups different methods based on the discipline which influenced their development.
[taxonomy http://bentham.k2.t.u-tokyo.ac.jp/media/zoo.png](http://bentham.k2.t.u-tokyo.ac.jp/media/zoo.png)
| What is missing from this taxonomy of methods used in pattern recognition? | CC BY-SA 3.0 | null | 2011-03-29T10:09:49.400 | 2011-10-26T09:55:35.580 | 2011-10-26T09:55:35.580 | 183 | 2624 | [
"algorithms"
] |
8909 | 1 | 8989 | null | 6 | 1163 | I am designing a data capture method for a client for inplay sporting events and he wants to record the odds movements for later analysis in Excel once every half second. I want to get this right so that it's easy to use the data down the line for analysis in other packages.
A bit more background and assumptions.
- Each event can have between 4 and 40 contenders (c)
- Each event has 10 variables that apply equally to all contenders (e)
- Each contender has 20 variables of the same heading/type with values unique to that contender (i)
In essence I need to choose between
- 1. Having 1 timeframe on 1 row, so each timeframe capture has
Columns required = e + max(c)·i = 10 + 40·20 = 810
Rows required = 1
Good: easy to manipulate, all data for a timeframe on one row; one row describes all contenders in the event.
Bad: huge number of columns; lots of blank column data if c is less than max(c); hard to search names across multiple columns.
or
- 2. Having 1 timeframe on multiple rows, so each timeframe has
Columns required = e + i = 10 + 20 = 30
Rows required = c
Good: fewer columns; easy to search/filter as all names are in the same column.
Bad: timeframes sit on different rows for different contenders.
Does it matter? Is it easy for packages to handle data in both forms? My client doesn't know the answer but wants the best solution! I'm tending towards 2. as it's much easier to manage and search in database terms, but I'm not sure about preparation for time series analysis. Can anyone with experience offer some advice?
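To make option 2 concrete, here's a toy sketch (all column names invented) of the "long" layout, one row per contender per timeframe; event-level variables simply repeat on every row:

```python
import csv
import io

# Option 2 ("long" format): one row per contender per timeframe.
# Event-level variables (e) repeat on every row; contender-level
# variables (i) are unique per row. Column names are made up.
rows = [
    {"timestamp": "12:00:00.0", "event_var1": "going_soft", "contender": "Dancer", "odds": 3.5},
    {"timestamp": "12:00:00.0", "event_var1": "going_soft", "contender": "Blaze", "odds": 7.0},
    {"timestamp": "12:00:00.5", "event_var1": "going_soft", "contender": "Dancer", "odds": 3.4},
    {"timestamp": "12:00:00.5", "event_var1": "going_soft", "contender": "Blaze", "odds": 7.5},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["timestamp", "event_var1", "contender", "odds"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Pulling one contender's odds series is then a simple row filter on `contender`, which is the shape most stats packages expect for grouped time series.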
Thanks
Os
| Data collection and storage for time series analysis | CC BY-SA 2.5 | null | 2011-03-29T10:26:51.437 | 2011-03-31T00:26:24.340 | null | null | null | [
"time-series",
"dataset"
] |
8910 | 2 | null | 8907 | 8 | null | Here is a more minimal example of a 2-d regression problem (I haven't got Octave, only Matlab, but hopefully the difference won't matter). meanfunc and covfunc should be happy with any number of inputs, provided that the covariance function doesn't need a hyper-parameter per input feature (e.g. `covSEiso` is fine, since it uses a single length-scale for all inputs). Hope this helps
```
[X1,X2] = meshgrid(-pi:pi/16:+pi, -pi:pi/16:+pi);
Y = sin(X1).*sin(X2) + 0.1*randn(size(X1));
imagesc(Y); drawnow;
x = [X1(:) X2(:)];
y = Y(:);
covfunc = @covSEiso;           % isotropic squared-exponential covariance
likfunc = @likGauss;           % Gaussian likelihood
hyp2.cov = [0 ; 0];            % initial log length-scale and log signal std
hyp2.lik = log(0.1);           % initial log noise std
hyp2 = minimize(hyp2, @gp, -100, @infExact, [], covfunc, likfunc, x, y);
exp(hyp2.lik)
nlml2 = gp(hyp2, @infExact, [], covfunc, likfunc, x, y)
[m s2] = gp(hyp2, @infExact, [], covfunc, likfunc, x, y, x);
m = reshape(m, size(Y));
figure(2); imagesc(m);
```
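For anyone wanting the mechanics outside Octave/Matlab: the posterior mean that `gp` returns boils down to $m(x_*) = k_*^\top (K + \sigma_n^2 I)^{-1} y$. A pure-Python toy with two 2-d training points (values made up), inverting the 2×2 matrix by hand:

```python
import math

def k(x1, x2, ell=1.0, sf=1.0):
    """Isotropic squared-exponential covariance (the covSEiso analogue)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return sf ** 2 * math.exp(-d2 / (2 * ell ** 2))

# Two 2-d training points, small noise; posterior mean at one test point.
X = [(0.0, 0.0), (1.0, 1.0)]
y = [0.0, 1.0]
sn2 = 0.01

# (K + sn^2 I) for two points, inverted in closed form.
a = k(X[0], X[0]) + sn2
b = k(X[0], X[1])
d = k(X[1], X[1]) + sn2
det = a * d - b * b
alpha = [(d * y[0] - b * y[1]) / det, (a * y[1] - b * y[0]) / det]

xs = (0.5, 0.5)
mean = sum(alpha[i] * k(X[i], xs) for i in range(2))
print(mean)
```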
| null | CC BY-SA 3.0 | null | 2011-03-29T10:28:10.890 | 2012-02-10T21:49:27.770 | 2012-02-10T21:49:27.770 | 9119 | 887 | null |
8911 | 1 | null | null | 4 | 7501 | What free tool can I use to do simple Monte Carlo simulations on OS X?
| What free tool can I use to do simple Monte Carlo simulations on OS X? | CC BY-SA 2.5 | null | 2011-03-29T12:00:38.800 | 2021-12-05T02:32:24.547 | null | null | 1901 | [
"monte-carlo"
] |
8913 | 2 | null | 8908 | 1 | null | Maximum entropy Markov models could go next to hidden Markov models.
| null | CC BY-SA 2.5 | null | 2011-03-29T12:45:45.907 | 2011-03-29T12:45:45.907 | null | null | 3874 | null |
8914 | 2 | null | 8911 | 9 | null | [R](http://www.r-project.org/)
What is the probability that the sum of the 3 highest results from 5 throws of a die is divisible by seven?
```
> mean(replicate(1e5,sum(sort(sample(1:6,5,replace=T))[3:5])%%7==0))
[1] 0.16068
> mean(replicate(1e5,sum(sort(sample(1:6,5,replace=T))[3:5])%%7==0))
[1] 0.16032
```
Circa 16%.
| null | CC BY-SA 3.0 | null | 2011-03-29T12:46:58.267 | 2014-09-15T05:43:28.060 | 2014-09-15T05:43:28.060 | 44269 | null | null |
8915 | 2 | null | 8911 | 3 | null | My favourite platforms are
- PyMC and
- OpenBUGS
PyMC runs on OS X out of the box; OpenBUGS was originally written for Windows, but according to [this](http://www.openbugs.info/w/Downloads) it can be run using Wine.
| null | CC BY-SA 2.5 | null | 2011-03-29T12:56:26.060 | 2011-03-29T12:56:26.060 | null | null | 3911 | null |
8916 | 1 | null | null | 5 | 62 | I am working on joint and conditional density trees for approximating clique potentials in Bayesian Belief Networks. A brief introduction to the topic is available from [this paper](http://www.autonlab.org/autonweb/14653.html) in case you'd like a better description of what I'm talking about.
I am looking for an implementation that supports both discrete and continuous variables in a joint probability distribution. The leaf nodes in the learned tree would provide probability distributions for both discrete and continuous variables.
I'd be very glad to find an open source implementation, but any implementation in any form would work for me, at least as a test for my own implementation (which I'd have to do anyway).
I've seen this approach described in various papers in different contexts, but I could not find any implementations.
| Are there any available implementations of density or conditional density tree learning? | CC BY-SA 3.0 | null | 2011-03-29T12:58:21.373 | 2016-12-09T08:36:42.547 | 2016-12-09T08:36:42.547 | 113090 | 3280 | [
"bayesian",
"multivariate-analysis",
"cart",
"approximation",
"bayesian-network"
] |