Id stringlengths 1 6 | PostTypeId stringclasses 7 values | AcceptedAnswerId stringlengths 1 6 ⌀ | ParentId stringlengths 1 6 ⌀ | Score stringlengths 1 4 | ViewCount stringlengths 1 7 ⌀ | Body stringlengths 0 38.7k | Title stringlengths 15 150 ⌀ | ContentLicense stringclasses 3 values | FavoriteCount stringclasses 3 values | CreationDate stringlengths 23 23 | LastActivityDate stringlengths 23 23 | LastEditDate stringlengths 23 23 ⌀ | LastEditorUserId stringlengths 1 6 ⌀ | OwnerUserId stringlengths 1 6 ⌀ | Tags list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8461 | 2 | null | 8456 | 9 | null | Use a Scatter Plot where the horizontal axis is time. Below is a screenshot of an Excel sheet where two temperatures are shown. Temp 1 starts at 9:00:00 AM and increments by 1 minute and 26 seconds. Temp 2 starts at 9:30:00 AM and increments by 2 minutes and 53 seconds. They are both plotted in the graph.
As shown in the plot, the blue Temp 1 has been "selected" in the graph and the corresponding graph formula is:
=SERIES(Sheet1!$B$1,Sheet1!$A$2:$A$27,Sheet1!$B$2:$B$27,1)
When I select Temp 2, the corresponding graph formula is:
=SERIES(Sheet1!$E$1,Sheet1!$D$2:$D$27,Sheet1!$E$2:$E$27,2)
Both Temp 1 and Temp 2 use the same x-axis and y-axis.

There is a second way to do this where the blue series has its own x-axis and y-axis, and the red series has a different x-axis and y-axis. However, I wouldn't recommend this scheme because it is typically too confusing for others to interpret the results. It requires that you assign one (or more) of the lines to a secondary y-axis and then a secondary x-axis. Here are the steps:
1) Right click on the series that you want to assign to the secondary axes. Select "Format Data Series". Under "Series Options" select "Secondary Axis". That will assign the selected series to a secondary y-axis.
2) With the series still selected, go to the "Chart Tools" "Layout" "Axes" selection on the ribbon. Click "Secondary Horizontal Axis" and pick whatever options you want. That will assign the selected series to a secondary x-axis.
Below is an example. I purposely changed the color of the text of the lower x-axis and the left y-axis to blue to signify that these are the axes to use for the blue series.
I changed the color of the text of the upper x-axis and the right y-axis to red to signify their use for the red series.
Notice that the two y-axes have different scales. The blue y-axis runs from 0 to 90, while the red y-axis runs from 0 to 100.
Notice also that I set both x-axes to start, stop, and increment at the same values. If you set the increments of the two x-axes to different values, the graph becomes even more confusing.

| null | CC BY-SA 2.5 | null | 2011-03-18T18:13:25.927 | 2011-03-18T18:13:25.927 | null | null | 2775 | null |
8462 | 1 | 8484 | null | 10 | 1107 | We're creating a chart showing traffic by time of day over a given period. So the y-axis is traffic, the x-axis is midnight, 1am, 2am, etc. It could also be days of the week. What's the generic name for this type of chart? I've come up with "cycle chart". Is that the standard? Is there one?

Update:
Just to add a bit more clarity, what's being shown in the top chart is not one day, it's an aggregation of many days. E.g. over the last month, 6am has on average been lower than noon. Similarly, in the bottom chart, over the last year, traffic dips on Saturdays.
| What's a good, generic name for chart of things by time of day? | CC BY-SA 2.5 | null | 2011-03-18T18:19:39.250 | 2011-03-19T15:02:19.393 | 2011-03-18T21:56:26.593 | 1531 | 1531 | [
"time-series",
"data-visualization"
] |
8463 | 2 | null | 8455 | 4 | null | Before you set up your analysis, keep in mind the reality of what the current situation involves.
This meltdown was not directly caused by the earthquake or the tsunami. It was because of a lack of back-up power. If they had enough back-up power, regardless of the earthquake/tsunami, they could have kept the cooling water running, and none of the meltdowns would have happened. The plant would probably be back up and running by now.
Japan, for whatever reason, has two electrical frequencies (50 Hz and 60 Hz). And, you can't run a 50 Hz motor at 60 Hz or vice versa. So, whatever frequency the plant was using/providing is the frequency they need to power up. "U.S. type" equipment runs at 60 Hz and "European type" equipment runs at 50 Hz, so in providing an alternative power source, keep that in mind.
Next, that plant is in a fairly remote mountainous area. To supply external power requires a LONG power line from another area (requiring days/weeks to build) or large gasoline/diesel driven generators. Those generators are heavy enough that flying them in with a helicopter is not an option. Trucking them in may also be a problem due to the roads being blocked from the earthquake/tsunami. Bringing them in by ship is an option, but it also takes days/weeks.
The bottom line is, the risk analysis for this plant comes down to a lack of SEVERAL (not just one or two) layers of back-ups. And, because this reactor is an "active design", which means it requires power to stay safe, those layers are not a luxury, they're required.
This is an old plant. A new plant would not be designed this way.
Edit (03/19/2011) ==============================================
J Presley: To answer your question requires a short explanation of terms.
As I said in my comment, to me, this is a matter of "when", not "if", and as a crude model, I suggested the Poisson Distribution/Process. The Poisson Process is a series of events that happen at an average rate over time (or space, or some other measure). These events are independent of each other and random (no patterns). The events happen one at a time (2 or more events don't happen at the exact same time). It is basically a binomial situation ("event" or "no event") where the probability that the event will happen is relatively small. Here are some links:
[http://en.wikipedia.org/wiki/Poisson_process](http://en.wikipedia.org/wiki/Poisson_process)
[http://en.wikipedia.org/wiki/Poisson_distribution](http://en.wikipedia.org/wiki/Poisson_distribution)
Next, the data. Here's a list of nuclear accidents since 1952 with the INES Level:
[http://en.wikipedia.org/wiki/Nuclear_and_radiation_accidents](http://en.wikipedia.org/wiki/Nuclear_and_radiation_accidents)
I count 19 accidents, 9 of which state an INES Level. For those without an INES Level, all I can do is assume the level is below Level 1, so I'll assign them Level 0.
So, one way to quantify this is 19 accidents in 59 years (59 = 2011 -1952). That's 19/59 = 0.322 acc/yr. In terms of a century, that's 32.2 accidents per 100 years. Assuming a Poisson Process gives the following graphs.

Originally, I suggested a Lognormal, Gamma, or Exponential Distribution for the severity of the accidents. However, since the INES Levels are given as discrete values, the distribution would need to be discrete. I would suggest either the Geometric or Negative Binomial Distribution. Here are their descriptions:
[http://en.wikipedia.org/wiki/Negative_binomial_distribution](http://en.wikipedia.org/wiki/Negative_binomial_distribution)
[http://en.wikipedia.org/wiki/Geometric_distribution](http://en.wikipedia.org/wiki/Geometric_distribution)
They both fit the data about the same, which is not very well (lots of Level 0's, one Level 1, zero Level 2's, etc).
```
Fit for Negative Binomial Distribution
Fitting of the distribution ' nbinom ' by maximum likelihood
Parameters :
estimate Std. Error
size 0.460949 0.2583457
mu 1.894553 0.7137625
Loglikelihood: -34.57827 AIC: 73.15655 BIC: 75.04543
Correlation matrix:
size mu
size 1.0000000000 0.0001159958
mu 0.0001159958 1.0000000000
#====================
Fit for Geometric Distribution
Fitting of the distribution ' geom ' by maximum likelihood
Parameters :
estimate Std. Error
prob 0.3454545 0.0641182
Loglikelihood: -35.4523 AIC: 72.9046 BIC: 73.84904
```
The Geometric Distribution is a simple one parameter function while the Negative Binomial Distribution is a more flexible two parameter function. I would go for the flexibility, plus the underlying assumptions of how the Negative Binomial Distribution was derived. Below is a graph of the fitted Negative Binomial Distribution.

Below is the code for all this stuff. If anyone finds a problem with my assumptions or coding, don't be afraid to point it out. I checked through the results, but I didn't have enough time to really chew on this.
```
library(fitdistrplus)
#Generate the data for the Poisson plots
x <- dpois(0:60, 32.2)
y <- ppois(0:60, 32.2, lower.tail = FALSE)
#Cram the Poisson Graphs into one plot
par(pty="m", plt=c(0.1, 1, 0, 1), omd=c(0.1,0.9,0.1,0.9))
par(mfrow = c(2, 1))
#Plot the Probability Graph
plot(x, type="n", main="", xlab="", ylab="", xaxt="n", yaxt="n")
mtext(side=3, line=1, "Poisson Distribution Averaging 32.2 Nuclear Accidents Per Century", cex=1.1, font=2)
xaxisdat <- seq(0, 60, 10)
pardat <- par()
yaxisdat <- seq(pardat$yaxp[1], pardat$yaxp[2], (pardat$yaxp[2]-pardat$yaxp[1])/pardat$yaxp[3])
axis(2, at=yaxisdat, labels=paste(100*yaxisdat, "%", sep=""), las=2, padj=0.5, cex.axis=0.7, hadj=0.5, tcl=-0.3)
mtext("Probability", 2, line=2.3)
abline(h=yaxisdat, col="lightgray")
abline(v=xaxisdat, col="lightgray")
lines(x, type="h", lwd=3, col="blue")
#Plot the Cumulative Probability Graph
plot(y, type="n", main="", xlab="", ylab="", xaxt="n", yaxt="n")
pardat <- par()
yaxisdat <- seq(pardat$yaxp[1], pardat$yaxp[2], (pardat$yaxp[2]-pardat$yaxp[1])/pardat$yaxp[3])
axis(2, at=yaxisdat, labels=paste(100*yaxisdat, "%", sep=""), las=2, padj=0.5, cex.axis=0.7, hadj=0.5, tcl=-0.3)
mtext("Cumulative Probability", 2, line=2.3)
abline(h=yaxisdat, col="lightgray")
abline(v=xaxisdat, col="lightgray")
lines(y, type="h", lwd=3, col="blue")
axis(1, at=xaxisdat, padj=-2, cex.axis=0.7, hadj=0.5, tcl=-0.3)
mtext("Number of Nuclear Accidents Per Century", 1, line=1)
legend("topright", legend=c("99% Probability - 20 Accidents or More", " 1% Probability - 46 Accidents or More"), bg="white", cex=0.8)
#Calculate the 1% and 99% values
qpois(0.01, 32.2, lower.tail = FALSE)
qpois(0.99, 32.2, lower.tail = FALSE)
#Fit the Severity Data
z <- c(rep(0,10), 1, rep(3,2), rep(4,3), rep(5,2), 7)
zdis <- fitdist(z, "nbinom")
plot(zdis, lwd=3, col="blue")
summary(zdis)
```
Edit (03/20/2011) ======================================================
J Presley: I'm sorry I couldn't finish this up yesterday. You know how it is on weekends, lots of duties.
The last step in this process is to assemble a simulation using the Poisson Distribution to determine when an event happens, and then the Negative Binomial Distribution to determine the severity of the event. You might run 1000 sets of "century chunks" to generate the 8 probability distributions for Level 0 through Level 7 events. If I get the time, I might run the simulation, but for now, the description will have to do. Maybe someone reading this stuff will run it. After that is done, you'll have a "base case" where all of the events are assumed to be INDEPENDENT.
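That description can be sketched in code. Below is a rough Python version (assumptions: the Poisson rate of 32.2 accidents per century and the Negative Binomial parameters from the fit above; simulated severities above 7 are clipped, since the INES scale stops at Level 7):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed inputs, taken from the fits above
rate_per_century = 32.2                 # Poisson rate of accidents per century
nb_size, nb_mu = 0.460949, 1.894553     # Negative Binomial severity fit
nb_p = nb_size / (nb_size + nb_mu)      # convert mu to numpy's (n, p) form

n_centuries = 1000
counts = np.zeros((n_centuries, 8), dtype=int)   # INES Levels 0..7

for i in range(n_centuries):
    n_events = rng.poisson(rate_per_century)     # how many accidents this century
    levels = rng.negative_binomial(nb_size, nb_p, size=n_events)
    levels = np.clip(levels, 0, 7)               # INES scale stops at Level 7
    for lev in levels:
        counts[i, lev] += 1

# Average number of accidents per century at each severity level
print(counts.mean(axis=0))
```

The averages drop as the severity level rises, which is the pattern described below.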
Obviously, the next step is to relax one or more of the above assumptions. An easy place to start is with the Poisson Distribution. It assumes that all events are 100% independent. You can change that in all sorts of ways. Here are some links to Non-homogeneous Poisson Distributions:
[http://www.math.wm.edu/~leemis/icrsa03.pdf](http://www.math.wm.edu/~leemis/icrsa03.pdf)
[http://filebox.vt.edu/users/pasupath/papers/nonhompoisson_streams.pdf](http://filebox.vt.edu/users/pasupath/papers/nonhompoisson_streams.pdf)
The same idea goes for the Negative Binomial Distribution. This combination will lead you down all sorts of paths. Here are some examples:
[http://surveillance.r-forge.r-project.org/](http://surveillance.r-forge.r-project.org/)
[http://www.m-hikari.com/ijcms-2010/45-48-2010/buligaIJCMS45-48-2010.pdf](http://www.m-hikari.com/ijcms-2010/45-48-2010/buligaIJCMS45-48-2010.pdf)
[http://www.michaeltanphd.com/evtrm.pdf](http://www.michaeltanphd.com/evtrm.pdf)
The bottom line is, you asked a question where the answer depends on how far you want to take it. My guess is, someone, somewhere will be commissioned to generate "an answer" and will be surprised at how long it takes to do the work.
Edit (03/21/2011) ====================================================
I had a chance to slap together the above mentioned simulation. The results are shown below. From the original Poisson Distribution, the simulation provides eight Poisson Distributions, one for each INES Level. As the severity level rises (INES Level Number rises), the number of expected events per century drops. This may be a crude model, but it's a reasonable place to start.

| null | CC BY-SA 2.5 | null | 2011-03-18T18:47:04.663 | 2011-03-21T17:36:23.963 | 2011-03-21T17:36:23.963 | 2775 | 2775 | null |
8464 | 2 | null | 8462 | 0 | null | The type of chart you've drawn is known as a Histogram [http://en.wikipedia.org/wiki/Histogram](http://en.wikipedia.org/wiki/Histogram)
| null | CC BY-SA 2.5 | null | 2011-03-18T19:05:28.163 | 2011-03-18T19:05:28.163 | null | null | null | null |
8465 | 2 | null | 8462 | 4 | null | What you've illustrated is a time series column (or bar) graph. The two graphs are of differing time resolution or differing time aggregation.
There may be industry specific terms for these types of charts. In finance, for example, the [open-high-low-close chart](http://en.wikipedia.org/wiki/Open-high-low-close_chart) is a very common time series plot:

When the x-axis is time, as in your example, it's common to illustrate the points as a line graph instead of bars/columns. The reason for this is to put the visual emphasis on the change from one period to the next.
You might also consider graphing period-over-period. For example, a [year-over-year](http://spotfire.tibco.com/community/blogs/stn/archive/2009/01/28/create-a-year-over-year-comparison-chart.aspx) chart would show how the numbers for a given period (typically a month, though it could be a week or a day) compare to the numbers for the same period of the prior year.

But I realize your question was about naming, not all the other cool graphs you can do ;)
| null | CC BY-SA 2.5 | null | 2011-03-18T19:05:58.343 | 2011-03-18T19:05:58.343 | null | null | 29 | null |
8466 | 1 | 8482 | null | 3 | 10269 | We know that density for a student-t distribution is given as
$$\frac{\Gamma(\frac{\nu + 1}{2})}{\Gamma(\frac{\nu}{2})} \left(\frac{\lambda}{\pi\nu}\right)^{\frac{1}{2}} \left[1+\frac{\lambda(x-\mu)^2}{\nu}\right]^{-\frac{\nu+1}{2}}$$
with
$\text{E}(X) = \mu$,
$\text{var}(X) = \frac{1}{\lambda}\frac{\nu}{\nu-2}$
where the three parameters are shape factor $\nu$, location $\mu$ and dispersion $\lambda$.
Now if I standardize $X$ to $Z$ as $\frac{X-\text{E}(X)}{\text{Std}(X)}$, what is the distribution of $Z$? If I standardize my input data to $Z$ as above, what is the density function $f_Z(z)$?
I came across this question in Stephen Taylor's Asset Dynamics book. I am a bit confused after reading this question. As I understand it, $\frac{X-\mu}{\sigma}$ where $\sigma=1/\sqrt{\lambda}$ will give the standardized t distribution with center 0 and dispersion 1. But I am not sure what $Z$ above will look like.
| Standardized Student's-t distribution | CC BY-SA 2.5 | null | 2011-03-18T19:07:53.047 | 2015-05-04T08:28:58.207 | 2011-03-19T04:11:17.990 | 919 | 862 | [
"likelihood",
"t-distribution"
] |
8467 | 2 | null | 8462 | 0 | null | Short, simple, descriptive: time series plot.
Edit: In light of the discussion, I'd vote for histogram as well. At least, that's the generic name for this kind of chart, where the hours of the day are a natural division of stacks.
| null | CC BY-SA 2.5 | null | 2011-03-18T19:58:55.227 | 2011-03-18T23:20:04.863 | 2011-03-18T23:20:04.863 | 1766 | 1766 | null |
8468 | 1 | null | null | 3 | 617 | Let's say I'm performing regularized regression and I want to validate the results using holdout. (I'm choosing holdout instead of cross-validation because my dataset is fairly large, so computational power is an issue and the difference between holdout and cross-validation estimates will likely be negligible.) I want to put a confidence interval on the $R^2$ score for the holdout set. Is the following procedure valid?
- Compute the regression coefficients on the training data.
- Compute the predictions on the held out data. Treat each prediction as if it were an observed random variable.
- Compute the Pearson correlation of the predictions with the actual $Y$ values for the held-out data. Compute a confidence interval for this correlation using Fisher's Z-Transformation.
- If the confidence interval of this correlation is some set of real numbers, then the confidence interval of $R^2$ is just the set of the squares of these numbers.
I can't think of any reason why this procedure would be wrong, but I've never heard of it being done before, so I wanted to run it by some more experienced statisticians.
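To make the procedure concrete, here is a sketch of steps 1-4 in Python with synthetic data (the ridge penalty, sample sizes, and coefficients are all illustrative stand-ins, not from any real dataset):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data (illustrative only)
n, p = 2000, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.5, 0.25, 0.0, 2.0])    # hypothetical true coefficients
y = X @ beta + rng.normal(scale=2.0, size=n)

# 1) Fit a regularized (ridge) regression on the training split
train, hold = slice(0, 1500), slice(1500, None)
lam = 1.0
A = X[train].T @ X[train] + lam * np.eye(p)
coef = np.linalg.solve(A, X[train].T @ y[train])

# 2) Predictions on the held-out data, treated as observed random variables
pred = X[hold] @ coef

# 3) Pearson correlation with the actual Y values, plus a Fisher-z CI
r = np.corrcoef(pred, y[hold])[0, 1]
m = len(y[hold])
z = np.arctanh(r)                  # Fisher's z-transformation
se = 1.0 / np.sqrt(m - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

# 4) Square the endpoints to get an interval for R^2
#    (only valid when the whole correlation interval is positive)
print(r**2, lo**2, hi**2)
```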
| Confidence Intervals for Holdout R^2? | CC BY-SA 2.5 | null | 2011-03-18T21:04:55.237 | 2011-03-19T00:01:10.343 | 2011-03-19T00:01:10.343 | 159 | 1347 | [
"regression",
"correlation",
"confidence-interval",
"forecasting"
] |
8469 | 2 | null | 8033 | 1 | null | Ten days later this is probably worth an answer:
A normal distribution has about 20% of its distribution more than 0.842 standard deviations above the mean; using the cumulative distribution of standard normal $\Phi$,
$$\Phi(0.842) \approx 0.8$$
so the threshold should be about $70 + 8\times 0.842 \approx 76.7$.
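A quick check of the arithmetic (a sketch; the mean of 70 and standard deviation of 8 come from the question being answered):

```python
from scipy.stats import norm

mean, sd = 70.0, 8.0              # from the question being answered
z = norm.ppf(0.8)                 # leaves 20% of the distribution above it
threshold = mean + sd * z
print(round(z, 3), round(threshold, 1))   # 0.842 76.7
```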
I do wonder slightly why the lecturer would do this instead of just giving an A to the top 20% of students.
| null | CC BY-SA 2.5 | null | 2011-03-18T21:12:59.237 | 2011-03-18T21:12:59.237 | null | null | 2958 | null |
8470 | 1 | null | null | 3 | 1998 | I'm working on a naive Bayes classifier that calculates probabilities using a normal Gaussian distribution. This works very well when I am classifying something into two mutually exclusive buckets (e.g. spam vs. not-spam), but when I am working with a factor that is not easily classified that way (when the classifications are not mutually exclusive) I would like to express the result as a percentage.
When I combine the probability density of several factors (by multiplying them together) I tend to get a very small number and I would like to adjust that so I can express it in a 0-100 percent range, so it will be more easily understood. Is there another factor I can use to adjust the probability density into a percentage range?
For example: in the Wikipedia article for [naive Bayes classifiers](http://en.wikipedia.org/wiki/Naive_Bayes_classifier), there is an example of using height, weight and shoe size variables to classify a person as male or female. After computing the numbers, the posterior result for the female classification is 5.3778E-4 (or .00053778). Out of context that seems minuscule, but if you compare that number to the result for the male classification, the percentage would be 99.99% female. What factors (if any) could I apply to the posterior result to get that percentage, without the benefit of the male result to compare it to?
| Can you express a probability density as a percentage? | CC BY-SA 2.5 | null | 2011-03-18T21:27:39.560 | 2011-04-28T22:54:36.987 | 2011-04-28T22:54:36.987 | null | 3784 | [
"probability"
] |
8471 | 2 | null | 8462 | 4 | null | I'd suggest "diurnal" or "circadian" rhythm chart. For weekly, the latter would be "circaseptan", "circamensual" for "monthly", and "circannual" for "yearly".
| null | CC BY-SA 2.5 | null | 2011-03-18T22:12:17.783 | 2011-03-18T22:20:42.863 | 2011-03-18T22:20:42.863 | 3369 | 3369 | null |
8472 | 1 | 8495 | null | 12 | 52713 | In network motif algorithms, it seems quite common to return both a [p-value](http://en.wikipedia.org/wiki/P-value) and a [Z-score](http://en.wikipedia.org/wiki/Standard_score) for a statistic: "Input network contains X copies of subgraph G". A subgraph is deemed a motif if it satisfies
- p-value < A,
- Z-score > B and
- X > C,
for some user-defined (or community-defined) A, B and C.
This motivates the question:
>
Question: What are the differences between p-value and Z-score?
And the subquestion:
>
Question: Are there situations where the p-value and Z-score of the same statistic might suggest opposite hypotheses? Are the first and second conditions listed above essentially the same?
| What is the difference between Z-scores and p-values? | CC BY-SA 2.5 | null | 2011-03-18T23:33:45.757 | 2011-04-29T00:42:18.500 | 2011-04-29T00:42:18.500 | 3911 | 386 | [
"hypothesis-testing",
"p-value",
"z-statistic"
] |
8473 | 2 | null | 8472 | 6 | null | $p$-value indicates how unlikely the statistic is. $z$-score indicates how far away from the mean it is. There may be a difference between them, depending on the sample size.
For large samples, even small deviations from the mean become unlikely. I.e. the $p$-value may be very small even for a low $z$-score. Conversely, for small samples even large deviations are not unlikely. I.e. a large $z$-score will not necessarily mean a small $p$-value.
| null | CC BY-SA 2.5 | null | 2011-03-18T23:40:41.547 | 2011-03-18T23:40:41.547 | null | null | 3369 | null |
8474 | 2 | null | 8468 | 2 | null | That should work ok provided the lower limit of your confidence interval is positive. Otherwise the square transformation is not monotonically increasing and then the probability coverage is not preserved.
I've never seen it done before either.
| null | CC BY-SA 2.5 | null | 2011-03-19T00:00:53.150 | 2011-03-19T00:00:53.150 | null | null | 159 | null |
8475 | 2 | null | 8470 | 2 | null | You can use any "monotonic" transformation of the probabilities as you choose (at least as far as I know). As long as your transformation preserves the ordering of probabilities, you will not be lead astray in your decision making. Personally, I prefer to use odds ratios. They seem to make intuitive sense to me, and I know what decision to make almost straight away (and the "region of uncertainty" is easily identified, perhaps odds between 0.1 to 10?).
[my answer to this question](https://stats.stackexchange.com/questions/8419/bayesian-classifier-with-multivariate-normal-densities/8434#8434) goes through a worked example of how you would use the odds in naive Bayes classifier.
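As a toy illustration of the odds approach for one Gaussian feature (the class parameters and the observed value below are entirely made up, not taken from the linked answer):

```python
from math import exp, pi, sqrt

def gauss_pdf(x, mu, sigma):
    """Normal density, used as the class-conditional likelihood."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# Hypothetical class-conditional parameters for a single "height" feature
params = {"female": (165.0, 7.0), "male": (178.0, 8.0)}
priors = {"female": 0.5, "male": 0.5}

x = 160.0   # observed height (made up)
odds = (priors["female"] * gauss_pdf(x, *params["female"])) / (
        priors["male"] * gauss_pdf(x, *params["male"]))

# If a percentage is still wanted, the odds convert directly:
pct_female = 100 * odds / (1 + odds)
print(round(odds, 2), round(pct_female, 1))
```

Odds above 1 favor "female" here; the percentage is just a rescaling of the same comparison, so it still requires both classes' results.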
| null | CC BY-SA 2.5 | null | 2011-03-19T00:23:09.483 | 2011-03-19T00:23:09.483 | 2017-04-13T12:44:44.530 | -1 | 2392 | null |
8476 | 2 | null | 8472 | 8 | null | A $Z$-score describes your deviation from the mean in units of standard deviation. It is not explicit as to whether you accept or reject your null hypothesis.
A $p$-value is the probability that under the null hypothesis we could observe a point that is as extreme as your statistic. This explicitly tells you whether you reject or accept your null hypothesis given a test size $\alpha$.
Consider an example where $X\sim \mathcal{N}(\mu,1)$ and the null hypothesis is $\mu=0$. Then you observe $x_1=5$. Your $Z$-score is 5 (which only tells you how far you deviate from the null hypothesis in terms of $\sigma$) and your $p$-value is 5.733e-7. For 95% confidence you have a test size $\alpha=0.05$, and since $p<\alpha$, you reject the null hypothesis. But for any given statistic, there should be some equivalent $A$ and $B$ such that the tests are the same.
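The quoted $p$-value is the two-sided tail probability, which can be verified directly (a quick check, not part of the original argument):

```python
from scipy.stats import norm

z = 5.0
p_two_sided = 2 * norm.sf(z)      # sf is the upper-tail probability 1 - cdf
print(p_two_sided)                # ≈ 5.733e-07
```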
| null | CC BY-SA 2.5 | null | 2011-03-19T00:51:10.900 | 2011-03-19T00:51:10.900 | null | null | 3786 | null |
8477 | 1 | 8489 | null | 1 | 825 | I am reading Stephen Taylor's Asset Dynamics book and came across something I didn't fully understand.
For an ARCH process, the return series is modeled as
$r_t = \mu_t + h_t^{1/2}z_t$, where $z_t$ is $D(0,1)$ and may be normal or non-normal, and $h_t$ is the conditional variance (some function of a subset of $\theta$).
Then he says the conditional density of $r_t$ is $f(r_t|\theta) = \frac{f(z_t(\theta)|\theta)}{\sqrt{h_t(\theta)}}$.
I don't understand how the conditional variance term occurs in the denominator of the density relation, or why $\mu_t$ doesn't appear in it. Can someone help me derive this?
| Expression for conditional density for ARCH processes | CC BY-SA 2.5 | null | 2011-03-19T01:03:17.700 | 2011-03-19T11:10:07.747 | 2011-03-19T11:03:50.653 | null | 862 | [
"likelihood",
"conditional-probability",
"garch"
] |
8478 | 2 | null | 8466 | 0 | null | If you're referring to the density function it's the exact same thing. You have simply shifted the distribution and normalized by the standard deviation.
| null | CC BY-SA 2.5 | null | 2011-03-19T01:03:58.207 | 2011-03-19T01:03:58.207 | null | null | 3786 | null |
8479 | 2 | null | 8446 | 3 | null | This looks a lot like a Beta distribution from its shape and seemingly bounded domain. You can use maximum likelihood estimation to estimate the parameters $\alpha$ and $\beta$.
| null | CC BY-SA 2.5 | null | 2011-03-19T01:10:58.090 | 2011-03-19T01:10:58.090 | null | null | 3786 | null |
8480 | 2 | null | 866 | 39 | null | Generally, when you have many small/medium sized effects you should go with ridge. If you have only a few variables with a medium/large effect, go with lasso.
[Hastie, Tibshirani, Friedman](http://www-stat.stanford.edu/~tibs/ElemStatLearn/)
| null | CC BY-SA 2.5 | null | 2011-03-19T01:21:05.993 | 2011-03-19T01:21:05.993 | null | null | 3786 | null |
8481 | 2 | null | 8401 | 2 | null | There isn't any indication that your variables here are correlated so I dont know why you would use MCMC as opposed to regular Monte Carlo. There are many different sampling methods including the mentioned stratified sampling (Latin hypercube) and QMC. Sparse quadrature methods are very good if the dimension of the problem is not too high (not more than 10) since sparse quadrature grids grow geometrically (curse of dimensionality).
But it sounds like you are on the right track with respect to importance sampling. The key here is to choose a biased distribution that has large probability concentrated near your region of interest and that it has thicker tails than the nominal distribution.
I'd like to add that this is an open research problem so if you can come up with something good it would be of great interest to the community!
| null | CC BY-SA 2.5 | null | 2011-03-19T01:38:59.533 | 2011-03-19T01:38:59.533 | null | null | 3786 | null |
8482 | 2 | null | 8466 | 7 | null | Assume $\nu \gt 2$ so that this distribution actually has a mean and standard deviation (otherwise you cannot standardize it). By direct calculation, its mean equals $\mu$ and its variance equals $\nu / (\lambda (\nu-2))$. Standardizing it, by construction, creates a distribution of the same shape but zero mean and unit standard deviation. Thus, for the standardized distribution, the formula is the same but we must have $\mu = 0$ and $\nu / (\lambda (\nu-2)) = 1$; that is, $\lambda = \nu/(\nu-2)$. Whence, plugging these values into the formula,
$$f_Z(z) = \frac{\Gamma(\frac{\nu + 1}{2})}{\Gamma(\frac{\nu}{2})} \frac{1}{\sqrt{\pi(\nu-2)}} \left[1+\frac{z^2}{\nu-2}\right]^{-\frac{\nu+1}{2}}.$$
This assumes $\mathbb{E}[X] = \mu$ and $\mathbb{E}[(X-\mu)^2]$ are known in advance, not estimated from data.
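This density is easy to verify numerically: it should equal a rescaled standard $t_\nu$ density, since dividing a $t_\nu$ variable by its standard deviation $\sqrt{\nu/(\nu-2)}$ is exactly the standardization above. A sketch:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import t

def f_z(z, nu):
    """The standardized-t density derived above (valid for nu > 2)."""
    logc = (gammaln((nu + 1) / 2) - gammaln(nu / 2)
            - 0.5 * np.log(np.pi * (nu - 2)))
    return np.exp(logc) * (1 + z**2 / (nu - 2)) ** (-(nu + 1) / 2)

nu = 5.0
c = np.sqrt(nu / (nu - 2))        # standard deviation of a t_nu variable
for z in [-1.5, 0.0, 0.7, 2.0]:
    # change of variables: f_Z(z) = c * f_T(c z)
    assert np.isclose(f_z(z, nu), c * t.pdf(c * z, nu))
print("density matches")
```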
| null | CC BY-SA 2.5 | null | 2011-03-19T04:10:02.780 | 2011-03-19T04:10:02.780 | null | null | 919 | null |
8483 | 1 | null | null | 3 | 707 | I am researching age at first sexual debut and HIV prevalence in Lesotho. I want to analyse the data using logistic regression with SPSS. My variables are age, sex, social status, education level, and environment. How am I to allocate my variables?
| Analysis plan using logistic regression | CC BY-SA 2.5 | null | 2011-03-19T07:48:59.207 | 2011-03-19T14:07:17.233 | 2011-03-19T14:07:17.233 | 919 | null | [
"logistic"
] |
8484 | 2 | null | 8462 | 7 | null | Nick Cox [(Stata Journal 2006, p403)](http://stata-journal.com/sjpdf.html?articlenum=gr0025) calls this sort of plot a 'cycle plot', but notes that:
>
Cycle plots have been discussed under other names in the literature, including cycle-subseries plot, month plot, seasonal-by-month plot, and seasonal subseries plot.
(followed by a load of refs to textbooks and papers)
Many of these are clearly specific to seasonality, i.e. periods of one year. I still like the suggestion of 'periodic plot/chart' that I made in a comment to the question, but it appears the questioner's original suggestion of 'cycle plot/chart' is in fact the more standard generic term.
| null | CC BY-SA 2.5 | null | 2011-03-19T08:32:20.230 | 2011-03-19T08:32:20.230 | null | null | 449 | null |
8485 | 1 | 8509 | null | 29 | 11457 | I want to learn how Gibbs Sampling works and I am looking for a good basic to intermediate paper. I have a computer science background and basic statistic knowledge.
Anyone has read good material around? where did you learn it?
| A good Gibbs sampling tutorials and references | CC BY-SA 4.0 | null | 2011-03-19T09:07:52.767 | 2021-04-29T00:08:47.333 | 2021-04-29T00:08:47.333 | 11887 | 3788 | [
"references",
"gibbs"
] |
8486 | 1 | null | null | 2 | 625 | I want to compare the slopes of progression of a variable that is measured in percentage versus a variable that is measured in decibels. What is a good method to compare?
| How to compare slopes between variables with different scales? | CC BY-SA 2.5 | null | 2011-03-19T09:26:43.973 | 2011-03-19T11:17:48.263 | 2011-03-19T11:17:48.263 | 930 | 3194 | [
"multiple-comparisons"
] |
8487 | 1 | null | null | 17 | 13442 | I've calculated Cohen's d for regression coefficients (from the t statistic), odds ratios and means differences, hoping to pool the results in a meta-analysis and see how it works. However, in Stata, it doesn't seem you're able to pool these results without confidence intervals for Cohen's d, so my question is how do I get around this? Is there a way of calculating it, or is there a way of pooling the results in Stata without this information?
I know there are several negative sides to this sort of meta-analysis, but am intrigued to see how this works in comparison to several smaller analyses of specific effect sizes.
| How do you calculate confidence intervals for Cohen's d? | CC BY-SA 3.0 | null | 2011-03-19T09:37:12.297 | 2020-11-02T16:13:11.490 | 2014-04-20T15:26:01.210 | 22047 | null | [
"cohens-d"
] |
8488 | 2 | null | 8349 | 1 | null | If you only have two charts I would show the following pair. You are right to want to avoid percentages if some items have several errors, but you can show average errors per item which amounts to the same thing without the misleading impression, and could go over 1 without any need for an explanation of how a percentage goes over 100%.
You want to emphasize that: errors peaked in May; the error rate peaked in July; March and June were "good months" with high items and low errors and July was a "bad month"; that type 3 errors are most common; and (if true) the types of error vary between months. I think these do that.

| null | CC BY-SA 2.5 | null | 2011-03-19T10:40:15.893 | 2011-03-19T10:40:15.893 | null | null | 2958 | null |
8489 | 2 | null | 8477 | 3 | null | If the formula in the question is exactly how it is written in the book, then this is a bit of slack notation, with the ambiguous looking $f(.)$. The subscripts, while a bit ugly, are one way to make it more clear what the function represents (for $f(.)$ is essentially defined as two different things, which in "nit-picking" world is a contradiction unless $f(.)=0$ or $f(.)=\pm\infty$). In my notation,a capital letter stands for the "random variable" and a small letter for a potential value of that random variable. And the subscripts represent what the distribution is for (e.g. $f_{X|Y}(x|y)$ is the pdf for $X$, given $Y$ evaluated at values $x$ and $y$). I also used a capital $F$ for CDFs and small $f$ for PDFs.
Now I could have just written down "use the jacobian" as an answer, but I can never quite remember exactly what jacobian to use (is it J, or 1 over inverse J, or inverse J?). Going through the tedious process below (of essentially re-deriving the change-of-variables formula for PDFs) is the only way that I feel sure I've got the right answer.
As far as I can tell, what has occurred here is that you define a "random variable" or "source of uncertainty" and call it $Z_t$. This has an unspecified PDF, denoted by $f_{Z_t}(z_{t})$. I am going to assume that by $D(0,1)$ you mean that the pdf has mean 0 and variance 1, but is otherwise unspecified.
So we write the cumulative distribution function (CDF) for $R_t$ as:
$$F_{R_t|\theta}(r_t|\theta)=Pr(R_t<r_t|\theta)$$
Now $R_t$, given $\theta$ is a 1-to-1 function of $Z_t$. Now we substitute this relation $R_t=\mu_t(\theta)+\sqrt{h_t(\theta)}Z_t$ and re-arrange the expression to get:
$$F_{R_t|\theta}(r_t|\theta)=Pr(\mu_t(\theta)+\sqrt{h_t(\theta)}Z_t<r_t|\theta)=Pr(Z_t<\frac{r_t-\mu_t(\theta)}{\sqrt{h_t(\theta)}}|\theta)$$
Now you can see that this last expression is just the CDF of $Z_t$ evaluated at $\frac{r_t-\mu_t(\theta)}{\sqrt{h_t(\theta)}}$. I assume this is what is meant by $z_t(\theta)$. Now to get from the CDF to the PDF, you need to differentiate with respect to $r_t$.
$$f_{R_t|\theta}(r_t|\theta)=\frac{\partial}{\partial r_t}\left[F_{R_t|\theta}(r_t|\theta)\right]=\frac{\partial}{\partial r_t}\left[F_{Z_t|\theta}(z_t(\theta)|\theta)\right]$$
Now you use the chain rule for differentiation to get:
$$f_{R_t|\theta}(r_t|\theta)=\frac{\partial z_t(\theta)}{\partial r_t}\frac{\partial}{\partial z_t(\theta)}\left[F_{Z_t|\theta}(z_t(\theta)|\theta)\right]=\frac{\partial z_t(\theta)}{\partial r_t}f_{Z_t|\theta}(z_t(\theta)|\theta)$$
Now the derivative here can be calculated as:
$$\frac{\partial z_t(\theta)}{\partial r_t}=\frac{\partial \left[\frac{r_t-\mu_t(\theta)}{\sqrt{h_t(\theta)}}\right]}{\partial r_t}=\frac{1}{\sqrt{h_t(\theta)}}$$
Substituting this back into the answer gives the density required:
$$f_{R_t|\theta}(r_t|\theta)=\frac{f_{Z_t|\theta}(z_t(\theta)|\theta)}{\sqrt{h_t(\theta)}}$$
Note that this actually holds regardless of the distribution of the $Z_t$. For nowhere did I make use of the $D(0,1)$ assumption.
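As a quick numerical sanity check of the final formula (my own addition; it uses the normal distribution purely as an example, since nothing above depends on the choice of $D(0,1)$, and the values $\mu=2$, $h=4$ are arbitrary):

```r
# Check f_R(r) = f_Z((r - mu)/sqrt(h)) / sqrt(h) with Z ~ N(0,1), mu = 2, h = 4,
# so that R = mu + sqrt(h) * Z is N(mu, h).
mu <- 2; h <- 4
r <- seq(-4, 8, by = 0.5)
f_formula <- dnorm((r - mu) / sqrt(h)) / sqrt(h)   # the derived density
f_direct  <- dnorm(r, mean = mu, sd = sqrt(h))     # the known density of N(mu, h)
max(abs(f_formula - f_direct))                     # essentially zero
```

The two curves agree to machine precision, as the change-of-variables result says they must.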
| null | CC BY-SA 2.5 | null | 2011-03-19T11:10:07.747 | 2011-03-19T11:10:07.747 | null | null | 2392 | null |
8490 | 1 | null | null | 3 | 620 | I'm trying to make sense of some data and get statistical results on them.
What I have is the following:
```
Subject TimeOfDay Test1 Test2 Test3
A 10:00 valA1 valA2 valA3
B 10:00 valB1 valB2 valB3
C 15:00 valC1 valC2 valC3
D 15:00 valD1 valD2 valD3
```
I'm trying to see if there's a significant difference between test scores (which have a normal distribution) at 10:00 and 15:00.
My problem is that I only have a few subjects for each time of day (3 or 4 in reality).
The good thing here is that I expect the test scores to be correlated.
My questions are:
- How should I test for these correlations? I guess that I can use the whole population for that, and I was thinking of a standard Pearson correlation coefficient and test.
- Since I expect these values to be correlated (and all drawn from a normal distribution), can I just consider each of them as a different subject taking the same test?
For example:
```
Subject TimeOfDay Test
A1 10:00 valA1
A2 10:00 valA2
A3 10:00 valA3
B1 10:00 valB1
B2 10:00 valB2
B3 10:00 valB3
C1 15:00 valC1
C2 15:00 valC2
C3 15:00 valC3
D1 15:00 valD1
D2 15:00 valD2
D3 15:00 valD3
```
If yes, what kind of test should I do to make sure that what I have done is valid? Or do you see a better way to get the same result (a more significant comparison of performance between these times) than the method I've thought of?
Thanks a lot for your thoughts!
| What kind of statistical analysis should I do to aggregate these values? | CC BY-SA 2.5 | null | 2011-03-19T11:38:14.290 | 2011-03-20T18:42:55.053 | 2011-03-19T15:32:26.560 | null | 3791 | [
"correlation",
"multivariate-analysis",
"factor-analysis"
] |
8491 | 2 | null | 3542 | 6 | null | I cover table design in the seminars I offer. My sources are primarily Chapter 8 of Few’s Show Me the Numbers and a paper by Martin Koschat:
Koschat, Martin. 2005. “A Case for Simple Tables,” The American Statistician 59:1, 31-40. [https://doi.org/10.1198/000313005X21429](https://doi.org/10.1198/000313005X21429)
Also, Howard Wainer discusses table design in Visual Revelations.
| null | CC BY-SA 4.0 | null | 2011-03-19T12:09:23.263 | 2019-09-10T19:22:57.187 | 2019-09-10T19:22:57.187 | 7290 | null | null |
8492 | 2 | null | 8483 | 2 | null | You will find help on allocating categorical variables with [UCLA's tutorials](http://www.ats.ucla.edu/stat/spss/dae/logit.htm).
```
logistic regression HIV with age sex
/categorical = sex.
```
You might also find help [here](http://www.childrens-mercy.org/stats/weblog2004/categorical.asp).
| null | CC BY-SA 2.5 | null | 2011-03-19T12:43:19.813 | 2011-03-19T12:43:19.813 | null | null | 1351 | null |
8495 | 2 | null | 8472 | 10 | null | I would say, based on your question, that there is no difference between the three tests, in the sense that you can always choose A, B, and C such that the same decision is arrived at regardless of which criterion you use, provided the p-value is based on the same statistic (i.e. the Z-score).
To use the Z-score, both the mean $\mu$ and variance $\sigma^2$ are assumed to be known, and the distribution is assumed normal (or asymptotically/approximately normal). Suppose the p-value criterion is the usual 5%. Then we have:
$$p=Pr(Z>z)<0.05\rightarrow Z>1.645\rightarrow \frac{X-\mu}{\sigma}>1.645\rightarrow X > \mu+1.645\sigma$$
So we have the triple $(0.05, 1.645, \mu+1.645\sigma)$ which all represent the same cut-offs.
Note that the same correspondence applies to the t-test, although the numbers will be different. A two-tailed test will also have a similar correspondence, but with different numbers.
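To make the correspondence concrete, here is a small illustration (with hypothetical values $\mu=10$, $\sigma=2$ and an observation $x=13.5$, none of which come from the question):

```r
# The p-value criterion, the Z-score cutoff, and the raw cutoff give one decision.
mu <- 10; sigma <- 2           # hypothetical known mean and standard deviation
z_crit <- qnorm(0.95)          # B: critical Z-score, about 1.645
x_crit <- mu + z_crit * sigma  # C: equivalent cutoff on the original scale
x <- 13.5                      # a hypothetical observation
p <- 1 - pnorm((x - mu) / sigma)
c(p < 0.05, (x - mu) / sigma > z_crit, x > x_crit)  # the three criteria agree
```

All three comparisons return the same answer for any observation, which is the sense in which the tests are equivalent.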
| null | CC BY-SA 2.5 | null | 2011-03-19T13:49:39.730 | 2011-03-19T13:49:39.730 | null | null | 2392 | null |
8496 | 5 | null | null | 0 | null | These guidelines are for those who are asking and those who would answer homework-related questions.
### They are rooted in two principles:
- It is okay to ask about homework. This site exists to help people learn and provide a standard repository for questions in statistics and machine learning, both simple and complex, and this includes helping students.
- Providing an answer that doesn't help a student learn is not in the student's own best interest. Therefore you might choose to treat homework questions differently than other questions.
### Asking about homework
- Make a good faith attempt to solve the problem yourself first. If we can't see enough work on your part your question will likely be booed off the stage; it will be voted down and closed.
- Ask about specific problems you have encountered in your initial efforts. If you can't do that yet, try some more of your own work first or searching for more general help.
- Admit that the question is homework. Trying to hide it will just get the question closed faster. Do this by mentioning that it is homework in the question text and by using the homework tag.
- Be aware of school policy (if relevant). If your school has a policy regarding outside help on homework, make sure you are aware of it before you ask for/receive help here. If there are specific restrictions (for example, you can receive help, but not full solutions), include them in the question so that those providing assistance can keep you out of trouble.
- Never use a solution you don't understand. It definitely won't help you later (after school, in later assignments, on tests, etc.) and it could be, at best, very embarrassing if you are asked to explain what you turned in.
### Answering homework questions
- Try to provide explanations that will lead the asker in the correct direction. Genuine understanding is the real goal for students, but trying to provide that is seldom unappreciated for any question.
- It's usually better not to provide a complete solution (or code sample) if you believe it would not help the student, using your best judgment. You can use pseudo-code and general descriptions first. In the spirit of creating a resource, you may come back after a suitable amount of time and edit your response to include more details, if the question seems like such information will have lasting value.
- Don't downvote others who answer homework questions in good faith, even if they break these guidelines. It's not always obvious at first glance that a question is homework, especially when you're not expecting to see it here. It is a good idea to suggest editing the response in a comment.
- Don't ridicule a student because they haven't yet learned something obvious or developed the good habits you'd expect from an expert. Do add a respectful comment or answer that points them towards best practices and better style.
- Don't downvote a homework question that follows the guidelines and was asked in good faith.
- Don't edit a question to add the homework tag. If there's any room for doubt at all, it's best to leave it as is. Instead, add a comment first requesting that the asker clarify the situation.
(Adapted from an [SO post by Joe Coehoorn](http://meta.stackexchange.com/questions/10811/how-to-ask-and-answer-homework-questions/10812#10812) as suggested in a [meta discussion](http://meta.stats.stackexchange.com/q/12/919).)
| null | CC BY-SA 3.0 | null | 2011-03-19T13:54:41.287 | 2011-03-19T13:54:41.287 | 2014-04-23T13:43:43.010 | -1 | 919 | null |
8497 | 4 | null | null | 0 | null | A routine question from a textbook, course, or test used for a class or self-study. This community's policy is to "provide helpful hints." | null | CC BY-SA 2.5 | null | 2011-03-19T13:54:41.287 | 2011-03-19T13:54:41.287 | 2011-03-19T13:54:41.287 | 919 | 919 | null |
8498 | 2 | null | 8462 | 0 | null | Your charts are a diurnal hourly-average bar chart, and a one-week daily-average bar chart, respectively.
| null | CC BY-SA 2.5 | null | 2011-03-19T15:02:19.393 | 2011-03-19T15:02:19.393 | null | null | 3794 | null |
8499 | 2 | null | 8455 | 2 | null | The underlying difficulty behind the question is that situations that have been anticipated have generally been planned for, with mitigation measures in place. This means that an anticipated situation should not even turn into a serious accident.
The serious accidents stem from unanticipated situations, which means that you cannot assess probabilities for them: they are your Rumsfeldian unknown unknowns.
The assumption of independence is clearly invalid - Fukushima Daiichi shows that. Nuclear plants can have common-mode failures. (i.e. more than one reactor becoming unavailable at once, due to a common cause).
Although probabilities cannot be quantitatively calculated, we can make some qualitative assertions about common-mode failures.
For example, if the plants are all built to the same design, then they are more likely to have common-mode failures (such as the known problem with pressurizer cracks in EPRs/PWRs).
If the plant sites share geographic commonalities, they are more likely to have common-mode failures: for example, if they all lie on the same earthquake fault line; or if they all rely on similar rivers within a single climatic zone for cooling (when a very dry summer can cause all such plants to be taken offline).
| null | CC BY-SA 2.5 | null | 2011-03-19T15:10:37.453 | 2011-03-19T15:10:37.453 | null | null | 3794 | null |
8500 | 2 | null | 8459 | 2 | null | I think that there might be a conceptual problem with this approach. If your piece of paper is not flat, it is possible that a "kink" in the paper at a point with no ink dot might be higher than surrounding areas with ink dots. The proposed algorithm might inadvertently average away the very points of interest. Also, "I then noticed that repeating this process many times" might introduce spuriousness due to the Slutsky-Yule effect. Might I suggest that you use more specialised approaches, e.g. treat your data as an image and use the relevant tools from a package such as [this](http://octave.sourceforge.net/image/index.html)?
| null | CC BY-SA 2.5 | null | 2011-03-19T15:40:13.427 | 2011-03-19T15:56:45.143 | 2011-03-19T15:56:45.143 | 226 | 226 | null |
8501 | 1 | null | null | 4 | 1503 | I'm using the following function to calculate Edwards R^2 (formula 19 in Edwards et al. 2008) of a mixed effects model (I hope the implementation is correct):
```
r2lmer <- function(model) {
require(aod) # need the aod package for wald.test function
if (class(model) != "mer") stop("mer object expected")
n <- model@dims[['n']] # the number of observations
p <- model@dims[['p']] # number of parameters
df.mod <- n - p
wald.model <- wald.test(b = fixef(model), Sigma = vcov(model), df = df.mod, Terms = 2:p)
wald.F <- as.numeric(wald.model$result$Ftest[1])
((p - 1) / df.mod * wald.F) / (1 + (p - 1) / df.mod * wald.F)
}
```
In order to internally validate a mixed effects model, I would like to estimate the variance explained when the model is applied to a new data set, that is, when only the data changes, but not the parameter estimates. I'm not sure if it's even theoretically possible. Any help is highly appreciated.
### Edit
If it's not a good idea to use an R^2 statistic for LMMs, what other performance measures could one use to internally validate an LMM? I have an LMM with just one random effects factor (varying intercept due to repeated measures) and several fixed effects factors. I'm primarily interested in the fixed effects, and it is my understanding that Edwards' R^2 is a good measure for the variance explained by the fixed effects. Edwards' R^2 is also recommended for cross-validation in this paper:
Cheng et al. Real longitudinal data analysis for real people: Building a good enough mixed model. Statistics in Medicine. 2010, 29 504-520
| Variance explained of a mixed effects model in a new data set | CC BY-SA 2.5 | null | 2011-03-19T15:56:42.623 | 2011-04-18T10:01:20.853 | 2011-04-05T15:50:11.470 | 919 | 3795 | [
"r",
"mixed-model",
"variance",
"validation"
] |
8502 | 1 | 8684 | null | 6 | 2729 | I have two sets of coefficients from similar data taken at different times. What I want to do is combine the two sets of coefficients, giving greater weight to the more recent set.
The goal is building a predictive model. So say I have dataset A from 2009, and dataset B from 2010.
My coefficients for A are:
```
param1: 0.33
param2: 1.224
param3: -0.119
```
My coefficients for B are:
```
param1: 0.42
param2: 1.309
param3: -0.011
```
If I wanted B to be considered twice as important, would it be sound to use a formula like this:
```
(2*B + A) / 3 = New Coefficient
```
And do that for each parameter? Or am I suggesting something that is fundamentally flawed?
In general, could one combine coefficients effectively using the basic formula:
```
(Weight * DatasetACoefficient + DatasetBCoefficient) / (Weight + 1)
```
Edit
This is a multivariate linear regression problem where the datasets may not be available when someone decides something like this needs to be done.
| Combining 2 sets of coefficients, weighting one of the sets | CC BY-SA 2.5 | null | 2011-03-19T16:32:07.477 | 2011-04-29T01:05:55.050 | 2011-04-29T01:05:55.050 | 3911 | 3491 | [
"time-series",
"multivariate-analysis",
"predictive-models"
] |
8503 | 2 | null | 8502 | 1 | null | There is no principled justification for taking convex linear combinations of coefficients in order to "average" two models.
At best, you could consider that the three coefficients for each dataset are realizations of the same three random variables, and you would be interested in the distribution of each random variable.
What I would do would be to fit the model again with a new dataset (of size $n$) consisting of a random sample of size $\lambda\times n$ taken from the B dataset, and $\left(1-\lambda\right)\times n$ of the A dataset. You could use $\lambda=\frac{2}{3}$ for instance.
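A rough sketch of that idea (with made-up data frames `A` and `B` and a generic linear model, since the actual datasets and model specification are not available here):

```r
# Refit on a blended sample: lambda * n rows from B (2010), the rest from A (2009).
set.seed(1)
lambda <- 2/3
n <- 150                                  # size of the combined sample (arbitrary)
A <- data.frame(y = rnorm(200), x1 = rnorm(200), x2 = rnorm(200))  # stand-in for 2009
B <- data.frame(y = rnorm(200), x1 = rnorm(200), x2 = rnorm(200))  # stand-in for 2010
nB <- round(lambda * n)
mixed <- rbind(B[sample(nrow(B), nB), ],
               A[sample(nrow(A), n - nB), ])
fit <- lm(y ~ x1 + x2, data = mixed)      # refit the model on the blended sample
coef(fit)
```

The weighting then happens through the data rather than through an ad hoc average of the coefficients.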
| null | CC BY-SA 2.5 | null | 2011-03-19T16:40:18.300 | 2011-03-19T16:46:40.243 | 2011-03-19T16:46:40.243 | 1351 | 1351 | null |
8504 | 1 | null | null | 8 | 5197 | The car package's [ellipse function](http://finzi.psych.upenn.edu/R/library/car/html/Ellipses.html) asks for a `radius` parameter. The help page says that it is the "radius of circle generating the ellipse". Could you please tell me which circle this is?
Thank you very much
| Help in drawing confidence ellipse | CC BY-SA 2.5 | null | 2011-03-19T17:49:13.440 | 2011-03-19T19:40:46.057 | 2011-03-19T18:55:07.057 | null | 339 | [
"r",
"confidence-interval",
"multivariate-analysis"
] |
8505 | 1 | null | null | 23 | 9204 | A Poisson regression is a [GLM](http://en.wikipedia.org/wiki/Generalized_linear_model) with a log-link function.
An alternative way to model non-normally distributed count data is to preprocess by taking the log (or rather, log(1+count) to handle 0's). If you do a least-squares regression on log-count responses, is that related to a Poisson regression? Can it handle similar phenomena?
| Poisson regression vs. log-count least-squares regression? | CC BY-SA 3.0 | null | 2011-03-19T17:58:42.400 | 2011-08-25T22:23:22.477 | 2011-08-25T19:12:30.840 | 919 | 3799 | [
"regression",
"poisson-distribution",
"generalized-linear-model"
] |
8506 | 2 | null | 8505 | 25 | null | On the one hand, in a Poisson regression, the left-hand side of the model equation is the logarithm of the expected count: $\log(E[Y|x])$.
On the other hand, in a "standard" linear model, the left-hand side is the expected value of the normal response variable: $E[Y|x]$. In particular, the link function is the identity function.
Now, let us say $Y$ is a Poisson variable and that you intend to normalise it by taking the log: $Y' = \log(Y)$. Because $Y'$ is supposed to be normal you plan to fit the standard linear model for which the left-hand side is $E[Y'|x] = E[\log(Y)|x]$. But, in general, $E[\log(Y) | x] \neq \log(E[Y|x])$. As a consequence, these two modelling approaches are different.
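A quick simulation makes the gap visible (my own illustration, using `log1p`, i.e. $\log(1+y)$, to handle zero counts as the question suggests, with an arbitrary rate $\lambda=5$):

```r
# E[log(1 + Y)] versus log(1 + E[Y]) for Poisson counts: they differ (Jensen).
set.seed(42)
y <- rpois(1e5, lambda = 5)   # simulated Poisson counts
mean(log1p(y))                # E[log(1 + Y)]: what OLS on log(1 + count) targets
log1p(mean(y))                # log(1 + E[Y]): the transform of the mean count
```

By Jensen's inequality (log is concave) the first quantity is strictly smaller, so the two modelling routes really do target different things.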
| null | CC BY-SA 2.5 | null | 2011-03-19T18:27:09.653 | 2011-03-19T18:27:09.653 | null | null | 3019 | null |
8507 | 2 | null | 8504 | 4 | null | An ellipse can be parametrized as the affine image of a circle. For instance, the unit circle $x=\cos(t)$, $y=\sin(t)$ maps under axis scaling to the ellipse:
$$x=a \cos (t)$$
$$y=b \sin (t)$$
```
ellipse(center, shape, radius, log="", center.pch=19, center.cex=1.5,
segments=51, add=TRUE, xlab="", ylab="",
col=palette()[2], lwd=2, fill=FALSE, fill.alpha=0.3, grid=TRUE, ...)
```
You can notice that the `ellipse` function asks for the center and the radius of the circle, as well as the covariance matrix; together these are equivalent to giving the parameters of the affine transformation.
```
center 2-element vector with coordinates of center of ellipse.
shape 2 * 2 shape (or covariance) matrix.
radius radius of circle generating the ellipse.
```
Let us have a look at the `car` package function:
```
ellipse <- t(center + radius * t(unit.circle %*% chol(shape)))
```
The `radius` parameter can be set to 1 if you want to use the covariance matrix directly as the `shape` parameter. I believe it was introduced to help people who prefer to work with normalized matrices instead.
---
Edit: As mentioned in whuber's comment, the two ellipses below are the same.
```
> library(car)
> s=matrix(c(1,0,0,1), nrow=2, ncol=2)
> plot(0, 0, xlim=c(-5,5), ylim=c(-5,5))
> ellipse(c(0,0), 4*s, 1)
> ellipse(c(0,0), s, 2)
```
| null | CC BY-SA 2.5 | null | 2011-03-19T18:49:26.673 | 2011-03-19T19:40:46.057 | 2011-03-19T19:40:46.057 | 1351 | 1351 | null |
8508 | 2 | null | 8487 | 11 | null | According to [p238](http://books.google.com/books?id=cQxN792ttyEC&pg=PA238) of the standard social-science text on meta-analysis, [The Handbook of Research Synthesis](http://books.google.com/books?id=cQxN792ttyEC), the variance of Cohen's $d$ is
$$\left( \frac{n_1 + n_2}{n_1 n_2} + \frac{d^2}{2(n_1+n_2-2)}\right) \left(\frac{n_1 + n_2}{n_1+n_2-2} \right), $$
where $n_1$ and $n_2$ are the sample sizes of the two groups being compared and $d$ is Cohen's $d$.
Taking the square-root of this variance gives the standard error of $d$, needed as input by several of the user-written [meta-analysis packages for Stata](http://www.stata.com/support/faqs/stat/meta.html). (Some of them also accept confidence intervals as input, but they simply convert them to standard errors internally anyway.)
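For convenience, here is the formula above wrapped as a small R function (my own addition), evaluated for a hypothetical example with $n_1=n_2=50$ and $d=0.5$:

```r
# Standard error of Cohen's d, i.e. the square root of the variance formula above.
se_cohens_d <- function(d, n1, n2) {
  v <- ((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2 - 2))) *
    ((n1 + n2) / (n1 + n2 - 2))
  sqrt(v)
}
se_cohens_d(0.5, 50, 50)   # about 0.205
```

An approximate 95% confidence interval is then $d \pm 1.96 \times SE$.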
| null | CC BY-SA 2.5 | null | 2011-03-19T19:59:33.950 | 2011-03-19T20:05:45.363 | 2011-03-19T20:05:45.363 | 449 | 449 | null |
8509 | 2 | null | 8485 | 21 | null | I'd start with:
Casella, George; George, Edward I. (1992). "[Explaining the Gibbs sampler](http://www.jstor.org/stable/2685208)". The American Statistician 46 (3): 167–174. ([FREE PDF](http://biostat.jhsph.edu/~mmccall/articles/casella_1992.pdf))
>
Abstract: Computer-intensive algorithms, such as the Gibbs sampler, have become increasingly popular statistical tools, both in applied and theoretical work. The properties of such algorithms, however, may sometimes not be obvious. Here we give a simple explanation of how and why the Gibbs sampler works. We analytically establish its properties in a simple case and provide insight for more complicated cases. There are also a number of examples.
The American Statistician is often a good source for short(ish) introductory articles that don't assume any prior knowledge of the topic, though they do assume you have the background in probability and statistics that could reasonably be expected of a member of the [American Statistical Association](http://www.amstat.org).
| null | CC BY-SA 3.0 | null | 2011-03-19T20:19:38.393 | 2012-03-19T00:37:18.213 | 2012-03-19T00:37:18.213 | 183 | 449 | null |
8510 | 2 | null | 8485 | 12 | null | One online article that really helped me understand Gibbs sampling is [Parameter estimation for text analysis](http://www.arbylon.net/publications/text-est.pdf) by Gregor Heinrich. It's not a general Gibbs sampling tutorial, but it discusses the method in terms of latent Dirichlet allocation, a fairly popular Bayesian model for document modeling. It goes into the math in fair detail.
One that goes into even more exhaustive mathematical detail is [Gibbs Sampling for the Uninitiated](http://drum.lib.umd.edu/handle/1903/10058). And I mean exhaustive in that it assumes you know some multivariate calculus and then lays out every step from that point. So while there's a lot of math, none of it is advanced.
I assume these will be more useful to you than something that derives more advanced results, such as proofs that Gibbs sampling converges to the correct distribution. The references I point out don't prove this.
| null | CC BY-SA 2.5 | null | 2011-03-19T21:18:56.360 | 2011-03-19T21:18:56.360 | null | null | 3167 | null |
8511 | 1 | null | null | 63 | 126468 | Christopher Manning's [writeup on logistic regression in R](http://nlp.stanford.edu/~manning/courses/ling289/logistic.pdf) shows a logistic regression in R as follows:
```
ced.logr <- glm(ced.del ~ cat + follows + factor(class),
family=binomial)
```
Some output:
```
> summary(ced.logr)
Call:
glm(formula = ced.del ~ cat + follows + factor(class),
family = binomial("logit"))
Deviance Residuals:
Min 1Q Median 3Q Max
-3.24384 -1.34325 0.04954 1.01488 6.40094
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.31827 0.12221 -10.787 < 2e-16
catd -0.16931 0.10032 -1.688 0.091459
catm 0.17858 0.08952 1.995 0.046053
catn 0.66672 0.09651 6.908 4.91e-12
catv -0.76754 0.21844 -3.514 0.000442
followsP 0.95255 0.07400 12.872 < 2e-16
followsV 0.53408 0.05660 9.436 < 2e-16
factor(class)2 1.27045 0.10320 12.310 < 2e-16
factor(class)3 1.04805 0.10355 10.122 < 2e-16
factor(class)4 1.37425 0.10155 13.532 < 2e-16
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 958.66 on 51 degrees of freedom
Residual deviance: 198.63 on 42 degrees of freedom
AIC: 446.10
Number of Fisher Scoring iterations: 4
```
He then goes into some detail about how to interpret coefficients, compare different models, and so on. Quite useful.
However, how much variance does the model account for? A [Stata page on logistic regression](http://www.ats.ucla.edu/stat/stata/output/old/lognoframe.htm) says:
>
Technically, $R^2$ cannot be computed the same way in logistic regression as it is in OLS regression. The pseudo-$R^2$, in logistic regression, is defined as $1 - \frac{L1}{L0}$, where $L0$ represents the log likelihood for the "constant-only" model and $L1$ is the log likelihood for the full model with constant and predictors.
I understand this at the high level. The constant-only model would be without any of the parameters (only the intercept term). Log likelihood is a measure of how closely the parameters fit the data. In fact, Manning sort of hints that the deviance might be $-2 \log L$. Perhaps null deviance is constant-only and residual deviance is $-2 \log L$ of the model? However, I'm not crystal clear on it.
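My tentative attempt, assuming deviance really is $-2 \log L$ (up to a saturated-model constant that I hope cancels in the ratio), would be to plug in the deviances printed in the summary above:

```r
# Pseudo-R^2 = 1 - L1/L0, rewritten in terms of the deviances from summary().
null_dev  <- 958.66   # null deviance (constant-only model)
resid_dev <- 198.63   # residual deviance (full model)
1 - resid_dev / null_dev   # about 0.793
```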
Can someone verify how one actually computes the pseudo-$R^2$ in R using this example?
| How to calculate pseudo-$R^2$ from R's logistic regression? | CC BY-SA 3.0 | null | 2011-03-19T22:44:06.767 | 2023-01-05T19:47:56.373 | 2022-02-18T15:37:58.850 | 11887 | 2849 | [
"r",
"logistic",
"likelihood",
"pseudo-r-squared"
] |
8512 | 1 | null | null | 3 | 69 | Suppose I have some stochastic process $X_t$. At each time $t$, I receive an estimated probability distribution for $x_t$, followed by an observation $x_t$. After receiving a set of observations $\{x_1, \ldots, x_n\}$, I want to go back and re-estimate the probability distribution for each $x_t$, $1 \le t \le n$. What are some ways of doing this? Let's make the assumption that the estimated probability distributions are "pretty good", so that we do want to use the information they contain; what other assumptions do I need in order to make this an interesting/tractable problem?
For example, some (potentially totally misguided) thoughts I had were:
- Suppose $X_t$ is a "first-order continuous" model: $x_t$ depends only on $x_{t-1}$, and if $x_{t-1}, x_{s-1}$ are fairly close, then the probability distributions for $x_t, x_s$ should also be fairly close. Then to revise my estimate of the probability distribution for $x_t$, I could use a kind of kernel density bootstrap sample and mix it with the original estimate: take all points $x_s$ such that $x_{s-1}$ is close to $x_{t-1}$, use these points $x_s$ in some weighted fashion to form a kernel density estimate $K$, and average $K$ with the original estimated probability distribution to get a new estimated probability distribution for $x_t$.
- Suppose $X_t$ is a linear Gaussian model (so that it satisfies the assumptions of a standard Kalman filter). I could try forming a different kind of bootstrap sample: for each $x_t$, use a window around its estimated probability distribution to sample a new observation $x'_t$, and then use Kalman smoothing on the observations $\{x'_t\}$ to get a set of smoothed observations $\{\tilde{x}_t\}$. Use these bootstrap samples to generate new estimated probability distributions.
This is a mostly random question a friend asked me a while ago that I just thought of again, so I don't have a particular application (and hence why I don't have a concrete set of assumptions on the process).
| Updating a set of estimated forecasts | CC BY-SA 2.5 | null | 2011-03-20T04:19:28.777 | 2011-03-20T04:19:28.777 | null | null | 1106 | [
"estimation",
"forecasting",
"stochastic-processes",
"smoothing"
] |
8513 | 1 | null | null | 13 | 6752 | Let's say $y$ is a linear function of $x$ and a dummy $d$. My hypothesis is that $d$ itself is like a hedonic index of a vector of other variables, $Z$. I have support for this in a MANOVA of $Z$ (i.e. $z_1$, $z_2$, ..., $z_n$) on $d$. Is there any way to test the equivalence of these two models:
Model 1: $y = b_0 + b_1 \cdot x + b_2\cdot d + e_1$
Model 2: $y = g_0 + Z\cdot G + e_2$
where $G$ is the column vector of parameters.
| Test equivalence of non-nested models | CC BY-SA 2.5 | null | 2011-03-20T06:47:01.650 | 2013-06-25T10:34:46.157 | 2011-03-20T09:53:58.310 | 2645 | 3671 | [
"r",
"hypothesis-testing",
"regression",
"model-selection"
] |
8514 | 1 | null | null | 12 | 2851 | I am by no means good at statistics, but I think I have come to the right place.
My question is simple:
My problem consists of comparing the population of several states in a small country, but some states have a population of 3,000,000 and some a population of 2,000.
I am painting it on a map, and the "intensity" of the color depends on how the population of every state compares to the population of the whole country.
The problem is that the states with a lot of population are shown with really intense colors and the small states barely have any color.
Is there an easy way to "normalize" or make the data comparable?
I don't know if I am explaining myself properly, but I hope someone can help me.
Please comment if my question is not clear and I will clarify.
Thank you for your help!
| How to make a good color intensity scale? | CC BY-SA 2.5 | null | 2011-03-20T07:38:30.500 | 2011-05-13T21:01:47.547 | 2011-03-20T10:11:32.580 | null | 3803 | [
"data-visualization"
] |
8515 | 1 | 89026 | null | 9 | 3905 | Stable distributions are described by four parameters: the skewness parameter $\beta\in[-1,1]$, the scale parameter $\sigma>0$, the location parameter $\mu\in(-\infty,\infty)$, and the so-called index parameter $\alpha\in(0,2]$. When $\beta$ is zero the distribution is symmetric around $\mu$; when it is positive (resp. negative) the distribution is skewed to the right (resp. to the left). Stable distributions allow fatter tails as $\alpha$ decreases.
When $\alpha$ is strictly less than one and $\beta=1$ the support of the distribution restricts to $(\mu,\infty)$.
The density function only has a closed-form expression for some particular combinations of values for the parameters. When $\mu=0$, $\alpha<1$, $\beta=1$, and $\sigma=\alpha$ it is (see formula (4.4) [here](http://kb.unipune.ac.in/bitstream/123456789/200/1/9.pdf)):
$f(y) = -\frac{1}{\pi y} \sum_{k=1}^{\infty} \frac{\Gamma(k\alpha+1)}{k!} (-y^{-\alpha})^k \sin(\alpha k \pi)$
It has infinite mean and variance.
Question
I would like to use that density in R. I use
```
> alpha <- ...
> dstable(y, alpha=alpha, beta=1, gamma=alpha, delta=0, pm=1)
```
where the [dstable](http://math.furman.edu/~dcs/courses/math47/R/library/fBasics/html/013A-StableDistribution.html) function comes with the fBasics package.
Can you confirm this is the right way to compute that density in R?
Thank you in advance!
EDIT
One reason why I am suspicious is that, in the output, the value of delta is different from that in the input. Example:
```
> library(fBasics)
> alpha <- 0.4
> dstable(4, alpha=alpha, beta=1, gamma=alpha, delta=0, pm=1)
[1] 0.02700602
attr(,"control")
dist alpha beta gamma delta pm
stable 0.4 1 0.4 0.290617 1
```
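As a further check (my own, so possibly misguided), I truncated the series above and evaluated it directly at $y=4$, $\alpha=0.4$:

```r
# Direct evaluation of the truncated series for f(y) at y = 4, alpha = 0.4.
alpha <- 0.4; y <- 4
k <- 1:100   # the terms decay quickly, so 100 terms is far more than enough
terms <- gamma(k * alpha + 1) / factorial(k) * (-y^(-alpha))^k * sin(alpha * k * pi)
f <- -1 / (pi * y) * sum(terms)
f   # about 0.0302, which does not match the 0.0270 printed by dstable above
```

The mismatch seems consistent with my suspicion that `pm = 1` relocates the distribution (the reported delta becomes 0.290617 rather than 0).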
| The positive stable distribution in R | CC BY-SA 2.5 | null | 2011-03-20T07:49:43.677 | 2018-11-10T16:15:06.827 | 2018-11-10T16:15:06.827 | 11887 | 3019 | [
"r",
"stable-distribution"
] |
8516 | 2 | null | 8514 | 3 | null | You could divide by the total population. This would ensure that everything lies between 0 and 1. If the scales are still too disparate, consider a log scale.
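For instance (a sketch with made-up state populations, not your actual data):

```r
pop <- c(3e6, 2000, 150000, 800000)         # hypothetical state populations
share <- pop / sum(pop)                     # proportions: everything in [0, 1]
log_scaled <- log1p(pop) / max(log1p(pop))  # log scale, renormalized to (0, 1]
round(share, 4)
round(log_scaled, 3)
```

The log version keeps the small states visible while preserving the ordering.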
| null | CC BY-SA 2.5 | null | 2011-03-20T08:08:21.577 | 2011-03-20T08:08:21.577 | null | null | 3786 | null |
8517 | 2 | null | 8514 | 5 | null | Good question. One solution is to rescale the colors to have them more uniformly distributed, or distributed with lighter tails... but then your legend has to be clear enough, because deforming the scale is, in a way, unfair...
For example, in R, here is how to rescale a normal sample to a uniform one (what you have may go the other way, since you have large tails and want them smaller, but the principle is the same):
```
X <- array(rnorm(10000), c(100, 100))
ramp <- colorRamp(c("blue", "cyan", "white", "yellow", "red"), space = "rgb")
kleur <- rgb(ramp(seq(0, 1, length = 200)), max = 255)
par(mfrow = c(1, 2))
image(X, col = kleur)                  # image without rescaling
Fn <- ecdf(X)                          # empirical CDF of all cell values
ScaledX <- array(Fn(X), c(100, 100))   # probability-integral transform
image(ScaledX, col = kleur)            # rescaled image
```
| null | CC BY-SA 2.5 | null | 2011-03-20T08:10:04.853 | 2011-03-20T08:15:42.257 | 2011-03-20T08:15:42.257 | 223 | 223 | null |
8518 | 2 | null | 8358 | 1 | null | What time-lag might you expect between cases being recorded and fatality? What time-lag between the start of treatment and its impact on fatality rates?
If either of those numbers is much greater than one year, then there may be a case for aggregating all your data from first year of treatment impact (i.e. 1996+time to impact) to 2010, and just test to see if mean CFR rates vary. If possible, ask whoever rejected the time-series approach whether this would satisfactorily deal with the confounding factors that concern them.
Do look at the specifics of the confounding factors: for example, if palliative care for terminal patients was better in State 2, then patients originally in State 1 for whom treatment had failed might move to State 2 for their final days/weeks/months of life; in that case, their deaths would be registered in State 2, and the numbers you have won't provide much useful information unless you could get the numbers of terminal migrations between states.
| null | CC BY-SA 2.5 | null | 2011-03-20T09:03:55.500 | 2011-03-20T09:03:55.500 | null | null | 3794 | null |
8519 | 2 | null | 8513 | 8 | null | To begin with, you have to define the equivalence concept. One may consider two models equivalent when they produce almost the same forecasting accuracy (this notion would be relevant for time series and panel data); another may be interested in whether the fits from the models are close. The former is the object of various cross-validation schemes (usually jack-knife or some out-of-sample tests; Rob's `accuracy()` does this nicely), the latter calls for the minimization of some information criterion.
In microeconometrics the choice is [$BIC$](http://en.wikipedia.org/wiki/Bayesian_information_criterion), though you may also consider [$AIC$](http://en.wikipedia.org/wiki/Akaike_information_criterion) if you are working with small sample sizes. Note, that the choice based on minimization of information criterion is also relevant for nested models.
A nice discussion is given in the must-have [book](http://books.google.ru/books?id=Zf0gCwxC9ocC&pg=PA278&lpg=PA278&dq=information+criteria+non-nested+models&source=bl&ots=CY24qI6KtY&sig=_kPXnMgkG3g-X3wQaF2oGJ955X8&hl=ru&ei=qbKFTf23DJDAswby69GQAw&sa=X&oi=book_result&ct=result&resnum=10&ved=0CGAQ6AEwCQ#v=onepage&q&f=false) by Cameron and Trivedi (Chapter 8.5 provides an excellent review of the methods); more specific theoretical details can be found in Hong and Preston [here](http://www.stanford.edu/~doubleh/eco273B/nonnestmsc.pdf).
Roughly speaking, of two models the more parsimonious one (having fewer parameters to estimate, and therefore more degrees of freedom) will be suggested as preferable. An information criterion introduces a special penalty function that restricts the inclusion of additional explanatory variables into the linear model, conceptually similar to the restriction introduced by adjusted $R^2$.
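For instance, in R the information-criterion comparison is one line each (`m1` and `m2` below are illustrative regressions on the built-in `mtcars` data, standing in for any two candidate models of the same response):

```r
m1 <- lm(mpg ~ wt, data = mtcars)  # two candidate models for the same response
m2 <- lm(mpg ~ hp, data = mtcars)
AIC(m1, m2)  # smaller is better
BIC(m1, m2)  # heavier penalty per extra parameter
```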
However, you may not be interested just in choosing the model that minimizes the selected information criterion. The equivalence concept implies that some test statistic should be formulated. Therefore you may go for likelihood-ratio tests: the Cox or Vuong $LR$ tests, or the Davidson-MacKinnon $J$ test.
Finally, according to the tags, you may be just interested in `R` functions:
```
library(lmtest)
coxtest(fit1, fit2)
jtest(fit1, fit2)
```
Where `fit1` and `fit2` are two non-nested fitted linear regression models, `coxtest` is the Cox $LR$ test, and `jtest` is the Davidson-MacKinnon $J$ test.
| null | CC BY-SA 3.0 | null | 2011-03-20T09:09:12.033 | 2013-06-25T10:34:46.157 | 2013-06-25T10:34:46.157 | 21054 | 2645 | null |
8520 | 2 | null | 665 | 4 | null | In probability, the distribution is known and knowable in advance - you start with a known probability distribution function (or similar), and sample from it.
In statistics, the distribution is unknown in advance. It may even be unknowable. Assumptions are hypothesised about the probability distribution behind observed data, in order to be able to apply probability theory to that data in order to know whether a null hypothesis about that data can be rejected or not.
There is a philosophical discussion about whether there is such a thing as probability in the real world, or whether it is an ideal figment of our mathematical imaginations, and all our observations can only be statistical.
| null | CC BY-SA 2.5 | null | 2011-03-20T09:27:37.320 | 2011-03-20T09:27:37.320 | null | null | 3794 | null |
8521 | 1 | null | null | 11 | 225 | I have a set of systems in which uncertainties accumulate. These aren't always purely additive - sometimes they are, sometimes they aren't. I've had some success in using fan charts, bar charts with confidence intervals, and box plots for communicating single items.
But how can I show how uncertainties accumulate and combine - while also showing the data points around which the uncertainties lie?
| What graphical methods are useful for visualising how uncertainties aggregate? | CC BY-SA 3.0 | null | 2011-03-20T09:36:05.143 | 2016-02-04T15:47:00.453 | 2015-09-29T22:07:37.863 | 22228 | 3794 | [
"data-visualization",
"confidence-interval",
"uncertainty"
] |
8522 | 1 | null | null | 3 | 310 | We're thinking of adding an interactive, near real-time analytics functionality (a la "Google Analytics") to a product, a Movie Recommender Engine.
We need to let the user interactively create analyses deciding on a case by case basis the analysis dimensions (e.g. by Genre, by Actor, by Publisher), metrics (e.g. Views, Purchases, Ratings) and the time window of the analysis.
We're considering several options like:
- charting libraries + custom built
- reporting engines (e.g. BIRT)
- OEM interactive analysis tools (e.g.
Tableau)
Our solution is Oracle- and Java-based. The front-end is built using the Liferay Portal framework.
| Suggestions for embedded interactive analytical functionalities? | CC BY-SA 2.5 | null | 2011-03-20T12:30:45.973 | 2011-05-24T14:18:05.413 | 2011-03-20T15:53:55.847 | 930 | 3804 | [
"data-visualization",
"interactive-visualization",
"recommender-system"
] |
8523 | 2 | null | 2181 | 4 | null | If you look at [Paul Hewison's webpage](http://www.plymouth.ac.uk/staff/phewson), you can find his free book on Multivariate Statistics and R. Another free book is by Wolfgang Hardle and Leopold Simar. I have been
working my way through Johnson and Wichern, a book that has been used in the US for
over twenty years; you will have to buy this book.
| null | CC BY-SA 2.5 | null | 2011-03-20T12:49:47.597 | 2011-03-21T22:38:56.370 | 2011-03-21T22:38:56.370 | 3582 | 3805 | null |
8524 | 1 | null | null | 4 | 1259 | Here's what I know:
I have read the chapter (p347ff) in Agresti, 1990, regarding dependent two-way tables, and I believe I understand the basics. My problem is that Agresti's model-based approaches seem to rely on large-sample theory.
I have questions from 24 students where they rate something on a scale from 1-5. If I collapse 1-2=Agreement, 3=Neutrality, 4-5=Disagreement I still have relatively sparse data. The relevant question is the strength of evidence that the change in opinions between before and after is not due to random variation in response.
Currently I am using mh_test in the coin package in R. Here are some specific questions:
- How can I see what the mh_test is actually doing? When I type print(mh_test) it will not show me the function, even though I can use the function after loading the package.
- Does the distribution="approximate" option use a bootstrap method to obtain a p-value, and is that a way to deal with the sparseness of the data?
- Does anyone know of an exact version of the test of marginal homogeneity in this situation, and ideally how to implement such a test in R/S?
Thanks for reading. -DB
| Is there an exact version of marginal homogeneity test? | CC BY-SA 2.5 | null | 2011-03-20T13:06:51.123 | 2011-03-20T20:52:23.870 | 2011-03-20T20:52:23.870 | null | null | [
"r",
"heteroscedasticity"
] |
8525 | 2 | null | 8514 | 2 | null | I feel awkward asking it, but are you really committed to using colour to portray a quantitative amount? Is there no way to put a bar in each state, whose height represents the quantity?
Another way might be to show the map with areas representing the geographic areas, together with a map where each state's area is proportional to the population size - similar to what the [sensory homunculus](http://tumbledore.tumblr.com/post/249784948/sensory-homunculus-i-miss-this-high-school) does. But that would be a painful amount of drawing - I don't know of any way to automate it (though one may exist).
| null | CC BY-SA 2.5 | null | 2011-03-20T13:25:00.360 | 2011-03-20T13:25:00.360 | null | null | 3794 | null |
8526 | 1 | 8529 | null | 5 | 646 | I have a rather basic question about [Probabilistic Principal Component Analysis](http://research.microsoft.com/en-us/um/people/cmbishop/downloads/Bishop-PPCA-JRSS.pdf), which I am now trying to apply to a real-world problem.
In PPCA, the crucial assumption is that the generating process of the observations in $R^n$ is $t=Wx +\sigma^2\epsilon$, where x are iid standard gaussians in $R^q$ (with $q \le n$) and $\epsilon$ are iid standard gaussian vectors in $R^n$. The authors find that the MLE solution is $\hat W=U_q (\Lambda_q - \sigma^2 I)^{1/2}$, where $\Lambda_q$ is the diagonal matrix of the $q$ largest eigenvalues of the empirical covariance matrix, and $U_q$ is the corresponding eigenvectors submatrix, and $\sigma^2$ is the average of the remaining smaller eigenvalues (see sec. 3.2 of the linked paper).
My question is simple. The covariance matrix is $C=WW'+\sigma^2 I$. Directly plugging in the above result, we obtain $\hat C=U_q\Lambda_q U_q'$ (the $\sigma^2$ terms cancel out). Can this be correct? The covariance matrix would be rank-deficient. Am I missing something?
| Question about probabilistic principal component analysis | CC BY-SA 2.5 | null | 2011-03-20T13:33:59.587 | 2011-03-21T15:11:04.680 | 2011-03-21T15:11:04.680 | 919 | 30 | [
"pca",
"dimensionality-reduction"
] |
8528 | 1 | 8550 | null | 9 | 2667 | First of all I'd like to apologize for the vague title, I couldn't really formulate a better one just now, please feel free to change, or advice me to change, the title to make it better fit the core of the question.
Now about the question itself: I have been working on a piece of software in which I have come across the idea of using an empirical distribution for sampling; however, now that it's implemented, I am not sure how to interpret it all. Allow me to describe what I have done, and why:
I have a bunch of calculations for a set of objects, yielding a final score. The score as it is, however, is very ad hoc. So in order to make some sense out of the score of a particular object, I perform a large number (N = 1000) of score calculations with mock/randomly generated values, yielding 1000 mock scores. An empirical "score distribution" for that particular object is then estimated from these 1000 mock score values.
I have implemented this in Java (as the rest of the software is also written in the Java environment) using the [Apache Commons Math library](http://commons.apache.org/math/), in particular the [EmpiricalDistributionImpl class](http://commons.apache.org/math/api-2.1/org/apache/commons/math/random/EmpiricalDistributionImpl.html). According to the documentation, this class uses:
>
what amounts to the Variable Kernel Method with Gaussian smoothing.
Digesting the input file:
1. Pass the file once to compute min and max.
2. Divide the range from min to max into binCount "bins."
3. Pass the data file again, computing bin counts and univariate statistics (mean, std. dev.) for each of the bins.
4. Divide the interval (0,1) into subintervals associated with the bins, with the length of a bin's subinterval proportional to its count.
Now my question is: does it make sense to sample from this distribution in order to calculate some sort of expected value? In other words, how much could I trust/rely on this distribution? Could I, for instance, draw conclusions about the significance of observing a score $S$ by checking the distribution?
I realize that this is perhaps an unorthodox way of looking at a problem like this, but I think it would be interesting to get a better grip on the concept of empirical distributions, and how they can/can't be used in analysis.
| How to use/interpret empirical distribution? | CC BY-SA 3.0 | null | 2011-03-20T14:07:14.917 | 2017-09-18T11:18:10.080 | 2017-09-18T11:18:10.080 | 60613 | 3014 | [
"distributions",
"sampling",
"java"
] |
8529 | 2 | null | 8526 | 6 | null | Unless I'm missing something, I think $U_q U_q' \neq I$ here (almost surely, at least). The columns of $U_q$ are orthogonal, not the rows, since the last $n-q$ columns are removed.
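A quick numerical check in R (with an arbitrary random covariance-type matrix) illustrates the point: with $q < n$ columns kept, $U_q'U_q = I_q$ but $U_qU_q' \neq I_n$.

```r
set.seed(1)
S  <- crossprod(matrix(rnorm(50), 10, 5))  # a 5 x 5 covariance-type matrix
U  <- eigen(S)$vectors
Uq <- U[, 1:2]                             # keep q = 2 of the n = 5 columns
max(abs(t(Uq) %*% Uq - diag(2)))           # ~ 0: columns are orthonormal
max(abs(Uq %*% t(Uq) - diag(5)))           # far from 0: U_q U_q' is not I
```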
| null | CC BY-SA 2.5 | null | 2011-03-20T14:19:44.563 | 2011-03-20T14:19:44.563 | null | null | 26 | null |
8530 | 2 | null | 8524 | 5 | null | 1: `mh_test()` is an `S3` generic function, you can check what methods it has using `methods("mh_test")`. To show the source of a non-visible method, you can use `getAnywhere()` or `getS3method()`:
```
library(coin) # for mh_test()
methods("mh_test") # available methods for mh_test(), all non-visible ...
getS3method("mh_test", "table") # get appropriate method -> uses SymmetryProblem ...
getS3method("mh_test", "SymmetryProblem") # get relevant method ...
```
The code probably won't help you very much without reading the theory behind the `coin` package which is explained in `vignette("coin_implementation")`. To check what `mh_test()` does in the asymptotic $\chi^{2}$ case, just compare it with the manual calculation:
```
> one <- sample(LETTERS[1:3], 24, replace=TRUE) # observations condition 1
> two <- sample(LETTERS[1:3], 24, replace=TRUE) # observations condition 2
> cTab <- table(one, two) # cross tabulation
> addmargins(cTab) # marginal frequencies
two
one A B C Sum
A 3 3 0 6
B 2 1 3 6
C 5 3 4 12
Sum 10 7 7 24
> mh_test(cTab) # test for marginal homogeneity
Asymptotic Marginal-Homogeneity Test
data: response by groups (one, two) stratified by block
chi-squared = 2.6588, df = 2, p-value = 0.2646
# manual calculation following textbook formulas: S will be the estimated
# covariance matrix for the differences in marginal frequencies
> S <- -(cTab + t(cTab))
> diag(S) <- rowSums(cTab) + colSums(cTab) - 2*diag(cTab) # change diagonal
> keep <- 1:(nrow(cTab)-1) # last category is pre-determined
> d <- rowSums(cTab) - colSums(cTab) # differences in marginal frequencies
> (chisqVal <- t(d[keep]) %*% solve(S[keep, keep]) %*% d[keep]) # test statistic
2.658824
> (smmhDf <- nrow(cTab)-1) # degrees of freedom
[1] 2
> (pVal <- 1-pchisq(chisqVal, smmhDf)) # p-value from chi-square distribution
0.2646329
```
2: `coin` implements a permutation-test framework, not a bootstrapping framework. `distribution=approximate(B=9999)` means that instead of using all possible permutations of the data for generating the distribution of the test statistic, it only uses a random sample of size `B` of these permutations. The value of the test-statistic will be the same, but the p-value will differ from the $\chi^{2}$ approximation. IMHO, it's a good idea to compare the p-values.
3: An exact permutation test might be done using functions in package `vegan`, but I haven't tried that myself.
| null | CC BY-SA 2.5 | null | 2011-03-20T14:40:12.293 | 2011-03-20T16:35:29.667 | 2011-03-20T16:35:29.667 | 1909 | 1909 | null |
8531 | 2 | null | 8455 | 1 | null | To answer the pure probabilistic question that J Presley presented, using bayer's notation (p = probability of an item failing), the probability of at least one element failing is 1 - P(none fail) = 1 - (1-p)^n. This type of calculation is common in system reliability, where a bunch of components are linked in parallel so that the system continues to function if at least one component is functioning.
You can still use this formula even if each plant item has a different failure probability (p_i). The formula would then be 1- (1-p_1)(1-p_2)...(1-p_n).
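Both formulas can be sketched in one R helper (the failure probabilities below are made-up values for illustration):

```r
# P(at least one of n components fails) = 1 - prod over i of (1 - p_i)
p_any_failure <- function(p) 1 - prod(1 - p)

p_any_failure(rep(0.1, 3))        # equal p: 1 - 0.9^3 = 0.271
p_any_failure(c(0.1, 0.2, 0.05))  # unequal p_i: 1 - 0.9 * 0.8 * 0.95
```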
| null | CC BY-SA 2.5 | null | 2011-03-20T15:18:34.720 | 2011-03-20T15:18:34.720 | null | null | 1945 | null |
8532 | 2 | null | 8522 | 1 | null | I love interactive visualization software like Spotfire and Tableau because it is easy to use and very insightful. My MBA students also become addicted.
I am more familiar with Spotfire, so I can say that they have a nice solution (Spotfire Silver) that allows you to create a dashboard of visualizations and post it to the Web so that users can play with a structured visualization or answer particular queries of interest.
| null | CC BY-SA 2.5 | null | 2011-03-20T15:26:14.557 | 2011-03-20T15:26:14.557 | null | null | 1945 | null |
8534 | 2 | null | 8515 | 6 | null | What I think is happening is that in the output `delta` may be reporting an internal location value, while in the input `delta` is describing the shift. [There seems to be a similar issue with `gamma` when `pm=2`.] So if you try increasing the shift to 2
```
> dstable(4, alpha=0.4, beta=1, gamma=0.4, delta=2, pm=1)
[1] 0.06569375
attr(,"control")
dist alpha beta gamma delta pm
stable 0.4 1 0.4 2.290617 1
```
then you add 2 to the location value.
With `beta=1` and `pm=1` you have a positive random variable with a distribution lower bound at 0.
```
> min(rstable(100000, alpha=0.4, beta=1, gamma=0.4, delta=0, pm=1))
[1] 0.002666507
```
Shift by 2 and the lower bound rises by the same amount
```
> min(rstable(100000, alpha=0.4, beta=1, gamma=0.4, delta=2, pm=1))
[1] 2.003286
```
But if you want the `delta` input to be the internal location value rather than the shift or lower bound, then you need to use a different specification for the parameters. For example if you try the following (with `pm=3` and trying `delta=0` and the `delta=0.290617` you found earlier), you seem to get the same `delta` in and out. With `pm=3` and `delta=0.290617` you get the same density of 0.02700602 you found earlier and a lower bound at 0. With `pm=3` and `delta=0` you get a negative lower bound (in fact -0.290617).
```
> dstable(4, alpha=0.4, beta=1, gamma=0.4, delta=0, pm=3)
[1] 0.02464434
attr(,"control")
dist alpha beta gamma delta pm
stable 0.4 1 0.4 0 3
> dstable(4, alpha=0.4, beta=1, gamma=0.4, delta=0.290617, pm=3)
[1] 0.02700602
attr(,"control")
dist alpha beta gamma delta pm
stable 0.4 1 0.4 0.290617 3
> min(rstable(100000, alpha=0.4, beta=1, gamma=0.4, delta=0, pm=3))
[1] -0.2876658
> min(rstable(100000, alpha=0.4, beta=1, gamma=0.4, delta=0.290617, pm=3))
[1] 0.004303485
```
You may find it easier simply to ignore `delta` in the output, and so long as you keep `beta=1` then using `pm=1` means `delta` in the input is the distribution lower bound, which it seems you want to be 0.
| null | CC BY-SA 2.5 | null | 2011-03-20T15:45:20.570 | 2011-03-20T15:45:20.570 | null | null | 2958 | null |
8536 | 2 | null | 665 | 72 | null | It's misleading to say that statistics is simply the inverse of probability. Yes, statistical questions are questions of inverse probability, but they are ill-posed inverse problems, and this makes a big difference in terms of how they are addressed.
Probability is a branch of pure mathematics--probability questions can be posed and solved using axiomatic reasoning, and therefore there is one correct answer to any probability question.
Statistical questions can be converted to probability questions by the use of probability models. Once we make certain assumptions about the mechanism generating the data, we can answer statistical questions using probability theory. HOWEVER, the proper formulation and checking of these probability models is just as important, or even more important, than the subsequent analysis of the problem using these models.
One could say that statistics comprises two parts. The first part is the question of how to formulate and evaluate probabilistic models for the problem; this endeavor lies within the domain of "philosophy of science". The second part is the question of obtaining answers after a certain model has been assumed. This part of statistics is indeed a matter of applied probability theory and, in practice, involves a fair deal of numerical analysis as well.
See:
[http://bactra.org/reviews/error/](http://bactra.org/reviews/error/)
| null | CC BY-SA 3.0 | null | 2011-03-20T16:02:25.653 | 2016-03-25T04:33:28.400 | 2016-03-25T04:33:28.400 | 108339 | 3567 | null |
8537 | 2 | null | 2181 | 6 | null | Hands down best basic text on multivariate regression is (still) Cohen, J., Cohen, P., West, S.G. & Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, (L. Erlbaum Associates, Mahwah, N.J., 2003).
Cohen made his name in statistics yet was a psychologist; still, if you want a social psychology-focused treatment of multivariate statistics, one not limited to multivariate regression (although it definitely favors it over ANOVA & MANOVA, which ought to be banned by some sort of Intellectual Human Rights Commission), then your best bet is Judd, C.M., McClelland, G.H. & Ryan, C.S. Data analysis: a model comparison approach, (Routledge/Taylor and Francis, New York, NY, 2008). Judd also has a very, very good chapter on multivariate regression in Judd, C.M. Everyday Data Analysis in Social Psychology: Comparisons of Linear Models. in Handbook of research methods in social and personality psychology (eds. Reis, H.T. & Judd, C.M.) 370-392 (Cambridge University Press, New York, 2000).
I agree that Gelman, A. & Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models, (Cambridge University Press, Cambridge; New York, 2007), is amazing, but it is really more geared to someone already comfortable w/ the basics of multivariate regression--it's primarily about multilevel modeling. It is also focused on observational study methodology--not experimental (Judd is best for that; Cohen okay too).
If you want something on interactions in multivariate -- which you likely will if you are using experimental methods -- then best two texts are Aiken, L.S., West, S.G. & Reno, R.R. Multiple Regression: Testing and Interpreting Interactions, (Sage Publications, Newbury Park, Calif., 1991) & Jaccard, J. & Turrisi, R. Interaction Effects in Multiple Regression, (Sage Publications, Thousand Oaks, Calif., 2003). (Both Cohen & Cohen & Judd do treat this topic, though.)
On "free" side, you probably know about [http://faculty.chass.ncsu.edu/garson/PA765/statnote.htm](http://faculty.chass.ncsu.edu/garson/PA765/statnote.htm)
Last bit of advice: Never ever split your continuous variables!!! It's amazing how many social psychologists, used to ANOVA, still do this even as they make use of multivariate techniques such as regression analysis!
| null | CC BY-SA 2.5 | null | 2011-03-20T16:20:26.003 | 2011-03-20T16:33:12.413 | 2011-03-20T16:33:12.413 | 11954 | 11954 | null |
8538 | 2 | null | 8501 | 2 | null | I tend to be more and more convinced that this just generally isn't a good idea, because the meaning of the R^2 isn't really the same as in a conventional linear regression. As such, one always runs into interpretation issues, and it often distracts from the meat of the story. Time spent writing a good description of the fit of the model is time better spent... but I could be wrong.
[A good answer to a question on how to write up the fit of an LMM model would be really handy (which a cursory search didn't show me).]
| null | CC BY-SA 2.5 | null | 2011-03-20T18:11:16.920 | 2011-03-20T18:11:16.920 | null | null | 601 | null |
8539 | 2 | null | 8514 | 6 | null | I'm sorry, but to me it sounds like you are trying to fix what isn't broken. In fact, you might even be trying to break what isn't broken. When you have a quantitative variable (here, population) that spans a wide range, then whatever metric you use to represent it should also span a wide range.
But for all things related to color (and esp. maps), the key source is, I think [ColorBrewer](http://colorbrewer2.org/)
| null | CC BY-SA 2.5 | null | 2011-03-20T18:38:25.820 | 2011-03-20T18:38:25.820 | null | null | 686 | null |
8540 | 2 | null | 8490 | 2 | null | If you only have three or four people, then the right test is IOTT - the inter-ocular trauma test. That is, it hits you between the eyes. To allow the data to hit you properly, I would recommend graphics. In particular, I'd put time on the x-axis, score on the y-axis, and put lines for each person.
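A minimal sketch of such a plot in base R (the scores and times below are invented for illustration):

```r
# One line per person: time on the x-axis, score on the y-axis
scores <- matrix(c(5, 6, 7, 7,    # person A  (made-up data)
                   4, 4, 5, 6,    # person B
                   6, 5, 5, 4),   # person C
                 nrow = 4)
matplot(1:4, scores, type = "l", lty = 1,
        xlab = "time", ylab = "score")
legend("topleft", legend = c("A", "B", "C"), col = 1:3, lty = 1)
```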
| null | CC BY-SA 2.5 | null | 2011-03-20T18:42:55.053 | 2011-03-20T18:42:55.053 | null | null | 686 | null |
8541 | 1 | 8607 | null | 5 | 414 | I've been trying to get my hands on a substantial resource for using Gibbs sampling in hybrid Bayesian networks, that is, networks with both continuous and discrete variables.
So far I can't say I have succeeded. I'm interested in hybrid networks where there are no constraints regarding discrete children having continuous parents.
Gibbs sampling is a very widely employed method for approximate inference in Bayesian methods, and yet I can't seem to find detailed resources that focus on hybrid networks.
| Resources about Gibbs sampling in hybrid Bayesian networks | CC BY-SA 3.0 | null | 2011-03-20T19:09:39.007 | 2016-05-01T20:20:44.297 | 2016-05-01T20:20:44.297 | 7290 | 3280 | [
"machine-learning",
"bayesian",
"references",
"gibbs"
] |
8543 | 2 | null | 8358 | 3 | null | I don't know why I took the time to answer this. Is it because I can, or maybe because DrWho seems to think it is very important? In either case ....
Though well intentioned
“Time series expert modeler of IBM SPSS Forecast v19 was used. Both exponential smoothing models and ARIMA models were examined. Outliers were detected and prevented from influencing parameter estimates”
may have suffered from an inability to detect level shifts, i.e. “intercept changes”, which are a sequence of pulses with the same value and sign. Note below that a reasonable model for STATE1 using all 17 values suggests a level shift at 2004 (period 11). This model [AR(1)] was used to cleanse STATE1 of unspecified background factors that may have been present to cause significant changes in Y given X.
Y(T) = -87.899
+[X1(T)][(+ .158)] M_CASES
+[X2(T)][(- 77.4369)] :PULSE 7 I~P00007STATE1
+[X3(T)][(- 65.2775)] :PULSE 15 I~P00015STATE1
+[X4(T)][(- 112.77 )] :LEVEL SHIFT 11 I~L00011STATE1
+[X5(T)][(+ 43.0999)] :PULSE 9 I~P00009STATE1
+[X6(T)][(- 58.4117)] :PULSE 4 I~P00004STATE1
+ [(1- .840B** 1)]**-1 [A(T)]
Leading to a cleansed set of values FOR STATE1
1994 383.0000000000
1995 257.0000000000
1996 263.0000000000
1997 465.4116693551
1998 149.0000000000
1999 434.0000000000
2000 246.4369202361
2001 275.0000000000
2002 134.9000542626
2003 142.0000000000
2004 131.0000000000
2005 336.0000000000
2006 108.0000000000
2007 75.0000000000
2008 118.2775018144
2009 40.0000000000
2010 26.0000000000
Notice that a simple line graph of Y against X visually supports a change in the relationship between Y and X on or about period 11 (2004), such that the Y values are clearly lower than expected for the period 2004-2010 (11-17) as compared to the period 1994-2003 (1-10). This is a classic case of an outside factor impacting either Y or X (but not both!) starting at time period 11. Normal statistical commentary would refer to this level shift as a “lurking variable” confounding simple analysis if untreated.
For STATE2
Y(T) = -6.4072
+[X1(T)][(+ .213)] M_CASES
+[X2(T)][(- 254.25 )] :PULSE 17 I~P00017STATE2
+[X3(T)][(+ 214.32 )] :PULSE 12 I~P00012STATE2
+[X4(T)][(- 92.6947)] :PULSE 16 I~P00016STATE2
+[X5(T)][(- 98.6907)] :PULSE 15 I~P00015STATE2
+[X6(T)][(+ 39.8298)] :PULSE 13 I~P00013STATE2
+ [A(T)]
1994 121.0000000000
1995 227.0000000000
1996 179.0000000000
1997 76.0000000000
1998 195.0000000000
1999 275.0000000000
2000 253.0000000000
2001 199.0000000000
2002 133.0000000000
2003 237.0000000000
2004 228.0000000000
2005 1285.6763010529
2006 488.1702389682
2007 645.0000000000
2008 635.6907401611
2009 648.6947149772
2010 748.2497352909
Note that the Pulses were identified GIVEN the number of cases.
Now proceeding with the Chow test to test for a significant difference between the two sets of regression coefficients:
We get the following OLS model for the combined 34 observations (cleansed values used),
with Error Sum of Squares = 363716.
For STATE1 we get
Y(T) = -49.293
+[X1(T)][(+ .150)] M_CASES
+ [A(T)]
Sum of Squares 142759.
And for STATE2
Y(T) = -6.4072
+[X1(T)][(+ .213)] M_CASES
+ [A(T)]
Sum of Squares 1445.64
As before we have the following F test
Numerator = [363716 – (142759 + 1445)] / 2 = 109,756
Denominator = 363716/30 = 12,123
The computed F value of 9.0 with 2 and 30 degrees of freedom is significant at alpha less than .001. Thus one could conclude that there is a statistically significant difference between the two states at about a 99.9% level of confidence, given that there was a significant effect in STATE1 at or about 2004.
| null | CC BY-SA 2.5 | null | 2011-03-20T20:31:57.940 | 2011-03-20T20:31:57.940 | null | null | 3382 | null |
8544 | 2 | null | 8515 | 6 | null | Also of note: Martin Maechler just refactored the code for the stable distribution and added some improvements.
His new package [stabledist](http://cran.r-project.org/package=stabledist) will be used by fBasics as well, so you may want to give it a look.
| null | CC BY-SA 2.5 | null | 2011-03-20T20:50:31.327 | 2011-03-20T20:50:31.327 | null | null | 334 | null |
8545 | 1 | null | null | 4 | 7984 | I have some problems using (and finding) the Chow test for structural breaks in a regression analysis using R. I want to find out if there are structural changes when including another variable (representing 3 spatial subregions).
Namely, is the regression with the subregions better than the overall model? For this I need some statistical validation.
I hope my problem is clear.
Kind regards
marco
Toy example in R:
```
library(mlbench) # dataset
data("BostonHousing")
# data preparation
BostonHousing$region <- ifelse(BostonHousing$medv <=
quantile(BostonHousing$medv)[2], 1,
ifelse(BostonHousing$medv <=
quantile(BostonHousing$medv)[3], 2,
ifelse(BostonHousing$medv >
quantile(BostonHousing$medv)[4], 3, 1)))
BostonHousing$region <- as.factor(BostonHousing$region)
# regression without any subregion
reg1<- lm(medv ~ crim + indus + rm, data=BostonHousing)
summary(reg1)
# are there structural breaks using the factor "region" which
# indicates 3 spatial subregions
reg2<- lm(medv ~ crim + indus + rm + region, data=BostonHousing)
```
------- subsequent entry
I struggled with your suggested package "strucchange", not knowing how to use the "from" and "to" arguments correctly with my factor "region". Nevertheless, I found one hint to calculate it by hand (https://stat.ethz.ch/pipermail/r-help/2007-June/133540.html). This results in the following output, but now I am not sure if my interpretation is valid. The results from the example above are shown below.
Does this mean that region 3 is significantly different from region 1, while region 2 is not? Further, does each parameter (e.g. region1:crim) represent the beta for each regime, and thus the model for that region? Finally, does the ANOVA state that there is a significant difference between these models, and that the consideration of regimes leads to a better model?
Thank you for your advices!
Best Marco
```
fm0 <- lm(medv ~ crim + indus + rm, data=BostonHousing)
summary(fm0)
fm1 <- lm(medv ~ region / (crim + indus + rm), data=BostonHousing)
summary(fm1)
anova(fm0, fm1)
```
Results:
```
Call:
lm(formula = medv ~ region/(crim + indus + rm), data = BostonHousing)
Residuals:
Min 1Q Median 3Q Max
-21.079383 -1.899551 0.005642 1.745593 23.588334
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.40774 3.07656 4.033 6.38e-05 ***
region2 6.01111 7.25917 0.828 0.408030
region3 -34.65903 4.95836 -6.990 8.95e-12 ***
region1:crim -0.19758 0.02415 -8.182 2.39e-15 ***
region2:crim -0.03883 0.11787 -0.329 0.741954
region3:crim 0.78882 0.22454 3.513 0.000484 ***
region1:indus -0.34420 0.04314 -7.978 1.04e-14 ***
region2:indus -0.02127 0.06172 -0.345 0.730550
region3:indus 0.33876 0.09244 3.665 0.000275 ***
region1:rm 1.85877 0.47409 3.921 0.000101 ***
region2:rm 0.20768 1.10873 0.187 0.851491
region3:rm 7.78018 0.53402 14.569 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.008 on 494 degrees of freedom
Multiple R-squared: 0.8142, Adjusted R-squared: 0.8101
F-statistic: 196.8 on 11 and 494 DF, p-value: < 2.2e-16
> anova(fm0, fm1)
Analysis of Variance Table
Model 1: medv ~ crim + indus + rm
Model 2: medv ~ region/(crim + indus + rm)
Res.Df RSS Df Sum of Sq F Pr(>F)
1 502 18559.4
2 494 7936.6 8 10623 82.65 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
| Identifying structural breaks in regression with Chow test | CC BY-SA 2.5 | 0 | 2011-03-20T20:54:24.877 | 2011-03-21T12:39:22.767 | 2011-03-21T12:39:22.767 | 1390 | null | [
"r",
"chow-test",
"structural-change"
] |
8546 | 2 | null | 8456 | 2 | null | Simple answer:
Select one set of X and Y values, and create your XY chart.
Copy the second set of X and Y values, select the chart, and use paste special to add the data as a new series.
| null | CC BY-SA 2.5 | null | 2011-03-20T21:41:48.500 | 2011-03-20T21:41:48.500 | null | null | null | null |
8547 | 2 | null | 8545 | 5 | null | The [strucchange](http://cran.r-project.org/web/packages/strucchange/index.html) package contains Chow and F tests for structural changes in regression models. The package comes with a vignette which shows how to use the package.
| null | CC BY-SA 2.5 | null | 2011-03-20T21:51:03.190 | 2011-03-20T21:51:03.190 | null | null | 1390 | null |
8548 | 2 | null | 8541 | 0 | null | I think this is still an open research question and there has been little consensus on the best way to do this.
| null | CC BY-SA 2.5 | null | 2011-03-20T22:23:16.703 | 2011-03-20T22:23:16.703 | null | null | 3816 | null |
8549 | 2 | null | 6033 | 4 | null | There are a number of ways that "a structural break" may occur.
If there is a change in the Intercept or a change in Trend in "the latter portion of the time series", then one would be better suited to perform Intervention Detection (N.B. this is the empirical identification of the significant impact of an unspecified Deterministic Variable, such as a Level Shift, a Change in Trend, or the onset of a Seasonal Pulse). Intervention Detection is then a precursor to Intervention Modelling, where a suggested variable is included in the model. You can find information on the web by googling "AUTOMATIC INTERVENTION DETECTION". Some authors use the term "OUTLIER DETECTION", but like a lot of statistical language this can be confusing/imprecise. Detected Interventions (a significant change in the mean of the residuals) can be any of the following:
a 1 period change in Level ( i.e. a Pulse )
a multi-period contiguous change in Level ( i.e. a change in Intercept )
a systematic Pulse ( i.e. a Seasonal Pulse )
a trend change (i.e. 1,2,3,4,5,7,9,11,13,15 ..... )
These procedures are easily programmed in R/SAS/Matlab and are routinely available in a number of commercially available time series packages; however, there are many pitfalls that you need to be wary of, such as whether to identify the stochastic structure first or to do Intervention Detection on the original series. This is like the chicken-and-egg problem. Early work in this area was limited to type 1's and as such will probably be insufficient for your needs, as your examples illustrate LEVEL SHIFTS.
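For intuition, a level shift at a single known candidate break point can be screened with a simple step-dummy regression. The R sketch below is a hypothetical illustration (the break location and shift size are invented), not AUTOBOX's detection algorithm:

```r
# Hypothetical illustration: screening for a level shift at a known
# candidate break point with a step dummy (not AUTOBOX's algorithm)
set.seed(1)
n <- 100
y <- c(rnorm(60, mean = 0), rnorm(40, mean = 3))  # level shift of +3 at t = 61
step <- as.numeric(seq_len(n) > 60)               # 0 before the break, 1 after
fit <- lm(y ~ step)
summary(fit)$coefficients["step", ]               # estimated shift near 3
```

Automatic procedures in effect repeat such tests over all candidate break points and intervention types, which is where the pitfalls above (search bias, interaction with the stochastic structure) come in.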
There is a lot of material on the web and even a free program at [http://www.autobox.com/30day.exe](http://www.autobox.com/30day.exe) that even allows you to use your own data for 30 days. You might learn a lot "by simply watching", as Yogi once said, and replicate their results.
The web references for the exact equations for you to use can be found starting at page 134 in
[http://www.autobox.com/pdfs/autoboxusersguide.pdf](http://www.autobox.com/pdfs/autoboxusersguide.pdf) . I am one of the authors of AUTOBOX.
| null | CC BY-SA 2.5 | null | 2011-03-21T00:08:22.847 | 2011-04-01T09:08:51.430 | 2011-04-01T09:08:51.430 | 3382 | 3382 | null |
8550 | 2 | null | 8528 | 5 | null | Empirical distributions are used all the time for inference so you're definitely on the right track! One of the most common use of empirical distributions is for bootstrapping. In fact, you don't even have to use any of the machinery you've described above. In an nutshell, you make many draws (with replacement) from the original samples in a uniform fashion and the results can be used to calculate the confidence intervals on your previously calculated statistical quantities. Furthermore, these samples have well developed theoretical convergence properties. Check out the wikipedia article on the topic [here](http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29).
| null | CC BY-SA 2.5 | null | 2011-03-21T02:15:26.610 | 2011-03-21T08:29:02.110 | 2011-03-21T08:29:02.110 | 2116 | 3786 | null |
8551 | 1 | null | null | 2 | 3141 | What do you call a curve that is just the first half of a bell curve. For example, let's say in a typical bell curve of letter grades, a few students get F grades most get C grades and just a few get A grades.
I'd like a curve that is the first half of the bell curve so that a few students get F grades and the most common grade is an A grade.
(I'm not actually doing this for grading. It's just the example that came to mind.)
| What do you call just the first half of a bell curve? | CC BY-SA 2.5 | null | 2011-03-21T03:31:20.583 | 2017-05-26T01:00:50.850 | null | null | 3820 | [
"distributions"
] |
8552 | 2 | null | 8551 | 7 | null | A "bell curve" in the non-technical sense could refer to one of a family of statistical distributions which are bell-shaped. In the context of grading I've only ever seen the normal distribution (and it is by far the most common in general), but others include the logistic, t, etc. The [half-normal distribution](https://secure.wikimedia.org/wikipedia/en/wiki/Half-normal_distribution) is generated by taking the absolute value of a (zero-mean) normal distribution. That is to say, it represents the case where most students get an F and few get A's. By making a suitable transformation of this distribution (taking the negative to get the mirror image then shifting so that the maximum is 100 points), you can get the distribution you're after.
| null | CC BY-SA 2.5 | null | 2011-03-21T03:43:56.787 | 2011-03-22T05:20:15.463 | 2011-03-22T05:20:15.463 | 2975 | 2975 | null |
8553 | 2 | null | 1556 | 10 | null | Why stop at $t$-tests?
You can think of two variables being uncorrelated as two orthogonal vectors, exactly like the $x$ and $y$ axes in a two dimensional Cartesian coordinate system.
When either of two vectors, say $\mathbf{x}$ and $\mathbf{y}$, is correlated with the other, there will be a certain part of $\mathbf{x}$ that can be projected onto $\mathbf{y}$ and vice versa. With that in mind, it's fairly easy to see that, since
$$
\begin{align*}
\left<\mathbf{x},\mathbf{y}\right>&=\|\mathbf{x}\|\|\mathbf{y}\|\cos\left(\theta\right)\\
\frac{\left<\mathbf{x},\mathbf{y}\right>}{\|\mathbf{x}\|\|\mathbf{y}\|}&=\cos\left(\theta\right)=r
\end{align*}
$$
where $r$ is Pearson's correlation coefficient and $\left<\cdot,\cdot\right>$ is the inner product of its arguments. When I learned this I was totally blown away by how geometrically simple the idea of correlation is. And this is definitely not the only way to measure the correlation between two (or more) variables.
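A quick numerical check of this identity in R — note that the vectors must be mean-centered for the cosine to equal Pearson's $r$:

```r
# Pearson's r as the cosine of the angle between mean-centered vectors
set.seed(1)
x <- rnorm(20)
y <- 0.5 * x + rnorm(20)
xc <- x - mean(x); yc <- y - mean(y)
cos_theta <- sum(xc * yc) / (sqrt(sum(xc^2)) * sqrt(sum(yc^2)))
all.equal(cos_theta, cor(x, y))  # TRUE
```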
Significance testing is a different ball game. Often we want to know by how much two (or more) groups differ on some outcome variable as a result of some manipulation that was performed on said groups. Like Brian said, you want to know if the two groups come from the same distribution, thus you compute the probability of sampling the mean difference (scaled by the standard error of the mean) that you obtained from your experiment, given that the null hypothesis (there is no difference in the means) is true. In behavioral research (and often elsewhere), if this probability is less than 0.05, you can conclude that the difference in the two (or more) means is likely due to your manipulation.
EDIT: Dilip Sarwate pointed out that two uncorrelated variables can be statistically dependent, so I took out the first part. Thanks for that.
| null | CC BY-SA 3.0 | null | 2011-03-21T04:16:34.817 | 2012-03-24T22:50:53.610 | 2012-03-24T22:50:53.610 | 2660 | 2660 | null |
8555 | 5 | null | null | 0 | null | The [Pearson correlation](http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient) between two random variables $X$ and $Y$ is defined as
$$ {\rm cor}(X,Y) = \frac{ E(XY) - E(X)E(Y) }{ \sqrt{ {\rm var}(X) {\rm var}(Y) } }$$
and is bounded between $-1$ (perfect negative linear relationship) and $1$ (perfect positive linear relationship). The numerator of ${\rm cor}(X,Y)$ is known as the [covariance](http://en.wikipedia.org/wiki/Covariance) between $X$ and $Y$.
If the correlation is $0$, we say the two variables are uncorrelated (there is no linear relationship between them).
| null | CC BY-SA 4.0 | null | 2011-03-21T04:30:19.377 | 2022-09-04T18:17:14.293 | 2022-09-04T18:17:14.293 | 919 | 4856 | null |
8556 | 4 | null | null | 0 | null | A measure of the degree of linear association among a pair of variables. | null | CC BY-SA 3.0 | null | 2011-03-21T04:30:19.377 | 2012-04-23T01:23:09.580 | 2012-04-23T01:23:09.580 | 919 | 2660 | null |
8557 | 1 | null | null | 14 | 15869 | The whole point of AIC or any other information criterion is that less is better. So if I have two models M1: y = a0 + XA + e and M2: y = b0 + ZB + u, and if the AIC of the first (A1) is less than that of the second (A2), then M1 has a better fit from the information theory standpoint. But is there any cutoff benchmark for the difference A1-A2? How much less is actually less? In other words, is there a test for (A1-A2) other than just eyeballing?
Edit: Peter/Dmitrij... Thanks for responding. Actually, this is a case where my substantive expertise is conflicting with my statistical expertise. Essentially, the problem is NOT choosing between two models, but in checking if two variables which I know to be largely equivalent add equivalent amounts of information (Actually, one variable in the first model and a vector in the second. Think about the case of a bunch of variables as against an index of them.). As Dmitrij pointed out, the best bet seems to be the Cox Test. But is there a way of actually testing the difference between the information contents of the two models?
| Testing the difference in AIC of two non-nested models | CC BY-SA 2.5 | null | 2011-03-21T04:57:51.157 | 2013-06-25T10:38:20.780 | 2011-03-21T11:28:01.733 | 3671 | 3671 | [
"regression",
"aic"
] |
8558 | 2 | null | 8557 | 15 | null | Is the question of curiosity, i.e. you are not satisfied by my answer [ here ](https://stats.stackexchange.com/questions/8513/test-equivalence-of-non-nested-models/8519#8519)? If not...
The further investigation of this tricky question showed that there does exist a commonly used rule of thumb, which states that two models are indistinguishable by the $AIC$ criterion if the difference $|AIC_1 - AIC_2| < 2$. You will actually find the same in Wikipedia's article on [$AIC$](http://en.wikipedia.org/wiki/Akaike_information_criterion#How_to_apply_AIC_in_practice) (note the link is clickable!). Just for those who do not click the links:
>
$AIC$ estimates relative support for a model. To apply this in practice, we start with a set of candidate models, and then find the models' corresponding $AIC$ values. Next, identify the minimum $AIC$ value. The selection of a model can then be made as follows.
As a rough rule of thumb, models having their $AIC$ within $1–2$ of the minimum have substantial support and should receive consideration in making inferences. Models having their $AIC$ within about $4–7$ of the minimum have considerably less support, while models with their $AIC > 10$ above the minimum have either essentially no support and might be omitted from further consideration or at least fail to explain some substantial structural variation in the data.
A more general approach is as follows...
Denote the $AIC$ values of the candidate models by $AIC_1, AIC_2, \ldots, AIC_R$. Let $AIC_{\min}$ denote the minimum of those values. Then $e^{(AIC_{\min}-AIC_i)/2}$ can be interpreted as the relative probability that the $i$-th model minimizes the (expected estimated) information loss.
As an example, suppose that there were three models in the candidate set, with $AIC$ values $100$, $102$, and $110$. Then the second model is $e^{(100-102)/2} = 0.368$ times as probable as the first model to minimize the information loss, and the third model is $e^{(100-110)/2} = 0.007$ times as probable as the first model to minimize the information loss. In this case, we might omit the third model from further consideration and take a weighted average of the first two models, with weights $1$ and $0.368$, respectively. Statistical inference would then be based on the weighted multimodel.
Nice explanation and useful suggestions, in my opinion. Just don't be afraid of reading what is clickable!
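The worked example in the quote is easy to reproduce in R:

```r
# Relative likelihoods (and Akaike weights) from the AIC values 100, 102, 110
aic <- c(100, 102, 110)
rel <- exp((min(aic) - aic) / 2)  # relative prob. of minimizing information loss
round(rel, 3)                     # 1.000 0.368 0.007
w <- rel / sum(rel)               # Akaike weights for model averaging
```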
In addition, note once more that $AIC$ is less preferable for large-scale data sets. In addition to $BIC$, you may find it useful to apply the bias-corrected version of the $AIC$ criterion, $AICc$ (you may use this `R` [code](http://www.awblocker.com/AICc.R) or the formula $AICc = AIC + \frac{2p(p+1)}{n-p-1}$, where $p$ is the number of estimated parameters and $n$ the sample size). The rule of thumb will be the same though.
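That formula is a one-liner for a fitted `lm` object. The sketch below assumes the common convention of counting the residual variance as an extra estimated parameter; other conventions exist:

```r
# AICc = AIC + 2p(p+1)/(n - p - 1), counting the error variance in p
aicc <- function(fit) {
  p <- length(coef(fit)) + 1  # regression coefficients + residual variance
  n <- nobs(fit)
  AIC(fit) + 2 * p * (p + 1) / (n - p - 1)
}
fit <- lm(dist ~ speed, data = cars)
aicc(fit)  # slightly larger than AIC(fit): the correction is always positive
```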
| null | CC BY-SA 3.0 | null | 2011-03-21T05:56:28.850 | 2013-06-25T10:38:20.780 | 2020-06-11T14:32:37.003 | -1 | 2645 | null |
8559 | 1 | null | null | 4 | 136 | Related to [my previous question](https://stats.stackexchange.com/questions/8236/how-to-find-relationships-between-different-types-of-events-defined-by-their-2d), I have a dataset of 2D points with an associated label (this label can take 6 different values). As suggested in the answers to my other question, this can be modeled as a marked point process (or 6 different point processes), allowing to apply standard tools to study this dataset.
I would like to take the approach that I first suggested in my first question, and try to apply PCA on this dataset, to see if the different types of points are correlated or not (i.e. are some types always happening together?). Here's how I want to do it:
- Split my 2D space in a grid
- For each cell of that grid, count the number of points of each type. For one cell, this gives me a point in $\mathbb{R}^6 : x_i = (N_1(A_i), N_2(A_i), N_3(A_i), N_4(A_i), N_5(A_i), N_6(A_i))$, where $N_k(A_i)$ is the number of points of the $k^{th}$ point process (corresponding to points of type $k$) in the cell $A_i$
- Combine all the $x_i$ into a matrix $X \in \mathbb{R}^{6 \times M}$ and apply PCA to this matrix.
My question is the following: how do I build the grid? In other words, how do I rediscretize this dataset?
Indeed, the intensities of each process are not equal: some types appear more often than others. If I just use a regular grid (all cells have the same area), the resulting points will have one or two components that dominate the others.
I was thinking of building my grid such that each cell has at most $N$ points, thus bounding the norm of the data points, but I don't think this will solve my "balance" problem.
Any suggestions, or pointers to the literature, are appreciated.
| How can I rediscretize my data? | CC BY-SA 2.5 | null | 2011-03-21T07:48:30.303 | 2011-03-21T08:57:00.910 | 2017-04-13T12:44:20.840 | -1 | 3699 | [
"pca",
"multivariate-analysis",
"normalization"
] |
8560 | 2 | null | 8557 | 8 | null | I think this may be an attempt to get what you don't really want.
Model selection is not a science. Except in rare circumstances, there is no one perfect model, or even one "true" model; there is rarely even one "best" model. Discussions of AIC vs. AICc vs BIC vs. SBC vs. whatever leave me somewhat nonplussed. I think the idea is to get some GOOD models. You then choose among them based on a combination of substantive expertise and statistical ideas. If you have no substantive expertise (rarely the case; much more rarely than most people suppose) then choose the lowest AIC (or AICc or whatever). But you usually DO have some expertise - else why are you investigating these particular variables?
| null | CC BY-SA 2.5 | null | 2011-03-21T10:13:20.787 | 2011-03-21T10:13:20.787 | null | null | 686 | null |
8561 | 1 | null | null | 5 | 2330 | I wonder if it possible to include a mediation effect in multinomial logistic regression. I have a categorical (3 categories) outcome variable and four predictors (all continuous). I expect one of the predictors (X1) to mediate the relationship between the outcome variable and another predictor (X2). I also expect direct effects of X1 and X2 on the outcome.
Apparently, I could not see an option of running the regression in two steps (or blocks). Is there a way of including mediation in this analysis? I would be glad if you could inform me how it is possible.
thanks,
PS: I am using PASW 18 for Mac.
| How to assess mediation effect in multinomial logistic regression? | CC BY-SA 2.5 | null | 2011-03-21T11:28:58.193 | 2011-03-21T11:37:40.173 | 2011-03-21T11:37:40.173 | 930 | null | [
"logistic",
"spss",
"multinomial-distribution",
"mediation"
] |
8562 | 1 | null | null | 4 | 381 | I have a completely within-subjects design with 3 independent variables:
- Trial type (3 levels)
- Task order (2 levels)
- Modality (3 levels)
However, for one of my levels of Trial type, the Task order and Modality levels are redundant (because it is essentially a baseline measurement).
Ideally, I'd like to run a 3 (trial type: 1, 2, or 3) x 2 (task order: a or b) x 3 (modality: vm, av, hm) repeated measures ANOVA.
Do you know of a way around this? If I were to duplicate the Task 3 data across 'modalities' and 'task orders' I would have the correct number of levels for each within-subjects factor (i.e. six identical columns of data to represent imaginary modalities and task orders for Task 3). I'm assuming this violates an assumption but I'd be worried about looking at a similar analysis with a huge number of paired sample t-tests. Is lots of t-tests the way to go?
Thanks for your help!
| How to deal with a specific case of unbalanced within-subjects design? | CC BY-SA 2.5 | null | 2011-03-21T11:49:35.680 | 2011-04-13T14:22:25.767 | 2011-03-21T12:28:42.143 | 930 | 3822 | [
"anova",
"repeated-measures"
] |
8566 | 1 | 8616 | null | 6 | 1445 | Actually this question may be simple for you, but I need to learn the correct answer.
If I remove misclassified instances from data set with Naive Bayes (it gives minimum FP rate) and then train logistic classifier on this filtered data set, will it overfit or not?
Thanks in advance.
| Overfit by removing misclassified objects? | CC BY-SA 2.5 | null | 2011-03-21T13:07:22.040 | 2011-11-24T10:38:49.250 | 2011-03-21T13:53:24.840 | null | 2170 | [
"machine-learning",
"naive-bayes"
] |
8567 | 1 | null | null | 5 | 2293 | I am trying to use DLM to model a time series. Candiate model includes local level, local trend and local trend with seasonal part. I do not know how to do model selection. Can AIC be calculated? I found no function in the R package [dlm](http://cran.r-project.org/web/packages/dlm/index.html).
| How to do model selection in dynamic linear model? | CC BY-SA 2.5 | null | 2011-03-21T13:29:38.997 | 2016-11-29T10:48:52.983 | 2011-03-21T13:49:35.403 | null | null | [
"r",
"time-series",
"model-selection",
"dlm"
] |
8568 | 1 | 8838 | null | 5 | 135 | The research group I work for have developed a theoretical growth model for a particular species of fish. The idea is that if you provide some initial starting values for the model you then generate an expected growth curve along with 95% confidence bands. To extend the model we would like to be able to update the model and recalculate the curve when real data becomes available. For example, imagine that at age 6 the model predicts an average weight of 34g, but in a random sample of fish we find that the mean age is, say, 24g, we would like to be able to use this new data to 'tweak' our previously estimated curve. At the moment I am not sure quite how to address this problem so any suggestions would be much appreciated.
| Updating/ adjusting theoretical growth curves when raw data becomes available | CC BY-SA 4.0 | null | 2011-03-21T15:08:27.397 | 2020-01-25T02:34:44.757 | 2020-01-25T02:34:44.757 | 11887 | 3136 | [
"regression",
"time-series",
"forecasting",
"growth-model"
] |
8569 | 1 | null | null | 2 | 288 | I am trying to test importance sampling for a simple a Wiener process $W_t$ in R:
```
set.seed(123)
Z <- matrix(rnorm(12*1000),12,1000)
W <- apply(Z,2,cumsum) #Wiener process simulated 1000 times for 12 periods
B <- W-(1:12) #Brownian motion with drift -1
w <- exp(-W[12,]) #Radon-Nikodym derivative (actually w <- exp(-W[12,]-12/2))
mean(B[12,]) #-11.973
sum(W[12,]*(w/sum(w))) #-7.881 to compare with -12!
```
I think that the estimate with the derivative is not really "precise"! Any suggestions to improve the accuracy, besides increasing the number of simulations?
| Change of measures with Wiener process | CC BY-SA 2.5 | null | 2011-03-21T15:10:02.380 | 2011-03-21T15:46:03.440 | 2011-03-21T15:46:03.440 | 1443 | 1443 | [
"r",
"monte-carlo"
] |
8570 | 1 | 8574 | null | 11 | 808 | I'm looking for a solid reference (or references) on numerical optimization techniques aimed at statisticians, that is, it would apply these methods to some standard inferential problems (eg MAP/MLE in common models). Things like gradient descent (straight and stochastic), EM and its spinoffs/generalizations, simulated annealing, etc.
I'm hoping it would have some practical notes on implementation (so often lacking in papers). It doesn't have to be completely explicit but should at least provide a solid bibliography.
Some cursory searching turned up a couple of texts: Numerical Analysis for Statisticians by Ken Lange and Numerical Methods of Statistics by John Monahan. Reviews of each seem mixed (and sparse). Of the two a perusal of the table of contents suggests the 2nd edition of Lange's book is closest to what I'm after.
| References on numerical optimization for statisticians | CC BY-SA 2.5 | null | 2011-03-21T15:24:18.457 | 2015-04-22T14:02:28.933 | 2011-04-30T12:04:29.620 | 930 | 26 | [
"estimation",
"references",
"optimization"
] |