609250
2
null
609168
2
null
The `(1|ID)` term in your model specifies random intercepts among the `ID` values. The intercept, with default R coding, is the estimate of the outcome when all fixed categorical predictors are at reference levels and all fixed continuous predictors have values of 0. So there's no problem with having the fixed predictors involved in interactions with each other; those random intercepts still make sense in terms of variation about the overall model intercept.

A potential problem is that your model omits individual coefficients for `B` and `C` and a `B:C` interaction. The term `A/(B*C)` is not the same as `A*B*C`:

```
attr(terms(formula(~A/(B*C))), "term.labels")
# [1] "A"     "A:B"   "A:C"   "A:B:C"
attr(terms(formula(~A*B*C)), "term.labels")
# [1] "A"     "B"     "C"     "A:B"   "A:C"   "B:C"   "A:B:C"
```

In general, it's [poor practice to omit lower-level coefficients](https://stats.stackexchange.com/q/11009/28500) of terms involved in higher-level interactions. As discussed on the linked page, there are some simple situations where you might get away with that, but with the three-way `A:B:C` interaction I suspect that you will be better off including individual coefficients for `B` and `C` and for a `B:C` interaction.
null
CC BY-SA 4.0
null
2023-03-13T07:43:47.710
2023-03-13T07:43:47.710
null
null
28500
null
609251
2
null
609237
0
null
The best approach would typically be an ensemble of multi-class segmentation models. However, the best next step would be to just see how well a single multi-class model (i.e. using softmax) works before jumping into ensembles. The main reason why you want one model that predicts multiple classes, rather than multiple models that each predict one class, is that features learned for one class will probably also be useful for the other classes. This means that the learned features will generally be stronger, and you do not need to learn the features for each class separately. Once you have one or more multi-class models, you can typically squeeze out slightly more performance by training an ensemble of these models.
null
CC BY-SA 4.0
null
2023-03-13T07:50:33.433
2023-03-13T07:50:33.433
null
null
95000
null
609253
2
null
609241
6
null
AIC does guard against overfitting, but it will not completely prevent it. No metric can do that, so it is still important to sanity-check one's model.

I find AR or MA orders of 8 or 10 on seasonally differenced data rather questionable. I could not really imagine a data generating process that truly looked back 10 years, rather than just 5. [There are good reasons why automatic ARIMA modeling procedures do not consider orders above 5.](https://stats.stackexchange.com/q/285093/1352)

Also, you are only modeling the seasonal part: your model is ARIMA(0,0,0)(10,1,8)[12]. The seasonal part is extremely complex per the above, but the nonseasonal part is a flat line. That looks strange. You may want to follow the seasonal ARIMA modeling workflow illustrated [here](https://otexts.com/fpp2/seasonal-arima.html) or [here](https://otexts.com/fpp3/seasonal-arima.html) to see how one can set both seasonal and non-seasonal orders.

I personally am not a big fan of interpreting ACF/PACF plots and differencing back and forth to decide on ARIMA models. For one thing, it's extremely hard, requires a lot of statistical understanding, and is likely not replicable between analysts (if you have classmates, ask them what model they got - I would be surprised if half the class agreed). And you can't do it if you have more than one or a handful of time series. [The newer order selection process based on information criteria makes more sense to me.](https://stats.stackexchange.com/q/595150/1352) (However, your teacher likely won't accept "I threw the time series into `auto.arima()`, and this is what came out" as a solution.)
null
CC BY-SA 4.0
null
2023-03-13T07:58:50.967
2023-03-13T07:58:50.967
null
null
1352
null
609254
2
null
357963
0
null
Minimizing an importance sampling estimate of the KL divergence is equivalent to minimizing the cross entropy loss of these importance samples.
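A small numerical check makes this equivalence concrete. The sketch below is illustrative only (the Gaussian target, proposal, and model family are arbitrary choices, not from the answer): the importance-sampling estimate of the KL divergence and the weighted cross-entropy of the same samples differ by a term that does not depend on the model $q$, so they change identically as $q$ varies.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, mu, sigma):
    # log density of N(mu, sigma^2)
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Importance samples drawn from a proposal r = N(2, 1.5); target is p = N(0, 1)
x = rng.normal(2.0, 1.5, size=100_000)
w = np.exp(log_gauss(x, 0.0, 1.0) - log_gauss(x, 2.0, 1.5))  # weights p(x)/r(x)

def kl_estimate(mu_q):
    # IS estimate of KL(p || q) for a model q = N(mu_q, 1)
    return np.mean(w * (log_gauss(x, 0.0, 1.0) - log_gauss(x, mu_q, 1.0)))

def cross_entropy(mu_q):
    # weighted cross-entropy of the importance samples under q = N(mu_q, 1)
    return -np.mean(w * log_gauss(x, mu_q, 1.0))

# The two objectives differ only by a q-independent constant, so their
# differences across candidate models agree exactly.
d_kl = kl_estimate(0.5) - kl_estimate(1.0)
d_ce = cross_entropy(0.5) - cross_entropy(1.0)
```

Since the constant drops out of any comparison between models, minimizing one objective over $q$ is the same as minimizing the other.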
null
CC BY-SA 4.0
null
2023-03-13T08:02:57.423
2023-03-13T08:02:57.423
null
null
298651
null
609255
1
609259
null
2
33
I am using the `utilities` package in R to conduct the DES for a forecasting project. My problem involves forecasting the service requirement for patients. On average five patients are admitted to the ward daily and there are 46 beds in a ward. In order to do this simulation I have set `K = 46` (since I believe K = users/total facility capacity). I have set `n = 20`, the number of patients, and `lambda = 5` (avg. number of patients). The revival time is 30 minutes. (Some code is re-used from the supplementary material of the package.)

The use time of the facility: `use_time <- mu * runif(K)`, where `mu = 1440` minutes (avg. time the facility service remains in use). The arrival time: `ARRIVE <- cumsum(rexp(K, rate = 1/lambda))`. Hence the code for the QUEUE function:

```
lambda <- 5    # avg number of arrivals per day
K <- 46        # users of the facility
ARRIVE <- cumsum(rexp(K, rate = 1/lambda))
mu <- 1440
use_time <- mu * runif(K)

library(utilities)
QUEUE <- queue(arrive = ARRIVE, use.full = use_time, n = 20, revive = 30)
plot(QUEUE)
```

My question is: is the time I see at the bottom of the plot in minutes, or should this be in hours? Second, are the parameters set correctly? I am new to this, hence I am not sure if I am on the right track.

[](https://i.stack.imgur.com/wDgz7.png)
Need help in understanding how to carryout Discrete Event Simulation for forecasting using the utilities package in R
CC BY-SA 4.0
null
2023-03-13T08:11:55.343
2023-03-14T01:18:08.433
null
null
369492
[ "r", "forecasting", "simulation" ]
609256
1
null
null
0
26
I am computing forecasts with ARDL models of different lag length across both dependent and independent variables. Regardless of the lag lengths, the actual observations for the dependent variable lag the forecast values consistently by one time period (a quarter in this case). What could be a possible explanation for this? Thank you in advance!
Lag between forecasted and actual values ARDL
CC BY-SA 4.0
null
2023-03-13T08:14:01.457
2023-03-13T09:25:08.590
null
null
374038
[ "time-series", "forecasting", "econometrics", "stata", "ardl" ]
609257
2
null
609238
5
null
#### Grid approximations let you compute a discrete posterior approximation

The [Cartesian grid](https://en.wikipedia.org/wiki/Regular_grid) used in this grid approximation is for a one-dimensional parameter space, so it consists of a set of vertices at evenly spaced points over the parameter range. If you were to use a Cartesian grid for a two-dimensional parameter space it would look like a standard square lattice, and if you were to use one for a three-dimensional parameter space it would look like a standard cubic lattice.

The idea of using the grid is that it gives you a discrete prior distribution with support on a finite number of points in the parameter space, which makes it simple to compute the corresponding posterior (which is also a discrete distribution over the vertices in the grid). Remember that when you do computing, the computer can only handle a finite number of calculations, so a grid approximation to a continuum lets you compute an answer. The discrete prior on the vertices of the grid approximates a prior on the larger continuum over the parameter range. So long as the grid is sufficiently "fine" relative to the changes in the true prior and likelihood, it should give you a good approximation to the continuous posterior it is designed to approximate.
null
CC BY-SA 4.0
null
2023-03-13T08:26:50.780
2023-03-13T08:26:50.780
null
null
173082
null
609258
2
null
609238
8
null
[Bayes theorem](https://en.wikipedia.org/wiki/Bayes_theorem) is

$$ p(\theta|X) = \frac{p(X|\theta)\,p(\theta)}{p(X)} $$

where, by the [law of total probability](https://en.wikipedia.org/wiki/Law_of_total_probability), for discrete distributions $p(X) = \sum_\theta \,p(X|\theta)\,p(\theta)$. So for the numerator, you multiply the likelihood by the prior, and for the denominator, you need to do the same for all the possible values of $\theta$ and sum them to [normalize it](https://stats.stackexchange.com/questions/129666/why-normalizing-factor-is-required-in-bayes-theorem).

It gets more complicated if we're dealing with continuous variables. If $\theta$ is continuous, "all" the values of theta mean infinitely many real numbers, so we can't just sum them. In such a case, we need to take the integral

$$ p(X) = \int p(X|\theta)\,p(\theta)\, d\theta $$

The problem is that this is not necessarily straightforward. That is why we often use approximations like the Laplace approximation, variational inference, MCMC sampling (see [Monte Carlo integration](https://en.wikipedia.org/wiki/Monte_Carlo_integration)), or other ways of [numerically approximating the integral](https://en.wikipedia.org/wiki/Numerical_integration). Grid approximation is one of those methods. It approximates the integral with a [Riemann sum](https://en.wikipedia.org/wiki/Riemann_sum)

$$ \int p(X|\theta)\,p(\theta)\, d\theta \approx \sum_{i \in G} p(X|\theta_i)\,p(\theta_i) \, \Delta\theta_i $$

where $G$ is our grid and $\Delta\theta_i = \theta_i - \theta_{i-1}$. Notice that in the example from the book a uniform prior was used, so $\Delta\theta_i$ was constant and was dropped from the calculation.

The grid is simply a set of points used to evaluate the function, for the sake of approximating the integral. The more points, the more precise the approximation is. It should also cover the range of possible values for $\theta$; e.g. if it is a Gaussian, the grid should extend at least two or three standard deviations on each side of the mean to account for [95% or more of the possible values](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). Picking the grid is a separate subject in itself (how large, uniform or not, etc).

Using a Riemann sum intuitively makes sense, as [you can think of](https://www.khanacademy.org/math/integral-calculus/ic-integration/ic-definite-integral-definition/v/riemann-sums-and-integrals) the integral as a sum over infinitely many elements

$$ \lim_{n \to \infty} \sum_{i=1}^n f(x_i) \Delta x_i = \int \,f(x) \,dx $$

If the concepts of integral calculus and Riemann sums are not clear to you, I highly recommend the [Khan Academy videos](https://www.khanacademy.org/math/integral-calculus/ic-integration) explaining them in greater detail.

That said, there are many much better alternatives to grid approximation, so unless you are solving a simple, low-dimensional problem, this is not something you should use.
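The Riemann-sum idea can be sketched in a few lines. This is a hypothetical example (a Beta-Bernoulli model with a uniform prior on an evenly spaced grid, not from the book), chosen because the exact posterior is known and the grid answer can be checked against it:

```python
import numpy as np

# Grid approximation of the posterior for a Bernoulli success probability
# after observing 6 heads and 4 tails, with a uniform prior.
n_heads, n_tails = 6, 4
grid = np.linspace(0.001, 0.999, 1000)       # evenly spaced grid over theta
prior = np.ones_like(grid)                   # uniform prior on the grid
lik = grid**n_heads * (1 - grid)**n_tails    # Bernoulli likelihood at each vertex
unnorm = lik * prior

# Normalizing plays the role of the Riemann-sum denominator
# (the constant delta-theta cancels, as noted above).
post = unnorm / unnorm.sum()

post_mean = (grid * post).sum()
# Analytic posterior is Beta(7, 5), with mean 7/12.
```

With 1,000 grid points the discrete posterior mean is essentially indistinguishable from the analytic Beta(7, 5) mean of 7/12.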
null
CC BY-SA 4.0
null
2023-03-13T08:27:24.620
2023-03-13T09:15:59.557
2023-03-13T09:15:59.557
35989
35989
null
609259
2
null
609255
2
null
There are quite a few problems with your analysis at present. First of all, you say that there are 46 beds in the facility, but then your subsequent analysis uses $n=20$ instead (the parameter `n` is the number of facilities). Secondly, the arrival times you have generated (based on the example code) do not conform to your description of the number of patients that arrive each day and how long they are served.

Even if you were to fix these issues, the main problem is that you have proposed to use a large number of facilities relative to the number of patients you actually have arriving at the amenity. If you have five patients arriving each day and they stay for one day each, then you are going to have about five patients at the amenity at a time. Since this is far less than the proposed number of facilities, the facilities will easily be enough to deal with all the patients, and so the analysis will not be very interesting; all patients will be served immediately with no waiting time. If you want to conduct a meaningful analysis here then you will need to simulate the arrival times and use times in a way that conforms to the behaviour of your patients.

If you would like to understand how to do this type of analysis I recommend you read [O'Neill (2021)](https://arxiv.org/ftp/arxiv/papers/2111/2111.07064.pdf), which describes the model and computation. This paper discusses the queuing model used in the `queue` function and it shows you how to use the function. As can be seen in that paper (and in the function documentation), the function takes in inputs for the arrival times, use times and patience times for each of the users. If you want to use this function effectively, you will need to first simulate appropriate inputs for these items for your patients. The `queue` function can then model the outcomes that will occur given any number `n` of hospital beds.

> ...the time I see at the bottom of the plot is in minutes? Or should this be in hours?
The time in the queuing object and the resulting plot is in whatever units were used for the input. If your input times are in minutes then the times shown in the output will be in minutes, and so on.
null
CC BY-SA 4.0
null
2023-03-13T08:49:45.503
2023-03-14T01:18:08.433
2023-03-14T01:18:08.433
173082
173082
null
609260
2
null
609234
18
null
#### This suggests a daily periodic signal with harmonics

When looking at time-series data in the frequency domain, it is common for periodic signals in the data to consist of a main frequency and then a set of [harmonics](https://dspillustrations.com/pages/posts/misc/fourier-series-and-harmonic-approximation.html). The harmonic frequencies are integer multiples of the main frequency of the signal. This occurs because each of the spikes in the frequency domain corresponds to a sinusoidal wave in the time domain, and the actual periodic signal in the time domain is often not a perfect sinusoidal wave. Thus, what you have here suggests that your wind power data has a periodic daily signal that is not a perfect sinusoidal wave (but which can be created as a weighted sum of a main sinusoidal wave and then four harmonics that are smaller sinusoidal waves at the harmonic frequencies).
null
CC BY-SA 4.0
null
2023-03-13T09:00:38.277
2023-03-13T09:00:38.277
null
null
173082
null
609262
2
null
509052
3
null
> Why do we use product - max(product) in the probability calculation?

I actually find this part to be a bit silly. The other answers here say it is to get a "relative probability" taken with respect to the maximally probable element, but that doesn't make sense to me. (Why would you want to know that?) A much better way to do this would be to scale things to get a proper probability distribution where the probabilities add up to one. To do this in log-space you would compute $\mathbf{l} - \text{logsumexp}(\mathbf{l})$ instead of $\mathbf{l} - \max(\mathbf{l})$. This can be done with the following alternate code for the last part:

```
# map the grid_function over all candidate parameter pairs
# multiply LL by prior and convert to probability
full_tbl <- two_param_grid %>%
  mutate(log_likelihood = map2(shape_grid, scale_grid, grid_function)) %>%
  unnest() %>%
  mutate(shape_prior = 1,
         scale_prior = 1) %>%
  mutate(product = log_likelihood + shape_prior + scale_prior) %>%
  mutate(probability = exp(product - matrixStats::logSumExp(product)))
```
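The difference between the two normalizations is easy to see numerically. A minimal Python sketch, independent of the R code above (the log-posterior values are made up for illustration): subtracting `logsumexp` produces probabilities that sum to one, while subtracting the maximum only rescales so that the largest value maps to 1.

```python
import numpy as np

def logsumexp(l):
    # numerically stable log(sum(exp(l)))
    m = np.max(l)
    return m + np.log(np.sum(np.exp(l - m)))

# hypothetical unnormalized log-posterior values (log-likelihood + log-prior)
logpost = np.array([-1000.0, -1001.0, -1003.0])

# subtracting logsumexp gives a proper probability distribution...
p = np.exp(logpost - logsumexp(logpost))

# ...whereas subtracting the max only makes the largest entry equal to 1
rel = np.exp(logpost - np.max(logpost))
```

Note that naively computing `np.exp(logpost)` here would underflow to zero, which is why both tricks subtract something before exponentiating.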
null
CC BY-SA 4.0
null
2023-03-13T09:14:40.233
2023-03-13T09:14:40.233
null
null
173082
null
609263
2
null
609256
2
null
This is quite common in time series forecasting, and there is little you can do about it. If the time series is close to a random walk, the best prediction is close to the last observed value, as the changes in the time series are essentially unpredictable. This way even the best possible forecast ends up lagging behind the actual series by one period.
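This can be illustrated with a quick simulation (a hypothetical random walk; the seed and length are arbitrary): the one-step naive forecast is exactly the series shifted back by one period, which is what produces the apparent "lag" in a plot, and yet it still outperforms a constant forecast.

```python
import numpy as np

rng = np.random.default_rng(42)

# simulate a random walk
y = np.cumsum(rng.normal(size=200))

# naive one-step-ahead forecast: predict the last observed value
forecast = y[:-1]   # forecast for y[1], ..., y[199]
actual = y[1:]

# the forecast series is the actual series shifted by one period,
# yet it beats forecasting with the overall series mean
rmse_naive = np.sqrt(np.mean((actual - forecast) ** 2))
rmse_const = np.sqrt(np.mean((actual - y.mean()) ** 2))
```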
null
CC BY-SA 4.0
null
2023-03-13T09:25:08.590
2023-03-13T09:25:08.590
null
null
53690
null
609264
2
null
196745
0
null
For a normal distribution, then, approximately: $\mu_g=\mu_a-0.5\sigma^2$, where $\sigma$ is the standard deviation of the normal distribution. I'm defining $\mu_g=\sqrt[T]{(1+r_1)(1+r_2)\cdots(1+r_T)}-1$, where the $r_j$ are drawn from a normal distribution. I'm thinking here of the case of finance where the $r_j$ are asset returns; it does make sense to consider geometric returns as they represent growth rates.
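A quick simulation supports the approximation (the mean and standard deviation below are arbitrary illustrative values, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)

mu_a, sigma = 0.05, 0.1               # hypothetical arithmetic mean and sd of returns
r = rng.normal(mu_a, sigma, size=1_000_000)

# realized geometric mean return: (prod(1 + r))^(1/T) - 1,
# computed via the mean of log(1 + r) for numerical stability
geo = np.exp(np.mean(np.log1p(r))) - 1

# the stated approximation
approx = mu_a - 0.5 * sigma**2
```

For moderate return volatility the two quantities agree to roughly three decimal places; the approximation degrades as $\sigma$ grows, since it comes from a second-order expansion of $\log(1+r)$.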
null
CC BY-SA 4.0
null
2023-03-13T09:31:26.203
2023-03-13T09:31:26.203
null
null
382794
null
609265
2
null
609241
5
null
> I differenced the data seasonally and non-seasonally.

and

> This seems like quite a lot of parameters, and the non-seasonal differencing adds more

characterizes a typical mistake known as overdifferencing. You would normally difference your time series if it has a unit root. Simple differencing is for a simple unit root, and seasonal differencing is for a seasonal unit root. Eyeballing your series, I do not see either of the two.

But what happens if you difference a series that does not have a unit root? You introduce a unit-root moving-average component. With both simple and seasonal differencing, you may have introduced two of them. They produce certain autocorrelations that were not in the original series, and so you end up trying to fit these using high-order ARMA terms.

What I would do instead is think twice before differencing and see e.g. what `auto.arima` suggests in two cases: allowing for arbitrary orders of differencing, and forcing the orders of differencing to zero.
null
CC BY-SA 4.0
null
2023-03-13T09:35:37.677
2023-03-13T12:00:29.977
2023-03-13T12:00:29.977
53690
53690
null
609266
1
null
null
0
22
I have the following problem. Given the training data for a linear regression problem as follows:

|Input |Output |
|-----|------|
|0 |0 |
|1 |2 |
|-1 |-2 |
|2 |3 |

After the first iteration, the values of the two coefficients are Θ0 = 2 and Θ1 = 1. What are the initial values of Θ0 and Θ1, given the learning rate α?

My approach to the problem: we denote Θ01 and Θ11 as the new values of Θ0 and Θ1, and Θ00 and Θ10 as the initial values of Θ0 and Θ1. We can calculate the updated values by:

Θ01 = Θ00 - α$\frac{1}{m}$$\sum_{i=1}^4$(Θ00$\cdot$x+Θ10-y)$\cdot$x

Θ11 = Θ10 - α$\frac{1}{m}$$\sum_{i=1}^4$(Θ00$\cdot$x+Θ10-y)

$\begin{cases} Θ01 = Θ00 -\frac{4}{4}\sum_{i=1}^4(Θ00 \cdot x+Θ10-y)\cdot x \\ Θ11 = Θ10 -\frac{4}{4}\sum_{i=1}^4(Θ00 \cdot x+Θ10-y) \end{cases}$

$\begin{cases} 2 = Θ00 -\sum_{i=1}^4(Θ00 \cdot x+Θ10-y)\cdot x \\ 1 = Θ10 -\sum_{i=1}^4(Θ00 \cdot x+Θ10-y) \end{cases}$

$\begin{cases} 2 = Θ00 -[(Θ00 \cdot 0+Θ10-0)\cdot 0 + (Θ00+Θ10-2)\cdot 1 + (Θ00 \cdot (-1)+Θ10+2)\cdot (-1)+(Θ00 \cdot 2+Θ10-3)\cdot (2)] \\ 1 = Θ10 -[(Θ00 \cdot 0+Θ10-0) + (Θ00+Θ10+2) + (Θ00 \cdot (-1)+Θ10+2)+(Θ00 \cdot 2+Θ10-3)] \end{cases}$

$\begin{cases} 2 = Θ00 - (0 + Θ00 + Θ10 -2 + Θ00 - Θ10 - 2 + 4Θ00 + 2Θ10 -6) \\ 1 = Θ10 - (Θ10 + Θ00 + Θ10-2 - Θ00 + Θ10 + 2 + 2Θ00 + Θ10 - 3)\end{cases}$

$\begin{cases} 2 = Θ00 - (6Θ00 + 2Θ10 -10) \\ 1 = Θ10 - (2Θ00 + 4Θ10 - 3)\end{cases}$

$\begin{cases} 2 = Θ00 - 6Θ00 - 2Θ10 +10 \\ 1 = Θ10 - 2Θ00 - 4Θ10 + 3\end{cases}$

$\begin{cases} 2 = -5Θ00 - 2Θ10 +10 \\ 1 = -2Θ00 - 3Θ10 + 3\end{cases}$

$\begin{cases} 5Θ00 + 2Θ10 = 8 \\ 2Θ00 + 3Θ10 = 2\end{cases}$

$\begin{cases} 15Θ00 + 6Θ10 = 24 \\ 4Θ00 + 6Θ10 = 4\end{cases}$

$\begin{cases} 11Θ00 = 20 \\ 4Θ00 + 6Θ10 = 4\end{cases}$

$\begin{cases} Θ00 = \frac{20}{11} \\ 2Θ00 + 3Θ10 = 2\end{cases}$

$\begin{cases} Θ00 = \frac{20}{11} \\ \frac{40}{11} + 3Θ10 = 2\end{cases}$

$\begin{cases} Θ00 = \frac{20}{11} \\ Θ10 = -\frac{18}{33} \end{cases}$

Hence, the initial values of Θ0 and Θ1 are $\frac{20}{11}$ and $-\frac{18}{33}$.

I had read some documents where the updated coefficient via gradient descent is calculated by the formula:

Θj := Θj - α$\frac{1}{2m}$$\sum_{i=1}^m$[($h_{Θ}$$(x_{i})$-y)$\cdot$$(x_{i})]$

but I fail to understand why it includes $\frac{1}{2m}$ instead of $\frac{1}{m}$. As such, is my approach to the problem correct, and did I use the correct formulas? I would really appreciate any comments. Thank you all very much!
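One way to see where the $\frac{1}{2m}$ comes from: the $\frac{1}{2}$ is a convention in the *cost* $J = \frac{1}{2m}\sum_{i=1}^m (h_\Theta(x_i)-y_i)^2$, and it cancels against the 2 produced by differentiating the square, leaving the $\frac{1}{m}$ factor in the gradient update. A quick numeric check using the data from the table (the θ values are arbitrary, chosen only for illustration):

```python
import numpy as np

# training data from the table above
x = np.array([0.0, 1.0, -1.0, 2.0])
y = np.array([0.0, 2.0, -2.0, 3.0])
m = len(x)

def cost(t0, t1):
    # J = 1/(2m) * sum((h - y)^2), with h = t0 + t1 * x
    return np.sum((t0 + t1 * x - y) ** 2) / (2 * m)

def grad_t1(t0, t1):
    # analytic derivative of J w.r.t. theta_1: 1/m * sum((h - y) * x)
    return np.sum((t0 + t1 * x - y) * x) / m

# central finite difference of the 1/(2m) cost matches the 1/m gradient
t0, t1, eps = 0.3, 0.7, 1e-6
numeric = (cost(t0, t1 + eps) - cost(t0, t1 - eps)) / (2 * eps)
```

So the update rule with $\frac{1}{m}$ and the cost with $\frac{1}{2m}$ describe the same gradient descent; the $\frac{1}{2}$ merely keeps the derivative tidy.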
Calculating initial values of a Linear Regression Model
CC BY-SA 4.0
null
2023-03-13T09:38:15.753
2023-03-13T09:38:15.753
null
null
383080
[ "regression", "machine-learning", "gradient-descent", "linear-algebra", "calculus" ]
609268
2
null
609225
3
null
Here are some papers giving theoretical guarantees for Multiple Linear Layer Networks (more generally known in the literature as Deep Linear Networks) which should be of interest to you:

- First, the paper Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global shows that, under some technical assumptions, all local minima of Deep Linear Networks are in fact global. This is good news, as it guarantees that Deep Linear Networks trained with SGD will generalize "almost" as well as the Empirical Risk Minimizer (where "almost" corresponds to how close your SGD output is to the minimizer).
- As for actual speed-up of the training, you can have a look at these: Exact natural gradient in deep linear networks and its application to the nonlinear case, Global Convergence of Gradient Descent for Deep Linear Residual Networks, Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks and A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks. They are all quite technical, but from my understanding, the main takeaway message from all of these is that, when properly initialized, (stochastic) gradient descent on Deep Linear Networks converges extremely (read: exponentially) fast towards the global minimum, and the speed of convergence seemingly increases with the number of linear layers you add. This behavior is specific to Deep Linear Networks and does not in general hold for non-linear networks.

In conclusion, yes, stacking linear layers is beneficial for training purposes: if you add more linear layers, the above papers indicate that SGD should converge to the global minimum quicker, and hence generalize almost as well as the global minimizer. However, as you correctly point out, the representational power of deep linear models is very limited (basically the same as a regular linear model), so these models remain of little practical relevance for solving complex regression tasks, and the main interest of the above papers is that they provide some theoretical tools which help us build a theory towards a better understanding of Deep (Non-Linear) Networks.

As a quick last note, I just want to point out that Deep Linear Models are far from being completely useless: they are for instance at the core of [Deep Matrix Factorization Models](https://www.ijcai.org/Proceedings/2017/0447.pdf), which have numerous applications.
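The limited representational power is easy to demonstrate directly: composing linear layers (with no nonlinearity between them) collapses to a single linear map. A minimal numpy sketch (the layer shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# two stacked linear layers without a nonlinearity...
W1 = rng.normal(size=(5, 3))   # first layer: 3 -> 5
W2 = rng.normal(size=(2, 5))   # second layer: 5 -> 2
x = rng.normal(size=3)

deep = W2 @ (W1 @ x)           # output of the two-layer "network"
shallow = (W2 @ W1) @ x        # output of the equivalent single layer
```

The two outputs coincide, so a deep linear network can never represent anything a single linear layer cannot, even though (per the papers above) the extra layers can change the optimization dynamics.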
null
CC BY-SA 4.0
null
2023-03-13T10:03:55.997
2023-03-13T10:03:55.997
null
null
305654
null
609269
1
null
null
1
74
Update prior to posting So I had just finished writing my post and got suggested "time series segmentation" for my tags. Then I looked it up and it seems like it is the thing I need to do. I did some reading just now and found some options such as [Hidden Markov Models](https://en.wikipedia.org/wiki/Time-series_segmentation#Hidden_Markov_Models) as well as a python toolbox ([seglearn](https://dmbee.github.io/seglearn/user_guide.html)) that seems like it can do what I need. I decided to still post this to get some input, especially since there seems to be a couple of approaches to this analysis and I am not sure which would suit my case best. Thank you! My original Post Dear community I have body (body-parts-in-space-coordinates and joint-angles) data while people are walking and I would like to design an algorithm that automatically detects and extracts (returns the timepoints of) step-cycles (i.e., the start to end time points of people making a step with a given leg - the full time series has multiple steps in addition to people turning to walk back etc.). Here is an Example plot: [](https://i.stack.imgur.com/6e1BT.png) For example, let's imagine I would like to extract the time window of the box labelled "step" in the top panel (it is a bit more complicated in reality but considering only the simple example will be enough for now) I already developed an algorithm (using gradients and amplitude % criteria) that works if the data look similar to our example. However, since we also have patient data, signals, and the step cycles within them can look quite different to one another. For example, consider this time series: [](https://i.stack.imgur.com/zQtEB.png) My algorithm breaks for such data. I was considering a supervised ML approach, something like: - Train on full time series with step-cycle time-windows being labelled as classes of interest. 
- Test on unknown time series and asking the model to return time-windows of where it predicts step cycles to have occurred. - Compare to human performance (at the moment, people manually select step cycle time windows by hand) I have been googling for some days and found quite some time series classification approaches and toolboxes (such as [sktime](http://www.sktime.net/en/latest/) for example) but I felt like people are mostly interested in forecasting with time series ML. Also - I should add that our dataset has probably 3-6 features that will likely be useful for the algorithm to classify this (and as mentioned before the "real" classification problem is a bit more complex than just the "single bump pattern" of the blue time series of my earlier example - e.g., step cycles as well as time series themselves vary in duration) It would be great if any of you have any ideas or could point me to some keywords/papers. It really does not seem too difficult of a classification problem and it bugs me that I wasn't able to find anything about it thus far. So thank you very much for your help, I really do appreciate it.
(supervised ML) approach for time series segmentation
CC BY-SA 4.0
null
2023-03-13T10:13:05.690
2023-03-13T10:13:05.690
null
null
186968
[ "machine-learning", "time-series", "timeseries-segmentation" ]
609270
1
null
null
2
31
I'm currently submitting a paper about elements in soil solution. Reviewers told me to justify the conclusion with the data, and I need some of your unlimited knowledge.

Here is a plot of $\textrm{Al}$ present in soil solution over $6$ years. My conclusion to be supported is that during winter (trees' dormancy; the purple line is $21~\rm Dec.$ of each year), more $\rm Al$ is released into solution.

[](https://i.stack.imgur.com/t5vNs.png)

My initial intuition is to support it using time series analysis. However, despite knowing some stuff about statistics, I have never done time series analysis. So, here are my questions:

- Is my analysis relevant given my conclusion to be supported?
- How can I analyze this in R using the forecast library?

I already did:

```
sarima_model <- auto.arima(soil_data, seasonal = TRUE)
summary(sarima_model)
```

which gives me:

```
Series: tsdata 
ARIMA(0,0,2) with non-zero mean 

Coefficients:
         ma1     ma2    mean
      0.3591  0.3215  0.1127
s.e.  0.0569  0.0564  0.0119

sigma^2 estimated as 0.01387:  log likelihood=197.92
AIC=-387.84   AICc=-387.69   BIC=-373.4

Training set error measures:
                        ME      RMSE        MAE      MPE     MAPE      MASE       ACF1
Training set -0.0001118715 0.1171314 0.07782753 768.5151 1255.721 0.9025749 0.02918578
```

But I am not able to interpret the results.
Statistically support a conclusion of time series
CC BY-SA 4.0
null
2023-03-13T10:17:03.183
2023-03-14T12:44:23.670
2023-03-14T12:33:03.987
35989
360604
[ "r", "time-series" ]
609272
1
null
null
1
48
Suppose we have a posterior sample of parameter $\theta$ obtained by fitting some Bayesian model to $n$ data points. In black is the empirical posterior density and in red is a normal approximation to the posterior. [](https://i.stack.imgur.com/7V2Q3.png) Is there a way to write the normal approximation to the posterior in terms of $n$? (My goal is to have an idea of what the posterior distribution would have been if I had had a larger $n$.)
Normal approximation to the posterior distribution
CC BY-SA 4.0
null
2023-03-13T10:26:44.357
2023-03-13T10:26:44.357
null
null
7064
[ "bayesian" ]
609273
1
609278
null
1
42
I have fit a GAM with two continuous independent variables and one discrete covariate with two levels ("gray" and "brown"). All variables are scaled and the model runs fine:

```
mod3 = gam(cont1 ~ disc1 * cont2 * cont3 + s(cont2, cont3, by=disc1, k=4),
           data=data_so)
summary(mod3)

Family: gaussian 
Link function: identity 

Formula:
cont1 ~ disc1 * cont2 * cont3 + s(cont2, cont3, by = disc1, k = 4)

Parametric coefficients:
                            Estimate   Std. Error  t value   Pr(>|t|)
(Intercept)              0.015009690  0.038191859  0.39301 0.69440639
disc1brown              -0.032717084  0.066537035 -0.49171 0.62304187
cont2                    0.212856916  0.018793283 11.32622  < 2.22e-16
cont3                    0.005457858  0.019394264  0.28142 0.77845557
disc1brown:cont2         0.061748331  0.021911015  2.81814 0.00493549
disc1brown:cont3         0.016191669  0.027321828  0.59263 0.55357838
cont2:cont3              0.118947202  0.035662553  3.33535 0.00088654
disc1brown:cont2:cont3  -0.098224982  0.071036399 -1.38274 0.16708507

Approximate significance of smooth terms:
                                edf    Ref.df       F   p-value
s(cont2,cont3):disc1gray  1.8362810 2.0676743 1.08907 0.1846056
s(cont2,cont3):disc1brown 0.7998747 0.7998747 9.65092 0.0055757

Rank: 10/14
R-sq.(adj) = 0.128   Deviance explained = 13.5%
GCV = 0.88013  Scale est. = 0.87182  n = 914
```

However, the approximate significance of each smooth term does not match my a priori expectation. More importantly, it also doesn't match the visualized model fit, by which "gray" should be non-linear, and "brown" somewhat linear:

```
vis.gam(mod3, view=c("cont2","cont3"), plot.type="contour",
        cond=list(disc1="gray"), main="disc1: gray")
vis.gam(mod3, view=c("cont2","cont3"), plot.type="contour",
        cond=list(disc1="brown"), main="disc1: brown")
```

[](https://i.stack.imgur.com/PtzXP.png)

If I run the model only with either class, it appears to match the figure (high F/low P value for the approximate significance of the smooth term for "gray", and the opposite for the "brown" model):

```
> mod3a = gam(cont1 ~ cont2 * cont3 +
+             s(cont2, cont3, k=4),
+             data=data_so[disc1=="gray"])
> summary(mod3a)

Family: gaussian 
Link function: identity 

Formula:
cont1 ~ cont2 * cont3 + s(cont2, cont3, k = 4)

Parametric coefficients:
                Estimate   Std. Error  t value   Pr(>|t|)
(Intercept)  0.021064013  0.038786204  0.54308  0.5872749
cont2        0.175710375  0.019469841  9.02475 < 2.22e-16
cont3       -0.003258934  0.019493737 -0.16718  0.8672855
cont2:cont3  0.118727033  0.036447026  3.25752  0.0011869

Approximate significance of smooth terms:
                    edf   Ref.df        F    p-value
s(cont2,cont3) 1.590295 1.828967 34.76433 < 2.22e-16

Rank: 5/7
R-sq.(adj) = 0.127   Deviance explained = 13.2%
GCV = 0.91631  Scale est. = 0.90938  n = 609

> mod3b = gam(cont1 ~ cont2 * cont3 +
+             s(cont2, cont3, k=4),
+             data=data_so[disc1=="brown"])
> summary(mod3b)

Family: gaussian 
Link function: identity 

Formula:
cont1 ~ cont2 * cont3 + s(cont2, cont3, k = 4)

Parametric coefficients:
                Estimate  Std. Error  t value           Pr(>|t|)
(Intercept) -0.01997134  0.05075102 -0.39352            0.69422
cont2        0.17893076  0.02588782  6.91177  0.000000000028642
cont3        0.01326603  0.02519297  0.52658            0.59888
cont2:cont3  0.02072222  0.05873067  0.35283            0.72446

Approximate significance of smooth terms:
                    edf   Ref.df       F p-value
s(cont2,cont3) 1.075582 1.075582 2.71988 0.06871

Rank: 5/7
R-sq.(adj) = 0.133   Deviance explained = 14.2%
GCV = 0.80732  Scale est. = 0.79673  n = 305
```

What is going on here? Might be related: [Why do gam predictions not match gam smooth?](https://stats.stackexchange.com/questions/210810/why-do-gam-predictions-not-match-gam-smooth)
GAM: covariate specific fits do not appear to match approximate significance of smooth terms
CC BY-SA 4.0
null
2023-03-13T10:28:18.047
2023-03-13T15:34:56.357
2023-03-13T12:18:04.087
345611
123056
[ "r", "regression", "interaction", "generalized-additive-model", "smoothing" ]
609274
1
null
null
0
7
Some context: I work on insects and most of our tests are mortality assays: we subject groups of insects, in replicates (usually 5-6), to insecticides and record the mortality 24h later. Since we prepare our own assays, we include one or two susceptible strains in each one: those strains should have 100% mortality but very often don't, so it helps 'calibrate' our tests, and if the mortality is low in all tested strains and controls, we don't overreact. Note that there's no negative control because we have basically non-existent natural mortality. My problem is this: we have done a good portion of those assays ourselves, and the results are pretty consistent. Some of the work was also done by people in other countries, on different strains with at least one of the two control strains, and my task now is to compile all the data and analyze it. The question is: how can I compare all the different assays, knowing that all my controls have different mortalities, sometimes lower than the tested strains, when they should be at 100%? And how do I account for the cases where some assays have one control and not both? I tried reversing [Abbott's mortality correction formula](https://en.wikipedia.org/wiki/Walter_Sidney_Abbott), thinking that if in its classic form it can 'eliminate' the natural part of the recorded mortality, then if reversed it would compensate for the 'non-observed' part of the mortality that we miss because our tests don't kill enough. The formula I used is basically a cross-multiplication: (mortality / control mortality) x 100. I'm not sure this is mathematically sound. In short: how do I control for control disparities in multiple experiments?
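For concreteness, this is what the two calculations look like in Python (the function names and example percentages are mine, purely for illustration):

```python
def abbott_corrected(observed, control):
    """Classic Abbott correction: removes the natural (control) mortality
    from an observed treatment mortality; all values in percent."""
    return (observed - control) / (100 - control) * 100

def rescaled(observed, susceptible_control):
    """The 'reversed' rescaling from my question: observed mortality
    expressed relative to a susceptible control strain that should hit 100%."""
    return observed / susceptible_control * 100

print(abbott_corrected(60, 20))  # 50.0: half of the non-natural deaths
print(rescaled(60, 80))          # 75.0: 60% scaled up against an 80% control
```

Whether dividing by the control strain's mortality in this way is statistically sound is exactly the open question.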
How to control for mortality in an assay with multiple tests and inconsistent control mortality results?
CC BY-SA 4.0
null
2023-03-13T10:41:14.640
2023-03-13T10:41:14.640
null
null
383081
[ "control-group", "mortality" ]
609275
1
609276
null
5
201
I wonder who first suggested and defined weak stationarity, and which paper it is. It seems that many papers use it as given, but I'm just curious about how it was first defined and proved, and the discussions over the definition.
Who first suggested weak stationarity and strict stationarity?
CC BY-SA 4.0
null
2023-03-13T10:50:11.083
2023-03-13T16:03:14.307
2023-03-13T11:11:22.730
362671
369368
[ "stochastic-processes", "stationarity", "history" ]
609276
2
null
609275
7
null
It was developed by Khintchine in Korrelationstheorie der stationären stochastischen Prozesse, Math. Ann. 109, 604-615. As $\rm [I]$ notes: > The second line of development began with a series of papers in 1932-1934 by the Russian mathematician Khintchine who introduced both stationary and weakly stationary stochastic processes and developed the correlation theory for weakly stationary processes [see Khintchine (1934)]. This development was important not only for time series analysis but was also one of the pioneering works in the modern theory of stochastic processes. Later, Kolmogorov (1941a) developed the geometric theory of weakly stationary time series and Cramér (1942) discovered the important spectral decomposition of weakly stationary processes… --- ## Reference: $\rm [I]$ The Spectral Analysis of Time Series, Lambert H. Koopmans, Academic Press, $1995, $ sec. $2.1, $ p. $29.$
null
CC BY-SA 4.0
null
2023-03-13T11:10:50.040
2023-03-13T16:03:14.307
2023-03-13T16:03:14.307
44269
362671
null
609278
2
null
609273
3
null
So there are a number of issues I have to address here. First off, you should almost never use GCV, as it typically undersmooths and creates overly "wiggly" lines in your GAM (Simpson, 2018). Second, if you use a tensor product term, there is no need to standardize your variables in the model (see Baayen et al., 2017 and Pedersen et al., 2019 for more info on these). I express why standardization is problematic from a practical view [in this answer.](https://stats.stackexchange.com/a/604393/345611) But by far the biggest issue I see in your model is you have actually modeled most of your variables as parametric coefficients, which means they have no estimated curvilinearity in your regression. In order to estimate main effects as well as the interaction of each variable, you should give each their own spline. I'm assuming what you actually wanted was something like this: ``` mod3 <- gam(cont1 ~ disc1 + s(cont2) + s(cont3) + ti(cont2, cont3, by=disc1, k=c(4,4)), data = data_so, method = "REML") ``` An example of what this is doing is shown here in Baayen et al., 2017 (here they use `ti` for the main effect splines, but `s` for main effects is functionally equivalent): [](https://i.stack.imgur.com/qIDWS.png) This may be partially why your interactions don't match expectation. So you should revise your model to have proper spline terms first, then try to understand what is going on thereafter. #### Citations - Baayen, H., Vasishth, S., Kliegl, R., & Bates, D. (2017). The cave of shadows: Addressing the human factor with generalized additive mixed models. Journal of Memory and Language, 94, 206–234. https://doi.org/10.1016/j.jml.2016.11.006 - Pedersen, E. J., Miller, D. L., Simpson, G. L., & Ross, N. (2019). Hierarchical generalized additive models in ecology: An introduction with mgcv. PeerJ, 7, e6876. https://doi.org/10.7717/peerj.6876 - Simpson, G. L. (2018). Modelling palaeoecological time series using generalised additive models. 
Frontiers in Ecology and Evolution, 6(149), 1–21. https://doi.org/10.3389/fevo.2018.00149
null
CC BY-SA 4.0
null
2023-03-13T12:16:21.683
2023-03-13T15:34:56.357
2023-03-13T15:34:56.357
345611
345611
null
609282
1
null
null
0
47
I'm trying to determine whether it's appropriate to use the Cochran-Armitage trend test on this set of data. I don't have any specific hypotheses I need to investigate, just to choose an appropriate test to look for a significant difference. I've attached the contingency table. There is an observed trend in the latter 3 categories but not in the first. Do you need an obvious observed trend throughout the data in order to warrant using CA? [](https://i.stack.imgur.com/6G2OC.png)
Does the Cochran-Armitage test require an obvious trend in the contingency table?
CC BY-SA 4.0
null
2023-03-13T12:49:53.790
2023-03-13T13:51:15.983
2023-03-13T13:51:15.983
11887
382520
[ "hypothesis-testing", "statistical-significance", "contingency-tables" ]
609283
1
null
null
1
45
I have the following problem: [](https://i.stack.imgur.com/HRb7K.jpg) Here is my approach: With the activation function $F(x) = x^2 + 2x + 3$, we can calculate the activation of the two units of the second layer by: $a_1^2 = F(w_{13}\cdot x_1 + w_{23}\cdot x_2) = F(2\cdot 1 + (-3)\cdot (-1)) = F(5) = 38$ $a_2^2 = F(w_{14}\cdot x_1 + w_{24}\cdot x_2) = F(1\cdot 1 + 4\cdot (-1)) = F(-3) = 6$ Using these activations as inputs to the output unit, we can now calculate the output by: $h(x) = F(w_{35}\cdot a_1^2 + w_{45}\cdot a_2^2)$ = $F(2\cdot 38 + (-1)\cdot 6) = F(70) = 5043$ I was wondering whether my approach to the problem was correct or not? I would really appreciate any comments. Thank you all!
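To double-check the arithmetic, I also verified the forward pass numerically (a small Python sketch, with the inputs and weights as I read them from the figure):

```python
def F(x):
    """The given activation function."""
    return x**2 + 2*x + 3

x1, x2 = 1, -1              # network inputs from the problem
a1 = F(2*x1 + (-3)*x2)      # hidden unit 3: F(5) = 38
a2 = F(1*x1 + 4*x2)         # hidden unit 4: F(-3) = 6
h = F(2*a1 + (-1)*a2)       # output unit 5: F(70) = 5043
print(a1, a2, h)            # 38 6 5043
```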
Calculate the output of a Neural Network
CC BY-SA 4.0
null
2023-03-13T12:59:06.480
2023-04-09T00:03:10.297
2023-03-13T13:14:26.880
383080
383080
[ "machine-learning", "self-study", "neural-networks", "linear-algebra", "calculus" ]
609284
1
null
null
0
21
My current work involves asking subjects from different groups to pronounce multiple words, and I am interested in understanding the relationship between word duration and groups. In R syntax, this can be expressed as: `Duration ~ Group` However, since there is a random effect on the durations that is unique to each subject, I plan to include the Subject variable as a random effect by using: `Duration ~ Group + (1|Subject)` The challenge is that the random effect variable `Subject` may be correlated with the fixed effect variable `Group`, leading to collinearity (in the case of LMM) or concurvity (in the case of GAM). Despite searching for answers to similar questions, I have yet to find a solution.
How to deal with collinearity or concurvity between fixed effect variables and random effect variables?
CC BY-SA 4.0
null
2023-03-13T13:11:35.310
2023-04-07T07:36:39.463
2023-04-07T07:36:39.463
1390
294933
[ "multicollinearity", "glmm", "generalized-additive-model", "gamm4", "concurvity" ]
609285
1
null
null
0
36
I am having a debate with one of my co-workers about the variance of a new random variable created by taking the mean of one random variable minus another random variable. The random variable R is created by taking the mean of the random variable A and subtracting the random variable P: $$ R = \bar{A} - P $$ What is the resulting variance of R? I say the variance of R is equal to the variance of P, because when taking the mean of A it becomes a constant, making it have no variance or covariance. $$ Var(R) = Var(\bar{A}) + Var(P) - 2*Cov(\bar{A}, P) = Var(P) $$ My co-worker says $\bar{A}$ is a random variable and its variance needs to be included in the variance of R. $$ Var(R) = Var(\frac{\sum A_i}{n} - P_i) = Var(\frac{\sum A_i}{n}) + Var(P_i) - 2*Cov(\frac{\sum A_i}{n}, P_i) $$ Can anyone help settle this debate?
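If it helps, here is a quick simulation sketch of the setup (assuming, purely for illustration, that the $A_i$ and $P$ are independent standard normals with $n = 10$, and that $\bar{A}$ is recomputed from fresh draws in every replication):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10            # number of A_i averaged into A-bar
reps = 200_000    # simulated replications of the experiment

A = rng.normal(0.0, 1.0, size=(reps, n))   # A_i ~ N(0, 1), independent
P = rng.normal(0.0, 1.0, size=reps)        # P ~ N(0, 1), independent of A
R = A.mean(axis=1) - P                     # R = A-bar - P, one per replication

print(np.var(R))   # ≈ Var(A-bar) + Var(P) = 1/n + 1 = 1.1 under these assumptions
```

Whether $\bar{A}$ should be treated as fixed (computed once and reused) or as random (recomputed each time, as above) is presumably the crux of the disagreement.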
What is the variance of the new random variable created from taking the mean of one random variable minus another random variable?
CC BY-SA 4.0
null
2023-03-13T13:26:46.640
2023-03-13T13:26:46.640
null
null
212398
[ "distributions", "variance", "random-variable" ]
609286
1
null
null
0
30
I am running a sem using `piecewiseSEM` package. One of my three models is gamma distributed. I wanted to calculate scale standardized coefficients as it would be very informative for my study. While reading the sem book ([https://jslefche.github.io/sem_book/index.html](https://jslefche.github.io/sem_book/index.html)) I realised that scaling coefficients from GLMMs requires many assumptions about the mean, variance and R2. I understand the theory behind scaling negative binomial and poisson models' coefficients, but I have no idea how to approach it in gamma distributed models. Is it possible, and more importantly, is is sensible to calculate scaled coefficients for gamma models?
Standardizing coefficients in gamma distributed model - piecewiseSEM
CC BY-SA 4.0
null
2023-03-13T13:30:52.600
2023-03-13T13:30:52.600
null
null
383096
[ "r", "regression-coefficients", "structural-equation-modeling", "r-squared", "gamma-distribution" ]
609287
2
null
609282
0
null
The C-A test looks for a trend over all the ordinal categories. It seems to be a reasonable approach in this situation. But note that it's testing if the trend is different across the dichotomous category variable. It looks like the trend is similar for both categories of Longcommute.
null
CC BY-SA 4.0
null
2023-03-13T13:32:02.993
2023-03-13T13:32:02.993
null
null
166526
null
609288
1
null
null
0
12
I am doing a problem in variational inference. I have some data $y$ and I want to understand which distribution it came from. I have the ELBO defined as $$\text{ELBO}(\phi, D) = \sum_{n=1}^{N} E_{q(z_n|y_n,\phi, D)} \left[ \ln \frac{p(y_n,z_n|D)}{q(z_n|y_n, \phi, D)} \right]$$ The $z_n$ are the latent variables and $D$ is the model parameter; $q(z_n|y_n, \phi, D)$ is the variational distribution and $p(y_n,z_n|D)$ is the joint. My goal is to maximize the ELBO wrt the model parameter $D$ and $\phi$. I have implemented this in PyTorch. Now the issue I am running into is that the optimization heavily depends on the initial conditions, so I wish to put a prior on my $D$. How can I do that?
How to put a prior over model parameters?
CC BY-SA 4.0
null
2023-03-13T13:42:08.103
2023-03-13T13:43:44.613
2023-03-13T13:43:44.613
362671
260464
[ "prior", "variational-bayes", "variational-inference" ]
609289
1
null
null
2
26
I have just started learning about using reproducing kernel Hilbert spaces for regularisation in machine learning. I am looking for some examples of reproducing kernels that produce identifiable and non-identifiable models of the following form: $$ Y_i \sim \text{EF}(\mu_i,\phi),~g(\mu_i) = \gamma + f(x_i), f \in \mathcal H_k $$ where EF is some exponential family distribution, $x_i \in \mathbb{R}^p$ and $\mathcal H_k$ is a RKHS with reproducing kernel $k(x,x')$. We consider only $\mathcal H_k \subset \{ f \colon \mathbb{R}^p \to \mathbb{R} \}$. I would like examples of kernels for which this model is identifiable, and those under which it isn't. My understanding is that the model is identifiable if each distinct set of $(\gamma,\phi, f)$ corresponds to a different exponential family distribution EF. Is this correct? If so, I am then unsure about what restrictions need to be placed on $f$ such that our model is identifiable. More generally, I have seen that if we have $f \in \mathcal C^2(x)$ (twice differentiable functions) then our model isn't identifiable, because we have $g(\mu_i) = \gamma + f(x_i) = (\gamma - c) + (f(x_i) + c)$ and $(f(x) + c) \in \mathcal C^2(x)$. This makes sense to me, but I am unclear about how we can then make this model identifiable by changing the properties of $f$, or rather by restricting the domain on which $f$ exists. I am also unsure how to implement these domain restrictions by using different kernels to define our RKHS. Any help / examples of reproducing kernels would be appreciated.
Identifiability of models on RKHS
CC BY-SA 4.0
null
2023-03-13T13:51:30.123
2023-03-13T13:51:30.123
null
null
307087
[ "machine-learning", "kernel-trick", "identifiability" ]
609291
2
null
561568
2
null
I am not sure I understand the question, because I don't understand the relation with convolutions. If we assume that $X$ and $Y$ are independent, then $pX+(1-p)Y$ is indeed a convolution, but what if they are not assumed independent? If $X$ and $Y$ are assumed independent, then taking $p=1$ and $p=0$ allows us to recover the law of $X$ and $Y$, and therefore of $(X,Y)$. If the law of $(X,Y)$ is uniquely determined by its moments (for example if the law of $(X,Y)$ has compact support), then from the knowledge of the law of $pX+(1-p)Y$ for each $p \in [0,1]$, we can recover all the moments $\mathbb{E}[X^nY^m]$ (since the map $p \mapsto \mathbb{E}[(pX+(1-p)Y)^{n+m}]$ is polynomial, and $\mathbb{E}[X^nY^m]$ can be recovered from the coefficients of said polynomial) and we can therefore recover the law of $(X,Y)$. Following the remark of @whuber, we can also try to recover the characteristic function of $(X,Y)$. Indeed, we have that $\phi_{pX+(1-p)Y}(t) = \phi_{(X,Y)}(tp, t(1-p))$. If we only know the law of $pX +(1-p)Y$ for $p \in [0,1]$, we can recover half of the values (that is, on all pairs $(\lambda_1,\lambda_2)$ that have the same sign) of the characteristic function. If we know the law of $pX + (1-p)Y$ for all $p \in \mathbb{R}$, then we can recover the characteristic function of $(X,Y)$ on all the plane but one line (the line $\{(\lambda, - \lambda) \ \vert \ \lambda \in \mathbb{R}\}$), but by continuity, we can deduce the values of the characteristic function on the missing line.
null
CC BY-SA 4.0
null
2023-03-13T14:07:28.467
2023-03-13T16:14:40.283
2023-03-13T16:14:40.283
189701
189701
null
609292
1
null
null
0
15
I am working on a demand forecasting project using time series. The problem is that there are too many items that need a demand forecasting model, so I want to use the most general approach that will work with most of the data. So far, I have used the z-score and modified z-score to clean out the outliers. I try to avoid using time series decomposition to clean the noise because it requires manually checking the order of differencing for trend and season. So I am looking for another method, like smoothing. Is it possible to smooth the time series, perhaps using a moving average, and train the models on the smoothed values?
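To make the idea concrete, this is the kind of smoothing I have in mind (made-up demand numbers):

```python
import pandas as pd

# a made-up demand series with one outlier-like spike
demand = pd.Series([12, 15, 90, 14, 13, 16, 15, 14])

# centered 3-point moving average; min_periods=1 keeps the series endpoints
smoothed = demand.rolling(window=3, center=True, min_periods=1).mean()
print(smoothed.tolist())
```

My worry is whether training on `smoothed` instead of `demand` throws away real variation that the model would need to learn.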
Implemented model over the smoothing value in timeseries
CC BY-SA 4.0
null
2023-03-13T14:16:45.493
2023-03-13T14:17:45.340
2023-03-13T14:17:45.340
383100
383100
[ "machine-learning", "time-series", "arima", "exponential-smoothing", "prophet" ]
609293
1
null
null
1
51
Suppose I am a doctor trying to predict the probability of a patient getting a heart attack. The way I would approach this, in a clinical setting, is by using a logistic regression. For example: P(HA) = 1/(1+e^-(b0 + b1age + b2race + b3sex + ... + bnvariable_n)). Is it possible to obtain predicted probabilities from other classification models? Say I used XGBoost and it had a better performance than logistic regression for my classification. I then chose the 10 most important features in the model. I would like to obtain an equation that predicts the probability that a patient would get a heart attack knowing their 10 characteristics. I do not want to only classify the person; I would like to obtain a probability. The reason I am asking this is because I would like to predict the different probabilities of different combinations of characteristics of high-risk groups, i.e., I would like to obtain different probability profiles given the different combinations of 10 characteristics. Is this possible for all of the following classification models (linear SVM, KNN, LDA)?
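For example, I have seen that many scikit-learn classifiers expose a `predict_proba` method — is relying on something like this sound? (A sketch on synthetic data; gradient boosting here is a stand-in for XGBoost, and note that an SVM would need `probability=True`:)

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in for the patient data: 10 features, binary outcome
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

for clf in (GradientBoostingClassifier(random_state=0),
            KNeighborsClassifier(n_neighbors=15)):
    clf.fit(X, y)
    proba = clf.predict_proba(X[:3])   # column 1 = P(heart attack = yes)
    print(type(clf).__name__, proba[:, 1])
```

My understanding is that, unlike logistic regression, models like KNN or boosted trees do not reduce to a closed-form equation in the 10 features — you get a probability per input profile rather than a formula — and that their raw probabilities may need calibration (e.g. `sklearn.calibration.CalibratedClassifierCV`) before clinical use.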
Predicting patients' probabilities from different classification models
CC BY-SA 4.0
null
2023-03-13T14:42:32.963
2023-04-09T15:32:45.427
2023-04-09T00:06:21.470
247274
383104
[ "machine-learning", "probability", "predictive-models", "supervised-learning" ]
609294
1
null
null
0
22
I want to compare observations with a set of simulated data, but I'm struggling to find the best way of doing so: - My observations consist of a set of datapoints (~100) in a 3-dimensional space - I simulated these observations using a model that depends on several parameters, as well as on the initial conditions Since there are many variables involved, I can't easily compare observations and simulations, so my idea was to use the observations to define a probability density function, and then check the likelihood of each simulated point being generated by this PDF. I don't know much about statistics, but from what I've read a Kolmogorov–Smirnov test would be the best option if my data were 1D. However, since it's not, I'm not sure what would be the best approach to this comparison. I'd appreciate any ideas
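To sketch what I mean by using the observations to define a PDF (synthetic stand-in data; a kernel density estimate is one concrete way to build the density):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
obs = rng.normal(size=(3, 100))   # observed points: 3 dimensions x 100 points
sim = rng.normal(size=(3, 50))    # simulated points to score

kde = gaussian_kde(obs)                # smooth density estimate from the observations
log_lik = np.log(kde(sim)).sum()       # joint log-likelihood of the simulated points
print(log_lik)
```

A higher total log-likelihood would then mean the simulated points sit where the observations are dense — but I don't know whether this is the right way to turn it into a formal comparison, or whether a multivariate two-sample statistic (e.g. energy distance or maximum mean discrepancy) would be better suited.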
Comparing 3D distribution
CC BY-SA 4.0
null
2023-03-13T15:05:00.943
2023-03-13T15:05:00.943
null
null
383106
[ "statistical-significance", "kolmogorov-smirnov-test", "model-comparison" ]
609296
2
null
78207
0
null
A quick and easy way to smooth any signal would be to use a filter. For use with Python, I'd recommend the `scipy.signal.filtfilt` function, as it doesn't introduce any phase lag.
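A minimal sketch of that approach (the Butterworth design and cutoff here are arbitrary and would need tuning to the signal at hand):

```python
import numpy as np
from scipy import signal

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)                               # 5 Hz signal
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)

b, a = signal.butter(4, 0.1)            # 4th-order low-pass, cutoff at 0.1 of Nyquist
smooth = signal.filtfilt(b, a, noisy)   # forward-backward pass: zero phase lag
```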
null
CC BY-SA 4.0
null
2023-03-13T15:23:37.020
2023-03-13T15:23:37.020
null
null
383110
null
609297
1
null
null
0
74
I have the following dataset: |Date |ID 1 |ID 2 |ID1avg_rtg |ID2avg_rtg |ID1avg_Importance |ID2avg_Importance | |----|----|----|----------|----------|-----------------|-----------------| |16/7/2022 |1001 |1000 |1.21 |1.68 |23 |68 | In this dataset I have the average score of electric output samples and the average importance with which this rating came. So rating 1.21 was achieved with importance 23. The higher the rating the better, but the lower the importance the better. This means that an importance of 1, for example, carries much more weight than an importance of 100. So ID1 scored 1.21 with an importance of 23, while ID2 scored a better rating, 1.68, but with a lesser importance (68). I would like to create a new feature column, called Weighted_rtg. Just multiplying the ID1 rating by the ID1 importance doesn't work, because higher (numerically) importance is less important. But since a higher rating is better, I need some kind of mathematical function to create this new column. Most of my ratings are going to be in the range of 0.5 to 2, so in the new column I would ideally like to keep this range as well, if possible. Importance should also range from 1 to about 200. I thought of using the log function, doing 1/log(rating*importance), but I'm not sure this is a good idea. Are there better ways or functions to model this new column?
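This is how I have been prototyping candidate formulas, with the two rows from the table above (the division-by-log form is just one idea I am trying, not something I am committed to):

```python
import numpy as np

rating = np.array([1.21, 1.68])
importance = np.array([23.0, 68.0])

# candidate 1: dampen importance with log1p, then divide (rating up, importance down)
w_div = rating / np.log1p(importance)

# candidate 2: the 1/log(rating * importance) idea from above
# (checking it: this also DECREASES with rating, which may not be what I want)
w_inv = 1.0 / np.log(rating * importance)

print(w_div, w_inv)
```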
How to correctly create a new weighted feature column from 2 opposing features
CC BY-SA 4.0
null
2023-03-13T15:28:19.673
2023-03-15T16:59:58.727
2023-03-13T15:33:11.723
362671
383084
[ "machine-learning", "feature-selection", "feature-weighting" ]
609299
1
null
null
1
23
Assume I have a pair of vectors $a,b$. Upon discussing their correlation $\rho_{a,b}$ we can usually test whether $\rho_{a,b}=0$, $\rho_{a,b}>0$ or $\rho_{a,b}<0$ (as asked previously [here](https://stats.stackexchange.com/questions/225874/one-sided-hypothesis-test-for-correlation) and [here](https://stats.stackexchange.com/questions/148203/one-sided-significance-test-for-correlation)). These tests can be easily conducted in R using `cor.test`. For Pearson's $\rho$ we even have a specified distribution of the test statistic under the null hypothesis on independence: $$t=\rho\sqrt{\frac{n-2}{1-\rho^2}}\sim t_{n-2}$$ Now, assume I would like to test for a strong positive correlation. That is, my null hypothesis would be $H_0:\rho_{a,b}\ge 0.7$. What would be the distribution of the test statistic? Is there a way to conduct this using `cor.test`?
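For context, the closest thing I have found so far is Fisher's $z$-transformation: under bivariate normality, $\operatorname{atanh}(r)$ is approximately $N(\operatorname{atanh}(\rho_0), 1/(n-3))$ at the boundary value $\rho_0 = 0.7$, which gives a one-sided test (sketched here in Python; the same arithmetic is easy to reproduce by hand in R):

```python
import numpy as np
from scipy import stats

def cor_test_ge(r, n, rho0=0.7):
    """Approximate one-sided p-value for H0: rho >= rho0 vs H1: rho < rho0,
    via Fisher's z-transform (assumes bivariate normality)."""
    z = (np.arctanh(r) - np.arctanh(rho0)) * np.sqrt(n - 3)
    return stats.norm.cdf(z)   # small p => evidence against rho >= rho0

print(cor_test_ge(0.60, 100))  # roughly 0.04
```

As I understand it, the $t_{n-2}$ form quoted above only holds under $\rho = 0$, which is why a transformation like this is needed for $\rho_0 \ne 0$ — but I would still like to know whether `cor.test` (or another standard routine) handles this directly.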
Non-Central One-Sided Correlation Test
CC BY-SA 4.0
null
2023-03-13T15:42:09.887
2023-04-12T13:15:03.907
2023-04-12T13:15:03.907
11887
144600
[ "hypothesis-testing", "correlation", "correlation-test" ]
609300
1
null
null
0
18
In the POMDP literature, the policy is conditioned on the state and the observation/belief, which should summarize the history of the states/actions taken so far. To do so, usually an RNN of some sort is used, so that the recurrent state represents the observation, and the problem goes back to being Markovian. However, in the case of an infinite horizon, I don't see how this can be implemented using any sort of RNN: in particular, if we want to use back-propagation, we need an end, so that we can calculate and backpropagate the gradient, which with an infinite horizon is not possible. So the question now is: how is the training performed? Is that a limitation of backprop that we can't get around? Is the infinite horizon just discretized in chunks of N steps which should "approximate" the infinite horizon itself?
How are RNNs trained for infinite horizon reinforcement learning
CC BY-SA 4.0
null
2023-03-13T16:03:26.757
2023-03-13T16:03:26.757
null
null
346940
[ "reinforcement-learning", "recurrent-neural-network", "markov-decision-process" ]
609302
2
null
554894
4
null
Every $n\times p$ matrix $X$ has a [Singular Value Decomposition](https://stats.stackexchange.com/a/220324/919) $$X = USV^\prime$$ where $U$ is an $n\times d$ matrix for $d\le p,$ $S$ is a nonnegative diagonal $d\times d$ matrix, and $V$ is a $p \times d$ matrix with $U^\prime U = 1_d = V^\prime V$ (that is, $U$ and $V$ are orthogonal matrices). The "not of full rank" situation of the question occurs when $p-d \gt 0.$ Re-expressing the parameter vector as $$(\alpha_1, \alpha_2, \ldots, \alpha_d) = \alpha = V^\prime\beta$$ and the response as the $d$-vector $$z = U^\prime y$$ casts the model in the form $$E[z] = E[U^\prime y] = U^\prime E[y] = U^\prime X\beta = U^\prime(USV^\prime)(V\alpha) = (U^\prime U)S(V^\prime V)\alpha = S\alpha,$$ which is a set of $d$ equations of the form $E[z_j] = s_j\alpha_j,$ $j = 1, 2, \ldots, d.$ The least squares objective function can therefore be universally expressed by applying basic matrix identities and completing the square as $$\begin{aligned} ||y - X\beta||^2 &= (y-X\beta)^\prime(y-X\beta)\\ &= (y - US\alpha)^\prime(y - US\alpha)\\ &= \alpha^\prime(SU^\prime\,US^\prime)\alpha- 2y^\prime US\alpha + y^\prime y\\ &= \alpha^\prime (S^2)\alpha - 2y^\prime US\alpha + y^\prime y\\ &= \alpha^\prime (S^2)\alpha - 2z^\prime S\alpha + ||y||^2\\ &= \left(\sum_{j=1}^d s_j^2 \alpha_j^2 - 2s_jz_j\alpha_j\right) + ||y||^2\\ &= \left(\sum_{j=1}^d \left(s_j \alpha_j\right)^2 - 2\left(s_j\alpha_j\,z_j\right) + z_j^2\right) + ||y||^2 - \sum_{j=1}^d z_j^2\\ &= \sum_{j=1}^d \left(s_j\alpha_j - z_j\right)^2 + \left(||y||^2 - ||z||^2\right). 
\end{aligned}$$ The right hand side is a constant $||y||^2 - ||z||^2$ plus a sum of squares $(s_j\alpha_j - z_j)^2.$ Obviously it is uniquely minimized when all the squares are zero, whence $$\hat\alpha_j = \frac{z_j}{s_j}$$ for $j = 1, 2, \ldots, d.$ The objective is therefore minimized, in terms of the original parameter vector $\beta,$ by all solutions to $$V^\prime \hat\beta = \hat\alpha.$$ Linear algebra tells us there is a $(p-d)$-dimensional space of solutions given by picking any $\hat\beta_0$ for which $V^\prime \hat\beta_0 = \hat\alpha$ and adding any element of the kernel of $V^\prime$ (the solutions to the homogeneous system of equations $V^\prime\beta = 0$). The value of the least squares objective will have a constant value of $||y||^2 - ||z||^2$ on this space.
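A short numerical check of this construction (a Python sketch; the rank-3 design below is arbitrary, built by duplicating two columns):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
B = rng.normal(size=(n, 3))
X = np.column_stack([B, B[:, :2]])   # 5-column design, rank 3 < p
y = rng.normal(size=n)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
d = np.linalg.matrix_rank(X)         # numerical rank (3 here)

alpha = (U.T @ y)[:d] / s[:d]        # alpha_j = z_j / s_j
beta0 = Vt[:d].T @ alpha             # one particular (minimum-norm) solution
# any beta0 + v with V'v = 0 attains the same minimum of the objective
```

Since `beta0` lies in the row space of $X$, it coincides with the minimum-norm least squares solution, which can be confirmed against `np.linalg.lstsq`; adding any vector in the kernel of $V_d^\prime$ (a 2-dimensional space here) leaves the fitted values, and hence the objective, unchanged.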
null
CC BY-SA 4.0
null
2023-03-13T16:23:47.273
2023-03-13T16:23:47.273
null
null
919
null
609303
2
null
608439
1
null
There is another way to obtain these coefficients in a single regression. It consists in having six variables, one for each coefficient we want to estimate. If all you want are the OLS coefficients, then the idea is that we vectorize the response, and then place the correct values in each of the six columns so they correspond to the right problem. The rest of the elements should be zero, so they cancel the respective coefficient. See the example below: ``` # generate X, Y, Z as a 100x3 matrix A = matrix(rnorm(300), ncol=3) # generate a 3x3 mixing matrix M M <- matrix(rnorm(9), ncol=3) # generate a 100x3 matrix of observed data B <- data.frame(A %*% M) mu = colMeans(B) A = B B = scale(B, center=TRUE, scale=FALSE) vecA = Reduce(c, B) vecA12 = vecA13 = vecA23 = vecA21 = vecA31 = vecA32 = vecA * 0 vecA12[101:200] = vecA[1:100] vecA13[201:300] = vecA[1:100] vecA23[201:300] = vecA[101:200] vecA21[1:100] = vecA[101:200] vecA31[1:100] = vecA[201:300] vecA32[101:200] = vecA[201:300] coef(lm(vecA ~ 0 + vecA12 + vecA13 + vecA23 + vecA21 + vecA31 + vecA32)) vecA12 vecA13 vecA23 vecA21 vecA31 vecA32 # 0.09034396 0.67427978 -1.03436202 0.17768726 0.45582021 -0.35552393 ``` Compare with: ``` lm1 = lm(X1 ~ 0 + ., data = data.frame(B)) lm2 = lm(X2 ~ 0 + ., data = data.frame(B)) lm3 = lm(X3 ~ 0 + ., data = data.frame(B)) coef(lm1) # X2 X3 #0.1776873 0.4558202 coef(lm2) # X1 X3 # 0.09034396 -0.35552393 coef(lm3) # X1 X2 # 0.6742798 -1.0343620 ``` --- Note that this only works for getting the coefficients: if you want the test-statistics, covariance matrix of coefficients, or confidence intervals, these will obviously differ from the individual regressions: the assumption of homoscedasticity is obviously violated.
null
CC BY-SA 4.0
null
2023-03-13T16:31:58.903
2023-03-14T14:53:10.163
2023-03-14T14:53:10.163
60613
60613
null
609304
1
null
null
0
35
Background: I have a longitudinal analysis and am running a linear mixed effects model in R (nlme library). I tried 3 different models: - Random intercept only, - Random intercept + Autocorrelation structure on the errors, and - Autocorrelation structure on the errors only (using gls() command). I fit model 3 because I've been taught that sometimes an autocorrelation structure is enough for longitudinal data. For model 1, variance of random effect (intercept) was 676.9 (and accounted for 62% of total variance). AIC was 8444.01. ``` m1 <- lme(y ~ x + z, random = ~1|id, na.action=na.omit, data=data, method="REML") ``` For model 2, variance of random effect (intercept) was much smaller, 0.001 (and accounted thus for <1% of total variance). AIC was 7830.01. ``` m2 <- lme(y ~ x + z, random = ~1|id, correlation = corAR1(form=~1|id), na.action=na.omit, data=data, method="REML") ``` For model 3, AIC was 7828.01. ``` m3 <- gls(y ~ x + z, correlation = corAR1(form=~1|id), na.action=na.omit, data=data, method="REML") ``` My question: Does this mean that the autocorrelation structure alone was enough to explain much of the variance in the outcome? Should I abandon the LME model, and use model 3 instead of model 2, since it estimates 1 fewer parameter and has comparable AIC? Or am I misinterpreting the output?
How to pick between models with random intercept only VS. autocorrelation structure only VS. both?
CC BY-SA 4.0
null
2023-03-13T16:22:46.917
2023-03-14T15:15:33.560
2023-03-14T15:15:33.560
11887
382890
[ "r", "mixed-model", "lme4-nlme", "panel-data", "autocorrelation" ]
609305
1
null
null
3
41
I study the use of the emergency treatment by children with asthma at home using electronic monitoring devices. For each child, I will have the date and time of each actuation of their emergency treatment. Thus, on an x-axis corresponding to time, I have for each asthma attack the following patterns: ``` Child A: ++++++++ + + + + + Child B: +++ ++++ ++++ ++++ +++ Child Etc. ``` I also know if children improved after their use of treatment (= success) or not (= failure). I am interested in determining the pattern of use of treatment associated with success (in other words, I would like to know if it is better to use a lot of treatment in the early phase then decrease quickly like child A, or to give a few puffs every X minutes like child B, or any other pattern, like child etc.). I have no a priori idea of the pattern I will find, thus my idea would be to model the pattern of use of treatment for asthma attacks where symptoms improved (= success), and the pattern of use of treatment for asthma attacks where symptoms did not improve (= failure), and to compare the two patterns. I would be very interested in your ideas on whether it would make sense to compare the two "patterns of use", and how from a statistical perspective.
Comparing two curves of medical longitudinal data
CC BY-SA 4.0
null
2023-03-13T16:52:25.220
2023-03-16T16:04:49.830
2023-03-16T16:04:49.830
44269
383117
[ "time-series", "mixed-model", "panel-data", "biostatistics" ]
609306
2
null
570311
2
null
If you want to check for a correlation of $1$, there is no statistical test to run. A correlation of $1$ means that the variables move together perfectly (and linearly). Empirically (in data), there can not be any deviation from a perfect linear relationship. Therefore, if the empirical correlation is not exactly $1$, the true correlation is not exactly $1$, and you know this with certainty. [This answer](https://stats.stackexchange.com/a/438952/247274) of mine is probably a duplicate.
null
CC BY-SA 4.0
null
2023-03-13T17:05:22.733
2023-03-13T17:05:22.733
null
null
247274
null
609307
1
null
null
0
23
During capture-recapture sampling, we aim to estimate a population size (e.g. of organisms) by capturing a sample of size $n_1$, marking them, releasing them, then re-sampling (assuming they have mixed) with size $n_2$ and counting how many are marked. If we find that $m$ out of $n_2$ are marked, then we estimate the population size by simply equating proportions, $\frac{n_1}{N} = \frac{m}{n_2}$, so $N = \frac{n_1 n_2}{m}$. This technique is good because the estimate of $N$ is independent of any population growth that may occur between samples.

I'm interested in looking at exactly how likely it is that the true population is $N$, or within some confidence interval. I want to estimate the distribution of $N$, viewed as a random variable.

I have learned that if we observe $\alpha - 1$ 'successes' and $\beta - 1$ 'failures' in a binomial model, then the probability $p$ of observing a success has a Beta distribution, $p \sim \text{Beta}(\alpha, \beta)$ (which is what you get if you apply Bayes' rule to the binomial distribution), and that the number of successes has a Beta-Binomial distribution, $m \sim \text{BetaBin}(n, \alpha, \beta)$. If this understanding is wrong, please correct me.

My question is, is it possible to estimate the expectation and variance of the full population by doing the following?

- We know the values of $n_1$, $n_2$ and $m$.
- The prevalence of marked organisms is $p \sim \text{Beta}(m + 1, n_2 - m + 1)$.
- The conditional distribution of the initial number of marked organisms is $n \mid N \sim \text{BetaBin}(N, m + 1, n_2 - m + 1)$.
- By Bayes' rule (assuming a uniform prior over $N$, so that the $P(N = i)$ terms cancel in the last step), $$ P(N = N | n = n_1) = \dfrac{P(N = N)}{P(n = n_1)} \times P(n = n_1 | N = N) \\ = \dfrac{P(N = N)}{\sum_{i=0}^{\infty} P(n = n_1 | N = i) P(N=i)} \times P(n = n_1 | N = N) \\ = \dfrac{P(n = n_1 | N = N)}{\sum_{i=0}^{\infty} P(n = n_1 | N = i) } $$
- We now have the distribution for the population and can compute the mean and variance.

Is this even close to a valid method? I am not competent or confident with the maths here and am interested to see how it is done. Thank you.
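Here is a minimal numeric sketch of the calculation I am proposing, assuming a uniform prior over $N$ and truncating the infinite sum at some large `N_max` (the counts $n_1$, $n_2$, $m$ below are made up):

```python
import math

def log_betabin_pmf(k, n, a, b):
    """log P(K = k) for K ~ BetaBin(n, a, b)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b) + math.lgamma(a + b)
            - math.lgamma(a) - math.lgamma(b) - math.lgamma(n + a + b))

n1, n2, m = 100, 80, 20            # made-up capture counts
a, b = m + 1, n2 - m + 1           # Beta(m + 1, n2 - m + 1) posterior for p
N_max = 5000                       # truncation point for the infinite sum

# unnormalised posterior over N; the uniform prior cancels on normalising
w = [math.exp(log_betabin_pmf(n1, N, a, b)) if N >= n1 else 0.0
     for N in range(N_max + 1)]
total = sum(w)
posterior = [wi / total for wi in w]

mean = sum(N * p for N, p in enumerate(posterior))
var = sum((N - mean) ** 2 * p for N, p in enumerate(posterior))
print(mean, var)
```

For comparison, the Lincoln-Petersen estimate for these made-up counts would be $n_1 n_2 / m = 400$.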
Use of the Beta-Binomial Distribution in Capture-Recapture Sampling
CC BY-SA 4.0
null
2023-03-13T17:45:09.060
2023-03-13T17:45:09.060
null
null
291238
[ "sampling", "fitting", "beta-distribution", "beta-binomial-distribution", "capture-mark-recapture" ]
609309
2
null
493762
2
null
When you collect data, there is always some chance that, just due to rotten luck, you observe something in the data that is not really there. This is why, for instance, we can observe two groups with unequal empirical means of $\bar x_1$ and $\bar x_2$, yet not immediately conclude that $\mu_1\ne\mu_2$. It could be that, just due to some bad luck, we happen to observe unequal sample means, despite equal population means. In fact, it can be that $\bar x_1<\bar x_2$, yet $\mu_1>\mu_2$, as the simulation below shows.

```
set.seed(2023)
N <- 5

# Group 1 has the higher population mean mu
#
group1 <- rnorm(N, 0.1, 1)
group2 <- rnorm(N, 0.0, 1)

# Yet group 2 has the higher sample mean x-bar
#
mean(group1) # -0.6522852
mean(group2) # 0.06226405
```

How this relates to regression and an $R^2$ better than chance is that, even if you add a totally unrelated variable to your regression model, it might be that, just due to the luck of what you happen to observe, there is a slight empirical relationship. Therefore, the $R^2$ will increase slightly upon adding the variable, despite the lack of a relationship.

```
set.seed(2023)
N <- 5
x <- runif(N)
y <- rnorm(N)
L <- lm(y ~ x)
summary(L)$r.squared
```

Despite the fact that the simulation above has `x` and `y` totally unrelated to each other, just due to the luck of the values that happen to be observed/simulated, there is a fairly high $R^2 = 0.784$. If you do this kind of simulation many times, you see that high $R^2$ values can occur fairly often.

```
set.seed(2023)
N <- 5
R <- 10000
x <- runif(N)
r2 <- rep(NA, R)
for (i in 1:R){
  y <- rnorm(N)
  r2[i] <- (cor(x, y))^2
}
plot(r2, ecdf(r2)(r2))
abline(v = 0.5)
```

[](https://i.stack.imgur.com/1O6Eg.png)

From the graph, we see that about $20\%$ of observed $R^2$ values exceed $0.5$, despite there being absolutely no relationship between `x` and `y`.
Therefore, if you run a regression and get $R^2 = 0.5$, despite that looking like a solid score (at least in many cases), it is so easy to get such a value just by chance that you cannot really consider your performance to be above the chance level. Fortunately, as the sample size increases, it becomes highly unlikely to observe high $R^2$ values just by chance. For instance, when I bump up the sample size to $50$ (so still not very high), not one $R^2$ value in $10,000$ iterations exceeds $0.28$.

```
set.seed(2023)
N <- 50
R <- 10000
x <- runif(N)
r2 <- rep(NA, R)
for (i in 1:R){
  y <- rnorm(N)
  r2[i] <- (cor(x, y))^2
}
max(r2) # 0.2792793
```

Increasing the sample size to $500$ results in a highest observed $R^2$ a bit above $0.03$.

What seems to be happening in that `sklearn` documentation is that feature importance is tested by permuting the values. By doing so, you break any relationship between the feature and the outcome, except for the occasional bad break. When the $R^2$ with the original feature is higher than the vast majority of $R^2$ values that result from permuting the feature, you conclude that there is a true relationship between the feature and your outcome ($y$). I will demonstrate below.

```
set.seed(2023)
N <- 250
R <- 10000
x <- runif(N)

# Make y depend on x
#
y <- x + rnorm(N)
observed_r2 <- (cor(x, y))^2
r2 <- rep(NA, R)
for (i in 1:R){

  # Scramble the values of x to break the dependence
  #
  x <- sample(x, N, replace = F)
  r2[i] <- (cor(x, y))^2
}
plot(r2, ecdf(r2)(r2), xlim = c(0, max(c(r2, observed_r2))))
abline(v = observed_r2)
```

Not one of the $10,000$ permutations gave an $R^2$ greater than the observed $R^2$ with the original $x$. That seems like evidence of the $x$ variable being predictive of $y$, which we can see from the simulation is true.

[](https://i.stack.imgur.com/tAzQ7.png)

Compare this with a situation where $x$ has no relationship with $y$.
```
set.seed(2023)
N <- 250
R <- 10000
x <- runif(N)

# Make y independent of x
#
y <- 0*x + rnorm(N)
observed_r2 <- (cor(x, y))^2
r2 <- rep(NA, R)
for (i in 1:R){

  # Scramble the values of x to break the dependence
  #
  x <- sample(x, N, replace = F)
  r2[i] <- (cor(x, y))^2
}
plot(r2, ecdf(r2)(r2), xlim = c(0, max(c(r2, observed_r2))))
abline(v = observed_r2)
```

[](https://i.stack.imgur.com/YtWRB.png)

Many $R^2$ values with permuted $x$ exceed the observed $R^2$ using the original $x$, suggesting that $x$ is not contributing to predicting $y$ any better than it would be expected to by chance alone.

Where this can be quite useful in a machine learning context is if you have a complicated model. Perhaps you have lots of interaction terms or nonlinear functions of the original features, such as $y = x_1 + x_2 + x_1^2 + x_2^2 + x_1x_2$, and you just want to know if $x_1$ is a contributor of any kind to $y$. If you permute $x_1$, you can figure that out. I will simulate it below.

```
set.seed(2023)
N <- 2500
R <- 10000
x1 <- runif(N)
x2 <- runif(N)

# Make y depend only on x2
#
y <- x2 + x2^2 + rnorm(N)
L <- lm(y ~ x1 + x2 + I(x1^2) + I(x2^2) + x1*x2)
observed_r2 <- summary(L)$r.squared
r2 <- rep(NA, R)
for (i in 1:R){

  # Scramble the values of x1 to break the dependence
  #
  x1 <- sample(x1, N, replace = F)
  L <- lm(y ~ x1 + x2 + I(x1^2) + I(x2^2) + x1*x2)
  r2[i] <- summary(L)$r.squared
  if (i %% 100 == 0){
    print(i/R*100)
  }
}
plot(r2, ecdf(r2)(r2), xlim = c(min(c(r2, observed_r2)), max(c(r2, observed_r2))))
abline(v = observed_r2)
1 - ecdf(r2)(observed_r2)
```

[](https://i.stack.imgur.com/Ye9Zs.png)

About $15\%$ of models with permuted $x_1$ result in an $R^2$ exceeding the $R^2$ of the original model. Does that make it sound like $x_1$ plays a role in determining $y?$ I reveal the answer below, but think about it!

> You can see from the code that the x1 variable plays no role in determining y, consistent with the permutation test.
> After all, when `y` is determined in the `y <- x2 + x2^2 + rnorm(N)` line, `x1` does not appear at all.
null
CC BY-SA 4.0
null
2023-03-13T17:49:09.543
2023-03-13T17:49:09.543
null
null
247274
null
609310
1
null
null
2
48
I've been asked to help determine the design of an RCT. The design is primarily interested in the benefit of using a laser in some dental procedure in addition to a local antibiotic and scaling, versus just the antibiotic and scaling. The outcome is gum attachment loss. A dentist will probe the area between the tooth and gum to measure the depth where there is no attachment between the two. This is done on a tooth-by-tooth basis and is measured in mm.

The aim of the RCT is to determine efficacy of the laser (meaning, does attachment loss decrease when using the laser in addition to drug and scaling). Because there are two groups (laser + drug + scaling vs. drug + scaling), it would be easy to suggest randomizing subjects to one of the two and measuring attachment loss on a single tooth or set of teeth. I would run an ANCOVA controlling for baseline attachment loss plus other clinical factors associated with attachment loss (like smoking status).

I believe there is the option to also use patients as controls for themselves, which should improve efficiency. Under the assumption that all teeth in the mouth are exchangeable, the proposition would be:

- Randomly select one side of the mouth to receive treatment (laser + drug + scaling) and the other to receive control (drug + scaling).
- Match two teeth on either side on baseline attachment loss.
- Treat those teeth and measure improvement after treatment.
- Compare the difference in attachment loss between the two teeth and perform a regression on this.

To be clear, if $y_t$ is the attachment loss after treatment on the treated tooth, and $y_c$ is the attachment loss after treatment on the control tooth, then the ANCOVA would be on $\delta = y_t - y_c$.

I'm interested in knowing whether my conception of matching teeth is valid and whether this analysis is good, or if there is some other analysis which might be better suited.
Design of a Dental RCT in Which Patients Can Act as Their Own Control
CC BY-SA 4.0
null
2023-03-13T17:58:25.817
2023-03-13T18:34:49.440
null
null
111259
[ "modeling", "experiment-design", "ancova" ]
609311
1
null
null
0
16
I have a dataset with two variables. The outcome is a binary variable that can be either 0 or 1. The other is a datetime variable showing when the outcome was measured; it looks like this: `2022-11-17T13:45:00.000Z`. What should I do to test whether there is a lag in the correlation between the columns? The specific question I am trying to answer is: assuming seasonality, is there a lag in the correlation within a 24-hour window? I am kind of lost, given that I could not find much on the internet that would help me attack this problem. I would appreciate any advice/guidance on what type of model and approach to use.
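To make the question concrete, here is a pure-Python sketch of what I mean by a within-24-hour lagged correlation, computed on synthetic data (the hour-of-day effect below is made up): bin the outcome hourly and compute the sample autocorrelation at lags 1 through 24.

```python
import math
import random

random.seed(0)
n = 24 * 365                        # a year of hourly observations
# synthetic binary outcome whose probability varies with hour of day
y = [1 if random.random() < 0.2 + 0.15 * math.sin(2 * math.pi * (t % 24) / 24) else 0
     for t in range(n)]

mu = sum(y) / n
var = sum((v - mu) ** 2 for v in y) / n

def acf(lag):
    """Sample autocorrelation of the binary series at the given lag."""
    return sum((y[t] - mu) * (y[t - lag] - mu) for t in range(lag, n)) / (n * var)

# correlation at every lag inside a 24-hour window
acfs = {k: acf(k) for k in range(1, 25)}
print(acfs[24], acfs[12])
```

With this synthetic daily cycle, the autocorrelation is positive at the 24-hour lag and negative at the 12-hour lag; is something along these lines a sensible way to frame the real problem?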
Finding the correlation pattern of a binary variable in a time series
CC BY-SA 4.0
null
2023-03-13T18:02:14.100
2023-03-15T02:03:51.333
2023-03-15T02:03:51.333
11887
296486
[ "time-series", "correlation", "autocorrelation", "binary-data" ]
609312
1
null
null
0
29
I'd like to use SHAP in a specific manner to explain the contribution of features to the average score per date.

For example, let's craft a toy dataset:

```
import pandas as pd
import numpy as np
import xgboost as xgb
import shap

X, y = shap.datasets.adult()
X, y = X.iloc[:100], y[:100]
X['date_code'] = np.random.choice(10, len(X))  # date code corresponds to some month
```

`X` has the next values for `date_code`:

```
date_code
6    14
5    12
9    12
0    11
3    11
7    11
4    10
2     8
8     6
1     5
dtype: int64
```

Then we train an XGBoost model:

```
model = xgb.XGBClassifier(n_estimators=100, max_depth=2).\
    fit(X.drop(['date_code'], axis=1).values, y)
```

Example of plot:

```
X['preds'] = model.predict_proba(X.drop(['date_code'], axis=1))[:,1]
X.groupby('date_code')['preds'].mean().plot(kind='line')
```

[](https://i.stack.imgur.com/3tx0F.png)

As we can see, my new dataset has `date_code` as index and the mean score as target. Now, I'd like to explain this plot similarly to `force_plot`, in terms of the original features, only with `date_code` as sample index and the mean score as output.
The example SHAP plot is supposed to look similar to this one:

[](https://i.stack.imgur.com/gCnI3.png)

```
def wrapper_model(X):
    out = []
    # drop previous date_code and preds for prediction
    preds = model.predict_proba(X[:,:-2])[:, 1].reshape(-1,1)
    # add new preds column
    X = np.concatenate([X, preds], axis=1)
    grs = np.unique(X[:,-3])
    # compute mean within each group
    for g in grs:
        mask = (X[:,-3] == int(g)).astype(int)
        out.append(np.mean(X[mask, -1]))
    return np.array(out)

print(wrapper_model(X.values))
explainer = shap.KernelExplainer(wrapper_model, X)
shap_values = explainer(X)
```

But I get the error:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[200], line 12
     10     return np.array(out)
     11 print(wrapper_model(X.values))
---> 12 explainer = shap.KernelExplainer(wrapper_model, X)
     13 shap_values = explainer(X)

File ~/miniconda3/lib/python3.9/site-packages/shap/explainers/_kernel.py:95, in Kernel.__init__(self, model, data, link, **kwargs)
     93 if safe_isinstance(model_null, "tensorflow.python.framework.ops.EagerTensor"):
     94     model_null = model_null.numpy()
---> 95 self.fnull = np.sum((model_null.T * self.data.weights).T, 0)
     96 self.expected_value = self.linkfv(self.fnull)
     98 # see if we have a vector output

ValueError: operands could not be broadcast together with shapes (10,) (100,)
```

SHAP seems to allow interpreting this kind of transformation of the data, but I haven't found out how to implement it.
SHAP feature contribution plot for explaining aggregated model metrics
CC BY-SA 4.0
null
2023-03-13T18:04:41.347
2023-03-13T18:16:39.177
2023-03-13T18:16:39.177
226615
226615
[ "python", "shapley-value", "explanatory-models", "explainable-ai" ]
609313
2
null
493251
2
null
By using an instrumental variable, you are not estimating the coefficients using vanilla ordinary least squares linear regression. Consequently, you can wind up with a higher sum of squared residuals than you would if you predicted the mean value of $y$ every time. This results in the numerator of the fraction below exceeding the denominator, so the entire formula is less than zero.

$$ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$

This is a standard way to calculate $R^2$, equal to the squared correlation between predicted and true values in the OLS linear regression case, and equal to the squared correlation between $x$ and $y$ in the simple linear regression case (again, assuming OLS estimation). [This is the equation that allows for the "proportion of variance explained" interpretation of $R^2$, too.](https://stats.stackexchange.com/questions/551915/interpreting-nonlinear-regression-r2) All of this is to say that such a formula for $R^2$ is totally reasonable and sure seems to be how your software is doing the calculation. (After all, squaring a real correlation between the predictions and true values will not result in a number less than zero, so your software is not squaring a Pearson correlation and must be doing some other calculation.)

When you do estimation using a method other than OLS, it can happen that the numerator exceeds the denominator. I will demonstrate below using an estimation based on minimizing the sum of absolute residuals, but the idea is the same if you use any other non-OLS estimation technique (such as instrumental variables).

```
library(quantreg)
set.seed(2023)
N <- 10
x <- runif(N)
y <- rnorm(N)
L <- quantreg::rq(y ~ x, tau = 0.5)
preds <- predict(L)

sum((y - preds)^2)   # I get 7.260747
sum((y - mean(y))^2) # I get 4.731334

1 - sum((y - preds)^2)/sum((y - mean(y))^2)
# Consequently, the R^2 is negative at -0.5346087
```
null
CC BY-SA 4.0
null
2023-03-13T18:04:46.133
2023-03-13T18:04:46.133
null
null
247274
null
609315
1
null
null
1
27
If I want to see whether the individual predictors in my multiple regression have a linear relationship with the dependent variable, I can do the following:

```
plot(independent_variable, dependent_variable)
```

However, if my dependent variable is a latent variable in a path model, how can I test for a linear relationship with each predictor of this latent variable? Do I even need to do this?
How can I investigate the linearity between latent variables and their predictors in a path model?
CC BY-SA 4.0
null
2023-03-12T20:06:21.437
2023-03-14T15:16:29.630
2023-03-14T15:16:29.630
11887
null
[ "r", "regression", "latent-variable" ]
609317
2
null
609310
1
null
This [split-mouth design](https://dx.doi.org/10.1002/sim.3634) is pretty common in dental research and sounds pretty reasonable. You want to make sure the sides are truly randomly selected (and at least vaguely comparable a priori, because otherwise e.g. comparing one side that barely needed treatment vs. a side that was in really bad shape might not be ideal), and similarly for the selection of the teeth to be matched. You also want to do as much as possible to minimize bias in the outcome assessment (ideally done by a person who is not aware which side was treated which way, or, for consistency reasons, always by the same person for all patients).

Analysis-wise, forming a within-subject difference is one way of proceeding that makes perfect sense. An alternative is to run e.g. a model with a subject random effect (modeling the average for each side), or to even model each tooth separately. Given that you intend to always use two teeth per side, the different options shouldn't make too much of a difference. However, this also limits you:

- Patients with only one side needing treatment (or only needing treatment for one tooth on one of the two sides) cannot be used (while with the last option that would be possible).
- You cannot adjust for any covariates on a tooth level (e.g. some pre-treatment assessment, something you can measure from an X-ray, or whatever else might make sense), while with the last option you could.
null
CC BY-SA 4.0
null
2023-03-13T18:30:49.347
2023-03-13T18:30:49.347
null
null
86652
null
609318
2
null
609310
1
null
I like the idea of randomizing each side of the mouth for each patient, and also matching teeth. I wonder if it's best to match by position of the teeth (i.e. using symmetry of the mouth) rather than by baseline loss, especially since you're also controlling for baseline loss. I'm not sure what the consequences of the different matchings would be.

In terms of modeling, I would use a random effects model. Enumerate the teeth on the right side from $t = 1, \ldots, T$, and assume that each of these teeth can be matched to the left side. For each measurement ($2T$ measurements per subject) we note the subject, the tooth, the side, the treatment/control status, and covariates of the patients. Then the model in lme4 would be

```
library(lme4)
fit <- lmer(y ~ treatment + covariates + (1|subject) + (1|subject:tooth), data = dat)
```

and you would look at the coefficient for `treatment`.
null
CC BY-SA 4.0
null
2023-03-13T18:34:49.440
2023-03-13T18:34:49.440
null
null
362564
null
609319
2
null
389325
1
null
This is a crude way of saying that a large sample size allows you to use more parameters without sacrificing estimation accuracy of those parameters. That is, when you have a large sample size, it is "safe" to use many features. The risk of overfitting to those features is less severe than it would be with a small sample size. Since it is "safe" to use many features, that allows you to control for many factors that might influence the outcome. You can get a smaller residual variance by controlling for these variables, allowing you to have greater power to detect effects by your variables of interest. This is all a bit crude, but I do believe that to be the idea behind the quote, and I agree with the quote to a limited extent. However, I do not believe the quote to suggest that including many features helps with overfitting. The quote seems to allude to the fact that, due to the large sample size, overfitting is less of a concern if you want to include, say, twenty features than it would be if you had a small sample size.
null
CC BY-SA 4.0
null
2023-03-13T18:50:21.200
2023-03-13T18:50:21.200
null
null
247274
null
609320
1
null
null
1
28
Does significance testing (p-values, the 95% prediction interval of a forecast, etc.) of an ARIMA model require the time series data to be normally distributed? As in, according to the tests used to assess normality before linear regression (histogram, QQ plot, etc.)? If not, why not? Similarly, what about state-space models, such as ETS?
Normality of data used to estimate significance of ARIMA parameters?
CC BY-SA 4.0
null
2023-03-13T18:57:22.730
2023-03-13T20:01:11.847
2023-03-13T20:01:11.847
53690
383125
[ "time-series", "hypothesis-testing", "arima", "normality-assumption", "state-space-models" ]
609321
2
null
279855
1
null
I would argue that you should perform no oversampling at all. Oversampling almost always arises from using accuracy as a performance metric, since class imbalance can mean that your model with an impressive-looking $98\%$ accuracy is outperformed by a model that predicts the majority class every time and scores $99\%$ accuracy. This is related to my discussions [here](https://stats.stackexchange.com/questions/605450/is-the-proportion-classified-correctly-a-reasonable-analogue-of-r2-for-a-clas) and [here](https://stats.stackexchange.com/questions/605818/how-to-interpret-the-ucla-adjusted-count-logistic-regression-pseudo-r2) about comparing classification accuracy to the accuracy of such a baseline, "must-beat" model, and I believe that discussion exposes accuracy as being less useful than it might first seem.

However, [accuracy is problematic even when the classes are balanced](https://stats.stackexchange.com/a/312787/247274), and [metrics like $F_1$ score, sensitivity, and specificity offer minimal improvement](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp). This is because most models do not make hard classifications. They make predictions on a continuum (typically on the interval $[0,1]$), and the hard classifications come from applying some kind of threshold. This is troubling for multiple reasons.

- The typical software default of $0.5$ might be wildly inappropriate (even for balanced classes, as Stephan Kolassa explains in his explanation of why accuracy is a problematic performance metric). If there is a small probability of something happening that would lead to a disaster if you miss it, you might be inclined to proceed as if that event will happen, even if it is unlikely. If this sounds silly, perhaps an example will help.
There is (hopefully) a pretty small probability of being in a car accident, but there is so little cost to buckling when there is no accident and such a potential cost to not buckling when there is a wreck that most people buckle their seat belts.
- It might be that multiple decisions can be made, despite there being only two categories. For instance, an extreme prediction either way might be taken as predicting a categorical outcome, but a prediction in the middle might be more of a signal to collect more information.

Hopefully, some of the below links can convince you of the probabilities being useful and that you need not mess with the data by oversampling in order to get accurate predicted probabilities.

[Cross Validated: Why is accuracy not the best measure for assessing classification models?](https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models)

[Cross Validated: Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he)

[Cross Validated: Academic reference on the drawbacks of accuracy, F1 score, sensitivity and/or specificity](https://stats.stackexchange.com/questions/603663/academic-reference-on-the-drawbacks-of-accuracy-f1-score-sensitivity-and-or-sp)

[Cross Validated: Calculating the Brier or log score from the confusion matrix, or from accuracy, sensitivity, specificity, F1 score etc](https://stats.stackexchange.com/a/603869/247274)

[Cross Validated: Upweight minority class vs. downsample+upweight majority class?](https://stats.stackexchange.com/questions/569878/upweight-minority-class-vs-downsampleupweight-majority-class/609131#609131)

[Cross Validated Meta: Profusion of threads on imbalanced data - can we merge/deem canonical any?](https://stats.meta.stackexchange.com/questions/6349/profusion-of-threads-on-imbalanced-data-can-we-merge-deem-canonical-any)

[Frank Harrell's Blog: Classification vs. Prediction](https://hbiostat.org/blog/post/classification/index.html)

[Frank Harrell's Blog: Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules](https://hbiostat.org/blog/post/class-damage/index.html)

These links are (shamelessly) taken from an [answer](https://stats.stackexchange.com/a/609206/247274) I posted in the past few days. I bring this up because class imbalance and what models output seem to be poorly understood by many machine learning practitioners.
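To make the asymmetric-cost point above concrete, here is a minimal sketch (the costs are made up): the expected-cost-minimizing cutoff on a predicted probability is $c_{FP}/(c_{FP}+c_{FN})$, which can sit far below the software default of $0.5$.

```python
# Decision-theoretic threshold: act when p > c_fp / (c_fp + c_fn),
# where c_fp is the cost of acting needlessly and c_fn the cost of
# failing to act when the event occurs.  Both costs are made up.
c_fp = 1.0     # cost of buckling up when there is no accident
c_fn = 100.0   # cost of not buckling up when there is one
threshold = c_fp / (c_fp + c_fn)   # about 0.0099

p_event = 0.02   # model's predicted probability of the rare event
print(threshold, p_event > threshold)   # act even though p is far below 0.5
```

This is exactly the seat-belt logic: with a large enough cost of missing the event, even a small predicted probability warrants acting.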
null
CC BY-SA 4.0
null
2023-03-13T19:07:33.047
2023-03-13T19:26:44.967
2023-03-13T19:26:44.967
247274
247274
null
609322
1
null
null
1
25
I am replicating the Bhalotra-Figueras paper on women's political agency and health ([paper here](https://www.jstor.org/stable/pdf/43189382.pdf?casa_token=z-9aV91gw78AAAAA:fPeO9c5PuRPNeQyIAroW2gjydUE6Peyne7VrYsFz6_q3oZl0MS1PhSM7jxEtlpGLwJ658OLmK1KVZJ1dN3ZEW-vDva_lwXQ7PTcgg1ToOZdRfjWOeJ8)), an IV model where $x$ is measured at the district level and the instrument $z$ is measured at the smaller constituency level. They use the mean of $z$ at the district level as an IV for their endogenous $x$. They cluster their standard errors at the district level, given that the regressor of interest, a district-level measure of female politicians, is entered as a district-level share.

My question is: given that the instrument (voter turnout and margin of victory) is measured at the smaller constituency level, shouldn't they cluster at that level? When I try to do that with their data, the F-statistic is very low and the instrument fails (they do not report which F-statistic they are using: Cragg-Donald or Kleibergen-Paap). The instrument also fails when I cluster their standard errors at the state level. At this point I should say that while the F-statistic is minuscule, they have some modicum of significance, but the result should not be this fragile for a top published article.

Can anyone help me understand why they clustered their standard errors at the district level, as opposed to the level where the instrument is measured, or even higher at the state level? I have also consulted the Abadie, Athey, Imbens and Wooldridge paper ([here](https://arxiv.org/abs/1710.02926)), but I can't really see why the SE clustering should not be at the level where the IV was measured.
Instrument fails when I cluster standard errors
CC BY-SA 4.0
null
2023-03-13T19:09:01.437
2023-03-13T19:42:28.767
2023-03-13T19:42:28.767
3277
247418
[ "econometrics", "instrumental-variables", "clustered-standard-errors" ]
609323
1
null
null
1
40
I have utilized the ClusterBootstrap package in R to run bootstrap analyses for mixed models. Is this possible in SPSS? Using the BOOTSTRAP command in SPSS combined with

> /RANDOM=INTERCEPT | SUBJECT(id) COVTYPE(VC)

does not work for repeated-measures data unless there is a way to identify the clusters to be sampled. As it stands, the bootstrap just samples rows of the data, and I want it to sample all measurements of the selected subjects (often called a "cluster bootstrap").
Can I run mixed models in SPSS applying bootstrap (using "cluster bootstrap")?
CC BY-SA 4.0
null
2023-03-13T19:28:47.337
2023-03-13T19:28:47.337
null
null
381145
[ "mixed-model", "repeated-measures", "spss", "bootstrap" ]
609324
1
null
null
0
38
Are the Grenander conditions on the explanatory variables, which ensure that OLS gives consistent treatment-effect estimates, applicable to GLM/GLMM settings where you have count or binary data? If you don't know what the Grenander conditions are but know the econometrics term "exogeneity": do you need strict exogeneity in a GLM to have a consistent treatment-effect estimator? If the Grenander conditions are violated in a GLM, or there are endogenous regressors, are the treatment-effect estimators still consistent?
Are the Grenander conditions on the explanatory variable to ensure OLS has consistent treatment effect estimates applicable to GLM/GLMM?
CC BY-SA 4.0
null
2023-03-13T19:53:32.123
2023-03-13T19:58:37.920
2023-03-13T19:58:37.920
null
null
[ "generalized-linear-model", "treatment-effect", "consistency", "explanatory-models", "exogeneity" ]
609325
1
null
null
3
71
In considering ROC AUC, [there is a sense in which $0.5$ is the performance of a random model](https://stats.stackexchange.com/a/590093/247274). Conveniently, this is true no matter the data or the prior probability of class membership; the ROC AUC of a random model is $0.5$ whether the binary classes are balanced or not.

Does PR AUC have a similar notion that is data-independent? (Since precision depends on the prior probability, I would imagine not.) If not, is there a notion that is a function of the prior probability?

EDIT

"ROC" means receiver operating characteristic. "PR" means precision-recall. "AUC" means area under the curve. Thus, "PR AUC" is the area under the curve that plots precision as a function of recall.
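As a quick sanity check of the premise (the class counts below are arbitrary), random scores give an ROC AUC near $0.5$ even under heavy imbalance; here is a pure-Python sketch using the Mann-Whitney form of the AUC:

```python
import random

random.seed(0)
n_pos, n_neg = 100, 9900   # heavily imbalanced classes
pos = [random.random() for _ in range(n_pos)]   # random scores for positives
neg = [random.random() for _ in range(n_neg)]   # random scores for negatives

# ROC AUC as the probability that a random positive outscores a random negative
wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
auc = wins / (n_pos * n_neg)
print(auc)   # close to 0.5 despite the 1:99 imbalance
```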
ROC AUC has $0.5$ as random performance. Does PR AUC have a similar notion?
CC BY-SA 4.0
null
2023-03-13T19:54:40.013
2023-03-13T22:13:55.517
2023-03-13T20:23:56.133
247274
247274
[ "machine-learning", "classification", "supervised-learning", "auc", "precision-recall" ]
609329
2
null
603616
0
null
It looks like your question is no more complicated than adjusting for time and drug as a categorical variable rather than a continuous variable. You might consider an ANOVA contrast for the effect of DRUG. The line and error bar plot is useful because we can see the effects appear quite heterogeneous, i.e. response does not increase linearly with time, nor are the treatment-specific curves parallel to each other.
null
CC BY-SA 4.0
null
2023-03-13T20:24:09.490
2023-03-13T20:24:09.490
null
null
8013
null
609330
2
null
366663
0
null
It depends on what is meant by overfitting. If you take the definition to be that out-of-sample performance is terrible despite good in-sample performance, then such a definition precludes good performance by a model that has overfit. If, however, you take overfitting to mean that a simpler model would have better out-of-sample performance despite having worse in-sample performance, then good performance is still possible despite the overfitting.

For example, consider the following image from the Wikipedia article on overfitting. Red is out-of-sample performance ($\epsilon$ stands for the "error"), while blue is in-sample performance.

[](https://i.stack.imgur.com/TgRb0.png)

Overfitting starts to occur to the right of the dotted line, as the red out-of-sample performance starts to worsen despite the blue in-sample performance getting better. However, just to the right of the dotted line sits a model that has overfit yet still performs rather well, even out-of-sample. That is, models to the right of the dotted line have overfit. However, if "good" performance is defined as anything lower than the bottom of that triangle, then all models to the right of that dotted line have "good" performance, despite the overfitting.
null
CC BY-SA 4.0
null
2023-03-13T20:50:29.753
2023-03-13T20:50:29.753
null
null
247274
null
609331
2
null
599113
5
null
Intuitively, having more data will tell the neural network where to turn, by how much, and in what direction (up/down, left/right, combinations, extensions in high-dimension spaces, etc). Imagine your true function to be a parabola. However, you only have two data points. You have no way to capture the curvature. You cannot figure out if the parabola opens up or down. You cannot figure out how wide the parabola is. When you add a third point, you can start to figure out some of this. However, that assumes you know the shape to be a parabola. If you do not know the function, how can you distinguish that from something like an absolute value function that uses straight lines? By having more data, you provide more opportunities to penalize the network for turning incorrectly, even if the fit on fewer points is perfect. This sounds like resolution, and I suspect that there is a way to tie this notion to the [Nyquist rate](https://en.wikipedia.org/wiki/Nyquist_rate) and [Nyquist–Shannon sampling theorem](https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem) in signal processing (or some generalization to higher-dimension spaces) to bring full mathematical rigor.
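The parabola point can be made concrete with a toy example: two observations are consistent with parabolas that open in opposite directions, so the data alone cannot pin down the curvature.

```python
# Two different parabolas that both pass exactly through the
# points (1, 1) and (3, 5): one opens upward, one downward.
def opens_up(x):
    return x**2 - 2*x + 2

def opens_down(x):
    return -x**2 + 6*x - 4

for f in (opens_up, opens_down):
    print(f(1), f(3))   # both print: 1 5
```

A third point between the two, say at $x = 2$, would immediately separate the two candidates, which is the sense in which more data tells the network where and how to turn.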
null
CC BY-SA 4.0
null
2023-03-13T21:05:29.713
2023-03-13T21:05:29.713
null
null
247274
null
609332
1
null
null
1
14
I'm preparing for an exam and I got stuck on this question.

[](https://i.stack.imgur.com/YJ44N.png)

I understand that the alpha values affect how much influence the corresponding data points have on the position of the decision boundary, and that the alphas are between 0 and C = 4 in this case. Therefore, since point b) is closest to the decision boundary, it makes sense that it should have alpha = 4. But I don't understand what they mean in the given solution when they say that "since the point is within the margin, slack is used here, hence, alpha = C = 4".

Is alpha always equal to C for points within the margin? Or is it only true in this case?

[](https://i.stack.imgur.com/YezZs.png)
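For reference, here is my own sketch (not the exam solution's reasoning) of why the quoted claim might hold, based on the standard soft-margin KKT conditions:

$$ \alpha_i \left( y_i f(x_i) - 1 + \xi_i \right) = 0, \qquad \left( C - \alpha_i \right) \xi_i = 0 . $$

If a point lies strictly inside the margin, then $y_i f(x_i) < 1$, which forces $\xi_i > 0$, and the second condition then gives $\alpha_i = C$. If that reading is correct, the claim would be general for soft-margin SVMs rather than specific to this exercise, but I would appreciate confirmation.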
SVM, Is the slack value always equal to the alpha value for points within the margin?
CC BY-SA 4.0
null
2023-03-13T21:28:02.560
2023-03-14T10:50:57.317
2023-03-14T10:50:57.317
383138
383138
[ "machine-learning", "svm" ]
609333
1
609385
null
1
57
I have data of the following form: |Rating |1 |2 |3 | |------|-|-|-| |control |0 |20 |11 | |treatment |6 |14 |12 | Where 1 is a plant of top quality, 2 is a plant of lesser quality that USED TO BE top quality, and 3 is a plant of poor quality that USED TO BE of type 2 quality. The values listed are the counts of each type of plant from an untreated control group and with a treated group. I had been using a simple chi-square test to determine whether these treatments had any effect, but I've come to learn that that test assumes no order to the categories, whereas my categories do have an order. Can someone please help me to understand how to determine: A) whether the control and treatment are statistically significantly different from one another while taking into account the ordering of the categories. B) How to determine an effect size from these results to use in statistical power calculations to determine sample size requirements for future studies. For example, ordinal regression has been suggested, but it's not clear to me how such a calculation would be performed in this case (What is the dependent variable? How does one determine the significance parameter? How is effect size determined?) Another suggestion has been the Kruskal-Wallis test, but I'm not clear how the order of the values is represented in that test. Thanks for any advice that you can provide.
Statistical significance test for ordered data and related effect size
CC BY-SA 4.0
null
2023-03-13T21:32:35.133
2023-03-21T17:27:28.797
2023-03-13T22:14:30.260
164936
367293
[ "hypothesis-testing", "statistical-significance", "ordinal-data", "ordered-logit" ]
609334
2
null
599916
1
null
This is what various distribution plots are for. Standard plots include histograms and boxplots. I like kernel density estimation (KDE) plots, too. Briefly, KDE can be thought of as a continuous histogram. Let's simulate some data and look at the distributions. ``` library(ggplot2) set.seed(2023) N <- 100 x <- sample(c("Cat", "Dog", "Elephant"), N, replace = T) y <- rnorm(N) d <- data.frame( Species = x, Weight = y ) ggplot(d, aes(y = Weight, fill = Species)) + geom_boxplot(alpha = 1.0) ``` [](https://i.stack.imgur.com/JzRYs.png) The plots show the groups not to be so different, consistent with the simulation setup that creates the `weight` without regard for the `species`. If you increase the sample size to `1000`, the differences become even smaller. If you see something like this, then you have some evidence that the distributions are not so different for the three species. Even better, however, is to visualize the entire distribution, since boxplots ignore a lot of information (such as multi-modality). The common way to do this is with a histogram, though I would prefer to look at the empirical CDFs or KDE plots. ``` library(ggplot2) set.seed(2023) N <- 1000 x <- sample(c("Cat", "Dog", "Elephant"), N, replace = T) y <- rnorm(N) d <- data.frame( Species = x, Weight = y ) ggplot(d, aes(x = Weight, fill = Species)) + geom_density(alpha = 0.3) d0 <- data.frame( Weight = y[x == "Cat"], Quantile = ecdf(y[x == "Cat"])(y[x == "Cat"]), Species = "Cat" ) d1 <- data.frame( Weight = y[x == "Dog"], Quantile = ecdf(y[x == "Dog"])(y[x == "Dog"]), Species = "Dog" ) d2 <- data.frame( Weight = y[x == "Elephant"], Quantile = ecdf(y[x == "Elephant"])(y[x == "Elephant"]), Species = "Elephant" ) d <- rbind(d0, d1, d2) ggplot(d, aes(x = Weight, y = Quantile, col = Species)) + geom_line(size = 1.5) ``` The KDEs for the three species are all about on top of each other. [](https://i.stack.imgur.com/gIJNY.png) The empirical CDFs for the three species are all about on top of each other. 
[](https://i.stack.imgur.com/0tig2.png) Next, let's look at linear differences. To me, this would mean that the distributions are the same except for a linear shift, such as $N(0, 1)$, $N(1, 1)$, and $N(4, 1)$. ``` library(ggplot2) set.seed(2023) N <- 1000 x <- c( rep("Cat", N), rep("Dog", N), rep("Elephant", N) ) y <- c( rnorm(N, 0, 1), rnorm(N, 1, 1), rnorm(N, 4, 1) ) d <- data.frame( Species = x, Weight = y ) ggplot(d, aes(y = Weight, fill = Species)) + geom_boxplot() ggplot(d, aes(x = Weight, fill = Species)) + geom_density(alpha = 0.3) d0 <- data.frame( Weight = y[x == "Cat"], Quantile = ecdf(y[x == "Cat"])(y[x == "Cat"]), Species = "Cat" ) d1 <- data.frame( Weight = y[x == "Dog"], Quantile = ecdf(y[x == "Dog"])(y[x == "Dog"]), Species = "Dog" ) d2 <- data.frame( Weight = y[x == "Elephant"], Quantile = ecdf(y[x == "Elephant"])(y[x == "Elephant"]), Species = "Elephant" ) d <- rbind(d0, d1, d2) ggplot(d, aes(x = Weight, y = Quantile, col = Species)) + geom_line(size = 1.5) ``` [](https://i.stack.imgur.com/6nCuH.png) [](https://i.stack.imgur.com/XFxh0.png) [](https://i.stack.imgur.com/FFJS5.png) All three of these visualizations suggest that the difference in the distribution for each of the three animals is just in shifting up or down. Finally, let's consider nonlinear differences. This one is a bit iffy, because the conditional mean is either shifted up, shifted down, or not changed, so the relationship between a categorical feature and the conditional mean is necessarily linear. However, the entire distribution does not have to shift the same way, and this can suggest alternative modeling, such as generalized linear models. 
``` library(ggplot2) set.seed(2023) N <- 1000 x <- c( rep("Cat", N), rep("Dog", N), rep("Elephant", N) ) y <- c( rexp(N, 5) - 1/5, rt(N, 5), rchisq(N, 1) - 1 ) d <- data.frame( Species = x, Weight = y ) ggplot(d, aes(y = Weight, fill = Species)) + geom_boxplot() ggplot(d, aes(x = Weight, fill = Species)) + geom_density(alpha = 0.3) d0 <- data.frame( Weight = y[x == "Cat"], Quantile = ecdf(y[x == "Cat"])(y[x == "Cat"]), Species = "Cat" ) d1 <- data.frame( Weight = y[x == "Dog"], Quantile = ecdf(y[x == "Dog"])(y[x == "Dog"]), Species = "Dog" ) d2 <- data.frame( Weight = y[x == "Elephant"], Quantile = ecdf(y[x == "Elephant"])(y[x == "Elephant"]), Species = "Elephant" ) d <- rbind(d0, d1, d2) ggplot(d, aes(x = Weight, y = Quantile, col = Species)) + geom_line(size = 1.5) ``` [](https://i.stack.imgur.com/QEUC0.png) [](https://i.stack.imgur.com/eQFaK.png) [](https://i.stack.imgur.com/Vxfx9.png) These three plots show that there is much more to the difference between the three distributions than just sliding up and down the real line. They have different skewnesses. They have different variances. (They actually have equal means.) Depending on what you are modeling, you might be quite interested in these differences. If you want to quantify the strength of a relationship between a categorical variable and a continuous outcome, an analogous statistic to Pearson correlation between a continuous feature and a categorical outcome would be to use regression and take the square root of the $R^2$. If you do this for a continuous feature, you get the (magnitude of) the Pearson correlation, so this seems like a reasonable generalization. I will demonstrate below. ``` set.seed(2023) x <- c( rep("Cat", N), rep("Dog", N), rep("Elephant", N) ) y <- c( rnorm(N, 1, 1), rnorm(N, 2, 1), rnorm(N, 3, 1) ) L <- lm(y ~ x) sqrt(summary(L)$r.squared) ``` I get a fairly strong "correlation" of $0.6278021$. 
Since the feature is categorical, there is not really a notion of direction, so the sign does not matter, though I would go with the positive square root out of convenience. If you need to convey the strength of this relationship, you can use the boxplots, KDEs, and CDFs above (maybe histograms, too), but another option, which I confess I have not used (but I do like the idea), is to simulate some continuous data with that calculated "correlation". For instance, the "correlation" between the outcome $y$ and the categorical feature is $0.6278021$. Simulate some data with that correlation and graph that bivariate data, such as below. ``` library(MASS) set.seed(2023) X <- MASS::mvrnorm(N, c(0, 0), matrix(c( 1, sqrt(summary(L)$r.squared), sqrt(summary(L)$r.squared), 1 ), 2, 2)) d <- data.frame( x = X[, 1], y = X[, 2] ) ggplot(d, aes(x = x, y = y)) + geom_point() ``` [](https://i.stack.imgur.com/a9fwd.png) To me, this demonstrates the fairly strong relationship between the categorical feature and continuous outcome, and it does it in a way that should be familiar and comfortable to stakeholders.
null
CC BY-SA 4.0
null
2023-03-13T22:05:48.800
2023-03-13T22:18:58.817
2023-03-13T22:18:58.817
247274
247274
null
609335
2
null
609325
2
null
Firstly, you will want to have a look at [precision-recall-gain curves](http://people.cs.bris.ac.uk/%7Eflach/PRGcurves/), which enable comparison of classifier performance across datasets with different base rates. It's basically just a clever (and theoretically justified) nonlinear rescaling such that PRG curves cover [0,1]x[0,1] and are baseline-independent. Snippet from Flach and Kull, NeurIPS 2015, see link below. Left: normal PR curve. Right: rescaled PRG curve. [](https://i.stack.imgur.com/XuIft.png) Secondly, in PR(G) space, the baseline to beat is the always-positive classifier, not the random classifier. That model has precision=baseline and recall=1. Thirdly, in [their NeurIPS paper on PRG curves](http://people.cs.bris.ac.uk/%7Eflach/PRGcurves/PRcurves.pdf), Flach and Kull show that classifiers with the same $F_1$ score as the always-positive classifier lie on the (0,1)--(1,0) diagonal in PRG space. Any model operating point below that line thus has worse $F_1$ score than the always-positive classifier. The only aspect that is still unclear to me (and they also don't write this explicitly in the paper) is whether that also implies a meaningful baseline of AUPRG=0.5 to beat, i.e., whether that baseline is also meaningful for the area under the curve, and not just pointwise. It seems to me that you could have AUPRG < 0.5, but as long as you have an operating point above the diagonal, the model could still be very useful if employed in that operating point? I think the AUROC=0.5 baseline relies crucially on the fact that any point below the diagonal can be "mirrored" by taking 1-the score instead of the actual score. It's not clear to me whether something like that can also be done in PR(G) space. Lastly, if you really want to see what the random classifier is doing in PR(G) space, Flach and Kull also have an expression for its $F_1$ score as a function of the baseline. 
You may also be interested in [this earlier paper on the relationship between PR and ROC curves](https://dl.acm.org/doi/10.1145/1143844.1143874).
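As a small numerical footnote to the second point (my own sketch in Python, not code from the papers): the always-positive classifier has precision equal to the base rate $\pi$ and recall equal to 1, so its $F_1$ score is $2\pi/(\pi+1)$, which is the pointwise baseline any useful operating point should beat.

```python
import numpy as np

# Illustrative labels with base rate pi around 0.2 (made-up data)
rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.2).astype(int)
y_pred = np.ones_like(y_true)  # the always-positive classifier

tp = int(((y_pred == 1) & (y_true == 1)).sum())
fp = int(((y_pred == 1) & (y_true == 0)).sum())
fn = int(((y_pred == 0) & (y_true == 1)).sum())

precision = tp / (tp + fp)  # equals the empirical base rate
recall = tp / (tp + fn)     # equals 1 by construction
f1 = 2 * precision * recall / (precision + recall)

pi = y_true.mean()
assert recall == 1.0
assert abs(precision - pi) < 1e-12
assert abs(f1 - 2 * pi / (pi + 1)) < 1e-9  # F1 of always-positive = 2*pi/(pi+1)
```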
null
CC BY-SA 4.0
null
2023-03-13T22:13:55.517
2023-03-13T22:13:55.517
null
null
131402
null
609336
2
null
595799
0
null
I see two approaches. - Approach the problem as some kind of panel data, where you determine the category in each month. - If you only care whether someone belongs to group B at some point in the year, you might be interested in treating this as a multi-label problem. Briefly, a multi-label problem models the probability of membership in each category, allowing for a high probability of being in both categories. This sounds like your situation, especially if you only have data on the groups to which someone belonged during a given year, rather than knowing they were in group A at the beginning of the year before moving to group B.
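To illustrate the second approach, here is a minimal multi-label sketch in Python with scikit-learn (the data and variable names are made up for illustration): each group gets its own membership probability, so the same person can plausibly be in both A and B.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Made-up data: two binary targets, one per group, so both can be 1 at once
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
in_group_a = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
in_group_b = (X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
Y = np.column_stack([in_group_a, in_group_b])

# One logistic model per label: membership probabilities are not forced
# to sum to 1, unlike a single multi-class (softmax) model
clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
probs = clf.predict_proba(X)  # list with one (n_samples, 2) array per label
```

The key contrast with a multi-class model is that the two predicted probabilities are independent of each other, which matches the "someone can be in group A and group B within the same year" situation.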
null
CC BY-SA 4.0
null
2023-03-13T22:30:08.967
2023-03-13T22:30:08.967
null
null
247274
null
609337
1
null
null
1
49
I have a small survey dataset with 10 variables (Gender, International (Yes or No), and some Likert scale data) and all of them have categorical data. Having performed descriptive analysis, I was curious about what can I do to make good inferences. I have worked with a regression with numerical variables but never with so many categorical variables. I was looking into dummy coding but I found articles that said too many dummy variables may give bad results. What are your options here? My hypotheses are: H0: There will be no significant difference in English menu preference between the international and national groups H1: The international group will have a preference for an English menu compared to the national group. The variables I have are as follows: - Language (English, German, Other) - International (Yes and No) - Difficulty with the current menu (Likert scale data) - Preference for English Menu (Likert scale data) - Gender - Age - Number of years lived in Germany (If the number is more, he or she might be okay with the current menu) - and three other variables. I want to see how all these variables can affect Preference for English Menu.
Analysis of Categorical data
CC BY-SA 4.0
null
2023-03-13T22:38:15.660
2023-03-14T09:30:30.573
2023-03-14T09:30:30.573
56940
383124
[ "regression", "multiple-regression", "categorical-data", "inference" ]
609338
2
null
603496
18
null
A table adds little, but a picture can add a lot more to our understanding. I offer two pictures. --- Unlike the Box-Cox transformation, which applies to positive numbers, the Yeo-Johnson transformation applies to all numbers. It does so by splitting the real line at zero, shifting the positive values by $1$ and the negative values by $-1,$ and applying a Box-Cox transformation to the absolute values, negating them when the argument is negative. In effect, it sews two Box-Cox transformations together. However, they have "inverse" Box-Cox parameters. The natural origin of the Box-Cox parameters is $\lambda = 1$ and the "inverse" parameter is $$\lambda^\prime = 2 - \lambda,$$ reflecting the parameter line around $\lambda = 1.$ The sewing is smooth (as you will see in the first plot below) because all Box-Cox transformations are by design made to agree with the identity transformation at $x = 1.$ For pictures of the Box-Cox transformations and some explanation of their construction, see [https://stats.stackexchange.com/a/467525/919](https://stats.stackexchange.com/a/467525/919). These transformations are given by $$\operatorname{BC}(x;\lambda) = \frac{x^\lambda - 1}{\lambda}$$ (which has the limiting value of $\log(x)$ when $\lambda = 0$). They can be inverted: when $y$ is the transformed value, the original $x$ is recovered by $$\operatorname{BC}^{-1}(y;\lambda) = (1 + \lambda y)^{1/\lambda}$$ (limiting to the exponential function when $\lambda = 0$). The Yeo-Johnson transformation is $$\operatorname{YJ}(x;\lambda) = \left\{\begin{aligned}\operatorname{BC}(1+x,\lambda), && x \ge 0\\ -\operatorname{BC}(1-x, \lambda^\prime),&& x \lt 0.\end{aligned} \right.$$ These can all be inverted by inverting the positive and negative values separately. The implementation in any programming language is thereby simple. 
In `R`, for instance, it is ``` BC <- function(x, lambda) ifelse(lambda != 0, (x^lambda - 1) / lambda, log(x)) YJ <- function(y, lambda) ifelse(y >= 0, BC(y + 1, lambda), -BC(1 - y, 2-lambda)) ``` The graphs of $\operatorname{YJ}$ show the effects on the data for various $\lambda,$ [](https://i.stack.imgur.com/sGW0c.png) Here's what they do to a reference (Normal) distribution (the green distribution for $\lambda = 1$ in the middle panel): [](https://i.stack.imgur.com/gZQ4F.png) Like the Box-Cox family, these transformations make a distribution more positively skewed when $\lambda \gt 1$ and more negatively skewed when $\lambda \lt 1.$
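For readers working in Python rather than R, here is a direct port of the two functions above plus the inverses described in the text (my own sketch, not part of the original answer); a round-trip check confirms the formulas.

```python
import numpy as np

def bc(x, lam):
    # Box-Cox: (x^lam - 1)/lam, with the log(x) limit at lam = 0
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1) / lam

def bc_inv(y, lam):
    # Inverse Box-Cox: (1 + lam*y)^(1/lam), with the exp(y) limit at lam = 0
    y = np.asarray(y, dtype=float)
    return np.exp(y) if lam == 0 else (1 + lam * y) ** (1 / lam)

def yj(x, lam):
    # Yeo-Johnson: Box-Cox of 1+x for x >= 0, reflected Box-Cox with the
    # "inverse" parameter 2 - lam for x < 0
    x = np.asarray(x, dtype=float)
    with np.errstate(invalid="ignore", divide="ignore"):  # unused branch may warn
        return np.where(x >= 0, bc(x + 1, lam), -bc(1 - x, 2 - lam))

def yj_inv(y, lam):
    y = np.asarray(y, dtype=float)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(y >= 0, bc_inv(y, lam) - 1, 1 - bc_inv(-y, 2 - lam))

# Round trip: inverting the transformation recovers the data
x = np.linspace(-3, 3, 101)
for lam in (-1, 0, 0.5, 1, 2):
    assert np.allclose(yj_inv(yj(x, lam), lam), x)
```

Note `lam = 1` leaves the data unchanged, matching the identity transformation at the natural origin of the parameter line.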
null
CC BY-SA 4.0
null
2023-03-13T23:15:44.173
2023-03-13T23:22:31.677
2023-03-13T23:22:31.677
919
919
null
609339
1
null
null
0
22
I just ran a Koyck test on my data, and I got this as an output - I'd like to understand why Y.1 and X.t are significant and what they being significant mean. I also don't know what other information I could get from the p-value, the diagnostic tests and the geometric coefficients. ``` > koyck1 <- koyckDlm(brent$dT10Y2YM, brent$dFEDFUND) > summary(koyck1, diagnostic=T) Call: "Y ~ (Intercept) + Y.1 + X.t" Residuals: Min 1Q Median 3Q Max -0.818363 -0.064612 0.004479 0.062796 0.420666 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.005184 0.006809 -0.761 0.4469 Y.1 0.543633 0.052221 10.410 <2e-16 *** X.t -0.491048 0.205231 -2.393 0.0172 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.1342 on 392 degrees of freedom Multiple R-Squared: 0.4858, Adjusted R-squared: 0.4832 Wald test: 155.2 on 2 and 392 DF, p-value: < 2.2e-16 Diagnostic tests: df1 df2 statistic p-value Weak instruments 1 392 23.4357538 1.861385e-06 Wu-Hausman 1 391 0.1898209 6.633062e-01 alpha beta phi Geometric coefficients: -0.01136002 -0.4910481 0.5436332 ```
How can I read the output of a Koyck test in R?
CC BY-SA 4.0
null
2023-03-13T20:56:10.747
2023-03-14T03:05:03.440
2023-03-14T03:05:03.440
383155
383155
[ "r", "regression", "hypothesis-testing", "statistical-significance", "economics" ]
609341
1
null
null
1
79
I have the follow variable $$y_t = e_t + u_t + \theta u_{t-1}$$ Here $u_t$ and $e_t$ are mutually independent i.i.d and $u_t \sim N(0, \sigma_u^2)$ and $e_t \sim N(0, \sigma_e^2)$. I am trying to show that I can re-write $y_t$ as an MA(1) process, i.e. : $y_t = \lambda_t + \phi\lambda_{t-1}$ where $\{\lambda_t\}$ are iid with mean zero and normal with variance $\sigma_{\lambda}^2$. I thought we could calculate the autocovariance function to prove that but I wasn't successful. Also, I'd like to find some expressions for $ \phi$ and $\sigma_{\lambda}^2$.
Prove that a random variable follows MA(1) process
CC BY-SA 4.0
null
2023-03-13T23:40:03.087
2023-04-29T23:15:05.200
2023-03-14T00:24:18.137
362671
378721
[ "time-series", "self-study", "random-variable", "covariance", "moving-average" ]
609342
1
609356
null
0
72
Right censoring was not a major issue in my prior research on education because I was modeling outcomes like grades and graduation rates where all participant results occur on the same day. But now I am seeing loads of hypothesis tests where data arrive sequentially, such as when patients enroll in a medical study or online customers accumulate in an industry study. In both cases right censoring of the outcomes is a given: some patient and customer outcome events happen after data collection has ended. We have to truncate the outcomes in order to conduct our analysis, and I have found little formal guidance on how to handle these truncation issues: - If we stop data collection on a fixed date, then the earlier enrollees have had a much longer window in which to complete their event (e.g., return for lab work or check out their shopping cart). - If we apply a standard baking period for the metric (e.g., 7-day return rate for lab work or 7-day checkout rate), what are the implications for truncating events that occur after day 7? What factors can introduce bias when modeling these kinds of truncated outcomes?
How to account for right-censored data in a hypothesis test
CC BY-SA 4.0
null
2023-03-13T23:55:54.410
2023-03-14T22:09:10.290
2023-03-14T22:09:10.290
137431
137431
[ "hypothesis-testing", "censoring" ]
609343
2
null
608379
0
null
The universal approximation theorem says that with a linear output (and other assumptions) you can approximate any function; however, if your target is bounded, you can certainly use a bounded output activation. That said, ReLU would not be a good choice for your example — or for pretty much any output layer — since it has a zero gradient for inputs less than zero. This means that if the network initially predicts something less than zero, ReLU clips the output to 0, but the gradient of the loss through that unit is also zero, so the network cannot correct its output. Instead you could use $ELU(x) + 1$, which does not have a zero gradient (though it can saturate, which is another small problem). Also, keep in mind that some output activations are designed with an output distribution in mind (sigmoid for Bernoulli, softmax for categorical, linear for Gaussian/Laplacian, and so on), and since you assume a distribution, you can optimize it via the maximum likelihood principle, which is where MSE, MAE, BCE, CCE, and so on come from (for example, a sigmoid output layer trained with MSE has the problem of saturating at the extremes).
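A quick numerical check of the gradient claim (my own sketch in Python, not part of the original answer): for a negative input, ReLU's output and gradient are both zero, while $ELU(x) + 1$ stays positive and keeps a nonzero gradient.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def elu_plus_one(x, alpha=1.0):
    # ELU(x) + 1: (x + 1) for x >= 0, alpha*(exp(x) - 1) + 1 for x < 0
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x + 1.0, alpha * (np.exp(x) - 1.0) + 1.0)

def num_grad(f, x, h=1e-6):
    # central-difference numerical gradient
    return (f(x + h) - f(x - h)) / (2 * h)

x = -2.0                                       # a negative pre-activation
assert relu(x) == 0.0                          # ReLU clips it to zero...
assert num_grad(relu, x) == 0.0                # ...and its gradient is dead
assert float(elu_plus_one(x)) > 0.0            # ELU(x)+1 is still positive...
assert float(num_grad(elu_plus_one, x)) > 0.0  # ...with a live gradient
```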
null
CC BY-SA 4.0
null
2023-03-14T00:09:41.527
2023-03-14T00:09:41.527
null
null
346940
null
609344
1
null
null
3
127
Suppose I take two samples, $a$ and $b$, from a population (or super population) but that I only observe $b$. However, I want to construct a confidence interval for $\bar{a}$, the average of the sample that I don't observe. Now, a confidence interval that contains the population mean 95 percent of the time cannot also contain the sample mean, $\bar{a}$, 95 percent of the time unless the sample mean is always equal to the population mean, which it generally is not. As best as I can tell, a confidence interval that covers $\bar{a}$ can be constructed as follows, $$ \bar{b} \pm z_{0.975} \cdot \sqrt{2 \cdot \frac{S_B}{n_B}} $$ where $S_B$ is the sample variance of $b$, given sufficiently large samples. I want to prove that this in fact is the case. As I've likely made some mistakes in notation above, here's a simple simulation to illustrate that one does, in fact, need to double the variance. In case I wasn't clear enough above, I want a confidence interval that covers $\bar{a}$, not $E(\bar{A})$, 95 percent of the time. ``` library(dplyr) # for %>% and bind_rows() set.seed(206) n <- 1000 # total observations; each sample below has n/2 = 500 func <- function(){ A <- rnorm(500) B <- rnorm(500) ub_hi <- mean(B) + 1.96 * sqrt( var(B) / (n/2) ) ub_lo <- mean(B) - 1.96 * sqrt( var(B) / (n/2) ) unbiased <- ifelse(mean(A) <= ub_hi & mean(A) >= ub_lo, 1, 0) names(unbiased) <- "Single_Variance" b_hi <- mean(B) + 1.96 * sqrt( 2 * var(B) / (n/2) ) b_lo <- mean(B) - 1.96 * sqrt( 2 * var(B) / (n/2) ) biased <- ifelse(mean(A) <= b_hi & mean(A) >= b_lo, 1, 0) names(biased) <- "Double_Variance" return(list(unbiased, biased)) } y <- lapply(1:1000, function(i) {unlist(func())}) %>% bind_rows() %>% rowwise() mean(y$Single_Variance) mean(y$Double_Variance) ``` My results are ~85 percent coverage for the single variance approach and ~95 percent coverage for the double variance approach. Note Thanks to the comments below, I have rewritten this question. 
The first version of this question included my argument that there is no unbiased estimator for $\bar{a}$ such that a bias term appears that is equal in expectation to zero but that introduces additional variance in the estimator. I've removed this to focus more clearly on the question: how do you derive/prove the confidence interval for $\bar{a}$ that involves doubling the variance of $\bar{b}$? Thanks to everyone for helping me to formulate this more clearly.
Confidence interval for (unobserved) sample mean
CC BY-SA 4.0
null
2023-03-14T00:27:36.387
2023-03-15T15:08:47.313
2023-03-15T15:07:12.343
266571
266571
[ "estimation", "prediction-interval" ]
609345
1
609942
null
1
45
If your task is to predict $t_{n+1}$ given tokens $(t_1,...,t_n)$, you could do two things: - Straight NN - feed $t=(t_1,...,t_n)$ into a neural network as an n-dimensional input and train it on predicting $t_{n+1}$ (with all the embedding / layernorm / skip connection stuff that transformer models have) - Attention - take $t_n$ plus information from $t_1,...,t_{n-1}$ relevant to $t_n$ computed in the attention layer (plus positional encoding) and feed that into a neural network to predict $t_{n+1}$ In one case you're putting everything into the NN, equally weighted. In another, you're taking certain information from each token weighted by relevance to $t_n$. My question is: why can't the straight NN learn this? What was wrong with the straight NN model such that we needed to introduce attention? If it was helpful to upweight certain things by a relevance metric, why wouldn't the NN learn that? My only thought so far is long-range dependency: that a NN might forget information far back in the sequence (as in general it will be less useful). For example, 'he walked into the kitchen, took the chocolate from the drawer, opened wide and put it into' – to predict the next word you really need to look all the way back to the start of the sentence.
Intuitive difference between NN and attention for text prediction
CC BY-SA 4.0
null
2023-03-14T01:00:18.310
2023-03-19T08:12:30.167
2023-03-19T05:01:41.703
11887
383143
[ "neural-networks", "natural-language", "language-models", "attention" ]
609346
2
null
210150
1
null
Beyond the excellent answer given by hplieninger here, many times these issues can be surmised long before you do any statistical testing of the ideas. I have met some people who have blown past these assumptions and just plugged their data into alpha functions with whatever software they are using, and yet speaking to them you can gather that reliability for their measures would be inaccurate. So while hp's answer deals with the quant side, mine deals with the qual side of these assumptions: - Assumption of unidimensionality. This one can be quite obviously off without any stats testing involved. I've seen a number of people use some survey on something like anxiety, but then they explain that it tests sub-factors like test anxiety, foreign language anxiety, etc. It's highly likely that this sort of measure would become multi-dimensional and would require reworking. So if you see something like this, the test should be redeveloped, or, as hp mentioned, some kind of omega coefficient can be used to assess what latent variable structure is in place that supports their measure. - Error terms are uncorrelated. This one is probably the least obvious of them all, but this paper does a good job of explaining the underlying causes. Some of the reasons are related to unidimensionality concerns, but there can also be some contextual effects that influence this. For example, you may have a number of items that measure the same thing but cluster together due to the way the questions are framed. As an example, if a test booklet has "item bundles", you might, to a degree, expect issues with correlated errors among those items. - Tau equivalence. Let's say you have a test of reading ability. It's a simple composite of dichotomously scored correct/incorrect answers. The first ten items are super easy, then the next 50 items are super hard. 
Even though these are all reading items, it's very likely that tau equivalence will not hold up in this case, as the way these 60 items correlate will differ based on the difficulty of the items. So these illustrate that even some heuristic pre-checks can assist in avoiding a lot of these issues in the first place.
null
CC BY-SA 4.0
null
2023-03-14T01:01:57.080
2023-03-14T01:01:57.080
null
null
345611
null
609347
1
null
null
0
18
I am looking for recommendations for a sample size calculator for a multivariant (i.e. 5 variant) test. I was previously using t-tests to evaluate, but that seems to gravely overestimate sample size. Any other recommendations?
Best Power Analysis sample size calculator for multivariant AB Test
CC BY-SA 4.0
null
2023-03-14T01:12:28.843
2023-03-14T01:12:28.843
null
null
383145
[ "r", "hypothesis-testing", "statistical-power", "ab-test" ]
609348
1
null
null
1
15
Rookie question - is it possible to compare a single year's total number of cases of a disease in a specific region compared with previous years. I have a data set showing total yearly cases of a certain disease over the past 10 years. I want to know whether the increase in 2022 is statistically significant compared to previous years. Thanks for any assistance
How can I assess the statistical significance of one year's total cases of a disease against previous years?
CC BY-SA 4.0
null
2023-03-14T01:20:20.420
2023-03-14T02:34:10.620
null
null
383149
[ "statistical-significance", "disease" ]
609349
2
null
609337
0
null
For your Likert-scale questions, if you have psychometric items, you can try driver analysis. But again, before running this analysis you need to establish an objective/hypothesis. Key driver analysis, or relative importance analysis, quantifies the importance of a series of predictor variables in predicting an outcome variable. It's very common in market research.
null
CC BY-SA 4.0
null
2023-03-14T01:28:12.200
2023-03-14T01:28:12.200
null
null
254526
null
609350
1
null
null
1
22
I have a problem deciding whether my data has dependent/paired or independent samples. I have read multiple online guides now, but I can't figure it out: I am testing about 10 different surface materials, which are placed outside for about half a year. In cold weather those materials begin to ice over. This icing behaviour is monitored by taking a picture every 30min. Using automatic image recognition the icing intensity on the samples was categorized from 0-4 at each one of those moments. So my data looks like this: $$\begin{array}{c|c|c|} & \text{Material 1} & \text{Material 2} & \text{Material 3} \\ \hline \text{TimeStamp 1} & 0 & 1 & 0 \\ \hline \text{TimeStamp 2} & 1 & 1 & 1 \\ \hline \text{TimeStamp 3} & 3 & 2 & 3 \\ \hline \text{TimeStamp 4} & 2 & 2 & 4 \\ \hline \text{TimeStamp 5} & 2 & 3 & 3 \\ \hline \text{TimeStamp 6} & 3 & 2 & 1 \\ \hline \end{array}$$ My research question is whether any of those materials shows significantly less icing. I can't figure out, though, if I have independent samples (requiring a Kruskal-Wallis test) or paired samples (requiring a Friedman test)?
Trying to decide whether the sample is dependent or independent
CC BY-SA 4.0
null
2023-03-14T01:28:32.140
2023-03-14T03:20:31.067
2023-03-14T03:20:31.067
362671
383144
[ "time-series", "hypothesis-testing", "independence", "non-independent", "engineering-statistics" ]
609351
2
null
467494
0
null
The answers provided here are already useful. I just wanted to drop a couple more resources for learning about Box-Cox. There is actually an [excellent episode](https://quantitudepod.org/s4e17-box-cox/) from Quantitude dedicated to this very subject, which explains both the history and intuition behind this transformation (including the amusing history that the creators literally did this for the sole reason that their [authorship on the paper would rhyme](https://onlinestatbook.com/2/transformations/box-cox.html)). I also would like to recommend [this paper](https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1238&context=pare) as a less math heavy article on the subject that shows examples of the transformation if you would like to see the practical side of the transformation in action. One of the useful things they do in this paper is show how different anchoring points affect the transformation, with an example shown below: [](https://i.stack.imgur.com/dlMGj.png)
null
CC BY-SA 4.0
null
2023-03-14T02:00:29.240
2023-03-14T02:00:29.240
null
null
345611
null
609352
1
null
null
1
90
## Background My lecturer in statistical computing asked the question in the title. To be specific, the population distribution is $$ f(x_1, \cdots, x_p) = \left(x_1^{p-1} + \cdots + x_p^{p-1}\right)I(0<x_i<1,\forall 1\le i\le p) $$ The approach for drawing a sample is the conditional-distribution (sequential) method: $$ f(x_1, \cdots, x_p) = f_1(x_1) f_2(x_2 | x_1) f_3(x_3|x_2, x_1) \cdots f_p(x_p | x_{p-1}, ..., x_1) $$ To derive each factor above we need the marginal densities, which work out to $$ f(x_1, \cdots, x_j) = \left(x_1^{p-1} + \cdots + x_j^{p-1}\right) + \frac{p-j}{p} $$ which implies, for every $j$, $$ f_j(x_j|x_{j-1},...,x_1) = \frac{f(x_j,..., x_1)}{f(x_{j-1}, .., x_1)} $$ Let's just skip the tedious calculations and give the sampling procedure directly. - generate $U_1, R_1 \sim U(0,1)$ independently, let $x_1 = U_1^{\frac1p}$ if $R_1 \le \frac1p$ else $x_1 = U_1$ - given $x_1$, generate $U_2, R_2 \sim U(0,1)$ independently, let $x_2 = U_2^{\frac1p}$ if $R_2 \le \frac1{px_1^{p-1}+p-1}$ else $x_2 = U_2$, - given $(x_1, ..., x_{j-1})$, generate $U_j, R_j \sim U(0,1)$ independently, let $x_j = U_j^{\frac1p}$ if $R_j \le \frac1{p\sum_{i=1}^{j-1}x_{i}^{p-1}+p-j+1}$ else $x_j= U_j$, for $3\le j \le p$. 
## Simulation Let $p=5$,
```
## R code
p <- 5
n <- 1e4
set.seed(1)
## generate a vector ~ F with dim p
generateVector = function(p) {
  vec = c()
  for (i in 1:p) {
    point = 1 / (sum(p * (vec ^ (p - 1))) + p - i + 1)
    threshold = runif(1)
    if (threshold < point) {
      vec = c(vec, (runif(1) ^ (1 / p)))
    } else {
      vec = c(vec, (runif(1)))
    }
  }
  return(vec)
}
dta <- data.frame()
for (i in 1:n) {
  dta <- rbind(dta, generateVector(p))
}
colnames(dta) <- paste0('x', 1:p)
head(dta)
```
```
       x1        x2        x3        x4        x5
0.3721239 0.9082078 0.8983897 0.6607978 0.0617863
0.1765568 0.3841037 0.4976992 0.9919061 0.7774452
0.2121425 0.1255551 0.8266908 0.8250891 0.3403490
0.5995658 0.1862176 0.6684667 0.1079436 0.4112744
0.6470602 0.5530363 0.7893562 0.8624731 0.6927316
0.8612095 0.2447973 0.6302822 0.5186343 0.4068302
```
My question is how to verify that the data `dta` are indeed a sample from $f$ by visualization when $p=5$, or whether there is any hypothesis test that can help. If $p=1$, we can do that by plotting a histogram, adding the density function curve to it, and applying a $\chi^2$ test or a Kolmogorov–Smirnov test.
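For what it's worth, the $p=1$-style check can at least be applied to each one-dimensional marginal: by the marginal formula above, each single coordinate has density $f_1(x) = x^{p-1} + \frac{p-1}{p}$ on $(0,1)$ with CDF $F_1(x) = \frac{x^p}{p} + \frac{(p-1)x}{p}$, so each column of `dta` can be compared against it by histogram plus density curve and a one-sample Kolmogorov–Smirnov test. As a self-contained sketch, the first coordinate is regenerated below using step 1 of the procedure:

```r
p <- 5; n <- 1e4
set.seed(1)
f1 <- function(x) x^(p - 1) + (p - 1) / p    # marginal density of each coordinate
F1 <- function(x) x^p / p + (p - 1) * x / p  # its CDF
# step 1 of the sampling procedure, vectorized:
x1 <- ifelse(runif(n) <= 1 / p, runif(n)^(1 / p), runif(n))
hist(x1, freq = FALSE, breaks = 30)
curve(f1, from = 0, to = 1, add = TRUE, col = "red")
ks.test(x1, F1)                              # one-sample Kolmogorov-Smirnov test
```

Note this only checks the one-dimensional marginals, not the joint distribution itself.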
How can I visualize a dataset with $n$ samples and $p$ variables to check whether it is from a specific, known distribution?
CC BY-SA 4.0
null
2023-03-14T02:14:01.887
2023-03-17T23:18:05.487
2023-03-16T08:57:08.533
22047
376068
[ "distributions", "sampling", "data-visualization" ]
609353
1
609360
null
0
24
Consider the random variable $D = X - a$ where $X \sim \text{Exp}(\lambda)$. My task is to find the MGF of the distribution of $D$. I think I have a solution, but it contradicts what I thought would be the intuitive solution, so I would like to confirm my work: For $D = X - a$ where $X \sim \text{Exp}(\lambda)$, the PDF of $D$ is given by $$ f_D(x) = \lambda e^{-\lambda(x+a)} \quad \text{ for } x \geq -a $$ and so \begin{align*} M_D(t) &= \text{E}_D\left[ e^{tx} \right] \\ &= \int_{-a}^{\infty} e^{tx} \lambda e^{-\lambda(x+a)} \text{d}x \\ &= \lambda e^{-\lambda a} \int_{-a}^{\infty} e^{-x(\lambda - t)} \text{d}x \\ &= \lambda e^{-\lambda a} \left( \frac{1}{t - \lambda} \right) \left( 0 - e^{a(\lambda - t)} \right) \\ &= e^{-at} \left( \frac{\lambda}{\lambda-t} \right) \end{align*} This result seems believable, but (before doing any actual math) I had originally suspected $M_D(t) = \frac{\lambda}{\lambda - (t+a)}$. Are there flaws with my solution method, or does $M_D(t) = e^{-at} \left( \frac{\lambda}{\lambda-t} \right)$ seem reasonable? I am having a hard time convincing myself.
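One way I tried to build confidence in the result is a quick simulation sanity check (not a proof): compare the empirical mean of $e^{tD}$ with the derived formula at some fixed $t < \lambda$.

```r
set.seed(1)
lambda <- 2; a <- 5; t <- 0.5          # any fixed t < lambda
D <- rexp(1e6, rate = lambda) - a      # D = X - a
empirical   <- mean(exp(t * D))
theoretical <- exp(-a * t) * lambda / (lambda - t)
c(empirical, theoretical)              # should agree to about 3 decimal places
```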
Confirmation of MGF of Shifted Exponential Distribution
CC BY-SA 4.0
null
2023-03-14T02:33:09.810
2023-03-14T03:37:37.790
null
null
373223
[ "probability", "measure-theory", "moment-generating-function" ]
609354
2
null
609348
0
null
Yes, there are many different ways in which you can do that. Some ways are simple but don't take into account all your information. Other ways are more complex and take more information into account. First, you should know that the technical name for your data is "count data". Using that term will greatly help you find relevant information. The book [Introduction to Categorical Data Analysis](https://mregresion.files.wordpress.com/2012/08/agresti-introduction-to-categorical-data.pdf) has a lot of pointers. One way to go about your problem would be to fit a Generalized Linear Model (GLM) to your data. The intuition behind this, as applied to your problem, would be the following: Every year there is a certain probability of someone in the population getting infected; let's call it the infectious force. Besides the infectious force, there is some normal random variability that can change the number of infections from year to year. For example, in two years with the same infectious force, you may see 930 infections the first year and 943 the second. But random variability can only have so much effect, e.g. it may explain a difference between 930 and 943, but not between 930 and 10000 cases. You observed a certain number of infections in the year 2022. You want to know whether the difference with respect to previous years can be explained by random variability, or whether it is evidence of a higher infectious force. The infectious force is an invisible variable that you don't know. Statistically speaking, it could be the probability of success of a Binomial distribution, or the expected mean of a Negative Binomial or a Poisson distribution. The GLM approach would be to fit a GLM regression model to your count data, to estimate what the infectious force was for the first 10 years, and whether it changed in the year 2022. For this, you first need to choose a model of what the variability in your data looks like. 
You could use Poisson regression, Binomial (logistic) regression by using the total number of people in the population, or negative binomial regression. Then, you would fit a model to your count data which has a shared infectious force across the previous 10 years (that is parameter A), and which has the same infectious force plus a difference (the difference being parameter B) for the year 2022. Then you can ask the model whether parameter B is statistically different from 0, that is, whether that parameter B is needed to explain the data of year 2022, or whether that data point can just be explained by the standard variability. Some resources for doing this kind of thing (though you will have to adapt them to your problem) can be found here: [https://stats.oarc.ucla.edu/r/dae/poisson-regression/](https://stats.oarc.ucla.edu/r/dae/poisson-regression/) [Statistical Test to Compare Count Data](https://stats.stackexchange.com/questions/489093/statistical-test-to-compare-count-data) [https://www.guru99.com/r-generalized-linear-model.html](https://www.guru99.com/r-generalized-linear-model.html) There are even more sophisticated analyses that you could do. For example, it could be expected that there is some variability in the infectious force across the previous 10 years. Does the year 2022 also depart from the normal between-year variability for this disease? That you could answer with a Generalized Linear Mixed Model, or with a Hierarchical Bayesian Regression. Anyhow, implementing any of this will require careful thinking about the model you're fitting, and how to adapt other coding examples to your specific problem.
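To make the Poisson-regression route concrete, here is a minimal sketch in R with made-up yearly counts (substitute your own data). The intercept plays the role of parameter A, the coefficient on `is2022` plays the role of parameter B, and exponentiating it gives the estimated rate ratio for 2022:

```r
# Hypothetical counts for 2012-2021 plus the suspect year 2022 (made-up numbers)
counts <- c(930, 943, 910, 955, 922, 938, 947, 915, 929, 940, 1120)
year   <- 2012:2022
is2022 <- as.integer(year == 2022)

fit <- glm(counts ~ is2022, family = poisson)  # A = intercept, B = is2022
summary(fit)                                   # Wald test of B = 0
exp(coef(fit))                                 # rate ratio for 2022 vs baseline
```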
null
CC BY-SA 4.0
null
2023-03-14T02:34:10.620
2023-03-14T02:34:10.620
null
null
134438
null
609355
1
609362
null
2
34
Can the $\chi^2$-test be used to compare two densities? For example, counts of something per 5000 meters: say 2050 counts/5000 meters and 1500 counts/5000 meters. Can the $\chi^2$-test be used to see whether these two densities are significantly different? I know the $\chi^2$-test is used to compare two proportions, but I am not sure whether it can work here.
Can $\chi^2$-test be used to compare two densities?
CC BY-SA 4.0
null
2023-03-14T02:49:26.500
2023-03-14T04:06:50.230
2023-03-14T03:22:20.950
362671
383153
[ "chi-squared-test" ]
609356
2
null
609342
4
null
What you describe isn't right truncation, it's right censoring. That might seem like a trivial difference in a choice of words, but it's a [critical distinction in statistical analysis](https://stats.stackexchange.com/tags/truncation/info). Right truncation means something different from your situations. It means that, if the time to an event is beyond some value, then you have no observation at all. For example, as [Klein and Moeschberger](https://www.springer.com/us/book/9780387953991) explain (Section 1.19, Second Edition), some work on the time between HIV infection and development of AIDS only included individuals who had developed the disease by the end of the study. Others, whose time since infection hadn't yet been long enough to lead to AIDS, were omitted. Such data are considered right-truncated, as the data set provides no information about potentially longer times to events. The scenarios you describe are different. You have at least some information for all cases, but for some you only have a lower limit for the time until the event. Such times to events are considered right-censored. This [review](https://doi.org/10.1146/annurev.publhealth.18.1.83) discusses many ways that censoring can occur and how to deal with it. Right censoring, as in your scenarios, is readily handled by survival analysis methods. In the economics literature, regression involving censored observations might be called "[tobit regression](https://en.wikipedia.org/wiki/Tobit_model)" instead, but the principles are the same and implementation can simply be behind-the-scenes invocation of survival analysis routines. This web site currently has almost 3000 pages tagged [survival](https://stats.stackexchange.com/tags/survival/info). The R [survival package](https://cran.r-project.org/package=survival) provides many useful tools and vignettes that help you learn how to use them.
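As a minimal illustration of how right-censored times enter the R survival tools mentioned above (the data below are made up; `status = 0` marks a censored observation):

```r
library(survival)                      # ships with standard R installations
time   <- c(5, 8, 12, 12, 15, 20)      # observed or censored follow-up times
status <- c(1, 1,  0,  1,  0,  1)      # 1 = event observed, 0 = right-censored
km <- survfit(Surv(time, status) ~ 1)  # Kaplan-Meier estimate
summary(km)
```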
null
CC BY-SA 4.0
null
2023-03-14T02:57:11.977
2023-03-14T14:02:58.013
2023-03-14T14:02:58.013
28500
28500
null
609358
1
null
null
0
25
Using a generalized linear model, I got some extremely large (and extremely small) odds ratios with enormously wide 95% CIs. While these are all non-significant (P > 0.05), I just wanted to ask the community whether I am missing something. Some of the strange values are as follows:
```
# Intercept
OR: 3.913049e+08
2.5%: 2.776314e-29
97.5%: 2.034168e+266

# Some other values
- OR: 1.861355e+07
2.5%: 1.308682e-21
97.5%: 6.239984e+199
```
Unconventional odds ratio and 95% CI
CC BY-SA 4.0
null
2023-03-14T03:19:26.140
2023-03-14T17:36:34.920
2023-03-14T17:24:11.267
7290
340052
[ "r", "generalized-linear-model", "odds-ratio" ]
609359
1
609363
null
1
32
Is it possible to rewrite $$ \frac{-1}{2}\left(x^T\Gamma^{-1}(\mu_1-\mu_0)+(\mu_1-\mu_0)^T\Gamma^{-1}x\right) $$ as $$ -\theta^Tx $$ where $x,\mu\in\mathbb{R}^d, \Gamma\in\mathbb{R}^{d\times d}$ and $\theta$ is some function of $\mu,\Gamma$? I tried to rewrite it as $$ \sum x_i\Gamma^{-1}_i\mu_i + \sum \mu_i\Gamma^{-1}_ix_i $$ $$ \sum x_i\Gamma^{-1}_i\mu_i + \mu_i\Gamma^{-1}_ix_i $$ $$ 2\sum \Gamma^{-1}_i\mu_ix_i $$ where $\mu_i=(\mu_1-\mu_0)_i$, because $x_i, \mu_i$ are scalars, so the order in which they multiply the $i$th row of $\Gamma^{-1}$ doesn't matter?
Rewrite linear expression
CC BY-SA 4.0
null
2023-03-14T03:28:52.793
2023-03-14T13:38:50.093
2023-03-14T13:38:50.093
380140
380140
[ "generative-models" ]
609360
2
null
609353
2
null
What is the effect of a change of origin and scale on the moment generating function? Specifically, if $X\mapsto \frac{X-a}{h}=: U,$ then what is $M_U(t)$ and is it related to $M_X(t)$ in any way? The observation is simple: $$ M_U(t) =\mathbb E\exp(tU)=\mathbb E\exp\left[\frac{-at}{h}+\frac{tX}{h}\right]=\exp\left[\frac{-at}{h}\right]\mathbb E\exp\left[\frac{tX}{h}\right]=\exp\left[\frac{-at}{h}\right] M_X\left(\frac{t}{h}\right).\tag 1$$ Using this with $h=1$, you can check that if $X\sim\textrm{Exp}(\lambda),$ then $M_D(t) =\exp\left[{-at}\right] M_X\left({t}\right)= \exp\left[{-at}\right] \left(\frac{\lambda}{\lambda-t}\right).$
null
CC BY-SA 4.0
null
2023-03-14T03:37:37.790
2023-03-14T03:37:37.790
null
null
362671
null
609361
1
null
null
1
68
I have the following matrix: $ U = \begin{bmatrix}u_{1} \\ u_{2} \\ \vdots \\u_{T} \end{bmatrix}$ where $u_i$ are $k \times N $ matrices and $E[vec (u_i) vec (u_i)'] = \Sigma_i \otimes I_k $ for $i=1, \ldots,T$ and $I_k$ is the $k \times k$ identity matrix. The $\Sigma_i$ matrices look like: $\Sigma_i = \begin{bmatrix} \sigma_{11,i} & \ldots & \sigma_{1N ,i}\\ \vdots & \ddots & \vdots\\ \sigma_{N1,i} & \ldots & \sigma_{NN,i}\end{bmatrix}$ The correlation between the elements in $u_i, u_j$ is zero for $i\neq j$ and $i,j=1, \ldots, T$ and also $E[u_i]= 0_{k\times N}$ How can I obtain an expression of $ E[vec(U)vec(U)'] $ as a function of all the $\Sigma_i$ with $i = 1,\ldots, T$ matrices? For example, by putting all the $\Sigma_i$ in a big block diagonal matrix or in general defining another big matrix that contains all the $\Sigma_i$?
Covariance matrix and vec operation
CC BY-SA 4.0
null
2023-03-14T03:53:43.397
2023-03-14T20:38:26.867
2023-03-14T20:38:26.867
269632
269632
[ "variance", "covariance", "matrix", "linear-algebra" ]
609362
2
null
609355
1
null
Slightly indirectly, yes. [I'd probably have called those intensities or rates; calling them 'densities' without some modifying adjective or similar, such as a noun taking the role of an adjective, will tend to lead to confusion with probability densities in a discussion such as this one. It took me a while to realize you weren't asking about comparing probability densities.] If the counts occur independently and in proportion to the length in meters (as they would if the densities of the events were the same), then yes. Indeed the lengths in meters don't have to be the same, even though they are in your example. If the rate per meter is the same (as would be assumed under H0) then you can treat the whole set as two draws from the same population, which should occur in the ratio of the proportions of the lengths in meters to the whole length. In this case, on average both counts should be $\frac{5000}{5000+5000} = \frac12$ the total. The most common approach would then be to condition on the total and test for equality of proportions; this is equivalent to testing whether the first proportion is one half, which can indeed be done as a chi-squared goodness-of-fit test if the sample size is not too small (it's fine in this case). Note that your expected counts in this case are both $(2050+1500)\times\frac{5000}{5000+5000} = 1775$. There are other approaches that arrive at essentially the same test.
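In R, that goodness-of-fit test applied to the counts in the question is a one-liner (the expected counts work out to $1775$ each):

```r
counts  <- c(2050, 1500)                       # counts on the two stretches
lengths <- c(5000, 5000)                       # meters surveyed for each
chisq.test(counts, p = lengths / sum(lengths)) # H0: equal rate per meter
```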
null
CC BY-SA 4.0
null
2023-03-14T04:06:50.230
2023-03-14T04:06:50.230
null
null
805
null
609363
2
null
609359
2
null
Here $\Gamma$ is a $d\times d$ matrix, and $x,\mu$ are vectors of length $d$. When using matrix algebra it is common convention to express vectors as column vectors. Therefore, we will think of $x$ and $\mu$ as matrices of size $d\times 1$. Hence, the product $\Gamma^{-1}(\mu_1-\mu_0)$ is a matrix product of size $(d\times d)(d\times 1)$, i.e. it is a matrix of size $d\times 1$. In your expression you have $x^t\Gamma^{-1}(\mu_1-\mu_0)$, so the matrix product is $(1\times d)(d\times d)(d\times 1)$; this is because you wrote $x^t$, turning $x$, which is assumed to be in column form, into row form, and so the dimension numbers get flipped. Now, $x^t\Gamma^{-1}(\mu_1-\mu_0)$ is a scalar, because it is a $(1\times 1)$ matrix, i.e. a number. So, in particular, it is equal to its own transpose (the transpose of a $(1\times 1)$ matrix is itself): $$ \left( x^t \Gamma^{-1}(\mu_1 - \mu_0) \right)^t = x^t \Gamma^{-1}(\mu_1 - \mu_0) $$ But, at the same time, let us see what happens if we use the "transpose properties":

- Transpose of a product is the reverse-product of the transposes
- Transpose of an inverse is the inverse of the transpose
- Transpose of the transpose is the original matrix

Therefore, by using these three, $$ \left( x^t \Gamma^{-1}(\mu_1 - \mu_0) \right)^t = (\mu_1 - \mu_0)^t (\Gamma^{-1})^t (x^t)^t = (\mu_1 - \mu_0)^t (\Gamma^t)^{-1} x$$ It appears that the missing information is that $\Gamma$ is symmetric, i.e. $\Gamma^t = \Gamma$. If we accept the symmetric assumption then we have derived that, $$ x^t \Gamma^{-1}(\mu_1 - \mu_0) = (\mu_1 - \mu_0)^t \Gamma^{-1} x$$ From here you can substitute this in and get the $\theta$ you are looking for.
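A quick numeric check of the final identity in R, with a randomly generated symmetric $\Gamma$:

```r
set.seed(1)
d <- 4
A <- matrix(rnorm(d * d), d, d)
Gamma <- crossprod(A)        # t(A) %*% A: symmetric positive definite
x  <- rnorm(d)
mu <- rnorm(d)               # plays the role of (mu1 - mu0)
lhs <- drop(t(x)  %*% solve(Gamma) %*% mu)
rhs <- drop(t(mu) %*% solve(Gamma) %*% x)
all.equal(lhs, rhs)          # TRUE
```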
null
CC BY-SA 4.0
null
2023-03-14T04:18:43.070
2023-03-14T04:18:43.070
null
null
68480
null
609364
1
609366
null
6
620
I am confused about the relationship between the z-score and the normal distribution. Do we apply z-score normalization to obtain normally distributed data, or, given that we have a normal distribution, can we then apply and use the z-score? What is the exact relationship?
Relationship between z-score and the normal distribution
CC BY-SA 4.0
null
2023-03-14T05:30:14.237
2023-03-14T09:17:48.420
null
null
216235
[ "mathematical-statistics", "normal-distribution", "z-score" ]
609365
2
null
609364
7
null
There is no relationship. The (sample) z-score is defined as $$ z_i = \frac{ x_i - \bar x } {s} $$ where $i$ indexes observations $\{x\}$, $\bar x$ is the sample mean, and $s$ is the sample standard deviation. There is nothing in this definition which states that the data have to be normally distributed, or that you can get normally distributed data by applying this transformation. The z-score represents the number of standard deviations that a data point is from the mean. You may be getting mixed up with a z-test. In a z-test, we assume that under the null hypothesis, the test statistic of interest (e.g. a sample mean) has a normal distribution. The z-test procedure can be found [here](https://en.wikipedia.org/wiki/Z-test).
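A tiny R illustration: z-scoring forces the sample mean to $0$ and the sample standard deviation to $1$, whatever the shape of the data:

```r
x <- c(2, 4, 4, 4, 5, 5, 7, 9)      # any data, normal or not
z <- (x - mean(x)) / sd(x)
c(mean(z), sd(z))                   # 0 and 1 (up to floating point)
all.equal(z, as.numeric(scale(x)))  # base R's scale() computes the same thing
```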
null
CC BY-SA 4.0
null
2023-03-14T05:44:08.057
2023-03-14T05:44:08.057
null
null
369002
null
609366
2
null
609364
5
null
The "normal distribution" is an entire family of different distributions. We use the notation $\textbf{Normal}(\mu,\sigma^2)$ to indicate which type of normal we get. If you pick a certain value for $\mu$ and another (positive) value for $\sigma$, then you get a different member of the normal family. Here are some pictures taken from Wikipedia: [](https://i.stack.imgur.com/LVuB3.png) If you vary $\mu$ you are varying where the center of the distribution is. If you vary $\sigma$ you are varying how spread out the distribution is. The "standard normal distribution" is the one where $\mu=0$ and $\sigma = 1$. Since there is an infinite family of normal distributions, it would be annoying to have different functions/calculators for each one. So it is convenient to convert all normal distributions into the "standard normal" form. If $x_1,x_2,...,x_n$ are samples from some normal distribution, you replace each $x_i$ in that list by $$ x_i \mapsto \frac{x_i - (\text{sample mean})}{(\text{sample deviation})}$$ This is known as "calculating the z-score for each $x_i$". By doing this process you have transformed your original data set $x_1,...,x_n$ into a new data set $z_1,...,z_n$ that has sample mean $0$ and sample standard deviation $1$; if the original data really did come from a normal distribution, the transformed data approximately follow a $\text{Normal}(0,1)$ distribution.
null
CC BY-SA 4.0
null
2023-03-14T05:48:33.410
2023-03-14T05:48:33.410
null
null
68480
null
609367
1
null
null
0
21
I am interested in investigating the association between blood cholesterol and dietary fiber, controlling for other covariates (age, gender, BMI, etc.) in around 40 participants. I have some data from people with only one time point, and others with 2 data points. I understand that if I want to include individuals with multiple data points, I should use a mixed-effects model to account for the correlation between repeated measures within individuals. To fit a mixed-effects model in Stata, I wondered whether I could use the "mixed" command. Below is an example of how I could use the "mixed" command to analyse the association between blood cholesterol and dietary fiber, while accounting for the correlation between repeated measures within some individuals:
```
mixed cholesterol dietfiber age gender || id:
```
Just for clarity, I am not interested in the role of time, or changes over time, but just whether there is an association between cholesterol and dietary fiber irrespective of time and controlling for the other covariates.
Association between two variables (blood cholesterol and diet fibre)- data from individuals with one datapoint and others with multiple
CC BY-SA 4.0
null
2023-03-14T05:52:13.033
2023-03-15T02:01:00.013
2023-03-15T02:01:00.013
11887
358089
[ "mixed-model", "stata", "biostatistics" ]
609369
1
null
null
0
27
While fine-tuning a deep neural network I ran into the following situation:

- My training and validation losses are both decreasing and have very similar values throughout training. In particular, the training loss is not significantly lower than the validation loss. Still, both loss values are rather high. Thus, I would argue the model is underfitting.
- After training is complete, I calculate various metrics such as precision and recall on the training set and the validation set. Here, the metrics on the training set seem sound, but the metrics on the validation set show very poor performance. Thus, I would argue the model is overfitting.

This seems very contradictory to me. Usually I would suspect that the loss function I am using is not useful for the task I want the model to learn; however, I do not have the possibility to change it. What can I do now? And: is my model underfitting or overfitting?
Loss indicates Underfitting, Metrics indicate overfitting. What now?
CC BY-SA 4.0
null
2023-03-14T07:38:57.030
2023-03-14T07:40:35.193
2023-03-14T07:40:35.193
362671
383162
[ "neural-networks", "model-evaluation", "overfitting" ]
609370
1
null
null
0
25
I ran a goal-setting program at my university. Students had four goals each. For each goal they could choose either a basic or an advanced level. They rated themselves between 1-5 each week. I want to compare self-rating levels for basic and advanced goals to see if there's a link between the level of goal selected and the level of self-rating. In total, students performed 1197 advanced self-ratings over 12 weeks. When I add up all these self-ratings the total score is 4618, with an average score of 3.857978279 and a standard deviation of 1.420659883. In comparison, the students performed 1439 basic self-ratings, with a total score of 4549, an average score of 3.161223072, and a standard deviation of 1.607204839. There seems to be quite a big difference between the average scores for the two goal categories, M = 3.16 vs. M = 3.86. It looks like students who select advanced goals rate themselves higher. I'd like to present this finding, but I'm not sure what test to use (and how to clean the data). I'm trying to use JASP... TIA!
What test should I use to compare self-ratings of two types?
CC BY-SA 4.0
null
2023-03-14T07:39:16.697
2023-03-14T07:47:04.623
2023-03-14T07:47:04.623
382959
382959
[ "probability" ]