610620
2
null
610615
2
null
With fewer than 100 observations, you are limited in what you can accomplish. The risk in this type of modeling is overfitting the data: matching your model to the peculiarities of the data sample instead of elucidating true underlying associations between predictors and outcome. For the unpenalized regression models you seem mostly to be using, you typically need [on the order of 15 observations per coefficient that you will estimate](https://stats.stackexchange.com/q/29612/28500). That would restrict you to 6 or 7 unpenalized coefficients, one of which you have already assigned to `ecosystem`. That count of coefficients includes not only the specific predictors you have in mind, but also any interaction terms needed to evaluate differences in predictor associations with outcome between the ecosystems.

In your work so far, you have exacerbated that problem with data-driven attempts to transform the data and stepwise predictor selection. Any use of the outcomes to choose the predictors or their transformations will, at the least, make p-values and the like unreliable. Stepwise or other methods for automated predictor selection are [poor choices](https://stats.stackexchange.com/q/20836/28500).

What you need is a model that can flexibly fit the data in a way that lets the data tell you the functional shapes of the association between predictors and outcome, but done in a way that penalizes the magnitudes of coefficients to avoid overfitting the data at hand. A [generalized additive model](https://stats.stackexchange.com/tags/generalized-additive-model/info) (GAM) can be a good choice for this type of data. There's a reasonably simple explanation of the principles in Section 7.7 of [An Introduction to Statistical Learning](https://www.statlearning.com). The [mgcv package](https://cran.r-project.org/package=mgcv) is a popular implementation in R.
This site has nearly 900 questions tagged with [generalized-additive-model](https://stats.stackexchange.com/questions/tagged/generalized-additive-model?sort=newest), and several individuals with experience in such models who regularly contribute to this site.
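The core point about penalizing coefficient magnitudes can be illustrated outside mgcv with plain ridge regression; a minimal numpy sketch (made-up data, not the asker's) showing that a penalized fit shrinks coefficients relative to unpenalized least squares when observations are scarce:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 10                    # few observations, many coefficients
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:2] = [1.0, -0.5]      # only two real signals
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Unpenalized least squares
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge (penalized) solution: (X'X + lam*I)^-1 X'y
lam = 5.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Penalization shrinks coefficient magnitudes toward zero
print(np.linalg.norm(beta_ols), np.linalg.norm(beta_ridge))
```

A GAM's smoothness penalty plays the same role as `lam` here, except it penalizes the wiggliness of the fitted curves rather than raw coefficient size.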
null
CC BY-SA 4.0
null
2023-03-24T17:15:28.167
2023-03-24T17:15:28.167
null
null
28500
null
610622
1
null
null
0
16
I'll provide physical context just to be able to write my question clearly. In an experiment, we're trying to measure the electric field using a device called an electric field meter (EFM). Roughly speaking, it converts the field to a potential difference that is then read off an avometer. To obtain the field, we multiply the potential difference by a factor that depends on the calibration of the device. I want to know the uncertainty in such a measurement. It results from two devices, but I don't know whether they are unrelated/independent, because the desired value is just a multiple of the actual measurement. My teacher said the total error would be $\sigma_{\text{total}} = \sqrt{\sigma_1^2 + \sigma_2^2}$, where $\sigma_1$ and $\sigma_2$ are the errors of the respective devices. But I thought that formula applied when the measurements are made independently, for example when two devices measure two different quantities and the desired quantity is their product or their ratio. Am I wrong?
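A Monte Carlo sketch of this situation (all numbers hypothetical): if the calibration factor and the reading are independent, their *relative* uncertainties combine in quadrature for the product, which is the multiplicative analogue of the quadrature formula for sums:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

k_mean, k_sd = 5.0, 0.10     # hypothetical calibration factor and its uncertainty
v_mean, v_sd = 2.0, 0.05     # hypothetical voltage reading and its uncertainty

k = rng.normal(k_mean, k_sd, N)
v = rng.normal(v_mean, v_sd, N)
e = k * v                    # field estimate = calibration factor * reading

# For a product of independent quantities, RELATIVE errors add in quadrature
rel_analytic = np.sqrt((k_sd / k_mean) ** 2 + (v_sd / v_mean) ** 2)
rel_mc = e.std() / e.mean()
print(rel_analytic, rel_mc)
```

Whether the two error sources really are independent here is a physics question about how the calibration was done; the simulation only checks the propagation formula under that assumption.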
Error in measurement from two correlated devices
CC BY-SA 4.0
null
2023-03-24T17:29:35.397
2023-03-24T17:47:37.680
null
null
384063
[ "measurement-error", "measurement" ]
610623
1
null
null
5
332
I'm trying to solve biochemistry problems (think protein folding) with DNNs. Are there 2D / 3D coordinate systems that are particularly well suited for deep neural networks (DNNs) to process? For example, if we were training a DNN to predict the gravitational attraction between two objects, Cartesian coordinates would presumably do worse than polar coordinates, because the key value for calculating gravitational attraction is distance, a value directly expressed in polar coordinates but one that would require a DNN to learn the Pythagorean rule when using Cartesian coordinates. As another example, the positional embeddings based on sine and cosine used in early Transformers were arguably an interesting "coordinate system" in 1D. Another answer might be not to worry about it: DNNs will figure it out no matter what (reasonable) coordinate system you pick.
Options for 3D coordinate systems?
CC BY-SA 4.0
null
2023-03-24T17:37:31.190
2023-03-24T21:35:41.347
2023-03-24T17:40:30.630
22311
254989
[ "neural-networks", "feature-engineering" ]
610624
2
null
610623
4
null
If we know something about the problem we're trying to model, then representing that knowledge to the model via a good set of features is incredibly valuable. This is often called "[feature-engineering](/questions/tagged/feature-engineering)," and it can yield dramatic improvements in model quality. [Sextus](https://stats.stackexchange.com/users/164061/sextus-empiricus)'s [answer](https://stats.stackexchange.com/a/610645/22311) provides an example, in the context of protein folding. But there's no single "best representation" of arbitrary data, because not all problems have the same underlying mechanics. In other words, even if you do an experiment and validate that polar coordinates are more useful than Cartesian on one task, there's surely a different task where you can demonstrate the opposite result. Another layer of nuance is considering how many model parameters are required to achieve the desired result. It may be the case that a more refined feature representation allows one to achieve similar results using a smaller network (measured by the number of parameters).
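The gravity example from the question can be checked directly: a linear model given the engineered $1/r^2$ feature fits a toy inverse-square law exactly, while the same linear model on raw Cartesian coordinates cannot. A deliberately simple sketch (a deep net would narrow the gap, but the point about representation stands):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
xy = rng.uniform(0.5, 3.0, size=(n, 2))      # positions of a second body
r2 = (xy ** 2).sum(axis=1)                   # squared distance from the origin
force = 1.0 / r2                             # toy inverse-square "gravity"

def lin_fit_err(features, target):
    """Mean absolute residual of an ordinary least-squares fit with intercept."""
    A = np.column_stack([features, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.abs(A @ coef - target).mean()

err_raw = lin_fit_err(xy, force)                   # raw Cartesian coordinates
err_eng = lin_fit_err((1.0 / r2)[:, None], force)  # engineered 1/r^2 feature
print(err_raw, err_eng)
```

The engineered feature makes the relationship linear, so the fit is essentially exact; the raw coordinates leave a large residual that a linear model cannot remove.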
null
CC BY-SA 4.0
null
2023-03-24T17:45:05.170
2023-03-24T21:35:41.347
2023-03-24T21:35:41.347
22311
22311
null
610626
1
null
null
2
53
I have a dataset where the primary result is the test result (a binary variable, 0 or 1) from doctors. I want to see whether a continuous variable X affects the test result. In this dataset, one row represents one test, but there are multiple rows from the same doctor, which means some doctors conducted more than one test. So I wanted to build a model that includes doctor as a random effect, like this: ``` glmer(test_result ~ X + (1 | doctor_ID), family = binomial) ``` How is this different from a regular logistic regression where you put doctor ID in as a covariate? ``` glm(test_result ~ X + doctor_ID, family = binomial) ``` What if I have another random-effect variable, `patient_ID` (test results from the same patient form a cluster)? How do I add this to the model?
logistic regression model with mixed effects?
CC BY-SA 4.0
null
2023-03-24T17:50:19.623
2023-03-24T22:56:21.360
2023-03-24T22:56:21.360
345611
313293
[ "regression", "logistic", "mixed-model", "generalized-linear-model", "glmm" ]
610627
2
null
610568
0
null
The ratio of interest to you is a ratio of two random variables: a count of survivors divided by a count of original cases. This often occurs in biological measurements, whether the counts are made in the field, in a petri dish, or elsewhere. Both components of the ratio appear to be random (the denominators don't seem to be fixed by the design of the experiment). Ratios of this sort are often highly variable: most rates are close to zero, but some can have long tails to the right. They are therefore often analyzed on a log scale (sometimes there is a biological basis for this). You can do this by computing the log of each ratio, then calculating the average of these logs, and finally taking the antilog of the average. Use this as an estimate of the rate. A standard error and confidence interval, e.g., a 95% interval, can also be calculated on the log scale. The antilogs of the lower and upper values can then be used as a confidence interval for the rate, that is, for the antilog of the average of the logs of the ratios.
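A sketch of this recipe with made-up ratios (a t-based interval on the log scale, then antilogs; the estimate is the geometric mean of the ratios):

```python
import numpy as np
from scipy import stats

# Hypothetical survivor/original-case ratios
ratios = np.array([0.10, 0.05, 0.22, 0.08, 0.15, 0.03, 0.30, 0.12])

logs = np.log(ratios)
gm = np.exp(logs.mean())                 # antilog of the mean log = geometric mean

# 95% CI on the log scale, then back-transform
se = logs.std(ddof=1) / np.sqrt(len(logs))
lo, hi = stats.t.interval(0.95, len(logs) - 1, loc=logs.mean(), scale=se)
print(gm, np.exp(lo), np.exp(hi))
```

Note that the back-transformed interval is asymmetric around the estimate, which is exactly the behavior you want for right-skewed ratios.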
null
CC BY-SA 4.0
null
2023-03-24T17:54:48.087
2023-03-24T17:54:48.087
null
null
154840
null
610628
1
610638
null
4
189
From what I understood, when doing PCA we can work with either raw or standardised data, depending on the situation we're in. Is it true that the average of the eigenvalues is equal to 1 when we are working with standardised data? If yes, why?
Does the average eigenvalue equal 1 in PCA applied to standardised data?
CC BY-SA 4.0
null
2023-03-24T18:26:27.907
2023-03-24T19:33:20.743
2023-03-24T19:33:20.743
56940
377525
[ "pca", "standardization", "eigenvalues" ]
610629
2
null
517102
0
null
In general, if $X$ takes at most countably many values, then for any function $f$, the entropy $H(f(X))$ of the random variable $f(X)$ cannot be greater than the entropy $H(X)$ of $X$. In short: $H(X) \geq H(f(X))$ for any $f$. This is due to the monotonicity of the logarithm, implying $$(a+b)\log\Big(\frac{1}{a+b}\Big)=a\log\Big(\frac{1}{a+b}\Big)+b\log\Big(\frac{1}{a+b}\Big)\leq a\log\Big(\frac{1}{a}\Big)+b\log\Big(\frac{1}{b}\Big)$$ for any $a, b >0$. Now consider $A = \{x\in X(\Omega): \exists x'\in X(\Omega)\text{ with } x \neq x'\text{ and } f(x)=f(x')\}$ and notice that $$H(X) - H(f(X)) = \sum_{x\in A}P(X=x)\log\Big(\frac{1}{P(X=x)}\Big)-\sum_{y\in f(A)}P(f(X)=y)\log\Big(\frac{1}{P(f(X)=y)}\Big).$$
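A tiny numerical illustration of the inequality: merging two outcomes of $X$ (so that $f$ is non-injective) can only lower the entropy:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, with the 0*log(0) = 0 convention."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# X takes 4 values; f merges the last two outcomes (f(x3) = f(x4))
p_x  = [0.4, 0.3, 0.2, 0.1]
p_fx = [0.4, 0.3, 0.3]        # probabilities of f(X) add over merged outcomes

print(entropy(p_x), entropy(p_fx))
```

If $f$ is injective on the support of $X$, the two entropies coincide; the strict drop comes precisely from the merged outcomes in the set $A$ above.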
null
CC BY-SA 4.0
null
2023-03-24T18:30:27.933
2023-03-24T18:30:27.933
null
null
323131
null
610630
1
null
null
0
31
I am (1) generating bivariate normal deviates $(Y_1, Y_2)$ (2) dichotomizing one of the normal deviates according to quantiles of N(0,1) to yield the normal, binary pair $(Y_1, Y_2^*)$. I want to generate $(Y_1, Y_2)$ according to some intermediate correlation $r_{b}$ so that $(Y_1, Y_2^{*})$ has correlation $r_{pb}$. From what I understand, the Pearson correlation of $(Y_1, Y_2)$ is referred to as the biserial correlation ($r_b$) and the Pearson correlation of ($Y_1$, $Y_2^*$) is referred to as the point biserial correlation ($r_{pb}$). $r_b$ and $r_{pb}$ are one-to-one functions of each other, and I am aware of the closed form of this relationship. Let $q$ represent the $(1-p)^{th}$ quantile of $Y_2 \sim N(0,1)$ and construct $Y_2^{*}=1$ if $y_2 \geq q $ and 0 otherwise so that $Y_2^{*} \sim$ Bernoulli($p$). Then, $$r_b = \frac{r_{pb} \sqrt{p(1-p)}}{\phi(q)}$$ where $\phi$ is the $N(0,1)$ density. [This question and answer](https://stats.stackexchange.com/questions/313861/generate-a-gaussian-and-a-binary-random-variables-with-predefined-correlation) was helpful for this. Now, suppose instead that I am (1) generating bivariate normal deviates $(Y_1, Y_2)$ (2) dichotomizing both of the normal deviates to yield the binary pair $(Y_1^{*}, Y_2^*)$. I want to generate $(Y_1, Y_2)$ according to some intermediate correlation $r_t$ so that $(Y_1^{*}, Y_2^{*})$ has correlation $r_{\phi}$. From what I understand, the Pearson correlation of $(Y_1, Y_2)$ is referred to as the tetrachoric correlation ($r_t$), and the Pearson correlation of ($Y_1^{*}$, $Y_2^*$) is referred to as the phi coefficient ($r_{\phi}$). My question: In the case where the latent variables are both normal, and the observed variables are both binary, is there a closed form relationship between the latent correlation $r_t$ and observed correlation $r_\phi$? 
The [phi2tetra function in the psych R library](https://personality-project.org/r/html/phi2poly.html) seems to find the corresponding values based on simulation.
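In the forward direction there is in fact a closed form: $P(Y_1^*=1, Y_2^*=1)$ is a bivariate normal orthant probability, which gives $r_\phi$ as an explicit function of $r_t$ via the bivariate normal CDF (the inverse, recovering $r_t$ from $r_\phi$, still requires numerical root-finding). A scipy sketch with a Monte Carlo check (parameter values arbitrary):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def phi_from_tetrachoric(r_t, p1, p2):
    """Phi coefficient of (Y1*, Y2*) obtained by thresholding a standard
    bivariate normal with correlation r_t at the (1-p) quantiles."""
    q1, q2 = norm.ppf(1 - p1), norm.ppf(1 - p2)
    # P(Z1 > q1, Z2 > q2) via inclusion-exclusion on the bivariate CDF
    p11 = 1 - norm.cdf(q1) - norm.cdf(q2) + multivariate_normal(
        mean=[0, 0], cov=[[1, r_t], [r_t, 1]]).cdf([q1, q2])
    return (p11 - p1 * p2) / np.sqrt(p1 * (1 - p1) * p2 * (1 - p2))

# Monte Carlo check
rng = np.random.default_rng(3)
r_t, p1, p2 = 0.5, 0.3, 0.4
z = rng.multivariate_normal([0, 0], [[1, r_t], [r_t, 1]], size=1_000_000)
y1 = (z[:, 0] > norm.ppf(1 - p1)).astype(float)
y2 = (z[:, 1] > norm.ppf(1 - p2)).astype(float)
print(phi_from_tetrachoric(r_t, p1, p2), np.corrcoef(y1, y2)[0, 1])
```

So the relationship is "closed form" only up to the bivariate normal CDF, which itself has no elementary expression; inverting it is presumably what simulation- or root-finding-based routines like `phi2tetra` are doing.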
Closed form relationship between phi coefficient and tetrachoric correlation?
CC BY-SA 4.0
null
2023-03-24T18:31:27.617
2023-03-24T19:46:13.020
2023-03-24T19:46:13.020
235199
235199
[ "correlation", "simulation", "psychometrics" ]
610632
2
null
610628
2
null
Not quite. The sum of the eigenvalues of any PCA performed on a $P \times P$ correlation matrix ($\textbf{R}$), where $P$ is the number of variables, exactly equals $P$. This is why such eigenvalues are typically interpreted as apportioning the total variance in the data among $P$ different components. In this case the average of the eigenvalues exactly equals 1. Sometimes, however, a PCA is performed on the variance/covariance matrix ($\mathbf{\Sigma}$) instead. This is typically done when $\mathbf{\Sigma} \approx \textbf{R}$, so you get roughly similar interpretations if the approximation is close, with a similar average eigenvalue. $\mathbf{\Sigma} = \textbf{R}$ exactly when the data are standardized. Otherwise, PCA on $\mathbf{\Sigma}$ can produce eigenvalues that do not sum to $P$, and thus the average eigenvalue will not equal 1. Properly speaking, PCA is not performed on the data but on either $\mathbf{\Sigma}$ or $\textbf{R}$.
null
CC BY-SA 4.0
null
2023-03-24T18:48:51.130
2023-03-24T18:48:51.130
null
null
44269
null
610633
1
null
null
0
11
I have a set of CT and MRI images that I aim to use for a classification task (disease presence: yes/no). The CTs and MRIs do not correspond to the same patients; some patients have CTs and others have MRIs. Can I use a multi-input network in this case, or must the inputs of a multi-input network have a 1:1 correspondence?
Should the inputs correspond to the same subject when training a multi-input network?
CC BY-SA 4.0
null
2023-03-24T18:54:34.450
2023-03-24T18:54:34.450
null
null
336916
[ "neural-networks", "classification" ]
610634
1
null
null
2
24
If I have 50 people randomly split into 5 groups of 10 people, I can rank the groups based on their combined weight. If I were to rank each person based on their individual weight, what would be the most accurate way to recreate the original group rankings without knowing the actual weights of the individuals, just their rankings?
What is the best way to combine individual rankings to provide an overall group ranking?
CC BY-SA 4.0
null
2023-03-24T18:55:41.973
2023-03-24T18:55:41.973
null
null
384069
[ "ranking" ]
610635
1
null
null
0
33
This may sound simple but I'm lost. I have a simple experiment: 15 controls and 15 treatments. 13 of the 15 controls were positive, but only one of the treatment samples was positive. This makes the SD = 0, so t-tests and the like fail. It's obvious there is a difference between the samples, but I have no test to verify it. I could make a model based on the control results and simulate some data, but the sample size is so small that the model would stink. I could use probabilities: 86% are expected to be positive, but only 6.7% were found. I could use a hypothesized error rate, like 13% (2 of the 15), and the results are still well beyond expectations. Any advice? I see this question has been asked similarly, but there has not been a clear solution for significance. Thanks --- OK, thanks for that null hypothesis testing link. Here is the answer I've come up with to justify the rareness of the results, although it is still tricky to explain. Here is a probability table for my example. Essentially there are 155117520 possible combinations, but only 225 in which there is only a single positive, so 0.0001%. Compared to the control, which had 13 positives with 11025 possibilities, which might occur 13% of the time; being the control, this would be 87%. In this case the SE is 1.41 and I want a 95% confidence interval, so I would expect 95 × 1.41% points, which is 1.34%. So, to accept the null that the treatment behaves like the control, I would expect a value of 87% ± 1.34%, or really 12 to 14 positives. The question then becomes how to explain this. I would think my treatment should be within the 95% CI of my control. Does this sound correct? [](https://i.stack.imgur.com/Pz24k.png)
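For what it's worth, the combinatorial counting described here is essentially what Fisher's exact test formalizes for a 2×2 table of counts, and it needs no SD estimate at all. A minimal sketch with the counts from the question (13/15 positive controls vs 1/15 positive treatments):

```python
from scipy.stats import fisher_exact

#             positive  negative
table = [[13,  2],   # control:   13 of 15 positive
         [ 1, 14]]   # treatment:  1 of 15 positive

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)
```

The test conditions on the margins and sums hypergeometric probabilities, which is the same "possible combinations" logic sketched above, just packaged as a standard significance test.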
Significance of a single point, or population of 1
CC BY-SA 4.0
null
2023-03-24T18:59:59.190
2023-03-27T19:22:31.573
2023-03-27T19:22:31.573
22311
368173
[ "t-test", "small-sample", "z-test" ]
610636
1
null
null
0
21
Following this [post](https://stats.stackexchange.com/questions/151212/using-individuals-as-a-random-effect/610621#610621) I was wondering how to plot, or even communicate, gene expression using relative quantification, i.e. through transformation of cycle threshold (Ct) values. In my particular experiment, a longitudinal study with 3 groups and 2 measurements, I performed the following calculations:

dCt_post = Ct_target_post - Ct_ref_post
dCt_pre = Ct_target_pre - Ct_ref_pre
ddCt = dCt_post - dCt_pre

To express the variation caused by treatment there are several possibilities reflected in the literature: - log2(dCt_post/dCt_pre) - the ratio dCt_post/dCt_pre - adding a calibrator, or even a pool of untreated sample, to the different batches, which will result in another ddCt value: dCt_test = Ct_target,test - Ct_ref,test; dCt_calibrator = Ct_target,calib - Ct_ref,calib; ddCt = dCt_test - dCt_calibrator. (The non-explicit part here is that Ct_target,test in longitudinal studies is made up of pre- and post- values. This is just a note, not pertinent to the topic.) I am a little bit confused between the ratio and the log2 of the dCt values. I have read something about variances in the post by @EdM, but crudely speaking I don't know which contribution each one makes, statistically speaking. Is one approach more appropriate in this context, from a mathematical standpoint, given that RT-qPCR data come from an inverse log space (sigmoid function)? Interaction and main effects are calculated with a linear mixed-effects model.
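Since Ct values already live on a log2 scale (one cycle ≈ one doubling), the widely used Livak relative-quantification method reports the fold change as $2^{-\Delta\Delta Ct}$, and the log2 fold change is then simply $-\Delta\Delta Ct$. A sketch with hypothetical Ct values (not the asker's data):

```python
import numpy as np

# Hypothetical Ct values for one subject (target and reference gene, pre/post)
ct_target_pre, ct_ref_pre = 28.0, 18.0
ct_target_post, ct_ref_post = 26.5, 18.1

dct_pre = ct_target_pre - ct_ref_pre     # 10.0
dct_post = ct_target_post - ct_ref_post  # ~8.4
ddct = dct_post - dct_pre                # ~-1.6

fold_change = 2 ** (-ddct)               # Livak 2^-ddCt fold change
log2_fc = -ddct                          # the same quantity on the log2 scale
print(fold_change, log2_fc)
```

This is why differencing (or log-transforming) dCt values, rather than taking their raw ratio, matches the measurement scale of the assay: the linear mixed model on dCt values is then operating on log2 expression.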
Rationale for the difference between log quotient and ratio? Fold change?
CC BY-SA 4.0
null
2023-03-24T19:02:18.700
2023-03-25T18:07:15.267
null
null
339186
[ "r", "mixed-model", "logarithm", "ratio" ]
610637
1
null
null
3
207
In this page from scikit-learn about [GLM](https://scikit-learn.org/stable/modules/linear_model.html#generalized-linear-models), the notion of unit deviance is introduced as a loss function (from the machine learning perspective). I want to know if there is an equivalence between these two notions: unit deviance vs. loss function. [](https://i.stack.imgur.com/XGYdp.png) For linear regression, there is equivalence, since the loss function is the squared error, which is equivalent to maximizing the Gaussian likelihood in MLE (Maximum Likelihood Estimation). For logistic regression, I struggled to understand, since the deviance is: $$2\Big({y}\log \frac{y}{\hat{y}}+({1}-{y})\log \frac{{1}-{y}}{{1}-\hat{y}}\Big)$$ Whereas the classic log loss is: $$-\big(y \log(p) + (1-y) \log(1 - p)\big)$$ And [this post](https://stats.stackexchange.com/questions/157870/scikit-binomial-deviance-loss-function) seems to confirm that the deviance is the same as the log loss. As for Gamma regression, I found [this post discussing the loss function for gamma in XGBoost](https://stats.stackexchange.com/questions/484555/loss-function-in-for-gamma-objective-function-in-regression-in-xgboost) and the result is different from the one seen in scikit-learn.
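For the binary case the connection can be checked numerically: with the convention $0\log 0 = 0$, the $y\log y$ terms vanish for $y \in \{0,1\}$ and the unit deviance is exactly twice the log loss. A quick numpy check:

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=100).astype(float)   # binary outcomes
p = rng.uniform(0.01, 0.99, size=100)            # predicted probabilities

log_loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Unit deviance with the convention 0*log(0) = 0: for y in {0,1} the
# y*log(y) terms vanish, leaving exactly twice the log loss
with np.errstate(divide="ignore", invalid="ignore"):
    term1 = np.where(y > 0, y * np.log(y / p), 0.0)
    term2 = np.where(y < 1, (1 - y) * np.log((1 - y) / (1 - p)), 0.0)
deviance = 2 * (term1 + term2)

print(np.allclose(deviance, 2 * log_loss))
```

The factor of 2 (and any additive constant depending only on $y$) does not change the minimizer, which is why the two are interchangeable as training objectives.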
Is unit deviance (statistics) equivalent to the loss function (machine learning)?
CC BY-SA 4.0
null
2023-03-24T19:20:57.097
2023-03-25T03:26:03.140
2023-03-25T00:10:25.977
96531
96531
[ "logistic", "generalized-linear-model", "gamma-distribution", "deviance" ]
610638
2
null
610628
6
null
Definitely yes. To see this let's consider the population case. Suppose $X$ is a $p\times 1$ random vector with mean $\mu$ and covariance matrix $\Sigma$ and consider $Y = X\text{diag}(\Sigma)^{-1/2}$. If $\Sigma_Y$ is the covariance matrix of $Y$, then it has 1s on its main diagonal. Now consider the PCA applied to $Y$, i.e. consider the eigendecomposition $\Sigma_Y = \Gamma\Lambda \Gamma^\top$, where $\Lambda = \text{diag}(\lambda_1,\ldots,\lambda_p)$. The principal components of $Y$ are the components of the vector $$ Z = Y\Gamma. $$ You can easily check that $\text{cov}(Z) = \Lambda$ and thus $$p^{-1}\text{trace}(\Sigma_Y) =p^{-1}\text{trace}(\Lambda) = p^{-1} \sum_{i=1}^p \lambda_i= 1.$$ Of course, a similar property holds also for the sample PCA.
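A quick numerical check of the sample version (numpy only, made-up data): the correlation matrix has 1s on its diagonal, so its trace is $p$ and the mean eigenvalue is exactly 1:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))  # correlated data, p = 6

# PCA on standardized data = eigendecomposition of the correlation matrix
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)

# trace(R) = p because every diagonal entry is 1, so the mean eigenvalue is 1
print(eigvals.mean())
```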
null
CC BY-SA 4.0
null
2023-03-24T19:24:58.350
2023-03-24T19:24:58.350
null
null
56940
null
610639
1
null
null
0
22
This distribution fails all tests for the normal distribution (of course). But it looks half-normal. If I plot a normal dist. over it with the median and std deviation of the actual data, it's a poor fit. If I tweak the parameters with trial and error it looks better (see picture). Since my only justification is 'it looks ok', I'm wary of claiming this is half-normal and just ploughing ahead with parametric stats. Particularly because I adjusted all parameters pretty much at random. But I have little to no stats experience - so is there a rigorous way to test for a half-Gaussian? If not, are there any options at all other than just look at the plot and state it's a match? [](https://i.stack.imgur.com/ZiCLv.png)
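Rather than tweaking parameters by eye, one option is to fit the half-normal by maximum likelihood and then run a goodness-of-fit test against the fitted distribution. A scipy sketch (simulated half-normal data standing in for the real data; the scale value 2.0 is arbitrary):

```python
import numpy as np
from scipy import stats

# Simulated stand-in for the data (assumed half-normal with scale 2)
rng = np.random.default_rng(6)
data = stats.halfnorm.rvs(loc=0, scale=2.0, size=2000, random_state=rng)

# Fit by maximum likelihood; fix loc=0 if the values genuinely start at zero
loc, scale = stats.halfnorm.fit(data, floc=0)

# Kolmogorov-Smirnov test against the fitted half-normal
ks = stats.kstest(data, "halfnorm", args=(loc, scale))
print(scale, ks.pvalue)
```

One standard caveat: estimating the parameters from the same data you then test makes the plain KS p-value unreliable (too forgiving); a Lilliefors-style correction or a parametric bootstrap of the KS statistic is the stricter version of this check.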
Half-normal distribution stats tests vs 'by eye'
CC BY-SA 4.0
null
2023-03-24T19:35:45.833
2023-03-24T19:35:45.833
null
null
379480
[ "normal-distribution", "normality-assumption" ]
610641
1
null
null
0
35
I am looking at the cumulative incidence of CMV and EBV infections at 12 months. I can compile a plot of cumulative incidence based on the first event to occur, but I have two patients who had BOTH CMV and EBV events during the 0-12 month period. How do I account for those? I have been using `x <- cuminc(timetoevent, status)` then `plot(x)`. Here my status is coded as 0 for no event, 1 for CMV, or 2 for EBV.
Cumulative incidence for more than one event per patient
CC BY-SA 4.0
null
2023-03-24T19:48:26.410
2023-03-24T19:48:26.410
null
null
384072
[ "competing-risks" ]
610642
1
610647
null
3
77
In Bayesian statistics, we may want to determine at what interval for example 95% of the posterior probability exists. For this we may want to use the Highest Posterior Density Interval (HPDI) which is [1]: > The HPDI is the narrowest interval containing the specified probability mass. If you think about it, there must be an infinite number of posterior intervals with the same mass. But if you want an interval that best represents the parameter values most consistent with the data, then you want the densest of these intervals. That’s what the HPDI is. Or the Percentile Intervals (PI) [1]: > Intervals of this sort, which assign equal probability mass to each tail, are very common in the scientific literature. Richard McElreath mentions that HPDI has some advantages over PI, but HPDI is more computationally intensive than PI and suffers from greater simulation variance. So both have their advantages and disadvantages. But PI mainly works well if the distribution isn't too asymmetrical, which in practice is most of the time the case, right? So I was wondering if anyone could explain and show in what situation you would prefer to use `PI` over `HPDI` when computational intensity doesn't matter? --- Reference: - Statistical Rethinking, Richard McElreath [1]
Differences between HPDI and PI intervals
CC BY-SA 4.0
null
2023-03-24T19:52:01.670
2023-03-27T09:29:32.363
2023-03-27T09:29:32.363
56940
323003
[ "probability", "self-study", "bayesian", "credible-interval", "highest-density-region" ]
610643
2
null
610184
2
null
Interesting question. From what I understand, you wish to combine the features you extract from the image, which have different shapes, `1042xN`, with clinical information that is "constant" over the image. Let us assume that includes some tabular data $X_i$ (patient $i$'s age or marital status, for example). If I am correct and that is the case, then my suggestion is to convert your signals from `1042xN` into a scalar and use it in a regression at the second stage. In other words, my suggestion is to use a stacking ensemble. It is also connected to `usεr11852`'s suggestion but allows for more agile solutions. In general, this approach involves multiple steps: - Convert your image from 1042xN to Cx1, where C can be either an embedding dimension or a score for the image (see elaboration on that below). - Train a model using $[X_i,C_i]$ to predict $y_i$, which, if I got it correctly, gets the value 1 if patient $i$ has at least $K$ of their cells being malignant (frankly, this is a generic framework; you can define your $y_i$ as you go, but there are some caveats, see below).

1st stage

At this stage, we look at the different tiles as a series of images. We aim to transform this series of images into one score in $[0,1]$. You can choose between multiple approaches here. One, for example, could be an average of the classifications of the different tiles. In your description, you said that this data is often used to build a per-tile score. So you can do the same, and calculate the aggregated score as $g(T)=\frac{1}{N}\sum_{j=1}^{N} f(t_j)$, where $f(t_j)$ is the per-tile score given by model $f$, and $j$ is the tile index running up to $N$. Using the average is just intuitive, but you can define any function you'd like, depending on your domain knowledge. Another approach to dealing with such data is to treat the per-tile images as a time series. Under this approach you'd have to change the way you look at $y$: you can define it as the proportion of malignant cells per person, or a binary variable that gets the value 1 if patient $i$ has more malignant cells than a certain threshold. This new $y_i$ is constant across tiles, and changes only at the subject level, in contrast with $y_{ij}$, which was the target for each tile. This is actually the way [vision transformers](https://en.wikipedia.org/wiki/Vision_transformer) analyze visual data today, and it has proven to be beneficial in terms of predictive capabilities. If you chose to use any of the above methods, you now have one number for each image, and this number reflects your belief regarding person $i$'s risk arising from their image. We can continue to the second stage. [](https://i.stack.imgur.com/6LX2s.png)

2nd stage

At this second stage, we combine the diagnosis from the image with the patient's metadata, just as in real life. How would we do that? Using your favorite model.

A note about `usεr11852`'s suggestion

If you want to increase the weight of the diagnosis of the image mechanically, instead of returning a scalar value as suggested, you can try to return a vector. This requires a somewhat different approach, because here we won't use the predicted value of the sequence of tiles, but rather their representation in some latent space. But since you want to change both axes (you don't want to add $1042$ scalars, nor $N$ scalars), the question that arises is which dimension to reduce, and if both, then how. Sorry, I don't have a quick answer for that.

Caveats

If you use a stacking ensemble (the process described above, where you train a model on part of the data, and another model that takes your first model's output as one of its inputs), then you MUST NOT USE the same observations for both stages. That is, if you choose to train a tile-level classifier or a time-series model on those tiles that will return a single score for an image, then you should exclude those patients from the second stage. Otherwise, the score's importance will be strongly biased upward, as the score is expected to be correlated with the dependent variable by construction (models ALWAYS overfit to some degree, but that is a discussion for another time). Good luck!
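A schematic numpy sketch of the two stages described above (all names and numbers hypothetical; any classifier can stand in for the per-tile scorer $f$ and for the stage-2 model):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stage 1: per-tile scores f(t_j) for each patient's tiles, aggregated by mean
def aggregate_tile_scores(tile_scores):
    """g(T) = (1/N) * sum_j f(t_j): collapse a variable-length tile series
    into a single image-level score in [0, 1]."""
    return float(np.mean(tile_scores))

n_patients = 8
# Each patient has a different number of tiles (scores here are random stand-ins)
tile_scores = [rng.uniform(0, 1, size=rng.integers(3, 12)) for _ in range(n_patients)]
image_score = np.array([aggregate_tile_scores(s) for s in tile_scores])

# Stage 2: combine the scalar image score with tabular clinical features X_i
X_clinical = rng.normal(size=(n_patients, 4))     # e.g. age, lab values (hypothetical)
X_stage2 = np.column_stack([X_clinical, image_score])

print(X_stage2.shape)   # one row per patient, clinical columns + image score
```

The stage-2 design matrix has one row per patient regardless of how many tiles each image had, which is the whole point of the aggregation step; the held-out-patients caveat above applies to whatever model produces `tile_scores`.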
null
CC BY-SA 4.0
null
2023-03-24T19:57:57.407
2023-03-24T19:57:57.407
null
null
285927
null
610645
2
null
610623
5
null
Along with the positions of the amino acids, protein folding can also be described by the angles in the peptide bonds such as used in a [Ramachandran plot](https://en.m.wikipedia.org/wiki/Ramachandran_plot), and with global values such as the radius of gyration. You could apply all those values together in a single model such as is being done with [QSAR](https://en.m.wikipedia.org/wiki/Quantitative_structure%E2%80%93activity_relationship) models to predict physiochemical properties of molecules.
null
CC BY-SA 4.0
null
2023-03-24T20:17:23.603
2023-03-24T20:17:23.603
null
null
164061
null
610646
2
null
419279
0
null
The `insight` package allows you to get all variance components using: ``` get_variance(model1) ``` and the random-intercept variance component using: ``` get_variance(model1, component = "intercept") ```
null
CC BY-SA 4.0
null
2023-03-24T20:23:08.020
2023-03-24T20:23:08.020
null
null
237261
null
610647
2
null
610642
3
null
For $0<\alpha <1$, a $100(1-\alpha)\%$ credible set for the parameter $\theta$ based on data $x$ is a subset $C\subset\Theta$ such that $$ P\{\theta\in C\mid X = x\} = 1-\alpha. $$ Usually, $C$ is taken to be an interval. Indeed, in the case of $\theta$ being a continuous random variable, letting $\theta^{(1)}, \theta^{(2)}$ be the $100\alpha_1\%$ and $100(1-\alpha_2)\%$ quantiles with $\alpha_1+\alpha_2=\alpha$, $C = [\theta^{(1)}, \theta^{(2)}]$ is such an interval since $P\{\theta\in C\mid X=x\} = 1-\alpha.$ Usually equal-tailed intervals are chosen, so $\alpha_1=\alpha_2=\alpha/2$. However, as you noted, the equal-tailed credible interval need not have the smallest size, namely length or area or volume, whichever is appropriate. For that, one needs an HPD (Highest Posterior Density) interval. Formally, an HPD is defined as follows. Let $\theta$ have a unimodal posterior density. Then the HPD interval for $\theta$ is the interval $$ C = \{\theta:\pi(\theta\mid X=x)\geq k\}, $$ where $k$ is chosen such that $$ P\{\theta\in C \mid X=x\} = 1-\alpha. $$ What is nice about HPDs is that they easily generalize to a vector $\theta$. In such a case, we talk about HPD credible sets. In the multivariate case, indeed, credible sets based on quantiles are not easily adapted. Clearly, for symmetric posterior distributions, HPD and quantile-based credible intervals coincide. Anyway, whatever the shape of the posterior distribution, > credible intervals are easier to compute than HPDs and > HPDs have a smaller size than their quantile-based counterparts are IMHO the only compelling reasons for their respective choices in practical use.
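A sketch of both intervals computed from posterior draws (a skewed gamma stand-in for a posterior; the sliding window over sorted draws is the usual sample-based HPD approximation):

```python
import numpy as np

rng = np.random.default_rng(8)
draws = np.sort(rng.gamma(shape=2.0, scale=1.0, size=20_000))  # skewed "posterior"

alpha = 0.05
n = len(draws)
m = int(np.floor((1 - alpha) * n))        # number of draws inside the interval

# Equal-tailed (percentile) interval
pi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])

# HPD: the narrowest window of m consecutive sorted draws
widths = draws[m:] - draws[:n - m]
k = int(np.argmin(widths))
hpd = np.array([draws[k], draws[k + m]])

print(pi[1] - pi[0], hpd[1] - hpd[0])
```

On a right-skewed posterior like this one, the HPD shifts toward the mode and comes out strictly narrower than the equal-tailed interval; on a symmetric posterior the two essentially coincide.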
null
CC BY-SA 4.0
null
2023-03-24T20:23:45.503
2023-03-24T20:23:45.503
null
null
56940
null
610648
2
null
610626
2
null
When you put `doctor_ID` in as a covariate, you have a fixed-effects regression. In contrast, `(1 | doctor_ID)` means that you added a random constant for each doctor in your sample.

Different Mechanical Effect

Fixed effects assume the doctor's identity has a fixed effect on the outcome variable. In other words, the values of the fixed effect do not vary randomly across the observations in the dataset. Random effects, on the other hand, assume that the doctor's identity has a random effect on the outcome variable: the values of the random effect vary randomly across the observations in the dataset. For example, if we are studying the effect of different hospitals on patient outcomes, the hospital where the patient receives treatment would be considered a random effect because it varies randomly across patients in the study.

When to use each Type of Effect?

Fixed effects are used when the factors of interest are explicitly chosen by the experimenter and are considered to be a fixed part of the experimental design. Fixed effects are appropriate when the goal is to estimate the effects of specific treatments or interventions, such as different doses of a drug or different types of fertilizer. In such cases, the fixed effects capture the systematic variation in the data due to the treatment or intervention. Random effects are used when the factors of interest are not explicitly chosen by the experimenter and are considered to be a random sample from a larger population. Random effects are appropriate when the goal is to estimate the variability among a larger population of similar units, such as in your case.
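One concrete consequence of the difference: random intercepts are shrunk toward the grand mean, while per-doctor dummy coefficients are not. A numpy sketch of the Gaussian analogue (the logistic case behaves similarly in spirit; the shrinkage factor below assumes the variance components are known, which real mixed-model software estimates instead):

```python
import numpy as np

rng = np.random.default_rng(9)

n_doctors, n_per = 20, 10
tau, sigma = 1.0, 2.0                       # between-doctor sd, within-doctor sd

doctor_effect = rng.normal(0, tau, n_doctors)
y = doctor_effect[:, None] + rng.normal(0, sigma, (n_doctors, n_per))

raw_means = y.mean(axis=1)                  # what fixed doctor dummies estimate

# Random-intercept (empirical-Bayes) estimates shrink raw means toward the
# grand mean by the factor tau^2 / (tau^2 + sigma^2 / n_j)
shrink = tau**2 / (tau**2 + sigma**2 / n_per)
eb_means = y.mean() + shrink * (raw_means - y.mean())

print(raw_means.var(), eb_means.var())
```

The shrinkage is strongest for doctors with few tests, which is exactly why the random-effect formulation is preferred when doctors are a sample from a larger population rather than treatments of direct interest.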
null
CC BY-SA 4.0
null
2023-03-24T20:42:09.913
2023-03-24T20:42:09.913
null
null
285927
null
610649
1
null
null
1
109
I have a sample $X = \{\mathbf{x}_i\}$, where each $\mathbf{x}_i$ has a set of discrete features $x_{j}$ and a value $y_i$. I'm interested in the distribution of the sample mean, $\bar{\mathbf{y}}$ - but wait, there's a catch! I know the conditional probability distributions over each feature - or rather, for each feature $x_j$ taking some value $k$, I know the population subgroup means $\mu(j,k)$, and I can assume that the value statistic is Poisson distributed: $y \sim Poisson(\mu(j,k))$ However, I know nothing about the joint distributions of the features - for example, $P(y|x_1=a, x_2=b)$. Given a sample with a known breakdown over each feature (i.e. $|\{\mathbf{x}_i : x_{j} = a\}|$ is known for all $j$), but no information about the size of the intersections between features, what can I say about the distribution of the sample mean $\mathbf{\bar{y}}$? How can I define it based on the various $\mu(j,k)$? To give a concrete example: let's say I'm interested in the number of sexual partners per person, averaged over a sample. I know the average number of partners for men, and the average number for women. I also know the average number of partners for people living in rural vs urban areas, but I do not know the average for urban-dwelling men, etc. I have a sample of size $n$. I know that the sample contains $a$ men and $b$ women, where $a+b=n$; and $x$ rural and $y$ urban dwellers, where $x+y=n$. I do not know how many urban men, rural women etc. are in the sample. I want to describe the sampling distribution of the sample mean. EDIT: Here's an explanation based on my actual problem. The statistic I'm interested in is absence from work, in days per person. The group means are from results published by the national statistics office (the ONS). The ONS publish breakdowns (by sex, age bracket, industry sector, region, full-time vs part-time, etc.) and a small number of intersections where they are confident in the sample size (e.g. by sex and age bracket). 
I'm interested in a sample with a known feature-breakdown - e.g. a random group of 30 people, of whom a) 12 are men and 18 are women; b) 5 are aged 16-24, 15 are aged 25-34, and 10 are aged 35-49; etc. What can I say about the sampling distribution of the sample mean - i.e. the distribution of days of absence per person, averaged over the 30 people - in terms of the means of the various breakdowns?
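Not an answer, but one way to explore the setup numerically. All numbers below are hypothetical, and the additive rule used to combine the two group means into a per-person rate is a strong assumption made purely for illustration:

```r
set.seed(10)
# Hypothetical group means (days of absence) and marginal counts
mu_sex  <- c(men = 3, women = 5)
mu_area <- c(rural = 2, urban = 6)
n_men <- 12; n_women <- 18; n_rural <- 14; n_urban <- 16
n <- 30

sim_mean <- function() {
  sex  <- rep(c("men", "women"), c(n_men, n_women))
  area <- sample(rep(c("rural", "urban"), c(n_rural, n_urban)))  # random intersection
  rate <- (mu_sex[sex] + mu_area[area]) / 2   # assumed additive combination rule
  mean(rpois(n, rate))
}
xbar <- replicate(5000, sim_mean())
c(mean = mean(xbar), sd = sd(xbar))
```

Note that under this additive rule the total rate is fixed by the marginals alone, and a sum of independent Poissons is Poisson with the summed rate, so the unknown intersections do not affect the distribution of the sample mean at all; they would matter under a non-additive (e.g., multiplicative) combination rule.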
Distribution of a sample drawn from distinct Poisson distributions
CC BY-SA 4.0
null
2023-03-24T20:44:41.950
2023-03-28T21:51:45.730
2023-03-28T21:51:45.730
205208
205208
[ "distributions", "mean", "conditional-probability", "central-limit-theorem" ]
610651
1
null
null
0
11
I am not well versed in statistics, so please bear with me. I will try to explain things as well as I can. I have 256 variables, and for each variable I have a before $(\mu_i, \sigma_i, N_i)$ and an after $(\mu_f, \sigma_f, N_f)$. I want to detect, for each variable, if the mean value has changed more than a given threshold value $\Delta$. I am performing an equivalence TOST test as follows: Welch's t-statistics: $$ t_L = \frac{\mu_f - \mu_i + \Delta}{\sqrt{\frac{\sigma_i^2}{N_i} + \frac{\sigma_f^2}{N_f}}} $$ $$ t_U = \frac{\mu_f - \mu_i - \Delta}{\sqrt{\frac{\sigma_i^2}{N_i} + \frac{\sigma_f^2}{N_f}}} $$ Then I get two p-values: one from integrating the t-distribution over $[t_L, \infty)$, the second by integrating over $(-\infty, t_U]$. I keep the biggest value as the p-value for my test. I perform this test on all 256 variables, and I assume that all those with a p-value less than 0.05 are equivalent within $\Delta$; then, those with p-values greater than or equal to 0.05 are taken to have changed significantly. I am not a statistician, so I wrote a simulation taking into account correlation between my variables. Results so far indicate that this procedure makes sense, but I am having trouble deciding what to present as the results of the simulation to convince my peers about this procedure. My thoughts so far: - Run multiple simulations where I change only 1 variable (at random) by exactly $\Delta + 1$. Then I use the average $\frac{\text{false discoveries}}{\text{discoveries}}$ as my FDR. - Run multiple simulations where I change all 256 variables by exactly $\Delta + 1$. Then I use $\frac{\text{true discoveries}}{256}$ as the power of my test. - Get a colormap where the x-axis is the number of variables changed (between 0 and 256), and the y-axis is the amount by which they change (from 0 to $\Delta$). The color represents the number of discoveries (which in this case are all false because they are still within $\Delta$).
I am getting very confused about how to report the results of my simulations. Does the above make sense? I guess that I am assuming that the power and FDR are better for changes much larger than $\Delta$, hence it is OK to just show results at this boundary. In case it matters, my $\sigma_s\approx 200$, $N\approx 30000$, and I want to find changes above $\Delta =75$. My simulation is giving me a power above 95-97% and an FDR of virtually 0 (using the definitions in 1 and 2 above). Maybe this makes sense given how large my sample sizes are, but the more I think about this, the more confused I get.
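For reference, the procedure described above can be sketched in R as follows. The Welch-Satterthwaite degrees of freedom are my assumption, since the question does not state which degrees of freedom are used:

```r
tost_p <- function(mu_i, s_i, n_i, mu_f, s_f, n_f, delta) {
  se <- sqrt(s_i^2 / n_i + s_f^2 / n_f)
  # Welch-Satterthwaite degrees of freedom (an assumption, not from the post)
  df <- se^4 / ((s_i^2 / n_i)^2 / (n_i - 1) + (s_f^2 / n_f)^2 / (n_f - 1))
  t_L <- (mu_f - mu_i + delta) / se
  t_U <- (mu_f - mu_i - delta) / se
  max(pt(t_L, df, lower.tail = FALSE),  # H0: difference <= -delta
      pt(t_U, df))                      # H0: difference >= +delta
}
tost_p(100, 200, 30000, 100, 200, 30000, 75)   # tiny: equivalence concluded
tost_p(100, 200, 30000, 250, 200, 30000, 75)   # near 1: not equivalent
```

With $N\approx 30000$ and $\sigma\approx 200$, the standard error of the difference is only about 1.6, which is consistent with the near-perfect power you observe at the $\Delta$ boundary.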
How to report simulation results on TOST procedure for multiple comparisons
CC BY-SA 4.0
null
2023-03-24T21:12:03.190
2023-03-24T21:12:03.190
null
null
375517
[ "multiple-comparisons", "reporting", "equivalence", "tost" ]
610652
2
null
609680
2
null
Say that $D_{it}\in\{0,1,\ldots,K\}$ is a categorical variable denoting whether unit $i$ at time $t$ got treatment $k=0,1,\ldots,K,$ where 0 denotes the absence of treatment (control group). The answer to your question is yes, if by difference-in-differences you mean estimating via least squares the regression \begin{align*} Y_{it} = \alpha_i + \lambda_t + \sum_{\ell=1}^K\tau_\ell D^{\ell}_{it} + \epsilon_{it},\quad i\in[N],t\in[T] \end{align*} where $Y_{it}$ is the outcome of interest, $\alpha_i$ and $\lambda_t$ are unit and time fixed effects, and $D_{it}^\ell=\mathbb{1}(D_{it}=\ell)$. You can add control variables if you feel you need to. The answer to your question is it depends, if on top of the statistical model described above you want to add causal conclusions on the effect of the various treatments on your outcome of interest. If this is the case, there is no one-size-fits-all strategy and it really is very context-dependent. The answer would depend on how you define the various treatment levels, what your sample looks like, what your units are, how big (i.e., aggregate) your crisis shocks are, and so on. However, this does not mean that there is no hope. The causal inference literature has recently been flooded with papers on the topic. Here is a (definitely non-exhaustive) list of some relevant papers for your case: - Goldsmith-Pinkham, Hull, and Kolesar (2023) - de Chaisemartin and D'Haultfoeuille (2022) - Sun and Abraham (2022) PS (feel free to ignore this): My hunch is that, even ignoring all the problems mentioned by the papers above, in a setting like the one you describe, it is really unlikely for the parallel trends assumption to hold. Your treatment assignment process (i.e., whether a crisis happens) is very likely to be endogenous. In other words, certain units might have higher chances of experiencing a crisis, and this might be related to their potential outcomes in ways that violate parallel trends.
Two other, subtler assumptions, quite often ignored in DiD settings, that are likely to be violated are: - SUTVA (the Stable Unit Treatment Value Assumption) would be violated if your shocks are too big/aggregate. SUTVA requires unit $i$'s treatment status to not affect unit $j$'s ($i\neq j$) potential outcomes. This is extremely unlikely if your crises are systemic or at the local labor market level, for example. It is much more plausible if you are working with small firms, for example, and your definition of crisis is related, say, to the outstanding debt of the company. Again, context matters. - The no-anticipation assumption is violated if the treatment is endogenous or somehow foreseeable by your units. Again, this is very unlikely to hold if you are studying some sluggish/slow-moving phenomenon.
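A quick simulated sketch of the regression above, with hypothetical effect sizes and a single adoption date for all treated units (so none of the staggered-timing issues from the cited papers arise):

```r
set.seed(2)
# Simulated panel: 100 units, 8 periods; treatments k = 1,2 switch on at t = 5
N <- 100; Tt <- 8
d <- expand.grid(i = 1:N, t = 1:Tt)
k_unit <- sample(0:2, N, replace = TRUE)     # unit-level treatment arm (0 = control)
d$k <- ifelse(d$t >= 5, k_unit[d$i], 0)
alpha <- rnorm(N); lambda <- rnorm(Tt)
tau <- c(0, 1.5, -0.8)                       # hypothetical true effects
d$y <- alpha[d$i] + lambda[d$t] + tau[d$k + 1] + rnorm(nrow(d), sd = 0.3)

# Two-way fixed effects with one dummy per treatment level
fit <- lm(y ~ factor(i) + factor(t) + factor(k), data = d)
coef(fit)[c("factor(k)1", "factor(k)2")]     # recovers roughly 1.5 and -0.8
```

With common adoption timing and random assignment, the two-way fixed effects estimator recovers the treatment-level effects; the caveats in the answer concern exactly the settings where these conveniences fail.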
null
CC BY-SA 4.0
null
2023-03-24T21:16:12.610
2023-03-24T21:16:12.610
null
null
135461
null
610653
2
null
610568
0
null
Working in the log scale as David Smith recommends is often reasonable in biology and biochemistry, if the observations all have positive values. With $\log(Y/X)=\log Y - \log X$, the formula for the [variance of a weighted sum of correlated variables](https://en.wikipedia.org/wiki/Variance#Weighted_sum_of_variables) means that the variance of the log of a ratio is simply: $$ \text{Var} \left(\log \frac{Y}{X}\right)=\text{Var}(\log Y) + \text{Var}(\log X) - 2\text{Cov} (\log Y, \log X),$$ where $\text{Cov}$ is the covariance. The covariance between paired measurements in ratios like yours can give more precise estimates of effects than you might get with unpaired measurements. One first caution: when you start with a log transform of your outcome observations, you are modeling the means of their logs, which isn't the same as the log of their means. A standard error in the log scale doesn't translate back to a single value in the original scale, although confidence intervals can have reasonable interpretations. That's all OK if you and your audience understand that's what you're doing. A second caution: you almost certainly didn't count each of 60 million cells in an inoculate, or each of 4 million remaining in the plate after an hour. Those are presumably estimates based on a much smaller number of counts of some sample of the inoculate and of the plate. You usually want to work as closely as possible in the scales of the original observations, so that you can better estimate the errors inherent in the observations. In particular, observations of counts typically have a Poisson distribution, with the variance equal to the mean. A count of $100$ thus has an inherent Poisson standard error of $\pm 10$, or 10% error. If your per-inoculate or per-plate observations are based on much larger counts (on the order of 1000 or so) then that won't be a big problem, but you should be aware of this in general. 
A third caution: you should re-think how you handle the standard errors: > Once I have a mean and error for S1 and S2 for each timepoint, I then want to examine whether these means are significantly different between strains, ideally at each timepoint - in essence, is one strain better are surviving on that surface after a given time. How would I then go about that? For that you don't typically want to work with means and standard errors for each timepoint individually. There's a pretty large error in the estimate of the standard error from any small number of observations. You usually use your data more efficiently if you analyze your combined data in a single model to get a pooled estimate of the error. With your paired observations of initial $x$ values and final $y$ values, you might set up a regression model that includes all of your data. There would be one data row per inoculate/plate pair, including the $x$ and $y$ values and annotations of the `strain` and the `surfaceType`. The regression model could use $(\log y - \log x)$ as the outcome, with `strain`, `surfaceType`, and their interaction as predictors. That would allow for different sensitivities of the strains to the various surfaces. Tests of significance would be based on a pooled estimate of the error not accounted for by strain or plate type. Similar models can be extended to multiple time points. Post-modeling tools like those in the R [emmeans package](https://cran.r-project.org/package=emmeans) can use the results of the regression model to evaluate the means and confidence intervals for any estimates of particular conditions, or differences between estimates. They can translate back out of the log scale if you wish, and properly take into account the [multiple comparisons](https://en.wikipedia.org/wiki/Multiple_comparisons_problem) that are inherent in this type of study.
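A minimal sketch of the suggested single-model approach, with simulated paired counts (all numbers, strain labels, and surface types here are hypothetical):

```r
set.seed(4)
# Hypothetical paired counts: x = cells inoculated, y = survivors after 1 h
d <- expand.grid(strain = c("S1", "S2"),
                 surfaceType = c("steel", "plastic"),
                 rep = 1:6)
true_lr <- with(d, ifelse(strain == "S1", -2.7, -2.2) +
                   ifelse(surfaceType == "plastic", 0.4, 0))
d$x <- rpois(nrow(d), 5e4)
d$y <- rpois(nrow(d), d$x * exp(true_lr))    # survivors depend on the paired inoculum

# One model for all the data: log-ratio outcome, pooled error estimate
fit <- lm(log(y / x) ~ strain * surfaceType, data = d)
summary(fit)$coefficients
```

The `strainS2` coefficient recovers the simulated strain difference in log survival (0.5 here), and the pooled residual variance is what tests and confidence intervals are based on; emmeans can then be pointed at `fit` for per-condition estimates and comparisons.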
null
CC BY-SA 4.0
null
2023-03-24T22:10:15.867
2023-03-24T22:10:15.867
null
null
28500
null
610655
1
610763
null
1
23
I am attempting to create a conditional inference tree using `ctree` from the R package `partykit` for a dataset. Unfortunately, the sample size is small and the effects are weak. As the analysis is discovery-driven, we are rather liberal with the significance level and are not doing a Bonferroni correction within ctree. However, this approach increases the risk of false positives. I would like to know whether it would make sense to perform forward selection/prefiltering of predictors before creating the final ctree model. Is this a valid approach, or are there other methods to obtain a robust decision tree? Maybe you also know of papers where similar strategies were tried. Example R code: ``` # Load packages and data library(partykit) library(mlbench) data("BostonHousing2") # Set random seed for reproducibility set.seed(123) # Create a subset of samples and predictors for speed reasons pred_df <- BostonHousing2[1:200, !(colnames(BostonHousing2) %in% c("town", "chas", "cmedv"))] # Build cforest model and get variable importance cforest_model <- cforest(medv ~ ., data = pred_df, ntree = 50) predictor_varimp <- varimp(cforest_model, conditional = TRUE) # Check variable importance and set an arbitrary threshold (will be decided otherwise) dotchart(sort(predictor_varimp)) abline(v = 1, col = 'red') sig_predictors <- names(predictor_varimp[predictor_varimp > 1]) # Create ctree with only important predictors current_ctree_control <- ctree_control(testtype = "Univariate", minbucket = 20, alpha = 0.05) ctree_model_varimp <- ctree(medv ~ ., data = pred_df[, c('medv', sig_predictors)], control = current_ctree_control) plot(ctree_model_varimp) # Create ctree with all predictors ctree_model <- ctree(medv ~ ., data = pred_df, control = current_ctree_control) plot(ctree_model) ```
prefiltering of predictors using cforest for ctree possible?
CC BY-SA 4.0
null
2023-03-24T22:35:48.243
2023-03-26T09:48:11.503
2023-03-24T22:38:34.703
384075
384075
[ "machine-learning", "random-forest", "importance", "party" ]
610656
2
null
610626
0
null
### GLM Fit (No RE) David provides a summary of when you want fixed and random effects. I'd like to add that which one you want is often a matter of practical interest. Consider for your example that you have a simple regression, doctor experience in years and whether or not somebody's cancer goes into remission during treatment. We can test this with the data from [this tutorial.](https://stats.oarc.ucla.edu/r/dae/mixed-effects-logistic-regression/) ``` #### Load Data #### hdp <- read.csv("https://stats.idre.ucla.edu/stat/data/hdp.csv") hdp <- within(hdp, { Married <- factor(Married, levels = 0:1, labels = c("no", "yes")) DID <- factor(DID) HID <- factor(HID) CancerStage <- factor(CancerStage) }) head(hdp) ``` We can then fit a basic logistic regression without considering the doctor covariate. ``` #### Fit GLM #### fit.glm <- glm( remission ~ Experience, family = binomial, data = hdp ) summary(fit.glm) ``` The results show that experience seems to have a significant impact on remission rates, though it doesn't seem to be a large effect. ``` Call: glm(formula = remission ~ Experience, family = binomial, data = hdp) Deviance Residuals: Min 1Q Median 3Q Max -1.1909 -0.8690 -0.7568 1.3757 1.9203 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -2.320558 0.111349 -20.84 <2e-16 *** Experience 0.081116 0.005985 13.55 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 10353 on 8524 degrees of freedom Residual deviance: 10164 on 8523 degrees of freedom AIC: 10168 Number of Fisher Scoring iterations: 4 ``` What if we entered doctors into the regression? Warning: running this will take a while and the results are likely not even accurate. ``` #### Fit GLM #### fit.glm.did <- glm( remission ~ Experience + DID, family = binomial, data = hdp ) ``` We now have a massive regression summary: more than 400 coefficients that we now have to interpret separately.
``` Call: glm(formula = remission ~ Experience + DID, family = binomial, data = hdp) Deviance Residuals: Min 1Q Median 3Q Max -2.2300 -0.6685 -0.2522 0.5660 2.6551 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -5.287e+10 1.099e+12 -0.048 0.962 Experience 2.115e+09 4.395e+10 0.048 0.962 DID2 2.326e+10 4.834e+11 0.048 0.962 DID3 1.269e+10 2.637e+11 0.048 0.962 DID4 1.480e+10 3.076e+11 0.048 0.962 DID5 1.692e+10 3.516e+11 0.048 0.962 DID6 2.326e+10 4.834e+11 0.048 0.962 DID7 1.903e+10 3.955e+11 0.048 0.962 DID8 8.459e+09 1.758e+11 0.048 0.962 DID9 2.326e+10 4.834e+11 0.048 0.962 DID10 1.480e+10 3.076e+11 0.048 0.962 DID11 3.172e+10 6.592e+11 0.048 0.962 DID12 4.229e+09 8.790e+10 0.048 0.962 DID13 2.326e+10 4.834e+11 0.048 0.962 DID14 4.229e+09 8.790e+10 0.048 0.962 DID15 2.538e+10 5.274e+11 0.048 0.962 DID16 2.326e+10 4.834e+11 0.048 0.962 DID17 2.115e+10 4.395e+11 0.048 0.962 DID18 1.480e+10 3.076e+11 0.048 0.962 DID19 8.459e+09 1.758e+11 0.048 0.962 DID20 6.344e+09 1.318e+11 0.048 0.962 DID21 8.459e+09 1.758e+11 0.048 0.962 DID22 6.344e+09 1.318e+11 0.048 0.962 DID23 2.538e+10 5.274e+11 0.048 0.962 DID24 1.269e+10 2.637e+11 0.048 0.962 DID25 1.269e+10 2.637e+11 0.048 0.962 DID26 2.961e+10 6.153e+11 0.048 0.962 DID27 6.344e+09 1.318e+11 0.048 0.962 DID28 3.383e+10 7.032e+11 0.048 0.962 DID29 8.459e+09 1.758e+11 0.048 0.962 DID30 2.961e+10 6.153e+11 0.048 0.962 DID31 4.229e+09 8.790e+10 0.048 0.962 DID32 8.459e+09 1.758e+11 0.048 0.962 DID33 1.903e+10 3.955e+11 0.048 0.962 DID34 2.115e+10 4.395e+11 0.048 0.962 DID35 1.692e+10 3.516e+11 0.048 0.962 DID36 1.057e+10 2.197e+11 0.048 0.962 DID37 1.269e+10 2.637e+11 0.048 0.962 DID38 2.961e+10 6.153e+11 0.048 0.962 DID39 1.903e+10 3.955e+11 0.048 0.962 DID40 4.229e+09 8.790e+10 0.048 0.962 DID41 8.459e+09 1.758e+11 0.048 0.962 DID42 1.480e+10 3.076e+11 0.048 0.962 DID43 2.749e+10 5.713e+11 0.048 0.962 DID44 2.538e+10 5.274e+11 0.048 0.962 DID45 6.344e+09 1.318e+11 0.048 0.962 DID46 2.749e+10 5.713e+11 0.048 
0.962 DID47 8.459e+09 1.758e+11 0.048 0.962 DID48 4.229e+09 8.790e+10 0.048 0.962 DID49 2.115e+10 4.395e+11 0.048 0.962 DID50 -4.229e+09 8.790e+10 -0.048 0.962 DID51 8.459e+09 1.758e+11 0.048 0.962 DID52 1.269e+10 2.637e+11 0.048 0.962 DID53 1.692e+10 3.516e+11 0.048 0.962 DID54 3.595e+10 7.471e+11 0.048 0.962 DID55 8.459e+09 1.758e+11 0.048 0.962 DID56 2.115e+10 4.395e+11 0.048 0.962 DID57 8.459e+09 1.758e+11 0.048 0.962 DID58 8.459e+09 1.758e+11 0.048 0.962 DID59 1.057e+10 2.197e+11 0.048 0.962 DID60 1.903e+10 3.955e+11 0.048 0.962 DID61 2.538e+10 5.274e+11 0.048 0.962 DID62 2.749e+10 5.713e+11 0.048 0.962 DID63 1.480e+10 3.076e+11 0.048 0.962 DID64 4.229e+09 8.790e+10 0.048 0.962 DID65 1.692e+10 3.516e+11 0.048 0.962 DID66 1.903e+10 3.955e+11 0.048 0.962 DID67 2.538e+10 5.274e+11 0.048 0.962 DID68 2.326e+10 4.834e+11 0.048 0.962 DID69 2.115e+10 4.395e+11 0.048 0.962 DID70 2.538e+10 5.274e+11 0.048 0.962 DID71 1.480e+10 3.076e+11 0.048 0.962 DID72 2.115e+09 4.395e+10 0.048 0.962 DID73 2.326e+10 4.834e+11 0.048 0.962 DID74 2.326e+10 4.834e+11 0.048 0.962 DID75 2.115e+10 4.395e+11 0.048 0.962 DID76 1.269e+10 2.637e+11 0.048 0.962 DID77 1.480e+10 3.076e+11 0.048 0.962 DID78 2.749e+10 5.713e+11 0.048 0.962 DID79 6.344e+09 1.318e+11 0.048 0.962 DID80 8.459e+09 1.758e+11 0.048 0.962 DID81 1.269e+10 2.637e+11 0.048 0.962 DID82 2.115e+10 4.395e+11 0.048 0.962 DID83 1.480e+10 3.076e+11 0.048 0.962 DID84 2.326e+10 4.834e+11 0.048 0.962 DID85 1.269e+10 2.637e+11 0.048 0.962 DID86 1.057e+10 2.197e+11 0.048 0.962 DID87 2.115e+10 4.395e+11 0.048 0.962 DID88 2.326e+10 4.834e+11 0.048 0.962 DID89 8.459e+09 1.758e+11 0.048 0.962 DID90 1.903e+10 3.955e+11 0.048 0.962 DID91 1.692e+10 3.516e+11 0.048 0.962 DID92 8.459e+09 1.758e+11 0.048 0.962 DID93 2.538e+10 5.274e+11 0.048 0.962 DID94 6.344e+09 1.318e+11 0.048 0.962 DID95 2.115e+09 4.395e+10 0.048 0.962 DID96 6.344e+09 1.318e+11 0.048 0.962 DID97 1.692e+10 3.516e+11 0.048 0.962 DID98 2.115e+10 4.395e+11 0.048 0.962 DID99 2.326e+10 
4.834e+11 0.048 0.962 DID100 2.749e+10 5.713e+11 0.048 0.962 DID101 2.749e+10 5.713e+11 0.048 0.962 DID102 1.057e+10 2.197e+11 0.048 0.962 DID103 1.692e+10 3.516e+11 0.048 0.962 DID104 2.115e+10 4.395e+11 0.048 0.962 DID105 1.903e+10 3.955e+11 0.048 0.962 DID106 1.057e+10 2.197e+11 0.048 0.962 DID107 6.344e+09 1.318e+11 0.048 0.962 DID108 8.459e+09 1.758e+11 0.048 0.962 DID109 2.115e+09 4.395e+10 0.048 0.962 DID110 1.480e+10 3.076e+11 0.048 0.962 DID111 1.057e+10 2.197e+11 0.048 0.962 DID112 1.903e+10 3.955e+11 0.048 0.962 DID113 1.381e+01 7.105e+02 0.019 0.984 DID114 2.115e+09 4.395e+10 0.048 0.962 DID115 1.480e+10 3.076e+11 0.048 0.962 DID116 4.229e+09 8.790e+10 0.048 0.962 DID117 1.480e+10 3.076e+11 0.048 0.962 DID118 1.480e+10 3.076e+11 0.048 0.962 DID119 1.480e+10 3.076e+11 0.048 0.962 DID120 8.459e+09 1.758e+11 0.048 0.962 DID121 4.229e+09 8.790e+10 0.048 0.962 DID122 2.115e+10 4.395e+11 0.048 0.962 DID123 1.057e+10 2.197e+11 0.048 0.962 DID124 2.115e+10 4.395e+11 0.048 0.962 DID125 1.269e+10 2.637e+11 0.048 0.962 DID126 2.749e+10 5.713e+11 0.048 0.962 DID127 1.692e+10 3.516e+11 0.048 0.962 DID128 1.346e+01 7.105e+02 0.019 0.985 DID129 1.692e+10 3.516e+11 0.048 0.962 DID130 2.115e+09 4.395e+10 0.048 0.962 DID131 1.692e+10 3.516e+11 0.048 0.962 DID132 8.459e+09 1.758e+11 0.048 0.962 DID133 1.692e+10 3.516e+11 0.048 0.962 DID134 1.903e+10 3.955e+11 0.048 0.962 DID135 1.480e+10 3.076e+11 0.048 0.962 DID136 3.172e+10 6.592e+11 0.048 0.962 DID137 2.115e+10 4.395e+11 0.048 0.962 DID138 1.269e+10 2.637e+11 0.048 0.962 DID139 1.269e+10 2.637e+11 0.048 0.962 DID140 1.057e+10 2.197e+11 0.048 0.962 DID141 2.115e+10 4.395e+11 0.048 0.962 DID142 2.326e+10 4.834e+11 0.048 0.962 DID143 1.692e+10 3.516e+11 0.048 0.962 DID144 -8.459e+09 1.758e+11 -0.048 0.962 DID145 1.903e+10 3.955e+11 0.048 0.962 DID146 1.480e+10 3.076e+11 0.048 0.962 DID147 1.269e+10 2.637e+11 0.048 0.962 DID148 1.903e+10 3.955e+11 0.048 0.962 DID149 2.326e+10 4.834e+11 0.048 0.962 DID150 4.229e+09 
8.790e+10 0.048 0.962 DID151 3.172e+10 6.592e+11 0.048 0.962 DID152 1.480e+10 3.076e+11 0.048 0.962 DID153 1.269e+10 2.637e+11 0.048 0.962 DID154 4.229e+09 8.790e+10 0.048 0.962 DID155 1.480e+10 3.076e+11 0.048 0.962 DID156 1.903e+10 3.955e+11 0.048 0.962 DID157 3.172e+10 6.592e+11 0.048 0.962 DID158 2.538e+10 5.274e+11 0.048 0.962 DID159 2.749e+10 5.713e+11 0.048 0.962 DID160 1.480e+10 3.076e+11 0.048 0.962 DID161 2.326e+10 4.834e+11 0.048 0.962 DID162 2.326e+10 4.834e+11 0.048 0.962 DID163 3.383e+10 7.032e+11 0.048 0.962 DID164 1.057e+10 2.197e+11 0.048 0.962 DID165 1.057e+10 2.197e+11 0.048 0.962 DID166 1.692e+10 3.516e+11 0.048 0.962 DID167 2.326e+10 4.834e+11 0.048 0.962 DID168 1.236e+01 7.105e+02 0.017 0.986 DID169 1.057e+10 2.197e+11 0.048 0.962 DID170 1.692e+10 3.516e+11 0.048 0.962 DID171 1.692e+10 3.516e+11 0.048 0.962 DID172 1.057e+10 2.197e+11 0.048 0.962 DID173 1.903e+10 3.955e+11 0.048 0.962 DID174 2.538e+10 5.274e+11 0.048 0.962 DID175 8.459e+09 1.758e+11 0.048 0.962 DID176 1.480e+10 3.076e+11 0.048 0.962 DID177 1.480e+10 3.076e+11 0.048 0.962 DID178 2.115e+10 4.395e+11 0.048 0.962 DID179 1.692e+10 3.516e+11 0.048 0.962 DID180 6.344e+09 1.318e+11 0.048 0.962 DID181 8.459e+09 1.758e+11 0.048 0.962 DID182 1.269e+10 2.637e+11 0.048 0.962 DID183 1.903e+10 3.955e+11 0.048 0.962 DID184 3.383e+10 7.032e+11 0.048 0.962 DID185 1.057e+10 2.197e+11 0.048 0.962 DID186 2.326e+10 4.834e+11 0.048 0.962 DID187 2.115e+10 4.395e+11 0.048 0.962 DID188 2.115e+09 4.395e+10 0.048 0.962 DID189 2.538e+10 5.274e+11 0.048 0.962 DID190 1.903e+10 3.955e+11 0.048 0.962 DID191 1.480e+10 3.076e+11 0.048 0.962 DID192 6.344e+09 1.318e+11 0.048 0.962 DID193 1.903e+10 3.955e+11 0.048 0.962 DID194 1.480e+10 3.076e+11 0.048 0.962 DID195 1.057e+10 2.197e+11 0.048 0.962 DID196 4.229e+09 8.790e+10 0.048 0.962 DID197 1.269e+10 2.637e+11 0.048 0.962 DID198 2.538e+10 5.274e+11 0.048 0.962 DID199 6.344e+09 1.318e+11 0.048 0.962 DID200 1.903e+10 3.955e+11 0.048 0.962 DID201 1.269e+10 2.637e+11 
0.048 0.962 DID202 2.326e+10 4.834e+11 0.048 0.962 DID203 2.538e+10 5.274e+11 0.048 0.962 DID204 6.344e+09 1.318e+11 0.048 0.962 DID205 1.057e+10 2.197e+11 0.048 0.962 DID206 -1.362e+01 9.728e+04 0.000 1.000 DID207 2.115e+10 4.395e+11 0.048 0.962 DID208 1.903e+10 3.955e+11 0.048 0.962 DID209 1.269e+10 2.637e+11 0.048 0.962 DID210 1.269e+10 2.637e+11 0.048 0.962 DID211 1.480e+10 3.076e+11 0.048 0.962 DID212 2.326e+10 4.834e+11 0.048 0.962 DID213 2.115e+10 4.395e+11 0.048 0.962 DID214 2.749e+10 5.713e+11 0.048 0.962 DID215 2.326e+10 4.834e+11 0.048 0.962 DID216 8.459e+09 1.758e+11 0.048 0.962 DID217 1.057e+10 2.197e+11 0.048 0.962 DID218 1.480e+10 3.076e+11 0.048 0.962 DID219 2.538e+10 5.274e+11 0.048 0.962 DID220 1.269e+10 2.637e+11 0.048 0.962 DID221 2.749e+10 5.713e+11 0.048 0.962 DID222 1.903e+10 3.955e+11 0.048 0.962 DID223 1.903e+10 3.955e+11 0.048 0.962 DID224 8.459e+09 1.758e+11 0.048 0.962 DID225 3.383e+10 7.032e+11 0.048 0.962 DID226 2.749e+10 5.713e+11 0.048 0.962 DID227 1.269e+10 2.637e+11 0.048 0.962 DID228 1.692e+10 3.516e+11 0.048 0.962 DID229 1.480e+10 3.076e+11 0.048 0.962 DID230 4.229e+09 8.790e+10 0.048 0.962 DID231 2.538e+10 5.274e+11 0.048 0.962 DID232 1.057e+10 2.197e+11 0.048 0.962 DID233 1.269e+10 2.637e+11 0.048 0.962 DID234 2.326e+10 4.834e+11 0.048 0.962 DID235 1.480e+10 3.076e+11 0.048 0.962 DID236 2.115e+10 4.395e+11 0.048 0.962 DID237 1.057e+10 2.197e+11 0.048 0.962 DID238 2.326e+10 4.834e+11 0.048 0.962 DID239 8.459e+09 1.758e+11 0.048 0.962 DID240 1.480e+10 3.076e+11 0.048 0.962 DID241 1.269e+10 2.637e+11 0.048 0.962 DID242 2.115e+10 4.395e+11 0.048 0.962 DID243 1.903e+10 3.955e+11 0.048 0.962 DID244 2.115e+10 4.395e+11 0.048 0.962 DID245 6.344e+09 1.318e+11 0.048 0.962 DID246 1.903e+10 3.955e+11 0.048 0.962 DID247 6.344e+09 1.318e+11 0.048 0.962 DID248 8.459e+09 1.758e+11 0.048 0.962 DID249 1.057e+10 2.197e+11 0.048 0.962 [ reached getOption("max.print") -- omitted 158 rows ] (Dispersion parameter for binomial family taken to be 1) 
Null deviance: 10352.6 on 8524 degrees of freedom Residual deviance: 6590.8 on 8117 degrees of freedom AIC: 7406.8 Number of Fisher Scoring iterations: 25 ``` ### GLMM Fit (RE Included) What if instead we could just aggregate the variation in doctors and look at their contribution separate from our typical regression summary? This is where something like a mixed model can be really helpful. We can fit one below. For simplicity, I just use a random intercept model and don't include slopes. ``` #### Fit GLMM #### library(lmerTest) fit.glmm <- glmer(remission ~ Experience + (1 | DID), data = hdp, family = binomial) summary(fit.glmm) ``` It seems our estimate has changed now, in that the average effect of experience on remission has increased: ``` Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) [ glmerMod] Family: binomial ( logit ) Formula: remission ~ Experience + (1 | DID) Data: hdp AIC BIC logLik deviance df.resid 7868.6 7889.8 -3931.3 7862.6 8522 Scaled residuals: Min 1Q Median 3Q Max -2.6654 -0.4977 -0.2342 0.4656 4.8474 Random effects: Groups Name Variance Std.Dev. DID (Intercept) 3.439 1.855 Number of obs: 8525, groups: DID, 407 Fixed effects: Estimate Std. Error z value Pr(>|z|) (Intercept) -3.40749 0.45797 -7.440 1.0e-13 *** Experience 0.11319 0.02508 4.513 6.4e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) Experience -0.975 ``` But what happened to all those doctors? We can see from the random-effects summary that the doctor-level intercepts have a standard deviation of 1.855 on the log-odds scale. To show this visually, we can run the following code, which plots doctor intercepts onto a caterpillar plot. ``` library(lattice) dotplot(ranef(fit.glmm)) ``` [](https://i.stack.imgur.com/OaRgT.png) We can see that these 400+ doctors vary a lot. ### Including Patients Including patients here as you mention would be tricky, as it would depend on multiple factors.
The main thing is that the subjects need to have repeated observations for this to work (i.e., they were seen more than once). Additionally, if they were seen by multiple doctors, this could be a case of crossed random effects. We don't have patient ID in this data, but assuming we had one called `PID`, a within-doctors random effect (patients are only seen by one doctor) would look like this: ``` fit.within <- glmer(remission ~ Experience + (1 | DID/PID), data = hdp, family = binomial) ``` And a crossed effects design (patients were seen by multiple doctors) would look like this: ``` fit.crossed <- glmer(remission ~ Experience + (1 | DID) + (1|PID), data = hdp, family = binomial) ```
null
CC BY-SA 4.0
null
2023-03-24T22:49:02.157
2023-03-24T22:49:02.157
null
null
345611
null
610657
1
null
null
0
26
I would like to understand the connection between the sampling distribution of the sample variance and the chi-square distribution.
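For context, the standard result is that for an i.i.d. sample $X_1,\ldots,X_n$ from $N(\mu,\sigma^2)$, the scaled sample variance $(n-1)S^2/\sigma^2$ follows a $\chi^2_{n-1}$ distribution. A quick simulation sketch (the sample size and parameters are arbitrary):

```r
set.seed(1)
n <- 10; sigma <- 2
stat <- replicate(2e4, {
  x <- rnorm(n, mean = 5, sd = sigma)
  (n - 1) * var(x) / sigma^2
})
c(mean(stat), var(stat))   # close to n - 1 = 9 and 2(n - 1) = 18
```

The simulated mean and variance match the chi-square moments with $n-1 = 9$ degrees of freedom.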
The chi square and sample variance
CC BY-SA 4.0
null
2023-03-24T23:57:12.097
2023-03-25T04:00:09.230
2023-03-25T04:00:09.230
362671
382334
[ "distributions", "variance" ]
610658
1
null
null
0
17
I have weekly panel data over a year, measuring sales in two electronic marketplaces (the panel data is unbalanced: products may be added or deleted during this time period from each marketplace). I am interested in measuring the effect of a policy change on Marketplace 1 (which occurred about halfway over the time period) on sales (outcome variable), as well as whether the policy may change the coefficient of certain predictor variables. I have two primary points of confusion: (1) In measuring the causal impact on sales, I was thinking of using a DiD specification (with panel data) like the following: $$ Sales_{it} = \alpha_i + \alpha_t + \delta X_{it} + \beta_1 Marketplace1_i + \beta_2 Post_t \times Marketplace1_i $$ with product and time fixed effects ($\alpha_i, \alpha_t$), time-varying controls $(X_{it})$, a dummy for being listed on Marketplace 1, and the DiD term. However, given that the panel data is unbalanced, would this specification make sense (and would I need to remove products which do not appear in both the pre- and post- periods)? Alternatively, would it be better to just collapse the time periods (treating all observations before and after the treatment respectively as two cross sections)? (2) I am also interested in measuring how the coefficient of a predictor variable may change after the policy intervention. I only have access to the predictor variable of interest for Marketplace 1 ($Variable$). I am a bit confused as to how to approach and model this. Would a fixed effects model which just includes an interaction term like the following be appropriate? $$ Sales_{it} = \alpha_i + \alpha_t + \delta SalesOnMarketPlace2_t + \beta_1 Variable_{it} + \beta_2 Variable_{it} \times Post_{t} $$ where I include the sales on the other marketplace as a control (sales of these products tend to be seasonal). Are there any other potential specifications which could be used to measure the effects described above which I am missing?
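To illustrate the collapsed alternative mentioned in (1), here is a sketch with a simulated unbalanced panel (all numbers hypothetical). Each product is aggregated to one pre- and one post-period observation before running the two-by-two DiD:

```r
set.seed(5)
# Unbalanced weekly panel: products enter and exit at random weeks
n_prod <- 200; policy_week <- 27
d <- do.call(rbind, lapply(1:n_prod, function(i) {
  first <- sample(1:20, 1); last <- sample(33:52, 1)  # all span the policy week
  data.frame(i = i, t = first:last)
}))
d$m1 <- d$i <= 100                          # product listed on Marketplace 1
d$post <- d$t >= policy_week
d$sales <- rnorm(n_prod)[d$i] + 0.05 * d$t + 2 * (d$m1 & d$post) + rnorm(nrow(d))

# Collapse to one pre and one post observation per product
agg <- aggregate(sales ~ i + m1 + post, data = d, FUN = mean)
did <- lm(sales ~ m1 * post, data = agg)
coef(did)["m1TRUE:postTRUE"]                # close to the true effect, 2
```

In this sketch every product appears both pre and post; with genuinely unbalanced entry/exit, products observed on only one side of the policy date contribute nothing to the within-product contrast either way.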
Causal inference—estimating change in coefficient on independent variable
CC BY-SA 4.0
null
2023-03-25T00:36:19.103
2023-03-25T00:49:23.713
2023-03-25T00:49:23.713
351997
351997
[ "regression", "regression-coefficients", "causality", "fixed-effects-model", "difference-in-difference" ]
610659
2
null
610637
2
null
The total deviance is twice the difference in log-likelihood between a "saturated model" and the fitted model. Using notation from Dobson and Barnett's "An Introduction to Generalized Linear Models", the total deviance is equal to $$ D = 2 \left[ l(\mathbf{b}_{\max}; \mathbf{y}) - l(\mathbf{b}; \mathbf{y}) \right] \>.$$ Here, $\mathbf{b}_{\max}$ is a maximum likelihood estimator for the saturated model with true parameter $\boldsymbol{\beta}_{\max}$, and $\mathbf{b}$ is a maximum likelihood estimator for the fitted model. The easiest way to get $\mathbf{b}_{\max}$ is to have 1 parameter per observation, which then just makes each prediction equal to the observation. You can verify all this in R. The `glm` function will compute the deviance for you (listed under residual deviance) ``` library(tidyverse) set.seed(0) N <- 1000 x <- rnorm(N) p <- plogis(-2 + 0.8*x) y <- rbinom(N, 1, p) fit <- glm(y~x,family = binomial()) p_est <- predict(fit, type = 'response') # Deviance D = sum(2*(dbinom(y, 1, y, log = T) - dbinom(y, 1, p_est, log = T))) print(D) #> [1] 739.2842 s <- summary(fit) print(s$deviance) #> [1] 739.2842 ``` Created on 2023-03-24 with [reprex v2.0.2](https://reprex.tidyverse.org) Note that for a given $\mathbf{y}$, $l(\mathbf{b}_{\max}; \mathbf{y})$ is constant, so optimizing $D$ is the same as optimizing the log likelihood. In this sense, the deviance is the same as the loss. Again, easy to verify in R ``` loss<-function(b, X, y){ p <- plogis(X %*% b) ll <- 2*sum(dbinom(y, 1, y, log = T) - dbinom(y, 1, p, log = T)) return(ll) } b <- c(0,0) r <- optim(b, fn=loss, X=model.matrix(~x), y=y) round(r$par, 4) #> [1] -2.0182 0.7623 round(coef(fit), 4) #> (Intercept) x #> -2.0182 0.7622 # Coefficients from optimizing the log-likelihood are the same # As coefficients from optimizing the deviance. ``` Created on 2023-03-24 with [reprex v2.0.2](https://reprex.tidyverse.org) A little algebra goes a long way. 
First, let's note that $\hat{y}$ is the estimated risk from the logistic regression model (it is not a 0 or 1). Let's re-write half the binomial deviance using log rules as $$ y\log(y) + (1-y)\log(1-y) - y \log(\hat{y}) - (1-y)\log(1-\hat{y}) $$ Now, using what we've discussed, $$ \underbrace{y\log(y) + (1-y)\log(1-y)}_{l(\mathbf{b}_{\max}; \mathbf{y})} - \overbrace{\left[ y \log(\hat{y}) + (1-y)\log(1-\hat{y}) \right]}^{l(\mathbf{b}; \mathbf{y})} $$ OK, so what do we know now: - The log-likelihood is the loss for generalized linear models. - The deviance is twice the difference between a constant (the saturated log-likelihood) and the log-likelihood. - We showed this with the binomial deviance (the Poisson is perhaps the next easiest; I suggest you work through it as practice). - This means minimizing the deviance is equivalent to maximizing the log-likelihood. - Hence, deviance and loss should be equivalent insofar as optimizing either leads to the same model. This might be why XGBoost has a different loss function for gamma regression (it uses the log-likelihood as opposed to the deviance). Personally, I find it a bit strange to optimize the deviance as opposed to the log-likelihood directly, but sklearn has made it very clear they are NOT a statistics package, so I'm ultimately not surprised.
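As suggested above, the Poisson case works the same way. Here is a quick sketch (in Python with numpy/scipy rather than R, and using a hypothetical intercept-only model, whose MLE fitted mean is just the sample mean) checking that the deviance formula $2\sum_t [y_t \log(y_t/\hat\mu) - (y_t - \hat\mu)]$ equals twice the saturated-minus-fitted log-likelihood gap:

```python
import numpy as np
from scipy.special import gammaln, xlogy

rng = np.random.default_rng(0)
y = rng.poisson(3.0, size=1000)

# Intercept-only Poisson MLE: the fitted mean is just the sample mean.
mu_hat = y.mean()

def pois_loglik(y, mu):
    # Poisson log-likelihood; xlogy handles the 0*log(0) = 0 convention.
    return np.sum(xlogy(y, mu) - mu - gammaln(y + 1))

ll_sat = pois_loglik(y, y)        # saturated model: one parameter per point
ll_fit = pois_loglik(y, mu_hat)   # fitted model

D_def = 2 * (ll_sat - ll_fit)                            # deviance by definition
D_formula = 2 * np.sum(xlogy(y, y / mu_hat) - (y - mu_hat))  # textbook formula
print(D_def, D_formula)  # the two agree
```

The two quantities agree up to floating-point error, mirroring the binomial check in the R code above.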
null
CC BY-SA 4.0
null
2023-03-25T01:49:53.230
2023-03-25T03:26:03.140
2023-03-25T03:26:03.140
111259
111259
null
610660
1
null
null
0
16
I'm currently testing out models for a research paper I'm doing as an undergraduate in political science. One of my questions of interest was whether rural turnout was higher than urban turnout at local elections in Ireland. My first test with my main explanatory variables produced an adjusted R squared of 0.7207 (higher than expected). I suspected this might be influenced by the fact that, in earlier years, Ireland was more rural than it is today. To try to account for this, I included a fixed effect for year. This caused the adjusted R squared to jump to 0.8739 in the full model and 0.8454 in the projected model. After that, I was toying around with my variables and other fixed effects and implemented a two-way model with one fixed effect for year and another for county. Suddenly, the projected R squared falls below zero to -0.003231. After this I tried the model with a fixed effect for county only, and it bounced back to 0.23, which is a more realistic value. Nonetheless, is the joint fixed effects model saying that, when county and year are accounted for, none of the variation originally captured by the rural variable is useful in predicting turnout, and in fact its presence makes the model worse? More generally, I'd appreciate an explanation as to what exactly the models are telling me. I was not expecting such a high R squared to begin with and such a low one to end with.
Two way fixed effects proj r squared question
CC BY-SA 4.0
null
2023-03-25T01:02:28.440
2023-03-25T03:04:27.563
2023-03-25T03:04:27.563
11887
null
[ "r", "ggplot2" ]
610661
1
null
null
1
9
I am new to Bayesian statistics. I have a basic idea about priors, posteriors and likelihood. I have a problem to model using Bayesian statistics and apply in Python (this is a side project; I do understand basic Bayesian statistics). I have a sequential data set of students attempting questions, [0,1,1,0,0,m,1,1,0,m,0,1], where 0 = wrong answer, 1 = correct answer, and m = student refers to the study material. I want to model the impact of the study materials (the probability of giving a correct answer after referring to the study materials). Let M = referring to the study material after giving a wrong answer, and T = giving a correct answer. Is $P(T \mid M) = P(M \mid T)\, P(T) / P(M)$ the correct approach? Is it correct to calculate $P(T)$ by dividing correct answers by total attempts, or do I need to use a Beta(5,5) distribution? If I should use a Beta distribution, how do I build the model? Again, what I am trying to achieve is: students can refer to the study materials if they give a wrong answer and then retry the question. I want to calculate the probability of giving the correct answer after referring to the study materials.
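To make the setup concrete, here is a minimal conjugate-updating sketch in Python. The encoding of the sequence and the rule "a trial is the attempt immediately following an m" are assumptions for illustration only:

```python
from scipy.stats import beta

# Hypothetical encoding of the sequence from the question:
# 0 = wrong, 1 = correct, "m" = referred to study material.
seq = [0, 1, 1, 0, 0, "m", 1, 1, 0, "m", 0, 1]

# Treat each attempt that directly follows an "m" as one Bernoulli trial
# for "correct answer after referring to the study material".
trials = [seq[i + 1] for i, v in enumerate(seq) if v == "m" and i + 1 < len(seq)]
s = sum(1 for t in trials if t == 1)   # successes
f = len(trials) - s                    # failures

# Beta(5, 5) prior (as suggested) -> Beta(5 + s, 5 + f) posterior by conjugacy.
a, b = 5 + s, 5 + f
posterior_mean = beta(a, b).mean()     # equals a / (a + b)
print(s, f, posterior_mean)
```

With a Beta prior and Bernoulli trials, the posterior is again Beta, so no sampling machinery is needed for this simple version of the model.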
Modelling impact of study materials towards students giving correct answers
CC BY-SA 4.0
null
2023-03-25T03:52:28.543
2023-03-25T06:19:13.337
2023-03-25T06:19:13.337
362671
384086
[ "bayesian", "treatment-effect" ]
610662
1
null
null
1
26
I'm trying to run logistic regression to find whether or not a team will make the playoffs based on their previous year stats, player awards, W-L record, etc. In the NFL, there are 2 conferences and 4 divisions within each conference. Every division winner makes the playoffs, plus 3 wildcard teams (the 3 wildcard teams are the 3 teams with the best record from the remaining non-division winners). This makes 7 teams making the playoffs from each Conference. The problem is, I do not know how to account for this in a logistic regression model. Theoretically, my model could predict 10 teams making the playoffs from one conference, and only 4 teams from the second conference, even though that is not possible. Does anyone know how I could go about accounting for these variables?
How to account for NFL divisions for predicting playoff chances?
CC BY-SA 4.0
null
2023-03-25T04:10:04.580
2023-03-25T17:06:10.647
2023-03-25T06:43:05.640
362671
null
[ "machine-learning", "logistic" ]
610663
2
null
558013
0
null
The $R^2$ in the variance inflation factor does not consider the outcome ($Y$) variable; that $R^2$ refers to the $R^2$ of a linear regression that uses all but one feature to predict the feature for which you want to calculate the VIF. Consequently, run such a regression, and calculate the $R^2$. Then $\dfrac{1}{1-R^2}$ is the VIF of that feature in the original regression. Note that this $R^2$ can and probably will change from feature to feature. I give an example below. ``` set.seed(2023) N <- 1000 x1 <- rnorm(N) x2 <- rnorm(N) x3 <- x1 + rnorm(N, 0, 0.1) # Make x3 super correlated with x1 cor(x1, x3) # I get 0.9950633 y <- x1 - x2 + x3 + rnorm(N, 0, 2) L <- lm(y ~ x1 + x2 + x3) # Use x1, x2, x3 to predict y summary(L) L1 <- lm(x1 ~ x2 + x3) # Use x2, x3 to predict x1 1/(1 - summary(L1)$r.squared) # I get VIF_1 = 101.6301 L2 <- lm(x2 ~ x1 + x3) # Use x1, x3 to predict x2 1/(1 - summary(L2)$r.squared) # I get VIF_2 = 1.00097 L3 <- lm(x3 ~ x1 + x2) # Use x1, x2 to predict x3 1/(1 - summary(L3)$r.squared) # I get VIF_3 = 101.6308 ``` That `x1` and `x3` are highly correlated and have that huge VIF value explains why, despite the effect sizes all being about the same, the coefficients on `x1` and `x3` have much larger p-values than the coefficient on `x2`, due to the much larger standard errors caused by the high VIF.
null
CC BY-SA 4.0
null
2023-03-25T04:22:08.517
2023-03-25T04:22:08.517
null
null
247274
null
610665
2
null
275573
2
null
Your problem here is made substantially easier by the fact that you are dealing with finite sets of musicians and instruments. With ordered sets of $n$ musicians and $n$ instruments, there are $n!$ bijections mapping the musicians to the instruments. Thus, your statistical problem is to predict a finite categorical outcome variable (with $n!$ possible values) using your quantitative skills as explanatory variables. A standard model for this type of analysis is [multinomial logistic regression](https://en.wikipedia.org/wiki/Multinomial_logistic_regression). Since the musicians come in an arbitrary order, you would need to impose an ordering rule on them based on the skill measurements, to have a well-defined bijection from an ordered domain (e.g., you might order the musicians using a [lexicographic order](https://en.wikipedia.org/wiki/Lexicographic_order) on their skills in guitar, drums, bass, timing and creativity respectively). You could also constrain your regression function so that it is "symmetric" with respect to reordering of the musicians and bijection (i.e., if you swap the skills of any two band members then the resulting bijection should be the same, except with the mapping swapped for those two musicians). This would require some care in your coding of the model, but it ought to be possible to do. In your particular case you have $n=4$ so there are $n! = 24$ possible bijections forming possible outputs in the model. It should be feasible to build a multinomial logistic regression for this problem with a reasonably sized corpus of bands for your training data. You would need to code the 24 types of bands as a categorical output variable and specify an appropriate regression function with your skill variables to predict this categorical output variable.
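As a small illustration of the coding step, the 24 possible bijections for a 4-member band can be enumerated and mapped to integer class labels for the multinomial model (Python sketch; the instrument names are made up):

```python
from itertools import permutations

instruments = ["guitar", "bass", "drums", "vocals"]   # hypothetical instrument set
bijections = list(permutations(instruments))

print(len(bijections))   # 4! = 24 possible assignments

# Map each bijection to an integer class label for the multinomial model.
label_of = {b: k for k, b in enumerate(bijections)}

# Example: a band whose (ordered) members play guitar, drums, bass, vocals.
band = ("guitar", "drums", "bass", "vocals")
print(label_of[band])    # the categorical outcome fed to the regression
```

Each band in the training corpus then contributes one row of skill covariates and one of these 24 labels as the outcome.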
null
CC BY-SA 4.0
null
2023-03-25T04:35:31.253
2023-03-25T04:35:31.253
null
null
173082
null
610666
1
null
null
1
82
A meta-analysis reports that the hazard ratio of a prognostic marker A for predicting mortality in heart failure is 12.5 (95% CI 6.2 - 26.1) and the hazard ratio of a prognostic marker B is 3.2 (95% CI 2.2 - 5.1). Can we conclude that A is better prognostic marker than B?
Compare hazard ratios of a meta analysis
CC BY-SA 4.0
null
2023-03-25T04:42:47.197
2023-03-25T04:42:47.197
null
null
384089
[ "meta-analysis", "hazard" ]
610668
1
null
null
0
6
Per [https://stats.stackexchange.com/a/204977/384097](https://stats.stackexchange.com/a/204977/384097) the suggestion was made that, given a known distribution, applying a cutoff as a means of dealing with outliers is not data leakage. I feel uncomfortable with this unless there is a perfect way to ensure autocorrelations are not significant. As an example, given [1,2,2,3,3,3,4,4,4,4,5,5,5,6,6,7] we could generate for each value a count [1 2 3 4 3 2 1] and may choose to throw away the count-1 values, yielding [2 2 3 3 3 4 4 4 4 5 5 5 6 6]. Suppose the following: by looking at the values, we may suspect that there is an upward trend with oscillations of decreasing frequency as we move away from 4 in both time directions. So context may say the trend is generally more monotonic the further we get from 4. Question: - Suppose we remove the low-frequency values from both sides of the distribution [1 2 3 4 3 2 1], i.e. resulting in frequencies [2 3 4 3 2], i.e. the series [2 2 3 3 3 4 4 4 4 5 5 5 6 6]. - Can we say that this removal of {1,7}, if our suppositional structure is not examined, results in problematic peeking for the removal of 1 but not for the removal of 7, since rarity is measured with respect to the other values, and for 1 all those other values are known to occur at later times? Whereas looking at 7, even if we know of a generally monotonic structure in time, is a look backward?
Time series -- peeking for truncation of outlying values displaced from mean under some structural assumptions
CC BY-SA 4.0
null
2023-03-25T07:17:05.720
2023-03-25T07:17:05.720
null
null
384097
[ "time-series", "outliers", "data-leakage" ]
610669
1
610689
null
2
62
I am reading the paper Explaining the Gibbs Sampler(Casella, George, and Edward I. George. "Explaining the Gibbs sampler." The American Statistician 46.3 (1992): 167-174). I am stuck with example 2, Figure 2, as how the solid line was created from the equation $(2.9)$. The paper can be downloaded here: [http://www2.stat.duke.edu/~scs/Courses/Stat376/Papers/Basic/CasellaGeorge1992.pdf](http://www2.stat.duke.edu/%7Escs/Courses/Stat376/Papers/Basic/CasellaGeorge1992.pdf) The figure is here : [](https://i.stack.imgur.com/bxLPZ.jpg) The paper says' Gibbs sampling can be used to estimate the density itself by averaging the final conditional densities from each Gibbs Sequence. I know how to create the final Gibbs sequence for example 2. Here is an excellent example of how to create these sequences ([Gibbs sampler for conditionals that are exponential: Example from Casella & George paper](https://stats.stackexchange.com/questions/81103/gibbs-sampler-for-conditionals-that-are-exponential-example-from-casella-geor)). I know I can use the Kernel density method to estimate the density function using the 500 final sample points. However, this was not the method the paper used. Here is the code to produce 500 final sample points, which I copied from the above URL. ``` set.seed(1010) X = rep(0, 500) Y = rep(0, 500) k = 15 for (i in 1:500) { x = rep(1, k) y = rep(1, k) for (j in 2:k) { temp_x = 6 while(temp_x>5) { x[j] = rexp(1,y[j-1]) temp_x = x[j] } temp_y = 6 while(temp_y>5) { y[j] = rexp(1,x[j]) temp_y = y[j] } } X[i] = x[k] Y[i] = y[k] } ``` From these values, we can calculate the density by equation $(2.8a)$. In the equation $(2.9)$ we have a fixed $x$ value, such as if we let $x=\xi $ then equation $(2.9)$ becomes $$\hat{f}(\xi)=\frac{1}{m}\left[f(\xi|y_1)+f(\xi|y_2)+f(\xi|y_3)+...+f(\xi|y_m)\right]$$ I think the Gibbs Sampler did not produce a fixed last value (15th) $\xi$ for the 500 sequences. I am stuck here. I hope you can help me. 
Anyway, overall the question is how the solid line in figure 2 was produced.
Question on Gibbs sampler, estimate the density function
CC BY-SA 4.0
null
2023-03-25T07:35:35.363
2023-03-27T10:39:11.103
2023-03-27T10:39:11.103
61705
61705
[ "self-study", "gibbs" ]
610670
1
null
null
1
31
A plot of my data and fit: ``` ggplot(ahn.res, aes(rds,ror_rr)) + geom_point() + geom_smooth(method = "nls", formula = y ~ 1-exp(-A*x^B)+0.02799777, method.args = list(start=c(A=2,B=3)), se=F) + ylab("y") + xlab("x") ``` [](https://i.stack.imgur.com/bIOAo.png) I'm interested in the max value of the second derivative of this function, corresponding to the red point on this graph. As I have more data points in the higher portion of the graph, does the nls() function skew the fit towards this high-data-density region of the graph? If so, is there a way to weight the residuals based on the interval of x, such that a higher interval gets a higher weight?
How to weigh residuals based on x-interval in the nls() function in R?
CC BY-SA 4.0
null
2023-03-25T07:43:28.960
2023-03-25T07:43:28.960
null
null
319765
[ "r", "residuals", "nonlinear-regression", "fitting", "nls" ]
610671
1
null
null
2
40
Let's say that I train a neural network in a classic binary classification setting where all the training data has labels in $\{-1, +1\}$. From my understanding, if I train the network with a log-loss function and softmax output layer, the network outputs will essentially estimate $P(y \mid \mathbf{x})$. How do I prove this convergence mathematically? In other words, if $h(\mathbf{x};y)$ is the output of the network, how do I show that minimizing the loss brings $h(\mathbf{x};y)$ closer to $P(y \mid \mathbf{x})$ over all $\mathbf{x}$ over time? Thanks!
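One standard route (a sketch of the population-level argument, not a full convergence proof): for each fixed $\mathbf{x}$, decompose the expected log-loss into entropy plus a KL divergence, $$ \mathbb{E}_{y \sim P(\cdot \mid \mathbf{x})}\left[-\log h(\mathbf{x};y)\right] = H\left(P(\cdot \mid \mathbf{x})\right) + \mathrm{KL}\left(P(\cdot \mid \mathbf{x}) \,\|\, h(\mathbf{x};\cdot)\right). $$ Since $\mathrm{KL} \geq 0$ with equality iff $h(\mathbf{x};\cdot) = P(\cdot \mid \mathbf{x})$, the population minimizer of the log-loss is exactly the posterior. Convergence of the trained network to it additionally requires that the hypothesis class can represent the posterior and that training reaches the population optimum.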
How to prove that neural network estimates posterior distribution
CC BY-SA 4.0
null
2023-03-25T07:44:02.323
2023-03-25T07:49:35.437
2023-03-25T07:49:35.437
384103
384103
[ "probability", "self-study", "neural-networks", "convergence" ]
610672
2
null
595150
1
null
While there is limited published research directly comparing the Box-Jenkins method based on ACF/PACF plots and information criteria-based methods in ARIMA model selection, most researchers and practitioners have gravitated towards using information criteria methods, such as AIC or BIC, due to their increased efficiency, automation, and out-of-sample forecasting performance. One study that provides a comparison between the two methods is: Makridakis, S., & Hibon, M. (2000). The M3-competition: results, conclusions and implications. International Journal of Forecasting, 16(4), 451-476. The M3 competition involved researchers and practitioners submitting their models for forecasting various time series. While the study does not focus specifically on comparing Box-Jenkins and information criteria-based methods, it does provide insights into which methods performed better in real-world situations. The results of the M3 competition indicate that the methods based on optimizing information criteria tend to perform better in terms of out-of-sample forecasting accuracy. Given the limited direct comparison of the two approaches in the literature, the preference for information criteria-based methods can be attributed to the following reasons: - Automation: Information criteria-based methods automate the model selection process, reducing the need for subjective decisions and interpretation of ACF/PACF plots. - Efficiency: Grid searching over possible model orders and selecting the one that optimizes an information criterion is generally faster and more efficient than manually identifying the optimal order from ACF/PACF plots. - Out-of-sample forecasting performance: In practice, methods based on optimizing information criteria tend to yield better out-of-sample forecasting accuracy than the Box-Jenkins method. 
While the Box-Jenkins method may still have educational value and could be useful in specific cases, for most practical applications, using an automated approach like the ones provided by the `forecast` or `fable` packages in R is recommended.
null
CC BY-SA 4.0
null
2023-03-25T08:12:25.320
2023-03-25T08:12:25.320
null
null
224266
null
610673
2
null
610610
6
null
It is easier to present these data in a 2x2 contingency table. $$\begin{array}{c|cc|c} &\text{Y}&\text{N} & \text{marginal sum}\\ \hline \text{WT} & 15 & 9 & 24\\ \text{KO} & 14 & 6 & 20 \\\hline \text{marginal sum} & 29 & 15 & 44 \\ \end{array}$$ The type of test to be performed depends on the boundary conditions (whether one or more marginals are fixed or not, e.g. whether the experiment selected a fixed number of cases with WT/KO and/or Y/N) and on the stopping rule (whether the test had a fixed number of the total 44 cases, or whether the test was continued until some number of a particular class had been observed). You can read about this in an article by Lydersen, Fagerland and Laake [Recommended tests for association in 2×2 tables](https://onlinelibrary.wiley.com/doi/10.1002/sim.3531), but possibly also in many other places and in questions already asked here. Depending on the number of marginals that are fixed: - both marginals fixed: the values have one degree of freedom which follows a hypergeometric distribution. You can perform Fisher's exact test. Example: the lady tasting tea experiment - one marginal fixed: the values have two degrees of freedom which follow a binomial distribution. You can perform several types of tests. For instance Barnard's test. Also a z-test for differences in proportions is commonly used. Example: a/b testing. - no marginal fixed: the values follow a multinomial distribution. You can perform a chi-squared test, which approximates the multinomial distribution with a multivariate normal distribution. The null hypothesis is that the cell probabilities are a product of class probabilities. Example: an observational study where both of the two variables are not controlled. A situation based on a stopping rule: - When we have a fixed number of two classes, and observe until a number of cases have occurred, then the distribution follows a binomial distribution and you can perform tests for that distribution. 
Example: testing the vaccines against Covid. (Which statistical model is being used in the Pfizer study design for vaccine efficacy?)
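For the first and third cases above, here is a quick illustration with the table from the question (a Python sketch assuming `scipy` is available; note that `chi2_contingency` applies Yates' continuity correction to 2x2 tables by default):

```python
import numpy as np
from scipy.stats import fisher_exact, chi2_contingency

table = np.array([[15, 9],
                  [14, 6]])   # rows: WT, KO; columns: Y, N

# Both marginals fixed -> Fisher's exact test (hypergeometric null).
oddsratio, p_fisher = fisher_exact(table)
print(oddsratio, p_fisher)

# No marginal fixed -> chi-squared test of independence.
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(chi2, p_chi2, dof)
```

The choice between these (and Barnard's test, available as `scipy.stats.barnard_exact` in recent scipy versions) should follow the design considerations listed above, not whichever p-value happens to be smaller.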
null
CC BY-SA 4.0
null
2023-03-25T08:28:17.823
2023-03-25T09:06:26.537
2023-03-25T09:06:26.537
164061
164061
null
610674
1
null
null
0
53
Given the following parameter estimate, how do I find $E[\hat{a}_{MED}]$ and $Var[\hat{a}_{MED}]$? \begin{equation} \label{eq:Estimator_a_Med} \hat{a}_{MED} = - \left( n_0 \right)^4 \cdot \log(0.5) \end{equation} where $n_0$ is the sample median. The density function is: $$f(x) = \frac{4a}{x^5} \exp \left[ {- \frac{a}{x^4}} \right] \quad \text{for} \quad 0 \leq x \leq \infty, \ a>0$$ having a CDF $$F_X(z) = \exp \left[ {-\frac{a}{z^4}} \right]$$
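A quick simulation sanity check of the estimator, via inverse-transform sampling from the CDF above: $F(z) = \exp(-a/z^4) = u$ gives $z = (a/(-\log u))^{1/4}$. The value of $a$ and the sample size here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
a = 2.0                       # arbitrary true parameter for the check
u = rng.uniform(size=200_000)

# Inverse transform: F(z) = exp(-a / z^4) = u  =>  z = (a / (-log u))^(1/4)
z = (a / (-np.log(u))) ** 0.25

n0 = np.median(z)                   # sample median
a_hat = -(n0 ** 4) * np.log(0.5)    # the estimator from the question
print(a_hat)                        # should be close to a = 2.0
```

This agrees with the fact that the true median $m$ satisfies $\exp(-a/m^4) = 1/2$, i.e. $a = -m^4 \log(1/2)$, so the estimator is consistent; the exact finite-sample mean and variance still require the distribution of the sample median.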
How can I compute the expected value and variance of the 4th power of the sample median?
CC BY-SA 4.0
null
2023-03-25T08:37:45.277
2023-03-25T15:32:58.950
2023-03-25T15:32:58.950
360499
360499
[ "expected-value", "bias", "unbiased-estimator", "median" ]
610675
1
610679
null
0
38
I would like to include the sign of X in a linear regression to highlight the impact it has on Y (see the scatter plot below). I first thought of a dummy, taking the value of 1 if positive and 0 if negative but I had difficulties interpreting it, especially due to the dummy variable trap. So I finally just went with the following independent variables: - absolute value of X - sign of X The results are as follow ``` OLS Regression Results ============================================================================== Dep. Variable: Y R-squared: 0.255 Model: OLS Adj. R-squared: 0.254 Method: Least Squares F-statistic: 334.7 Date: Sat, 25 Mar 2023 Prob (F-statistic): 6.51e-187 Time: 09:08:30 Log-Likelihood: 2567.9 No. Observations: 2938 AIC: -5128. Df Residuals: 2934 BIC: -5104. Df Model: 3 Covariance Type: nonrobust ==================================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------------ Intercept 0.2208 0.004 50.206 0.000 0.212 0.229 sign_X -0.0700 0.004 -15.913 0.000 -0.079 -0.061 np.abs(X) 0.0088 0.003 2.818 0.005 0.003 0.015 sign_X:np.abs(X) -0.0157 0.003 -4.987 0.000 -0.022 -0.010 ============================================================================== Omnibus: 2593.479 Durbin-Watson: 0.948 Prob(Omnibus): 0.000 Jarque-Bera (JB): 153874.664 Skew: 3.917 Prob(JB): 0.00 Kurtosis: 37.577 Cond. No. 20.2 ============================================================================== Notes: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. ``` Am I allowed to do this? Shouldn't it be more 'clean' with a dummy? I feel quite unsure as the model will give important variation of Y when X is switching sign. I feel like it will not be the case with a dummy. Also I have seen that the errors are autocorrelated so I'll have to add variables to the model. 
The equation of the regression line is as follows: ``` model.params[0] + model.params[1]*np.sign(x_) + model.params[2]*np.abs(x_) + model.params[3]*np.abs(x_)*np.sign(x_) ``` [](https://i.stack.imgur.com/Qkt5X.png)
add the sign of the independent variable in a linear regression
CC BY-SA 4.0
null
2023-03-25T08:40:06.273
2023-03-25T19:01:03.900
2023-03-25T09:47:22.233
22047
384108
[ "regression", "multiple-regression", "categorical-encoding" ]
610676
1
null
null
0
14
I am working on a problem in econometrics that I need help with. The problem proposes a panel data set on 1000 workers across 10 years that tracks earnings, education, age and gender. We regress the variables against earnings. I have included a dummy variable for each year in order to account for time-specific variables (those that do not vary across individuals). I then used first differences to account for individual fixed effects. The Question: Can I include the national unemployment rate in this new model to try and estimate its effect on earnings? My Logic: Because the national unemployment rate is a time-specific variable (one that does not vary across individuals), the effect of a change in the unemployment rate on earnings is captured in the coefficients of each time dummy variable. Because these coefficients include more than just the unemployment rate, they cannot be used to estimate the effect we are looking for. I propose that we include the unemployment rate in the original model, which will become the change in unemployment rate in the first-difference model. Because it only varies across time and not individuals, the coefficient will likely be inconsistent because we are only looking at data across 10 years. Is my logic correct? Am I even allowed to include a time-specific variable in pooled OLS?
Can I include variables that vary only by time (not throughout cross-sections) in my panel data?
CC BY-SA 4.0
null
2023-03-25T08:47:29.290
2023-03-25T08:47:29.290
null
null
384109
[ "regression" ]
610677
1
610683
null
0
31
How to include a dummy interaction term in an ARIMA model? Can we use the dependent variable (in this case, say the log return of an asset price at time $t$) to multiply with the dummy variable as an interaction term?
Dummy interaction term in an ARIMA model
CC BY-SA 4.0
null
2023-03-25T09:00:53.063
2023-03-25T12:33:02.790
2023-03-25T11:40:44.427
53690
369873
[ "arima", "interaction", "categorical-encoding" ]
610678
1
610680
null
2
61
I was revisiting the differences between logistic regression and Naive Bayes, and had a conceptual question. A logistic regression classifier makes intuitive sense to me as a classifier that directly estimates $P(y \mid x)$ without making any assumptions about how the data is distributed (except that the conditional distribution $P(y \mid x)$ is Bernoulli). To my understanding, however, logistic regression is exactly equal to a Naive Bayes classifier with an assumed Gaussian distribution and constant variance. So, I was wondering, why is it equal if Naive Bayes is making the explicit assumption of a Gaussian distribution? Thanks.
Equivalence of Logistic regression to Gaussian naive bayes
CC BY-SA 4.0
null
2023-03-25T09:33:52.383
2023-03-26T11:25:26.010
2023-03-26T11:25:26.010
384103
384103
[ "self-study", "logistic", "central-limit-theorem", "naive-bayes" ]
610679
2
null
610675
0
null
This is as much a substantive issue as a statistical one. The argument for such a parameterisation is that there could be a jump at zero. On the other hand, getting a jump out of a model fit is not especially convincing unless you explore other alternative parameterisations, even say linear or cubic splines or scatterplot smoothers. How much sense that makes for your context would be something that people familiar with your field might be able to comment on if you explained what X is. Anonymising it doesn't make anything clearer. My short answer to the question is that it is not a matter of whether it is allowed. The issue is what parameterisation fits the data and the application both accurately and parsimoniously, which is the trade-off in virtually any model-fitting. Even without knowing your context, the model fit looks implausible: the fitted values fall into utterly distinct groups, but observed outcomes don't. And there appears to be some curvature that the lines aren't matching. If the outcome variable can't be negative, a model ever predicting negative values is qualitatively wrong.
null
CC BY-SA 4.0
null
2023-03-25T09:45:26.810
2023-03-25T19:01:03.900
2023-03-25T19:01:03.900
22047
22047
null
610680
2
null
610678
1
null
Logistic regression assumes a particular functional form for the conditional probabilities $P(c|\mathbf x)$ ($c$ being the class index). Under the Naive Bayes assumptions with some further restrictions (the $x_i|c$ are independent and Gaussian distributed with the same variance), the conditional probabilities $P(c|\mathbf x)$ happen to have the same functional form. So you can view logistic regression as a generalized model, of which this specific instance of Naive Bayes is a special case. Of course this can also be viewed as one of the motivations for choosing this particular functional form.
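To make the special case concrete, write out the class posterior under those Naive Bayes assumptions (shared per-feature variances $\sigma_i^2$): $$ P(1 \mid \mathbf{x}) = \frac{\pi_1 \prod_i \mathcal{N}(x_i; \mu_{i1}, \sigma_i^2)}{\pi_1 \prod_i \mathcal{N}(x_i; \mu_{i1}, \sigma_i^2) + \pi_0 \prod_i \mathcal{N}(x_i; \mu_{i0}, \sigma_i^2)} = \frac{1}{1 + e^{-z}}, \qquad z = \log\frac{\pi_1}{\pi_0} + \sum_i \left[\frac{\mu_{i1} - \mu_{i0}}{\sigma_i^2}\, x_i + \frac{\mu_{i0}^2 - \mu_{i1}^2}{2\sigma_i^2}\right]. $$ The quadratic terms in $x_i$ cancel only because the variances are shared across classes, which is exactly why $z$ is linear in $\mathbf{x}$ and the posterior takes the logistic form.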
null
CC BY-SA 4.0
null
2023-03-25T11:31:00.913
2023-03-25T11:39:47.270
2023-03-25T11:39:47.270
348492
348492
null
610682
1
null
null
0
13
I have been given time series of N stock prices and time series of K sectoral indices. (A sectoral index is an index made from all stocks belonging to that sector.) I have not been given the names of any of those stocks or indices. I want to know which stocks belong to the same sector and which sectoral index they correspond to. I am able to cluster the stocks. But how should I decide which sectoral index time series they correspond to? Should I just cluster stocks and indices together (N+K time series) instead of clustering the N stocks independently? Would that be a correct approach?
Determining clusters of time series
CC BY-SA 4.0
null
2023-03-25T12:22:33.023
2023-03-25T12:22:33.023
null
null
298379
[ "time-series", "clustering" ]
610683
2
null
610677
1
null
An ARIMA model with an interaction term would not be called ARIMA anymore, but certainly you can formulate a model of the kind. E.g. starting from ARMA(1,1) and excluding the intercept for simplicity $$ y_t=\varphi_1 y_{t-1}+u_t+\theta_1 u_{t-1} $$ you could obtain $$ y_t=\varphi_1 y_{t-1}\color{blue}{+\gamma_1 d_1y_{t-1}}+u_t+\theta_1 u_{t-1} $$ where $d_1$ is the dummy variable. The dummy interacts with a lag of the dependent variable. I am not sure if it would make any sense to interact it with the (nonlagged) dependent variable, though. (If you explain us what you are trying to achieve, perhaps we can elaborate further.)
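To illustrate, the dummy-interaction model above can be simulated directly (a Python sketch; the coefficient values and the break point are made up, and the dummy here is a level shift in the AR coefficient at a known date):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
phi1, gamma1, theta1 = 0.5, 0.3, 0.4      # hypothetical coefficients
d = (np.arange(n) >= 150).astype(float)   # dummy switching on at t = 150

u = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    # AR coefficient shifts from phi1 to phi1 + gamma1 when the dummy is on.
    y[t] = phi1 * y[t - 1] + gamma1 * d[t] * y[t - 1] + u[t] + theta1 * u[t - 1]
print(y[:5])
```

Fitting such a model is then a matter of including $d_t y_{t-1}$ as an extra regressor alongside the ARMA terms, which most ARIMAX-style routines allow.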
null
CC BY-SA 4.0
null
2023-03-25T12:33:02.790
2023-03-25T12:33:02.790
null
null
53690
null
610684
1
null
null
0
27
I'm trying to convert anomaly scores into probabilities. I wanted to do this with logistic regression. The problem is that I can only provide labels for non-anomalous training examples. Training the logistic regression model does not work; training examples need to be provided for two classes. Is there another way to do this conversion if only data for the normal case is known a priori? Thank you.
Convert anomaly scores to probabilities in a one-class way
CC BY-SA 4.0
null
2023-03-25T13:29:08.070
2023-03-25T13:29:08.070
null
null
127037
[ "regression", "probability", "logistic", "anomaly-detection" ]
610686
2
null
308224
1
null
Here is another approach for a continuous random variable, less rigorous than Zen's beautiful solution. Fix $x:x< c$, then $P(X\leq x|X<c)=\frac{P(X\leq x,X <c)}{P(X<c)}\overset{x<c}{=} \frac{P(X <x)}{P(X<c)}=\frac{F_X(x)}{F_X(c)}$ Fix $x:x>c$, then $P(X\leq x|X<c)=\frac{P(X\leq x,X <c)}{P(X<c)}\overset{x>c}{=} \frac{P(X <c)}{P(X<c)}=1$ Then, the conditional pdf is $ f_{X|X<c}(x|x<c)=\frac{f_X(x)}{F_X(c)}I_{(-\infty,c]}(x)$ Therefore, the conditional expectation is $E(X|X<c)=\int_{- \infty}^{+\infty}xf_{X|X<c}(x|x<c)dx=\int_{- \infty}^{c}xf_{X|X<c}(x|x<c)dx$
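As a quick numerical check of the resulting formula, take $X$ standard normal, where the truncated mean has the known closed form $-\varphi(c)/\Phi(c)$ (the inverse Mills ratio). A Python sketch, with an arbitrary cutoff $c$:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

c = 0.7   # arbitrary truncation point for the check

# E(X | X < c) = ∫_{-inf}^{c} x f(x) dx / F(c), per the conditional density above.
num, _ = integrate.quad(lambda x: x * norm.pdf(x), -np.inf, c)
trunc_mean = num / norm.cdf(c)

# Known closed form for the standard normal: -phi(c) / Phi(c).
closed_form = -norm.pdf(c) / norm.cdf(c)
print(trunc_mean, closed_form)
```

The two values agree to numerical precision, consistent with the derivation.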
null
CC BY-SA 4.0
null
2023-03-25T14:37:27.590
2023-03-25T14:37:27.590
null
null
384124
null
610687
1
null
null
0
20
I'm trying to calculate the posterior probability of a coin toss resulting in "heads". We assume the uniform distribution of the prior $p(\theta)=1,\theta\in[0,1]$. Now suppose we toss a coin the first time and see "heads". The posterior distribution after the first toss is: $p(\theta|heads)=\frac{p(\theta) p(heads|\theta)}{\int_0^1 p(\theta)p(heads|\theta) \, d\theta }=\frac{\theta }{\frac{1}{2}}=2 \theta$ Looks ok. Now I update my prior to $p(\theta)=2\theta$, make the second toss and see "tails". To calculate the posterior for the second experiment I do: $p (\theta |tails)=\frac{p(\theta) p (tails|\theta ))}{\int_0^1 p(\theta) p (tails|\theta )) \, d\theta }=-\frac{2 (1-2 \theta ) \theta }{\frac{1}{3}}=-6 (1-2 \theta ) \theta$ Which is incorrect, because $-6 (1-2 \theta ) \theta$ has negative values if $0<\theta <\frac{1}{2}$. What is wrong with my calculations?
Bayesian update for sequential coin toss
CC BY-SA 4.0
null
2023-03-25T14:45:26.890
2023-03-25T14:45:26.890
null
null
255
[ "bayesian" ]
610688
1
null
null
0
12
I have generated a continuous decision tree using rpart. The plot is: [](https://i.stack.imgur.com/scCPX.png) Using predict(rpart_r,newdata) I can predict the dependent variable and for one observation: ``` vehage CC Length Weight 4 1495 4250 1023 ``` I get 80443.87 The result must be interpolated from the decision tree but my question is how? I have searched the web but didn't find a clear explanation of how the prediction is made from the constants given in the tree leaves. Any thoughts or pointers greatly appreciated.
rpart anova continuous prediction
CC BY-SA 4.0
null
2023-03-25T14:54:15.433
2023-03-25T14:58:59.140
2023-03-25T14:58:59.140
56940
378189
[ "regression", "cart" ]
610689
2
null
610669
2
null
The kernel density estimator $\hat f(\cdot)$ is a generic density estimation method that produces a density function out of a sequence of generations from the true distribution $f$. It is biased and converging (for the univariate case) at a speed of $n^{-2/5}$, [in general](https://en.wikipedia.org/wiki/Kernel_density_estimation). The Rao-Blackwell(ised) estimator (2.9) is unbiased, meaning that for every entry $x$ $$f(x) = \mathbb E[\hat f(x)] = \mathbb E^Y[f(x|Y)],$$ and it converges at the parametric speed of $n^{-1/2}$. This is a gain exploiting the latent variable simulated by the Gibbs sampler. Note that the above R code is not perfect, as the 500 chains are all starting from the value $X_1=1$, rather than exploiting the higher proximity of the previous chain with the stationary distribution at the end of the 15 iterations. Here is my alternate version with no parallel chain and a thinning of one out of 15: ``` X = Y= runif(500,0,5) y=Y[1] for (t in 1:500) { for (j in 2:15) { temp_x = temp_y = 6 while(temp_x>5)temp_x = x = rexp(1,y) while(temp_y>5)temp_y = y = rexp(1,x) } X[t]=x;Y[t]=y} plot(density(X,from=0,to=5),main="") rb=seq(0,5,le=123) for(i in 1:123) rb[i]=mean(Y*exp(-rb[i]*Y)) lines(seq(0,5,le=123),rb,col="red2",lwd=2) ``` Here is a comparison of the density estimates based on the Gibbs chain $(X_t)$ using a standard non-parametric approach (through R `density`) [in black] and of the Rao-Blackwell estimate based on the Gibbs chain $(Y_t)$ [in red]: [](https://i.stack.imgur.com/h367e.png) The estimators of the densities are close enough to conclude that the Rao-Blackwell version is not completely off.
null
CC BY-SA 4.0
null
2023-03-25T14:57:38.030
2023-03-26T20:16:07.803
2023-03-26T20:16:07.803
7224
7224
null
610690
1
610784
null
0
32
I am trying to implement a QMLE estimation of the GARCH(2,2) model as a side project. We can represent GARCH(2,2) as follows: \begin{aligned} r_{t} &= \mu_{t} + \epsilon_{t}, \\ \mu_{t} &= 0, \\ \epsilon_{t} &= \sigma_{t}z_{t}, \quad z_{t} \sim N(0,1), \\ \sigma_{t+1}^{2} &= \omega + \alpha_{1}\epsilon_{t}^{2} + \beta_{1}\sigma_{t}^{2} + \alpha_{2}\epsilon_{t-1}^{2} + \beta_{2}\sigma_{t-1}^{2}. \end{aligned} Using standard arguments, we can write the log-likelihood as $$ \sum_{t=1}^{T}l(\theta|r_{t}) = \sum_{t=1}^{T}\log\bigg[ \frac{1}{\sqrt{2\pi\sigma^{2}_{t}(\theta)}}\exp\Big\{-\frac{r_{t}^{2}}{2\sigma_{t}^{2}(\theta)}\Big\}\bigg] $$ where $\theta$ is just the vector of parameters. This would be evaluated at each time $t$ by plugging in $\omega + \alpha_{1}\epsilon_{t-1}^{2} + \beta_{1}\sigma_{t-1}^{2} + \alpha_{2}\epsilon_{t-2}^{2} + \beta_{2}\sigma_{t-2}^{2}$ for $\sigma_{t}^{2}$, together with the observed $r_{t}^{2}$ (data). However, my problem is that I cannot understand whether

- $r_{t}^{2}$ is the data that we have (which would be logical), or
- $r_{t}^{2}$ is simply generated from previous iterations of $\sigma_{t}^{2}$, as $r_{t}^{2} = \epsilon_{t}^{2} = z_{t}^{2}\sigma_{t}^{2}=z_{t}^{2}(\omega + \alpha_{1}\epsilon_{t-1}^{2} + \beta_{1}\sigma_{t-1}^{2} + \alpha_{2}\epsilon_{t-2}^{2} + \beta_{2}\sigma_{t-2}^{2})$.

The latter doesn't make sense: it would mean I only need to initialize the process and then wouldn't need any data to fit it, and this is exactly what confuses me. If the former is true, then how can we say that $r_{t} = \epsilon_{t}$? I have already checked [this post](https://quant.stackexchange.com/questions/65704/garch1-1-parameter-estimation-optimization-method), which was quite helpful but still does not answer my question. Perhaps it has something to do with conditioning on information at time $t$, but I am struggling to clearly understand this.
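To make the role of the data explicit, here is a sketch of the QMLE objective in Python (the variable names and the sample-variance initialisation of the pre-sample terms are my own choices, not part of any standard). The observed $r_t^2$ appears twice: it feeds the recursion for $\sigma_{t+1}^2$ (since $\epsilon_t = r_t$ when $\mu_t = 0$), and it appears in the likelihood term $r_t^2/\sigma_t^2$. The returns are data, never simulated:

```python
import math

def garch22_nll(params, returns):
    """Gaussian QMLE negative log-likelihood for GARCH(2,2).
    `returns` is the observed data; eps_t = r_t because mu_t = 0."""
    omega, a1, a2, b1, b2 = params
    var0 = sum(r * r for r in returns) / len(returns)  # pre-sample initialisation (a choice)
    eps2 = [var0, var0]   # lagged squared residuals
    sig2 = [var0, var0]   # lagged conditional variances
    nll = 0.0
    for r in returns:
        s2 = (omega + a1 * eps2[-1] + a2 * eps2[-2]
                    + b1 * sig2[-1] + b2 * sig2[-2])
        # observed r_t^2 evaluated against the model-implied sigma_t^2
        nll += 0.5 * (math.log(2 * math.pi) + math.log(s2) + r * r / s2)
        eps2.append(r * r)  # the data, not a simulation, drives the recursion
        sig2.append(s2)
    return nll

rets = [0.01, -0.02, 0.015, -0.005, 0.03, -0.01]  # made-up returns
val = garch22_nll((1e-5, 0.05, 0.05, 0.4, 0.3), rets)
print(val)
```

Minimising `garch22_nll` over the parameter vector (subject to positivity and stationarity constraints) gives the QMLE; with no data, the objective simply cannot be evaluated, which resolves the second bullet above.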
Implementing GARCH(2,2) QMLE: where does the data (squared returns) come into play?
CC BY-SA 4.0
null
2023-03-25T15:14:50.123
2023-03-26T14:38:04.170
2023-03-26T14:30:00.713
53690
246251
[ "maximum-likelihood", "garch" ]
610691
1
null
null
0
56
I am relatively new to this, and this question has been plaguing me. Say I have a dataset with feature A, feature B, and feature C. I need to scale for my model. Based on their distributions, feature A is suited to robust scaling, feature C is suited to standardization, and feature B is suited to log transformation. I have been told that it is acceptable to use different scalers or transformations on different features; that it is okay to scale feature A using a robust scaler, to transform feature B using a log transform, and to standardize feature C. If this is indeed okay (and I am not sure), why? It seems a bit counterintuitive to me: won't this change how the variables relate to one another? I would have assumed (before I was told otherwise) that one scaler had to be applied to every feature to keep the relationships intact. I would really love a discussion or explanation if at all possible; seeing the math would probably help me too. I know this is very theoretical, but it truly is driving me nuts.
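A minimal sketch of what per-feature scaling looks like in practice (pure Python, made-up numbers): each transform touches only its own column, so no rows are reordered or mixed across features, though linear correlations involving a log-transformed column will of course change:

```python
import math
import statistics

# Each column gets its own transform, applied independently. Row identity
# is untouched, so which observation goes with which is preserved.
A = [10.0, 12.0, 11.0, 200.0]   # heavy-tailed -> robust scaling
B = [1.0, 10.0, 100.0, 1000.0]  # multiplicative -> log transform
C = [5.0, 6.0, 7.0, 8.0]        # roughly symmetric -> standardization

def robust_scale(col):
    med = statistics.median(col)
    q = sorted(col)
    spread = q[-2] - q[1]  # crude IQR stand-in for 4 points, illustration only
    return [(v - med) / spread for v in col]

def standardize(col):
    mu, sd = statistics.mean(col), statistics.pstdev(col)
    return [(v - mu) / sd for v in col]

A_s = robust_scale(A)
B_s = [math.log(v) for v in B]
C_s = standardize(C)
print(A_s, B_s, C_s)
```

Note how the outlier in A still lies far out after robust scaling; robust scaling centers and spreads by outlier-resistant statistics, but it does not shrink extreme values.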
Can I scale a dataset using different methods on different columns and why?
CC BY-SA 4.0
null
2023-03-25T15:23:25.640
2023-03-25T15:30:18.897
null
null
384129
[ "data-transformation", "dataset", "standardization", "multidimensional-scaling", "feature-scaling" ]
610692
2
null
254329
1
null
Sure! In fact, raising or lowering the correlation might be the goal of the treatment. Consider a situation where, before a policy is implemented, there is a strong correlation between how much money someone’s parents make and how much they make, and the policy aims to level the playing field so everyone has a more equal chance at success. (While this is not a politics page, you do not have to agree with such an idea to be aware of the fact that some people would want this.) In that case, the desired outcome is to break the relationship between wealth and parental wealth. Conversely, such a policy might aim to raise the correlation between wealth and IQ, with the hope being that the smartest people will be the wealthiest.
null
CC BY-SA 4.0
null
2023-03-25T15:24:49.747
2023-03-25T15:24:49.747
null
null
247274
null
610693
2
null
610691
0
null
This will depend on what you do with the scaled data afterwards. Without knowing this I don't think it's a good idea, as scaling normally should make variables comparable, i.e. "bring them on the same scale". A log transformation certainly does something very different from scaling, and if you apply one, you should normally still scale afterwards if you want the values to be comparable with those from other scaled variables. Also, if you need robustness, it is better to scale all variables robustly; I don't think this will do harm even for variables that don't have outliers. If you scale them differently, you won't achieve comparability in a well-defined sense. (Note though that "robust scaling" will not cure your data of potential issues with outliers; it may make outliers lie even further out, compared with other variables.)
null
CC BY-SA 4.0
null
2023-03-25T15:30:18.897
2023-03-25T15:30:18.897
null
null
247165
null
610694
1
610703
null
0
22
Could you please provide me with information on how I can derive missing input values from the input data's probability distribution? For instance, I have input features which are molar fractions of hydrocarbon compounds like CH4, C2H6, C3H8 ... C10H22. The problem is that in almost every training example I am missing the molar fraction for three random components. Since these are real compositions from oil fields, there is some dependence between compounds, and some of the missing values can be recovered given the known values of the rest of the components. The problem is that I cannot find any theory about it. I would be really grateful if someone could help me with this! UPD: I have points like (70,12,5), (45,26,12), etc. There's a clear dependence: the larger the first value, the smaller the second and the third. Given points with missing values like (65, missing, 8), I would like to fill them with the most probable value according to the joint probability distribution over my whole set (I know that the first value is 65, I know the last value which is 8, and I have lots of such data where some values might be missing, but I need to utilize all the information from each entry (point) as much as possible to fill the missing value with the most probable value). I'm sure there's a solution, but I don't know the name of the theory that describes how such tasks are solved properly.
Derive missing input values from the input data probability distribution
CC BY-SA 4.0
null
2023-03-25T15:56:23.173
2023-03-25T17:14:01.697
2023-03-25T16:56:47.023
368711
368711
[ "probability", "distributions", "missing-data" ]
610695
1
null
null
0
16
I am looking for theory or examples which discuss how to deal with outliers in surrogate modeling. In my case I'm trying to emulate a prediction model which predicts arrival times. I don't have access to the data of this model, but I know its input is GPS coordinates and speed. Data that I have access to are the previous travel time and the actual arrival time. I'm also able to calculate $y_i - \hat{y}_i = \epsilon_i$, so I can get the prediction errors of the black-box model. Now to my concern: how does one assign or detect outliers in surrogate modeling? Is the best practice to assign outliers based on the data that I have, or should I assign outliers based on the prediction errors? My way of thinking is that if I base my outliers on the data that I have, I can build a better model with the data that I have available. But if I base my outliers on the prediction errors, I may be able to discard cases where (assuming that the model is decent) the data that was fed to the model was of poor quality. It would be of great help to hear any input on this, and I'd gladly read any papers or books that discuss the subject. Thanks.
Surrogate modeling - dealing with outliers
CC BY-SA 4.0
null
2023-03-25T16:08:38.747
2023-03-25T16:08:38.747
null
null
320876
[ "machine-learning", "inference", "modeling", "outliers", "surrogate" ]
610696
1
null
null
1
52
I need help with the following table. Regarding the outcome (pcr 3), is there a way to know its p-value? [](https://i.stack.imgur.com/PU9xt.jpg)
Multinomial logistic regression
CC BY-SA 4.0
null
2023-03-25T16:19:34.450
2023-03-26T19:29:29.227
2023-03-25T17:34:55.697
383445
383445
[ "regression", "self-study", "spss" ]
610698
1
null
null
0
23
I want to train a linear classifier for image classification. I have a $\mathbf{W}$ of shape $D\times K$ where $D$ is the dimension of the vectorized version of the image (including bias, herein 3073) and $K$ is the number of classes. The Hinge loss version I want to use is the following \begin{equation} h_i = \max(0, 1 + \max_{j \neq y_i} \mathbf{w}_j\cdot \mathbf{x}_i - \mathbf{w}_{y_i} \cdot {\mathbf{x}_i}) \end{equation} which in case form is \begin{align} h_i = \begin{cases} 1 + \mathbf{w}_{j^*} \cdot \mathbf{x}_i - \mathbf{w}_{y_i} \cdot \mathbf{x}_i , \quad &\text{if } 1 + \mathbf{w}_{j^*} \cdot \mathbf{x}_i - \mathbf{w}_{y_i} \cdot {\mathbf{x}_i} > 0 \\ 0, \quad &\text{otherwise} \end{cases} \end{align} Therefore the calculation of $\nabla_{\mathbf{W}} h_i$ can be broken into 3 cases (in the non-zero branch; the gradient is $\mathbf{0}$ everywhere when the loss is zero): \begin{align} \nabla_{\mathbf{w}_j} h_i = \begin{cases} \mathbf{x}_i, \quad &\text{if } j = j^* \\ -\mathbf{x}_i, \quad &\text{if } j = y_i \\ \mathbf{0}, \quad &\text{otherwise} \end{cases} \end{align} Is my calculation correct? For ease of notation: $$ j^* = \arg\max_{j \neq y_i} \mathbf{w}_j \cdot \mathbf{x}_i $$ and $y_i$ is the (index of the) ground-truth class for sample $i$.
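A finite-difference check of the stated gradient (Python sketch with random data; all names are mine). If the case analysis is right, the numerical and analytic gradients should agree away from the kink of the max:

```python
import random

random.seed(0)
D, K = 5, 3
W = [[random.gauss(0, 1) for _ in range(K)] for _ in range(D)]  # D x K
x = [random.gauss(0, 1) for _ in range(D)]
y = 1  # ground-truth class index

def scores(W):
    return [sum(W[d][j] * x[d] for d in range(D)) for j in range(K)]

def hinge(W):
    s = scores(W)
    jstar = max((j for j in range(K) if j != y), key=lambda j: s[j])
    return max(0.0, 1.0 + s[jstar] - s[y])

def analytic_grad(W):
    s = scores(W)
    jstar = max((j for j in range(K) if j != y), key=lambda j: s[j])
    G = [[0.0] * K for _ in range(D)]
    if 1.0 + s[jstar] - s[y] > 0:
        for d in range(D):
            G[d][jstar] = x[d]  # column j*
            G[d][y] = -x[d]     # column y_i
    return G

# central finite differences, entry by entry
G = analytic_grad(W)
eps, max_err = 1e-6, 0.0
for d in range(D):
    for j in range(K):
        W[d][j] += eps
        up = hinge(W)
        W[d][j] -= 2 * eps
        dn = hinge(W)
        W[d][j] += eps
        num = (up - dn) / (2 * eps)
        max_err = max(max_err, abs(num - G[d][j]))
print(max_err)  # tiny unless a case boundary is crossed by the perturbation
```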
Gradient of multiclass hinge loss (max of max difference version)
CC BY-SA 4.0
null
2023-03-25T16:45:16.937
2023-03-25T16:51:22.467
2023-03-25T16:51:22.467
271176
271176
[ "machine-learning", "loss-functions", "gradient-descent", "hinge-loss" ]
610700
1
610818
null
2
92
I am trying to fit a GLMM in R, with a right-skewed response variable that is theoretically continuous, but in my case ranges between 0.4 and 1.8 with more lower values (it's a biological measurement). I also want to include three categorical predictors and one integer predictor, and have two random grouping variables as well. This is the data structure:

```
    group               pair          trial       var           treatment             type           numInfluencers
 Length:164         Length:164         1:42   Min.   :0.4472   Length:164         Length:164         Min.   :0.000
 Class :character   Class :character   2:56   1st Qu.:0.4876   Class :character   Class :character   1st Qu.:1.000
 Mode  :character   Mode  :character   3:66   Median :0.5538   Mode  :character   Mode  :character   Median :2.000
                                              Mean   :0.6630                                         Mean   :2.085
                                              3rd Qu.:0.7049                                         3rd Qu.:3.000
                                              Max.   :1.7882                                         Max.   :4.000
```

and a histogram of my response: [](https://i.stack.imgur.com/l72pd.png) I tried fitting it with a Gamma distribution, but just keep getting lots of warnings (`In (function (start, objective, gradient = NULL, hessian = NULL, ...: NA/NaN function evaluation`), presumably meaning that the model is not converging. The only somewhat reasonable fit so far I achieved with a Gaussian model and a log-transformed response (using a log link instead was considerably worse). But transforming the variable would not be my preferred approach, and the residuals are also still looking less than ideal: [](https://i.stack.imgur.com/oBot8.png) Any advice on how to approach this and what else I could try? Thank you. Update: I have now fitted a Gamma model with a log link, with a residual plot looking very much like the one above.

```
m <- glmmTMB(var ~ treatment * numInfluencers + type + trial + (1|pair) + (1|group),
             data = df, family = Gamma(link = log))
```

Interestingly, I get a very different residual plot when I fit the same model with lme4:

```
m <- glmer(var ~ treatment * numInfluencers + type + trial + (1|pair) + (1|group),
           data = df, family = Gamma(link = log))
```

[](https://i.stack.imgur.com/xQOVu.png) Why is that? And is this type of residual plot even meaningful here? A QQ plot of this model does not look so bad, actually. Lastly, I am aware that ideally there would be more data for this model. If I try running this model anyway, what would be the best way of assessing the goodness of fit and deciding whether the model is "reasonable"? This is the model output that I am getting right now. I am slightly suspicious because all the variables are significant, but this is also possible. I tried adding a non-meaningful variable to it and it did not show up as significant.

```
 Family: Gamma  ( log )
Formula: var ~ treatment * numInfluencers + type + trial + (1 | pair) + (1 | group)
Data: df

     AIC      BIC   logLik deviance df.resid
   -94.5    -63.5     57.3   -114.5      154

Random effects:
 Conditional model:
 Groups Name        Variance Std.Dev.
 pair   (Intercept) 0.01799  0.1341
 group  (Intercept) 0.02580  0.1606
Number of obs: 164, groups:  pair, 46; group, 4

Dispersion estimate for Gamma family (sigma^2): 0.0565

Conditional model:
                                Estimate Std. Error z value Pr(>|z|)
(Intercept)                     -0.10100    0.10008  -1.009  0.31286
treatmenttreated                -0.23177    0.09579  -2.420  0.01554 *
numInfluencers                  -0.05665    0.01931  -2.934  0.00335 **
typeB                           -0.15956    0.05516  -2.893  0.00382 **
trial2                          -0.17589    0.05727  -3.071  0.00213 **
trial3                          -0.18381    0.05972  -3.078  0.00209 **
treatmenttreated:numInfluencers  0.08041    0.03922   2.050  0.04035 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Help fitting glmm with positive, right skewed, continuous data
CC BY-SA 4.0
null
2023-03-25T17:05:11.063
2023-03-26T22:05:36.550
2023-03-26T15:48:36.673
206741
206741
[ "distributions", "glmm", "gamma-distribution" ]
610701
2
null
610662
0
null
"The problem is, I do not know how to account for this in a logistic regression model." Interesting idea. Can you provide more information here? Specifically, how you are coding your outcome variables. The logistic regression model can only predict what you operationalize as your dependent variable. So if you have a multinomial model, which is what I'm assuming you have, the outcome, whether 10 or 4 teams, can only be whatever you code it as.
null
CC BY-SA 4.0
null
2023-03-25T17:06:10.647
2023-03-25T17:06:10.647
null
null
383476
null
610702
1
null
null
0
14
A simple question here: what would you call a factor not associated with the exposure but with the outcome in a historical cohort? For example, I'm currently studying this article ([https://pubmed.ncbi.nlm.nih.gov/36087610/](https://pubmed.ncbi.nlm.nih.gov/36087610/)) and in it, they excluded patients with intellectual disability (ID) of metabolic cause. But this is not a confounding factor, right? So what can it be? To sum up, we study the risk of autism or intellectual disability in patients with an antecedent of maternal infection during pregnancy.
Confounding factor?
CC BY-SA 4.0
null
2023-03-25T17:11:56.103
2023-03-25T17:11:56.103
null
null
378883
[ "epidemiology" ]
610703
2
null
610694
0
null
I think I understand what you are asking. I would use MICE (multiple imputations by chained equations). This has the advantage of using multiple imputations, rather than single mean imputations, which is frowned upon nowadays. [https://datascienceplus.com/imputing-missing-data-with-r-mice-package/](https://datascienceplus.com/imputing-missing-data-with-r-mice-package/) ^ here is a really easy and detailed method for it in R [https://cran.r-project.org/web/packages/miceRanger/vignettes/miceAlgorithm.html](https://cran.r-project.org/web/packages/miceRanger/vignettes/miceAlgorithm.html) ^ This also has some information on it too. Helpful for a methods section
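The chained-equations idea behind MICE can be sketched in a few lines (pure Python, toy numbers echoing the question; the real mice package adds random draws from the predictive distribution and pools over multiple imputed datasets, which this single deterministic pass omits):

```python
# Toy version of the chained-equations idea: regress the incomplete
# variable on a complete one, then fill the gaps with predictions.
xs = [70.0, 45.0, 60.0, 55.0, 65.0]
ys = [12.0, 26.0, 18.0, 21.0, None]   # last value missing

def fit_simple_ols(x, y):
    """Slope and intercept of y ~ x by least squares on complete pairs."""
    pairs = [(a, b) for a, b in zip(x, y) if b is not None]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    sxx = sum((a - mx) ** 2 for a, _ in pairs)
    sxy = sum((a - mx) * (b - my) for a, b in pairs)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = fit_simple_ols(xs, ys)
imputed = [b if b is not None else intercept + slope * a
           for a, b in zip(xs, ys)]
print(imputed[-1])  # conditional-mean imputation for the missing entry
```

With several incomplete columns, MICE cycles through them, each time regressing one column on all the others and re-imputing, until the imputations stabilise.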
null
CC BY-SA 4.0
null
2023-03-25T17:14:01.697
2023-03-25T17:14:01.697
null
null
383476
null
610705
2
null
610636
1
null
The [quantitative real-time polymerase chain reaction](https://en.wikipedia.org/wiki/Real-time_polymerase_chain_reaction#) (qPCR), the basis for this question, subjects a sample to a series of reaction cycles. Ideally, each cycle doubles the amount of a specific DNA sequence. A fluorescent dye reports the (amplified) amount after each cycle. If the original amount of the DNA of interest is $x$, then the amount after $C$ cycles is ideally $2^Cx$. This makes it natural to work in the $\log_2$ scale, transforming that amplified amount to $\log_2 x +C$. This question is whether it's best to work in the log scale related to the number of cycles, or in some other scale. The short answer: it's best to work in the log scale of $x$, using linear combinations of the associated cycle numbers to correct for differences in sample size or differences in pre-processing (e.g., reverse transcription of RNA to DNA). That's because linear modeling works best when error magnitudes are independent of the magnitudes of the observations, and in qPCR that assumption holds best in the scale of cycle numbers. See, for example, [this answer](https://stats.stackexchange.com/a/126352/28500). Details In practice, at low cycle numbers the specific fluorescence can't be distinguished from background fluorescence or instrumental noise, and at high cycles the consumption of reagents limits the fluorescence signal. That provides a sigmoid curve of fluorescence versus cycle number, as noted in the question. Quantitation thus starts with the number of cycles that it takes for the fluorescence generated from a sample to pass a pre-defined threshold value that's higher than background but well below the limiting value. That number of cycles is called the `Ct` or `Cq` for the sample. 
If you have two sets of qPCR reaction cycles with starting DNA amounts $x_1$ and $x_2$, then given their individual `Ct` values you can (ideally) compare their DNA amounts as follows: $$\log_2 x_2 + \text{Ct}_2 = \log_2 x_1 + \text{Ct}_1 $$ $$\log_2 \frac{x_2}{x_1} = \text{Ct}_1 -\text{Ct}_2 =\Delta \text{Ct} .$$ That can be used to correct for differences in sample amounts or in pre-processing by reverse transcription from RNA to DNA. Then $x_2$ is the DNA of specific interest in an experiment and $x_1$ is a reference DNA unaffected by the experimental manipulations, each determined from the same biological sample. (The `dCt_post` and `dCt_pre` in this question are written in the opposite direction, with the question's `dCt` the negative of $\Delta \text{Ct}$ as written above.) The variance of a $\Delta \text{Ct}$ estimate, based on the formula for the [variance of a weighted sum of correlated variables](https://en.wikipedia.org/wiki/Variance#Weighted_sum_of_variables), is: $$\text{Var} (\Delta \text{Ct}) = \text{Var} (\text{Ct}_1) + \text{Var} (\text{Ct}_2) - 2\,\text{Cov} (\text{Ct}_1, \text{Ct}_2), $$ where $\text{Cov}$ is the covariance (representing the shared errors between $x_2$ and $x_1$ due to their being analyzed in the same original sample). As noted above, these variances are typically constant over wide ranges of `Ct` values, with a normal error distribution often a reasonable approximation in the `Ct` scale. Thus linear modeling is best done via linear combinations of `Ct` values, as used to calculate $\Delta \text{Ct}$ values. The proposed ratios of $\Delta \text{Ct}$ values, or the logs of such ratios, don't make a lot of sense. They take you out of the `Ct` scale in which errors are well behaved, and they take you even farther away from the original DNA values $x$. I have seen calculations based on things like $2^{\Delta \text{Ct}}$ to transform back to the $x$ scale of initial DNA amounts. 
But that also takes you out of the scale in which errors are well behaved. You might consider that type of transformation at the end of analysis, to express values and confidence intervals in terms of DNA amounts. But linear modeling, associated intermediate calculations, and statistical tests should be on the `Ct` scale where the underlying assumptions are best met. Standard curves can be used to correct for inefficiency in PCR, less than the ideal doubling of DNA in each cycle. Standard curves can give estimates for $x_1$ and $x_2$ in original DNA amounts. But as those are still based on `Ct` values, it's best to work with those amounts in the logarithmic scales most directly associated with their `Ct` values.
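To make the arithmetic concrete, here is a small Python sketch of the $\Delta \text{Ct}$ and $\Delta\Delta \text{Ct}$ calculation with made-up `Ct` values, assuming ideal doubling per cycle; only the very last line leaves the `Ct` scale:

```python
# Worked Delta-Ct arithmetic, assuming ideal doubling per cycle.
# The Ct values below are invented illustrations, not real measurements.
ct_target_treated, ct_ref_treated = 24.0, 18.0
ct_target_control, ct_ref_control = 22.0, 18.5

# Normalize each sample against its reference gene (still on the Ct scale):
d_ct_treated = ct_target_treated - ct_ref_treated    # 6.0
d_ct_control = ct_target_control - ct_ref_control    # 3.5

# Compare conditions, still on the Ct scale:
dd_ct = d_ct_treated - d_ct_control                  # 2.5

# Only at the very end transform back to the amount scale:
fold_change = 2 ** (-dd_ct)
print(dd_ct, fold_change)  # 2.5 and ~0.177: target down ~5.7-fold in treated
```

All intermediate statistics (means, standard errors, tests) would be computed on the `Ct`-scale quantities; `fold_change` is only a final reporting convenience.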
null
CC BY-SA 4.0
null
2023-03-25T18:07:15.267
2023-03-25T18:07:15.267
null
null
28500
null
610707
1
null
null
0
25
I need to adjust my data set (student grades) to fit what the school wants. I also don't want any students to fail (there are a couple who are). On the other hand, several students did a bunch of extra credit and have ended up over 100%. The university wants me to have only a handful of students with grades over 90. They also would prefer if I didn't fail anyone. Essentially, I need to tighten the statistical bell curve according to some specific constraints. Here are the requirements:

- Constrain all values to 60-100
- Only 5 values >90
- The majority with values between 70-85

I am using Google Sheets. I can use MS Excel. I don't use R (although it's very cool). I need to know what formulas I can use to do this. I'm neither a programmer nor a mathematician. I have scoured the web searching for the answer to what seems like a simple problem and not found one. I understand that it is probably because I took statistics 25 years ago and don't remember the lingo, so please be gentle... I'm referencing a previous question here that was similar, but none of the responses answered my question. [Customization of a standard Bell Curve](https://stats.stackexchange.com/questions/192589/customization-of-a-standard-bell-curve) Thanks to anyone with any ideas on how to do this...
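A minimal sketch of one option, assuming a simple linear (min-max) rescale is acceptable: map the lowest raw grade to 60 and the highest to 100. The Python below mirrors a single spreadsheet formula; the anchor values are choices, not requirements, and a further nonlinear tweak would be needed to guarantee only five students land above 90:

```python
# Piecewise-linear rescaling of raw grades into [60, 100]; the same
# arithmetic works as one spreadsheet formula. Grades are made up.
raw = [52.0, 61.0, 74.0, 88.0, 95.0, 104.0]   # some below 60, some above 100

lo_raw, hi_raw = min(raw), max(raw)
lo_new, hi_new = 60.0, 100.0

def rescale(g):
    # Sheets equivalent: =60 + (A1 - MIN(A:A)) / (MAX(A:A) - MIN(A:A)) * 40
    return lo_new + (g - lo_raw) / (hi_raw - lo_raw) * (hi_new - lo_new)

curved = [round(rescale(g), 1) for g in raw]
print(curved)  # lowest student -> 60.0, highest -> 100.0, order preserved
```

Because the map is monotone, no student's rank changes; to compress the top end (so fewer students exceed 90) you could apply a concave function such as a fractional power to the rescaled grades before the final mapping.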
How can I adjust a bell curve, tightening it under specific constraints?
CC BY-SA 4.0
null
2023-03-25T18:12:08.670
2023-03-25T20:48:14.233
2023-03-25T18:21:56.100
384134
384134
[ "weighted-mean" ]
610708
2
null
192589
0
null
So I understand the question as asking for an algorithm to generate random numbers according to a given pdf shape. A common way of generating random numbers from an "arbitrary" 1D distribution is to use the [inverse CDF](https://en.wikipedia.org/wiki/Inverse_transform_sampling). First:

- Come up with your desired pdf.
- Calculate the cumulative distribution function.
- Calculate the inverse function icdf(U) (e.g., by a lookup table).

Then, to generate a new random number x with the desired pdf:

- Generate a uniform between 0 and 1, e.g. $u = 0.3$.
- Output the corresponding value of the inverse CDF, $x = icdf(0.3)$.
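The steps above can be sketched in Python with a lookup table, using a pdf whose inverse CDF is known exactly so the table can be checked (f(x) = x/2 on [0, 2], whose inverse CDF is $2\sqrt{u}$):

```python
import bisect

# Inverse-CDF sampling via a lookup table.
# Target pdf: f(x) = x/2 on [0, 2]; exact inverse CDF is 2*sqrt(u).
N = 10_000
xs = [2.0 * i / N for i in range(N + 1)]
pdf = [x / 2.0 for x in xs]

# cumulative trapezoid integration -> CDF table
cdf = [0.0]
for i in range(1, len(xs)):
    cdf.append(cdf[-1] + 0.5 * (pdf[i] + pdf[i - 1]) * (xs[i] - xs[i - 1]))
cdf = [c / cdf[-1] for c in cdf]   # normalize so the table ends exactly at 1

def icdf(u):
    """Lookup: smallest tabulated x whose CDF value reaches u."""
    return xs[bisect.bisect_left(cdf, u)]

sample = icdf(0.3)
print(sample, 2 * 0.3 ** 0.5)  # table value vs exact inverse, both ~1.095
```

In practice `u` would come from a uniform random generator; the fixed `u = 0.3` here just makes the table easy to verify against the closed form.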
null
CC BY-SA 4.0
null
2023-03-25T18:16:51.100
2023-03-25T18:16:51.100
null
null
27556
null
610709
1
610967
null
7
186
I came across this question on [Quora](https://www.quora.com/You-have-an-unfair-coin-for-which-heads-turns-up-with-probability-p-3-5-You-flip-the-coin-repeatedly-until-there-have-been-more-heads-than-tails-How-many-flips-on-average-does-this-take?ch=15&oid=156680462&share=e4f681a2&srid=yB8v1&target_type=question). > You have an unfair coin for which heads turns up with probability $p=\frac 35$. You flip the coin repeatedly until there have been more heads than tails. How many flips on average does this take? Let $X$ be the random variable denoting the number of flips needed to achieve more heads than tails. I got interested in this question and tried to find the probability distribution of $X$. It seems to me that $P(X=2n)=0$, i.e., it's not possible to achieve more heads than tails in an even number of flips. For an odd number of flips, i.e., $X=2n+1$, there must be $n+1$ heads and $n$ tails, but I'm pretty confused about how many suitable arrangements are possible. The answers already given on Quora suggest that $\mathbb E[X]=5$ based on simulation programs. Is there a closed-form expression for the probability distribution of $X$? Any help would be appreciated. Edit: From the comments, I learned that the order of outcomes when $X=2n+1$ resembles Dyck words. You can read my [Quora answer](https://www.quora.com/You-have-an-unfair-coin-for-which-heads-turns-up-with-probability-p-3-5-You-flip-the-coin-repeatedly-until-there-have-been-more-heads-than-tails-How-many-flips-on-average-does-this-take/answer/Sagnik-Saha-82?ch=15&oid=1477743653383505&share=7e37f9ef&srid=yB8v1&target_type=answer).
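The Dyck-word observation pins down the counting: the number of valid orderings of $n+1$ heads and $n$ tails that first go strictly positive at step $2n+1$ is the Catalan number $C_n$, giving $P(X=2n+1) = C_n\, p^{n+1} q^n$. A numerical sketch (Python) summing this series, computing each term from the previous one to avoid overflow, recovers total probability 1 and mean $1/(p-q) = 5$:

```python
# P(X = 2n+1) = C_n * p^(n+1) * q^n, with C_n the n-th Catalan number.
p, q = 3 / 5, 2 / 5

term = p          # n = 0 term: C_0 * p^1 * q^0 = p
total = 0.0
mean = 0.0
for n in range(2000):
    total += term
    mean += (2 * n + 1) * term
    # C_{n+1} / C_n = 2(2n+1) / (n+2); each extra n adds one factor of p*q
    term *= 2 * (2 * n + 1) / (n + 2) * p * q

print(total, mean)  # -> approximately 1.0 and 5.0
```

The terms decay like $(4pq)^n = 0.96^n$, so 2000 terms leave a negligible tail; the limit $1/(p-q)$ matches the standard first-passage mean for an asymmetric random walk.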
Flipping an unfair coin until there are more heads
CC BY-SA 4.0
null
2023-03-25T18:19:36.067
2023-03-30T12:46:35.320
2023-03-28T08:04:03.320
362671
380075
[ "probability", "distributions", "combinatorics" ]
610711
1
null
null
0
25
I am currently taking an econometrics course and in the lecture notes there is a statement about the variance of an OLS estimator that I am unable to prove. The statement is as follows: Suppose the true model is given by $Y_i=α+βX_i+ϵ_i$ and the standard LRM assumptions hold. However, one runs a regression using the model $Y_i=α+βX_i+\gamma W_i +ϵ_i$, where $W_i$ is an irrelevant variable. Then $Var(\hat\beta_n|x,w)=\frac{1}{(1-R_1^2)}\frac{\sigma^2}{x^TM_ix}$, where $M_i$ is the residual maker matrix and $R_1^2$ is the $R^2$ in the regression of $X_i$ on $W_i$ If someone could explain how to prove this claim, I'd be very thankful!
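As a sanity check of the claim (not a proof), the variance-inflation identity can be verified numerically in the two-regressor case: after centering, the $\hat\beta$ entry of $\sigma^2 (Z^\top Z)^{-1}$ has a closed form that must equal $\sigma^2 /\big((1-R_1^2)\sum_i (x_i-\bar x)^2\big)$. Data below are made up:

```python
# Numerical check of the variance-inflation formula for two-regressor OLS.
x = [1.0, 2.0, 4.0, 7.0, 11.0]
w = [0.5, 1.0, 3.5, 6.0, 9.0]   # correlated with x but not collinear
sigma2 = 1.0

n = len(x)
mx, mw = sum(x) / n, sum(w) / n
xc = [v - mx for v in x]
wc = [v - mw for v in w]

sxx = sum(v * v for v in xc)
sww = sum(v * v for v in wc)
sxw = sum(a * b for a, b in zip(xc, wc))

# Var(beta_hat) by inverting the centered 2x2 cross-product matrix directly:
var_direct = sigma2 * sww / (sxx * sww - sxw ** 2)

# Same quantity via the R^2 of the auxiliary regression of x on w:
r1_sq = sxw ** 2 / (sxx * sww)
var_formula = sigma2 / ((1 - r1_sq) * sxx)

print(var_direct, var_formula)  # identical up to floating-point rounding
```

Algebraically the two expressions coincide because $s_{ww}/(s_{xx}s_{ww} - s_{xw}^2) = 1/\big(s_{xx}(1 - s_{xw}^2/(s_{xx}s_{ww}))\big)$, which is exactly the step the proof needs after applying the partitioned-inverse (Frisch-Waugh-Lovell) argument.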
Proof of a statement about the variance of an OLS estimator
CC BY-SA 4.0
null
2023-03-25T18:34:07.940
2023-03-25T18:34:07.940
null
null
384137
[ "variance", "least-squares", "r-squared" ]
610712
1
null
null
0
42
I am training a model for cancer detection using chest CT scan images. The training set is 70%, the testing set 20%, and the validation set 10%. The data contain three chest cancer types (adenocarcinoma, large cell carcinoma, squamous cell carcinoma) and one folder for normal cells. The model gives an accuracy of 89% after training on the CT scan images, but an accuracy of 35% on testing and validation, with low precision, recall, and F1-score. Here is the code:

```
res_model = ResNet50(include_top=False, pooling='avg', weights='imagenet',
                     input_shape=(IMAGE_SHAPE))
for layer in res_model.layers:
    if 'conv5' not in layer.name:
        layer.trainable = False

with strategy.scope():  # use TPU/GPU strategy
    res_model = ResNet50(include_top=False, pooling='avg', weights='imagenet',
                         input_shape=(IMAGE_SHAPE))
    for layer in res_model.layers:
        if 'conv5' not in layer.name:
            layer.trainable = False

    resnet_model = Sequential()
    resnet_model.add(res_model)
    resnet_model.add(Dropout(0.4))
    resnet_model.add(Flatten())
    resnet_model.add(BatchNormalization())
    resnet_model.add(Dropout(0.4))
    resnet_model.add(Dense(N_CLASSES, activation='softmax'))

    resnet_model.summary()

    adam_optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=0.00001, decay=1e-5)

    # compiling the model
    resnet_model.compile(optimizer=adam_optimizer,
                         loss='categorical_crossentropy',
                         metrics=['accuracy'])  # need to update to use F1 score too

print(resnet_model.summary())
```

On uploading an image, the model does not predict the correct class the image belongs to, i.e. the type of cancer it shows. I'm using ResNet50 for training and the Adam optimizer. I have also tried it with VGG16. Most of the time it gives the wrong name. I think that is happening because the accuracy, precision, recall, and F1-score are low on testing and validation. I don't know how to increase the accuracy or what to do so it can predict the correct class of an image. Any suggestions for increasing the test and validation accuracy and improving the confusion matrix, so the model gives more accurate predictions on uploaded images?
Model gives 89% accuracy after training on CT scan images but 35% on testing, with low precision, recall, and F1-score
CC BY-SA 4.0
null
2023-03-25T18:39:58.227
2023-03-25T19:15:51.727
null
null
384138
[ "machine-learning", "neural-networks", "python", "conv-neural-network", "image-processing" ]
610713
2
null
610591
0
null
The optimal model of mine is from this syntax:

```
fm1 <- glmer(answer ~ (1|subj) + (1|item) + see*conversation*mask,
             data = analysis1, family = binomial,
             control = glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 2e5)))
```

So I have three variables. They're all categorical. The first variable has two categories; the second one has two categories; and the last one has three categories. The optimal model is the model with the three-way interaction, which means all variables are needed to explain the finding. And this is the output I've got from the emmeans package to see if there are significant differences in each contrast: [](https://i.stack.imgur.com/eCMiZ.jpg) An example of my interpretations is: 'The participants received significantly lower scores in the ao, clear, dm context than in the ao, con, dm context (b = -4.74, SE = 0.60, p < 0.01).' However, after running your syntax, it gives me this: [](https://i.stack.imgur.com/QIHS6.jpg) So now I wonder how the output from the OR syntax fits with the comparisons of each pair contrast. Could you please advise me on this?
null
CC BY-SA 4.0
null
2023-03-25T18:43:42.567
2023-03-25T18:43:42.567
null
null
40023
null
610714
2
null
610712
0
null
You described your method extensively but haven't gone through the data and sampling. Is your data balanced? I am guessing not. If so, then looking at the accuracy may be misleading. Usually, the misdirection is in one direction: you'd see high accuracy even though your model hasn't learned anything. Since, from your description, you have such stark differences, the obvious suspect is overfitting. But before you go on, it is good to verify for yourself that your datasets have the same distribution. This is actually an assumption; otherwise, you can't really tell whether your model learned anything. You are using ResNet, which is a huge network, and therefore you probably have too high variance in your model. Your model is very good at predicting examples from your training set, since having such high variance allows it to fit itself to the training data. The first and easiest thing to do is to regularize your model. Start by adding a penalty to the weights; I would go with L1, since where you're at, you might want to "turn off" buttons in the network. Second, I would also add dropout. You should expect a decrease in your performance on training, accompanied by an increase in your performance on the test set. Another, less trivial approach is to get more data. If your data is too small, then your model, after training on such a small training set, will fail to generalize to new data (even if your targets have the same distribution across your samples). One last piece of advice: keep your test set aside and don't look at it. You don't want to contaminate your work.
null
CC BY-SA 4.0
null
2023-03-25T18:55:47.037
2023-03-25T19:01:06.057
2023-03-25T19:01:06.057
285927
285927
null
610715
2
null
610712
1
null
This is what happens when you overfit the training data. Your model identifies idiosyncratic characteristics of the training data that are specific to that data sample but don't generalize to other data samples. So what seems to be excellent performance on the training data does not carry over to new data. [An Introduction to Statistical Learning](https://www.statlearning.com) discusses overfitting and how to minimize it throughout the text. Considerations specific to neural networks are in Chapter 10. In outline, to minimize overfitting you can either have the network learn more slowly or penalize the model's parameters. Also, consider whether a single train/test split is the best way to evaluate your modeling. In other contexts [Frank Harrell recommends resampling-based validation](https://www.fharrell.com/post/split-val/) of the modeling process unless you have tens of thousands of cases. I suspect that similar considerations apply to this type of image classification, although I don't have much experience with it.
null
CC BY-SA 4.0
null
2023-03-25T19:15:51.727
2023-03-25T19:15:51.727
null
null
28500
null
610716
1
null
null
0
32
I have a dataset with 200 groups, and 50-300 observations per group. The target I'm trying to predict is a strictly positive financial metric, which varies over 5+ orders of magnitude between groups but is roughly the same order of magnitude within every group. In addition, some of the input predictors match the magnitude of the target closely. I want to fit a single model to make predictions for any group. I've tried different approaches to dealing with this problem, such as log-scaling the targets. So far the models I've produced have significantly worse evaluation metrics on groups with small targets. I have tried mean absolute percentage error, and this still tends to result in a model that produces good outputs for groups with big targets and useless predictions for groups with small targets. One thing I'd like to consider is adjusting the weights of a loss function like MSE. For an error of \$1000, if group A has targets in the range \$1000-\$5000, and group B has targets in the range \$10,000,000-\$25,000,000, the prediction error isn't equally meaningful. Does it make sense to do something like an (inverse) power-law transform to calculate weights for each group, and increase the weighting of observations from low-magnitude groups in the loss function? I guess it will be subjective, but how do I choose how much more heavily group A should be weighted than group B, i.e., the parameter of the power-law transform?
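One concrete way to set up such a weighted loss (a sketch; the numbers are illustrative and the exponent `gamma` is exactly the tunable parameter asked about): weighting each squared error by $1/|y|^\gamma$ interpolates between plain MSE at $\gamma = 0$ and squared relative error at $\gamma = 2$:

```python
# Weighted MSE that interpolates between absolute error (gamma = 0) and
# relative error (gamma = 2). With gamma = 2, a $1000 miss on a $2000
# target counts the same as a $10M miss on a $20M target.
def weighted_mse(y_true, y_pred, gamma):
    n = len(y_true)
    return sum((t - p) ** 2 / abs(t) ** gamma
               for t, p in zip(y_true, y_pred)) / n

y_true = [2_000.0, 20_000_000.0]   # one small-group, one big-group target
y_pred = [3_000.0, 30_000_000.0]   # both predictions 50% high

mse = weighted_mse(y_true, y_pred, gamma=0.0)
rel = weighted_mse(y_true, y_pred, gamma=2.0)
print(mse, rel)  # plain MSE is dominated by the big group; gamma=2 gives 0.25 for both
```

Intermediate values of `gamma` trade off the two regimes; one way to choose it is by cross-validating the per-group evaluation metric you actually care about across a small grid of `gamma` values.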
Loss function weighting in regression when the target varies orders of magnitude between groups
CC BY-SA 4.0
null
2023-03-25T19:23:35.840
2023-03-25T19:53:40.353
2023-03-25T19:53:40.353
285647
285647
[ "regression", "multilevel-analysis", "normalization", "finance" ]
610717
1
610725
null
1
32
> Technically, endogeneity occurs when a predictor variable (x) in a regression model is correlated with the error term (e) in the model. This can occur under a variety of conditions, but two cases are especially common in inequality research: (1) when important variables are omitted from the model (called “omitted variable bias”) and (2) when the outcome variable is a predictor of x and not simply a response to x (called “simultaneity bias”). At least part of the latter problem is often called “selection.” Hello, with reference to the above, can I ignore simultaneity bias if my key independent variable occurs before the dependent variable? Specifically, my independent variable is a requirement that remains fixed, and I want to study the impact of this requirement on performance or outcomes. Also, there are no empirical studies that determine the level of the requirement. In the case I am working on, the requirement varies at the group level. Some groups have a higher requirement and some a lower one, and I want to test the effect of the level of the requirement on their performance. There is no disclosure as to how this requirement is determined for different groups. From private sources I know that the groups can negotiate by demonstrating why the requirement must be lowered, or can willingly accept a higher requirement.
When can I ignore endogeneity problem?
CC BY-SA 4.0
null
2023-03-25T19:57:47.657
2023-03-25T21:00:10.397
null
null
369093
[ "validation", "post-hoc", "robust", "endogeneity", "simultaneity" ]
610718
1
null
null
1
72
In introductory books one sees the following definition of the sample proportion: if $X = (x_1,...,x_n)$ is our sample of length $n$, consisting of $0$s and $1$s, then the sample proportion is $\hat{p} = \frac{\sum_{k=1}^{n}x_k}{n}$. We define our sample $\xi=(\xi_1,...,\xi_n)$ as a sample where each random variable has a Bernoulli distribution with unknown parameter $0 < p < 1$. So the sample proportion in this case is by definition just the sample mean, $\frac{\sum_{k=1}^{n}\xi_k}{n}$, with $\mathbb{E}[\xi_1] = p$ and $\mathbb{V}ar[\xi_1] = p-p^2$. I want to understand the formal derivation of the confidence interval for this statistic. As we know from the central limit theorem, $\frac{\xi_1+...+\xi_n}{n}\xrightarrow{d} \mathcal{N}(p, \frac{p-p^2}{n})$ and we can get confidence intervals for $p$ from this. [In my other question I was thoroughly answered about](https://stats.stackexchange.com/a/610839/378446) this and got why one cannot use such notation and what people mean when they say that it's "approximately" normal. So only one question remains here: Is "sample proportion" just a synonym for "sample mean" in the case when our sample comes from a Bernoulli distribution? To be more clear, we say that $\overline{\xi}= \frac{\sum_{k=1}^{n}\xi_k}{n}$ is a sample proportion iff $\xi = (\xi_1, ...,\xi_n) : \forall 1 \leq i \leq n \ \ \xi_i \sim Bern(p)$. I just don't understand why, for example, John A. Rice in his "Mathematical Statistics and Data Analysis, Third Edition" on page 214 introduces both the sample mean and the sample proportion and doesn't say that the sample proportion is just the sample mean in a particular case.
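To make the CLT-based interval concrete, here is a small numerical sketch of the usual Wald interval (illustrative numbers; the sample proportion is just the sample mean of the Bernoulli draws, and the interval follows from the asymptotic normality above with $p$ estimated by $\hat p$):

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) interval for the Bernoulli parameter p:
    sample proportion (= sample mean) +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

lo, hi = wald_ci(40, 100)   # 40 ones out of 100 draws
```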
Formal definition of sample proportion
CC BY-SA 4.0
null
2023-03-25T19:59:50.377
2023-03-31T10:53:23.187
2023-03-31T10:53:23.187
378446
378446
[ "self-study", "mathematical-statistics", "mean" ]
610719
2
null
610707
1
null
Depending on the original distribution of values, this may not be a simple transformation. My suggestion † would be to break the original values into categories (e.g. < 60, 60–64, 65–69, and so on), and then decide what value you want these to be transformed to. It would be easy to put this algorithm into Excel. But you might have to change it every marking period to get the final distribution you want. --- † Aside from telling the university that they don't understand what student grades are supposed to mean, so they might as well just give up on the concept entirely.
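A minimal sketch of the category mapping (the cutoffs and curved values below are purely illustrative; in Excel the same logic is a LOOKUP table or nested IFs):

```python
def curve_grade(raw, bins):
    """Map a raw grade to a curved grade using explicit categories.
    `bins` is a list of (lower_bound, curved_value) pairs, highest bound first."""
    for lower, curved in bins:
        if raw >= lower:
            return curved
    return bins[-1][1]

# illustrative cutoffs only -- these would need adjusting each marking period
bins = [(90, 95), (80, 88), (70, 80), (65, 75), (60, 70), (0, 55)]
```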
null
CC BY-SA 4.0
null
2023-03-25T20:02:28.877
2023-03-25T20:48:14.233
2023-03-25T20:48:14.233
166526
166526
null
610720
1
null
null
0
96
Several articles I've read state that MCA and PCA both work as dimensionality-reduction tools, but MCA is used for categorical variables and PCA for numerical variables. But is there any difference in the results the two methods produce? I have never used MCA or PCA before, so my knowledge is limited to only this. Let's say I have 9 variables, all of them about power over deciding different types of expenditure in the household, all on the same 5-point Likert scale (1: least power to 5: most power). Then I run both MCA and PCA (PCA passed the Bartlett Sphericity Test & KMO Measure) and obtain the predictor variables. If I generate a variable that is the sum of all responses from the 9 variables (so a maximum of 45) and run a twoway scatterplot against the MCA predictor and the PCA predictor separately, I get the following: the sum-of-responses variable has a positive relationship with the PCA predictor, while it has a negative relationship with the MCA predictor. My question is: is this normal? Should MCA and PCA results be similar? How do I interpret both results?
What is the difference between Multiple Correspondence Analysis and Principal Component Analysis result?
CC BY-SA 4.0
null
2023-03-25T20:06:03.577
2023-03-25T20:06:03.577
null
null
382735
[ "pca", "correspondence-analysis" ]
610721
2
null
526467
1
null
Your reference link is broken. However, there is no theoretical reason you can immediately arrive at $T^*V^*=V^*$ merely from the definitions, where $T^*$ is assumed to be the Bellman optimality operator and $V^*$ is assumed to be the optimal state-value function. This theorem is exactly an attempt to establish that unavoidable conclusion. First the author assumes $V$ to be the fixed point of $T^*$, which can be proved to be a contraction map by standard analysis; but in order to achieve the above goal an additional assumption is required: there exists a policy $\pi$ which is greedy w.r.t. $V$. Then we immediately have $T^{\pi}V=T^*V$, since $T^*$ is greedy by definition, and $T^{\pi}V=V$ by the fixed-point assumption of the theorem, which is nothing but the ordinary Bellman equation for the policy $\pi$. Now, by your reasoning and background knowledge, if $V^*$ is the optimal state value then the said $\pi$ must be its corresponding policy, which is also optimal. So the remaining work of this theorem is to prove that the fixed point is indeed the optimal state value; in fact, the value iteration algorithm was inspired by such a proof.
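As an illustration of that last point, here is a minimal value iteration sketch on a toy deterministic MDP (all names and numbers are illustrative): repeatedly applying the Bellman optimality operator $T^*$ drives any starting $V$ to its unique fixed point, precisely because $T^*$ is a $\gamma$-contraction.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-10):
    """Iterate V <- (T* V)(s) = max_a [ R(s,a) + gamma * sum_s' P(s,a,s') V(s') ].
    Since T* is a gamma-contraction, this converges to its unique fixed point."""
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * (P @ V)          # Q[s, a]; P has shape (s, a, s')
        V_new = Q.max(axis=1)            # greedy backup = applying T*
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# toy 2-state, 2-action MDP: action 0 stays, action 1 moves to the other state
P = np.zeros((2, 2, 2))
P[0, 0, 0] = P[0, 1, 1] = P[1, 0, 0] = P[1, 1, 1] = 1.0
R = np.array([[0.0, 1.0],    # rewards in state 0 for actions 0, 1
              [2.0, 0.0]])   # rewards in state 1 for actions 0, 1
V = value_iteration(P, R)
```

The returned `V` satisfies the Bellman optimality equation $V = T^*V$ up to the tolerance, which is the fixed-point property the theorem is about.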
null
CC BY-SA 4.0
null
2023-03-25T20:06:10.543
2023-03-25T20:06:10.543
null
null
371017
null
610722
1
null
null
0
9
I have spent a solid hour trying to remember the technical term for this. Suppose you have a between-subjects lab design with one control and two treatment conditions. What is the term to describe when some members of T1 inadvertently also receive the treatment from T2? What do we call this violation of internal validity? Thanks,
Experiment terminology: control participants inadvertently receive treatment
CC BY-SA 4.0
null
2023-03-25T20:15:38.303
2023-03-25T20:15:38.303
null
null
318236
[ "anova", "experiment-design", "terminology", "research-design" ]
610723
1
null
null
0
19
I have been reading up on nesting in linear mixed effects modelling, and typically nesting is used for random effects. However, if I want to estimate the effects of language and type of word for each connection between two regions in a hypothesised network, - Would it make intuitive sense for the effects of type and language to be nested within each level of connection? I understand that nesting fixed effects does not change the model predictions, etc. But I just wanted to find out if it made intuitive sense. - If I do find a significant interaction between a specific level of region-to-region connection and type or language, is it possible to conduct post-hoc pairwise comparisons just at that level?
Nested Fixed Effects
CC BY-SA 4.0
null
2023-03-25T20:49:04.557
2023-03-25T20:49:04.557
null
null
379720
[ "mixed-model", "linear-model", "nested-models", "neuroimaging" ]
610724
2
null
610341
1
null
> acf = a*exp(b.*lags) You can get this type of autocovariance function with an AR1 process $$X_{t} = \phi X_{t-1} + w_{t} $$ where the $w_t$ are independently and identically distributed as $N(0,\sigma^2)$; this gives an acf of the form: $$\text{ACF}(\text{lag}) = \frac{\sigma^2}{1-\phi^2} \phi^{\text{lag}} $$ (see the question [Finding the ACF of AR(1) process](https://math.stackexchange.com/questions/335452/finding-the-acf-of-ar1-process)) You can find $\phi$ and $\sigma$ from your $a$ and $b$ if you compare with your function and set $\text{lag} = 0$ and $\text{lag} = 1$ $$\frac{\sigma^2}{1-\phi^2} = a$$ $$\frac{\sigma^2}{1-\phi^2} \phi = a \exp(b)$$ from which it follows that $\phi = \exp(b)$ and $\sigma^2 = a[1-\exp(2b)]$ (with $b < 0$ for a decaying acf, so that $0 < \phi < 1$ and $\sigma^2 > 0$).
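A quick numerical check of the matching (taking the acf as $a\exp(b\cdot\text{lag})$ with $b<0$, so $\phi = \exp(b)$ and $\sigma^2 = a[1-\exp(2b)]$; the numbers are illustrative):

```python
import math

a, b = 2.0, -0.5                     # target acf(lag) = a * exp(b * lag), b < 0
phi = math.exp(b)                    # implied AR(1) coefficient
sigma2 = a * (1 - math.exp(2 * b))   # implied innovation variance

# the implied AR(1) autocovariance should reproduce a * exp(b * lag) at every lag
implied = [sigma2 / (1 - phi**2) * phi**lag for lag in range(5)]
target = [a * math.exp(b * lag) for lag in range(5)]
```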
null
CC BY-SA 4.0
null
2023-03-25T20:55:18.563
2023-03-26T20:44:48.790
2023-03-26T20:44:48.790
164061
164061
null
610725
2
null
610717
0
null
You should ask yourself: "who decided to impose such requirements? what was the rationale behind them? is the reason related to variables that I'm not including in my model (among the $X$s) and to the outcome variable?". If the requirements happen to be motivated by previous outcomes, then you have omitted variable bias. If the requirements are decided exogenously/independently of your outcome measure, this would not be a potential channel of bias (there could be others though, as usual).
null
CC BY-SA 4.0
null
2023-03-25T21:00:10.397
2023-03-25T21:00:10.397
null
null
135461
null
610726
2
null
177543
0
null
Let $N$ be the sample size, $y_i$ be the $i$th observation, $\hat y_i$ be the prediction of the $i$th observation, and $\bar y$ be the mean of all observations. A common definition of $R^2$ is below. $$ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$ Next, it is typical to define $MSE$ as $\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2$. $$ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 = N\times MSE\\ \bigg\Updownarrow\\ R^2=1-\left(\dfrac{ N\times MSE }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$
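A quick numerical check of this identity (the data values are purely illustrative):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
N = len(y)

ss_res = np.sum((y - y_hat) ** 2)        # sum of squared residuals
mse = ss_res / N                         # MSE as defined above
ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares

r2_direct = 1 - ss_res / ss_tot          # common R^2 definition
r2_via_mse = 1 - (N * mse) / ss_tot      # same thing written with MSE
```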
null
CC BY-SA 4.0
null
2023-03-25T21:07:23.513
2023-03-25T21:07:23.513
null
null
247274
null
610727
2
null
163000
1
null
You say the results look good. Why? It looks like every prediction on the red line is higher than the blue point you want to predict. Consequently, I would say that you are doing a rather poor job of predicting those points, which is totally consistent with $R^2<0$.
null
CC BY-SA 4.0
null
2023-03-25T21:10:54.587
2023-03-25T21:10:54.587
null
null
247274
null
610728
2
null
144366
0
null
$$ R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right) $$ If $R^2$ increases, either the numerator decreases or the denominator increases. "All else equal" tells me the $y_i$ are fixed, meaning that $\bar y = \dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}y_i$ is fixed, so the denominator is not changing. Thus, the numerator must decrease by making better predictions $\hat y_i$. (How you keep "all else equal" yet make better predictions is a mystery to me, but something has to give.) As the numerator is a measure of square loss, the numerator is related to the error term variance! This is especially true if "all else equal" means that the parameter count does not change. Since the coefficient standard errors depend on the (estimated) error variance, decreasing the (estimated) error variance, equivalent to increasing $R^2$, leads to smaller standard errors and narrower confidence intervals. That is, if you can predict better, you tighten up your estimates. I feel like regression ought to work this way.
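A small simulation sketch of that last claim (all numbers illustrative): with the same design matrix and the same true slope, the fit with smaller error variance yields both a higher $R^2$ and a smaller slope standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])   # intercept + one predictor

def fit(y):
    """OLS by least squares; return (R^2, standard error of the slope)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                       # estimated error variance
    se_slope = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    r2 = 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return r2, se_slope

# same true relationship y = 2x + e, but different error standard deviations
r2_noisy, se_noisy = fit(2 * x + rng.normal(scale=2.0, size=n))
r2_clean, se_clean = fit(2 * x + rng.normal(scale=0.5, size=n))
```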
null
CC BY-SA 4.0
null
2023-03-25T21:21:18.257
2023-03-25T21:21:18.257
null
null
247274
null
610729
2
null
609894
1
null
The difficulty might be that $y = f(x, a, b, c)$ is too abstract of a notation and doesn't fully specify a particular model. So let's take simple linear regression as an example: $y = a + bx + e$ with $e \sim \operatorname{N}(0, c)$. In this model $a$ is the intercept, $b$ is the slope and $c$ is the error variance. - Goal #2 (linear prediction with uncertainty) is to estimate $\operatorname{E}(y_{new} | x_{new}) = a + bx_{new}$. - Goal #3 (predictive distribution for a new observation) is to predict $y_{new} | x_{new} \sim N(a + bx_{new}, c)$. Let's also assume that you've already fitted the model, so you have a sample $\big\{\widehat{a}^{(k)}, \widehat{b}^{(k)}, c^{(k)}\big\}$ from the posterior $(a,b,c) | x$ given a dataset $x$; $k$ indexes the posterior draws. To estimate $\operatorname{E}(y_{new} | x_{new})$, you proceed as you describe: you calculate $\big\{ \widehat{a}^{(k)} + \widehat{b}^{(k)}x_{new} \big\}$ for each posterior draw. (No need for $\widehat{c}$ here.) This is a sample from the posterior distribution of $\operatorname{E}(y_{new} | x_{new})$ and you can get an estimate of its mean & variance, plot its histogram, etc. To predict $y_{new} | x_{new}$, you additionally draw an error $e^{(k)}$ for each posterior draw $k$: $$ \begin{aligned} e^{(k)} &\sim \operatorname{N}\big(0, \widehat{c}^{(k)}\big) \\ y^{(k)} &= \widehat{a}^{(k)} + \widehat{b}^{(k)}x_{new} + e^{(k)} \end{aligned} $$ So "under the model conditional on the specified value of $x_{new}$" means that you know how to sample from the model $f(a,b,c)$ given a specific value for the predictor $x_{new}$ and a set of parameter values $\big(\widehat{a}, \widehat{b}, \widehat{c}\big)$. For simple linear regression, this means drawing an error $e$ from a normal distribution and adding it to the estimate of $\operatorname{E}(y_{new} | x_{new})$. 
Clearly, there is more uncertainty in predicting a new observation $y_{new} | x_{new}$ than the population mean $\operatorname{E}(y_{new} | x_{new})$ due to the additional variability of drawing the (individual) error $e$. And here is how to add this extra variability in R code (pp. 116 in Regression and Other Stories): ``` y_pred <- a + b * as.numeric(new) + rnorm(n_sims, 0, c) ``` where I've substituted `sigma` with `c` for consistency.
null
CC BY-SA 4.0
null
2023-03-25T21:23:41.280
2023-03-25T21:23:41.280
null
null
237901
null
610730
2
null
50848
1
null
It depends on how this log-likelihood value is calculated. When you assume a Gaussian error distribution, maximizing log-likelihood is equivalent to minimizing the sum of squared residuals. Since maximization does not depend on constants out front, it is not clear to me how exactly this log-likelihood is calculated. Consequently, I would not use that value. I would directly calculate the sum of the squared residuals: $\overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2$. If you want to normalize this to get some kind of $R^2$, you can calculate $R^2=1-\left(\dfrac{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\hat y_i \right)^2 }{ \overset{N}{\underset{i=1}{\sum}}\left( y_i-\bar y \right)^2 }\right)$. If you apply this formula to your linear model, you will get the same value of $R^2$ as your software returns (unless you lack an intercept in the linear model). Likewise, you can use this equation to convert your model $R^2$ to a sum of squared residuals by calculating the denominator term (though I would expect it to be easier just to calculate the sum of squared residuals). As far as determining if there is a statistical difference between the two models, first, consider what exactly you mean. Especially if sample sizes are large, statistics can detect very small differences that might not be of interest. (My comments [here](https://stats.stackexchange.com/a/602422/247274) about the Princess and the Pea fairy tale concern such a situation.) However, if you want to calculate some statistics about differences in model performance, [Benavoli et al. (2017)](https://www.jmlr.org/papers/volume18/16-305/16-305.pdf) give some standard ways of doing such a comparison and also make a strong argument for why their proposed approach is superior. Even if you do not buy their argument, the paper at least goes through more standard approaches. Benavoli et al.
(2017) deal with classification accuracy, rather than regression metrics, but I see no reason why their proposed or referenced approaches could not apply to $R^2$ or the sum of squared residuals. REFERENCE Benavoli, Alessio, et al. "Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis." The Journal of Machine Learning Research 18.1 (2017): 2653-2688.
null
CC BY-SA 4.0
null
2023-03-25T21:42:47.863
2023-03-25T21:42:47.863
null
null
247274
null
610731
2
null
38422
1
null
For non-negative loss functions, this makes perfect sense to me. In nice situations, the usual $R^2$ you give can be interpreted as the proportion of variance in $y$ that is explained by the regression model. This gets lost for [more complicated models](https://stats.stackexchange.com/questions/551915/interpreting-nonlinear-regression-r2) or [estimation techniques](https://stats.stackexchange.com/questions/494274/why-does-regularization-wreck-orthogonality-of-predictions-and-residuals-in-line), but we still have an interpretation of that formula as how our model performs in terms of square loss compared to a model that predicts the mean of $y$ every time. For a model that aims to predict the conditional mean, what better baseline than a model that always predicts the overall mean, $\bar y?$ A measure that compares model performance to the performance of a baseline model makes sense to me. I would be comfortable [applying that to classification accuracy](https://stats.stackexchange.com/a/605451/247274) (setting aside the issues with classification accuracy), and [UCLA does the same, even if it takes some algebra to show the two to be equal](https://stats.stackexchange.com/questions/605818/how-to-interpret-the-ucla-adjusted-count-logistic-regression-pseudo-r2?noredirect=1&lq=1). Also referring to UCLA, [McFadden's $R^2$](https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/) uses this idea with the binomial log-likelihood ("log loss" or "crossentropy loss" in some circles). [Somers's D](https://en.wikipedia.org/wiki/Somers%27_D) applies this to the area under the ROC curve ([where $AUC = 0.5$ is regarded as the performance of a baseline model](https://stats.stackexchange.com/q/590060/247274)). 
As a final example, [quantile regression seems to have a $D^2$ measure that applies this idea to pinball loss.](https://stats.stackexchange.com/a/580261/247274) If you look at the definition of pinball loss, $L_{\tau}$ below, for quantile regression, it looks pretty "exotic" to me, even if going through what each part means leads to a reasonable interpretation. $$ l_{\tau}(y_i, \hat y_i) = \begin{cases} \tau\vert y_i - \hat y_i\vert, & y_i - \hat y_i \ge 0 \\ (1 - \tau)\vert y_i - \hat y_i\vert, & y_i - \hat y_i < 0 \end{cases}\\L_{\tau}(y, \hat y) = \sum_{i=1}^n l_{\tau}(y_i, \hat y_i) $$ Overall, not only is this a reasonable idea. It seems that there is considerable precedent in the literature for using this exact idea!
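For instance, the pinball-loss analogue of $R^2$ can be sketched like this (illustrative names; the baseline model always predicts the empirical $\tau$-quantile of $y$, which plays the role of $\bar y$):

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    """tau * |e| when e = y - y_hat >= 0, else (1 - tau) * |e|, summed over i."""
    e = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
    return np.sum(np.where(e >= 0, tau * np.abs(e), (1 - tau) * np.abs(e)))

def d2_pinball(y, y_hat, tau):
    """1 - L_tau(y, y_hat) / L_tau(y, baseline), with the baseline predicting
    the empirical tau-quantile of y for every observation."""
    y = np.asarray(y, dtype=float)
    baseline = np.full_like(y, np.quantile(y, tau))
    return 1 - pinball_loss(y, y_hat, tau) / pinball_loss(y, baseline, tau)

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
```

Perfect predictions give $D^2 = 1$, the baseline itself gives $D^2 = 0$, and predictions worse than the baseline go negative, exactly mirroring the behavior of the usual $R^2$.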
null
CC BY-SA 4.0
null
2023-03-25T21:56:20.540
2023-03-26T03:59:15.680
2023-03-26T03:59:15.680
247274
247274
null
610732
2
null
608807
1
null
I am still not sure I understand what you are trying to simulate, but below I have tried to answer the question as I think it works. ### Simulation setup The simulations depend on three quantities: - $K$, the number of simulations of the $(X_1, s_1)$. That is $(x_1,\tilde s_1)\sim P_{X_1, s_1}$ - $M$, the number of simulations of $S_N(X)|(X_1,s_1)=(x_1,\tilde s_1)$ per simulated value $(x_1,\tilde s_1)$. - $N$, the length of the chain $X_1, \dots, X_N$. This results in $K$ realizations of the conditional expectation (as a random variable), $E[S_N(X)|X_1,s_1]$. Let $\hat P_{K,M,N}$ denote the actual distribution of the simulations and let $P$ be the hypothesized asymptotic distribution. Let $d$ be a favorite distance measure between distributions. The goal of the simulations is to determine whether $d(\hat P_{K,M,N},P)$ can be made infinitesimally small by increasing $K,M$ and $N$. That can be a daunting task, so let's break it down. ### Size of $K$ For a given size of $K$, it is known how similar the simulated distribution should be to the desired asymptotic distribution. One can simply simulate from the desired asymptotic distribution $K$ times, obtaining an empirical distribution $\hat P_K$, and compute $d(\hat P_K, P)$. ### Size of $M$ The size of $M$ should be large enough that the variance of $S_N(X)|(X_1,s_1)$ becomes negligible. I don't know if it actually has a finite variance, but if one could get some bound on it, $L$, it would be known that the variance of $\hat E_M[S_N(X)|(X_1,s_1)]$ is less than $L/M$. Then, for a given $K$ and $M$, we can simulate $K$ realizations of the desired asymptotic distribution and add $N(0,L/M)$ noise, obtaining an empirical distribution $\hat P_{K,M}$, and then compute $d(\hat P_{K,M}, P)$. ### Size of $N$ For a given $K$ and $M$, I suggest that simulations are used for increasing $N$ to show that $d(\hat P_{K,M,N}, P)$ can be made as small as $d(\hat P_{K,M},P)$. The simulations for different values of $N$ don't need to be independent to make a convincing argument, so one can simply continue the same $K\cdot M$ chains. In practice one may want to simulate a distribution for $d(\hat P_{K,M}, P)$ instead of just a single number. $K$ and $M$ should be chosen such that $d(\hat P_{K,M},P)$ becomes so small that it is convincing when it is shown that $d(\hat P_{K,M,N}, P)$ is of similar size.
null
CC BY-SA 4.0
null
2023-03-25T22:40:15.490
2023-03-25T22:40:15.490
null
null
89277
null
610733
1
null
null
0
34
I have this time series data, presented in the `R` code below, and I want to test for the significance of the first and second lags of the autocorrelation and partial autocorrelation coefficients. ``` ts <- ts(c(8.6804442, 7.3541134, 8.5977826, 6.8805464, 5.1814928, 5.3389510, 5.7019002, 9.4947107, 5.6794177, -0.6920303, 2.0628462, 3.2439078, 6.1778068, 10.0745755, 7.2153141, 4.0897299, 4.9992670, 5.7246579, 5.7844691, 6.1298377, 4.5406423, 6.5713964, 5.1842466, 4.5171652, 4.1202459, 3.2100360, 4.6116722, 6.9000000), start = 1995, end = 2022, frequency = 1) ``` I understand that one can test for the significance of the whole group of lags of [](https://i.stack.imgur.com/DqqF6.png) and [](https://i.stack.imgur.com/5JUVY.png) as follows: ``` forecast::ggAcf(ts) + ggplot2::theme_bw() forecast::ggPacf(ts) + ggplot2::theme_bw() ``` The reason I need this is to check specifically whether the second-lag coefficient of the PACF is actually significant, because the `auto.arima()` function of the `forecast` package suggests that the model is `ARIMA(0,0,1)` while the interpretation of the ACF and PACF suggests `ARIMA(2,0,1)` ``` (ts_mod <- forecast::auto.arima(ts)) #Series: ts #ARIMA(0,0,1) with non-zero mean #Coefficients: # ma1 mean # 0.8164 5.7382 #s.e. 0.1432 0.5931 #sigma^2 = 3.311: log likelihood = -56 #AIC=118.01 AICc=119.01 BIC=122.01 ```
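For reference, my understanding is that the dashed band those plots draw for an individual lag is approximately $\pm 1.96/\sqrt{n}$ under a white-noise null, so a rough per-lag check can be sketched directly (Python here only because the bound itself is language-agnostic; in R the bound is `qnorm(0.975)/sqrt(length(ts))`):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r_0..r_max_lag (divisor n, as R's acf() uses)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    c0 = np.sum(x * x) / n
    return np.array([np.sum(x[k:] * x[:n - k]) / n / c0 for k in range(max_lag + 1)])

x = [8.6804442, 7.3541134, 8.5977826, 6.8805464, 5.1814928, 5.3389510,
     5.7019002, 9.4947107, 5.6794177, -0.6920303, 2.0628462, 3.2439078,
     6.1778068, 10.0745755, 7.2153141, 4.0897299, 4.9992670, 5.7246579,
     5.7844691, 6.1298377, 4.5406423, 6.5713964, 5.1842466, 4.5171652,
     4.1202459, 3.2100360, 4.6116722, 6.9000000]

acf = sample_acf(x, 2)
bound = 1.96 / np.sqrt(len(x))        # approximate 5% band under white noise
significant = np.abs(acf[1:]) > bound
```

Two caveats I am aware of: Bartlett's formula widens the band at higher lags when lower-lag autocorrelations are nonzero, and the population PACF of an MA(1) process is not zero beyond lag 1, so a "significant" PACF at lag 2 does not by itself contradict `ARIMA(0,0,1)`.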
How Can One Test for Significance of a Particular Lag of Coefficient of ACF and PACF of Time Series Data Using R
CC BY-SA 4.0
null
2023-03-25T23:01:28.577
2023-03-29T17:43:52.480
2023-03-26T14:06:39.910
53690
267929
[ "r", "time-series", "hypothesis-testing", "arima", "acf-pacf" ]
610734
1
null
null
4
49
I am looking for some advice with some modelling work I'm doing. My data is proportion data, which is known to be positively skewed and not normally distributed - histogram below. [](https://i.stack.imgur.com/lvEpy.png) I want to determine the effect of a quintile-based metric on this outcome - I know from descriptive plots that it is likely there is a significant effect (see Fig below). [](https://i.stack.imgur.com/XyatJ.png) I'm trying to fit a generalized linear mixed effects model using the lme4 package in R with the following structure. There is pseudoreplication in the dataset at the level of PROVIDER_NAME and YEAR, hence introducing those as random effects. ``` cont_polar_mod_3 <-glmer(data=ofs2_cont_polar, cbind(Outcome_NUMERATOR, Outcome_DENOMINATOR) ~ QUINTILE + (1 | PROVIDER_NAME) + (1 | YEAR), family=binomial) ``` However, when I inspect the model using DHARMa, I am unconvinced that I am using the right model structure, as the QQ-plot is a long way off a straight line. The data looks under-dispersed to me, but I'm not entirely sure whether this is relevant for the model structure I am using - I think it matters for binomial but not for gaussian (might be wrong here)? [](https://i.stack.imgur.com/zKZn3.png) Can anyone advise what I should do? Is it safe to use the model as it is even though the QQ-plot looks bad, or do I need another structure? For reference, I've tried fitting with and without an observation-level random effect term, and have tried using both a calculated proportion column and a cbind column as my response variable. I've also tried log-transforming the response variable. None of these changes the QQ-plot very much. With thanks in advance, Katharine [Update 1] - Apologies, I realise I set up my cbind column incorrectly. I have now set it up as cbind(success, failure) not cbind(success, total) and get the following plots in DHARMa, which look better?
[](https://i.stack.imgur.com/blReZ.png) [Update 2] Have now tried modelling with a beta distribution as per @ShawnHemelstrand suggestion, with the revised cbind column. This looks better? [](https://i.stack.imgur.com/TAAIg.png)
GLMER with non-normally distributed proportion data
CC BY-SA 4.0
null
2023-03-25T23:14:17.077
2023-03-26T16:39:14.073
2023-03-26T16:39:14.073
384144
384144
[ "mixed-model", "lme4-nlme" ]
610735
1
null
null
1
20
I am building an ordinal logistic regression model with 10 independent categorical variables (X1 .... X10). I would like to split the variables into groups of importance and assign 60% weight to one group of variables (let's say X5, X6, X7) and the remaining 40% to the rest. Is this a post-regression treatment of coefficients, or a case of data preparation before executing the regression? I'd appreciate any information on methods addressing this.
Assign dynamic weight to group of variables
CC BY-SA 4.0
null
2023-03-25T23:32:14.443
2023-03-25T23:32:14.443
null
null
384147
[ "ordered-logit" ]