category | title | question_link | question_body | answer_html | __index_level_0__
|---|---|---|---|---|---|
boosting
|
Boosting algorithms only built with decision trees? Why?
|
https://datascience.stackexchange.com/questions/101891/boosting-algorithms-only-built-with-decision-trees-why
|
<p>My understanding of boosting is just training models sequentially, with each model learning from the mistakes of the previous ones.</p>
<p>Can boosting algorithms be built with a bunch of logistic regressions? Or with logistic regression + decision trees?</p>
<p>If yes, I would like to know some papers or books that cover this topic in depth.</p>
|
<p>Boosting is not limited to tree-based models. Find some more information here:</p>
<blockquote>
<p>P. Bühlmann, T. Hothorn (2007), "<a href="https://arxiv.org/pdf/0804.2752.pdf" rel="nofollow noreferrer">Boosting Algorithms: Regularization,
Prediction and Model Fitting</a>", Statistical Science 22(4), p. 477-505.</p>
</blockquote>
<p>I implemented L2 linear regression boosting from Section 3.3 (p. 483) from the paper above in <a href="https://github.com/Bixi81/R-ml/blob/master/l2_boosting.R" rel="nofollow noreferrer">this R-code</a>. You may replace the L2 model by a logit model and see how it works.</p>
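<p>For readers who prefer Python, here is a minimal sketch of the same L2Boosting idea: repeatedly fit a least-squares base learner to the current residuals and add a shrunken update. This is not the linked R code, just an illustrative reimplementation; the toy data and the full-OLS base learner are my own choices.</p>

```python
import numpy as np

def l2_boost(X, y, n_steps=100, nu=0.1):
    """L2Boosting sketch: at each step, fit a least-squares base learner
    to the residuals (the negative gradient of the L2 loss) and take a
    shrunken step of size nu toward it."""
    X = np.column_stack([np.ones(len(X)), X])  # add an intercept column
    f = np.full(len(y), y.mean())              # F_0: offset = mean of y
    for _ in range(n_steps):
        u = y - f                                      # current residuals
        beta, *_ = np.linalg.lstsq(X, u, rcond=None)   # base learner: OLS on residuals
        f += nu * X @ beta                             # shrunken update
    return f

# toy check: boosting a linear base learner recovers a linear signal
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)
fit = l2_boost(X, y)
print(np.mean((y - fit) ** 2))  # in-sample MSE shrinks toward the noise level
```

Swapping the OLS step for a logit fit on a suitable working response gives the logistic variant the answer alludes to.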
| 362
|
boosting
|
Gradient boosting algorithm example
|
https://datascience.stackexchange.com/questions/9134/gradient-boosting-algorithm-example
|
<p>I'm trying to fully understand the gradient boosting (GB) method. I've read some wiki pages and papers about it, but it would really help me to see a full simple example carried out step-by-step. Can anyone provide one for me, or give me a link to such an example? Straightforward source code without tricky optimizations will also meet my needs.</p>
|
<p>I tried to construct the following simple example (mostly for my self-understanding) which I hope could be useful for you. If someone else notices any mistake please let me know. This is somehow based on the following nice explanation of gradient boosting <a href="http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/" rel="nofollow noreferrer">http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/</a></p>
<p>The example aims to predict salary per month (in dollars) based on whether or not the observation has own house, own car and own family/children. Suppose we have a dataset of three observations where the first variable is 'have own house', the second is 'have own car' and the third variable is 'have family/children', and target is 'salary per month'. The observations are </p>
<p>1. (Yes, Yes, Yes, 10000)</p>
<p>2. (No, No, No, 25)</p>
<p>3. (Yes, No, No, 5000)</p>
<p>Choose a number <span class="math-container">$M$</span> of boosting stages, say <span class="math-container">$M=1$</span>. The first step of the gradient boosting algorithm is to start with an initial model <span class="math-container">$F_{0}$</span>. In our case this model is a constant, defined by <span class="math-container">$\mathrm{arg min}_{\gamma}\sum_{i=1}^3L(y_{i},\gamma)$</span>, where <span class="math-container">$L$</span> is the loss function. Suppose that we are working with the usual loss function <span class="math-container">$L(y_{i},\gamma)=\frac{1}{2}(y_{i}-\gamma)^{2}$</span>. In this case, the constant is equal to the mean of the outputs <span class="math-container">$y_{i}$</span>, so in our case <span class="math-container">$\frac{10000+25+5000}{3}=5008.3$</span>. So our initial model is <span class="math-container">$F_{0}(x)=5008.3$</span>, which maps every observation <span class="math-container">$x$</span> (e.g. (No, Yes, No)) to 5008.3.</p>
<p>Next we should create a new dataset, which is the previous dataset but instead of <span class="math-container">$y_{i}$</span> we take the residuals <span class="math-container">$r_{i0}=-\frac{\partial{L(y_{i},F_{0}(x_{i}))}}{\partial{F_{0}(x_{i})}}$</span>. In our case, we have <span class="math-container">$r_{i0}=y_{i}-F_{0}(x_{i})=y_{i}-5008.3$</span>. So our dataset becomes</p>
<p>1. (Yes, Yes, Yes, 4991.6)</p>
<p>2. (No, No, No, -4983.3)</p>
<p>3. (Yes, No, No, -8.3)</p>
<p>The next step is to fit a base learner <span class="math-container">$h$</span> to this new dataset. Usually the base learner is a decision tree, so that is what we use here.</p>
<p>Now assume that we constructed the following decision tree <span class="math-container">$h$</span>. I built this tree using the entropy and information-gain formulas; I may have made some mistake, but for our purposes we can assume it is correct. For a more detailed example, please check </p>
<p><a href="https://www.saedsayad.com/decision_tree.htm" rel="nofollow noreferrer">https://www.saedsayad.com/decision_tree.htm</a></p>
<p>The constructed tree is:</p>
<p><a href="https://i.sstatic.net/yRjle.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yRjle.png" alt="enter image description here"></a></p>
<p>Let's call this decision tree <span class="math-container">$h_{0}$</span>. The next step is to find a constant <span class="math-container">$\lambda_{0}=\mathrm{arg\;min}_{\lambda}\sum_{i=1}^{3}L(y_{i},F_{0}(x_{i})+\lambda{h_{0}(x_{i})})$</span>. Therefore, we want a constant <span class="math-container">$\lambda$</span> minimizing </p>
<p><span class="math-container">$C=\frac{1}{2}(10000-(5008.3+\lambda*{4991.6}))^{2}+\frac{1}{2}(25-(5008.3+\lambda(-4983.3)))^{2}+\frac{1}{2}(5000-(5008.3+\lambda(-8.3)))^{2}$</span>. </p>
<p>This is where gradient descent comes in handy.</p>
<p>Suppose that we start at <span class="math-container">$P_{0}=0$</span>. Choose the learning rate equal to <span class="math-container">$\eta=0.01$</span>. We have </p>
<p><span class="math-container">$\frac{\partial{C}}{\partial{\lambda}}=(10000-(5008.3+\lambda*4991.6))(-4991.6)+(25-(5008.3+\lambda(-4983.3)))*4983.3+(5000-(5008.3+\lambda(-8.3)))*8.3$</span>. </p>
<p>Then our next value <span class="math-container">$P_{1}$</span> is given by <span class="math-container">$P_{1}=0-\eta\frac{\partial{C}}{\partial{\lambda}}(0)=0-0.01\left((4991.7)(-4991.6)+(-4983.3)(4983.3)+(-8.3)(8.3)\right)$</span>. </p>
<p>Repeat this step <span class="math-container">$N$</span> times, and suppose that the last value is <span class="math-container">$P_{N}$</span>. If <span class="math-container">$N$</span> is sufficiently large and <span class="math-container">$\eta$</span> is sufficiently small then <span class="math-container">$\lambda:=P_{N}$</span> should be the value where <span class="math-container">$\sum_{i=1}^{3}L(y_{i},F_{0}(x_{i})+\lambda{h_{0}(x_{i})})$</span> is minimized. If this is the case, then our <span class="math-container">$\lambda_{0}$</span> will be equal to <span class="math-container">$P_{N}$</span>. Just for the sake of it, suppose that <span class="math-container">$P_{N}=0.5$</span> (so that <span class="math-container">$\sum_{i=1}^{3}L(y_{i},F_{0}(x_{i})+\lambda{h_{0}(x_{i})})$</span> is minimized at <span class="math-container">$\lambda:=0.5$</span>). Therefore, <span class="math-container">$\lambda_{0}=0.5$</span>. </p>
<p>The next step is to update our initial model <span class="math-container">$F_{0}$</span> by <span class="math-container">$F_{1}(x):=F_{0}(x)+\lambda_{0}h_{0}(x)$</span>. Since our number of boosting stages is just one, then this is our final model <span class="math-container">$F_{1}$</span>.</p>
<p>Now suppose that I want to predict a new observation <span class="math-container">$x=$</span>(Yes,Yes,No) (so this person does have own house and own car but no children). What is the salary per month of this person? We just compute <span class="math-container">$F_{1}(x)=F_{0}(x)+\lambda_{0}h_{0}(x)=5008.3+0.5*4991.6=7504.1$</span>. So this person earns $7504.1 per month according to our model.</p>
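<p>For completeness, the steps above can be sketched in Python. This is a hedged illustration, not the answer's own code: the tree scikit-learn grows may differ from the one in the figure, and since here the tree fits the three residuals exactly, the gradient-descent line search converges to <span class="math-container">$\lambda_{0}\approx 1$</span> rather than the hypothetical 0.5.</p>

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# encode Yes=1 / No=0: (own house, own car, family); target = salary per month
X = np.array([[1, 1, 1], [0, 0, 0], [1, 0, 0]])
y = np.array([10000.0, 25.0, 5000.0])

F0 = y.mean()    # initial constant model: argmin of the squared loss
r = y - F0       # pseudo-residuals (negative gradient of L = (y - F)^2 / 2)

h0 = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, r)  # base learner on residuals
h = h0.predict(X)

# gradient descent on C(lambda), mirroring the line search in the text;
# eta must be tiny because the residual products are on the order of 1e7
lam, eta = 0.0, 1e-8
for _ in range(100):
    grad = -np.sum((y - (F0 + lam * h)) * h)  # dC/dlambda
    lam -= eta * grad

# final model F1(x) = F0 + lambda_0 * h0(x), evaluated on (Yes, Yes, No)
F1 = F0 + lam * h0.predict(np.array([[1, 1, 0]]))[0]
print(lam, F1)
```
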
| 363
|
boosting
|
Gradient boosting, where did the constant go?
|
https://datascience.stackexchange.com/questions/56041/gradient-boosting-where-did-the-constant-go
|
<p>In the very early papers on gradient boosting, the ensemble would include a constant and a sum of base learners i.e.</p>
<p><span class="math-container">$F(X) = a_0 + \sum\limits_{i} a_i f_i(X)$</span></p>
<p>The constant is fitted first (i.e. if the loss is squared error, the <span class="math-container">$a_0$</span> will be the unconditional mean of the target)</p>
<p>Somewhere along the way this got dropped; as far as I can see, a number of the current "state of the art" GBDT algorithms do not fit a constant (XGBoost, CatBoost etc.). I believe the R mboost package still does, though.</p>
<p>Why is this, does anyone have any reference where it is discussed (I'm performing regression so particularly interested in this aspect). In particular with a regression tree as a base-learner, can it learn the constant more efficiently than the original algorithms?</p>
<p>One of my motivations is that when I boost with no constant, for a problem where the target is always positive, it seems to converge from below so it almost always has a bias in the residuals unless you have enough data such that you can boost for long enough to fit the highest values of the targets without overfitting (specifically I fit the relationship between temperature and electricity demand).</p>
<p>Edit: Some references</p>
<p>[Prokhorenkova et al.] "CatBoost: unbiased boosting with categorical features" - Algorithm 1 initialises M to 0. I've used CatBoost a lot, and I can confirm that explicitly setting the starting value to the mean produces a different result.</p>
<p>[Chen et al.] "XGBoost: A Scalable Tree Boosting System" - Equation 1 excludes the constant.</p>
<p>[Sigrist] "Gradient and Newton Boosting for Classification and Regression" - there is no mention of the constant, although it is not explicitly excluded.</p>
| 364
|
|
boosting
|
On gradient boosting and types of encodings
|
https://datascience.stackexchange.com/questions/78092/on-gradient-boosting-and-types-of-encodings
|
<p>I am having a look at this <a href="https://github.com/lesteve/euroscipy-2019-scikit-learn-tutorial/blob/master/notebooks/02_basic_preprocessing.ipynb" rel="noreferrer">material</a> and I have found the following statement:</p>
<blockquote>
<p>For this class of models [Gradient Boosting Machine algorithms] [...] it is both safe and significantly
more computationally efficient to use an arbitrary integer encoding [also known as Numeric Encoding] for
the categorical variable even if the ordering is arbitrary [instead of
One-Hot encoding].</p>
</blockquote>
<p>Do you know some references that support this statement? I get that Numeric Encoding is more computationally efficient than One-Hot Encoding, but I would like to know more about their supposed equivalence to encode unordered categorical variables in Gradient Boosting Methods.</p>
<p>Thanks!</p>
|
<p>This is actually a feature of tree-based models in general, not just gradient boosting trees.</p>
<p>Not exactly a reference, but <a href="https://towardsdatascience.com/one-hot-encoding-is-making-your-tree-based-ensembles-worse-heres-why-d64b282b5769" rel="noreferrer">this Medium article</a> explains why ordinal encoding is often more efficient.</p>
<p>On the topic of safety, I think the author should have said that the use of ordinal encoding is <em>more</em> safe compared to linear methods, but still not perfectly safe. It's possible for decision-tree methods to find spurious rules within ordinal encodings, but they don't have the strong assumptions about numeric semantics that linear methods do.</p>
<blockquote>
<p>. . . I would like to know more about their supposed equivalence to encode unordered categorical variables . . .</p>
</blockquote>
<p>Any rule derived with one-hot encoding can also be represented with ordinal encoding, it just might take more splits.</p>
<p>To illustrate, suppose you have a categorical variable <code>foo</code> with possible values <code>spam</code>, <code>ham</code>, <code>eggs</code>. A one-hot encoding would create 3 dummy variables, <code>is_spam</code>, <code>is_ham</code>, <code>is_eggs</code>. Let's say an arbitrary ordinal encoding assigns <code>spam</code> = 1, <code>ham</code> = 2, and <code>eggs</code> = 3.</p>
<p>Suppose the OHE decision tree splits on <code>is_eggs = 1</code>. This can be represented in the ordinal decision tree by the single split <code>foo > 2</code>. Suppose the OHE tree splits on <code>is_ham = 1</code>. The ordinal tree will require two splits: <code>foo > 1</code> and then <code>foo < 3</code>.</p>
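<p>The split equivalence above can be checked mechanically; the sketch below uses the same hypothetical encoding (<code>spam</code> = 1, <code>ham</code> = 2, <code>eggs</code> = 3) on a toy column.</p>

```python
import numpy as np

ordinal = {"spam": 1, "ham": 2, "eggs": 3}  # the arbitrary ordinal encoding above

foo = np.array(["ham", "eggs", "spam", "ham", "eggs"])
enc = np.array([ordinal[v] for v in foo])

# the OHE split `is_eggs = 1` equals the single ordinal split `foo > 2`
is_eggs = (foo == "eggs").astype(int)
assert np.array_equal(is_eggs == 1, enc > 2)

# the OHE split `is_ham = 1` needs two ordinal splits: `foo > 1` AND `foo < 3`
is_ham = (foo == "ham").astype(int)
assert np.array_equal(is_ham == 1, (enc > 1) & (enc < 3))
print("splits agree")
```
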
| 365
|
boosting
|
Why does gradient boosting use sampling without replacement?
|
https://datascience.stackexchange.com/questions/67674/why-gradient-boosting-uses-sampling-without-replacement
|
<p>In Random Forest each tree is built selecting a sample with replacement (bootstrap). And I assumed that Gradient Boosting's trees were selected with the same sampling technique. (@BenReiniger corrected me). <a href="https://catboost.ai/docs/concepts/algorithm-main-stages_bootstrap-options.html" rel="noreferrer">Here there are the sampling techniques implemented for Catboost</a></p>
<p>My questions:</p>
<ul>
<li>Why is Gradient Boosting sampling done without replacement?</li>
<li>Why would it be worse to sample with replacement?</li>
<li>Are there any sampling techniques used in GB that are with replacement?</li>
</ul>
<p>I quote a paper for SGB:</p>
<blockquote>
<p>Stochastic Gradient Boosting is a randomized version of standard Gradient Boosting algorithm... adding randomness into the tree building procedure by using a subsampling of the full dataset. For each
iteration of the boosting process, the sampling algorithm of SGB selects random s·N objects without replacement and uniformly</p>
</blockquote>
|
<p><strong>Why is Gradient Boosting sampling done without replacement?</strong></p>
<p>Your first question seems to suggest that the base classifier will always have a subsampling mechanism but this is not necessarily true.</p>
<p>Notice that in the Catboost documentation it is mentioned "Stochastic Gradient Boosting" not "Gradient Boosting". Stochastic Gradient Boosting is a variation of Gradient Boosting that is precisely based on building the boosting process using a subset of the data at each iteration. Therefore each base model does not see the whole train data, it sees only a subset (or minibatch) of the data.</p>
<p>You might want to do this for two main reasons: faster training time and a regularization effect.</p>
<p><strong>Why would it be worse to sample with replacement?</strong></p>
<p>As long as you do sampling in your base classifier, the speed benefit should be quite similar. If your base classifier does not do any sampling, then the Stochastic Gradient Boosting algorithm will converge much faster, just like a mini-batch version of a neural network converges much faster than a full-batch version.</p>
<p><strong>Are there any sampling techniques used in GB that are with replacement?</strong></p>
<p>Standard Gradient Boosting uses all the data to build the gradient at each step of the boosting process.</p>
<p>If you do full batch learning but your base classifier has a "subsample" parameter then you are essentially doing sampling with replacement at every step of the boosting process. </p>
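<p>As an aside, the stochastic variant can be tried directly in scikit-learn, whose <code>GradientBoostingRegressor</code> exposes a <code>subsample</code> parameter (values below 1.0 give Stochastic Gradient Boosting). A minimal sketch, with a synthetic dataset of my own choosing:</p>

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# subsample=1.0 (default): standard GB, every stage sees all training rows
full = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# subsample<1.0: each stage is fit on a random fraction of the rows
stoch = GradientBoostingRegressor(subsample=0.5, random_state=0).fit(X_tr, y_tr)

r2_full = full.score(X_te, y_te)
r2_stoch = stoch.score(X_te, y_te)
print(round(r2_full, 3), round(r2_stoch, 3))
```

On this easy synthetic problem both variants score similarly; the regularization benefit of subsampling shows up mostly on noisier, more overfit-prone data.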
| 366
|
boosting
|
Averaging CNNs performs worse than boosting
|
https://datascience.stackexchange.com/questions/31026/averaging-cnn-perform-worse-than-boosting
|
<p>I'm trying to solve <a href="https://www.kaggle.com/c/quora-question-pairs" rel="nofollow noreferrer">Quora Question Pairs</a> with model stacking.</p>
<p>My first layers are:</p>
<ul>
<li>CNN trained to predict the same target as whole model should</li>
<li>"Magic features" like question frequency in whole dataset</li>
</ul>
<p>And the second is gradient boosting.</p>
<p>My first attempt was to train the CNN on the whole train dataset (with 10% held out as dev) and then train gradient boosting (GB) on the same split.</p>
<p>As expected, this was terrible, because GB learned that the CNN is almost always right.</p>
<p>So I split the train data into 10 parts and trained 10 different CNNs, each with one part as dev.
Then I trained GB on the CNN predictions for the dev parts (so it is almost unseen data), and used the average of the 10 CNNs for the same feature on the test dataset.</p>
<p>The result was worse than gradient boosting applied to "magic features" alone.</p>
<p>Can you help me to identify my mistakes?</p>
| 367
|
|
boosting
|
Purpose of gamma multiplier in gradient boosting
|
https://datascience.stackexchange.com/questions/60965/purpose-of-gamma-multiplier-in-gradient-boosting
|
<p>looking through the mathematics of gradient boosting on the relevant <a href="https://en.wikipedia.org/wiki/Gradient_boosting" rel="nofollow noreferrer">wikipedia page</a>, intuitively what is the purpose of the multiplier <span class="math-container">$\gamma_i$</span>?</p>
<p><a href="https://i.sstatic.net/mWkBn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mWkBn.png" alt="enter image description here"></a></p>
<p>This term does not appear in the following definition of <span class="math-container">$F_m(x)$</span></p>
<p><a href="https://i.sstatic.net/54lHo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/54lHo.png" alt="enter image description here"></a></p>
<p>But confusingly, it then appears in the subsequent definition:</p>
<p><a href="https://i.sstatic.net/b4kcG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b4kcG.png" alt="enter image description here"></a></p>
<p>Any advice appreciated, thanks.</p>
| 368
|
|
boosting
|
Choosing between gradient boosting algorithms
|
https://datascience.stackexchange.com/questions/92717/chossing-between-gradient-boosting-algorithms
|
<p>I just stepped into machine learning competitions, and it looks like most of the mid-sized dataset competitions are won by gradient-boosting-based models. However, I came across cases where <a href="https://lightgbm.readthedocs.io/en/latest/" rel="nofollow noreferrer">LightGBM</a>, <a href="https://catboost.ai/" rel="nofollow noreferrer">Catboost</a> or <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html" rel="nofollow noreferrer">Adaboost</a> had very different scores.</p>
<p>Is there a method to choose between those algorithms?</p>
|
<p>I would say <code>Catboost</code> and <code>lightgbm</code> perform similarly, and it's purely a matter of choice.
Some of my colleagues preferred <code>Catboost</code> when the dataset has lots of categorical columns, but I rarely saw any advantage over <code>lightgbm</code>.</p>
<p>There is a great article comparing <code>CatBoost vs. Light GBM vs. XGBoost</code>:
<a href="https://towardsdatascience.com/catboost-vs-light-gbm-vs-xgboost-5f93620723db" rel="nofollow noreferrer">https://towardsdatascience.com/catboost-vs-light-gbm-vs-xgboost-5f93620723db</a></p>
<p>And another article comparing even more boosting algorithms:
<a href="https://medium.com/@divyagera2402/boosting-algorithms-adaboost-gradient-boosting-xgb-light-gbm-and-catboost-e7d2dbc4e4ca" rel="nofollow noreferrer">https://medium.com/@divyagera2402/boosting-algorithms-adaboost-gradient-boosting-xgb-light-gbm-and-catboost-e7d2dbc4e4ca</a></p>
| 369
|
boosting
|
How to do boosting in model-ensembling?
|
https://datascience.stackexchange.com/questions/17147/how-to-do-boosting-in-model-ensembling
|
<p>Boosting is a sequential technique in which the first algorithm is trained on the entire dataset and subsequent algorithms are built by fitting the residuals of the preceding model, thus giving higher weight to observations that were poorly predicted by the previous model. Examples are AdaBoost, GBM, etc. My question is: how does one perform boosting when ensembling base learners? In particular, how do you get the residuals if it is a classification problem?</p>
<p>I know how to bag base learners and how to stack them; I just have no idea how to boost them.</p>
<p>Thanks!</p>
|
<p>You can use any base learner for boosting (AdaBoost requires the base learner to support sample weighting, though). Keep in mind, however, that the original idea is to use weak learners with strong bias and to reduce that bias through boosting.</p>
<p>If it is a classification problem, logarithmic loss is usually used to calculate the residual/gradient for boosting.</p>
<p>For Python, there is a nice AdaBoost wrapper in scikit-learn (AdaBoostClassifier) which can take for example a Random Forest as base learner.</p>
| 370
|
boosting
|
Can Boosting and Bagging be applied to heterogeneous algorithms?
|
https://datascience.stackexchange.com/questions/92847/can-boosting-and-bagging-be-applied-to-heterogeneous-algorithms
|
<p>Stacking can be achieved with heterogeneous algorithms such as RF, SVM and KNN. However, can such heterogeneity be achieved in bagging or boosting? For example, in boosting, instead of using RF in all the iterations, could we use different algorithms?</p>
|
<p>The short answer is <strong>yes</strong>. Both the boosting and bagging meta-algorithms do not assume specific weak learners, so any learner will do, no matter whether the iterations use the same algorithm or different ones.</p>
<p>The way the meta-algorithms are defined, they use the weak learners as <strong>black-box models, without reference to their implementation or algorithmic principle, nor similarity</strong>.</p>
<p>For Boosting:</p>
<blockquote>
<p>In machine learning, boosting is an ensemble meta-algorithm for
primarily reducing bias, and also variance[1] in supervised learning,
and a family of machine learning algorithms that convert weak learners
to strong ones.[2] Boosting is based on the question posed by Kearns
and Valiant (1988, 1989):[3][4] "Can a set of weak learners create a
single strong learner?"
A weak learner is defined to be a classifier that is only slightly
correlated with the true classification (it can label examples better
than random guessing). In contrast, a strong learner is a classifier
that is arbitrarily well-correlated with the true classification.</p>
</blockquote>
<p>For Bagging:</p>
<blockquote>
<p>Although it is usually applied to decision tree methods, it can be
used with any type of method. Bagging is a special case of the model
averaging approach.</p>
</blockquote>
<p>For bagging, the required condition for improving performance is that the weak learners should be unstable (so that perturbed versions of the training data change the fitted learners), but other than that, learners are black boxes, as mentioned above.</p>
<p>For further reference:</p>
<ol>
<li><a href="https://en.wikipedia.org/wiki/Boosting_(machine_learning)" rel="nofollow noreferrer">Boosting, wikipedia</a></li>
<li><a href="https://en.wikipedia.org/wiki/Bootstrap_aggregating" rel="nofollow noreferrer">Bagging, wikipedia</a></li>
<li><a href="https://web.archive.org/web/20121010030839/http://www.cs.princeton.edu/%7Eschapire/papers/strengthofweak.pdf" rel="nofollow noreferrer">The Strength of Weak Learnability</a></li>
<li><a href="https://www.stat.berkeley.edu/%7Ebreiman/bagging.pdf" rel="nofollow noreferrer">Bagging Predictors</a></li>
</ol>
| 371
|
boosting
|
Bagging vs Boosting, Bias vs Variance, Depth of trees
|
https://datascience.stackexchange.com/questions/61771/bagging-vs-boosting-bias-vs-variance-depth-of-trees
|
<p>I understand the main principle of bagging and boosting for classification and regression trees. My doubts are about the optimization of the hyperparameters, especially the depth of the trees.</p>
<p><strong>First question</strong>: why are we supposed to use weak learners for boosting (high bias), whereas we have to use deep trees for bagging (high variance)? Honestly, I'm not sure about the second one; I just heard it once and never saw any documentation about it.</p>
<p><strong>Second question</strong>: why and how can it happen that grid searches give better results for gradient boosting with deeper trees than with weak learners (and similarly for random forests with weak learners rather than deeper trees)?</p>
|
<blockquote>
<p>why we are supposed to use weak learners for boosting (high bias) whereas we have to use deep trees for bagging (very high variance)</p>
</blockquote>
<p>Clearly it wouldn't make sense to bag a bunch of shallow trees/weak learners. The average of many bad predictions will still be pretty bad. For many problems decision stumps (a tree with a single split node) will produce results close to random. Combining many random predictions will generally not produce good results. </p>
<p>On the other hand, the depth of the trees in boosting limits the interaction effects between features, e.g. if you have 3 levels, you can only approximate second-order effects. For many ("most") applications low-level interaction effects are the most important ones. <a href="https://web.stanford.edu/~hastie/Papers/ESLII.pdf" rel="nofollow noreferrer">Hastie et al. in ESL (pdf)</a> suggest that trees with more than 6 levels rarely show improvements over shallower trees. Selecting trees deeper than necessary will only introduce unnecessary variance into the model! </p>
<p>That should also partly explain the second question. If there are strong and higher-order interaction-effects in the data, deeper trees can perform better. However, trees that are too deep will underperform by increasing variance without additional benefits. </p>
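<p>A rough way to see both halves of this argument empirically (my own sketch with a synthetic dataset; the outcome will vary by problem):</p>

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(
    n_samples=600, n_features=20, n_informative=8, random_state=0
)

# boosting: shallow (high-bias) trees are combined sequentially to reduce bias
gb = GradientBoostingClassifier(max_depth=2, n_estimators=100, random_state=0)

# forest: deep (high-variance) trees are averaged to reduce variance
rf = RandomForestClassifier(max_depth=None, n_estimators=100, random_state=0)

acc_gb = cross_val_score(gb, X, y, cv=5).mean()
acc_rf = cross_val_score(rf, X, y, cv=5).mean()
print(round(acc_gb, 3), round(acc_rf, 3))
```

Grid-searching <code>max_depth</code> for each method on your own data is the honest way to settle the second question.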
| 372
|
boosting
|
How to interpret gradient descent in boosting ensembles?
|
https://datascience.stackexchange.com/questions/86492/how-to-interpret-gradient-descent-in-boosting-ensembles
|
<p>I struggle to grasp the role of gradient based optimization in boosting ensembles. As far as I understand boosting means combining a bunch of estimators (of the same types, usually decision trees) sequentially -- each subsequent one is learning from the errors of the previous ones (by upweighting the misclassified examples, if I see correctly) and combining the results.</p>
<p>(Subquestion: does this combination mean that we use <em>all</em> the subsequently trained constituent estimators, maybe with different weights, or we <em>just</em> take the final one, which is assumed to be the most accurate?).</p>
<p>However, I cannot figure out how gradient descent and learning rate comes into the picture here. Trees themselves are not gradient based learners, and combining the output (either way) doesn't require any optimization. So what is its role?</p>
| 373
|
|
boosting
|
Importance of feature selection for boosting methods
|
https://datascience.stackexchange.com/questions/11081/importance-of-feature-selection-for-boosting-methods
|
<p>While it is obviously clear that features can be ranked by importance, and many machine learning books give examples of how to do so with random forests, it's not very clear on which occasions one should do so.</p>
<p>In particular, for boosting methods, is there any reason why one should do feature selection? Wouldn't the boosting methods themselves eliminate the low-importance features?</p>
<p>Isn't it just always better to add more features (if one didn't have the practical problem of time limitations)?</p>
|
<p>There is a difference between boosting and feature selection.
It is important to understand that the original boosting and bagging algorithms have been modified and augmented with many feature-selection and/or data-sampling (over-/under-/synthetic-sampling) techniques to improve accuracy.
Let us first look at the difference between bagging and boosting:
both are random-subspace-based algorithms. The first difference is that in bagging we use a uniform distribution and all samples have the same weight, whereas in boosting we use a non-uniform distribution: during training the distribution is modified so that difficult samples get higher probability. The second difference is the voting: in bagging it is average voting, in boosting it is weighted voting.</p>
<p>Feature-selection algorithms try to find the best set of features that can separate the classes, but there is no explicit consideration of difficult or easy samples, nor of the training algorithm being used.
In boosting, the algorithm selects the feature that minimizes the error, where the error is the sum of the probability "weights" of the misclassified samples. Since the difficult samples have higher weights, the selected feature will be the one that best distinguishes between the difficult samples. </p>
<p>FE(features, data) --> feature set<br />
Boosting(features, data, base learner type, initial distribution, difficult samples) --> feature set </p>
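<p>The weighted-error feature choice described above can be sketched as follows (a toy illustration of the idea, not any specific boosting implementation):</p>

```python
import numpy as np

def best_stump_feature(X, y, w):
    """Pick the binary feature whose stump minimises the weighted
    misclassification error -- the choice boosting makes implicitly
    when difficult samples carry higher weights."""
    errors = []
    for j in range(X.shape[1]):
        pred = X[:, j]                            # stump: predict the feature value itself
        err = np.sum(w * (pred != y))             # weighted error of this stump
        errors.append(min(err, np.sum(w) - err))  # allow the flipped stump too
    return int(np.argmin(errors))

X = np.array([[1, 0], [0, 0], [1, 1], [0, 1]])
y = np.array([1, 0, 1, 0])
w_uniform = np.full(4, 0.25)
print(best_stump_feature(X, y, w_uniform))  # feature 0 separates y perfectly here
```

Reweighting <code>w</code> toward the samples a previous stump misclassified is what steers later rounds toward features that handle the difficult cases.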
| 374
|
boosting
|
Gradient boosting Regression with zero-inflated outcome
|
https://datascience.stackexchange.com/questions/69949/gradient-boosting-regression-with-zero-inflated-outcome
|
<p>I am trying to tune a regression gradient boosting model where my target variable is zero-inflated (80% zeros) and the rest of the values are distributed as positive and negative values (not necessarily symmetrically). What are good practices when training a model like this? </p>
<p>Any specific issues I should be aware of to build a good model? Based on my research, Tweedie gradient boosting is not suited for this problem because my target variable has a mix of negative and positive values around the zero mode and therefore doesn't follow a Tweedie distribution.</p>
|
<p>I have been dealing with exactly the same situation but with even more rare non-zero events from marketing conversions. I have a few tips, but I don't feel I have really settled the best practices so I look forward to other people adding their observations! For the record I'm using Catboost. From what I have seen:</p>
<ol>
<li>Be extra careful about overfitting - cross validate and try low learning rates.</li>
<li>You can't use Tweedie loss but do include MAE vs. RMSE as one of the parameters you test in your cross validation. MAE can reduce the emphasis on outliers and improve out of sample performance (sometimes, so cross validate!) I have even tried cross validations over most of the exotic evaluation metrics supported by Catboost but have found only MAE or RMSE to be best.</li>
<li>Expect most predicted values to be on a smaller scale than the real values. The predicted values incorporate both the probability of a non-zero outcome and the expected magnitude, so it makes sense they tend to be close to zero. The problem is that RMSE or MAE evaluation metrics become very hard to interpret. My advice is to use r-squared to get a better intuition for the quality of your model fit.</li>
<li>Although most values will tend to be close to zero be aware that gradient boosting regressions can produce predictions outside the range of your observed outcomes.</li>
</ol>
| 375
|
boosting
|
When does boosting overfit more than bagging?
|
https://datascience.stackexchange.com/questions/28299/when-does-boosting-overfit-more-than-bagging
|
<p>If we consider two conditions:</p>
<ol>
<li>Number of data is huge </li>
<li>Number of data is low</li>
</ol>
<p>For what condition does boosting or bagging overfit more compared to the other one?</p>
|
<p>I read your question as: 'Is boosting more vulnerable to overfitting than bagging?'</p>
<p>Firstly, you need to understand that <strong>bagging decreases variance</strong>, while <strong>boosting decreases bias</strong>. </p>
<p>Also, note that under-fitting means that the model has low variance and high bias, and vice versa for overfitting.</p>
<p>So, boosting is more vulnerable to overfitting than bagging.</p>
| 376
|
boosting
|
Feature Selection before modeling with Boosting Trees
|
https://datascience.stackexchange.com/questions/84624/feature-selection-before-modeling-with-boosting-trees
|
<p>I have read in some papers that the subset of features chosen for a boosting tree algorithm will make a big difference in performance,<br />so I've been trying RFE, Boruta, variable clustering, correlation, WOE & IV and Chi-square.</p>
<p>Let's say I have a classification problem with over 40 variables; the best results after a long, long time of testing:<br /></p>
<ul>
<li>all variables for LightGBM (except for one variable with high linearity)<br /></li>
<li>I removed correlated variables for XGBoost (around 8 correlated ones)<br /></li>
<li>I removed variables based on an ElasticNet model for CatBoost (around 7 of them)</li>
</ul>
<p><strong>My question is</strong>: what is the proper way to choose the candidate variables when modeling a boosting tree (especially LightGBM)?</p>
<p>I'm using R, if there are any suggestions for packages.</p>
| 377
|
|
boosting
|
Does Gradient Boosting detect non-linear relationships?
|
https://datascience.stackexchange.com/questions/45371/does-gradient-boosting-detect-non-linear-relationships
|
<p>I wish to train some data using the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html#sklearn-ensemble-gradientboostingregressor" rel="nofollow noreferrer">the Gradient Boosting Regressor of Scikit-Learn</a>.</p>
<p>My questions are:</p>
<p>1) Is the algorithm able to capture non-linear relationships? For example, in the case of y=x^2, y increases as x approaches negative infinity and positive infinity. What if the graph looks like y=sin(x)?</p>
<p>2) Is the algorithm able to detect interactions/relationships among the features? Specifically, should I add features that are the sums/differences of the raw features to the training set?</p>
|
<p>The GB method works by minimizing a loss function and by splitting each node in a fashion that produces highly pure leaves. There is no population formula being estimated, and therefore you can estimate all types of relations between the target and the features.<br>
However, I wouldn't put correlated variables in the model, as: </p>
<blockquote>
<p>For gradient boosted trees, there's generally no strong need to check
for multicollinearity because of its robustness. But practically
speaking, you still should do some basic checks. For example, if you
discover that two variables are 100% the same, then of course there's
no point in keeping both. Even if it's 98% correlated, it's usually
okay to drop one variable without degrading the overall model.<br>
Source: <a href="https://www.quora.com/Is-multicollinearity-a-problem-with-gradient-boosted-trees" rel="nofollow noreferrer">Quora</a></p>
</blockquote>
| 378
|
boosting
|
Gradient boosting vs logistic regression, for boolean features
|
https://datascience.stackexchange.com/questions/18081/gradient-boosting-vs-logistic-regression-for-boolean-features
|
<p>I have a binary classification task where all of my features are boolean (0 or 1). I have been considering two possible supervised learning algorithms:</p>
<ul>
<li>Logistic regression</li>
<li>Gradient boosting with <a href="https://en.wikipedia.org/wiki/Decision_stump" rel="nofollow noreferrer">decision stumps</a> (e.g., xgboost) and cross-entropy loss</li>
</ul>
<p>If I understand how they work, it seems like these two might be equivalent. Are they in fact equivalent? Are there any reasons to choose one over the other?</p>
<hr>
<p>In particular, here's why I'm thinking they are equivalent. A single gradient boosting decision stump is very simple: it is equivalent to adding a constant $a_i$ if feature $i$ is 1, or adding the constant $b_i$ if feature $i$ is 0. This can be equivalently expressed as $(a_i-b_i)x_i + b_i$, where $x_i$ is the value of feature $i$. Each stump branches on a single feature, so contributes a term of the form $(a_i-b_i)x_i + b_i$ to the total sum. Thus the total sum of the gradient boosted stumps can be expressed in the form</p>
<p>$$S = \sum_{i=1}^n (a_i-b_i) x_i + b_i,$$</p>
<p>or equivalently, in the form</p>
<p>$$S = c_0 + \sum_{i=1}^n c_i x_i.$$</p>
<p>That's exactly the form of a final logit for a logistic regression model. That would suggest to me that fitting a gradient boosting model using the cross-entropy loss (which is equivalent to the logistic loss for binary classification) should be equivalent to fitting a logistic regression model, at least in the case where the number of stumps in gradient boosting is sufficiently large.</p>
|
<p>You are right that the models are equivalent in terms of the functions they can express, so with infinite training data and a function where the input variables don't interact with each other in any way they will both probably asymptotically approach the underlying joint probability distribution. This would definitely not be true if your features were not all binary.</p>
<p>Gradient boosted stumps adds extra machinery that sounds like it is irrelevant to your task. Logistic regression will efficiently compute a maximum likelihood estimate assuming that all the inputs are independent. I would go with logistic regression.</p>
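<p>A small experiment (an editorial illustration with made-up additive data, not part of the answer) along the lines of the question: with all-binary features and no interactions, depth-1 gradient boosted trees and logistic regression both learn additive scores, so their predictions should largely agree:</p>

```python
# Both models are additive in the features when the features are binary
# and the stumps have depth 1, so their decisions should mostly coincide.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 5)).astype(float)   # boolean features
logit = X @ np.array([2.0, -1.5, 1.0, 0.5, -0.5]) - 0.75
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

stumps = GradientBoostingClassifier(max_depth=1, n_estimators=300,
                                    learning_rate=0.1,
                                    random_state=0).fit(X, y)
lr = LogisticRegression().fit(X, y)

agreement = float(np.mean(stumps.predict(X) == lr.predict(X)))
print("prediction agreement:", round(agreement, 3))
```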
| 379
|
boosting
|
What is meant by Distributed for a gradient boosting library?
|
https://datascience.stackexchange.com/questions/41266/what-is-meant-by-distributed-for-a-gradient-boosting-library
|
<p>I am checking out XGBoost documentation and it's stated that XGBoost is an optimized <strong>distributed</strong> gradient boosting library. </p>
<p>What is meant by distributed?</p>
<p>Have a nice day</p>
|
<p>It means that it can be run on a <a href="https://en.wikipedia.org/wiki/Distributed_computing" rel="noreferrer">distributed system</a> (i.e. on multiple networked computers).</p>
<p>From XGBoost's <a href="http://dmlc.cs.washington.edu/xgboost.html" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>The same code runs on major distributed environment(Hadoop, SGE, MPI) and can solve problems beyond billions of examples. The most recent version integrates naturally with DataFlow frameworks(e.g. Flink and Spark).</p>
</blockquote>
| 380
|
boosting
|
Is ensemble learning using different classifier combination another name for Boosting?
|
https://datascience.stackexchange.com/questions/33392/is-ensemble-learning-using-different-classifier-combination-another-name-for-boo
|
<p>For implementation I am following the <a href="https://www.mathworks.com/matlabcentral/fileexchange/63162-adaboost" rel="nofollow noreferrer">Matlab code for AdaBoost</a>. Based on my understanding, AdaBoost uses weak classifiers known as base classifiers and creates several instances of it. For example, a weak classifier is a decision tree. So, AdaBoost can create maximum <code>N</code> decision trees (where <code>N</code> = number of samples) and combine the prediction results. This is a Homogeneous boosting method. But I have seen some examples such as this one in Matlab and <a href="https://www.mathworks.com/matlabcentral/fileexchange/38944-ensemble-toolbox" rel="nofollow noreferrer">ensemble-toolbox</a> which have confused me. Can somebody please explain the following concepts with respect to the implementations and what is going on in the code?</p>
<p>1) Does the Matlab code for AdaBoost combine different classifiers? The combination method is unclear to me: whether they use a sum, majority voting, or something else.</p>
<p>If they are combining several classifiers, then technically it is a heterogeneous ensemble method and the term for it is stacking, not boosting. Please correct me where I am wrong.
In boosting methods, the base classifiers are the same, but the given Matlab code for AdaBoost seems to combine different classifiers, so I am not sure.</p>
<p>2) Is ensemble learning or the example in the ensemble toolbox the same as the Adaptive Boosting Matlab code (second link)? Is ensemble learning the same as Adaptive boost?</p>
|
<p><em>Boosting</em> is a type of Ensemble Learning, but it is not the only one. Apart from stacking, <em>bagging</em> is also another type of Ensemble Learning.</p>
<p><strong>Ensemble Learning</strong> is the combination of individual models together trying to obtain better predictive performance that could be obtained from any of the constituent learning algorithms alone.</p>
<p><strong>Boosting</strong> involves incrementally building an ensemble by training each new model instance to emphasize the training instances that previous models mis-classified. It is an iterative technique which adjust the weight of an observation based on the last classification. If an observation was classified incorrectly, it tries to increase the weight of this observation and vice versa. Boosting in general decreases the bias error and builds strong predictive models. Sometimes they may over fit on the training data.</p>
<p><strong>Stacking</strong> involves training a learning algorithm to combine the predictions of several other learning algorithms.</p>
<p><strong>Bagging</strong> tries to implement similar learners on small sample populations and then takes a mean of all the predictions. In generalized bagging, you can use different learners on different population. As you can expect this helps us to reduce the variance error.</p>
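<p>A minimal scikit-learn sketch of the three ensemble styles just described (the dataset and settings here are illustrative only):</p>

```python
# Boosting, bagging and stacking side by side on the same dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

ensembles = {
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                                 random_state=0),
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                    ("lr", LogisticRegression(max_iter=5000))],
        final_estimator=LogisticRegression(max_iter=5000)),
}
scores = {name: cross_val_score(model, X, y, cv=3).mean()
          for name, model in ensembles.items()}
for name, s in scores.items():
    print(name, round(s, 3))
```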
| 381
|
boosting
|
Are "Gradient Boosting Machines (GBM)" and GBDT exactly the same thing?
|
https://datascience.stackexchange.com/questions/81963/are-gradient-boosting-machines-gbm-and-gbdt-exactly-the-same-thing
|
<p>In the category of Gradient Boosting, I find some terms confusing.</p>
<p>I'm aware that XGBoost includes some optimization in comparison to conventional Gradient Boosting.</p>
<ul>
<li><p>But are <strong>Gradient Boosting Machines (GBM)</strong> and <strong>GBDT</strong> the same
thing? Are they just different names?</p>
</li>
<li><p>Apart from GBM/GBDT and XGBoost, are there any other models fall into
the category of Gradient Boosting?</p>
</li>
</ul>
|
<p>Boosting is an ensemble technique where predictors are ensembled sequentially, one after the other (<a href="https://www.youtube.com/watch?v=sRktKszFmSk&t=2s&ab_channel=AlexanderIhler" rel="noreferrer">youtube tutorial</a>). The "gradient" in gradient boosting means that the predictors are ensembled using the optimization technique called gradient descent (<a href="http://papers.nips.cc/paper/1766-boosting-algorithms-as-gradient-descent.pdf" rel="noreferrer">Boosting Algorithms as Gradient Descent</a>).</p>
<p>Given this, you can boost any kind of model that you want (as far as I know). Moreover, in the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html" rel="noreferrer">scikit-learn library</a>, gradient boosting sits under the ensemble module. You can boost any kind of model (linear, SVM); it is just that decision trees normally achieve great results with this kind of ensemble. In the same way, you can do bagging with any kind of estimator, but if you do it with a decision tree and add a couple more technicalities, you can call it Random Forest.</p>
<p><a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html#sklearn.ensemble.GradientBoostingRegressor" rel="noreferrer">From scikit learn documentation</a>: GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage a regression tree is fit on the negative gradient of the given loss function.</p>
<blockquote>
<p>But are Gradient Boosting Machines (GBM) and GBDT the same thing? Are they just different names?</p>
</blockquote>
<p>Gradient boosting machines are a kind of ensemble, and Gradient Boosting Decision Trees are the particular case where a tree is used as the estimator.</p>
<blockquote>
<p>Apart from GBM/GBDT and XGBoost, are there any other models fall into the category of Gradient Boosting?</p>
</blockquote>
<p>You can use any model that you like, but decision trees are experimentally the best.</p>
<p>"Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not of stable learners like support vector machines (SVM)." <a href="https://link.springer.com/chapter/10.1007/978-3-642-02326-2_51#:%7E:text=Boosting%20has%20been%20shown%20to,support%20vector%20machines%20(SVM)." rel="noreferrer">Kai Ming TingLian Zhu, Springer</a></p>
| 382
|
boosting
|
How does one decide when to use boosting over bagging algorithm?
|
https://datascience.stackexchange.com/questions/58269/how-does-one-decide-when-to-use-boosting-over-bagging-algorithm
|
<p>What kind of problem, circumstances and data makes it more suitable to apply boosting instead of bagging methods?</p>
|
<p>Bagging and boosting are two methods of implementing ensemble models.</p>
<p>Bagging: each model is given the same inputs as every other and they all produce a model</p>
<p>Boosting: the first model trains on the training data and then checks which observations it struggled most with, it passes this info to the next algorithm which assigns greater weight to the misclassified data</p>
<p>Because of this, bagging mainly reduces variance while boosting mainly reduces bias. Boosting is therefore better at improving accuracy over a single model, whilst bagging is better at reducing overfitting.</p>
<p>I would advise training a single instance of the ensemble's base learner (i.e. a single decision tree in the case of a random forest) and then seeing where improvements can be made.
<a href="https://quantdare.com/what-is-the-difference-between-bagging-and-boosting/" rel="nofollow noreferrer">Good article explaining boosting vs bagging</a></p>
| 383
|
boosting
|
Regression - How Random Forest and Gradient Boosting really works?
|
https://datascience.stackexchange.com/questions/28672/regression-how-random-forest-and-gradient-boosting-really-works
|
<p>I read a lot about random forest and gradient boosting, but I do not know how these two algorithms really work.</p>
<p>For example, see the simple picture about basketball (picture 1) from <a href="https://www.r-bloggers.com/how-random-forests-improve-simple-regression-trees/" rel="nofollow noreferrer">this link</a>: </p>
<p>How does Random Forest and how does Gradient Boosting work?
Has each tree in the random forest different trainings data AND different features?</p>
<p>Sorry about my question, but I can't find an easy non-mathematical answer.</p>
|
<p><strong>Random Forest</strong>: Build a decision tree. </p>
<ol>
<li>Sample N examples from your population with replacement (meaning examples can appear multiple times).</li>
<li>At each node do the following
<ol>
<li>Select m predictors from all predictors</li>
<li>Split based on predictor that performs best via some objective function</li>
<li>Go to the next node, select another m predictors and repeat</li>
</ol></li>
</ol>
<p>Combine all trees as an average or weighted via some scheme.</p>
<p><strong>Gradient Boosting</strong></p>
<p>One key note is that random forest trees are essentially independent of each other. Boosting algorithms add a certain dependency to the model.</p>
<ol>
<li>Initialize a model by finding the minimizer of a certain objective function</li>
<li>For each iteration
<ol>
<li>Compute the negative gradient $-\partial L(y_{i}, F(x_{i})) / \partial F(x_{i})$ for all $i = 1, \dots, n$</li>
<li>Fit a tree $h_m$ to the result from above.</li>
<li>Solve $\lambda_m = \arg\min_{\lambda} \sum_{i=1}^{n} L\left(y_{i}, F_{m-1}(x_{i}) + \lambda h_{m}(x_{i})\right)$</li>
</ol></li>
<li>Update the model via $F_m(x) = F_{m-1}(x) + \lambda_m h_{m}(x)$</li>
</ol>
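<p>The steps above can be sketched for squared-error loss, where the negative gradient in step 2.1 is simply the residual $y_i - F(x_i)$; this toy version (an editorial illustration, not part of the original answer) uses scikit-learn regression trees as the $h_m$ and a fixed step size in place of the line search:</p>

```python
# From-scratch L2 gradient boosting with small regression trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=300)

F = np.full(300, y.mean())            # step 1: L2-optimal constant model
lam = 0.1                             # fixed step size instead of a line search
for m in range(100):                  # step 2: boosting iterations
    residuals = y - F                 # negative gradient of 0.5 * (y - F)^2
    tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, residuals)
    F = F + lam * tree.predict(X)     # step 3: update the model

mse = float(np.mean((y - F) ** 2))
print("training MSE:", round(mse, 4))
```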
| 384
|
boosting
|
For what condition boosting work better than bagging in Ensemble Learner?
|
https://datascience.stackexchange.com/questions/28173/for-what-condition-boosting-work-better-than-bagging-in-ensemble-learner
|
<p>In the boosting process we get better accuracy on the training data, but there is a greater chance of overfitting. For the bagging ensemble method the chance of overfitting is lower than for boosting.
Why do we use boosting, and under what conditions?</p>
|
<p>Bagging (and feature sampling) aims to reduce variance by providing low-correlated trees. Estimators can then be aggregated together to reduce variance. The reason is simple: decision trees tend to quickly overfit in the bottom nodes. Bagging and feature sampling only address high-variance (overfitting) problems.</p>
<p>On the other hand, boosting is a meta-algorithm that aims to reduce both bias and variance by training estimators in a sequential way. Each predictor tries to reduce residuals from previous estimators.</p>
<p>In my opinion, the two meta-algorithms can't be easily compared that way. They will give you different results depending on the data distribution, potential outliers, use case... Potentially, boosting is more sensitive to outliers than bagging, for example. I think trying the two methods is your best chance, and you don't have to restrict yourself to only one model.</p>
| 385
|
boosting
|
Forecasting using Boosting methods on Non-stationary Time Series data
|
https://datascience.stackexchange.com/questions/88441/forecasting-using-boosting-methods-on-non-stationary-time-series-data
|
<p>Theoretical Noob question -</p>
<p>Can we use boosting methods to effectively forecast the future after being trained on a non-stationary time series? Or do you train/fit on the residual of the training set and then add seasonality/trend components while forecasting?</p>
<p>Thanks in advance.</p>
|
<p>As @10xAI said, a tree-based gradient boosted approach may miss the mark for time series because it cannot forecast a growing trend. However, we can apply gradient boosting methodology to any algorithm. You can mess around with some code I wrote that is based on gradient boosting and decomposition: <a href="https://github.com/tblume1992/LazyProphet" rel="nofollow noreferrer">LazyProphet</a>. The code is badly written and I think the example data pulls break now but the method itself tends to produce some decent results.</p>
<p>Essentially if we do boosting with some piecewise approach we can get new changepoints at each boosting round and update our seasonality + exogenous measures. I use binary segmentation so it ends up being very similar to a wild binary segmentation approach for change points. Round 0 the 'trend' is just the mean/median, then you measure seasonality and set your 'y' to the original time series - (trend + seasonality). Round 1 then finds the optimal point which splits the data into 2 and fits a trend estimator (could be the mean kind of like a tree output) then measures seasonality and adds these measures to what was found in round 0 to find the new residuals to fit for the next round and so on. Hopefully this image makes it clearer: <a href="https://i.sstatic.net/Dnj4x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dnj4x.png" alt="enter image description here" /></a></p>
<p>I do have a much better written and more generalized approach coming soon!</p>
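<p>A rough numpy sketch of the boosting-plus-decomposition idea described above (an editorial illustration on synthetic data, not the LazyProphet code itself): each round fits a simple trend to the current residuals, then a per-phase seasonal mean, and adds both to the running fit:</p>

```python
# Boosted trend + seasonality decomposition on a synthetic series.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
season_len = 12
y = 0.05 * t + np.sin(2 * np.pi * t / season_len) + rng.normal(scale=0.1, size=200)

fitted = np.zeros_like(y)
for _ in range(10):                       # boosting rounds
    resid = y - fitted
    trend = np.polyval(np.polyfit(t, resid, 1), t)   # crude trend estimator
    resid = resid - trend
    # seasonality as the per-phase mean of the detrended residuals
    seasonal = np.array([resid[p::season_len].mean() for p in range(season_len)])
    fitted = fitted + trend + seasonal[t % season_len]

resid_std = float((y - fitted).std())
print("residual std after boosting:", round(resid_std, 3))
```

In a real forecaster the trend estimator would be piecewise (binary segmentation for changepoints, as the answer describes) rather than a single global line.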
| 386
|
boosting
|
building a boosting model for repeated measurments
|
https://datascience.stackexchange.com/questions/94300/building-a-boosting-model-for-repeated-measurments
|
<p>I am working on an e-commerce data where the goal is to predict how will the user rate a movie from 1 to 5. We have a bunch of data from users but also from products. Some users have previously rated more than 10 (even 100s) and some less. Something like the following.</p>
<p><a href="https://i.sstatic.net/vozKpl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vozKpl.png" alt="enter image description here" /></a></p>
<p>Imagine I am using a boosting model <strong>xgboost/catboost/lightgbm/...</strong> - I can feed the model the data (excluding the user_id column) and get something that can predict the rate. But I believe this model is not at the personal level - meaning it is not tailored to learn what each user likes and is rather a generic model. So ideally there would be a tree per user.</p>
<p>I wonder how I can make this model to work at the personal level ? for example, learn that user_id 12 likes <code>x</code> and <code>y</code> but not <code>z</code> and <code>w</code> ?</p>
<p>How can I inform that model to build tree at each user ? I am sorry if I am not using the right terminology or the best title.</p>
| 387
|
|
boosting
|
training gradient boosting algorithm in python testing in Golang
|
https://datascience.stackexchange.com/questions/82636/training-gradient-boosting-algorithm-in-python-testing-in-golang
|
<p>What is the best strategy to train and save a gradient boosting model, e.g. LightGBM, XGBoost or CatBoost, in Python, but load the model in Golang and make predictions with Golang?</p>
|
<p>There's actually a few libraries that handle the inference part well. <a href="https://github.com/dmitryikh/leaves" rel="nofollow noreferrer">https://github.com/dmitryikh/leaves</a> is probably the most common one and seems to fit your need.</p>
| 388
|
boosting
|
What does "exaggeration" mean in the context of Boosting?
|
https://datascience.stackexchange.com/questions/51646/what-does-exaggeration-mean-in-the-context-of-boosting
|
<p>I am learning <a href="https://youtu.be/t-XJ6AfqULg?t=583" rel="nofollow noreferrer">boosting</a>, the machine learning ensemble meta-algorithm. The professor is grouping 3 weak classifiers into an ensemble and said that before this <a href="https://youtu.be/t-XJ6AfqULg?t=584" rel="nofollow noreferrer">time point</a> it is easy to understand. Take a dataset, train a simple model, find the smallest error rate, something like this. This idea is easy to implement, for instance, gradient descent would take logistic regression to the smallest error rate. Then, the professor talked about the data with an exaggeration of classifier errors.</p>
<p>My question: What does that mean? Can anyone give a sample for this operation based on an open dataset in Python or R?</p>
|
<p>Not sure, but "exaggeration" could be another way to talk about "overfitting". </p>
<p>Boosting models are sequential: each time you build a new tree, it uses the results of the preceding ones and focuses on the residuals (where the preceding trees don't perform well). To do this, the model exaggerates the weights of the preceding mistakes.</p>
<p>If you build too many trees, the last ones will learn noise, because the residuals will contain more noise than information. You will have an excellent ("exaggerated") error rate on your training dataset, but if you apply your model to a new dataset, the error will be higher.</p>
| 389
|
boosting
|
Small number of estimators in gradient boosting
|
https://datascience.stackexchange.com/questions/68635/small-number-of-estimators-in-gradient-boosting
|
<p>I am tuning a regression gradient boosting-based model to determine the appropriate hyperparameters using 4-folds cross validation. More specifically, I am using XGBoost and lightGBM for the models and Bayesian optimization algorithm for the hyperparameters search (hyperopt)</p>
<p>One of the hyperparameters being tuned is the number of estimators used for each model. For the 1st round of testing, I started with the number of estimators in the range of 100-300. The outcome from the hyperparameter optimization algorithm was that the best number of estimators is 100. I repeated the same analysis with the range modified to 50-300; this time the outcome was that the best number of estimators is 50. I repeated the same analysis a 3rd time, setting the range of possible numbers of estimators to 2-300 (just to check the extreme case); this time the outcome was that the best number of estimators is 2. The same outcome was noticed for both the XGBoost and LightGBM models.</p>
<p>What does this say about my model and data? Does this mean that the best model is Random forest instead of gradient boosting? is it smart to 'force' the number of estimators by fixing the hyperparameter value (e.g., 150) and tuning all other parameters?</p>
<p>The training dataset has > 0.5M samples where ~ 90% of the data is zero and the rest is a mix of positive and negative values. Should I consider using a different objective function rather than MSE?</p>
<p>The range of the tuned parameters for LightGBM are:</p>
<pre><code>'max_depth': 5 ==> 15
'colsample_bytree': .6==> .9
'subsample':.5 ==> .8
'reg_alpha': np.log(1e-4) ==> np.log(1e-1)
'reg_lambda': np.log(1e-4) ==> np.log(1e-1)
'n_estimators': 50 ==> 300
'num_leaves': 10 ==> 150, 2
'min_child_samples': 20 ==> 800
'subsample_for_bin': 20000 ==> 300000
'subsample_freq': 1 ==> 20
</code></pre>
<p>Output:
{'subsample_freq': 16, 'num_leaves': 10, 'max_depth': 6, 'colsample_bytree': 0.7577614465604802, 'subsample_for_bin': 80000, 'min_child_samples': 415, 'n_estimators': 56, 'subsample': 0.6531478473538894, 'reg_alpha': 0.025744268683186224, 'reg_lambda': 0.0001729942781329532}</p>
<p>The range of the tuned parameters for XGBoost are:</p>
<pre><code>'colsample_bytree',.6 ==> .9
'max_depth', 4 ==> 9
'n_estimators', 50 ==> 300
'reg_alpha', np.log(1e-4) ==> np.log(1e-2)
'subsample',.5 ==> .8
'gamma', 0 ==> 4
</code></pre>
<p>Output:
{'gamma': 2.4257700330471357, 'max_depth': 4, 'n_estimators': 57, 'subsample': 0.5568564232616263, 'reg_alpha': 0.0009876777981446033, 'colsample_bytree': 0.7073279309167877}</p>
|
<p>First, what is n_estimators:</p>
<blockquote>
<p>n_estimators : integer, optional (default=10)
The number of trees in the forest.</p>
</blockquote>
<p>Gradient Boosting and Random Forest are decision-tree ensembles, meaning that they fit several trees and then combine ("ensemble") them. </p>
<p>If you have n_estimators=1, you have just one tree; if you have n_estimators=3, you have 3 trees: the ensemble takes the prediction of each tree and then combines the results. </p>
<p>It is not a good idea to force hyperparameters, cross-validation is the way to go in hyperparameter selection.</p>
<p>If you make your search a bit bigger you might be able to find different results. For example, search wider ranges for the subsample, the penalties, the max depth...</p>
<p>More parameters to search will mean more computational time. </p>
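<p>As a hedged sketch of "making the search bigger", here is a randomized search over a wider space with scikit-learn (the question used hyperopt with LightGBM/XGBoost, but the idea carries over; the data and ranges are illustrative):</p>

```python
# Widen the hyperparameter space and let randomized search explore it.
from scipy.stats import loguniform, randint
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)
space = {
    "n_estimators": randint(50, 300),        # sampled, not a narrow fixed grid
    "learning_rate": loguniform(1e-3, 3e-1),
    "max_depth": randint(2, 7),
    "subsample": [0.5, 0.7, 0.9, 1.0],
}
search = RandomizedSearchCV(GradientBoostingRegressor(random_state=0),
                            space, n_iter=8, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)
```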
| 390
|
boosting
|
Improving the performance of gradient boosting classifier
|
https://datascience.stackexchange.com/questions/126473/improving-the-performance-of-gradient-boosting-classifier
|
<p>I am training a gradient boosting classifier on imbalanced data but the model is not performing very well. These are the things I have done to improve the model's performance.</p>
<ol>
<li>Balanced the data with SMOTE</li>
<li>Added more variables</li>
<li>combined features</li>
<li>polynomial feature transformation (this did not improve the model's performance)</li>
<li>cleaned the data</li>
<li>Scaled the data</li>
</ol>
<p>Except for number 4, these efforts have improved the recall and precision of the model from 34% and 60% respectively to 58% and 51% respectively. Which is good, but my aim is to improve the recall and precision to over 70%. Is there any other method or technique I can try to get a recall and precision of over 70%?</p>
|
<p>A little bit late, but you could try to do hyperparameter tuning on your gradient boosting classifier. For example, random search would be an efficient and effective choice (RandomizedSearchCV from sklearn in Python).</p>
<p>If there are missing values in your dataset, you could impute them using multiple imputation or some other type of imputation, too.</p>
<p>You could also try to get access to more data, if possible. I do not specifically know the size of your dataset, but increasing its size could help.</p>
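<p>A sketch combining the two suggestions, on hypothetical data with injected missing values: impute inside a pipeline, then tune the classifier with a randomized search scored on recall (all names and ranges here are illustrative):</p>

```python
# Imputation + randomized hyperparameter search in one pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, weights=[0.8], random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.05] = np.nan  # inject gaps

pipe = make_pipeline(SimpleImputer(strategy="median"),
                     GradientBoostingClassifier(random_state=0))
space = {"gradientboostingclassifier__n_estimators": [50, 100, 200],
         "gradientboostingclassifier__max_depth": [2, 3, 4]}
search = RandomizedSearchCV(pipe, space, n_iter=5, cv=3, scoring="recall",
                            random_state=0)
search.fit(X, y)
print("best CV recall:", round(search.best_score_, 3))
```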
| 391
|
boosting
|
Would you recommend feature normalization when using boosting trees?
|
https://datascience.stackexchange.com/questions/16225/would-you-recommend-feature-normalization-when-using-boosting-trees
|
<p>For some machine learning methods it is recommended to use feature normalization so that features are on the same scale, especially for distance-based methods like k-means or when using regularization. However, in my experience, boosted tree regression works less well when I use normalized features, for some strange reason. What is your experience with feature normalization and boosted trees? Does it in general improve the models?</p>
|
<p>Boosting trees is about building multiple decision trees. A decision tree doesn't require feature normalization, because only the ordering of the feature values matters when choosing a split threshold.</p>
<p><a href="https://en.wikipedia.org/wiki/Decision_tree_learning#Decision_tree_advantages" rel="noreferrer">Wikipedia for decision tree</a>:</p>
<p><code>Requires little data preparation. Other techniques often require data normalization....</code></p>
<p>However, it's always a good idea to normalize your features because:</p>
<ul>
<li>It's easier to visualize and interpret your model</li>
<li>It's easier to compare another model (e.g. SVM) with the same data set</li>
</ul>
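<p>The invariance claim above is easy to check (illustrative sketch): tree splits depend only on the ordering of feature values, so a monotonic rescaling such as standardization leaves the learned partitions, and hence the predictions, unchanged:</p>

```python
# Fit the same boosted-tree model on raw and standardized features and
# confirm the predictions are identical.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)   # standardized copy

pred_raw = GradientBoostingRegressor(random_state=0).fit(X, y).predict(X)
pred_scaled = GradientBoostingRegressor(random_state=0).fit(X_scaled, y).predict(X_scaled)

same = bool(np.allclose(pred_raw, pred_scaled))
print("predictions unchanged by scaling:", same)
```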
| 392
|
boosting
|
What is the formula of gradient boosting trees model?
|
https://datascience.stackexchange.com/questions/108545/what-is-the-formula-of-gradient-boosting-trees-model
|
<p>I have been reading about gradient boosting trees (GBT) in some machine learning books and papers, but the references seem to only describe the training algorithms of GBT, but they do not describe the formula of a GBT model. So, I am not sure how a GBT model predict a new instance.</p>
<p>What is the formula of a GBT model? Are there any references which describe the formula of the model?</p>
<p>Thanks
David</p>
| 393
|
|
boosting
|
Does gradient boosting algorithm error always decrease faster and lower on training data?
|
https://datascience.stackexchange.com/questions/80530/does-gradient-boosting-algorithm-error-always-decrease-faster-and-lower-on-train
|
<p>I am building another XGBoost model and I'm really trying not to overfit the data. I split my data into train and test set and fit the model with early stopping based on the test-set error which results in the following loss plot:</p>
<p><a href="https://i.sstatic.net/dLIDt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dLIDt.png" alt="enter image description here" /></a></p>
<p>I'd say this is a pretty standard plot for boosting algorithms such as XGBoost. My reasoning is that my point of interest is mostly the performance on the test set, and until XGBoost stopped training around the 600th epoch due to early stopping, the test-set loss was still decreasing. On the other hand, overfitting is sometimes defined as a situation where the train error decreases faster than the test error, and this is exactly what happens here. But my intuition is that decision-tree based techniques always drill down into the training data (for example, trees in a random forest kind of deliberately overfit the training set, and it's the role of bagging to reduce the variance). I believe gradient boosting techniques are also known for drilling down into the training dataset pretty deep, and even with a low learning rate we can't help it.</p>
<p>But I may be wrong. Therefore, I'd like to confirm that this situation is not really something to worry about and I do not overfit the data. I'd also like to ask what the perfect learning curves plot would look like with gradient boosting techniques?</p>
|
<p>You should worry about overfitting when the test error rate starts to go up again. Until then I would set it aside. Overfitting is rather about the number of parameters: e.g. when two models with the same performance have different numbers of parameters, you would prefer the one with fewer parameters to preserve the generalisation power of the model.</p>
<p>Generally the training error will be lower than the test error for obvious reasons (the model specifically <strong>sees</strong> the training data but not the test data), but even this can be reversed in some cases. For example when there is very little test data, or the test data has a different distribution than the training data.</p>
<p>A "perfect" learning curve doesn't exist, the final performance is what counts.</p>
| 394
|
boosting
|
How are regression trees fitted in gradient boosting for classification?
|
https://datascience.stackexchange.com/questions/102160/how-are-regression-trees-fitted-in-gradient-boosting-for-classification
|
<p>What I understood is that even gradient boosting for binary classification uses regression trees. The first value we calculate is constant = log(odds).
For the rest of the trees, we try to fit regression trees on the residuals.
But how do we fit the trees? Or how is the best splitting feature calculated? Is it the same as decision trees for regression or something else?</p>
| 395
|
|
boosting
|
Deep Learning vs gradient boosting: When to use what?
|
https://datascience.stackexchange.com/questions/2504/deep-learning-vs-gradient-boosting-when-to-use-what
|
<p>I have a big data problem with a large dataset (take for example 50 million rows and 200 columns). The dataset consists of about 100 numerical columns and 100 categorical columns and a response column that represents a binary class problem. The cardinality of each of the categorical columns is less than 50. </p>
<p>I want to know a priori whether I should go for deep learning methods or ensemble tree based methods (for example gradient boosting, adaboost, or random forests). Are there some exploratory data analysis or some other techniques that can help me decide for one method over the other? </p>
|
<p>Why restrict yourself to those two approaches? Because they're cool? I would always start with a simple linear classifier/regressor. So in this case a linear SVM or logistic regression, preferably with an algorithm implementation that can take advantage of sparsity due to the size of the data.</p>
<p>It will take a long time to run a DL algorithm on that dataset, and I would normally only try deep learning on specialist problems where there's some hierarchical structure in the data, such as images or text. It's overkill for a lot of simpler learning problems, takes a lot of time and expertise to learn, and DL algorithms are very slow to train.</p>
<p>Additionally, just because you have 50M rows doesn't mean you need to use the entire dataset to get good results. Depending on the data, you may get good results with a sample of a few 100,000 rows or a few million. I would start simple, with a small sample and a linear classifier, and get more complicated from there if the results are not satisfactory. At least that way you'll get a baseline. We've often found simple linear models to outperform more sophisticated models on most tasks, so you always want to start there.</p>
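To make the "start simple on a sample" advice concrete, here is a hypothetical sketch of such a baseline (the data here is a synthetic stand-in for a subsample of the real dataset; all names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a 100k-row sample of the 50M-row dataset.
rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=(n, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Cheap linear baseline; anything more complex must beat this AUC to earn its cost.
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, baseline.predict_proba(X_te)[:, 1])
```

If this baseline is already close to acceptable, the extra complexity of boosting or deep learning may not be worth it.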
| 396
|
boosting
|
Why is gradient boosting better than random forest for unbalanced data?
|
https://datascience.stackexchange.com/questions/115320/why-is-gradient-boosting-better-than-random-forest-for-unbalanced-data
|
<p>I've searched everywhere and still couldn't figure this one out.
<a href="https://365datascience.com/career-advice/job-interview-tips/machine-learning-interview-questions-and-answers/#2:%7E:text=8.%20What%20are%20the%20bias%20and%20variance%20in%20a%20machine%20learning%20model%20and%20explain%20the%20bias-variance%20trade-off?" rel="nofollow noreferrer">This post</a> mentioned that Gradient Boosting is better than Random Forest for unbalanced data. Why is that? Is Random Forest worse because of bootstrapping (perhaps this wouldn't get a stratified sample during training, idk)?</p>
<p>Any thoughts?</p>
<p>Thanks in advance</p>
|
<p>Boosting is a method of creating an ensemble by increasing the importance of wrongly predicted instances in each iteration.
Random forest creates its ensemble using bootstrap aggregating (bagging), which involves sampling with replacement.
So, when you have an imbalanced dataset, random sampling is less likely to help than increasing the importance of the misclassified (often minority-class) instances, as boosting does.</p>
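A toy sketch of that reweighting step (AdaBoost-style, hand-rolled for illustration; the labels and predictions are made up):

```python
import numpy as np

# Imbalanced toy labels: 3 positives, 7 negatives.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
y_pred = np.array([0, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # one minority instance missed

w = np.full(len(y_true), 1 / len(y_true))    # start with uniform weights
err = np.sum(w[y_true != y_pred])            # weighted error of this round
alpha = 0.5 * np.log((1 - err) / err)        # learner weight (AdaBoost formula)
w = w * np.exp(alpha * (y_true != y_pred))   # up-weight the mistakes
w = w / w.sum()                              # renormalise
```

After one round the missed minority example carries weight 0.25 while every correctly classified example carries about 0.083, which is exactly the mechanism that lets boosting focus on hard minority instances.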
| 397
|
boosting
|
Feature importance by random forest and boosting tree when two features are heavy correlated
|
https://datascience.stackexchange.com/questions/103898/feature-importance-by-random-forest-and-boosting-tree-when-two-features-are-heav
|
<p>I have asked this question <a href="https://stats.stackexchange.com/questions/550948/feature-importance-by-random-forest-and-boosting-tree-when-two-features-are-heav">here</a> but seems no one is interested in it.</p>
<p>Here is my understanding; please correct me if there is any misunderstanding:</p>
<p>Tree models are used to rank the importance of features by <code>mean decrease impurity</code> (let's ignore permutation):</p>
<p><a href="https://blog.datadive.net/selecting-good-features-part-iii-random-forests/" rel="nofollow noreferrer">https://blog.datadive.net/selecting-good-features-part-iii-random-forests/</a></p>
<p>but trees have a weakness with heavily correlated features: once one feature is used for a split, the remaining one has little uncertainty left to explain, so a tree will tend to select only one of two heavily correlated features (like LASSO).</p>
<p>I think the methodology and disadvantage of feature selection by random forest and boosting trees are all inherited from trees. But the difference is</p>
<ol>
<li><p>random forest: each feature has the opportunity to be the first split in a separate tree. Therefore the importance will tend to be evenly distributed over the features, which may lead to both features becoming unimportant (the importance is diluted).</p>
</li>
<li><p>boosting tree: boosting can be approximately regarded as continuing to split within a tree (fitting the residual), therefore it is most likely that only one feature will be split on, and only once. As a result, one of the features will be selected as important and the other one will be ignored.</p>
</li>
</ol>
<p>In summary, both random forests and boosting trees are not good ways to deal with heavily correlated features. As an improvement, some books mention that wrapper methods like <code>randomized sparse models</code> and <code>recursive feature elimination</code> can be used to reduce the impact of correlation. But</p>
<ol>
<li><p>Randomized sparse models: Random forest and boosting tree already have the feature (column) sampling, are two effects repeated?</p>
</li>
<li><p>Recursive feature elimination: Does it something like the stepwise regression?</p>
</li>
</ol>
|
<blockquote>
<p>Randomized sparse models: Random forest and boosting tree already have the feature (column) sampling, are two effects repeated?</p>
</blockquote>
<p>This question is unclear. Random Forest and GBDT are not randomized sparse models. Unless it is nomenclature that I am not familiar with.</p>
<p>In the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html" rel="nofollow noreferrer">random forest doc</a>,</p>
<pre><code>max_features{“auto”, “sqrt”, “log2”}, int or float, default=”auto”
The number of features to consider when looking for the best split:
</code></pre>
<p>This is the feature selection of random forest, it also avoids overfitting. It also appears in <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html" rel="nofollow noreferrer">GBDT</a></p>
<blockquote>
<p>Recursive feature elimination: Does it something like the stepwise regression?</p>
</blockquote>
<p>For the info about RFE you can have a look at <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html" rel="nofollow noreferrer">sklearn docs</a></p>
<blockquote>
<p>Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through any specific attribute or callable. Then, the least important features are pruned from current set of features. That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached.</p>
</blockquote>
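A minimal usage sketch of RFE with scikit-learn (synthetic data; the choice of logistic regression as the base estimator is arbitrary here):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)

# Recursively drop the least important feature until 3 remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=3, step=1)
selector.fit(X, y)

selected = selector.support_   # boolean mask of the kept features
ranking = selector.ranking_    # 1 = selected; higher = pruned earlier
```

Unlike stepwise regression, which adds or removes variables based on a significance criterion, RFE only eliminates, and it uses the estimator's own importance scores rather than p-values.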
<p>About your question, you are making some unjustified claims that are not true</p>
<ul>
<li>"Tree models are used to select the importance of features by mean decrease impurity (let's ignore permutation)" -- tree models are not used to select feature importance; they are predictors. You can inspect them afterwards to see which features they found most important.</li>
</ul>
<p>And more....</p>
| 398
|
boosting
|
Solving the dual problem of boosting using column generation
|
https://datascience.stackexchange.com/questions/80584/solving-the-dual-problem-of-boosting-using-column-generation
|
<p>In our book there is a boosting algorithm using the column generation method (Dantzig-Wolfe decomposition) to solve the dual problem.<br />
So let's say we want to solve the following primal linear problem based on the hinge loss:<br />
<span class="math-container">$$\min_{w,\xi} \Sigma_{m=1}^Mw_m + C\Sigma_{i=1}^N\xi_i$$</span>
<span class="math-container">$$ y_i\Sigma_{m=1}^MH_{im}w_m+\xi_i\geq 1 \text{ ,} i\in[N]$$</span>
<span class="math-container">$$w,\xi \geq 0,$$</span></p>
<p>where <span class="math-container">$H_{im}:=\phi(x_i),i\in[N],m\in[M]$</span> and <span class="math-container">$C>0$</span> is a weighting factor.<br />
Now the dual is:
<span class="math-container">$$\max_z \mathbb{1}^Tz$$</span>
<span class="math-container">$$\Sigma_{i=1}^N y_i H_{im} z_i \leq 1 \text{ }, m\in[M]$$</span>
<span class="math-container">$$0\leq z\leq C\mathbb{1}$$</span></p>
<p>Now in our book, it stated:</p>
<blockquote>
<p>The base learners <span class="math-container">$\phi(x)$</span> can be obtained dynamically, by solving the dual problem with optimal solution <span class="math-container">$z^*\in\mathbb{R}_+^N$</span>. For a given subset of base functions, we need to check whether there exists another function <span class="math-container">$\phi_m(x)$</span> such that the corresponding dual constraint is violated, i.e.,
<span class="math-container">$$\Sigma_{i=1}^N y_iH_{im}z_i^* >1$$</span>
Thus the left hand side has to be maximized, which can be handled by calling a base learner with weights <span class="math-container">$z^*$</span> trying to maximize the weighted number of points that are correctly classified. If the value is larger than 1, <span class="math-container">$\phi_m(x)$</span> is added to the set of base learners and the process is repeated. Note that this process will terminate, since there are only finitely many vectors <span class="math-container">$H_{\cdot m}\in\{-1,+1\}^N$</span>.</p>
</blockquote>
<p><strong>For me it is totally unclear why we are checking for functions <span class="math-container">$\phi_m(x)$</span> which violate the constraint.</strong> I mean, we want to solve the dual problem, and if so the constraints force us to use functions which classify the points incorrectly. Then we would have <span class="math-container">$yH_m<0$</span> and the constraint would be active and the dual problem could be solved. I don't see how the problem is going to be solved if we only use functions which violate the constraint.</p>
| 399
|
|
model interpretability
|
Interpreting Adaboost model results
|
https://datascience.stackexchange.com/questions/67843/interpreting-adaboost-model-results
|
<p>I'm trying to get a better grasp of model interpretability using many different kinds of models for a binary classification problem.</p>
<p><em>Quick note: By interpretability in this case, what I mean is understanding which features have the largest effect on the target. (Not necessarily understanding how the model works under the hood, although I do understand this fairly well for many models.)</em></p>
<p>For <strong>Logistic Regression</strong>, I know that you can inspect the model coefficients. Assuming that the features were normalized before training the model, the magnitude of each coefficient is proportional to its importance, and also shows the direction of the influence. (Correct?)</p>
<p>For <strong>Adaboost</strong> (in Python's <code>scikit-learn</code> package), you have access to <code>model.feature_importances()</code> which shows the importance of the feature, but not the direction. By using partial dependence plots, you can also get the direction of the influence.</p>
<p><strong>The main question:</strong> Am I correct in assuming that using feature importance with partial dependence plots for Adaboost gives a fairly good view of how to understand which features influence the target, similar to using model coefficients for Logistic Regression? Is there a better way that people usually do this?</p>
<p><em>Another quick note: I know that for right now I'm leaving aside the question of model accuracy. I'm assuming in this case that we want to build a model for the sake of interpreting relationships, not necessarily for predictive power.</em></p>
| 400
|
|
model interpretability
|
reliability of human-level evaluation of the interpretability quality of a model
|
https://datascience.stackexchange.com/questions/92227/reliability-of-human-level-evaluation-of-the-interpretability-quality-of-a-model
|
<p>Christoph Molnar, in his book <a href="https://christophm.github.io/interpretable-ml-book/" rel="nofollow noreferrer">Interpretable Machine Learning</a>, writes that</p>
<blockquote>
<p>Human level evaluation (simple task) is a simplified application level
evaluation. The difference is that these experiments are not carried
out with the domain experts, but with laypersons. This makes
experiments cheaper (especially if the domain experts are
radiologists) and it is easier to find more testers. An example would
be to show a user different explanations and the user would choose the
best one.</p>
</blockquote>
<p>(Chapter = <em>Interpretability</em> , section = <em>Approaches for Evaluating the Interpretability Quality</em>).</p>
<p>Why would anyone pick/trust a human-backed(not an expert) model over, say a domain-expert backed model or even a functionally evaluated model(i.e. accuracy/precision/recall/f1-score etc. are considerably good)?</p>
|
<p>This is specifically for interpretability of outcomes, i.e. a task where non-expert humans outperform machines.</p>
<p>There is a problem in collecting labels in machine learning, whereby labelling datasets is very expensive and time consuming (due to size of datasets & cost of experts' time).</p>
<p>So it's less about trust; it's more about practicality. Consider hiring a data scientist to develop an algorithm to automatically label a dataset based on expert heuristics (e.g. <em>"label the data as cancerous if it looks red"</em>): it might take 6 months to collect data, plan, develop & test - therefore for certain use-cases hiring 10 non-experts and telling them the heuristic might be cheaper and faster.</p>
<p>The book uses an example <em>"show a user different explanations and the human would choose the best."</em> in the context of radiology, it could be something like:
<em>"Look at the images of the patient, and compare it to this dictionary of images and diagnoses, combine multiple sources and then report what the diagnosis is"</em></p>
<p>Of course if you have an algorithm which outperforms non-experts, you might just want some expert labels to validate your algorithm, and forget the non-experts.</p>
| 401
|
model interpretability
|
How would you describe the trade-off between model interpretability and model prediction power in layman's terms?
|
https://datascience.stackexchange.com/questions/26511/how-would-you-describe-the-trade-off-between-model-interpretability-and-model-pr
|
<p>I know it depends on the data and question asked but imagine a scenario that for a given dataset you could either go for a fairly complex nonlinear model (hard to interpret though) giving you a better prediction power perhaps because the model may see the nonlinearities present in the data, or have a simple model (perhaps a linear model or something) with less prediction power but easier to interpret. Here is a very good <a href="https://www.oreilly.com/ideas/ideas-on-interpreting-machine-learning" rel="noreferrer">post</a> discussing ideas on how to interpret machine learning models.</p>
<p>Industries, while being very cautious, are slowly becoming more interested in adopting more complex models! Still, they want to understand the trade-off clearly. A data scientist is perhaps the one sitting between the data team and decision-makers, and often needs to be able to explain these things in layman's terms.</p>
<p>I am trying to brainstorm here to see what analogy you would come up with to describe such trade-off to a non-technical person?</p>
|
<p>Interesting question. I think that you can illustrate this by thinking about different use cases. The one example I've heard that I like is around lending decisions for loan applications. That's an algorithm but, because of regulations, it can't be strictly "black box". The decision has to be, effectively, interpretable because the bank has to give you a reason for decline on the loan. So, there's certainly better algos out there for loans that can give a binary result, but do you want a bank to just tell you yes or no? </p>
| 402
|
model interpretability
|
An example of explainable, but not interpretable ML model
|
https://datascience.stackexchange.com/questions/99808/an-example-of-explainable-but-not-interpretable-ml-model
|
<p><a href="https://datascience.stackexchange.com/questions/70164/what-is-the-difference-between-explainable-and-interpretable-machine-learning">This post</a> attempts to explain the difference between explainability and interpretability of ML models. However, the explanation is somewhat unclear. Can somebody provide specific examples of models that are explainable but not interpretable (or the over way round)?</p>
|
<p>I am following the <a href="https://www.nature.com/articles/s42256-019-0048-x" rel="nofollow noreferrer">definitions by Cynthia Rudin</a> (and the <a href="https://statmodeling.stat.columbia.edu/2018/10/30/explainable-ml-versus-interpretable-ml/" rel="nofollow noreferrer">article by Keith O’Rourke</a> which is based on it) here:</p>
<blockquote>
<ul>
<li>Explainable ML – using a black box and explaining it afterwards.</li>
<li>Interpretable ML – using a model that is not black box.</li>
</ul>
</blockquote>
<p>Accordingly, a decision tree, for example, is <em>interpretable</em> since it inherently makes its decision explicit through the nodes/split points. And according to above definitions it is <em>not</em> explainable since it is <em>not</em> a black box model (interpretable models are <em>not</em> a subset of explainable models according to the definitions).</p>
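As a quick illustration of that explicitness, scikit-learn can print a fitted tree's split rules directly (a sketch using the iris dataset for convenience):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# The printed rules *are* the decision procedure -- no ex-post analysis needed.
rules = export_text(tree, feature_names=list(iris.feature_names))
print(rules)
```

Every prediction of this model can be traced through a short chain of human-readable threshold comparisons, which is precisely what makes it interpretable in the above sense.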
<p>In contrast, a CNN, for example, is a black box model which implicitly encodes its decision making procedure. However, ex-post analysis is an approach to make such a model <em>explainable</em>. You can, for example, assess the feature map activations per layer to do so, as done <a href="https://becominghuman.ai/what-exactly-does-cnn-see-4d436d8e6e52" rel="nofollow noreferrer">in this article</a>:</p>
<p><a href="https://i.sstatic.net/PrTEL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PrTEL.png" alt="enter image description here" /></a></p>
<p>This analysis reveals, for example, that layer 2 gets activates by patterns such as edges and layer 3 detects more complex patterns. Obviously this ex-post analysis has a different quality than the explicitly encoded rules of a decision tree.</p>
<p>(Somewhat contradicting the given definitions you could say that explainable models require a larger degree of interpretation while interpretable models explain their decisions inherently - but that is only my wording and not how the authors of above articles phrase it.)</p>
| 403
|
model interpretability
|
Understand the interpretability of word embeddings
|
https://datascience.stackexchange.com/questions/118804/understand-the-interpretability-of-word-embeddings
|
<p>When reading the Tensorflow tutorial for <em>Word Embeddings</em>, I found two notes that confuse me:</p>
<blockquote>
<p>Note: Experimentally, you may be able to produce more interpretable embeddings by using a simpler model. Try deleting the Dense(16) layer, retraining the model, and visualizing the embeddings again.</p>
</blockquote>
<p>And also:</p>
<blockquote>
<p>Note: Typically, a much larger dataset is needed to train more interpretable word embeddings. This tutorial uses a small IMDb dataset for the purpose of demonstration.</p>
</blockquote>
<p>I don't know exactly what it means by "more interpretable" in those two notes, is it related to the result displayed by the embedding projector? And also why the interpretability will increase when reducing model's complexity?</p>
<p>Many thanks!</p>
|
<p>"Interpretable" is not very precise in this context.</p>
<p>In the case of deleting a dense layer, the embedding layer is more likely to learn the task-independent co-occurrences of words in the dataset.</p>
<p>In the second case of adding more data, the embedding layer would learn more signals because there is an increased opportunity to "average out" the noise.</p>
<p>In other words, word embeddings are more generalizable by reducing the complexity of the architecture and training on more data.</p>
| 404
|
model interpretability
|
How are the confidence intervals of a model interpreted?
|
https://datascience.stackexchange.com/questions/109916/how-are-the-confidence-intervals-of-a-model-interpreted
|
<p>I am doing some work with R and after obtaining the confusion matrix I have obtained the following metrics corresponding to a logistic regression:</p>
<pre><code>Accuracy : 0.7763
95% CI : (0.6662, 0.864)
No Information Rate : 0.5395
P-Value [Acc > NIR] : 1.629e-05
</code></pre>
<p>And it is not clear to me how CI would be interpreted.</p>
<p>Maybe it would be that the Accuracy can take values between 0.666 and 0.864?
What does it mean that the CI are so large?</p>
<p>If someone could clarify it to me I would appreciate it. Best regards.</p>
|
<p>The other answer is correct and summed it up nicely: we are 95% confident the accuracy value will fall somewhere between .666 and .864. It is a probability claim for how representative your number is.</p>
<p>To your other question, what the CI means can depend on your data. In general, the higher the confidence level, the wider the interval will be. For example, we can be 95% certain the accuracy falls between .666 and .864, but if you raise the confidence level to 99% the interval will widen further.</p>
<p>When your CI is wide, it usually means you have high variability in your results, or simply little data behind them. The more data you have and the more consistent it is, the narrower the interval will become.</p>
<p>It depends on the purpose you are crunching these numbers for, but generally speaking, a higher confidence level like 95% gives you more certainty that the interval contains the true value, while a lower level like 75% yields a narrower, more digestible interval at the cost of some of that certainty.</p>
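As a rough sketch of how such an interval is computed: below is a normal-approximation (Wald) interval. Note that R's caret reports an exact binomial interval, so the numbers will not match exactly, and the test-set size of 76 used here is an assumption for illustration:

```python
import math

def wald_ci(acc, n, z=1.96):
    """Normal-approximation 95% CI for a proportion such as accuracy."""
    se = math.sqrt(acc * (1 - acc) / n)
    return acc - z * se, acc + z * se

# e.g. accuracy 0.7763 on a hypothetical test set of 76 observations:
lo, hi = wald_ci(0.7763, 76)
```

Because the standard error shrinks roughly with 1/sqrt(n), ten times more test data yields a much tighter interval for the same accuracy, which is why a wide CI often just signals a small test set.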
<p>Hope this helps a bit! :)</p>
| 405
|
model interpretability
|
Interpreting model
|
https://datascience.stackexchange.com/questions/114248/interpreting-model
|
<p>If I trained a model (say logistic regression) using train, test and validation sets, which dataset (test or validation) should I base my interpretation on? If test and validation show some difference in performance (say 10%, where test accuracy is higher than validation), does it indicate signs of overfitting?</p>
|
<p>Yes - that is probably overfitting.</p>
<p>There is a chance that the distribution of your test set is different from your training or validation sets, but this is quite rare.</p>
<p>Add some form of regularisation to your model, which will flatten out the difference between your training and validation/testing sets.</p>
<p>Think of it like this: Every % increase in performance of your training set over your testing set, comes from your model learning to "memorise", rather than learning real patterns.</p>
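A small sketch of that effect on synthetic data (in scikit-learn's <code>LogisticRegression</code>, <code>C</code> is the inverse regularisation strength, so smaller <code>C</code> means stronger regularisation):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Few samples, many features: an easy setting in which to overfit.
X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

gaps = {}
for C in (100.0, 0.01):  # large C = weak regularisation, small C = strong
    m = LogisticRegression(C=C, max_iter=5000).fit(X_tr, y_tr)
    gaps[C] = m.score(X_tr, y_tr) - m.score(X_val, y_val)
```

The train-minus-validation accuracy gap is the "memorisation" described above; increasing the regularisation strength typically shrinks it.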
| 406
|
model interpretability
|
"Binary Encoding" in "Decision Tree" / "Random Forest" Algorithms
|
https://datascience.stackexchange.com/questions/39094/binary-encoding-in-decision-tree-random-forest-algorithms
|
<p>Is it OK to use Binary Encoding in a dataset containing categorical columns with very high cardinalities?
Some facts about my dataset:</p>
<ul>
<li>My dataset has ~170,000 rows</li>
<li>One of the categoric variables has 1,700 unique values.</li>
<li>Another one has 3,000 unique values.</li>
<li>Note that it is not practically possible to group the values of those variables into more aggregate levels.</li>
</ul>
<p>As a domain expert, I am sure those categorical columns with high cardinalities are strong candidates as predictors.
On the other hand, binary encoding surely decreases the model's interpretability.
Interpretability aside, after binary encoding, is it all right to build a decision tree / random forest model on the newly formed dataset with new variables that only indicate bits?</p>
<p>Click for a good post on <a href="https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931" rel="nofollow noreferrer" title="Visiting: Categorical Features and Encoding in Decision Trees">encoding categorical features</a></p>
|
<p>In general, it "okay" to apply to binary encode high cardinality datasets. In the sense of it will create numerical features that can be learned by a machine learning model.</p>
<p>However there are often better options, such a label encoding, frequency encoding, target encoding, or embeddings.</p>
<p>It is an empirical question which encoding scheme is best for your specific data and model. The best empirical coding scheme can be found through cross validation.</p>
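For illustration, binary encoding can be hand-rolled in a few lines (in practice the category_encoders package provides a BinaryEncoder; the column name here is hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical high-cardinality column.
s = pd.Series(["a", "b", "c", "a", "d"], name="city")

codes = s.astype("category").cat.codes.to_numpy()    # label-encode first
n_bits = int(np.ceil(np.log2(codes.max() + 1)))      # bits needed for all codes
bits = (codes[:, None] >> np.arange(n_bits)) & 1     # binary expansion per row
encoded = pd.DataFrame(bits, columns=[f"city_bit{i}" for i in range(n_bits)])
```

For a column with ~1,700 unique values this needs only ceil(log2(1700)) = 11 indicator columns, versus 1,700 for one-hot encoding, which is why it is attractive at high cardinality despite the loss of interpretability.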
| 407
|
model interpretability
|
What model is recommended: I am using text features in a regression and want to interpret coefficients
|
https://datascience.stackexchange.com/questions/55019/what-model-is-recommended-i-am-using-text-features-in-a-regression-and-want-to
|
<p>I am using the text of comments on a forum to predict how many upvotes it will get. I want to be able to say, "Reviews with X, Y, Z words are more upvoted". So to do this, I want to use text features in a regression. In particular,</p>
<p>What model should I use to maximize interpretability of coefficients? </p>
|
<p>I suppose you have a binary outcome (upvote: yes/no). In this case you could use simple linear (ols) regression (with lasso penalty). Each word (in a bag of words) is a „dummy“ here. If you look at the predicted coefficients, you can directly interpret them as „marginal effects“. Higher value means higher chance of getting an upvote (if the word is present). You can also directly see the magnitude of exp. increase in upvote probability.</p>
<p>One problem is that OLS is unconstrained wrt y. So you can end up with a predicted probability of an upvote > 1. Use logistic regression if this bothers you. Under logit you will have similar results. Positive/negative coefficients indicate whether a word increases/decreases the probability of an upvote. But because logit uses a transformation to squeeze y into an interval from zero to one, the coefficients are log-odds and you cannot directly infer marginal effects. You would need to calculate these separately.</p>
<p>I tend to say: use OLS and get quick and dirty results if you are not interested in super precise estimates but if you merely look for robust estimates for which words are important.</p>
<p>However, if you really want to do this in a sound way, you would also need to think about „interaction“ of words (positive or negative effect on y), which are „masked“ on approaches as described above. </p>
| 408
|
model interpretability
|
Decomposition and scaling and their effect on interpretability?
|
https://datascience.stackexchange.com/questions/128130/decomposition-and-scaling-and-their-effect-on-interpretability
|
<p>I have time series GDP growth rate data that I use as my Y and other X variables that I put into neural networks to make predictions. The two questions that I have are:</p>
<ol>
<li><p>When I decompose my GDP_growth variable I get a detrended variable that I model and make predictions of. If I have train and test data until 2023 and want to make predictions for 2024, then they will likely be way different from the real values, as when I removed the trend term I got smaller numbers for the new y. So how to deal with this? When I get predictions of 0.0034 because the data it is fed (after being decomposed) is between -0.05 and 0.05, interpretability is impossible, as the real values are generally bigger than 0.05 or smaller than -0.05. This is not the network's fault, as this is all the data that it sees, but the interpretability is still awful.</p>
</li>
<li><p>When I scale my X's and put new data in (for example the year is 2025 and want to use the same model trained up to 2023 to make predictions) should I train a scaler upto 2023 and apply it to the 2025 data? If so, any unseen data for 2025 will be above the max or below the min of the min-max scaler.</p>
</li>
</ol>
|
<p>This is a problem with forward forecasting and min-max scaling. Without digging into your dataset and model architecture, experiment with robust, standard, and logarithmic scaling and compare the results on your forward-forecasted data. From personal experience, I have seen significant improvements in performance by changing scalers, even using different scalers for different features. The dataset matters too, as does what you are solving for.</p>
<p>If you continue with the min-max scaler in how you approach the problem, you will need to find a way to "retrain" your model as the market data changes and deviates. The time interval for the retraining takes much analysis to figure out and is no small feat as it is influenced by far more than just time. However, if you want something that works well and is manageable, change the scaler and optimize your dataset.</p>
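A tiny sketch of the issue in question 2 (toy numbers): a min-max scaler fitted on the training period maps later, larger values outside [0, 1], while a standard scaler just yields a larger z-score:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

train = np.array([[1.0], [2.0], [3.0]])  # data up to 2023 (toy values)
new = np.array([[5.0]])                  # unseen 2025 value beyond the train max

mm = MinMaxScaler().fit(train)           # fit ONLY on the training period
std = StandardScaler().fit(train)

mm_new = mm.transform(new)               # lands outside [0, 1]
std_new = std.transform(new)             # finite z-score, no hard bounds
```

This is why min-max scaling on drifting series forces periodic refitting, whereas standardisation (or robust/log scaling) degrades more gracefully when new data exceeds the training range.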
| 409
|
model interpretability
|
How to restructure my dataset for interpretability without losing performance?
|
https://datascience.stackexchange.com/questions/60466/how-to-restructure-my-dataset-for-interpretability-without-losing-performance
|
<p><strong>What I am doing:</strong></p>
<p>I am predicting product ratings using boosted trees (XGBoost) with a dataset in this format:</p>
<p><a href="https://i.sstatic.net/UssVN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UssVN.png" alt="enter image description here" /></a></p>
<p><strong>What I want to do:</strong></p>
<p>I want to use <a href="https://github.com/slundberg/shap" rel="nofollow noreferrer">SHAP TreeExplainer</a> to interpret each prediction my model gives in terms of product attributes and user ids.</p>
<p><strong>What I am getting:</strong></p>
<p>My model is drawing all the conclusions based on <strong>product names</strong> and user ids, instead of <strong>product attributes</strong> and user ids.</p>
<p><strong>What I tried:</strong></p>
<p>I discovered that each product name has a unique combination of product attributes, i.e. by knowing the product attributes you can find its name. So my idea was to remove the <code>product_name</code> column, leaving only the attributes.</p>
<p>My reasoning was that restructuring the dataset in this way would lead to the interpretability that I wanted without any performance loss (since the product name doesn't add any new information).</p>
<p><strong>What I got:</strong></p>
<p>The model performance decreased a lot. Even with a great deal of hyperparameter tuning, I couldn't get near the performance I had when also using the product name.</p>
<p><strong>What I think may be going on:</strong></p>
<ol>
<li>My dataset is too small for the model to learn with the product attributes (10k samples, 60 attributes).</li>
</ol>
<p>or</p>
<ol start="2">
<li>Maybe there are some attributes adding bias and screwing with my model ability to generalize, leading to an overfit.</li>
</ol>
<p>I am a little skeptical about the number 2, seeing that my training loss also went up when I removed the product name.</p>
<p><strong>My question:</strong></p>
<p>So, how can I restructure my dataset? Does anybody have a clue why my model can't reach the same performance without using the product name? Any light or ideas on what I can try?</p>
|
<p>What may be happening is that your attribute predictors are weak: they are noisy, and XGBoost can't build meaningful decision trees out of the product attribute features.</p>
<p>When you add the name as a predictor, XGBoost finds some signal with respect to your target variable (rating), and so you get a better score. This may be why your name-plus-attributes model performs better than the attributes-only model.</p>
<p>So if you know from domain experience that the product attributes are only very weakly related to rating, you can conclude that this feature set is not going to give you accurate predictions. Alternatively, instead of relying on domain expertise, you can use correlation or relevant statistical tests to measure each attribute's relationship to rating; if that relationship turns out to be non-existent or very weak, you can conclude that a good model isn't possible with these features.</p>
<p>So, if possible, add more relevant features if you want to build a reasonably good model.</p>
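<p>A minimal sketch of the statistical check suggested above, on synthetic data (all column names and coefficients are made up for illustration):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
# Hypothetical data: three "attribute" columns plus a rating target
df = pd.DataFrame(rng.normal(size=(1000, 3)),
                  columns=["attr_1", "attr_2", "attr_3"])
df["rating"] = 0.3 * df["attr_1"] + rng.normal(size=1000)

# Spearman correlation is robust to monotone non-linear relationships
corr = df.drop(columns="rating").corrwith(df["rating"], method="spearman")
print(corr.abs().sort_values(ascending=False))
```

<p>Attributes whose correlation with the rating is indistinguishable from noise are unlikely to support good splits in a boosted tree model.</p>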
<p>Regards
Vik</p>
| 410
|
model interpretability
|
How to interpret this Plot of Model Loss from a BiLSTM model?
|
https://datascience.stackexchange.com/questions/84394/how-to-interpret-this-plot-of-model-loss-from-a-bilstm-model
|
<p><a href="https://i.sstatic.net/GbG2Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GbG2Y.png" alt="enter image description here" /></a></p>
<p>Hi everyone,</p>
<p>The above graph was produced by a BiLSTM model I just trained and tested. I can't seem to interpret it, as it looks very different from the reference examples I found by googling online.
The graph above has a plateau appearing at the very beginning of the validation loss. Shall I set my epochs to fewer than 20?</p>
<p>My model is like this:</p>
<pre><code>prepared_model = model.fit(X_train,y_train,batch_size=32,epochs=100,validation_data=(X_test,y_test), shuffle=False)
</code></pre>
<p>How would you interpret it? Thank you.</p>
|
<p>It looks like your train/val loss curves have a very large generalisation gap, which suggests that your model is overfitting. This simply means it does a great job making predictions for the training set but a terrible one for your validation set. This appears to be the case even in early epochs, since the validation loss appears to never improve.</p>
<p>I see you have shuffle set to False. Is that related to shuffling datapoints in the batches? The unfortunate behaviour in training may well trace back to the train and validation sets being very different. I suggest</p>
<ul>
<li>stratified train/val split</li>
<li>QA your train and val sets (e.g. class ratio in each set)</li>
<li>shuffle datapoints in your batches</li>
</ul>
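<p>For the first suggestion, a stratified split is a one-argument change in scikit-learn (toy data assumed):</p>

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)

# stratify=y keeps the class ratio (nearly) identical in both sets
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

print(y_train.mean(), y_val.mean())  # class ratios should be very close
```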
| 411
|
model interpretability
|
Model selection by Interpreting p-value of anova function
|
https://datascience.stackexchange.com/questions/11248/model-selection-by-interpreting-p-value-of-anova-function
|
<p>I am trying to interpret the p-values for model selection. Here is a sample code taken from a book (<a href="http://www-bcf.usc.edu/~gareth/ISL/" rel="nofollow noreferrer">An Intro. to stat. Learning, page 290, by Gareth James et al.</a>)</p>
<p><a href="https://i.sstatic.net/Qo8f7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qo8f7.png" alt="enter image description here"></a>

Null hypothesis: Model <code>M1</code> is sufficient to explain the data, against the alternative hypothesis that a more complex model <code>M2</code> is required.</p>
<p>With this statement, I think the p-value should be high so that the null hypothesis can be rejected and model <code>M2</code> is selected. Keeping this in mind, I think we should select Model 5 as it has the highest p-value, but the authors mention that either Model 3 or Model 4 is a good fit. I don't understand why Model 5 is not a good fit.</p>
|
<p>To understand why, we should try to understand what we're doing here.</p>
<p>Let's start with the first p-value: <code>2e-16</code>. What does that mean? This is the p-value under the null hypothesis that the linear model and the second-order polynomial model are statistically identical. We say that they are the same if the extra coefficient in the polynomial is statistically zero. This is exactly what the p-value is telling you. It reads like this: <code>if the linear model and the polynomial were truly identical, data this extreme would occur with probability 2e-16</code>. This is a very small probability, so you can conclude that they are not identical. This means the second-order polynomial is a <strong>better</strong> model than the linear model at a significance level of 5%.</p>
<p>(PS: a more correct statistical interpretation of the p-value relates to false-positive rejection, but let's not go that deep)</p>
<p>Now, using the same logic, you can conclude the third-order polynomial fits <strong>better</strong> than the second-order polynomial.</p>
<p>Next, the p-value comparing the third- and fourth-order polynomials is about 0.05. You can reject it or not; this is up to you. But if it were me, I'd simply fail to reject it because it's larger than 0.05.</p>
<p>Finally, the last p-value is about <code>0.37</code>, which is too high. This means that although the fifth-order polynomial fits better than the fourth-order one, the reduction in RSS is insufficient to justify the loss of a degree of freedom. Therefore, we say <code>the fifth-order polynomial is statistically no better than the fourth-order</code>.</p>
<p><strong>Conclusion</strong>: It's "bad" to have a large p-value because you really want to reject a null hypothesis. Statistically, we do it to control the false positive rate.</p>
<p>PS: In your example, R uses the F-test to compare the two models.</p>
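<p>For intuition, here is a from-scratch sketch of the nested-model F-test that R's <code>anova()</code> performs, on toy data (the data-generating process below is an assumption for illustration):</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=1.0, size=200)

def rss(deg):
    # Residual sum of squares of a degree-`deg` polynomial fit
    coeffs = np.polyfit(x, y, deg)
    return np.sum((y - np.polyval(coeffs, x)) ** 2)

def f_test_pvalue(deg_small, deg_big, n):
    # F-test comparing two nested polynomial models
    rss_small, rss_big = rss(deg_small), rss(deg_big)
    df_diff = deg_big - deg_small
    df_big = n - (deg_big + 1)
    F = ((rss_small - rss_big) / df_diff) / (rss_big / df_big)
    return stats.f.sf(F, df_diff, df_big)

p12 = f_test_pvalue(1, 2, len(x))  # quadratic vs linear: tiny p-value
p23 = f_test_pvalue(2, 3, len(x))  # cubic vs quadratic: no true cubic effect
print(p12, p23)
```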
| 412
|
model interpretability
|
How to interpret predicted data from a keras model
|
https://datascience.stackexchange.com/questions/65421/how-to-interpret-predicted-data-from-a-keras-model
|
<p>I tried building a keras model to classify leaves from the leaf classification dataset on Kaggle. After I compiled and trained the model, I used it to predict the names of the leaves in the testing images, but all I got was an array of integers. How can I interpret those numbers in order to get the names of the leaves?</p>
<pre><code>model = Sequential()
model.add(Dense(128, kernel_initializer="uniform", input_dim= 192, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(99, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model_history = model.fit(x=X_train,y=Y_train, epochs=500, batch_size= 32, validation_data=(X_val, Y_val), verbose=1)
predictions = model.predict(test_arr, batch_size=32, verbose=1)
computed_predictions = np.argmax(predictions, axis=1)
computed_predictions
</code></pre>
<pre><code>array([51, 50, 1, 19, 14, 3, 3, 28, 84, 8, 43, 74, 75, 10, 52, 46, 45,
73, 13, 71, 61, 68, 57, 77, 1, 70, 28, 15, 35, 70, 53, 74, 47, 50,
4, 36, 14, 69, 36, 93, 8, 32, 8, 9, 71, 70, 38, 23, 26, 18, 17,
5, 55, 94, 14, 86, 62, 33, 51, 54, 88, 56, 21, 59, 65, 11, 48, 5,
13, 4, 54, 57, 29, 7, 31, 98, 92, 84, 25, 10, 61, 43, 85, 24, 1,
2, 23, 83, 40, 22, 48, 90, 25, 21, 37, 56, 41, 95, 7, 49, 98, 77,
3, 12, 31, 84, 53, 96, 64, 72, 93, 93, 67, 30, 8, 88, 60, 87, 6,
57, 34, 34, 60, 17, 75, 27, 51, 73, 39, 23, 38, 2, 41, 61, 24, 97,
29, 28, 68, 81, 42, 51, 86, 62, 60, 52, 95, 81, 42, 96, 95, 20, 59,
35, 86, 1, 26, 38, 43, 75, 20, 60, 46, 79, 22, 79, 69, 87, 65, 97,
75, 21, 29, 21, 11, 10, 58, 94, 27, 22, 15, 45, 89, 54, 43, 5, 23,
94, 40, 49, 89, 72, 36, 11, 81, 95, 18, 91, 29, 64, 80, 6, 78, 45,
28, 9, 78, 90, 44, 89, 92, 13, 2, 59, 0, 96, 70, 32, 29, 78, 91,
55, 44, 38, 5, 60, 49, 58, 93, 67, 92, 88, 90, 79, 25, 37, 18, 0,
76, 27, 70, 71, 44, 70, 32, 90, 30, 82, 34, 30, 82, 96, 48, 65, 57,
64, 26, 53, 69, 73, 9, 3, 83, 26, 30, 63, 17, 22, 36, 63, 12, 78,
36, 14, 27, 25, 67, 38, 20, 54, 76, 69, 67, 97, 80, 44, 92, 69, 23,
21, 11, 51, 33, 77, 16, 11, 97, 1, 52, 39, 24, 52, 42, 17, 2, 73,
96, 83, 88, 9, 63, 50, 16, 37, 87, 95, 3, 35, 83, 60, 59, 58, 0,
79, 62, 38, 93, 68, 69, 46, 19, 46, 94, 18, 0, 33, 89, 40, 62, 48,
42, 6, 31, 91, 73, 81, 12, 85, 26, 6, 79, 2, 22, 35, 43, 6, 80,
78, 82, 5, 61, 37, 43, 33, 69, 56, 71, 45, 59, 42, 66, 86, 98, 83,
90, 64, 82, 11, 79, 56, 56, 49, 48, 20, 74, 15, 33, 49, 89, 44, 7,
35, 14, 55, 23, 34, 44, 32, 30, 36, 9, 72, 31, 61, 50, 82, 34, 28,
22, 92, 72, 11, 19, 4, 87, 51, 80, 39, 84, 32, 66, 36, 41, 31, 80,
4, 26, 68, 96, 20, 36, 34, 39, 56, 73, 76, 84, 7, 67, 37, 8, 95,
85, 62, 10, 65, 41, 2, 83, 86, 41, 52, 3, 49, 47, 76, 52, 11, 26,
88, 71, 45, 39, 66, 87, 75, 74, 7, 64, 65, 78, 63, 56, 21, 61, 88,
62, 91, 59, 12, 74, 15, 85, 8, 66, 57, 83, 82, 72, 58, 96, 7, 67,
66, 57, 66, 92, 35, 18, 9, 54, 91, 65, 19, 15, 10, 24, 71, 69, 48,
39, 98, 16, 19, 45, 74, 6, 69, 42, 34, 71, 47, 85, 28, 85, 47, 25,
27, 58, 68, 84, 97, 63, 97, 76, 81, 87, 77, 14, 0, 28, 41, 14, 12,
33, 86, 46, 4, 4, 47, 30, 19, 58, 13, 77, 98, 5, 49, 72, 53, 32,
77, 40, 68, 26, 92, 16, 81, 37, 14, 93, 80, 53, 46, 25, 50, 17, 37,
93, 0, 20, 54, 10, 91, 40, 81, 53, 18, 27, 1, 12, 54, 73, 15],
dtype=int64)
</code></pre>
|
<p><strong>Simply do the backwards transformation of y_pred.</strong></p>
<p>You label-encoded Y_train (i.e. each number essentially represents a leaf species). What you need to do is recover the mapping from leaf number to leaf name (that's stored in the LabelEncoder). You would proceed something like this:</p>
<blockquote>
<pre><code>from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
# Y_train_raw holds the original string labels (hypothetical name)
Y_train = le.fit_transform(Y_train_raw)
# ... build and train the model, take argmax of the predictions ...
predictions_test = le.inverse_transform(computed_predictions)
</code></pre>
</blockquote>
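<p>A self-contained round trip might look like this (the species names are made up):</p>

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

species = np.array(["acer", "quercus", "salix", "acer", "salix"])
le = LabelEncoder()
y = le.fit_transform(species)       # names -> integers (sorted alphabetically)
print(y)

preds = np.array([2, 0, 1])         # e.g. np.argmax of model.predict output
print(le.inverse_transform(preds))  # integers -> names
```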
| 413
|
model interpretability
|
How to interpret classification output - Predective model
|
https://datascience.stackexchange.com/questions/93117/how-to-interpret-classification-output-predective-model
|
<p>What is the significance of the macro avg? I'm not sure if this report signifies a good prediction by the model.</p>
<p><a href="https://i.sstatic.net/YSnAU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YSnAU.png" alt="enter image description here" /></a></p>
|
<p>Your model is decent. Predictions are better than random chance.</p>
<p>Macro average is a simple average of the performance measures (precision, recall etc) across all classes/labels.</p>
<p>Weighted average is a mean of the same measures weighted by the number of samples per class/label (the support column). (The micro average, by contrast, pools the true/false positive and negative counts across all classes before computing each metric.)</p>
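<p>A minimal sketch of the difference, using scikit-learn on made-up labels:</p>

```python
from sklearn.metrics import precision_score

y_true = [0, 0, 0, 0, 1, 1]   # imbalanced toy labels
y_pred = [0, 0, 1, 1, 1, 0]

# Macro: unweighted mean over classes; weighted: mean weighted by support
macro_p = precision_score(y_true, y_pred, average="macro")
weighted_p = precision_score(y_true, y_pred, average="weighted")
print(macro_p, weighted_p)
```

<p>With balanced classes the two averages coincide; they diverge as the class counts become more skewed.</p>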
| 414
|
model interpretability
|
Interpreting Learning Curves of models
|
https://datascience.stackexchange.com/questions/117369/interpreting-learning-curves-of-models
|
<p>I need some help understanding whether the models are overfitting and which of these we can consider "the best". On the internet I only find simple examples with learning curves, but in these cases I'm not sure how to interpret them, so thank you in advance. It's a binary classification problem, and the classes in the dataset are quite balanced.
The first model is a Random Forest with all the features of the dataset:
<a href="https://i.sstatic.net/LjZH1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LjZH1.png" alt="enter image description here" /></a></p>
<p>The second one is a KNN Classifier with all the features:
<a href="https://i.sstatic.net/fKbir.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fKbir.png" alt="enter image description here" /></a></p>
<p>Then I selected only 4 features of the dataset and I applied the models (using gridsearchcv so changing hyperparameters), this is Random Forest again:</p>
<p><a href="https://i.sstatic.net/q5svQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q5svQ.png" alt="enter image description here" /></a></p>
<p>The last one is KNN Classifier with only 4 features:</p>
<p><a href="https://i.sstatic.net/G5r9n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G5r9n.png" alt="enter image description here" /></a></p>
<p>Do the first 2 models have some problems looking at the learning curves? And looking at the last 2 models, they have improved comparing to the first 2? I see that the accuracy is worst in the second cases but maybe they are more solid models?</p>
|
<p>Difficult to tell without test data.</p>
<p>Validation data is important to fine-tune the hyperparameters and improve your model.</p>
<p>The final and most important result would be done with test data, i.e. unbiased data directly from the production environment.</p>
<p>In addition to that, a model is more robust if the validation result is closer to the training result, and if the score is neither too low (~0.7) nor too high (~0.95). Consequently, the third case "RF Classifier con best features" seems more reliable. But I could be wrong because I don't know the data and the results with test data or in the production environment.</p>
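<p>If it helps, curves like the ones in the question can be reproduced with scikit-learn's <code>learning_curve</code>; the setup below is a toy sketch, not the asker's data:</p>

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
sizes, train_scores, val_scores = learning_curve(
    RandomForestClassifier(n_estimators=50, random_state=0),
    X, y, cv=5, train_sizes=np.linspace(0.2, 1.0, 5))

# The train/validation gap is the quantity to watch for robustness
gap = train_scores.mean(axis=1) - val_scores.mean(axis=1)
print(gap)
```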
| 415
|
model interpretability
|
How to interpret Shapley value plot for a model?
|
https://datascience.stackexchange.com/questions/65307/how-to-interpret-shapley-value-plot-for-a-model
|
<p>I was trying to use <code>Shapley value</code> approach for understanding the model predictions. I am trying this on a <code>Xgboost</code> model. My plot looks like as below</p>
<p><a href="https://i.sstatic.net/lGb7V.png" rel="noreferrer"><img src="https://i.sstatic.net/lGb7V.png" alt="enter image description here"></a></p>
<p>Can someone help me interpret this? Or confirm my understanding is correct?</p>
<p>My interpretation</p>
<p><strong>1)</strong> High values of <code>Feature 5</code> (indicated by rose/purple combination) - leads to prediction 1</p>
<p><strong>2)</strong> Low values of <code>Feature 5</code> (indicated by blue) - leads to prediction 0</p>
<p><strong>3)</strong> Step 1 and 2 applies for <code>Feature 1</code> as well</p>
<p><strong>4)</strong> Low values of <code>Feature 6</code> leads to prediction 1 and high values of <code>Feature 6</code> leads to Prediction 0</p>
<p><strong>5)</strong> Low values of <code>Feature 8</code> lead to prediction 1 and high values of <code>Feature 8</code> lead to prediction 1 as well. If a point is at the extreme of the x-axis (meaning from x(1,2) or x(2,3)), it means the low values (in this case) of this feature have a huge impact on prediction 1. Am I right?</p>
<p><strong>6)</strong> Why don't I see all my 45 features in the plot irrespective of the importance/influence. Shouldn't I be seeing <code>no color</code> when they have no importance. Why is that I only see around 12-14 features?</p>
<p><strong>7)</strong> What role does <code>Feature 43</code>,<code>Feature 55</code>, <code>Feature 14</code> play in prediction output?</p>
<p><strong>8)</strong> Why is the SHAP value range from <code>-2,2</code>?</p>
<p>Can someone help me with this?</p>
|
<p><strong>1. 2.</strong> Not always; there are some blue points as well.</p>
<p><strong>3. 4. 5.</strong> yes</p>
<p><strong>6.</strong> It depends on the SHAP plot you are using; for some of them the default is to suppress less important features and not plot them at all.</p>
<p><strong>7.</strong> They are discriminative, but not as much; you can cross-check them with some other feature selection technique and decide if you want to keep them.</p>
<p><strong>8.</strong> The <strong>range</strong> of the SHAP values is only bounded by the output magnitude range of the model you are explaining. The SHAP values will sum up to the current output, but when there are cancelling effects between features, some SHAP values may have a larger magnitude than the model output for a specific instance. If you are explaining a model that outputs a probability, the range of the values will be -1 to 1, because the range of the model output is 0 to 1. If you are explaining a model that outputs a real number or log odds, the SHAP values could be larger, since the model outputs can be larger.</p>
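<p>Point 8 can be made concrete with a linear model, where the exact SHAP value of feature <em>i</em> for one sample is <em>w<sub>i</sub>(x<sub>i</sub> − E[x<sub>i</sub>])</em> (assuming independent features); the values sum to the prediction minus the average prediction, so their range is tied to the model's output range, not to any fixed interval:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w = np.array([2.0, -1.0, 0.5])
f = X @ w  # a linear "model" with no intercept

# Exact SHAP values for a linear model with independent features
phi = w * (X - X.mean(axis=0))

# Per-sample SHAP values sum to f(x) minus the average prediction
print(np.allclose(phi.sum(axis=1), f - f.mean()))
```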
| 416
|
model interpretability
|
how to interpret predictions from model?
|
https://datascience.stackexchange.com/questions/37722/how-to-interpret-predictions-from-model
|
<p>I'm working on a multi-classification problem - <a href="https://www.kaggle.com/alxmamaev/flowers-recognition" rel="noreferrer">Recognizing flowers</a>. </p>
<p>I trained the model and achieved an accuracy of 0.99.</p>
<p>To predict, I did:</p>
<pre><code>a = model.predict(train[:6])
</code></pre>
<p>output:</p>
<pre><code>array([[5.12799371e-18, 2.08305119e-05, 1.14476855e-07, 1.28556788e-02,
1.46144101e-08, 1.85072349e-05],
[7.72907813e-32, 7.86612819e-09, 8.08554124e-13, 1.87227300e-08,
4.61950422e-10, 6.42609745e-02],
[0.00000000e+00, 1.34313246e-02, 9.67072342e-13, 2.82699081e-12,
1.10958222e-10, 4.68058548e-14],
[7.75535319e-27, 6.51194032e-09, 2.49026186e-07, 1.88803018e-08,
3.77964647e-03, 7.01414028e-05],
[7.24011743e-22, 5.85804628e-07, 1.61177505e-09, 2.27746829e-01,
5.44432410e-09, 3.94427252e-06],
[1.81492225e-15, 3.36600904e-04, 4.39262622e-05, 8.63518100e-04,
9.29966700e-06, 9.75337625e-02]], dtype=float32)
</code></pre>
<p>How do I interpret this? How do I get the label it predicted? I have five labels 0-4, which are assigned to 5 types of flowers.</p>
<p>My notebook is <a href="https://gist.github.com/jagadeesh-kotra/3da3f35bbed15f3ac125ddf3053e68a5" rel="noreferrer">here</a>. </p>
<p>What am I doing wrong here?</p>
|
<p>Alright so I rewrote some parts of your model such that it makes more sense for a classification problem. The first and most obvious reason your network was not working is the number of output nodes you selected. For a classification task the number of output nodes should be the same as the number of classes in your data. In this case we have 5 kinds of flowers, thus 5 labels which I reassigned to <span class="math-container">$y \in \{0, 1, 2, 3, 4\}$</span>, so we will have 5 output nodes. </p>
<p>So let's go through the code. First we bring the data into the notebook using the code you wrote.</p>
<pre><code>from os import listdir
import cv2
daisy_path = "flowers/daisy/"
dandelion_path = "flowers/dandelion/"
rose_path = "flowers/rose/"
sunflower_path = "flowers/sunflower/"
tulip_path = "flowers/tulip/"
def iter_images(images,directory,size,label):
    try:
        for i in range(len(images)):
            img = cv2.imread(directory + images[i])
            img = cv2.resize(img,size)
            img_data.append(img)
            labels.append(label)
    except:
        pass
img_data = []
labels = []
size = 64,64
iter_images(listdir(daisy_path),daisy_path,size,0)
iter_images(listdir(dandelion_path),dandelion_path,size,1)
iter_images(listdir(rose_path),rose_path,size,2)
iter_images(listdir(sunflower_path),sunflower_path,size,3)
iter_images(listdir(tulip_path),tulip_path,size,4)
</code></pre>
<p>We can visualize the data to get a better idea of the distribution of the classes.</p>
<pre><code>import matplotlib.pyplot as plt
%matplotlib inline
n_classes = 5
training_counts = [None] * n_classes
testing_counts = [None] * n_classes
for i in range(n_classes):
    training_counts[i] = len(y_train[y_train == i])/len(y_train)
    testing_counts[i] = len(y_test[y_test == i])/len(y_test)
# the histogram of the data
train_bar = plt.bar(np.arange(n_classes)-0.2, training_counts, align='center', color = 'r', alpha=0.75, width = 0.41, label='Training')
test_bar = plt.bar(np.arange(n_classes)+0.2, testing_counts, align='center', color = 'b', alpha=0.75, width = 0.41, label = 'Testing')
plt.xlabel('Labels')
plt.xticks((0,1,2,3,4))
plt.ylabel('Count (%)')
plt.title('Label distribution in the training and test set')
plt.legend(bbox_to_anchor=(1.05, 1), handles=[train_bar, test_bar], loc=2)
plt.grid(True)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/2yJNJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2yJNJ.png" alt="enter image description here"></a></p>
<p>We will now transform the data and the labels to matrices.</p>
<pre><code>import numpy as np
data = np.array(img_data)
data.shape
data = data.astype('float32') / 255.0
labels = np.asarray(labels)
</code></pre>
<p>Then we will split the data. Notice that you do not need to shuffle the data yourself since sklearn can do it for you. We also one-hot encode the labels, because the model below is compiled with <code>categorical_crossentropy</code>.</p>
<pre><code>from sklearn.model_selection import train_test_split
from keras.utils import to_categorical

# Split the data
x_train, x_test, y_train, y_test = train_test_split(data, labels, test_size=0.33, shuffle=True)

# One-hot encode the labels (used later as y_train_binary / y_test_binary)
y_train_binary = to_categorical(y_train, 5)
y_test_binary = to_categorical(y_test, 5)
</code></pre>
<p>Let's construct our model. I changed the last layer to use the softmax activation function. This will allow the outputs of the network to sum up to a total probability of 1. This is the usual activation function to use for classification tasks. </p>
<pre><code>from __future__ import print_function
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten, Convolution2D, MaxPool2D
model = Sequential()
model.add(Convolution2D(32, (3,3),input_shape=(64, 64, 3),activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128,activation='relu'))
model.add(Dense(5,activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
</code></pre>
<p>Then we can train our network. This will result in about 60% accuracy on the test set. This is pretty good considering the baseline for this task is 20%.</p>
<pre><code>batch_size = 128
epochs = 10
model.fit(x_train, y_train_binary,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test_binary))
</code></pre>
<p>After the model is trained you can predict new instances. Don't forget that the network expects input with the same shape it was trained on; that's why I use the <code>[0:1]</code> slice, which keeps the dimensionality of the matrix.</p>
<pre><code>print('Predict the classes: ')
prediction = model.predict_classes(x_test[0:1])
print('Predicted class: ', prediction)
print('Real class: ', y_test[0:1])
</code></pre>
<p>This gives</p>
<blockquote>
<p>Predict the classes: 1/1 <br/>[==============================] - 0s
6ms/step <br/>Predicted class: [4] <br/>Real class: [4]</p>
</blockquote>
<h1>Some suggestions</h1>
<p>The model you are currently using is the most common one for MNIST. However, that data has only a single channel, so it doesn't need as much capacity. You can increase performance by increasing the complexity of your model, or by reducing the complexity of your data; for example, you can train on the grayscale equivalent of the images, reducing the problem to a single channel. </p>
| 417
|
model interpretability
|
Can numerical encoding really replace one-hot encoding?
|
https://datascience.stackexchange.com/questions/106673/can-numerical-encoding-really-replace-one-hot-encoding
|
<p>I am reading these articles (see below), which advocate the use of numerical encoding rather than one hot encoding for better interpretability of feature importance output from ensemble models. This goes against everything I have learnt - won't Python treat nominal features (like cities, car make/model) as ordinal if I encode them into integers?</p>
<p><a href="https://krbnite.github.io/The-Quest-for-Blackbox-Interpretability-Take-1/" rel="nofollow noreferrer">https://krbnite.github.io/The-Quest-for-Blackbox-Interpretability-Take-1/</a></p>
<p><a href="https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931" rel="nofollow noreferrer">https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931</a></p>
| 418
|
|
model interpretability
|
Interpretable models apart from Logistic Regression
|
https://datascience.stackexchange.com/questions/66066/interpretable-models-apart-from-logistic-regression
|
<p>I am wondering about other interpretable models apart from logistic regression.</p>
<p>I am looking for models that can interpret the effect on the target variable by unit change in any feature variable.</p>
<p>I was thinking tree-based models may help but I'm not sure.</p>
| 419
|
|
model interpretability
|
How to interpret RMSE to evaluate a regression model
|
https://datascience.stackexchange.com/questions/126593/how-to-interpret-rmse-to-evaluate-a-regression-model
|
<p>I am trying to evaluate a regression model (random forests); my understanding is that R^2 (coefficient of determination) is not a good measure of fit since my dataset is non-linear. It looks like RMSE is the usual choice, but how do I know what is a 'good' value? Furthermore, it seems that RMSE is sensitive to the scale of the data? I don't have a baseline model to compare to, unfortunately.</p>
<p>edit: perhaps I should use learning curves to determine if it is underfit or overfit?</p>
|
<p>You didn’t tell us about the use case or business domain for your problem. For example, if you were modeling battery energy consumption in noise canceling headphones, root mean squared error would be a natural loss function for your model;
it falls out of the power equations.</p>
<p>Figure out what matters to the business. Write it down. Then pick a loss function that steers the model in the direction the business cares about.</p>
<hr />
<p>All that you told us about the problem you’re solving is that it involves a “non-linear” dataset. It’s not obvious that RMSE is a natural
measure for your problem.
You did not describe nonlinear equations of motion, or other relevant description of the situation you’re examining.</p>
<p>Often it can be convenient to precondition inputs with a nonlinear transform.
For example, if you were looking at impact velocity or crater size in some trebuchet observations, SQRT might be a natural fit.
If you’re looking at market cap of firms in a given vertical, or home prices, or salaries, distributions will skew toward large figures, and a LOG transform may prove useful for taming that long tail.</p>
<p>In the problems I’ve worked on, interpreting root mean square error has never been an issue. Usually it corresponds to heat load dissipated by a resistor, or size of a power supply in a control system. If error magnitude suggests we could exceed rated load of the component, then we look for another model solution, or buy a bigger component.</p>
<p>Clearly we can rank order models by RMSE. But if that’s not interpretable enough for your use case, and there’s not some obvious feature in the input space that suggests a ratio against the error output, then maybe RMSE isn’t appropriate to the problem. Adopting a metric “because everyone else is using it” doesn’t sound like a principled approach.</p>
<hr />
<blockquote>
<p>predict housing prices .... What would be an appropriate loss function?</p>
</blockquote>
<p>Well now it's pretty obvious what matters to the business.
For a large firm it's just profit, or capital at risk.
So <a href="https://en.wikipedia.org/wiki/Mean_absolute_error" rel="nofollow noreferrer">MAE</a>,
possibly with asymmetric skewing so a "loss" surprise weighs
more heavily than a windfall "profit" surprise.
Sounds like a pretty linear measure.</p>
<p>Predicting the error bars around an estimate might be more
valuable than the actual estimate.
In the face of large uncertainty, choose not to transact.</p>
<p>For a small firm, existential risk ("we can't make payroll next month")
may be the more interesting measure.
It's very non-linear, either we're in business next month or we aren't.
So training a model to identify low variance predicted transactions
could be the focus.
High recall might not matter if the market is large enough to be choosy,
as long as we have fairly high precision on the deals we choose to participate in.
A loss function like RMSE <em>can</em> be helpful here,
in the sense that it discourages large errors more than MAE would.
I can't offer a principled theory for why we should square such dollar errors,
instead of, say, cubing them.
If we're going for "time value of money", then maybe EXP plays nicely
with compound interest and with the opportunity cost of alternative
investments we didn't make.</p>
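<p>That RMSE punishes large errors more than MAE is easy to see numerically (the error figures below are made up):</p>

```python
import numpy as np

errs_small = np.array([5.0, 5.0, 5.0, 5.0])   # uniform small errors
errs_spiky = np.array([0.0, 0.0, 0.0, 20.0])  # same total error, one big miss

def mae(e):
    return np.mean(np.abs(e))

def rmse(e):
    return np.sqrt(np.mean(e ** 2))

print(mae(errs_small), mae(errs_spiky))    # identical under MAE
print(rmse(errs_small), rmse(errs_spiky))  # RMSE doubles on the spiky case
```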
<p>I have worked with models of house price and of propensity for owner to sell.
I can tell you that nailing it within one standard deviation of ± $5k,
in the U.S. market, is essentially impossible. There's a lot of things
happening in the market, not all of them observable by a model.</p>
| 420
|
model interpretability
|
How can we assess the importance of the features even if we ended up applying PCA?
|
https://datascience.stackexchange.com/questions/67455/how-can-we-assess-the-importance-of-the-features-even-if-we-ended-up-applying-pc
|
<p>There are multiple techniques to analyze the feature importance (permutations, SHAP values, etc).</p>
<p>It is essential, in order to improve the interpretability of the model, that we can somehow map the transformed features back to the original ones. For example, when we assess the feature importance of encoded features, we can see which original variables the predominant encoded features at the top score/weight entries correspond to.</p>
<p>But, how can we assess the feature importance when we apply feature reduction techniques like PCA that can dramatically reduce the number of features and the interpretability of the model?</p>
<p>Thanks.</p>
| 421
|
|
model interpretability
|
How to interpret the Mean squared error value in a regression model?
|
https://datascience.stackexchange.com/questions/90396/how-to-interpret-the-mean-squared-error-value-in-a-regression-model
|
<p>I'm working on a simple linear regression model to predict 'Label' based on 'feature'. The two variables seem to be highly correlated (corr = 0.99). After splitting the data sample into training and testing sets, I make predictions and evaluate the model.</p>
<pre><code>metrics.mean_squared_error(Label_test,Label_Predicted) = 99.17777494521019
metrics.r2_score(Label_test,Label_Predicted) = 0.9909449021176512
</code></pre>
<p>Based on the r2_score (1 being the highest possible value), my model is performing very well. But when it comes to the mean squared error, I don't know if it shows that my model is performing well or not.</p>
<ol>
<li><p>How can I interpret MSE here ?</p>
</li>
<li><p>If I had multiple algorithms and the same data sets, after computing MSE or RMSE for all models, how can I tell which one is better in describing the data ?</p>
</li>
<li><p>R2
score is 0.99, is this suspicious ? Or expected since the label and
feature are highly correlated?</p>
<pre><code> Feature Label
0 56171.757812 56180.234375
1 56352.500000 56363.476562
2 56312.539062 56310.859375
3 56432.539062 56437.460938
4 56190.859375 56199.882812
... ... ...
24897 56476.484375 56470.742188
24898 56432.148438 56432.968750
24899 56410.312500 56428.437500
24900 56541.093750 56541.015625
24901 56491.289062 56499.843750
</code></pre>
</li>
</ol>
<p><a href="https://i.sstatic.net/xVHCo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVHCo.png" alt="enter image description here" /></a></p>
|
<p>Whether your model is performing well or not depends on your business case: you might have a tiny RMSE or a great-looking score on whatever metric you are using, but if that is not enough to solve the business problem, then the model is not performing well.</p>
<ol>
<li><p>MSE is just that: the Mean Squared Error, i.e. the average of the squared differences between the predicted and actual values.</p>
</li>
<li><p>Both MSE and RMSE measure by how much the predicted result deviates from the actual one. Because of the squared term, more weight is given to larger errors, and because of the square root, RMSE is in the same units as the dependent variable. MAE, the Mean Absolute Error, is another useful metric to look at when you are evaluating a regression model; it is also easier to interpret.</p>
</li>
<li><p>Given your data, R-squared seems fine to me.</p>
</li>
</ol>
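As a concrete illustration of points 1 and 2, here is a minimal sketch (plain Python, with toy numbers rather than the asker's data) that computes all three metrics and shows how a single large error inflates MSE much more than MAE:

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAE for paired lists of actual and predicted values."""
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / len(errors)
    return mse, math.sqrt(mse), sum(abs(e) for e in errors) / len(errors)

y_true = [100.0, 100.0, 100.0, 100.0]
one_big_miss = [90.0, 100.0, 100.0, 100.0]    # total absolute error 10
spread_misses = [97.5, 97.5, 97.5, 97.5]      # total absolute error also 10

print(regression_metrics(y_true, one_big_miss))   # (25.0, 5.0, 2.5)
print(regression_metrics(y_true, spread_misses))  # (6.25, 2.5, 2.5)
```

Both prediction sets have the same MAE, but the concentrated error gives a much larger MSE; the RMSE (5.0 vs 2.5) stays in the units of the target, which is what makes it easier to relate to the data than the MSE.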
| 422
|
model interpretability
|
Interpreting a curve val_loss and loss in keras after training a model ; help
|
https://datascience.stackexchange.com/questions/111721/interpreting-a-curve-val-loss-and-loss-in-keras-after-training-a-model-help
|
<p>I need some help in Interpreting a curve val_loss and loss in keras after training a model</p>
<p>These are the learning curves:
<a href="https://i.sstatic.net/gdmMw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gdmMw.png" alt="after 100 epoch of training " /></a></p>
|
<p>The training loss decreases while the validation loss increases, which is a clear sign of <strong>overfitting</strong>. This means that your model is too specific to the training dataset and does not generalize to the testing dataset. To solve that problem you can decrease the complexity of the model or add regularization.</p>
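One standard remedy alongside explicit regularization is early stopping: monitor the validation loss and halt once it stops improving. Keras ships this as the `EarlyStopping` callback; the underlying logic is roughly the following (a plain-Python sketch, with `patience` being the number of epochs to wait for an improvement):

```python
def early_stopping_epoch(val_losses, patience=3):
    """Epoch at which to stop: the first epoch where the validation loss
    has not improved for `patience` consecutive epochs (or the last one)."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1

# Validation loss falls, then rises: training should stop near the minimum
print(early_stopping_epoch([0.9, 0.7, 0.6, 0.65, 0.7, 0.8, 0.9], patience=2))  # 4
```

Restoring the weights from the best epoch (as `EarlyStopping(restore_best_weights=True)` does) then gives you the model before it started memorizing the training set.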
| 423
|
model interpretability
|
How do I interpret the output of linear regression model in R?
|
https://datascience.stackexchange.com/questions/77823/how-do-i-interpret-the-output-of-linear-regression-model-in-r
|
<p><a href="https://i.sstatic.net/mHbeh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mHbeh.png" alt="enter image description here" /></a></p>
<p>I have the following linear regression model and its analysis. There are a few errors, but I am not very sure about the errors. I have not succeeded in finding them so far.</p>
<p>First, the 95% confidence interval for the slope should be</p>
<p><a href="https://i.sstatic.net/aMVXC.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aMVXC.gif" alt="enter image description here" /></a></p>
<p>So the calculation is wrong.</p>
<p>Second, I'm not sure about the interpretation of the confidence interval. How would you interpret it in the context ?</p>
|
<p>So, the question is centred around the meaning behind a confidence interval.</p>
<p>The main principle behind confidence intervals is the following:</p>
<p>It is very costly and time-inefficient (if not impossible) to sample the whole population (i.e. all UCLA students from China and Hong Kong) and measure their cultural adjustment. Therefore, we can take a sample from this population (i.e. 200 students).</p>
<p>From this sample we can then develop a linear model between input features (i.e. country) and the level of cultural adjustment and establish the slope of the model.</p>
<p>The slope alone only tells us the slope of the linear model trained on these 200 students. What we want is a range in which we can be at least 95% confident that the population slope lies. Hence, confidence intervals.</p>
<p>Here is an article on confidence intervals for further reference: <a href="https://www.simplypsychology.org/confidence-interval.html" rel="nofollow noreferrer">https://www.simplypsychology.org/confidence-interval.html</a></p>
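To make the sampling idea concrete, here is a small sketch that estimates a 95% confidence interval for a regression slope by the percentile bootstrap (plain Python with invented data; a real analysis would normally use the t-based interval reported by the regression output):

```python
import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

def bootstrap_slope_ci(xs, ys, n_boot=2000, alpha=0.05, seed=0):
    """Resample (x, y) pairs with replacement; take the alpha/2 and
    1 - alpha/2 quantiles of the resampled slopes."""
    rng = random.Random(seed)
    n = len(xs)
    slopes = []
    while len(slopes) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
        if len(set(bx)) > 1:               # skip degenerate resamples
            slopes.append(ols_slope(bx, by))
    slopes.sort()
    return slopes[int(alpha / 2 * n_boot)], slopes[int((1 - alpha / 2) * n_boot) - 1]

# Invented sample of 20 observations with a true slope of 2 plus noise
rng = random.Random(42)
xs = list(range(20))
ys = [2 * x + rng.gauss(0, 1) for x in xs]
lo, hi = bootstrap_slope_ci(xs, ys)
print(lo, hi)  # a narrow interval around the fitted slope
```

The interpretation is exactly the one in the answer: if we repeated the whole sampling-and-fitting procedure many times, roughly 95% of the intervals built this way would contain the true population slope.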
| 424
|
model interpretability
|
How to interpret metrics of a model after scaling the data
|
https://datascience.stackexchange.com/questions/51323/how-to-interpret-metrics-of-a-model-after-scaling-the-data
|
<p>I have a GradientBoostingRegressor from <code>scikit-learn</code> which I trained. Afterwards, I obviously would like to know how good the model is. So, on a non-scaled dataset I would just use the <code>mean_squared_error</code> function from scikit and it would output a certain value that made sense (regarding the dataset).</p>
<p>Now, I scaled (/ transform) the targets in my dataset using the scikit <code>QuantileTransformer(output_distribution='uniform')</code>. The target is now scaled between 0 - 1. This is fine during training etc. </p>
<p>After training the model, I ran the following code to get a few metrics:</p>
<pre><code>test_pred = gb.predict(X_test)
mse_test = mean_squared_error(y_test, test_pred)
print("RMSE on Test:", np.sqrt(mse_test))
print("MSE on Test:", mse_test)
mae_test = mean_absolute_error(y_test, test_pred)
print("MAE on Test:", mae_test)
</code></pre>
<p>Because the target values are scaled, the output is something similar to this:</p>
<pre><code>RMSE on Test: 0.23563730007705744
MSE on Test: 0.05552493718760521
MAE on Test: 0.19235478752773819
</code></pre>
<p>I assumed that I could get the 'actual' non-scaled metrics back by applying the <code>QuantileTransformer.inverse_transform</code> function to the output.</p>
<p>So then I got:</p>
<pre><code>RMSE on Test: 2231.21330222
MSE on Test: 807.28588575
MAE on Test: 1888.23406628
</code></pre>
<p>Which doesn't seem very <em>correct</em> to me. Normally, the RMSE would be smaller than the MSE, but that isn't the case here: taking the square root of an MSE value under 1 gives a result larger than the MSE itself. Also, the MAE should probably be smaller than the MSE.</p>
<p>My question is, how do you interpret those scaled metric values? Is the inverse_transform output correct? How do I get correct, non-scaled values for the metrics?</p>
<p>I'd appreciate some help on this.</p>
<p>Thanks.</p>
<p>Edit:
The <code>QuantileTransformer</code> is only an example. The question also applies to the <code>MinMaxScaler</code> and other scalers in general.</p>
|
<p>I solved this issue with the help of @Franziska W. Thanks!</p>
<p>I currently reverse transform y_test and y_pred and then calculate the metrics as following:</p>
<pre><code># qt_y is a QuantileTransformer instance, gb is a GradientBoostingRegressor
y_pred = gb.predict(X_test)
inv_y_pred = qt_y.inverse_transform(y_pred.reshape(-1, 1))
inv_y_test = qt_y.inverse_transform(y_test.reshape(-1, 1))
mse_test = mean_squared_error(inv_y_test, inv_y_pred)
print("RMSE on Test:", np.sqrt(mse_test))
print("MSE on Test:", mse_test)
mae_test = mean_absolute_error(inv_y_test, inv_y_pred)
print("MAE on Test:", mae_test)
</code></pre>
| 425
|
model interpretability
|
How do standardization and normalization impact the coefficients of linear models?
|
https://datascience.stackexchange.com/questions/80624/how-do-standardization-and-normalization-impact-the-coefficients-of-linear-model
|
<p>One benefit of creating a linear model is that you can look at the coefficients the model learns and interpret them. For example, you can see which features have the most predictive power and which do not.</p>
<p>How, if at all, does feature interpretability change if we normalize (scale all features to 0-1) all our features vs. standardizing (subtract mean and divide by the standard deviation) them all before fitting the model.</p>
<p>I have read elsewhere that you 'lose feature interpretability if you normalize your features' but could not find an explanation as to why. If that is true, could you please explain?</p>
<p>Here are two screenshots of the coefficients for two multiple linear regression models I built. It uses Gapminder 2008 data and statistics about each country to predict its fertility rate.</p>
<p>In the first, I scaled features using StandardScaler. In the second, I used MinMaxScaler. The Region_ features are categorical and were one-hot encoded and not scaled.</p>
<p>Not only did the coefficients change based on different scaling, but their ordering (of importance?) did too! Why is this the case? What does it mean?</p>
<p><a href="https://i.sstatic.net/ErjUO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ErjUO.png" alt="Linear Model coefficients using StandardScaler for preprocessing" /></a></p>
<p><a href="https://i.sstatic.net/pVatE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pVatE.png" alt="Linear Model coefficients using MinMaxScaler for preprocessing" /></a></p>
|
<p>When you have a linear regression (without any scaling, just plain numbers) and you have a model with one explanatory variable <span class="math-container">$x$</span> and coefficients <span class="math-container">$\beta_0=0$</span> and <span class="math-container">$\beta_1=1$</span>, then you essentially have a (estimated) function:</p>
<p><span class="math-container">$$y = 0 + 1x .$$</span></p>
<p>This tells you that when <span class="math-container">$x$</span> goes up (down) by one unit, <span class="math-container">$y$</span> goes up (down) by one unit. In this case it is just a linear function with slope 1.</p>
<p>Now when you scale <span class="math-container">$x$</span> (the plain numbers) like:</p>
<pre><code>scale(c(1,2,3,4,5))
[,1]
[1,] -1.2649111
[2,] -0.6324555
[3,] 0.0000000
[4,] 0.6324555
[5,] 1.2649111
</code></pre>
<p>you essentially have different units or a different scale (with mean=0, sd=1).</p>
<p>However, the way OLS works will be the same: it still tells you "if <span class="math-container">$x$</span> goes up (down) by one unit, <span class="math-container">$y$</span> will change by <span class="math-container">$\beta_1$</span> units". So in this case (given a different scale of <span class="math-container">$x$</span>), <span class="math-container">$\beta_1$</span> will be different.</p>
<p>The interpretation here would be "if <span class="math-container">$x$</span> changes by one standard deviation...". This is very handy when you have several <span class="math-container">$x$</span> with different units. When you standardise all the different units, you make them comparable to some extent, i.e. the <span class="math-container">$\beta$</span> coefficients of your regression become comparable in terms of how strong the variables' impact on <span class="math-container">$y$</span> is. These are sometimes called <a href="https://en.wikipedia.org/wiki/Standardized_coefficient#:%7E:text=In%20statistics%2C%20standardized%20%5Bregression%5D,and%20independent%20variables%20are%201." rel="nofollow noreferrer">beta coefficients or standardised coefficients</a>.</p>
<p>A very similar thing happens when you normalise. In this case you will also change the scale of <span class="math-container">$x$</span>, so the way how <span class="math-container">$x$</span> is measured.</p>
<p>Also see <a href="https://u.demog.berkeley.edu/%7Eandrew/teaching/standard_coeff.pdf" rel="nofollow noreferrer">this handout</a>.</p>
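This rescaling can be verified directly for simple regression: regressing y on a standardized x multiplies the slope by sd(x). A quick sketch with made-up numbers (plain Python, using the sample standard deviation as R's scale() does):

```python
def ols_slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

def sd(v):
    """Sample standard deviation (n - 1 denominator)."""
    m = sum(v) / len(v)
    return (sum((x - m) ** 2 for x in v) / (len(v) - 1)) ** 0.5

def standardize(v):
    m, s = sum(v) / len(v), sd(v)
    return [(x - m) / s for x in v]

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]          # roughly y = 2x

raw = ols_slope(xs, ys)                   # change in y per unit of x
std = ols_slope(standardize(xs), ys)      # change in y per sd of x
print(raw, std)                           # std equals raw * sd(xs)
```

So the coefficient itself changes, but only by the fixed factor sd(x); the underlying relationship it describes is the same, which is why the ordering of coefficients can shift when different features are rescaled by different amounts.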
| 426
|
model interpretability
|
Methods to Interpret Physical Relationships Between Inputs and Output in a Trained ANN Model
|
https://datascience.stackexchange.com/questions/129910/methods-to-interpret-physical-relationships-between-inputs-and-output-in-a-train
|
<p>I have trained an artificial neural network (ANN) model using PyTorch that accurately predicts a scalar output based on three input tensors with the following shapes:</p>
<p>Input 1: (3600, 3)
Input 2: (10001, 3)
Input 3: (1,)
Output: (1,) float64</p>
<p>The model architecture includes fully connected layers that take flattened and concatenated inputs from these tensors. The model is trained successfully and gives accurate predictions.</p>
<p>I aim to interpret the physical relationships between these input matrices and the output. To find a mathematical relation, that explains the Physics behind how the three inputs affect the output value.</p>
<ol>
<li><p>What techniques can be used to understand the contribution of different features in the input matrices to the final prediction?</p>
</li>
<li><p>Are there effective visualization methods to illustrate how the input data relates to the output in a physical context?</p>
</li>
</ol>
<p>I'm relatively new to ANN and is there any specific name for this process? That is extracting the relation from a trained model?</p>
<p>I am seeking guidance on methodologies, tools, and best practices to achieve these goals. Any insights or references to relevant resources would be greatly appreciated.</p>
<p>I tried plotting the fc1 layer and here is the result, it is not helping much</p>
<p><a href="https://i.sstatic.net/XWe9f5Tc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWe9f5Tc.png" alt="Bar chart showing the weight of each input (flattened)" /></a></p>
| 427
|
|
model interpretability
|
What is the difference between explainable and interpretable machine learning?
|
https://datascience.stackexchange.com/questions/70164/what-is-the-difference-between-explainable-and-interpretable-machine-learning
|
<p><a href="https://statmodeling.stat.columbia.edu/2018/10/30/explainable-ml-versus-interpretable-ml/" rel="noreferrer">O’Rourke</a> says that explainable ML uses a black box model and explains it afterwards, whereas interpretable ML uses models that are no black boxes.</p>
<p><a href="https://christophm.github.io/interpretable-ml-book/interpretability.html" rel="noreferrer">Christoph Molnar</a> says interpretable ML refers to the degree to which a human can understand the cause of a decision (of a model). He then uses interpretable ML and explainable ML interchangably.</p>
<p><a href="https://en.wikipedia.org/wiki/Explainable_artificial_intelligence" rel="noreferrer">Wikipedia</a> says on the topic "Explainable artificial intelligence" that it refers to AI methods and techniques such that the results of the solution can be understood by human experts. It contrasts with the concept of the black box in machine learning where even their designers cannot explain why the AI arrived at a specific decision. The technical challenge of explaining AI decisions is sometimes known as the interpretability problem.</p>
<p><a href="https://arxiv.org/abs/1702.08608" rel="noreferrer">Doshi-Velez and Kim</a> say that that interpretable machine learning systems provide explanation for their outputs.</p>
<p>Obviously, there are a lot of definitions but they do not totally agree. Ultimatively, what should be explained: The results of the model, the model itself or how the model makes decissions? And what is the difference between interpret and explain?</p>
|
<p>I found <a href="https://www.nature.com/articles/s42256-019-0048-x" rel="noreferrer">this article by Cynthia Rudin</a> which goes a bit more into the difference between the two terms that is in line with your source from O'Rourke.</p>
<p>At the core it is about the time and mechanism of the explanation:</p>
<p>A priori (interpretable) vs. a posteriori (explainable)</p>
<p>I found this quote to be very helpful and inline with my own thoughts (emphasis mine):</p>
<blockquote>
<p>Rather than trying to create <strong>models that are inherently interpretable</strong>, there has been a recent explosion of work on <strong>‘explainable ML’, where a second (post hoc) model is created to explain the first black box model</strong>. This is problematic. <strong>Explanations are often not reliable, and can be misleading</strong>, as we discuss below. If we instead use <strong>models that are inherently interpretable</strong>, they <strong>provide their own explanations</strong>, which are <strong>faithful to what the model actually computes</strong>.</p>
</blockquote>
<p>In short, an interpretable model can output humanly understandable summaries of its calculations that let us understand how it came to a specific conclusion. A human would therefore be able to produce a specific desired outcome by selecting specific inputs.</p>
<p>A "merely" explainable model, however, does not deliver this directly: we need a second model or mode of inspection to create a hypothesis about its mechanism, which helps explain the results but does not allow us to rebuild them deterministically by hand.</p>
| 428
|
model interpretability
|
Interpreting a curve val_loss and loss in keras after training a model
|
https://datascience.stackexchange.com/questions/58506/interpreting-a-curve-val-loss-and-loss-in-keras-after-training-a-model
|
<p><a href="https://i.sstatic.net/W0JuB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W0JuB.png" alt="mu"></a></p>
<p><a href="https://i.sstatic.net/Osp3x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Osp3x.png" alt="ee"></a></p>
<p>I am having trouble understanding the curve <code>val_loss</code> and <code>loss</code> in keras after training my model.</p>
<p>Can anyone help me understand it? Also, is my model overfitting or underfitting?</p>
|
<p>The loss curve shows what the model is trying to reduce. The training procedure tries to achieve the lowest loss possible. The loss is calculated from the number of training examples that the model gets right versus the ones it gets wrong, or from how close it gets to the right answer for regression problems.</p>
<p>The loss curves are going smoothly down, meaning your model improves as it is training, which is good. Your test loss is slightly higher than your training loss, meaning your model is slightly overfitting the training data, but that’s inevitable, it doesn’t seem problematic. All seems ok from this plot.</p>
<p>Now your model is getting an accuracy of 30% or so. Unless you tell us what the model is doing and how you define accuracy, there’s no way of knowing if that’s ok or not.</p>
| 429
|
model interpretability
|
Feature engineering of timestamp for time series analysis
|
https://datascience.stackexchange.com/questions/107215/feature-engineering-of-timestamp-for-time-series-analysis
|
<p>Following a Tensorflow time series analysis <a href="https://www.tensorflow.org/tutorials/structured_data/time_series" rel="nofollow noreferrer">tutorial</a>, I came across a particular way of converting data timestamps into a time-of-day periodic signal, that could help the model interpret the data better than just providing the timestamp.</p>
<pre><code>timestamp_s = date_time.map(pd.Timestamp.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
plt.plot(np.array(df['Day sin'])[:25])
plt.plot(np.array(df['Day cos'])[:25])
plt.xlabel('Time [h]')
plt.title('Time of day signal')
</code></pre>
<p><a href="https://i.sstatic.net/iCzJQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iCzJQ.png" alt="enter image description here" /></a></p>
<p>I am not sure I understand how time of day and day of year periodic structure was extracted from the time stamp, so I would appreciate any pointers regarding this.</p>
<p>Lastly, would a simple normalized <code>time_of_day</code> and <code>day_of_year</code> extra new columns from <code>date_time</code> column suffice?</p>
|
<p><code>timestamp_s = date_time.map(pd.Timestamp.timestamp)</code>
takes a column of timestamps and <a href="https://www.tensorflow.org/tutorials/structured_data/time_series#time" rel="nofollow noreferrer">converts them into seconds-since-1970</a> format (also called unix timestamp).</p>
<p><code>day</code> is set to 86400 seconds. The remainder from dividing <code>timestamp_s</code> by <code>day</code> is the time of day, where 0 is midnight (in the UTC timezone), 43200 is noon, and 86399 is 23:59:59.</p>
<p><code>np.sin()</code> takes input in radians, so that is what the multiplying by <code>2 * np.pi</code> is doing. (There is no need to explicitly take the remainder, of course, because sine is a cyclic function.)</p>
<p>The year calculation is using the same idea, but with the number of seconds in a year. So 0 is Jan 1st, 00:00:00 UTC, 86399 is Jan 1st, 23:59:59 UTC, and er... roughly 31556951 is 23:59:59 UTC on Dec 31st.</p>
<p>Well, kind of. They are using 365.2425 days (the mean Gregorian year) to avoid messing around with Feb 29th and leap years. But it does mean that a given calendar time, e.g. 10am on Dec 25th, does not map to exactly the same number every year.</p>
<p>Another common one would be to use <code>day * 7</code> to see if day of week is a useful predictor. E.g. if the data is supermarket sales figures.</p>
<p>I really like the Fourier transform graph they show in that article: it clearly shows that weekday would not be at all useful. Ah, I've just seen it is temperature data being plotted. That makes sense!</p>
<blockquote>
<p>Lastly, would a simple normalized time_of_day and day_of_year extra new columns from date_time column suffice?</p>
</blockquote>
<p>As in -1.0 for 00:00:00 through to +1.0 for 23:59:59. And -1.0 for Jan 1st through to +1.0 for Dec 31st.</p>
<p>The nice feature the sine waves bring is that you don't get a discontinuity at midnight, or at new year. You could instead do -1.0 for 00:00:00 through to +1.0 for 12:00:00, then back to -1.0 for 23:59:59 (and something similar with Jun 30th). But, at that point, sine is looking both smoother and simpler to code.</p>
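The encoding can be checked in isolation with a few hand-picked timestamps (plain Python, same formulas as the tutorial):

```python
import math

DAY = 24 * 60 * 60   # seconds per day

def time_of_day_features(timestamp_s):
    """Map a unix timestamp onto the unit circle with a one-day period."""
    angle = 2 * math.pi * timestamp_s / DAY
    return math.sin(angle), math.cos(angle)

print(time_of_day_features(0))         # midnight UTC: (0.0, 1.0)
print(time_of_day_features(DAY // 2))  # noon UTC: (~0.0, -1.0)
# 23:59:59 lands right next to midnight on the circle, so there is
# no jump at the day boundary, unlike a plain normalized column
print(time_of_day_features(DAY - 1))
```

Using both sin and cos is what makes each time of day unique: sin alone would map, e.g., 6am and 6pm... no, 3am and 9am, to the same value.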
| 430
|
model interpretability
|
How to perform bootstrap validation on CART decision tree?
|
https://datascience.stackexchange.com/questions/114309/how-to-perform-bootstrap-validation-on-cart-decision-tree
|
<p>I have a relatively small dataset n = 500 for which I am training a CART decision tree.
My dataset has about 30 variables and the outcome has 3 classes.
<a href="https://i.sstatic.net/LbmUp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LbmUp.png" alt="enter image description here" /></a></p>
<p>I am using CART for interpretability purposes, as what I am interested in, is sharing and analyzing the rules generated by CART.</p>
<p>However, I don't know what would be the optimal way to externally validate my CART model while still maintaining the intrinsic interpretability of CART.</p>
<p>The only option I came up with was using 80/20 split which I find relatively weak.
Intuitively, I would like to use bootstrapping which I think would be more robust in terms of justifying the lack of external dataset but I don't think bootstrapping would translate in CART.</p>
<p>The question is open. How would you approach validation (other than the generic 20/80 split) for a CART decision tree model?</p>
<p>As of note, this task is for a potential publication in a medical journal.</p>
<p>Thank you!</p>
| 431
|
|
model interpretability
|
How to compare and interpret results of two neural networks models?
|
https://datascience.stackexchange.com/questions/80564/how-to-compare-and-interpret-results-of-two-neural-networks-models
|
<p>I made two models of neural networks(different architectures), the first one gave me as result:</p>
<ul>
<li><p>Training Loss: 0.4195</p>
</li>
<li><p>Training Acc: 0.8325</p>
</li>
<li><p>Training Recall: 0.7266</p>
</li>
<li><p>Validation Loss: 0.4331</p>
</li>
<li><p>Validation Acc: 0.8483</p>
</li>
<li><p>Validation Recall: 0.7400</p>
</li>
<li><p>Test Acc: 0.7864</p>
</li>
<li><p>Test Recall: 0.7657</p>
</li>
</ul>
<p>The second one:</p>
<ul>
<li><p>Training Loss: 0.3354</p>
</li>
<li><p>Training Acc: 0.8583</p>
</li>
<li><p>Training Recall: 0.7791</p>
</li>
<li><p>Validation Loss: 0.3791</p>
</li>
<li><p>Validation Acc: 0.8493</p>
</li>
<li><p>Validation Recall: 0.7773</p>
</li>
<li><p>Test Acc: 0.8254</p>
</li>
<li><p>Test Recall: 0.7602</p>
</li>
</ul>
<p>The problem there is that although the second model gives a better loss value, the first model gives a better result on the test set, so how can we interpret and compare these results? and does it have a way to test the stability of a model?</p>
<p>NB: the hyper parameters are the same for two models (batch size, learning rate, optimization algorithm) and the same validation and training sets</p>
| 432
|
|
model interpretability
|
Is multicollinearity a problem when interpreting SHAP values from an XGBoost model?
|
https://datascience.stackexchange.com/questions/111255/is-multicollinarity-a-problem-when-interpreting-shap-values-from-an-xgboost-mode
|
<p>I'm using an XGBoost model for multi-class classification and am looking at feature importance using SHAP values. I'm curious whether multicollinearity is a problem for the interpretation of the SHAP values. As far as I know, XGBoost is not affected by multicollinearity, so I assume SHAP won't be affected either?</p>
|
<p>Shapley values are designed to deal with this problem. You might want to have a look <a href="https://christophm.github.io/interpretable-ml-book/shapley.html" rel="nofollow noreferrer">at the literature</a>.</p>
<p>They are based on the idea of a collaborative game, and the goal is to compute each player's contribution to the total game.</p>
<p>Let's say you are playing in the football Champions League final, Real Madrid vs Liverpool, and Madrid only has 3 players (1, 2 and 3) who somehow score 5 goals.</p>
<p>To calculate the Shapley value of each player, you average that player's marginal contribution over the possible coalitions:</p>
<p><span class="math-container">$S_1 = \frac{1}{3}\left( v(\{1,2,3\}) - v(\{2,3\})\right) + \frac{1}{6}\left( v(\{1,2\}) - v(\{2\})\right) + \frac{1}{6}\left( v(\{1,3\}) - v(\{3\})\right)+ \frac{1}{3}\left( v(\{1\}) - v(\emptyset)\right)$</span></p>
<p><span class="math-container">$S_2 = \frac{1}{3}\left( v(\{1,2,3\}) - v(\{1,3\})\right) + \frac{1}{6}\left( v(\{1,2\}) - v(\{1\})\right) + \frac{1}{6}\left( v(\{2,3\}) - v(\{3\})\right)+ \frac{1}{3}\left( v(\{2\}) - v(\emptyset)\right)$</span></p>
<p><span class="math-container">$S_3 = \frac{1}{3}\left( v(\{1,2,3\}) - v(\{1,2\})\right) + \frac{1}{6}\left( v(\{1,3\}) - v(\{1\})\right) + \frac{1}{6}\left( v(\{2,3\}) - v(\{2\})\right)+ \frac{1}{3}\left( v(\{3\}) - v(\emptyset)\right)$</span></p>
<p>Here <span class="math-container">$v$</span> is the value function of a set of players; for Real Madrid, the number of goals scored by that combination of players.</p>
<p>As you see, the theoretical definition encapsulates the dependence between features. The theory will tell you that the sum of the contributions is equal to the prediction <span class="math-container">$S_1 + S_2 + S_3 = 5$</span>.</p>
<p>Let's now see if the Real Madrid players get some high Shapley values next week.</p>
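The formulas above can also be checked mechanically. Here is a small sketch (plain Python; the goal counts for the sub-coalitions are invented) that computes exact Shapley values by averaging marginal contributions over all orderings, which is equivalent to the weighted-sum formulas:

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    over every order in which the full coalition can be assembled."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = v(frozenset(coalition))
            coalition.append(p)
            totals[p] += v(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Invented goal counts for every sub-coalition of players {1, 2, 3}
goals = {
    frozenset(): 0, frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 0,
    frozenset({1, 2}): 3, frozenset({1, 3}): 2, frozenset({2, 3}): 2,
    frozenset({1, 2, 3}): 5,
}
sv = shapley_values([1, 2, 3], lambda s: goals[s])
print(sv)                # per-player contributions
print(sum(sv.values()))  # 5.0: contributions sum to v({1, 2, 3})
```

The efficiency property (the values summing exactly to the total prediction) is what SHAP relies on when it attributes a model's output to its features, correlated or not.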
| 433
|
model interpretability
|
How to interpret the graph representing the fit provided by the ARIMA model?
|
https://datascience.stackexchange.com/questions/53222/how-to-interpret-the-graph-representing-the-fit-provided-by-the-arima-model
|
<p>I'm following this tutorial <a href="https://www.datascience.com/blog/introduction-to-forecasting-with-arima-in-r-learn-data-science-tutorials" rel="nofollow noreferrer">here</a> to build an ARIMA model in R. </p>
<p>I've done a Forecast using a fitted model in R. I specified the forecast horizon h periods ahead for predictions to be made and used the fitted model to generate those predictions. Then I plotted them to see the results and this is what I got:</p>
<p><a href="https://i.sstatic.net/AfCrU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AfCrU.png" alt="enter image description here"></a></p>
<p>The light blue line above is supposed to show the fit provided by the model.</p>
<p>In the Tutorial this is what they got (their Dataset is different from mine):</p>
<p><a href="https://i.sstatic.net/v0r2H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v0r2H.png" alt="enter image description here"></a></p>
<p>In their case it is clear in contrast with mine. Can somebody provide me with an explanation regarding the graph that I got. Does it make any sense? Does it tell something or is what I got completely wrong (knowing that I followed the tutorial as it is)?</p>
<p>Any insights would be much appreciated.</p>
<p>Thank you.</p>
<p>Here's my code : </p>
<pre><code> #Removing Outliers
count_ts = ts(df[, c('qty')])
df$clean_qty = tsclean(count_ts)
df$cnt_ma = ma(df$clean_qty, order=7) # using the clean count with no outliers
df$cnt_ma30 = ma(df$clean_qty, order=30)
#Data Decomposition
count_ma = ts(na.omit(df$cnt_ma), frequency=30)
decomp = stl(count_ma, s.window="periodic")
deseasonal_cnt <- seasadj(decomp)
fit2 = arima(deseasonal_cnt, order=c(1,1,7))
fcast <- forecast(fit2, h=30)
plot(fcast)
</code></pre>
| 434
|
|
model interpretability
|
How can I know how to interpret the output coefficients (`coefs_`) from the model sklearn.svm.LinearSVC()?
|
https://datascience.stackexchange.com/questions/17970/how-can-i-know-how-to-interpret-the-output-coefficients-coefs-from-the-mode
|
<p>I'm following <em>Introduction to Machine Learning with Python: A Guide for Data Scientists</em> by Andreas C. Müller and Sarah Guido, and in Chapter 2 a demonstration of applying <code>LinearSVC()</code> is given. The result of classifying three blobs is shown in this screenshot:</p>
<p><a href="https://i.sstatic.net/U23J4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U23J4.png" alt="enter image description here"></a></p>
<p>The three blobs are obviously correctly classified, as depicted by the colored output.</p>
<p>My question is <strong>how are we supposed to know how to interpret the model fit output</strong> in order to draw the three lines? The output parameters are given by</p>
<pre><code>print(LinearSVC().fit(X,y).coef_)
[[-0.17492286 0.23139933]
[ 0.47621448 -0.06937432]
[-0.18914355 -0.20399596]]
print(LinearSVC().fit(X,y).intercept_)
[-1.07745571 0.13140557 -0.08604799]
</code></pre>
<p>And the authors walk us through how to draw the lines:</p>
<pre><code>from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X,y)
...
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1]) #HOW DO WE KNOW
plt.ylim(-10, 15)
plt.xlim(-10, 8)
plt.show()
</code></pre>
<p>The line of code with the comment is the one that converts our coefficients into a slope/intercept pair for the line:</p>
<pre><code>y = -(coef_0 / coef_1) x - intercept/coef_1
</code></pre>
<p>where the term in front of <code>x</code> is the slope and <code>-intercept/coef_1</code> is the intercept. In the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html" rel="nofollow noreferrer">documentation on LinearSVC</a>, the <code>coef_</code> and <code>intercept_</code> are just called "attributes" but don't point to any indicator that <code>coef_0</code> is the slope and <code>coef_1</code> is the negative of some overall scaling.</p>
<p><strong>How can I look up the interpretation of the output coefficients of this model and others similar to it in Scikit-learn without relying on examples in books and StackOverflow?</strong></p>
|
<p>Here's one (admittedly hard) way.</p>
<p>If you really want to understand the low-level details, you can always work through the source code. For example, we can see that the <code>LinearSVC</code> <code>fit</code> method <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/classes.py#L210" rel="nofollow noreferrer">calls <code>_fit_liblinear</code></a>. That <a href="https://github.com/scikit-learn/scikit-learn/blob/668a24c9e6e1a541bede64819612fdb5e169a341/sklearn/svm/base.py#L884" rel="nofollow noreferrer">calls <code>train_wrap</code> in liblinear</a>, which gets everything ready to <a href="https://github.com/scikit-learn/scikit-learn/blob/4d9a12d175a38f2bcb720389ad2213f71a3d7697/sklearn/svm/liblinear.pyx#L55" rel="nofollow noreferrer">call into the C++ function <code>train</code></a>.</p>
<p>So <a href="https://github.com/scikit-learn/scikit-learn/blob/4d9a12d175a38f2bcb720389ad2213f71a3d7697/sklearn/svm/src/liblinear/linear.cpp#L2364" rel="nofollow noreferrer"><code>train</code> in linear.cpp</a> is where the heavy lifting begins. Note that the <code>w</code> member of the model struct in the train function gets mapped back to <code>coef_</code> in Python.</p>
<p>Once you understand exactly what the underlying <code>train</code> function does, it should be clear exactly what <code>coef_</code> means and why we draw the lines that way. </p>
<p>While this can be a little laborious, once you get used to doing things this way, you will really understand how everything works from top to bottom.</p>
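Short of reading the source, you can also sanity-check the interpretation numerically: every point on a plotted line should make the corresponding decision function w0*x + w1*y + b (approximately) zero. A sketch using the first row of coefficients printed in the question:

```python
# First row of coef_ and intercept_ from the question:
# the one-vs-rest classifier for the first class
w0, w1 = -0.17492286, 0.23139933
b = -1.07745571

def boundary_y(x):
    """Solve w0*x + w1*y + b = 0 for y: the plotted decision boundary."""
    return -(w0 * x + b) / w1

# Every point (x, boundary_y(x)) zeroes the decision function, which is
# exactly what the book's plotting expression computes
for x in (-10.0, 0.0, 10.0):
    y = boundary_y(x)
    print(x, y, w0 * x + w1 * y + b)   # last column is ~0
```

So `coef_` holds one weight per input feature for each class, and the "slope" interpretation is just the algebraic consequence of setting the decision function to zero.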
| 435
|
model interpretability
|
How to interpret continuous variables in a decision tree model?
|
https://datascience.stackexchange.com/questions/23247/how-to-interpret-continuous-variables-in-a-decision-tree-model
|
<p>After fitting a decision tree with some continuous variable, how do I interpret the effect that variable has on the target?</p>
<p>For example I'm predicting target Y. From sklearn random forest or Xgboost I can find out that the feature X is important. How do I determine if feature X's correlation to Y is positive or negative?</p>
|
<p>Calculate the Pearson correlation coefficient or the Spearman rank <a href="https://databasecamp.de/en/statistics/correlation-and-causation" rel="nofollow noreferrer">correlation</a> coefficient between feature X and target Y. The <a href="https://databasecamp.de/en/statistics/correlation-and-causation" rel="nofollow noreferrer">correlation</a> coefficient quantifies the direction and strength of the linear or monotonic relationship between the two variables.</p>
<p>A positive <a href="https://databasecamp.de/en/statistics/correlation-and-causation" rel="nofollow noreferrer">correlation</a> coefficient indicates a positive relationship between feature X and the target Y, while a negative <a href="https://databasecamp.de/en/statistics/correlation-and-causation" rel="nofollow noreferrer">correlation</a> coefficient suggests a negative relationship. The magnitude of the coefficient indicates the strength of the relationship.</p>
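<p>As a minimal sketch of the direction check described above (toy data; all numbers are made up), using only NumPy:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)             # hypothetical feature X
y = 2.0 * x + rng.normal(size=200)   # hypothetical target Y, built to rise with X

# Pearson: strength/direction of the linear relationship
pearson_r = np.corrcoef(x, y)[0, 1]

# Spearman: Pearson correlation of the ranks (monotonic relationship)
ranks_x = x.argsort().argsort()
ranks_y = y.argsort().argsort()
spearman_rho = np.corrcoef(ranks_x, ranks_y)[0, 1]

# Both coefficients come out positive, so X relates positively to Y
```

With real data you would compute these between each important feature and the target; `scipy.stats.pearsonr`/`spearmanr` give the same values plus p-values.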
| 436
|
model interpretability
|
Beginner level: how to interpret LIME and classification result
|
https://datascience.stackexchange.com/questions/93687/beginner-level-how-to-interpret-lime-and-classification-result
|
<p>I am new to the concept of model interpretability using the LIME method. I am following the tutorial <a href="https://www.mathworks.com/help/deeplearning/ug/investigate-spectrogram-classifications-using-lime.html" rel="nofollow noreferrer">LIME for spectrogram classification</a>. I am finding it hard to understand the color coding -- before using LIME the important features were already visible. After applying LIME there is no way to see the colors that highlight the important features. The last set of images below the section "Compute LIME" shows the plot of spectrograms before and after LIME -- to me they look the same. Another tutorial, <a href="https://www.mathworks.com/help/deeplearning/ref/imagelime.html" rel="nofollow noreferrer">imageLime</a>, shows how to apply LIME on natural images. That one is OK to understand, since the overlay of LIME on the dog picture is clear. But for synthetic or non-natural images such as the spectrogram, I cannot understand the overlay part. Can somebody help me make sense of the color code and the overlay? Below is a spectrogram image <code>X</code> that I generated using some random data, shown in Fig1. Fig2 is obtained by overlaying the LIME map image, and Fig3 is obtained by applying segmentation. Fig4 is the result of the overlay of superpixels. I cannot tell from Fig2 whether all of the image pixels are important or not, since the color seems to cover the whole image. Similarly, what does the segmentation in Fig3 and Fig4 tell? Also, any suggestion on how to do segmentation or apply LIME in a better way would be very helpful.</p>
<pre><code>figure
imagesc(X)
colormap gray %fig1
hold on %fig2
label = categorical(Y(900)); %choosing the 900th image as the query
scoreMap = imageLIME(net,X,label);
imagesc(scoreMap,'AlphaData',0.5)
colormap parula
colorbar
colormap gray %fig3
numSuperPixel=75;
[L,N] = superpixels(X,numSuperPixel);
figure
BW = boundarymask(L);
imshow(imoverlay(X,BW,'cyan'),'InitialMagnification',100) %fig4
</code></pre>
<p><a href="https://i.sstatic.net/SzJiq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SzJiq.png" alt="output" /></a></p>
| 437
|
|
model interpretability
|
correlated variables & model performance: optimal trade-off
|
https://datascience.stackexchange.com/questions/93899/correlated-variables-model-performance-optimal-trade-off
|
<p>On the back of this topic (<a href="https://datascience.stackexchange.com/questions/36404/when-to-remove-correlated-variables">When to remove correlated variables</a>), I feel a follow-up is needed, with the focus here being on raw performance and the risk of distribution shift.</p>
<p>Assuming little to medium interpretability is needed, we have a scenario where even medium-ish gains in predictive performance are very welcome, but in order to achieve them we have to step up the number of moderately to strongly correlated variables (<code>0.6-0.8</code> range, linear correlation). This provides benefits (using <code>R2</code> here) in the order of 5 to 10 percentage points, and these gains absolutely do make a difference for the problem at hand.</p>
<p>But I am concerned that I am implicitly tethering the model to the same phenomenon (therefore making it less robust), especially when distribution shifts are on the horizon for the variables upon which the model relies the most.</p>
<p>So, do you have any approaches or even rules of thumb that are generally <em>applicable</em> in these scenarios?</p>
| 438
|
|
model interpretability
|
For a regression model, can you transform all your features to linear to make a better prediction?
|
https://datascience.stackexchange.com/questions/56874/for-a-regression-model-can-you-transform-all-your-features-to-linear-to-make-a
|
<p>I was thinking: would it be a good approach to check your features one by one (assuming you have a manageable amount of them), see the relationship each has with your target variable, and if a feature has a non-linear relationship, transform it using the appropriate function for its case to make it linear? In my mind, if you do this you are guaranteed to have a better linear model, and you are also able to perform hypothesis testing on each feature to see its relevance, giving you the chance to perform some feature selection as well.</p>
<p>I know that the interpretability of the model will be thrown out of the window, but the model will give a much better performance. Basically you could potentially end up with a model with only engineered features (assuming that all of them have a non-linear relationship).</p>
<p>Would this approach be acceptable and it is worth exploring?</p>
|
<p>Your idea is good, but you are not the first with this idea.</p>
<p>You can use <a href="https://stat.ethz.ch/R-manual/R-devel/library/mgcv/html/gam.html" rel="nofollow noreferrer">Generalized Additive Models (GAM)</a> with regression splines to check and/or add non-linearity in a linear regression setup. There is a clear advantage over looking at just descriptive figures one-by-one (with manual feature generation), since you can estimate a whole model with extreme flexibility.</p>
<p>Alternatively, you can simply do a linear regression with a <a href="https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html" rel="nofollow noreferrer">lasso penalty</a>. Add polynomials to your <span class="math-container">$X$</span>, and let the lasso „shrink“ irrelevant features to zero.</p>
<p>The book „Introduction to Statistical Learning“ covers these topics in <a href="https://github.com/ryanhnelson/introduction-statistical-learning/blob/master/ISLR%20Sixth%20Printing.pdf" rel="nofollow noreferrer">Sections 6,7</a>.</p>
<p>BTW: even with polynomials, your model can retain interpretability if you care for this. Polys are relatively easy to interpret. However, under a GAM approach, interpretation is a little more difficult.</p>
<p>Maybe as a note: your idea is to transform the features themselves so that each one relates linearly to the target, which requires finding the right non-linear transformation for every single feature. The approaches proposed above instead aim at making your (linear) model flexible enough to cope with the non-linearity in the data.</p>
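<p>A small sketch of the lasso-with-polynomials idea from above (synthetic toy data; scikit-learn assumed available, and all names and numbers are invented):</p>

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(42)
X = rng.uniform(-2, 2, size=(300, 3))
# hypothetical target: quadratic in x0, linear in x1, x2 is pure noise
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=300)

# expand to degree-2 polynomials, then let the lasso shrink useless terms
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      Lasso(alpha=0.05))
model.fit(X, y)
r2 = model.score(X, y)  # the expanded linear model captures the non-linearity
```

A plain linear regression on the raw <code>X</code> would miss the quadratic term entirely; the polynomial expansion plus lasso finds it automatically, without inspecting each feature by hand.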
| 439
|
model interpretability
|
When is scaling and centering important?
|
https://datascience.stackexchange.com/questions/112340/when-is-scaling-and-centering-important
|
<p>There are some models such as PCA or SVM where scaling and centering of training data is essential.</p>
<p>There are some models, mostly tree-based where scaling and centering is not required at all.</p>
<p>I don't think some of the linear models like linear or logistic regression need it. But I could be wrong. What are the implications of normalizing data for these models?</p>
<p>In general, is there a mental model or framework that I can use to determine whether scaling and centering is needed? How will it affect any kind of interpretability?</p>
| 440
|
|
model interpretability
|
Is there an R package for Locally Interpretable Model Agnostic Explanations?
|
https://datascience.stackexchange.com/questions/17791/is-there-an-r-package-for-locally-interpretable-model-agnostic-explanations
|
<p>One of the researchers, Marco Ribeiro, who developed this method of explaining how black box models make their decisions has developed a Python implementation of the algorithm available through Github, but has anyone developed a R package? If so, can you report on using it?</p>
|
<p>I think you're talking about the <code>lime</code> Python package. No, there is <strong>no</strong> R port for the package. The implementation of the localized model requires enhancements to the existing machine-learning code (explained in the paper), so a new implementation for R would be very time-consuming.</p>
<p>You may want to take a look at <a href="https://stackoverflow.com/q/11716923/9214357">this</a> for interfacing Python in R.</p>
<p>My suggestion is to stick with Python. The package is only useful for highly complicated non-linear models, for which Python offers better support than R.</p>
| 441
|
model interpretability
|
How to interpret importance of random forest model, Mean Decrease Accuracy and Mean Decrease Gini?
|
https://datascience.stackexchange.com/questions/110184/how-to-interpret-importance-of-random-forest-model-mean-decrease-accuracy-and-m
|
<p><a href="https://i.sstatic.net/eeSVj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eeSVj.png" alt="Importance" /></a></p>
<p>A random forest model outputs the following importance values. How do I interpret them for feature selection? If it's the mean decrease in accuracy, does that mean that by removing them from the model the accuracy should increase?</p>
|
<p>I'm not sure which software you're using so I don't know the details, but generally it's simple: the highest values indicate the features which contribute the most to the target.</p>
<p>In particular, the mean decrease in accuracy shows how much the accuracy decreases <em>when removing this feature</em>. Thus again a high value (e.g. <code>emotionality</code> in your example) indicates an important feature for predicting the target.</p>
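<p>The "decrease when removing (permuting) a feature" idea can be sketched directly. This is a hypothetical toy example in Python, not the exact algorithm your software uses:</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)   # only feature 0 carries signal

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
base = clf.score(X, y)

drops = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # destroy feature j's information
    drops.append(base - clf.score(Xp, y))  # decrease in accuracy (one pass)

# drops[0] is by far the largest: feature 0 is the important one
```

Real implementations average the drop over many permutations (and, in R's <code>randomForest</code>, over the out-of-bag samples of each tree), but the interpretation is the same: a large drop means an important feature.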
| 442
|
model interpretability
|
Adding recommendations to the output of a classification model
|
https://datascience.stackexchange.com/questions/41259/adding-recommendations-to-the-output-of-a-classification-model
|
<p>I have built a binary classification model using:</p>
<ul>
<li>logit</li>
<li>decision trees</li>
<li>random forest</li>
<li>bagging classifier</li>
<li>gradientboost</li>
<li>xgboost</li>
<li>adaboost</li>
</ul>
<p>I have evaluated the above models and chose xgboost based on training/test and validation metrics (accuracy, prediction, recall, f1 and AUC).</p>
<p>I want to now productionalize it and share the output with the business. The output would basically have a list of items with the predicted class and that could be filtered based on business needs. </p>
<p>However, Instead of simply giving the business the predicted classes, I want to add insights/recommendations as to why a specific item was predicted with class X and how you could go about working on the item to change its class from say X to Y. </p>
<p>How do I go about this? I thought of using feature importance, but my input data shape is [800,000 * 1,050] and I am not sure if it would the best way to proceed. </p>
<p>Are there any existing industry standard methodologies that can add interpretability to such models and convert them from a black box models to prescriptive models?</p>
|
<p>Decision trees can be plotted in Python; they are among the most easily visualized machine learning models. You can see a link here: <a href="https://medium.com/@rnbrown/creating-and-visualizing-decision-trees-with-python-f8e8fa394176" rel="nofollow noreferrer">https://medium.com/@rnbrown/creating-and-visualizing-decision-trees-with-python-f8e8fa394176</a>.</p>
<p>For everything else you need to do data exploration again. Try to visualize in bar plots: for example, if you have classes 'X' and 'Y', you can show what the confidence interval of a feature is when the prediction is class 'X' and what it is when the prediction is class 'Y'.</p>
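<p>As a sketch of what such a tree visualization gives you (using the built-in iris data purely for illustration; <code>plot_tree</code> draws the same structure graphically):</p>

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# a human-readable version of the learned split rules
rules = export_text(tree, feature_names=["sepal_len", "sepal_wid",
                                         "petal_len", "petal_wid"])
print(rules)
```

Each path from root to leaf is a rule like "petal width ≤ 0.8 → class 0", which is exactly the kind of explanation you can hand to the business alongside a prediction.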
| 443
|
model interpretability
|
Should normalization be applied on interaction feature
|
https://datascience.stackexchange.com/questions/130734/should-normalization-be-applied-on-interaction-feature
|
<p>I am working with interaction features in my machine learning model, where I create new features by multiplying a numeric variable with an encoded categorical feature. My question is:</p>
<p>Should normalization be applied to these interaction terms? If yes, so should it be applied <strong>before interaction features are created</strong> or <strong>after the interaction features are created</strong> ?</p>
<p>In my case, I am using <strong>Neural Networks</strong> and <strong>GBM</strong> (Gradient Boosting Machines), and I’m wondering whether normalization of interaction terms will affect model performance. By normalization, I mean <strong>scaling to a 0-1 range</strong>.</p>
<p>Challenges when creating interaction terms:</p>
<ol>
<li><strong>Numerical Features Normalized Before Interaction</strong>:</li>
</ol>
<p>While this prevents the dominance of large numerical scales, it can
distort the relative magnitude of the interaction term, especially
when the categorical encoding carries meaningful differences (e.g.,
weighted target encoding). This approach may also introduce encoded
categorical bias, where the interaction term does not accurately
reflect the true relationship between the categorical and numerical
features.</p>
<ol start="2">
<li><strong>Interaction Term Normalized After Creation</strong>:</li>
</ol>
<p>This approach equalizes the scale of the interaction terms but can introduce a loss of interpretability for the interaction term and cause an uneven scale impact. In this case, the encoded categorical feature’s signal might get diluted, which could lead to potential bias in the model’s understanding of the relationships between the features.</p>
<p>How do you handle these challenges in your workflow?</p>
<ul>
<li>Do you always normalize numerical features <strong>before</strong> creating
interactions?</li>
<li>How do you preserve interpretability when normalizing
interaction terms?</li>
<li>Any tips or best practices to balance scale, bias,
and interpretability in such cases?</li>
</ul>
<p>For example, if I multiply a normalized numeric feature with a one-hot encoded categorical variable, does that change the original relationship between the numeric feature and the category, or does it still capture the intended interaction?</p>
<p>What did I try:</p>
<p>I have experimented with normalizing the numeric feature before creating the interaction term. Specifically, I normalized the numeric variable and then multiplied it with an encoded categorical feature. I also tried creating interaction features without normalizing the numeric variable first to compare the two approaches.</p>
<p>What was I expecting:</p>
<p>I was hoping to understand if normalizing the numeric feature before creating the interaction term would affect the model's ability to capture the intended relationships between the numeric and categorical features. I was also curious whether the meaning of the interaction term would be preserved or distorted due to the normalization of the numeric feature. I expected to learn whether normalization would cause the interaction term to lose its original scale and significance or if it would be beneficial for model convergence and performance.</p>
|
<p>What is really important: you should use the same scale factors for the data on which you <strong>train your model</strong> and the data on which you <strong>make predictions</strong> with this trained model.</p>
<p>If these two scalings are consistent, it's no big deal if some of your data does not lie exactly in the [0, 1] range or the [0, 10] range or whatever.</p>
<p>You should shift and divide your data with the same coefficients <code>A</code> and <code>B</code> for the same feature in the train data and the predict data; that's all you really need to do. Or you can fit a scaler (from Scikit-Learn, for example) on your train data and apply this fitted scaler to your predict data. That's it. All the rest is not so important. Interaction features or not, <strong>use the same scaling for train and test</strong>, and you are good.</p>
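<p>A tiny sketch of the "fit the scale factors on train, reuse them on predict" point (made-up numbers):</p>

```python
import numpy as np

train = np.array([[1.0], [5.0], [9.0]])
test = np.array([[3.0], [11.0]])   # new data may fall outside the train range

# fit min-max coefficients on the TRAIN data only
lo, hi = train.min(axis=0), train.max(axis=0)

def scale(X):
    return (X - lo) / (hi - lo)

train_s = scale(train)   # lands exactly in [0, 1]
test_s = scale(test)     # [[0.25], [1.25]] -- partly outside [0, 1], and that's fine
```

This is what <code>MinMaxScaler().fit(train).transform(test)</code> does under the hood: the same shift and divide for both datasets.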
| 444
|
model interpretability
|
Feature reduction convenience
|
https://datascience.stackexchange.com/questions/17244/feature-reduction-convenience
|
<p>In the field of machine learning, I'm wondering about the interest of applying feature selection techniques.</p>
<p>I mean, I often read articles or lectures about how to reduce the number of features (dimensionality reduction, PCA), how to select the best features (feature selection), etc.</p>
<p>I'm not sure of the main purpose of this:</p>
<ul>
<li>Does feature reduction techniques always improve accuracy of the learned model?</li>
<li>Or is it just a computational cost purpose?</li>
</ul>
<p>I would like to understand when it is necessary to reduce the number of features and when it is not, in order to improve interpretability or accuracy.
Thanks!</p>
|
<p>Feature Selection (FS) methods are focused on specializing the data as much as possible to find accurate models for your problem. Some of the main issues that drive the need for FS are:</p>
<ul>
<li>Curse of dimensionality: Most algorithms struggle to grasp the relevant characteristics of the data for a specific prediction task when the number of dimensions (features) is high and the number of examples is not sufficiently big. <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality" rel="nofollow noreferrer">Check here for a more detailed explanation</a></li>
<li>Correlation between variables: Typically, the presence of highly correlated pairs of variables can cause ML algorithms to pay too much attention to a particular effect that is "over-represented". For this reason many FS methods address the reduction of this correlation. Reducing the number of correlated variables often increases the model's predictive power.</li>
<li>Latent features: Although specific variables might be highly expressive for your problem, a lot of power can be gained by finding "latent features", such as linear and non-linear combinations of the original variables. Here there are hundreds of approaches, from PCA to neural networks. Independently of the approach (and its statistical assumptions), the idea is to create new features that condense the information of a bigger set of features into a smaller one. Hopefully the new set of features is more representative and, being smaller, can be more easily learnt.</li>
</ul>
<p>Feature selection does not necessarily improve the predictive quality of the model. Reducing or transforming the features might lead to a loss of information and hence a less accurate model. It is an open and complex field of research. However, in many cases it becomes quite useful. It will depend on how good and distinct your original features are at describing the target variable. If you look into bioinformatics, you'll see people dealing with thousands or even millions of features while having only hundreds of examples. Here feature selection becomes increasingly relevant.</p>
<p>PS: Most commonly I've seen the term "feature extraction" used for the creation of compound features, as in most examples I've mentioned, while the term "feature selection" is used for actually removing specific features from the dataset.</p>
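<p>The "latent features" point above can be sketched with PCA on synthetic data (all numbers invented): ten correlated observed features that really come from two hidden factors.</p>

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))           # 2 true latent factors
X = latent @ rng.normal(size=(2, 10))        # observed as 10 correlated features
X += 0.01 * rng.normal(size=X.shape)         # a little measurement noise

pca = PCA(n_components=2).fit(X)
explained = pca.explained_variance_ratio_.sum()
# nearly all the variance of the 10 features is condensed into 2 components
```

Training on the two components instead of the ten raw features gives a much easier learning problem with essentially no information lost, which is exactly the motivation described above.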
| 445
|
model interpretability
|
How to tell a boosting model that 2 features are related and should not be interpreted stand-alone?
|
https://datascience.stackexchange.com/questions/61125/how-to-tell-a-boosting-model-that-2-features-are-related-and-should-not-be-inter
|
<p>I am using XGBoost for a machine learning model that learns from tabular data.</p>
<p>XGBoost uses boosting method on decision trees. When I look at the decision-making logic of decision trees, I notice the logic is based on 1 feature at one time. In real life, certain multiple features are related to each other. </p>
<p>Currently, when I feed data to the model, I simply feed all the features to it without telling the model how certain features are related to each other. </p>
<p>Let me describe a hypothetical example to be clearer. Suppose I have 2 features - gender and length of hair. In this hypothetical problem, I know from my domain knowledge that if gender is female, length of hair matters in determining the outcome. If gender is male, length of hair is irrelevant. How do I tell the machine learning model this valuable piece of information so that the model can learn better?</p>
<p>I am using XGBoost on python 3.7</p>
|
<p>I am going to talk about some ways you could do it later but first I want to talk about whether you should!</p>
<p>If the relation that you describe exists, XGB will be able to learn and detect it! There is no real benefit in "hard-coding" a rule into the algorithm: it won't speed up the training, it won't improve accuracy, etc. Simply put, the strength of ML algorithms is that they are able to detect exactly these relationships and model them in the best possible way.</p>
<p>Now if you still insist that this is something that must be done, you can. The easiest way to achieve this would be feature engineering:</p>
<ol>
<li><p>Introduce NAs -simply leave out hair length of male respondents and fill with NA</p></li>
<li><p>Create interaction factors - instead of having hair length and gender as a simple variables you could also code it in a way that represents the known interaction like this:</p></li>
</ol>
<pre><code>gender_hair = ["male", "female_short", "female_medium", "female_long"]  # example factor levels
</code></pre>
<p>But again if you compare models with those engineered features to a simpler model you will see no benefit I'd wager.</p>
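<p>For completeness, both feature-engineering options could be sketched with pandas like this (column names and bin edges are made up):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"gender": ["male", "female", "female"],
                   "hair_length": [5.0, 12.0, 30.0]})

# Option 1: hair length becomes NA for males
df["hair_length_f"] = df["hair_length"].where(df["gender"] == "female")

# Option 2: one combined interaction factor
bins = pd.cut(df["hair_length"], [0, 10, 20, np.inf],
              labels=["short", "medium", "long"])
df["gender_hair"] = np.where(df["gender"] == "male", "male",
                             "female_" + bins.astype(str))
```

The combined factor (or a one-hot encoding of it) can then be fed to XGBoost in place of the two separate columns.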
| 446
|
model interpretability
|
When should I NOT scale features
|
https://datascience.stackexchange.com/questions/64302/when-should-i-not-scale-features
|
<p>Feature scaling can be crucially necessary when using distance-, variance- or gradient-based methods (KNN, PCA, neural networks...), because depending on the case, it can improve the quality of results or the computational effort.</p>
<p>In some cases (tree-based models in particular), scaling has no impact on the performance.</p>
<p>There are many discussions out there about when one should scale their features, and why they should do it. Apart from interpretability (which is not a problem as long as the scaling can be reverted), I'm wondering about the opposite: <strong>are there cases when scaling is a bad idea, i.e. can have a negative impact on model quality?</strong> or less importantly, on computation time?</p>
|
<p>Scaling often assumes you know the <strong>min/max</strong> or <strong>mean/standard deviation</strong>, so directly scaling features where this information is not really known can be a bad idea.</p>
<p>For example, <strong>clipped signals</strong> may hide this info, so scaling them can have a negative result because you may distort its true values.</p>
<p>Below is an image of 1) a signal that can be scaled, and 2) a <a href="https://mackie.com/blog/what-clipping" rel="noreferrer">clipped signal</a> that scaling should not be done.</p>
<p><a href="https://i.sstatic.net/dSEZf.png" rel="noreferrer"><img src="https://i.sstatic.net/dSEZf.png" alt="https://mackie.com/blog/what-clipping"></a></p>
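<p>A quick numeric illustration of the clipping problem (synthetic signal): min-max scaling the clipped recording uses the wrong scale factor, because the observed extremes are not the true ones.</p>

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
signal = 1.5 * np.sin(2 * np.pi * 5 * t)   # true amplitude is 1.5
clipped = np.clip(signal, -1.0, 1.0)       # the recorder saturated at +/- 1

observed_max = np.abs(clipped).max()   # 1.0 -- what a scaler would see
true_max = np.abs(signal).max()        # ~1.5 -- hidden by the clipping
```

Any scaler fit on <code>clipped</code> would normalize by 1.0 instead of 1.5, distorting every unclipped sample relative to the true signal.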
| 447
|
model interpretability
|
Can I include a quotient as dependent variable and independent variables with same denominator in a linear model? How do we interpret such models?
|
https://datascience.stackexchange.com/questions/107980/can-i-include-a-quotient-as-dependent-variable-and-independent-variables-with-sa
|
<p>I want to create a model for a food processing plant where my dependent variable is electricity consumption (kWh) per kg. The plant produces different food items with varying electricity consumption, and I'm interested in knowing the impact of the proportion of each food item on consumption per kg. So my model is</p>
<pre><code> consumption per kg produced (Kwhr/kg) = alpha0 +alpha1(Food Item A/Total Production) +
alpha2(Food Item B/Total Production)+....+Other variables
</code></pre>
<p>Is it correct to frame the question like this? I have Total Production on both sides of the equation as the denominator. What is the right way to approach this problem? I would like to hear your thoughts on this. Any help is highly appreciated.</p>
|
<p>Energy is additive but energy-per-kg is not.</p>
<p>Assume</p>
<p><span class="math-container">$p_i$</span> be the energy consumption rate for food <span class="math-container">$i$</span> (<span class="math-container">$\mathrm{kJs^{-1}} = \mathrm{kW}$</span>),</p>
<p><span class="math-container">$t_i$</span> be the number of seconds for producing 1 kg of food <span class="math-container">$i$</span> (<span class="math-container">$\mathrm{skg^{-1}}$</span>)</p>
<p><span class="math-container">$w_i$</span> be the amount of food <span class="math-container">$i$</span> produced (<span class="math-container">$\mathrm{kg}$</span>)</p>
<p>so <span class="math-container">$p_it_iw_i$</span> will give us the total energy consumption for food <span class="math-container">$i$</span></p>
<p>Consequently,</p>
<p><span class="math-container">$E = \sum_i p_it_iw_i + C $</span></p>
<p><span class="math-container">$E$</span> is the total energy consumed, and <span class="math-container">$C$</span> encapsulate other energy consumption independent of the food.</p>
<p>For energy consumption per kg,</p>
<p><span class="math-container">$E' = \frac{\sum_i p_it_iw_i + C}{\sum_i w_i} $</span></p>
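<p>A numeric sanity check of the two formulas above (all numbers invented): total energy is additive across foods, but energy-per-kg is a ratio of the total, not a sum of per-food terms.</p>

```python
# p_i: kW per food, t_i: seconds per kg, w_i: kg produced, C: base load (kJ)
p = [2.0, 3.0]
t = [10.0, 20.0]
w = [100.0, 50.0]
C = 500.0

E = sum(pi * ti * wi for pi, ti, wi in zip(p, t, w)) + C   # total energy (kJ)
E_per_kg = E / sum(w)                                      # kJ per kg
```

Note that <code>E_per_kg</code> depends on <em>all</em> the production weights at once, which is why regressing it on per-food proportions with the same denominator on both sides needs care.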
| 448
|
model interpretability
|
Feature Importance based on a Logistic Regression Model
|
https://datascience.stackexchange.com/questions/63045/feature-importance-based-on-a-logistic-regression-model
|
<p>I was training a Logistic Regression model over a fairly large dataset with ~1000 columns.</p>
<p>I did apply scaling of features using MinMaxScaler. </p>
<p>I was wondering how to interpret the coefficients generated by the model and find something like feature importance in a Tree based model. </p>
<p>Should I re-scale the coefficients back to original scale to interpret the model properly?</p>
<p>It will be great if someone can shed some light on how to interpret the Logistic Regression coefficients correctly.</p>
|
<p>No, you do not need to re-scale the coefficients. To the contrary - if they are scaled, you can use them as a way to compare feature importance.</p>
<p>Let's assume that our logistic regression model has coefficients {<span class="math-container">$ a_i$</span>}, relating to the different (scaled) variables {<span class="math-container">$x_i$</span>}. <br>
A change of <span class="math-container">$\Delta x_i $</span> in the variable <span class="math-container">$ x_i $</span> will result in an increase (or decrease, if <span class="math-container">$a_i$</span> is negative) of <span class="math-container">$ a_i \Delta x_i $</span> in <span class="math-container">$ log({\hat p_i \over {1-\hat p_i}}) $</span>, i.e. the <a href="https://en.wikipedia.org/wiki/Logit" rel="nofollow noreferrer">logit</a> function of <span class="math-container">$ \hat p_i $</span>, where <span class="math-container">$ \hat p_i $</span> is the predicted probability that the i-th example is in the positive class.</p>
<p>So, if the variables are scaled, you can say that if <span class="math-container">$ a_i$</span> is larger, then <span class="math-container">$x_i$</span> is more important in the model.</p>
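<p>A toy sketch of this point (synthetic data; scikit-learn assumed): after min-max scaling, the magnitudes of the coefficients rank the features.</p>

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
# hypothetical truth: feature 0 matters most, feature 2 not at all
logit = 3.0 * X[:, 0] + 1.0 * X[:, 1]
y = (logit + rng.logistic(size=2000) > 0).astype(int)

Xs = MinMaxScaler().fit_transform(X)
clf = LogisticRegression().fit(Xs, y)
importance = np.abs(clf.coef_[0])
# importance ranks the features: 0 above 1 above 2
```

The sign of each coefficient additionally tells you the direction of the effect on the log-odds, which tree-based importances do not give you.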
| 449
|
neural network architectures
|
Is there data available about successful neural network architectures?
|
https://ai.stackexchange.com/questions/14303/is-there-data-available-about-successful-neural-network-architectures
|
<p>I am curious to if there is data available for MLP architectures in use today, their initial architecture, the steps that were taken to improve the architecture to an acceptable state and what the problem is the neural network aimed to solve.</p>
<p>For example, what the initial architecture (number of hidden layers, number of neurons) was for an MLP in a CNN, the steps taken to optimize the architecture (adding more layers and reducing nodes, changing activation functions) and the results each step produced (i.e. increased or decreased error), and what problem the CNN tried to solve (differentiation of human faces, object detection intended for self-driving cars, etc.)</p>
<p>Of course I used a CNN as an example, but I am referring to data for any MLP architecture, whether in plain MLPs or in deep learning architectures such as RNNs, CNNs and more. I am focused on the MLP architecture mostly.</p>
<p>If there is not how do you think one can accumulate this data?</p>
| 0
|
|
neural network architectures
|
Can most of the basic machine learning models be easily represented as simple neural network architectures?
|
https://ai.stackexchange.com/questions/24484/can-most-of-the-basic-machine-learning-models-be-easily-represented-as-simple-ne
|
<p>I am currently studying the textbook <em>Neural Networks and Deep Learning</em> by Charu C. Aggarwal. In chapter <strong>1.2.1 Single Computational Layer: The Perceptron</strong>, the author says the following:</p>
<blockquote>
<p>Different choices of activation functions can be used to simulate different types of models used in machine learning, like <em>least-squares regression with numeric targets</em>, the <em>support vector machine</em>, or a <em>logistic regression classifier</em>. Most of the basic machine learning models can be easily represented as simple neural network architectures.</p>
</blockquote>
<p>I remember reading something about it being mathematically proven that neural networks can approximate any function, and therefore any machine learning method, or something along these lines. Am I remembering this correctly? Would someone please clarify my thoughts?</p>
|
<p>I think the author refers to both different choices of activation function and of loss. It is explained in more detail in chapter 2; in particular, section 2.3 is illustrative of this point.</p>
<p>I don't think there is a relation between this argument and universal approximation theorems, which state that certain classes of neural networks can approximate any <em>function</em> in certain domains, rather than any <em>learning algorithm</em>.</p>
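<p>To illustrate the first point: a single unit with a sigmoid activation trained on the log loss <em>is</em> logistic regression. A minimal NumPy sketch on toy data (everything here is made up for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable toy labels

# one "neuron": sigmoid(w.x + b), trained by gradient descent on the log loss
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # log-loss gradient step
    b -= 0.5 * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == y).mean()   # near-perfect on this separable toy set
```

Swapping the activation/loss pair (identity + squared error, or identity + hinge loss) turns the same single-layer unit into least-squares regression or an SVM-style classifier, which is exactly the correspondence the textbook describes.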
| 1
|
neural network architectures
|
Why are these same neural network architecture giving different results?
|
https://ai.stackexchange.com/questions/22467/why-are-these-same-neural-network-architecture-giving-different-results
|
<p>I tried the first neural network architecture and then the second one, and, keeping all other variables constant, I am getting better results with the second architecture. Why are these identical neural network architectures giving different results? Or am I making some mistake?</p>
<p><strong>First one:</strong></p>
<pre><code>def __init__(self, state_size, action_size, seed, hidden_advantage=[512, 512],
             hidden_state_value=[512, 512]):
    super(DuelingQNetwork, self).__init__()
    self.seed = torch.manual_seed(seed)
    hidden_layers = [state_size] + hidden_advantage
    self.adv_network = nn.Sequential(nn.Linear(hidden_layers[0], hidden_layers[1]),
                                     nn.ReLU(),
                                     nn.Linear(hidden_layers[1], hidden_layers[2]),
                                     nn.ReLU(),
                                     nn.Linear(hidden_layers[2], action_size))
    hidden_layers = [state_size] + hidden_state_value
    self.val_network = nn.Sequential(nn.Linear(hidden_layers[0], hidden_layers[1]),
                                     nn.ReLU(),
                                     nn.Linear(hidden_layers[1], hidden_layers[2]),
                                     nn.ReLU(),
                                     nn.Linear(hidden_layers[2], 1))

def forward(self, state):
    """Build a network that maps state -> action values."""
    # Perform a feed-forward pass through the networks
    advantage = self.adv_network(state)
    value = self.val_network(state)
    return advantage.sub_(advantage.mean()).add_(value)
</code></pre>
<p><strong>Second one:</strong></p>
<pre><code>def __init__(self, state_size, action_size, seed, hidden_advantage=[512, 512],
hidden_state_value=[512,512]):
super(DuelingQNetwork, self).__init__()
self.seed = torch.manual_seed(seed)
hidden_layers = [state_size] + hidden_advantage
advantage_layers = OrderedDict()
for idx, (hl_in, hl_out) in enumerate(zip(hidden_layers[:-1],hidden_layers[1:])):
advantage_layers['adv_fc_'+str(idx)] = nn.Linear(hl_in, hl_out)
advantage_layers['adv_activation_'+str(idx)] = nn.ReLU()
advantage_layers['adv_output'] = nn.Linear(hidden_layers[-1], action_size)
self.network_advantage = nn.Sequential(advantage_layers)
value_layers = OrderedDict()
hidden_layers = [state_size] + hidden_state_value
# Iterate over the parameters to create the value network
for idx, (hl_in, hl_out) in enumerate(zip(hidden_layers[:-1],hidden_layers[1:])):
# Add a linear layer
value_layers['val_fc_'+str(idx)] = nn.Linear(hl_in, hl_out)
# Add an activation function
value_layers['val_activation_'+str(idx)] = nn.ReLU()
# Create the output layer for the value network
value_layers['val_output'] = nn.Linear(hidden_layers[-1], 1)
# Create the value network
self.network_value = nn.Sequential(value_layers)
def forward(self, state):
"""Build a network that maps state -> action values."""
# Perform a feed-forward pass through the networks
advantage = self.network_advantage(state)
value = self.network_value(state)
return advantage.sub_(advantage.mean()).add_(value)
</code></pre>
| 2
|
|
neural network architectures
|
Image-in image-out neural network architectures
|
https://ai.stackexchange.com/questions/34685/image-in-image-out-neural-network-architectures
|
<p>With an RGB image of a paper sheet with text, I want to obtain an output image which is cropped and deskewed. Example of input:</p>
<p><a href="https://i.sstatic.net/l64Kn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l64Kn.png" alt="enter image description here" /></a></p>
<p>I have tried non-AI tools (such as <code>openCV.findContours</code>) to find the 4 corners of the sheet, but it's not very robust in some lighting conditions, or if there are other elements on the photo.</p>
<p>So I see two options:</p>
<ul>
<li><p>a NN with <code>input=image, output=image</code>, that does <strong>everything</strong> (including the deskewing, and even also the brightness adjustment). I'll just train it with thousands of images.</p>
</li>
<li><p>a NN with <code>input=image, output=coordinates_of_4_corners</code>. Then I'll do the cropping + deskewing with a homographic transform, and brightness adjustment with standard non-AI tools</p>
</li>
</ul>
<p>Which approach would you use?</p>
<p><strong>More generally what kind of architecture of neural network would you use in the general case <code>input=image, output=image</code>?</strong></p>
<p>Is approach #2, for which input=image, output=coordinates possible? Or is there another segmentation method you would use here?</p>
|
<p>I think the second approach will be the best because it only requires that your training set is annotated with four labels for each of the four corners of the paper sheet.</p>
<p>This is sort of the idea of a Region Proposal Network which is used in <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="noreferrer">Faster R-CNN</a> (section 3.1).</p>
<p><a href="https://github.com/pytorch/vision/blob/5e56575e688a85a3bc9dc3c97934dd864b65ce47/torchvision/models/detection/rpn.py#L88-L367" rel="noreferrer">Here</a> is a reference implementation of a Region Proposal Network in PyTorch from the <a href="https://github.com/pytorch/vision/" rel="noreferrer">torchvision</a> library. Notice how the network outputs <code>boxes</code> (in the <code>forward()</code> method) which is a tuple <code>(x1, y1, x2, y2)</code>. From these four coordinates, you could crop the image to the desired paper sheet region.</p>
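<p>Once the network has predicted the four coordinates, the cropping itself is trivial. A minimal stdlib-only sketch of that step (the <code>crop</code> helper and the toy image are illustrative, not part of torchvision; real deskewing would additionally need a homographic transform):</p>

```python
def crop(image, box):
    """Crop a 2-D image (list of pixel rows) to box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

# Toy 4x6 "image" where pixel (r, c) has value r * 10 + c.
image = [[r * 10 + c for c in range(6)] for r in range(4)]

# Pretend the network predicted this box; keep columns 1..3 of rows 1..2.
patch = crop(image, (1, 1, 4, 3))
```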
| 3
|
neural network architectures
|
Neural network architecture for comparison
|
https://ai.stackexchange.com/questions/9169/neural-network-architecture-for-comparison
|
<p>When someone wants to compare 2 inputs, the most widespread idea is to use a Siamese architecture. Siamese architecture is a very high level idea, and can be customized based on the problem we are required to solve.</p>
<p><strong>Is there any other architecture type to compare 2 inputs?</strong></p>
<hr>
<h3>Background</h3>
<p>I want to use a neural network for comparing 2 documents (semantic textual similarity). Siamese network is one approach, I was wondering if there is more.</p>
|
<p>A RBM (restricted Boltzmann machine) can be trained to extract document features. The same resulting machine can extract features of two or more documents. Because documents can be just as easily processed in series using the same machine parameters and CPU (saving the feature results) as documents could be processed in parallel using separate CPUs, the Siamese idea is less of a characteristic of system architecture and more of a characteristic of the process topology. The architectural decision of how to run the process topology on available hardware should usually remain somewhat decoupled.</p>
<p>The comparison operation could be an MLP (multi-layer perceptron) trained to produce the comparison results based on a requisite number of example document comparisons. In such a case, the MLP should be trained with the previously trained RBMs as a front end. You will need some human labor to produce, or extract from available data sources, the example document comparison results. The example results will be needed, along with a reference to the pair of RBM feature-extraction results corresponding to each comparison, to train the MLP.</p>
<p>Even further upstream from the RBMs, it is often useful, for reliable and accurate document processing, to pre-process the documents. Images and text can be delineated so that features can be extracted from the images in one way, from the text in another, and then from the sequence of text and images in a third way. In such process topologies, CNNs (convolutional networks) typically pre-process the images and LSTM networks typically pre-process the text. They may then feed the RBM directly, or indirectly through another process component.</p>
| 4
|
neural network architectures
|
Is there a neural network architecture specialized for mapping lower-to-higher-dimensional data?
|
https://ai.stackexchange.com/questions/41498/is-there-a-neural-network-architecture-specialized-for-mapping-lower-to-higher-d
|
<p>I am building a neural network that takes in a set of 86 parameters (primarily architecture-related, such as building floor area, kitchen size, number of a certain type of furniture, etc.) and outputs a 4D multidimensional array of shape <code>(6, 2532, 39, 5)</code> containing vertices of path data that are meant to output to a CAD file of the building. As of the current moment, I am using the most primitive method to do this: a basic densely-connected neural network with the architecture:</p>
<pre><code>Input(86),
Normalize(),
Dense(64, activation="relu"),
Dense(128, activation="relu"),
Dense(128, activation="relu"),
# required for the reshape
Dense(2962440),
Reshape((6, 2532, 39, 5))
</code></pre>
<p>However, this densely-connected neural network produces losses in the trillions, making it hardly viable as a deployable model. This is to be expected - MLPs aren't meant to map lower-dimensional features onto higher-dimensional labels. My "cheating" way of making a <code>Dense</code> layer of high enough dimensions to do a reshape isn't truly providing information that the neural network can use to learn. Yet I cannot think of a neural network architecture specialized for low-to-high dimensional training - most architectures are instead specialized for high-to-low dimensional training, like CNNs, which reduce 2D/3D data down to 1D.</p>
<p>Is there any neural network architecture suitable for the task I have outlined?</p>
<hr />
<p>UPDATE: I found that significant loss reduction could be accomplished by rescaling my train labels to between [0, 1]. However, the model now performs poorly on any input data other than its own training dataset, as the rescaling means that the neural network is highly sensitive to any variation in input data.</p>
| 5
|
|
neural network architectures
|
What type of neural network architecture allows filtering out of unwanted sounds?
|
https://ai.stackexchange.com/questions/38069/what-type-of-neural-network-architecture-allows-filtering-out-of-unwanted-sounds
|
<p>I have a use case where I will be inputting audio to a model, and the output of the model will be the same audio except with certain sounds removed (volume set to zero). The dataset is generated by taking an audio file, duplicating it, and then zeroing out the unwanted sounds (usually a half second long).</p>
<p>I believe a neural network architecture is needed here with the input being the undisturbed audio and its spectrogram. The output is then the modified/cleaned audio.</p>
<p>What model architectures would work for this use case? I would potentially like to have this run real-time as a person is speaking.</p>
|
<p>Since you say that you "believe that a neural network architecture is needed here...", I am assuming that you are open to other options. This approach doesn't utilize a neural network, but I think it can potentially get the job done (with some caveats). One approach is to</p>
<ol>
<li>First isolate the unwanted sound segments that you want to "subtract"</li>
<li>Calculate the Fourier transform of each unwanted sound segment</li>
<li>Determine the components with the highest amplitudes in each unwanted sound segment</li>
<li>Determine if these dominant peaks have shared frequencies across the various unwanted sound segments</li>
<li>If so, this spectrum of frequencies is the "signature" of your unwanted sound</li>
<li>Once you have that, you can simply subtract those frequencies from your full sound recording to remove the unwanted sound</li>
<li>This is done by finding the Fourier transform of the full sound recording</li>
<li>Subtracting the frequencies (i.e., the signature) of the unwanted sound</li>
<li>Transforming back to the time domain to get the signal minus the unwanted sound</li>
</ol>
<p>As implied, for this approach to work, the unwanted sounds will have to have a consistent set of dominant frequencies (i.e., a signature). If you have multiple "types" of unwanted sounds, then this procedure can be repeated for each type (i.e., find the signature for each type). If you have multiple types of unwanted sounds and you cannot distinguish them, a neural network may then be useful.</p>
<p>You will also have to make sure that those frequencies are not a significant component of your desired signal. Otherwise, you will be removing an important part of your desired signal. To do that, just follow the same procedure as described for the unwanted signal.</p>
<p>This code from <a href="https://www.mathworks.com/matlabcentral/answers/382113-how-to-remove-noise-from-audio-using-fourier-transform-and-filter-and-to-obtain-back-the-original-au" rel="nofollow noreferrer">MATLAB</a> describes one possible implementation (albeit for a denoising application).</p>
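<p>The frequency-subtraction step above can be sketched with a stdlib-only toy. The naive <code>dft</code>/<code>idft</code> and the two synthetic tones are illustrative stand-ins for a real FFT and a real recording; here the "unwanted signature" is a single tone in a known frequency bin, which we zero out before transforming back:</p>

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spec):
    """Inverse DFT, returning the real part of the reconstruction."""
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

N = 64
wanted_bin, unwanted_bin = 3, 10  # frequency bins of the two tones

# "Recording" = desired tone + unwanted tone.
signal = [math.sin(2 * math.pi * wanted_bin * t / N)
          + 0.8 * math.sin(2 * math.pi * unwanted_bin * t / N)
          for t in range(N)]

spec = dft(signal)
# Subtract the unwanted signature: zero its bin and the mirror bin.
spec[unwanted_bin] = 0
spec[N - unwanted_bin] = 0
cleaned = idft(spec)

desired = [math.sin(2 * math.pi * wanted_bin * t / N) for t in range(N)]
```

<p>After zeroing the signature bins, <code>cleaned</code> matches the desired tone up to numerical round-off; in a real application the signature would span a band of bins rather than a single one.</p>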
| 6
|
neural network architectures
|
Is the following neural network architecture considered deep learning?
|
https://ai.stackexchange.com/questions/6215/is-the-following-neural-network-architecture-considered-deep-learning
|
<p>I am working in the following neural network architecture, I am using keras and TensorFlow as a back-end.</p>
<p>It is composed of the following: a word embedding layer, then a Long Short-Term Memory (LSTM) layer, and one output layer. Finally, I am using the softmax activation function.</p>
<pre><code>model = Sequential()
model.add(Embedding(MAX_NB_WORDS, 64, dropout=0.2))
model.add(LSTM(64, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(8))
model.add(Activation('softmax'))
</code></pre>
<p>I have the following question: if I build a model through this code, could the final product be called a deep learning model? I know that this code is very small; however, there are a lot of computations that the machine is making in the background.</p>
|
<p>"Deep learning" is not formally defined. However, typically even simple RNNs are taught as advanced neural network subject alongside other topics labelled "deep learning". </p>
<p>Technically, given the time dimension, the depth of a RNN can include many layers of processing (as opposed to many layers of parameters). As such, some of the knowledge and experience used to help with deep feed-forward networks also applies to RNNs. You could consider the LSTM architecture one such thing, because it is designed to address the <a href="https://en.wikipedia.org/wiki/Vanishing_gradient_problem" rel="nofollow noreferrer">vanishing gradient problem</a> that plagues simpler RNN architectures.</p>
<p>So, yes you can call your model a "deep learning model" and have that generally accepted. </p>
<p>I'd be slightly concerned if anyone important to the success of a project thought that label was a big deal - either placed on your CV or used as a buzzword on a resulting product. However, it is not unrealistic marketing because it is essentially true.</p>
| 7
|
neural network architectures
|
Which neural network architectures are there that perform 3D convolutions?
|
https://ai.stackexchange.com/questions/5979/which-neural-network-architectures-are-there-that-perform-3d-convolutions
|
<p>I am trying to do <strong>3d image deconvolution</strong> using a <strong>convolutional neural network</strong>. But I cannot find many famous CNNs that perform a 3d convolution. Can anyone point out some for me?</p>
<p>Background: I am using PyTorch, but any language is OK. What I want to know most is the network structure. I can't find papers on this topic.</p>
<p>Links to research papers would be especially appreciated.</p>
|
<p>There are many approaches for training a CNN on 3d data, but the decision to use a particular architecture is heavily dependent upon the format of your dataset.</p>
<p>If you are using 3d point cloud data, I would suggest you go through <a href="https://arxiv.org/abs/1612.00593" rel="nofollow noreferrer">PointNet</a> and <a href="https://github.com/yangyanli/PointCNN" rel="nofollow noreferrer">PointCNN</a>.</p>
<p>But training a CNN on 3d point clouds is very tough.</p>
<p>There is also a way to train CNNs by posing the 3d structure from different viewpoints (<a href="https://arxiv.org/pdf/1505.00880.pdf" rel="nofollow noreferrer">Multiview CNNs</a>).</p>
<p>But remember that training CNN on 3d data is really a tough task. </p>
<p>If you plan to use a voxelized input data format, I suggest going through <a href="https://arxiv.org/abs/1711.06396" rel="nofollow noreferrer">VoxelNet</a>.</p>
<p>Since you are mentioning deconvolution, the most relevant paper I can come across is <a href="https://arxiv.org/abs/1606.06650" rel="nofollow noreferrer">3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation</a>.</p>
<p>But deconvolution is in its own right a very expensive operation, and applying it to 3d data makes it even harder, so I would suggest you check for alternative methods.</p>
| 8
|
neural network architectures
|
Neural network architecture with inputs and outputs being an unkown function each
|
https://ai.stackexchange.com/questions/24852/neural-network-architecture-with-inputs-and-outputs-being-an-unkown-function-eac
|
<p>I am trying to set up a neural network architecture that is able to learn the points of one function (blue curves) from the points of an other one (red curves). I think that it could be somehow similar to the problem of learning a functional like it was described in this question <a href="https://stats.stackexchange.com/questions/158348/can-a-neural-network-learn-a-functional-and-its-functional-derivative">here</a>. I don't know at all what this (let's call it) functional looks like, I just see the 'blue' response of it to the 'red' input.</p>
<p><a href="https://i.sstatic.net/SvfNK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SvfNK.png" alt="enter image description here" /></a></p>
<p>The inputs of the network would be the (e.g. 100) points of a red curve and the outputs would probably be the (e.g. 50) points of the blue curve. This is where my problem begins. I tried to implement a simple dense network with two hidden layers and around 200-300 neurons each. Obviously it didn't learn much.</p>
<p>I have the feeling that I somehow need to tell the network that the points next to each other (e.g. input points <span class="math-container">$x_0$</span> and <span class="math-container">$x_1$</span>) are correlated and the function they belong to is differentiable. For the inputs this could be achieved by using convolutional layers, I suppose. But I don't really know how to specify that the output nodes are correlated with each other as well.</p>
<p>At the beginning I had high hopes for the approach using Generalized Regression Neural Networks as presented <a href="http://article.nadiapub.com/IJSIP/vol7_no2/8.pdf" rel="nofollow noreferrer">here</a>, where a lowpass filter is implemented with NNs. However, as I understood it, only the filter coefficients are predicted. As I don't know anything about the general structure of my functional, this will not help me here ...</p>
<p>Do you have any other suggestions for NN architectures that could be helpful for this problem? Any hint is appreciated, thank you!</p>
| 9
|
|
neural network architectures
|
Neural network architecture for line orientation prediction
|
https://ai.stackexchange.com/questions/6996/neural-network-architecture-for-line-orientation-prediction
|
<p>Imagine that a line divides an image in two regions which (slightly) differ in terms of texture and color. It is not a perfect, artificial line but rather a thin transition zone. I want to build a neural network which is able to infer geometrical information on this line (orientation and offset). The image may also contain other elements which are not relevant for the task. Now, would a classical CNN be suitable for this task? How complex should it be in terms of number of convolutions (and number of layers, in general)?</p>
|
<p>A long shot: <a href="http://openaccess.thecvf.com/content_ICCV_2017/papers/Lee_Semantic_Line_Detection_ICCV_2017_paper.pdf" rel="nofollow noreferrer">these guys</a> have worked on a problem that might be relevant. They define "semantic" lines as lines delimiting significant regions or objects in an image. To detect such lines, they use the conv layers from a pre-trained VGG16 net and then add their own layers on top. The cool thing about their approach is that they run both classification <em>and</em> regression in parallel on the same network.</p>
<p>You might be able to adopt a similar technique to determine <em>where</em> the line is, and then run some simple analysis on the extracted line to determine the offset and the orientation.</p>
| 10
|
neural network architectures
|
What kind of neural network architecture do I use to classify images into one hundred thousand classes?
|
https://ai.stackexchange.com/questions/6880/what-kind-of-neural-network-architecture-do-i-use-to-classify-images-into-one-hu
|
<p>I have an image dataset where objects may belong to one of the hundred thousand classes.</p>
<p>What kind of neural network architecture should I use in order to achieve this?</p>
|
<p>Classification tasks with a large number of classes are usually handled with <a href="http://ruder.io/word-embeddings-softmax/index.html#hierarchicalsoftmax" rel="nofollow noreferrer">hierarchical softmax</a> to reduce the complexity of the final layer. This is useful, for example, in applications such as word embedding where you have hundreds of thousands of classes (words), like in your case.</p>
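<p>The two-level idea behind hierarchical softmax can be sketched with a stdlib-only toy (the clusters and logits below are made up for illustration; real implementations use a deeper, balanced tree so that each prediction costs O(log V) instead of O(V)):</p>

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of logits."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy setup: 6 classes partitioned into 2 clusters of 3 classes each.
clusters = [[0, 1, 2], [3, 4, 5]]
cluster_logits = [1.2, -0.3]            # logits for P(cluster | input)
class_logits = [[0.5, 0.1, -0.2],       # logits for P(class | cluster, input)
                [2.0, 0.0, 1.0]]

# P(class) = P(cluster) * P(class | cluster): two small softmaxes
# instead of one softmax over all classes.
p_cluster = softmax(cluster_logits)
p_class = {}
for ci, members in enumerate(clusters):
    within = softmax(class_logits[ci])
    for label, p in zip(members, within):
        p_class[label] = p_cluster[ci] * p
```

<p>The factorized probabilities still form a valid distribution over all classes, but computing the probability of any single class only touches one path through the hierarchy.</p>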
| 11
|