model evaluation
Some questions about supervised learning, model evaluation and preprocessing
https://datascience.stackexchange.com/questions/104851/some-questions-about-supervised-learning-model-evaluation-and-preprocessing
<p>I've been trying to employ some basic techniques of supervised learning on a dataset that I have, and I have several questions about the overall procedure (i.e. data preprocessing, model evaluation etc.).</p> <p>Before I start posing the questions, let me give you an idea of what my dataset looks like. The dataset is from the OpenML repository; it consists of 22 different types of articles (the targets or classes) and 1079 different words (features). The aim is to classify the upcoming blurbs of these articles based on these 1079 words. Below you can see the first 5 rows of my dataset.</p> <p><a href="https://i.sstatic.net/llTvt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/llTvt.png" alt="The first 5 rows of the dataset" /></a></p> <p>As you can see from the above snippet, the first 22 columns hold my targets (i.e. types of articles) and the rest of the columns belong to my word predictors. The values of my features are binary, i.e. «1» if the word appeared in the blurb and «0» otherwise. First, I do a little preprocessing: I separate the targets from the features, I change the boolean values that correspond to the targets to 0 and 1, and I give labels to my articles (i.e. 0: «Entertainment», 1: «Interviews» etc.). In the following snippet you can see the distribution of my samples for each different type of article.<a href="https://i.sstatic.net/9w63u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9w63u.png" alt="enter image description here" /></a></p> <p>As you can see from the above image, my dataset is imbalanced. My aim is to try the following classification algorithms and choose the best one at the end: 1) GaussianNB (GNB), 2) KNearestNeighbors (KNN), 3) LogisticRegression (LR), 4) Multi-Layer Perceptron (MLP) and 5) Support Vector Machines (SVM). 
Before preprocessing my dataset in more detail, I split it into 70% train and 30% test and do an out-of-the-box test of these algorithms; in the following table you can see my results</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;"><strong>Classifier</strong></th> <th style="text-align: left;"><strong>f1score(train)</strong></th> <th style="text-align: left;"><strong>acc(train)</strong></th> <th style="text-align: left;"><strong>f1score(test)</strong></th> <th style="text-align: left;"><strong>acc(test)</strong></th> <th style="text-align: left;"><strong>Time – f1score(train)</strong></th> </tr> </thead> <tbody> <tr> <td style="text-align: left;"><strong>GaussianNB</strong></td> <td style="text-align: left;">23 %</td> <td style="text-align: left;">41 %</td> <td style="text-align: left;">24 %</td> <td style="text-align: left;">40 %</td> <td style="text-align: left;">0.54 (sec)</td> </tr> <tr> <td style="text-align: left;"><strong>Dummy</strong></td> <td style="text-align: left;">1 %</td> <td style="text-align: left;">16 %</td> <td style="text-align: left;">2 %</td> <td style="text-align: left;">19 %</td> <td style="text-align: left;">0.01 (sec)</td> </tr> <tr> <td style="text-align: left;"><strong>1NN</strong></td> <td style="text-align: left;">4 %</td> <td style="text-align: left;">18 %</td> <td style="text-align: left;">8 %</td> <td style="text-align: left;">24 %</td> <td style="text-align: left;">0.22 (sec)</td> </tr> <tr> <td style="text-align: left;"><strong>Logistic</strong></td> <td style="text-align: left;">34 %</td> <td style="text-align: left;">56 %</td> <td style="text-align: left;">40 %</td> <td style="text-align: left;">62 %</td> <td style="text-align: left;">1.04 (sec)</td> </tr> <tr> <td style="text-align: left;"><strong>MLP</strong></td> <td style="text-align: left;">33 %</td> <td style="text-align: left;">54 %</td> <td style="text-align: left;">39 %</td> <td style="text-align: left;">59 %</td> <td style="text-align: left;">10.5 (sec)</td> </tr> <tr> <td style="text-align: left;"><strong>SVM</strong></td> <td style="text-align: left;">19 %</td> <td style="text-align: left;">46 %</td> <td style="text-align: left;">32 %</td> <td style="text-align: left;">57 %</td> <td style="text-align: left;">3.7 (sec)</td> </tr> </tbody> </table> </div> <p>The first two columns refer to the outcome of each classifier using 2-fold cross-validation on my training set according to the accuracy and f1 (average = macro) metrics, and the last two columns are the results of the classifiers on the test set according to the same metrics. From the above table you can observe that some of these classifiers have low performance. In my next task I try to preprocess my data in more detail using techniques like StandardScaler, VarianceThreshold, PCA or SelectKBest, and RandomOverSampler, and to optimize the parameters of my classifiers with GridSearch. At this point I can pose my first question.</p> <p><strong>Q1</strong> Are the above techniques (except GridSearch, which I completely understand) guaranteed to improve the performance of the classifiers? I mean, is there any justification that these techniques generally work better, or is it just a trial-and-observation procedure?</p> <p>In the next code block I create a pipeline that performs the following steps in sequence: oversampling, deleting features with low variance, and scaling, before training the model (in this case I use only GaussianNB). 
Observe that I first split my dataset into train and test and then apply the oversampling.</p> <pre><code># Case 1: splitting first, then oversampling on the training set
from imblearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import RandomOverSampler
from sklearn.feature_selection import VarianceThreshold

selector = VarianceThreshold()
scaler = StandardScaler()
over = RandomOverSampler()
clf = GaussianNB()
pipe = Pipeline(steps=[('over', over), ('selector', selector),
                       ('scaler', scaler), ('GNB', clf)])

train_new, test_new, train_new_labels, test_labels_new = \
    train_test_split(features, labels, test_size=0.3)
pipe.fit(train_new, train_new_labels)
pipe.score(test_new, test_labels_new)
</code></pre> <p>The accuracy on the test set of the above pipeline is 38 %, which is 2 % less than the score of the GaussianNB without this preprocessing procedure, so my second question is the following.</p> <p><strong>Q2</strong> Why did these modifications deteriorate the performance of the classifier? Is there any sign in the values of my dataset or its structure that could have predicted this outcome?</p> <p>Now, if I swap the order of the splitting and the oversampling I get completely different results; for example, if I run the following block of code.</p> <pre><code># Case 2: oversampling before splitting
clf = GaussianNB()
features2, labels2 = over.fit_resample(features, labels)   # First do the oversampling
train_new, test_new, train_new_labels, test_labels_new = \
    train_test_split(features2, labels2, test_size=0.3)    # Then do the splitting

# This block does all the preprocessing.
train_new = selector.fit_transform(train_new)
test_new = selector.transform(test_new)
train_new = scaler.fit_transform(train_new)
test_new = scaler.transform(test_new)
# selector2 is assumed to be a feature selector (e.g. SelectKBest) defined earlier.
train_new = selector2.fit_transform(train_new, train_new_labels)
test_new = selector2.transform(test_new)

clf.fit(train_new, train_new_labels)   # Fit on train
clf.score(test_new, test_labels_new)   # Evaluate on test
</code></pre> <p>I get 74 % accuracy on the test set, which is much better than before. So my question is:</p> <p><strong>Q3</strong> Why did swapping the order of the splitting and the oversampling change the result that much? In general, must I do the splitting first and then preprocess only my training set? For example, I understand that if I first do some preprocessing like scaling or PCA and then split my set, my results would be biased, since I would have preprocessed the test set as well, but I don't understand why this also happens with oversampling (if this is the case).</p> <p>To give you another view of the above result, below I show you the learning curve of the GaussianNB with 10-fold cross-validation in the second case, where I first do the oversampling and then the splitting.</p> <p><a href="https://i.sstatic.net/XFpb3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XFpb3.png" alt="enter image description here" /></a></p> <p>As you can see from the above snippet, the validation score and the training score converge to the same number, which is a good indication that the model can achieve good generalization performance.</p> <p><strong>Q4</strong> Which of the above two cases is more likely to give me good results on future unseen samples? Furthermore, what kind of preprocessing would you suggest for the above dataset? For example, in these two runs I deleted all samples from my dataset that belonged to 2 or more classes; this modification left me with 84% of my initial dataset. 
Would it have been better to create duplicates of these samples instead, to mitigate the class imbalance of the dataset?</p> <p>PS: Excuse me for the long post; I don't expect to get answers to all of the above questions, and I would be very pleased if you shared your opinions or insights on any of them! Thanks in advance!</p>
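For reference, the out-of-the-box comparison described above can be sketched as follows. This is not the asker's actual code: it uses synthetic data from `make_classification` as a stand-in for the real word-feature dataset, and only a subset of the classifiers, but the loop structure (2-fold CV on the training split plus held-out test scores) is the same idea.

```python
# Hedged sketch of the out-of-the-box classifier comparison on stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score

# Stand-in for the real dataset: several classes, informative features.
X, y = make_classification(n_samples=600, n_features=50, n_informative=20,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

classifiers = {'Dummy': DummyClassifier(strategy='most_frequent'),
               'GNB': GaussianNB(),
               '1NN': KNeighborsClassifier(n_neighbors=1),
               'Logistic': LogisticRegression(max_iter=1000)}

results = {}
for name, clf in classifiers.items():
    # 2-fold CV on the training split, as in the question's table.
    cv = cross_validate(clf, X_train, y_train, cv=2,
                        scoring=['accuracy', 'f1_macro'])
    clf.fit(X_train, y_train)
    results[name] = {'cv_acc': cv['test_accuracy'].mean(),
                     'cv_f1': cv['test_f1_macro'].mean(),
                     'test_acc': clf.score(X_test, y_test),
                     'test_f1': f1_score(y_test, clf.predict(X_test),
                                         average='macro')}

for name, r in results.items():
    print(name, {k: round(v, 2) for k, v in r.items()})
```

On informative synthetic data the dummy baseline should come out clearly below the real classifiers, mirroring the table above.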
<h2>Q1</h2> <blockquote> <p>Q1 Are the above techniques (except GridSearch, which I completely understand) guaranteed to improve the performance of the classifiers? I mean, is there any justification that these techniques generally work better, or is it just a trial-and-observation procedure?</p> </blockquote> <p>Q1 is very broad because it involves many different techniques, each of which can have its purpose in some cases but not in general. As a whole, they are unlikely to always have a positive effect on performance, especially across a set of very different learning algorithms. But when used properly it's not a trial-and-error procedure: it's about using the appropriate technique(s) knowing the characteristics of the data and algorithm (and the possible problems with them).</p> <h2>Q2</h2> <blockquote> <p>Q2 Why did these modifications deteriorate the performance of the classifier? Is there any sign in the values of my dataset or its structure that could have predicted this outcome?</p> </blockquote> <p>Most of the difference is probably explained by the oversampling. We see a lot of questions here on DSSE about resampling, presumably because it is a very simple technique which can sometimes have a large impact on performance... especially when used the wrong way!</p> <p>It's useful to understand resampling, and there might be a few rare cases where it makes sense, but imho it's more often just a lazy option to avoid studying the specific data and problem properly, and it's also a big source of confusion and mistakes (see the next question). In general resampling simply doesn't improve performance; see for instance <a href="https://datascience.stackexchange.com/a/104433/64377">this answer</a>.</p> <h2>Q3</h2> <blockquote> <p>Q3 Why did swapping the order of the splitting and the oversampling change the result that much? In general, must I do the splitting first and then preprocess only my training set?</p> </blockquote> <p>The most important point is to understand what a proper evaluation method is in supervised learning:</p> <ul> <li>A ML model is trained to solve one particular &quot;target problem&quot;, and this problem must be clearly defined from the start. In particular, the distribution of the classes is an important part of the definition of the problem itself. For example, classifying between N balanced classes and between N imbalanced classes are not the same problem, and a classifier trained for one is unlikely to work well for the other.</li> <li>Logically, the evaluation stage must always represent the target problem as defined, including of course the class distribution. If the test set doesn't follow the &quot;true distribution&quot; of the target problem, the resulting performance is unreliable, since we simply don't know how the model would perform on the true distribution.</li> </ul> <p>In Q3 the model is evaluated on the oversampled data. Unless you really mean to use the model on this distribution (unlikely, since it's not the one in the source data), this performance doesn't mean anything for the real target problem with the original distribution. The reason why the performance is much higher is that the balanced version of the problem is much easier, so it's like solving an easy question and pretending that it's the same as the original difficult question ;)</p> <h2>Q4</h2> <blockquote> <p>Q4 Which of the above two cases is more likely to give me good results on future unseen samples? Furthermore, what kind of preprocessing would you suggest for the above dataset? For example, in these two runs I deleted all samples from my dataset that belonged to 2 or more classes; this modification left me with 84% of my initial dataset. Would it have been better to create duplicates of these samples instead, to mitigate the class imbalance of the dataset?</p> </blockquote> <p>As per the answer above, future unseen samples probably follow the same distribution as the source data, so any performance calculated on another distribution is meaningless for them.</p> <p>For the same reasons, deleting or adding instances can indeed modify performance, but that also means changing the target problem.</p> <p>See also <a href="https://datascience.stackexchange.com/q/104745/64377">this question</a> about supervised learning, in case it helps.</p>
62
model evaluation
Evaluation of linear regression model
https://datascience.stackexchange.com/questions/40265/evaluation-of-linear-regression-model
<p>I want to evaluate the performance of my linear regression model. I have the true values of y (y_true). I am thinking of two ways of evaluating it, but I am not sure which one is correct. </p> <p>Let's assume that we have 2 samples and each sample has two outputs, as follows: </p> <pre><code>y_true = [[0.5, 0.5], [0.6, 0.3]]
y_pred = [[0.3, 0.7], [0.9, 0.1]]
</code></pre> <p><strong>- Approach#1 :</strong> </p> <p>One way is to calculate the sum of the absolute differences between the actual and predicted values for each vector and then average over all vectors, as follows: </p> <p>sum_diff_Vector(1) = abs( 0.5 - 0.3 ) + abs( 0.5 - 0.7 ) = 0.4</p> <p>sum_diff_Vector(2) = abs( 0.6 - 0.9 ) + abs( 0.3 - 0.1 ) = 0.5</p> <p>Then avg ( sum_diff_Vector(1) , sum_diff_Vector(2) ) = 0.45</p> <p><strong>- Approach#2 :</strong> </p> <p>Another way is to use the mean absolute error provided by sklearn.metrics in Python. The thing with this metric, as opposed to the previous method, is that it calculates the mean absolute error for each output over all samples independently and then averages them, as follows: </p> <p>MAE_OUTPUT(1) = ( abs( 0.5 - 0.3 ) + abs( 0.6 - 0.9 ) ) / 2 = 0.25</p> <p>MAE_OUTPUT(2) = ( abs( 0.5 - 0.7 ) + abs( 0.3 - 0.1 ) ) / 2 = 0.2 </p> <p>Then avg ( MAE_OUTPUT(1) , MAE_OUTPUT(2) ) = 0.225</p> <p>Which way is correct and which should I use? Please advise. </p>
<p>The only difference in your example is that you divide by an additional factor of two, because you take the mean per vector instead of the sum. Correctness does not come into play here: for comparison between different models the only difference is a constant factor, and for interpretability it depends on the problem you are solving.</p> <p>The mean absolute error punishes mistakes linearly, while the mean squared error punishes larger mistakes more heavily. So which to use depends a bit on what you want to measure, based on the problem you are solving. Beyond proper evaluation, you could use this same measure to change the KPI you are optimizing directly, via a different loss function.</p>
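The constant-factor relationship between the two approaches can be checked in a few lines of plain Python (no sklearn needed); with 2 outputs per sample, Approach#1 is exactly twice Approach#2:

```python
# Reproducing both approaches from the question to show they differ
# only by a constant factor (the per-vector sum vs. mean).
y_true = [[0.5, 0.5], [0.6, 0.3]]
y_pred = [[0.3, 0.7], [0.9, 0.1]]

# Approach 1: sum of absolute differences per sample, then average.
per_sample_sums = [sum(abs(t - p) for t, p in zip(ts, ps))
                   for ts, ps in zip(y_true, y_pred)]
approach1 = sum(per_sample_sums) / len(per_sample_sums)

# Approach 2: MAE per output over all samples, then average
# (what sklearn.metrics.mean_absolute_error does for multioutput data).
n_outputs = len(y_true[0])
per_output_maes = [sum(abs(t[j] - p[j]) for t, p in zip(y_true, y_pred))
                   / len(y_true) for j in range(n_outputs)]
approach2 = sum(per_output_maes) / n_outputs

print(approach1, approach2)  # 0.45 vs 0.225: a factor of n_outputs = 2
```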
63
model evaluation
Evaluation code understanding in wide and deep model of tensorflow
https://datascience.stackexchange.com/questions/35993/evaluation-code-understanding-in-wide-and-deep-model-of-tensorflow
<p>I'm doing some research on the wide and deep model developed by Google, and I have 2 questions on the training and evaluation code fragment (see the <a href="https://github.com/tensorflow/models/blob/master/official/wide_deep/wide_deep_run_loop.py#L96-L120" rel="nofollow noreferrer">complete code</a> here):</p> <pre><code>train_hooks = hooks_helper.get_train_hooks(
    flags_obj.hooks,
    model_dir=flags_obj.model_dir,
    batch_size=flags_obj.batch_size,
    tensors_to_log=tensors_to_log)

# Train and evaluate the model every `flags.epochs_between_evals` epochs.
for n in range(flags_obj.train_epochs // flags_obj.epochs_between_evals):
    model.train(input_fn=train_input_fn, hooks=train_hooks)
    results = model.evaluate(input_fn=eval_input_fn)

    # Display evaluation metrics
    tf.logging.info('Results at epoch %d / %d',
                    (n + 1) * flags_obj.epochs_between_evals,
                    flags_obj.train_epochs)
    tf.logging.info('-' * 60)
    for key in sorted(results):
        tf.logging.info('%s: %s' % (key, results[key]))

    benchmark_logger.log_evaluation_result(results)

    if early_stop and model_helpers.past_stop_threshold(
            flags_obj.stop_threshold, results['accuracy']):
        break
</code></pre> <p>So my questions: </p> <ol> <li><p>The default setting for <code>epochs_between_evals</code> is 2, but how is it possible to evaluate once after every 2 training epochs? Apparently there is one <code>model.evaluate()</code> after every <code>model.train()</code>, isn't there?<br> Perhaps the <code>train_hooks</code> control that in <code>model.train()</code>, breaking every 2 epochs, but the parameters of <code>train_hooks</code> don't include <code>num_epochs</code> or <code>epochs_between_evals</code>. I'm a bit confused.</p></li> <li><p>What does <code>model.evaluate()</code> actually do? As I check the source code, it seems to be just a forward pass to get evaluation outputs. Is any other work done behind the scenes? </p></li> </ol>
64
model evaluation
Evaluation of a model of imbalanced data
https://datascience.stackexchange.com/questions/114577/evaluation-of-a-model-of-imbalanced-data
<p>I've created a model with the Random Forest algorithm. There are 45k observations, of which 12% are 1s and the rest are 0s. As far as I know, ROC AUC is not the best evaluation metric in such a case. I went with PR AUC and got 59%. How would you assess the results?</p> <p><a href="https://i.sstatic.net/XyA5a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XyA5a.png" alt="classification report and confusion matrix" /></a></p> <p><a href="https://i.sstatic.net/zdH92.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zdH92.png" alt="PR AUC" /></a></p> <p><a href="https://i.sstatic.net/RCX7p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RCX7p.png" alt="ROC AUC" /></a></p>
<p><a href="https://stats.meta.stackexchange.com/q/6349/247274">As discussed on the stats.SE meta, class imbalance leads to a lot of misconceptions, and it is important to have a strong understanding of the underlying statistics in order to overcome those misconceptions.</a></p> <p>Log loss and Brier score are two popular metrics, whether there is imbalance or not. Among their advantages is that they are strictly proper scoring rules that are uniquely optimized in expected value by the true probabilities. (Links contained in that stats.SE meta post discuss the importance of such scoring rules.)</p> <p>Let <span class="math-container">$y_i$</span> and <span class="math-container">$p_i$</span> be the <span class="math-container">$i^{th}$</span> true observation (<span class="math-container">$0$</span> or <span class="math-container">$1$</span>) and predicted probability, respectively.</p> <p><span class="math-container">$$ \text{Log Loss}\\ -\dfrac{1}{N}\sum_{i=1}^N\bigg[ y_i\log(p_i)+(1-y_i)\log(1-p_i) \bigg] $$</span></p> <p><span class="math-container">$$ \text{Brier Score}\\ \dfrac{1}{N} \sum_{i=1}^N\bigg( y_i-p_i \bigg)^2 $$</span></p> <p>These are related to the McFadden and Efron pseudo <span class="math-container">$R^2$</span> values discussed at a nice <a href="https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faq-what-are-pseudo-r-squareds/" rel="nofollow noreferrer">UCLA page</a>.</p>
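The two formulas above can be checked directly in a few lines of Python. The toy labels and probabilities below are invented for illustration; `sklearn.metrics.log_loss` and `brier_score_loss` compute the same quantities.

```python
# Log loss and Brier score implemented straight from the formulas above.
import math

y = [1, 0, 1, 1, 0]            # true observations y_i
p = [0.9, 0.2, 0.6, 0.8, 0.1]  # predicted probabilities p_i of class 1

log_loss = -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p)) / len(y)
brier = sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

print(round(log_loss, 4), round(brier, 4))
```

Both are scored so that lower is better, and both reward well-calibrated probabilities: confident wrong predictions are punished (very heavily so by log loss, which diverges as a wrong prediction approaches certainty).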
65
model evaluation
Is anything missing from my model evaluation procedure?
https://datascience.stackexchange.com/questions/43370/is-anything-missing-from-my-model-evaluation-procedure
<p>I have been building a model; can someone please review my methods and let me know if I am making a mistake? </p> <p>I trained a model with a support vector machine as follows: </p> <p>Split the data into training and test sets as 10 partitions for K10-fold cross-validation. </p> <p>Split the training set into training and validation sets with a K5 fold. </p> <p>Tune the parameter <span class="math-container">$C$</span> using the validation set by choosing the <span class="math-container">$C$</span> that gave the best results from the K5-fold tests.</p> <p>Train a model with parameter <span class="math-container">$C$</span> and the training data from the K10 fold, 10 times, once for each partition of the K10 fold. </p> <p>Take 1000 random samples of 80% of the test set partition data, and classify these random samples with the SVM. Calculate a mean and standard deviation. Repeat 10 times, once for each of the K10-fold partitions. Calculate the mean of all K10 partition means, and their combined standard deviation.</p> <p>I am in the process of repeating this entire process 10 times, after which I will compute a mean and standard deviation over all 10 experiments.</p> <p>For real-world testing, I plan to repeat the above procedure, but instead of splitting the data into train, test, and validation sets, I will use all the data to find <span class="math-container">$C$</span> with K5-fold cross-validation, and then test on real-world data. Meaning, there will be no test set: the test set will become part of the training set, and the training set will be bigger because of this. </p> <p>Is this the correct way to go about it? </p> <p>Edit: Here is a diagram, hope it's helpful. (Hyperparam = <span class="math-container">$C$</span>) <a href="https://i.sstatic.net/LzRzz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LzRzz.png" alt="diagram"></a></p>
66
model evaluation
Can I use GridSearchCV.best_score_ for evaluation of model performance?
https://datascience.stackexchange.com/questions/122562/can-i-use-gridsearchcv-best-score-for-evaluation-of-model-performance
<p>Scikit-learn page on Grid Search says:</p> <blockquote> <p>Model selection by evaluating various parameter settings can be seen as a way to use the labeled data to “train” the parameters of the grid.</p> <p>When evaluating the resulting model it is important to do it on held-out samples that were not seen during the grid search process: it is recommended to split the data into a development set (to be fed to the GridSearchCV instance) and an evaluation set to compute performance metrics.</p> </blockquote> <p>Does it mean that the GridSearchCV.best_score_ from the Grid Search object shouldn't be used for model performance evaluation? Why is that the case?</p> <p>I've been using my GridSearchCV scores as my performance estimates, because I wanted to get a reliable score over several runs (and standard deviation), and running a separate cross-validation after Grid Search gives me overestimated scores because some of the data in the CV validation sets was already seen by the Grid Search. Is this an incorrect approach?</p>
<p>Yes, GridSearchCV.best_score_ should not be used as a final measure of model performance. The reason is that this score is optimistic: it is the best score obtained on the validation sets during the grid search, but it does not guarantee that this is the score the model will achieve on unseen data.</p> <p>The grid search process involves tuning the hyperparameters of the model to find the combination that gives the highest score on the validation sets. This means that the model is indirectly &quot;fit&quot; to the validation sets, because the hyperparameters are chosen based on their performance on them. Therefore, the best_score_ is likely to be an overestimate of the true performance of the model on unseen data.</p> <p>To get a more reliable estimate of model performance, it is recommended to hold out a separate test set that is not used during the grid search process. After the grid search, you can evaluate the model on this test set. This gives you an estimate of how well the model is likely to perform on new, unseen data.</p> <p>Your approach of running a separate cross-validation after the grid search is not incorrect, but it may indeed give overestimated scores if some of the data in the CV validation sets was already seen by the grid search. To avoid this, you could split your data into three sets: a training set for the grid search, a validation set for the cross-validation, and a test set for the final performance evaluation.</p>
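A minimal sketch of the recommended separation, assuming scikit-learn and synthetic stand-in data: tune on a development split with `GridSearchCV`, then report the score on a test split the search never saw.

```python
# Tune hyperparameters on a development split, then evaluate once on a
# held-out test split that the grid search never touched.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.25,
                                                random_state=0)

search = GridSearchCV(LogisticRegression(max_iter=1000),
                      param_grid={'C': [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X_dev, y_dev)  # refit=True: retrains on all of X_dev at the end

print('best_score_ (optimistic CV estimate):', round(search.best_score_, 3))
print('held-out test score (reported number):', round(search.score(X_test, y_test), 3))
```

Only the second number is a fair performance estimate; `best_score_` is the quantity the search maximized, so it is a selection criterion rather than an unbiased evaluation.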
67
model evaluation
Rate-distortion plots in denoising diffusion model evaluation
https://datascience.stackexchange.com/questions/131567/rate-distortion-plots-in-denoising-diffusion-model-evaluation
<p>In the Denoising Diffusion Probabilistic Models paper (<a href="https://arxiv.org/abs/2006.11239" rel="nofollow noreferrer">https://arxiv.org/abs/2006.11239</a>), the rate-distortion plot is computed assuming access to a protocol that can transmit samples <span class="math-container">$(x_T, ... x_0)$</span>. This is then used to construct Algorithm 3 and Algorithm 4 in the paper, and the claim is that Eqn 5 (pasted below) gives the total number of bits transmitted on average.</p> <p><a href="https://i.sstatic.net/DpQZOk4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DpQZOk4E.png" alt="Equation 5" /></a></p> <p>I am confused as to how this is possible.</p> <p>E.g. if we restrict ourselves to just transmitting <span class="math-container">$x_T$</span>, and if <span class="math-container">$p(x_T)$</span> and <span class="math-container">$q(x_T|x_0)$</span> are exactly the same (e.g. both isotropic Gaussians), we cannot send <span class="math-container">$D_{KL}(q(x_T|x_0) || p(x_T)) = 0$</span> bits to reconstruct <span class="math-container">$x_T$</span> at the receiver. It seems we would need at least <span class="math-container">$H(x_T)$</span> bits to be sent, where <span class="math-container">$H$</span> is the entropy function. The Wikipedia entry on KL divergence also seems to agree with this interpretation: <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Introduction_and_context" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Introduction_and_context</a>.</p> <p>If Algorithm 3 and Algorithm 4 are just trying to send a message such that the receiver ends up with a random variable <span class="math-container">$y_T$</span> such that <span class="math-container">$x_T$</span> and <span class="math-container">$y_T$</span> have exactly the same distribution, then this seems possible to achieve with <span class="math-container">$D_{KL}(q(x_T|x_0) || p(x_T))$</span> bits for the first step. 
However in this case, doesn’t the logic break down at the next step? i.e. <span class="math-container">$p(x_{T-1}|x_T)$</span> will in general be very different from <span class="math-container">$p(y_{T-1}|y_T)$</span>.</p>
68
model evaluation
What to report in the build model, asses model and evaluate results steps of CRISP-DM?
https://datascience.stackexchange.com/questions/33265/what-to-report-in-the-build-model-asses-model-and-evaluate-results-steps-of-cri
<p>I would greatly appreciate if you could let me know what to report in the following steps of CRISP-DM?</p> <ul> <li><strong>Build Model</strong>: what should be reported for parameter settings, models and model description? I used grid search to tune hyperparameters.</li> <li><strong>Assess Model</strong>: what should be reported for model assessment and revised parameter settings?</li> <li><strong>Evaluate Results</strong>: what should be reported for assessment of data mining results?</li> </ul> <p><a href="https://i.sstatic.net/c53CB.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c53CB.gif" alt="enter image description here"></a></p> <p>In fact, I used just Logistic regression to do a classification task using a procedure like what is depicted below: <a href="https://i.sstatic.net/MSbzE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MSbzE.png" alt="enter image description here"></a></p> <p>Could the final model evaluation on test data set be considered as a part of Evaluate Results or it should be done on new data?</p>
<p>The way you are trying to present the outcome is pretty good.</p> <p>I cannot say that the following procedure is the standard one, but in my scenario I did something like this: <a href="https://i.sstatic.net/Y2uNf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y2uNf.png" alt="CRISP-DM"></a></p> <p>This is how I presented it to my managers to make them understand the procedure I followed.</p> <p>I made a slide for each and every segment and highlighted the points which were necessary for them to know.</p> <p>While showing the results, I named the data (accuracies) differently: </p> <ol> <li>Train Set Accuracy: on the training set, which is 70% of the total dataset; derive the accuracy %, also called model accuracy</li> <li>Test Set Accuracy: on the test set, which is 30% of the total dataset; derive the accuracy, called test accuracy</li> <li>Blind Test Accuracy: on completely new data; here I trained the model with the whole dataset and then tested using the new data (you could also call it a validation set)</li> </ol> <p>This is how I presented it: <a href="https://i.sstatic.net/eSzZS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eSzZS.png" alt="Accuracy Table"></a></p> <p>Since yours is also a classification problem, to explain it better I gave them the breakdown:</p> <p><a href="https://i.sstatic.net/uYdKU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uYdKU.png" alt="Further Breakdown of Accuracies"></a></p> <p>Do let me know if you have any issues; I would love to help you.</p>
69
model evaluation
reliability of human-level evaluation of the interpretability quality of a model
https://datascience.stackexchange.com/questions/92227/reliability-of-human-level-evaluation-of-the-interpretability-quality-of-a-model
<p>Christoph Molnar, in his book <a href="https://christophm.github.io/interpretable-ml-book/" rel="nofollow noreferrer">Interpretable Machine Learning</a>, writes that</p> <blockquote> <p>Human level evaluation (simple task) is a simplified application level evaluation. The difference is that these experiments are not carried out with the domain experts, but with laypersons. This makes experiments cheaper (especially if the domain experts are radiologists) and it is easier to find more testers. An example would be to show a user different explanations and the user would choose the best one.</p> </blockquote> <p>(Chapter = <em>Interpretability</em>, section = <em>Approaches for Evaluating the Interpretability Quality</em>).</p> <p>Why would anyone pick/trust a human-backed (non-expert) model over, say, a domain-expert-backed model or even a functionally evaluated model (i.e. accuracy/precision/recall/f1-score etc. are considerably good)?</p>
<p>This is specifically for interpretability of outcomes, i.e. a task where non-expert humans outperform machines.</p> <p>There is a problem with collecting labels in machine learning: labelling datasets is very expensive and time-consuming (due to the size of datasets &amp; the cost of experts' time).</p> <p>So it's less about trust; it's more about practicality. Consider hiring a data scientist to develop an algorithm to automatically label a dataset based on expert heuristics (e.g. <em>&quot;label the data as cancerous if it looks red&quot;</em>); it might take 6 months to collect data, plan, develop &amp; test - therefore for certain use-cases hiring 10 non-experts and telling them the heuristic might be cheaper and faster.</p> <p>The book uses the example <em>&quot;show a user different explanations and the human would choose the best&quot;</em>; in the context of radiology, it could be something like: <em>&quot;Look at the images of the patient, compare them to this dictionary of images and diagnoses, combine multiple sources and then report what the diagnosis is&quot;</em>.</p> <p>Of course, if you have an algorithm which outperforms non-experts, you might just want some expert labels to validate your algorithm, and forget the non-experts.</p>
70
model evaluation
BERT Model Evaluation Measure in terms of Syntax Correctness and Semantic Coherence
https://datascience.stackexchange.com/questions/63124/bert-model-evaluation-measure-in-terms-of-syntax-correctness-and-semantic-cohere
<p>For example I have an original sentence. The word barking corresponds to the word that is missing.</p> <pre><code>Original Sentence   : The dog is barking.
Incomplete Sentence : The dog is ___________.
</code></pre> <p>For example, using the BERT model, it predicts the word crying instead of the word barking. How will I measure the accuracy of the BERT model in terms of how syntactically correct and semantically coherent the predicted word is?</p> <p>(For instance, there are a lot of incomplete sentences, and the task is to evaluate BERT's accuracy based on these incomplete sentences.)</p> <p>In other words, how will I measure the distance, in terms of semantics according to the model, between the two words <code>barking</code> and <code>crying</code>?</p> <p>Please help.</p>
<p>You have just stumbled over one of the big problems in the NLP field: finding the perfect metric.</p> <hr> <p>Most traditional metrics (BLEU, ROUGE, ...) simply do not take into account the distance in terms of semantics between <code>barking</code> and <code>crying</code>.</p> <p>So according to these metrics, <code>The dog is crying</code> is as similar to the reference, <code>the dog is barking</code>, as <code>The dog is salmon</code> is.<br> From a human viewpoint, this is not correct: the first sentence is closer to the reference, because, for example, the second sentence makes no sense.</p> <hr> <p>People have recently tried to provide better metrics in this sense. You might be interested in <a href="https://github.com/Tiiiger/bert_score" rel="nofollow noreferrer">BERT score</a>.</p> <p>The idea is simply to use a BERT model (which has been pretrained and therefore has some linguistic knowledge) to compute how similar 2 sentences are.</p>
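<p>The core idea behind such embedding-based metrics can be illustrated with a toy sketch. The 3-dimensional vectors below are made up purely for illustration; a real BERTScore uses high-dimensional contextual embeddings from a pretrained model:</p>

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: "crying" sits closer to "barking" than "salmon" does.
embeddings = {
    "barking": [0.9, 0.1, 0.2],
    "crying":  [0.7, 0.3, 0.2],
    "salmon":  [0.1, 0.9, 0.8],
}

sim_crying = cosine_similarity(embeddings["barking"], embeddings["crying"])
sim_salmon = cosine_similarity(embeddings["barking"], embeddings["salmon"])
print(sim_crying > sim_salmon)  # the semantically closer word scores higher
```

<p>BERTScore then aggregates such token-level similarities over the candidate and reference sentences to produce a sentence-level score.</p>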
71
model evaluation
Restricting the output of a model didn&#39;t improve the loss value of the model evaluation
https://datascience.stackexchange.com/questions/39022/restricting-the-output-of-a-model-didnt-improve-the-loss-value-of-the-model-eva
<p>There is a deep model for prediction.</p> <p>The outputs are some numbers between 0 and 80. (In the dataset the outputs are 0-80.)</p> <p>The model's loss value is 70 and I would like to reduce it.</p> <p>I printed the outputs after evaluating the model on test values, and some of the predicted values are more than 80 or less than 0.</p> <p>I decided to set up the final layer to predict only in the 0-80 range during training, so I set a Lambda layer after the final Dense layer to clip output values.</p> <p>The code:</p> <pre><code>from keras import models
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Lambda
from keras import backend as K

def relu_advanced(x):
    return K.relu(x, max_value=80)

def createModel4():
    model = models.Sequential()
    model.add(Conv2D(256, (3, 3), activation='relu', input_shape=(320, 20, 1), padding='same'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    # model.add(Dense(5*320, activation='relu'))
    model.add(Dense(5*320))
    model.add(Lambda(relu_advanced))
    model.summary()
    return model
</code></pre> <p>I tested the model with and without relu_advanced and, unfortunately, the loss value increased with relu_advanced!</p> <p>Since there is no value greater than 80 or less than zero, I don't understand why the loss increased.</p> <p>Thank you</p>
<p>I suggest you normalise your labels so that it is scaled between 0 and 1, rather than 0 and 80. Once you have a trained model then multiply your output at the end. The network should find it easier to learn values between 0 and 1 (Andrew Ng's coursera course has a good lecture on this). </p> <p>Go back to using the standard ReLU: The problem with yours is that it cannot learn if it tries to output a value greater than 80, as there is no gradient with which to update the parameters.</p> <p>I would also consider putting in a much smaller Dense layer at the end (eg 10 nodes).</p>
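<p>The label-scaling suggestion can be sketched in plain Python (framework-agnostic; with Keras you would divide the label array before <code>fit</code> and multiply the predictions back up afterwards):</p>

```python
MAX_VALUE = 80.0  # known upper bound of the labels in this dataset

def scale_labels(y):
    """Map raw labels from [0, 80] to [0, 1] for training."""
    return [v / MAX_VALUE for v in y]

def unscale_predictions(y_hat):
    """Map network outputs in [0, 1] back to the original range."""
    return [v * MAX_VALUE for v in y_hat]

raw = [0.0, 40.0, 80.0]
scaled = scale_labels(raw)            # train on these
restored = unscale_predictions(scaled)  # report these
print(scaled, restored)
```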
72
model evaluation
Best metric to evaluate model probabilities
https://datascience.stackexchange.com/questions/110837/best-metric-to-evaluate-model-probabilities
<p>I'm trying to create an ML model for a binary classification problem with a balanced dataset, and I care mostly about probabilities. I tried searching the web and found only advice to use the AUC or log-loss scores. There is no advice to use the Brier score as an evaluation metric. Can I use the Brier score as an evaluation metric, or are there some pitfalls with it? As I understand it, if I use the log-loss score as the evaluation metric, the &quot;winning&quot; model will be the one whose probabilities are closer to 0-10% and 90-100%.</p>
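<p>For reference, the binary Brier score is simply the mean squared difference between the predicted probability and the 0/1 outcome, so there is nothing exotic about computing it; scikit-learn's <code>brier_score_loss</code> gives the same quantity. A minimal sketch:</p>

```python
def brier_score(y_true, y_prob):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; a perfectly calibrated and perfectly sharp model scores 0."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

y_true = [1, 0, 1, 1, 0]
y_prob = [0.9, 0.1, 0.8, 0.6, 0.3]
print(brier_score(y_true, y_prob))
```

<p>Unlike log loss, the Brier score is bounded (between 0 and 1) and does not explode on confident mistakes, which is one practical difference to keep in mind when comparing the two.</p>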
73
model evaluation
How is model evaluation and re-training done after deployment without ground truth labels?
https://datascience.stackexchange.com/questions/109553/how-is-model-evaluation-and-re-training-done-after-deployment-without-ground-tru
<p>Suppose I deployed a model after manually labelling the ground truth for my training data, because the use case is such that there's no way to get ground truth labels without humans. Once the model is deployed, if I want to evaluate how the model is doing on live data (which doesn't come with ground truth labels), how can I do that without sampling some of that live data and manually giving it ground truth labels? And then, once its performance on that labelled live-data sample has been evaluated, that sample would become the training set for a new model. That's the only approach I can think of when ground truth can't be discerned without human intervention, and it doesn't seem very automated to me.</p> <p>Is there any other way to do this without the manual labelling?</p>
<p>In your scenario there's no other way: the only way to properly evaluate on some live data is to have a sample of live data annotated.</p> <p>However there are a few automatic things that can be done. Even though it's not a full evaluation it can give some indications about whether the model is doing as expected:</p> <ul> <li>If the model is capable of measuring the likelihood of the data it receives, a decrease in this value means that the model has some difficulties.</li> <li>Measuring how similar the distribution of the features and (predicted) target is between the training data and the live data. If very different, the model is likely to make mistakes.</li> <li>Measuring any difference in the probabilities predicted by the model. Lower probabilities tend to correspond to lower confidence by the model.</li> </ul> <p>There are probably other methods, in particular based on the specific task.</p>
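<p>The second bullet (comparing feature distributions between training and live data) is often implemented with a drift statistic such as the Population Stability Index. A minimal sketch, assuming the feature has already been binned into histograms:</p>

```python
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    """Population Stability Index between two binned distributions.
    0 means identical; larger values mean a bigger shift."""
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, eps)  # eps guards against empty bins
        a_pct = max(a / a_total, eps)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

train_bins = [100, 200, 400, 200, 100]  # feature histogram on training data
live_bins = [120, 210, 380, 190, 100]   # same bins on live data
print(psi(train_bins, live_bins))
```

<p>A commonly cited rule of thumb is that a PSI below 0.1 indicates a stable distribution, 0.1 to 0.25 a moderate shift, and above 0.25 a major shift, though these cut-offs are conventions rather than theory.</p>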
74
model evaluation
What is the most accurate way of computing the evaluation time of a neural network model?
https://datascience.stackexchange.com/questions/129480/what-is-the-most-accurate-way-of-computing-the-evaluation-time-of-a-neural-netwo
<p>I am training some neural networks in pytorch to use as an embedded surrogate model. Since I am testing various architectures, I want to compare the accuracy of each one, but I am also interested in evaluating the computational time of a single forward pass as accurately as possible. Below is the structure I have been currently using, but I wonder if it can be done better:</p> <pre><code>import torch
from time import perf_counter

x = ...      # input tensor (n_samples, n_input_features)
model = ...  # trained pytorch model

times = []   # empty list to hold evaluation times

# Warm up pytorch:
_ = model(x)

# Timing run:
for i in range(n_samples):
    start = perf_counter()
    with torch.no_grad():
        y_hat = model(x[i])
    end = perf_counter()
    times.append(end - start)

avg_time = sum(times) / n_samples  # average time per run
</code></pre> <p>The reason I evaluate each sample individually in a loop is that in the embedded surrogate model, the model will receive a single set of inputs at a time. This approach seems more applicable in my case, especially to avoid parallel computation with CUDA or MPS for the whole set of samples in x.</p> <p>I have a few questions regarding this:</p> <ol> <li><p>Can the current structure of my code be improved to maximize the accuracy of the timings?</p> </li> <li><p>If I have the device of the model and tensors set to MPS, is there a benefit to setting it to CPU when evaluating computation time?</p> </li> <li><p>Wouldn't it make sense to confine the evaluation of the model to a specific thread in the CPU to maximize consistency in my readings? Is that even possible?</p> </li> <li><p>Any other thoughts or suggestions you may have on this?</p> </li> </ol> <p>Thanks in advance for the help!</p>
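<p>One measurement-side improvement, independent of PyTorch: single-pass timings are noisy, so the median and a high percentile are usually more informative than the mean. A small stdlib helper, assuming <code>times</code> has been collected as in the snippet above:</p>

```python
import statistics

def summarize_times(times):
    """Robust summary statistics for a list of per-call timings (seconds)."""
    s = sorted(times)
    return {
        "median": statistics.median(s),
        "mean": statistics.mean(s),
        "p95": s[min(len(s) - 1, int(0.95 * len(s)))],
        "min": s[0],
    }

times = [0.0031, 0.0030, 0.0029, 0.0120, 0.0030]  # one outlier (e.g. OS scheduling)
summary = summarize_times(times)
print(summary)
```

<p>The median is barely affected by occasional outliers, while the mean is inflated by them; reporting both makes the noise visible.</p>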
75
model evaluation
How to interprete the feature significance and the evaluation metrics in classification predictive model?
https://datascience.stackexchange.com/questions/98038/how-to-interprete-the-feature-significance-and-the-evaluation-metrics-in-classif
<p>Consider an experiment to predict the Google-Play apps rating using a Random-Forest classifier with scikit-learn in Python. Three attributes 'Free', 'Size' and 'Category' are utilized to predict the apps rating. 'Rating' (the label) is not a continuous value; instead, it is grouped into two classes 0 and 1, where 0 is a rating below 4 stars and 1 is a rating above 4. Through Random-Forest, the feature significance of all three predictive attributes and the F1 evaluation of the model are also calculated.</p> <p>Firstly, lets suppose model omits the 'Size' as most significant feature, so what is implied here, having larger size or lower size of an app contribute to the rating? What If there is no ascending or descending order in the attribute, for instance if the 'Category' is most significant, then what category contributed the most?</p> <p>Secondly, scikit-learn calculates the evaluation for each individual label. As there are two labels, 0 and 1, the model yields two separate F1 scores for these labels and also the overall F1 score of the entire model. Now consider that, for label 0, F1 is 30% and for label 1 it is 75%, whereas the overall F1 of the entire model is 55%. In general, F1 and all other evaluation metrics should be around 90% for a good prediction model. But suppose a scenario where an F1 above 70% is considered good. Can I claim that the above-mentioned attributes are good predictors of label 1, as its individual F1 is 75%, but not of label 0, because it has only 30% F1 individually? If yes, does it mean the predictive model can't find any relation between the attributes and label 0, but finds a considerable relation with label 1? Or do I have to consider the overall F1, which is 55%, and claim that there exists very little correlation between 'Rating' and the predictive attributes for all the labels; in conclusion, not a good prediction model in its entirety for all the labels?</p>
<blockquote> <p><em>Firstly, lets suppose model omits the 'Size' as most significant feature, so what is implied here, having larger size or lower size of an app contribute to the rating? What If there is no ascending or descending order in the attribute, for instance, if the 'Category' is most significant, then what category contributed the most?</em></p> </blockquote> <p>A Decision Tree splits the space based on a feature value. <br>High feature importance implies that <em>when the model used that feature to split the space, the splits were cleaner [better Gini drop]</em>. <br>At this point, you can't know the value-specific correlation, <em>i.e. whether RED color is causing the high rating or GREEN color. It's just the color</em>. <br>If you one-hot encode (OHE) your data, then each value will become a feature and you may get the required significance value.</p> <blockquote> <p>Can I claim that these above-mentioned attributes are good predictors of label 1 as its individual F1 is 75% but not for label 0, because it has only 30% F1 individually.</p> </blockquote> <p>Yes, it should imply that, <em>i.e. splits on the feature separate out Label_1</em>. Though I believe a &quot;confusion matrix&quot; is a better view for such analysis.</p>
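<p>The one-hot-encoding suggestion can be sketched in plain Python (a real pipeline would use <code>pandas.get_dummies</code> or scikit-learn's <code>OneHotEncoder</code>; the category names below are made up). After encoding, each category value becomes its own column and can receive its own importance score:</p>

```python
def one_hot_encode(values):
    """Turn a list of categorical values into per-value binary columns."""
    categories = sorted(set(values))
    return [{f"Category={c}": int(v == c) for c in categories} for v in values]

rows = one_hot_encode(["GAME", "TOOLS", "GAME", "SOCIAL"])
print(rows[0])
```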
76
model evaluation
Evaluating Logistic Regression Model in Tensorflow
https://datascience.stackexchange.com/questions/19850/evaluating-logistic-regression-model-in-tensorflow
<p>Following <a href="https://github.com/chiphuyen/tf-stanford-tutorials/blob/master/examples/03_logistic_regression_mnist_sol.py" rel="nofollow noreferrer">this</a> tutorial, I have a doubt about the evaluation part in:</p> <pre><code># test the model
n_batches = int(mnist.test.num_examples/batch_size)
total_correct_preds = 0

for i in range(n_batches):
    X_batch, Y_batch = mnist.test.next_batch(batch_size)
    _, loss_batch, logits_batch = sess.run([optimizer, loss, logits], feed_dict={X: X_batch, Y: Y_batch})
    preds = tf.nn.softmax(logits_batch)
    correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(Y_batch, 1))
    accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32))  # need numpy.count_nonzero(boolarr) :(
    total_correct_preds += sess.run(accuracy)

print 'Accuracy {0}'.format(total_correct_preds/mnist.test.num_examples)
</code></pre> <p>Note that this is done on the test set, so the goal is purely to obtain the accuracy using a previously trained model. However isn't calling the line:</p> <pre><code>_, loss_batch, logits_batch = sess.run([optimizer, loss, logits], feed_dict={X: X_batch, Y: Y_batch})
</code></pre> <p>equivalent to re-optimizing the model using the test data (and labels)? Shouldn't we avoid re-running the optimizer and loss and just compute the predictions?</p>
<p>I think you are correct. The line should be</p> <pre><code>loss_batch, logits_batch = sess.run([loss, logits], feed_dict={X: X_batch, Y:Y_batch}) </code></pre>
77
model evaluation
Evaluate prediction from multiple classification model
https://datascience.stackexchange.com/questions/36312/evaluate-prediction-from-multiple-classification-model
<p>Given that I have data containing images of oranges, apples and pineapples, I want to classify them depending on a set of features.</p> <p>Assume that I have completed the model and it is ready for prediction.</p> <p>My questions are:</p> <p>How can I output a score for every category the model predicted, like this?</p> <pre><code>Image 1: 60% apple, 20% orange, 1% pineapple
It implies that the image is an apple.

Image 2: 40% apple, 60% orange, 10% pineapple
It implies that the image is an orange.
</code></pre> <p>Are there any libraries I can use for this?</p> <p>Does it depend on the model I am using? If yes, which models implement this kind of evaluation?</p>
<p>Based on the description of your question, it seems that you want the probability of each class as the outcome (in <code>multiclass-classification</code>).</p> <p>I would suggest you use <code>XGBoost</code> to get output in the form you require. By setting the value of the <code>objective</code> parameter to <code>multi:softprob</code>, you can get the predicted probability of each and every class. If you set the value of the objective parameter to <code>multi:softmax</code>, then you will only get the class with the maximum probability among the classes.</p> <p>Here is an example for your reference. You can inspect the output by printing <code>y_test_preds</code>.</p> <pre><code>import xgboost as xgb

bst = xgb.train(params, dtrain, num_boost_round=params['num_rounds'])
y_test_preds = bst.predict(dtest)
</code></pre> <p>You can set the parameters for XGBoost in the following way. I would strongly suggest you modify these parameters (except <code>objective</code>) based on your data and requirements.</p> <pre><code>params = {
    'objective' : 'multi:softprob',
    'max_depth' : 6,
    'silent' : 1,
    'eta' : 0.4,
    'num_class' : 3,
    'n_estimators' : 500,
    'learning_rate' : 0.1,
    'num_rounds' : 15
}
</code></pre> <p>Note: In <code>XGBoost</code>'s native API, you have to use a <code>DMatrix</code> instead of a <code>DataFrame</code>. You can get a <code>DMatrix</code> from a <code>DataFrame</code> this way:</p> <pre><code>dtrain = xgb.DMatrix(X_train.values, label = y_train.values)
dtest = xgb.DMatrix(X_test.values, label = y_test.values)
</code></pre> <p>If you are new to <code>XGBoost</code>, then I would recommend you go through this link once: https://xgboost.readthedocs.io/en/latest/get_started.html</p>
78
model evaluation
Chi-square as evaluation metrics for nonlinear machine learning regression models
https://datascience.stackexchange.com/questions/36550/chi-square-as-evaluation-metrics-for-nonlinear-machine-learning-regression-model
<p>I am using machine learning models to predict an ordinal variable (values: 1,2,3,4, and 5) using 7 different features. I posed this as a regression problem, so the final outputs of a model are continuous variables. So an evaluation box plot looks like this: <a href="https://i.sstatic.net/FBXw2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FBXw2.jpg" alt="Ground truth vs. predicted values" /></a></p> <p>I experiment with both linear (linear regression, linear SVMs) and nonlinear models (SVMs with RBF, Random forest, Gradient boosting machines ). The models are trained using cross-validation (~1600 samples), and 25% of the dataset is used for testing (~540 samples). I am using R-squared and Root Mean Square Error (RSME) to evaluate the models on test samples. <strong>I am interested in finding an evaluation measure to compare linear models to nonlinear ones.</strong></p> <p>This is done for scientific research. It was pointed out that R-square might not be an appropriate measure for nonlinear models, and that the Chi-Square test would be a better measure for goodness of fit.</p> <p>The problem is, I am not sure what is the best way to do it. When I browse Chi-square as the goodness of fit, I only get examples where the Chi-square test is used to see whether some categorical samples fit a theoretical expectation, such as <a href="http://www.biostathandbook.com/chigof.html" rel="nofollow noreferrer">here</a>. So here are my considerations/questions:</p> <ol> <li><p>One way I could think of is to categorize predicted (continuous) values into bins, and compare predicted distribution to the ground truth distribution using the Chi-Square test. But that doesn't make much sense, i.e. 
we have a machine learning model that perfectly predicts ground truth values 2, 3, and 4, but predicts values of 5 as 1 and values of 1 as 5 - the Chi-Square test that I propose here could still fail to reject the null hypothesis (the binned distributions can look similar), although the model is mispredicting 2 out of 5 values.</p> </li> <li><p>As referred to in a tutorial from <a href="http://maxwell.ucsc.edu/%7Edrip/133/ch4.pdf" rel="nofollow noreferrer">USC</a>, I could use formula (1) to compute the Chi-Square value, where the experimentally measured quantities (xi) are my ground truth values, and the hypothesized values (mui) are my predicted values. My question is, what is the variance? If we treat each value 1, 2, 3, 4, and 5 as a distinct category, then the variance of the ground truth within each category is equal to zero. Also, how does one compute the degrees of freedom (N-r)?</p> </li> <li><p>Related to the statement <strong>I am interested in finding an evaluation measure to compare linear models to nonlinear</strong>: is the Chi-Square test the best (or even a good) choice? From what I've seen so far in machine learning competitions for regression tasks, either MSE or RMSE is used for evaluation.</p> </li> </ol>
<p>Use your test data to compare the predictive performance of each model.</p> <p>In R you could do this like:</p> <pre><code>linear.predictions &lt;- predict(linear.model, newdata = test.data)
nonlinear.predictions &lt;- predict(nonlinear.model, newdata = test.data)

linear.percent.difference &lt;- (test.data$TARGET_VARIABLE - linear.predictions) / test.data$TARGET_VARIABLE
nonlinear.percent.difference &lt;- (test.data$TARGET_VARIABLE - nonlinear.predictions) / test.data$TARGET_VARIABLE

linear.grade &lt;- mean(linear.percent.difference)
nonlinear.grade &lt;- mean(nonlinear.percent.difference)
</code></pre> <p>This is a pretty simple way to do it, but it is one that works for me and is easy to understand, especially if your audience is going to eye-glaze as soon as you say "Chi-square..." Get creative!</p>
79
model evaluation
Evaluation method for multi-class classification problem modeled as binary classification problem
https://datascience.stackexchange.com/questions/63091/evaluation-method-for-multi-class-classification-problem-modeled-as-binary-class
<p>I should mention that even though I have some basic knowledge of ML, this is the first big ML project I am working on, and for the proposal of my research project I need to suggest an evaluation metric.</p> <p>The problem is a multiclass (16 classes) classification problem where one data point can be classified into multiple classes (not ranking based, though). I plan to model it as a binary classification problem for each class, but I was not able to find a proper evaluation metric for this setup. So, first of all, should I evaluate individual performance for each class (how well class A classification is working), should I go for a general evaluation (this data point belongs to A, B, C but in the end is classified as A and B only), or both? Second, what kind of metrics can I have a look at? Finally, I haven't started working on the data yet, but I expect an unbalanced distribution for my classes. Would it affect my results?</p>
<p>Unbalanced data will definitely be a problem and should be addressed. In particular, "accuracy" will no longer be a dependable metric if you decide to use the unbalanced data directly, so you should use other metrics that are more reliable for such scenarios; which one is best can also depend on the data distribution you have. <a href="https://towardsdatascience.com/what-metrics-should-we-use-on-imbalanced-data-set-precision-recall-roc-e2e79252aeba" rel="nofollow noreferrer">Here</a> is a discussion of how each metric performs in different situations.</p> <p>Apart from the choice of a proper metric, there are other ways to deal with imbalanced data. <a href="https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/" rel="nofollow noreferrer">Here</a> you can find some of the methods, but probably the best-known way to deal with it is <a href="https://en.wikipedia.org/wiki/Oversampling_and_undersampling_in_data_analysis" rel="nofollow noreferrer">oversampling and undersampling</a>, in which you equate the number of samples in each class.</p> <p>Regarding individual evaluation vs general evaluation, this probably depends on your own preference and the classes you have in the problem. But individual evaluation will be useful at least to fine-tune your model: you would not want a model which performs very well on some set of samples but very badly on some other set. Such a situation might indicate that the features used are not as useful for every class, and that you need to get more data.</p> <p>And finally, relatively small differences in the sizes of the classes might not be a problem and can be ignored, but this might be more of a personal choice. What are the relative sizes of your classes?</p>
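<p>Per-class evaluation can be computed directly from counts, treating each class one-vs-rest; scikit-learn's <code>classification_report</code> reports the same per-class numbers. A minimal sketch with toy labels:</p>

```python
def precision_recall(y_true, y_pred, positive_label):
    """One-vs-rest precision and recall for a single class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive_label and p == positive_label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive_label and p == positive_label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive_label and p != positive_label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = ["A", "A", "B", "C", "A", "B"]
y_pred = ["A", "B", "B", "C", "A", "A"]
print(precision_recall(y_true, y_pred, "A"))
```

<p>Averaging these per-class values with equal weight gives the macro-averaged score, which is less dominated by the majority classes than accuracy.</p>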
80
model evaluation
Modelling on one Population and Evaluating on another Population
https://datascience.stackexchange.com/questions/887/modelling-on-one-population-and-evaluating-on-another-population
<p>I am currently on a project that will build a model (train and test) on Client-side Web data, but evaluate this model on Server-side Web data. Unfortunately building the model on Server-side data is not an option, nor is it an option to evaluate this model on Client-side data.</p> <p>This model will be based on metrics collected on specific visitors. This is a real-time system that will be calculating a likelihood based on metrics collected while visitors browse the website.</p> <p>I am looking for approaches to ensure the highest possible accuracy on the model evaluation.</p> <p>So far I have the following ideas,</p> <ol> <li>Clean the Server-side data by removing webpages that are never seen Client-side.</li> <li>Collect additional Server-side data to make the Server-side data more closely resemble Client-side data.</li> <li>Collect data on the Client and send this data to the Server. This is possible and may be the best solution, but is currently undesirable. </li> <li>Build one or more models that estimate Client-side Visitor metrics from Server-side Visitor metrics and use these estimates in the Likelihood model.</li> </ol> <p>Any other thoughts on evaluating over one Population while training (and testing) on another Population?</p>
<p>Consider whether the users you are getting client-side data from come from the same population as the users you would get server-side data from. If they do, then you aren't really training on one population and applying to another. The main difference is that the client-side data happened in the past (by necessity, unless you are constantly refitting your model) and the server-side data will come in the future.</p> <p>Let's reformulate the question in terms of models rather than web clients and servers.</p> <p>You are fitting a model on one dataset and applying it to another. That is the classic use of predictive modeling/machine learning. Models use features from the data to make estimates of some parameter or parameters. Once you have a fitted (and tested) model, all that you need is the same set of features to feed into the model to get your estimates.</p> <p>Just make sure to model on a set of features (aka variables) that are available on both the client side and the server side. If that isn't possible, ask that question separately.</p>
81
model evaluation
Model Performance using Precision as evaluation metric
https://datascience.stackexchange.com/questions/47804/model-performance-using-precision-as-evaluation-metric
<p>I am dealing with an imbalanced class with the following distribution (total dataset size: 10763 X 20):</p> <p>0 : 91%</p> <p>1 : 9%</p> <p>To build a model on this dataset with class imbalance, I have compared results using</p> <p>1) SMOTE and</p> <p>2) Assigning more weight to the minority class when applying fit</p> <p>and the latter seems to be working better.</p> <p>After experimenting with Decision Tree, LR, RF, SVM (poly and rbf), I am now using an XGBoost classifier, which gives me the below classification results (these are the best numbers I've got so far):</p> <p><a href="https://i.sstatic.net/WOnsD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WOnsD.png" alt="enter image description here"></a></p> <p>The business problem I'm trying to solve requires the model to have high precision, as the cost associated with that is high.</p> <p>Here's my XGBClassifier's code:</p> <pre><code>xgb3 = XGBClassifier(
    learning_rate=0.01,
    n_estimators=2000,
    max_depth=15,
    min_child_weight=6,
    gamma=0.4,
    subsample=0.8,
    colsample_bytree=0.8,
    reg_alpha=0.005,
    objective='binary:logistic',
    nthread=4,
    scale_pos_weight=10,
    eval_metrics='logloss',
    seed=27)

model_fit(xgb3, X_train, y_train, X_test, y_test)
</code></pre> <p>And here's the code for model_fit:</p> <pre><code>def model_fit(algorithm, X_train, y_train, X_test, y_test, cv_folds=5, useTrainCV="True", early_stopping_rounds=50):
    if useTrainCV:
        xgb_param = algorithm.get_xgb_params()
        xgbtrain = xgb.DMatrix(X_train, label=y_train)
        xgbtest = xgb.DMatrix(X_test)
        cvresult = xgb.cv(xgb_param, xgbtrain, num_boost_round=algorithm.get_params()['n_estimators'],
                          nfold=cv_folds, metrics='logloss', early_stopping_rounds=early_stopping_rounds)
        algorithm.set_params(n_estimators=cvresult.shape[0])

    algorithm.fit(X_train, y_train, eval_metric='logloss')
    y_pred = algorithm.predict(X_test)

    cm = confusion_matrix(y_test, y_pred)
    print(cm)
    print(classification_report(y_test, y_pred))
</code></pre> <p>Can anyone tell me how I can
increase the precision of the model. I've tried everything I know of. I'll really appreciate any help here.</p>
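<p>One general lever that directly trades recall for precision, independent of the classifier, is the decision threshold: instead of the default 0.5 cut-off on the predicted probabilities, a higher threshold can be chosen on a validation set. A minimal sketch with made-up probabilities:</p>

```python
def precision_at_threshold(y_true, y_prob, threshold):
    """Precision of the positive class when predicting 1 iff prob >= threshold."""
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    predicted_pos = sum(1 for p in y_prob if p >= threshold)
    return tp / predicted_pos if predicted_pos else 0.0

y_true = [1, 0, 1, 0, 1, 0, 0, 1]
y_prob = [0.95, 0.80, 0.75, 0.55, 0.90, 0.30, 0.60, 0.40]

# Raising the threshold makes positive predictions rarer but more reliable.
for t in (0.5, 0.7, 0.85):
    print(t, precision_at_threshold(y_true, y_prob, t))
```

<p>The price of a higher threshold is lower recall (fewer positives caught), so the threshold should be tuned against the actual business cost of each error type.</p>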
82
model evaluation
Cost Function for evaluating a Regression Model
https://datascience.stackexchange.com/questions/24950/cost-function-for-evaluating-a-regression-model
<p>There are several "classical" ways to quantify the quality of (any!) regression models such as the RMSE, MSE, explained variance, r2, etc...</p> <p>These metrics however do not take "costs" into account, for example, for me it is worse to under-predict a value (Real: 0.5, Predicted: 0.4) than to over-predict it (Real: 0.5, Predicted: 0.6). </p> <p>How can I model such costs into an evaluation function? I just need a first idea to start with and will welcome any suggestions.</p>
<p>A loss function and cost function are the same thing. As you intuit, classical regression treats loss/cost as symmetric, which is not always what you want. In classification tasks, you can make an asymmetric <em>loss matrix</em>. You can do a similar thing with regression if you solve it with gradient descent, but the ordinary least squares has symmetric loss baked in. </p> <p>So I would consider either (1) using a numeric optimization library like sklearn or tensorflow to explicitly define the regression parameters you want to estimate, write your own custom loss function, and then do parameter estimation via gradient descent, or (2) finding a software package that allows for asymmetric loss, for example see <a href="https://stats.stackexchange.com/questions/37955/how-to-design-and-implement-an-asymmetric-loss-function-for-regression">this discussion</a>.</p>
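<p>A minimal sketch of option (1): fitting a one-parameter model y ≈ w·x by gradient descent under an asymmetric squared loss that penalizes under-prediction more heavily. The 3.0/1.0 weights and the data are arbitrary illustration values:</p>

```python
def asymmetric_grad(w, xs, ys, under_weight=3.0, over_weight=1.0):
    """Gradient of the mean asymmetric squared loss for the model y_hat = w * x.
    Under-predictions (y_hat < y) are penalized under_weight times as hard."""
    g = 0.0
    for x, y in zip(xs, ys):
        err = w * x - y  # err > 0 means over-prediction
        weight = over_weight if err >= 0 else under_weight
        g += 2.0 * weight * err * x
    return g / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.9]  # roughly y = x

w = 0.0
for _ in range(2000):  # plain gradient descent
    w -= 0.01 * asymmetric_grad(w, xs, ys)

print(w)
```

<p>With a symmetric loss the fitted slope here would be about 1.00; the asymmetric penalty pulls it slightly upward, which is exactly the "prefer over-prediction" behaviour asked for.</p>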
83
model evaluation
Do we need the testing data to evaluate the Model Performance - Regression
https://datascience.stackexchange.com/questions/36089/do-we-need-the-testing-data-to-evaluate-the-model-performance-regression
<p>I have been working with classification modelling in R and Python for the last 6 months. With classification, the evaluation of the model was based on precision, recall, Hamming loss, accuracy, etc. These classification models needed the testing data to calculate these evaluation metrics.</p> <p>Is it the same case with regression, when we calculate SSR, SSE, RMSE and other evaluation metrics?</p> <p>From an R point of view, <code>summary(LmRegressionModel)</code> gives these evaluation metric figures one way or another. Why do we need the testing data to evaluate the model here in regression?</p>
<p>The ML community has many more metrics than the ones you listed, both for regression and classification. But the principle remains the same: calculating the metrics on the training data alone gives an overly optimistic picture and would likely hide overfitting, which is why you still need held-out test data for regression.</p>
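<p>The point can be demonstrated with a deliberately overfit "model" that memorizes its training pairs: the training-set metric looks perfect while the held-out metric reveals the true error (toy numbers):</p>

```python
def rmse(y_true, y_pred):
    """Root mean squared error between two equal-length sequences."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

# A "model" that memorizes its training pairs exactly (extreme overfitting).
train = {1.0: 2.1, 2.0: 3.9, 3.0: 6.2}

def predict(x):
    return train.get(x, 2 * x)  # falls back to a rough rule on unseen inputs

train_rmse = rmse(list(train.values()), [predict(x) for x in train])
test_rmse = rmse([8.3, 9.8], [predict(x) for x in (4.0, 5.0)])
print(train_rmse, test_rmse)  # training error is zero; test error is not
```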
84
model evaluation
Why my sentiment analysis model is overfitting?
https://datascience.stackexchange.com/questions/121745/why-my-sentiment-analysis-model-is-overfitting
<p>The task is to predict sentiment from 1 to 10 based on Russian reviews. The training data size is 20000 records, of which 1000 were preserved as a validation set. The preprocessing steps included punctuation removal, digit removal, Latin character removal, stopword removal, and lemmatization. Since the data was imbalanced, I decided to downsample it. After that, TF-IDF vectorization was applied. At the end, I got this training dataset:</p> <p><a href="https://i.sstatic.net/EWo0m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EWo0m.png" alt="enter image description here" /></a></p> <p>The next step was the validation set TF-IDF transformation:</p> <p><a href="https://i.sstatic.net/HAv7x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HAv7x.png" alt="enter image description here" /></a></p> <p>As a classifier model, I chose MultinomialNB (I read it is useful for text classification tasks and sparse data). The training data fit was pretty quick:</p> <pre><code># TODO: create a Multinomial Naive Bayes Classificator
clf = MultinomialNB(force_alpha=True)
clf.fit(X_res, y_res.values.ravel())
</code></pre> <p>But the problem was in the model evaluation part:</p> <pre><code># TODO: model evaluation
print(clf.score(X_res, y_res.values.ravel()))
print(clf.score(X_val, y_val.values.ravel()))

y_pred = clf.predict(X_val)
print(precision_recall_fscore_support(y_val, y_pred, average='macro'))
</code></pre> <p>Output:</p> <pre><code>0.9352409638554217
0.222
(0.17081898127154763, 0.1893033502842826, 0.16303596541199034, None)
</code></pre> <p>It is obvious that the model is overfitting, but what do I do? I tried SVC, KNeighborsClassifier, DecisionTreeClassifier, RandomForestClassifier, and GaussianNB, but everything remained the same. I tried to play around with the MultinomialNB hyperparameter <code>alpha</code>, but the <code>force_alpha=True</code> option is the best so far.</p>
<p>There are several possible reasons for the overfitting, some of which are:</p> <p>1.) Check how the data is scaled.</p> <p>2.) You have not mentioned which parameter values you selected for the TF-IDF vectorizer; some of them can help reduce overfitting. <code>ngram_range</code> and <code>max_features</code> are two you can experiment with.</p> <p>3.) Make sure you are using <code>fit_transform</code> on the train set only and never on the test set, for both TF-IDF and scaling. Use only <code>transform</code> for the test set.</p> <p>4.) Try to tune the hyperparameters of the other models, such as <code>RandomForest</code> and <code>SVC</code>.</p> <p>5.) Try other word-embedding techniques such as <code>Word2Vec</code>, <code>GloVe</code> or <code>FastText</code>, as they capture word context as well, as opposed to just word frequency (which is all TF-IDF captures).</p> <p>6.) Try different models. You are testing only 4-5 models when in fact there are many more classification models out there; try as many as you can to see which one gives the best result.</p> <p>7.) Last but not least, increase the data size. Since you are downsampling the data (by how much is not stated), this may also be a factor in the overfitting.</p> <p>Try to implement all of the above points and see whether the results improve.</p> <p><strong>Cheers!</strong></p>
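To make point 3 concrete, here is a minimal sketch of the leak-free TF-IDF pattern with scikit-learn (the tiny corpus below is invented purely for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented placeholder corpus standing in for the preprocessed reviews
train_texts = ["good movie", "bad movie", "great plot"]
val_texts = ["good plot"]

vectorizer = TfidfVectorizer()
# Learn vocabulary and IDF weights from the training set only
X_train = vectorizer.fit_transform(train_texts)
# Reuse the fitted vocabulary on the validation set: transform, never fit_transform
X_val = vectorizer.transform(val_texts)

# Both matrices share one column per training-set term, so a model trained
# on X_train can score X_val directly
assert X_train.shape[1] == X_val.shape[1]
```

Calling `fit_transform` on the validation set instead would build a different, inconsistent vocabulary (and fitting the vectorizer on combined train+validation text is a mild form of leakage).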
85
model evaluation
What is a suitable loss function and evaluation metric for a classification model with large number of unbalanced target classes?
https://datascience.stackexchange.com/questions/40946/what-is-a-suitable-loss-function-and-evaluation-metric-for-a-classification-mode
<p>I am building a multiclass classifier to predict the "Intent" of a question. There are some 100 classes in the target variable and each target class contains an unequal proportion of observations/questions varying from 3 % to 40 %.</p> <p><strong>Questions</strong></p> <ol> <li>What would be a good evaluation metric for this classifier? </li> <li>What is the best proxy loss function for optimizing the suggested evaluation metric?</li> </ol> <p>EDIT: This is not a ranking problem. The model should predict only 1 target class. The cost of misclassification is the same for each target class.</p>
<p>You could look at <code>sensitivity</code> and <code>specificity</code>. They can be combined effectively to provide either the basis for a correct classification or the basis for an exclusionary classification within each class. That said, if you are looking for a single measure for the model as a whole, you'll probably want some sort of weighted average of the per-class <code>sensitivity</code> scores.</p> <p>Note that <code>sensitivity</code> is the same calculation as <code>recall</code>, but <code>specificity</code> is not the same as <code>precision</code>.</p> <p>You may also need several models that specialize in different areas or, perhaps as a worst-case scenario, a one-vs-the-rest ensemble of models to get the best results. This might help if, as <strong>anymous.asker</strong> suggested, you can introduce some cost around positive or negative outcomes.</p> <p>HTH</p>
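For illustration, per-class sensitivity and specificity (and a support-weighted average) can be computed directly from a confusion matrix; the matrix below is invented:

```python
import numpy as np

# Toy 3-class confusion matrix: rows = true class, columns = predicted class
cm = np.array([[50,  2,  3],
               [ 4, 30,  6],
               [ 5,  1, 20]])

tp = np.diag(cm)
fn = cm.sum(axis=1) - tp        # members of the class that were missed
fp = cm.sum(axis=0) - tp        # other classes predicted as this class
tn = cm.sum() - (tp + fn + fp)

sensitivity = tp / (tp + fn)    # per-class recall / true-positive rate
specificity = tn / (tn + fp)    # per-class true-negative rate

# Weight per-class sensitivity by class support for a single model-level score
support = cm.sum(axis=1)
weighted_sensitivity = np.average(sensitivity, weights=support)
print(weighted_sensitivity)
```

With this support weighting, the weighted sensitivity coincides with overall accuracy; an unweighted (macro) average instead gives small classes equal say.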
86
model evaluation
Need term or method name for evaluation of CNN without ground truth using e.g. a regression model
https://datascience.stackexchange.com/questions/111330/need-term-or-method-name-for-evaluation-of-cnn-without-ground-truth-using-e-g-a
<p>I have the following problem: I have trained a CNN and I can evaluate the network in-sample. I want to use the trained model for class prediction on images for which I have no ground truth. However, there are other features referenced to these images that I can use in a regression model, along with the predicted labels, to predict Y. The only way to somehow evaluate the CNN is to infer whether the predicted labels have an effect on Y, e.g. to evaluate the significance of the variable with the predicted classes, or the performance of the entire regression model.</p> <p>Since the regression is interpretable, I think the approach can be assigned to interpretable AI (@Nikos M. thank you). Unfortunately, I cannot assign the approach to any method/taxonomy of interpretable AI I have read so far. The regression model serves in my example as a kind of surrogate model. It is not a global surrogate because I do not use the predictions from the black box, but use the trained CNN model to predict new unknown images in the &quot;surrogate&quot; model.</p> <p>Can anyone tell me if this kind of evaluation method for machine learning models has already been described in the community, and what it is called?</p> <p>Tnx</p>
<p>What you ask for is related to what is termed <a href="https://en.wikipedia.org/wiki/Explainable_artificial_intelligence" rel="nofollow noreferrer">explainable AI</a> (especially for deep models, like CNNs). Explainable AI methods try to provide (quantitative) insight into why the model makes this or that prediction, so you can search this type of literature for approaches.</p> <p>Effectively, what you do with the regression model is try to quantify how this or that feature affects the prediction, which is itself an approach to explainable AI.</p>
87
model evaluation
Multiple models have extreme differences during evaluation
https://datascience.stackexchange.com/questions/102911/multiple-models-have-extreme-differences-during-evaluation
<p>My dataset has about 100k entries, 6 features, and the label is simple binary classification (about 65% zeros, 35% ones).</p> <p>When I train my dataset on different models: random forest, decision tree, extra trees, k-nearest neighbors, logistic regression, sgd, dense neural networks, etc, the evaluations differ GREATLY from model to model.</p> <ul> <li>tree classifiers: about 80% for both accuracy and precision</li> <li>k-nearest neighbors: 56% accuracy and 36% precision.</li> <li>linear svm: 65% accuracy and 0 positives guessed</li> <li>sgd : 63% accuracy and 2 true positives + 4 false positives</li> </ul> <p>I don't understand the difference in such disparity. Can someone explain why that happens? Am I doing something wrong?</p> <p>Also cannot find an answer to my question, so please link if someone asked it already</p> <p>Would really appreciate the help!</p>
<p>A few thoughts:</p> <ul> <li>The first thing I would check is whether the other models overfit. You could check this by comparing the performance on the training set against the test set.</li> <li>There is also something a bit strange about k-NN always predicting the majority class. This would happen only if any instance is always closer to more majority instances than minority instances; in that case there's something wrong with either the features or the distance measure.</li> <li>100k instances looks like a large dataset, but with only 6 features it's possible that the data contains many duplicates and/or near-duplicates which don't bring any information to the model. In general it's possible that the features are simply not good indicators, although in that case the decision tree models would fail as well.</li> <li>The better performance of the tree models points to something discontinuous in the features (btw you didn't mention whether they are numerical or categorical?). Decision trees and especially random forests can handle discontinuity, but models like logistic regression might have trouble with it.</li> </ul>
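A quick sketch of the duplicate-rows check from the third bullet (a toy array stands in for the real 100k × 6 matrix):

```python
import numpy as np

# Toy feature matrix; the real one would be 100000 x 6
X = np.array([[1, 2],
              [3, 4],
              [1, 2],
              [5, 6],
              [1, 2]])

# Count rows that are exact repeats of an earlier row
unique_rows = np.unique(X, axis=0)
n_duplicates = len(X) - len(unique_rows)
print(f"{n_duplicates} duplicate rows out of {len(X)}")
```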
88
model evaluation
keras custom metric function how to feed 2 model outputs to a single metric evaluation function
https://datascience.stackexchange.com/questions/54443/keras-custom-metric-function-how-to-feed-2-model-outputs-to-a-single-metric-eval
<p>I have a CNN object detection model which has two heads (outputs) with tensor names <code>'classification'</code> and <code>'regression'</code>.</p> <p>I want to define a metric function that <strong>accepts both outputs at the same time</strong>, so that it looks into the <strong>regression predictions</strong> to decide which indexes to retain, and uses those indexes to select tensors from the <strong>classification predictions</strong> and calculate some metric.</p> <p>My current metric function, defined with help from <a href="https://www.tensorflow.org/beta/guide/keras/training_and_evaluation#passing_data_to_multi-input_multi-output_models" rel="nofollow noreferrer">this link</a>:</p> <pre><code>from tensorflow.python.keras.metrics import MeanMetricWrapper

class Accuracy2(MeanMetricWrapper):

    def __init__(self, name='dummyAccuracy', dtype=None):
        super(Accuracy2, self).__init__(metric_calculator_func, name, dtype=dtype)
        self.true_positives = self.add_weight(name='lol', initializer='zeros')

    @classmethod
    def from_config(cls, config):
        if 'fn' in config:
            config.pop('fn')
        return super(Accuracy2, cls).from_config(config)

    def update_state(self, y_true, y_pred, sample_weight=None):
        print("==@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@===")
        print("Y-True {}".format(y_true))
        print("Y-Pred {}".format(y_pred))
        print("==@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@===")
        update_ops = [self.true_positives.assign_add(1.0)]
        return tf.group(update_ops)

    def result(self):
        return self.true_positives

    def reset_states(self):
        # The state of the metric will be reset at the start of each epoch.
        self.true_positives.assign(0.)
</code></pre> <p>which I call during model compilation as:</p> <pre><code>training_model.compile(
    loss={
        'regression': regression_loss(),
        'classification': classification_loss()
    },
    optimizer=keras.optimizers.Adam(lr=lr, clipnorm=0.001),
    metrics=[Accuracy2()]
)
</code></pre> <p>The screen log during <strong>tf.estimator.train_and_evaluate</strong> is:</p> <blockquote> <p>INFO:tensorflow:loss = 0.0075738616, step = 31 (11.941 sec)</p> <p>INFO:tensorflow:global_step/sec: 4.51218</p> <p>INFO:tensorflow:loss = 0.01015341, step = 36 (1.108 sec)</p> <p>INFO:tensorflow:Saving checkpoints for 40 into /tmp/tmpcla2n3gy/model.ckpt.</p> <p>INFO:tensorflow:Calling model_fn. ==@@@@@@@@@@@@@@@@@@@@@@@@@@@=== Tensor("IteratorGetNext:1", shape=(?, 120087, 5), dtype=float32, device=/device:CPU:0) Tensor("regression/concat:0", shape=(?, ?, 4), dtype=float32) ==@@@@@@@@@@@@@@@@@@@@@@@@@@@=== ==@@@@@@@@@@@@@@@@@@@@@@@@@@@=== Tensor("IteratorGetNext:2", shape=(?, 120087, 2), dtype=float32, device=/device:CPU:0) Tensor("classification/concat:0", shape=(?, ?, 1), dtype=float32) ==@@@@@@@@@@@@@@@@@@@@@@@@@@@===</p> <p>INFO:tensorflow:Done calling model_fn.</p> <p>INFO:tensorflow:Starting evaluation at 2019-06-24T08:20:35Z INFO:tensorflow:Graph was finalized.
2019-06-24 13:50:36.457345: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-06-24 13:50:36.457398: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-06-24 13:50:36.457419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-06-24 13:50:36.457425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-06-24 13:50:36.457539: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9855 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)</p> <p>INFO:tensorflow:Restoring parameters from /tmp/tmpcla2n3gy/model.ckpt-40</p> <p>INFO:tensorflow:Running local_init_op.</p> <p>INFO:tensorflow:Done running local_init_op.</p> <p>INFO:tensorflow:Evaluation [10/100]</p> <p>INFO:tensorflow:Evaluation [20/100]</p> <p>INFO:tensorflow:Evaluation [30/100]</p> <p>INFO:tensorflow:Evaluation [40/100]</p> <p>INFO:tensorflow:Evaluation [50/100]</p> <p>INFO:tensorflow:Evaluation [60/100]</p> <p>INFO:tensorflow:Evaluation [70/100]</p> <p>INFO:tensorflow:Evaluation [80/100]</p> <p>INFO:tensorflow:Evaluation [90/100]</p> <p>INFO:tensorflow:Evaluation [100/100]</p> <p>INFO:tensorflow:Finished evaluation at 2019-06-24-08:20:44</p> <p>INFO:tensorflow:Saving dict for global step 40: _focal = 0.0016880237, _smooth_l1 = 0.0, dummyAccuracy = 100.0, global_step = 40, loss = 0.0016880237</p> </blockquote> <p>This line :</p> <pre><code>==@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@=== Tensor("IteratorGetNext:1", shape=(?, 120087, 5), dtype=float32, device=/device:CPU:0) Tensor("regression/concat:0", shape=(?, ?, 4), dtype=float32) ==@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@=== ==@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@=== Tensor("IteratorGetNext:2", shape=(?, 120087, 2), dtype=float32, device=/device:CPU:0) 
Tensor("classification/concat:0", shape=(?, ?, 1), dtype=float32) ==@@@@@@@@@@@@@@@@@@@@@@@@@@@=== </code></pre> <p>shows that <code>Accuracy2()</code> is invoked twice: first for <strong>regression</strong>, then for <strong>classification</strong>. But I want it to be invoked once, with the <strong>regression</strong> and <strong>classification</strong> outputs fed into it together.</p>
<p>I found a suggestion to implement such metrics through a callback:</p> <p><a href="https://github.com/keras-team/keras/issues/4506" rel="nofollow noreferrer">https://github.com/keras-team/keras/issues/4506</a></p> <p>Maybe it will help you, and others with a similar problem, figure out the needed solution.</p>
89
model evaluation
Why training model give great result but real data gives very bad result: Azure ML Studio
https://datascience.stackexchange.com/questions/28251/why-training-model-give-great-result-but-real-data-gives-very-bad-result-azure
<p>I am using Two-Class Boosted Decision Tree to train model. </p> <p>Evaluation result I'd say really good.</p> <p>But when I am using real dataset - the result is very bad.</p> <p>What can possibly go wrong that makes such huge difference? </p> <p>Below is the screenshot of my model:</p> <p><a href="https://i.sstatic.net/8SBw2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8SBw2.png" alt="enter image description here"></a></p> <p>Two Class Boosted Decision Tree parameters (default):</p> <p><a href="https://i.sstatic.net/JCBhg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JCBhg.png" alt="enter image description here"></a></p> <p><a href="https://i.sstatic.net/LCDbR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LCDbR.png" alt=""></a></p>
<p>Your question is not entirely clear; there are two ways to understand it. Which dataset did you use to train your model?</p> <ol> <li>You trained and tested on a premade dataset. The result is great. Then you applied this model to a real dataset and the result is really bad.</li> </ol> <p>If this is the case, you should retrain on your real dataset or apply some transfer learning techniques to your current model.</p> <ol start="2"> <li>You trained and tested on a premade dataset. The result is great. Using the same model, you trained and tested on a real dataset but the result is much worse.</li> </ol> <p>I can't tell exactly the reason for this. Normally, real data is much noisier. Did you handle missing data and do some feature engineering before training?</p>
90
model evaluation
Image Preprocessing
https://datascience.stackexchange.com/questions/100542/image-preprocessing
<p>I'm working on a use case where I need to preprocess images for my AI/ML model evaluation, and I want to count all black pixels in an RGB image.</p> <p>Instead of iterating over rows × columns, I'm looking for a vectorized approach for this computation. Please suggest one.</p>
<p>I went with this implementation:</p> <pre><code># img: [w, h, c] numpy array of the image
out = img == [0, 0, 0]            # boolean [w, h, 3], per-channel zero test
np.sum(np.sum(out, axis=2) == 3)  # count pixels where all three channels are zero
</code></pre> <p>It works. Note that the inner sum must run over the channel axis (<code>axis=2</code>), not <code>axis=1</code>. Let me know if it can be optimized further.</p>
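Another vectorized option is `np.all` over the channel axis, so a pixel counts as black only when every channel is zero; a self-contained sketch with a tiny invented test image:

```python
import numpy as np

# Tiny 2x2 RGB test image: exactly one pure-black pixel, at (0, 0)
img = np.array([[[0, 0, 0],    [255, 0, 0]],
                [[10, 10, 10], [0, 0, 255]]], dtype=np.uint8)

black_mask = np.all(img == 0, axis=-1)   # boolean (h, w) mask of black pixels
n_black = int(black_mask.sum())
print(n_black)  # 1
```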
91
model evaluation
How to make a DL model predict correctly
https://datascience.stackexchange.com/questions/103955/how-to-make-an-dl-model-predict-correctly
<p>So I trained a DL algorithm using Keras for Human Action Recognition. The model has an accuracy of about 85 percent and a loss of about 0.3. The problem is that the model did not predict well on unseen data. The dataset consists of 3 classes with 100 videos in each class.</p> <p><strong>Training Phase:</strong> <a href="https://i.sstatic.net/oO6yv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oO6yv.png" alt="Training of DL model" /></a> <strong>The model evaluation</strong> <a href="https://i.sstatic.net/HgBqN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HgBqN.png" alt="Model evaluation" /></a></p> <p><strong>This was a handwaving video that was predicted as walking</strong><a href="https://i.sstatic.net/DWtmh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DWtmh.png" alt="This was a handwaving video that was predicted as walking" /></a> <strong>The model</strong><a href="https://i.sstatic.net/SuuDt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SuuDt.png" alt="The model" /></a></p>
92
model evaluation
Evaluate best model
https://datascience.stackexchange.com/questions/114204/evaluate-best-model
<p>Let's assume I have 2 models.</p> <p>Model 1:</p> <ul> <li>Train Accuracy = 92.4%</li> <li>Validation Accuracy = 37.6%</li> <li>Test Accuracy = 35.3%</li> </ul> <p>Model 2:</p> <ul> <li>Train Accuracy = 37.0%</li> <li>Validation Accuracy = 34.2%</li> <li>Test Accuracy = 34.1%</li> </ul> <p>Which is the best model? Model 1 is heavily overfitting, but its final performance is better.</p>
<p>Deep learning models heavily rely on stochastic processes such as weight initialization, back-propagation, etc. For evaluation and comparison of different models, there are methods that are generally referred to as <a href="https://en.wikipedia.org/wiki/Cross-validation_(statistics)" rel="nofollow noreferrer">Cross-Validation</a>. The most popular type of CV is the <a href="https://machinelearningmastery.com/k-fold-cross-validation/" rel="nofollow noreferrer">k-Fold CV</a>, and if your model training comprises hyperparameter tuning, you must use the <a href="https://machinelearningmastery.com/nested-cross-validation-for-machine-learning-with-python/" rel="nofollow noreferrer">Nested k-Fold CV</a>.</p>
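For reference, the k-fold mechanics behind the linked material can be sketched in a few lines of plain NumPy (in practice `sklearn.model_selection.KFold` does this for you):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, k = 20, 5

# Shuffle once, then cut the indices into k disjoint test folds
indices = rng.permutation(n_samples)
folds = np.array_split(indices, k)

for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # Train the model on train_idx, score it on test_idx, then
    # average the k scores for the cross-validated estimate
    assert len(set(train_idx) & set(test_idx)) == 0
```

Comparing models by their averaged fold scores (rather than a single train/test split) reduces the influence of one lucky or unlucky split.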
93
model evaluation
How to effectively evaluate a model with highly imbalanced and limited dataset
https://datascience.stackexchange.com/questions/112713/how-to-effectively-evaluate-a-model-with-highly-imbalanced-and-limited-dataset
<p>Most data imbalance questions on this stack have been asking <em>How to learn a better model</em>, but I tend to think one other problem is <em>How do we define &quot;better&quot; (i.e. fairly evaluate the learned model)</em> to ensure the evaluation performance on the limited test set does not suffer from high variance, since many of the imbalance classes also have limited number of samples. I'll summarize my current understanding below. First let's suppose we have 1000 samples, among which only 10 have positive labels. Our task is binary classification.</p> <p>One common practice is to do cross-validation. In a typical cross-validation setting, one split the data to remaining-holdout first, and then conduct cross-validation on the remaining set to train-val folds for hyperparameter tuning. Evaluation of the model is performed on the test set, by either using one of the trained fold models, ensemble of all folds, or re-trained model on the full remaining set. However, the test set is still comprised of only <strong>two positive samples</strong>, which might very likely suffer from variance in the split.</p> <p>Built upon such intuition, one natural thought is to do three-way cross-validation, which splits each fold to train-val-test sets. The test performance would be the average of all test performances across folds. However, it really confuses me because we are <strong>not</strong> even using <strong>one single model</strong> for evaluation, and it cannot really tell us how the one selected model would behave in the real world.</p> <p>I understand that we have the assumption that test distribution should be the same as the general. But two test positive samples are not able to capture a distribution. In such scenario, what should we do?</p>
<p>I think one way is to run out-of-bootstrap validation several times in order to estimate the distribution of your metric of interest.</p>
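A sketch of that idea: repeatedly draw a bootstrap sample, evaluate on the out-of-bootstrap rows, and look at the spread of the resulting scores. The per-round score below is a placeholder where the real fitted-model metric would go:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_rounds = 100, 200
scores = []

for _ in range(n_rounds):
    boot = rng.integers(0, n_samples, size=n_samples)  # sample rows with replacement
    oob = np.setdiff1d(np.arange(n_samples), boot)     # rows never drawn: the test set
    # Fit the model on rows `boot` and evaluate on rows `oob`; here the
    # OOB fraction stands in for the real metric (placeholder only)
    scores.append(len(oob) / n_samples)

# The distribution of `scores` estimates the variance of the evaluation itself
print(np.mean(scores), np.std(scores))
```

About a third of the rows land out-of-bootstrap each round, so even with very few positives every round gets some held-out data, and the score spread makes the evaluation uncertainty explicit.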
94
model evaluation
Is it a good practice to evaluate a model on the training set
https://datascience.stackexchange.com/questions/80555/is-it-a-good-practice-to-evaluate-a-model-on-the-training-set
<p>Is it good practice to evaluate a model on the <strong>training set</strong> (i.e. train a model on the training set and evaluate the regression error/accuracy on that same training set) <strong>and</strong> compare the result with the regression error/accuracy from cross validation (performed on the same training set) <strong>and</strong> from the test set, in order to <strong>check for overfitting/underfitting</strong>?</p> <p>To my knowledge, we should never evaluate a model on the training set. However, I have seen some lectures that seem to promote evaluating the training error.</p>
<p>Ok, let's be clear:</p> <ul> <li>When we say that evaluation should never be done on the training set, it means that <strong>the real performance of the model</strong> can only be estimated on a separate test set.</li> <li>It's totally fine to calculate the performance of a system on the training data, and it's often useful (e.g. to avoid overfitting). Of course the obtained result <strong>does not represent in any way the real performance of the system</strong>, so it's important to make sure that there's no confusion by mentioning it clearly.</li> </ul>
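A concrete illustration of the first point: a 1-nearest-neighbour classifier scores 100% on its own training data even when the labels are pure noise, so the training-set score says nothing about real performance (toy data below):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))         # toy features
y = rng.integers(0, 2, size=30)      # random labels: no signal at all

def one_nn_predict(X_train, y_train, X_query):
    # Label each query point with the label of its nearest training point
    d = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d.argmin(axis=1)]

# Each training point is its own nearest neighbour, so the "accuracy" is 1.0
train_acc = (one_nn_predict(X, y, X) == y).mean()
print(train_acc)  # 1.0
```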
95
model evaluation
How to evaluate a hurdle model
https://datascience.stackexchange.com/questions/82639/how-to-evaluate-a-hurdle-model
<p>Should the two sequential parts of a hurdle model (the response model and the expected value model) be tuned and evaluated independently, and then combined for the final result (the product of the probability to respond and the expected value)? Or should they be tuned and evaluated as a single model? What is the best approach to evaluate the overall/global performance of a two-part hurdle model?</p> <p>For example, if the two models are tuned and evaluated <strong>independently</strong>, then the hurdle model (i.e. the probability to respond) will be evaluated with classification metrics (e.g., ROC, F1, recall, etc.) and the expected value model with regression metrics (e.g., MSE, MAE, etc.). Once they are combined, what is the best approach to evaluate the overall/global model performance? Otherwise, if the two parts should be combined into a <strong>single</strong> model to be tuned and evaluated, can regression evaluation metrics be used to evaluate the overall/global model performance?</p>
96
model evaluation
Cross validation and evaluation: neural network loss function continuously decreases in cross-validation
https://datascience.stackexchange.com/questions/81126/cross-validation-and-evaluation-neural-network-loss-function-continuously-decre
<p>I am evaluating a neural network model using cross validation in 2 different ways (A &amp; B) that I thought were equivalent.</p> <ul> <li><strong>Evaluation type A:</strong> For each cross validation loop, the model is instantiated and fitted.</li> <li><strong>Evaluation type B:</strong> I instantiate the model once and then that instantiated model is fitted for each loop of the cross validation procedure.</li> </ul> <p>I am using the metric mean absolute error (<em>MAE</em>).</p> <p><strong>Question: Why do I get a continuously decreasing <em>MAE</em> over cross-validation loops when using type B evaluation and not when using type A evaluation?</strong></p> <h2>Code and details</h2> <p>First I generate synthetic data:</p> <pre><code>from sklearn.datasets import make_regression

X, y = make_regression(n_samples=1000, n_features=10, n_informative=5,
                       n_targets=1, random_state=2)
</code></pre> <p>I then define a function to get a model (neural network):</p> <pre><code>from keras.models import Sequential
from keras.layers import Dense

def get_model(n_nodes_hidden_layer, n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(n_nodes_hidden_layer, input_dim=n_inputs,
                    kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(n_outputs))
    model.compile(loss='mae', optimizer='adam')
    return model
</code></pre> <p>After that I define 2 evaluation functions using:</p> <pre><code>from sklearn.model_selection import RepeatedKFold
from sklearn.metrics import mean_absolute_error
</code></pre> <p>Type A evaluation function:</p> <pre><code>def evaluate_model_A(X, y):
    results = list()
    cv = RepeatedKFold(n_splits=10, n_repeats=1, random_state=999)
    for train_ix, test_ix in cv.split(X):
        X_train, X_test = X[train_ix], X[test_ix]
        y_train, y_test = y[train_ix], y[test_ix]
        model = get_model(20, 10, 1)
        model.fit(X_train, y_train, epochs=100, verbose=0)
        y_test_pred = model.predict(X_test)
        mae = mean_absolute_error(y_test, y_test_pred)
        results.append(mae)
        print(f'mae : {mae}')
    return results
</code></pre> <p>Type B evaluation function:</p> <pre><code>def evaluate_model_B(model, X, y):
    results = list()
    cv = RepeatedKFold(n_splits=10, n_repeats=1, random_state=999)
    for train_ix, test_ix in cv.split(X):
        X_train, X_test = X[train_ix], X[test_ix]
        y_train, y_test = y[train_ix], y[test_ix]
        model.fit(X_train, y_train, epochs=100, verbose=0)
        y_test_pred = model.predict(X_test)
        mae = mean_absolute_error(y_test, y_test_pred)
        results.append(mae)
        print(f'mae : {mae}')
    return results
</code></pre> <p>Before using the type B evaluation function I need to instantiate the model, because it is an argument of the function:</p> <pre><code>model = get_model(20, 10, 1)
</code></pre> <p>What I do not understand is the fact that while using the type B evaluation function the MAE decreases for each cross validation loop, which is not the case with the type A evaluation function.</p> <p>Is this specific to neural networks?</p> <p><strong>Note</strong>: when I am using a <code>RandomForestRegressor()</code> the phenomenon does not show up.</p>
<p>In the <em>evaluation type B</em> approach, your neural network <strong>weights and biases are not reset</strong> before each loop of cross-validation. The neural network then keeps learning from one loop to the next, so you see the MAE continuously decreasing.</p> <p>A solution is to store the weights and biases before fitting the model and load them at each loop so every fold starts from the same initialization.</p> <p>You can use these methods to do so:</p> <pre class="lang-py prettyprint-override"><code>model.save_weights('model.h5')  # right after model instantiation
model.load_weights('model.h5')  # in the loop, before fitting
</code></pre> <p>In <em>evaluation type A</em>, because you instantiate the model in the loop, the weights and biases are reset, so you don't see the phenomenon.</p>
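The same save/restore pattern, sketched framework-agnostically with a hypothetical model whose weights are plain arrays:

```python
import copy
import numpy as np

# Hypothetical model state: weights stored as plain arrays
weights = {"W": np.ones((3, 2)), "b": np.zeros(2)}
initial_weights = copy.deepcopy(weights)       # analogue of model.save_weights(...)

for fold in range(5):
    weights = copy.deepcopy(initial_weights)   # analogue of model.load_weights(...)
    # ... fit on this fold's training split, evaluate on its test split ...
    weights["W"] += 0.1                        # stands in for the training updates

# Every fold started from the same initialization, so fold scores are comparable
assert np.allclose(weights["W"], initial_weights["W"] + 0.1)
```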
97
model evaluation
Match between objective function and evaluation metric
https://datascience.stackexchange.com/questions/82224/match-between-objective-function-and-evaluation-metric
<p>Does the objective function for model fitting and the evaluation metric for model validation need to be identical throughout the hyperparameter search process?</p> <p>For example, can an XGBoost model be fitted with the mean squared error (MSE) as the objective function (setting the 'objective' argument to reg:squarederror, regression with squared loss), while the cross validation process is evaluated with a significantly different metric such as the gamma deviance (residual deviance for gamma regression)? Or should the evaluation metric match the objective function as closely as possible, so that the root mean squared error needs to be selected as the evaluation metric?</p>
<p>The evaluation metric for model validation has to be the same throughout the hyperparameter search process in order to fairly compare different models.</p> <p>The objective function for model fitting can be different throughout the hyperparameter search process. During the hyperparameter search process, you can compare different algorithms and each of those algorithms can have different objective functions.</p> <p>The objective function and evaluation metric should be thought of as completely separate concepts. There can only be one objective function per algorithm. However, there can be many evaluation metrics. Objective functions are chosen for computers so they can efficiently and effectively fit the training data. Evaluation metrics are chosen for human stakeholders so they can better understand the impact of the model.</p> <p>The confusion between the two concepts often comes from just measuring a machine learning system to minimize error. If the measurement of a machine learning system broadens to include other requirements, then the difference between the objective function and evaluation metric is more apparent. The objective function is typically set up only to minimize error over the training dataset and work well with the chosen optimization technique. Evaluation metrics can also assess model error but can also include prediction speed, model size, model fairness, and many other requirements.</p>
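A small least-squares sketch of the separation, on invented data: one objective (squared error) fits the model, and the same fitted model is then reported under several evaluation metrics:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])    # intercept + 1 feature
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.5, size=50)   # invented ground truth

# Objective: the model is fitted by minimising squared error (one objective)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ beta

# Evaluation: the same fitted model can be reported under many metrics
mse = np.mean((y - pred) ** 2)
mae = np.mean(np.abs(y - pred))
print(f"MSE={mse:.3f}  MAE={mae:.3f}")
```

Nothing about the fit changes when a different metric is reported; only the lens through which stakeholders view the fixed predictions changes.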
98
model evaluation
Is data leakage giving me misleading results? Independent test set says no!
https://datascience.stackexchange.com/questions/108916/is-data-leakage-giving-me-misleading-results-independent-test-set-says-no
<p><strong>TLDR:</strong></p> <p>I evaluated a classification model using 10-fold CV with data leakage in the training and test folds. The results were great. I then solved the data leakage and the results were garbage. I then tested the model in an independent new dataset and the results were similar to the evaluation performed with data leakage.</p> <p><strong>What does this mean? Was my data leakage not relevant? Can I trust my model evaluation and report that performance ?</strong></p> <hr /> <p><strong>Extended version:</strong></p> <p>I'm developing a binary classification model using a dataset 108326x125 (observations x features) with a class imbalance of ~1:33 (1 positive observation for each 33 negative observations). However, those 108326 observations came from only 95 subjects, which means there are more than one observation from each subject.</p> <p>Initially, during model training and evaluation, I performed cross-validation (CV) using the command (<code>classes</code> is a column array with the class of each observation):</p> <p><code>cross_validation = cvpartition(classes,'KFold',10,'Stratify',true);</code></p> <p>and obtained good performance in terms of the metrics that interested me the most (recall and precision). The model was an ensemble (boosted trees).</p> <p>However, by performing the above CV partition, I have data leakage since observations from a same subject might be simultaneously in the training and test sets in a given CV iteration. Besides data leakage, I believed my model could be overfitting since the optimal hyperparameter for each tree maximum number of leaves in the ensemble is around 350 and usually this parameter should be something around 8 and 32 (to avoid each tree to become a strong learner, which might lead to overfitting).</p> <p>Then, I performed a different CV partition where I make the partition by subject ID, which solves the data leakage problem. 
However, when doing this, the classes distribution might become very different in the training and test sets (since around 30 subjects do not have positive observations, there's even the extreme case where some test folds might end up having 0 positive observations), which influences my evaluation of the model's performance. To mitigate this, I performed repeated 5x10-CV.</p> <p>With this partition, I've tested several different model types (including the exact same model and hyperparameters as the one with data leakage), such as MATLAB's fitcensemble and KNN and Python's XGBoost (and performed hyperparameter optimization in all of them), and no matter what I do, I simply cannot reach acceptable performance with this approach. Therefore, my first questions are:</p> <p><strong>1. Is there something wrong with this partitioning that might be influencing my model evaluation? (see code below)</strong></p> <p><strong>2. Do you have any suggestion to improve this CV partitioning?</strong></p> <p>Finally, to confirm my initial model evaluation (with data leakage in the partition) was misleading me, I tested the model on an independent new dataset (however much smaller) and the performance was good (similar to the one obtained through the CV partition)!!</p> <p><strong>What does this mean? Was my data leakage not relevant? Can I trust my model evaluation and report that performance?</strong></p> <p>Thank you in advance!</p> <hr /> <p>Code for the subject-based partition:</p> <pre><code>% Randomize subjects list order to introduce randomization to the partition process
data_name_list = data_name_list(randperm(length(data_name_list)));

% Get array containing the corresponding fold of each subject ('histcounts' splits
% subjects as uniformly as possible when using BinWidth like this)
[~,~,fold_of_subject] = histcounts(1:length(data_name_list), 'BinWidth', length(data_name_list)/num_folds);
</code></pre>
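For what it's worth, the subject-wise idea in the MATLAB snippet above translates directly to other stacks; a Python sketch with toy IDs (`sklearn.model_selection.GroupKFold` implements the same guarantee):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_subjects, k = 200, 12, 4
subject_of_obs = rng.integers(0, n_subjects, size=n_obs)  # toy subject ID per row

# Assign whole subjects -- not observations -- to folds
subjects = rng.permutation(n_subjects)
subject_folds = np.array_split(subjects, k)

for held_out in subject_folds:
    test_mask = np.isin(subject_of_obs, held_out)
    # No subject contributes observations to both sides of the split
    assert not set(subject_of_obs[test_mask]) & set(subject_of_obs[~test_mask])
```

Because folds are built from subjects, every observation lands in exactly one test fold and no subject ever straddles train and test, which is precisely the leakage the question describes.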
99
hyperparameter tuning
With automated hyperparameter tuning available, do we still need to learn hyperparameter tuning
https://datascience.stackexchange.com/questions/85426/with-automated-hyperparameter-tuning-available-do-we-still-need-to-learn-hyperp
<p>Tools like AWS Sagemaker have capability to do automated hyperparameter tuning, even with complex algos like Neural Networks using Tensorflow. So do we still need to learn how to do hyperparameter tuning, or simply leave it to tools like Sagemaker? Thx</p>
<blockquote> <p>So do we still need to learn how to do hyperparameter tuning</p> </blockquote> <p>If you're saying this based on the context of acquiring a new skill, then go for it. It's always a good thing to get an idea of how hyper-parameter tuning is done for real. In addition to SageMaker, you can use tools like <a href="https://www.wandb.com/" rel="nofollow noreferrer">Weights and Biases</a>.</p>
100
hyperparameter tuning
Automated Hyperparameter tuning
https://datascience.stackexchange.com/questions/27057/automated-hyperparameter-tuning
<p>Are there any advanced packages that allow automated hyperparameter tuning for neural networks and traditional machine learning algorithms like XGBoost, random forest (using method like Bayesian, random search etc. that could allow for faster discovery of the optimal parameters)? I have heard of hyperopt, but it seems there are some <a href="https://johnflux.com/2017/02/10/python-hyperopt-finding-the-optimal-hyper-parameters/" rel="nofollow noreferrer">problems</a>, and I am not sure it can train traditional machine learning algorithms?</p>
<p>There are a number of methods to automate the optimization of your hyper-parameters, such as GridSearch and RandomSearch, which the article you linked discusses briefly.</p> <p>The main reason to choose one over the other is if you want the best possible parameters, and do not care how long it takes to get them: go for <a href="http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html" rel="nofollow noreferrer">GridSearch</a>. On the other hand, if you do not want the optimization to take a long time, but still want some good parameters, then go for <a href="http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html" rel="nofollow noreferrer">RandomSearch</a>.</p> <p>These two implementations in scikit-learn are not exactly &quot;advanced packages&quot;, but they will get the job done for any model in scikit-learn (Random Forest, MLPClassifiers, etc). Emre's comment also has some pretty cool advanced packages which are scikit-learn or TensorFlow compatible.</p>
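The cost trade-off described above is easy to see side by side: on a grid of six candidate settings, `GridSearchCV` fits all six, while `RandomizedSearchCV` fits only `n_iter` of them. A minimal sketch (the toy data and parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=200, random_state=0)
param_grid = {"n_estimators": [10, 50], "max_depth": [2, 4, None]}  # 2*3 = 6 combos

# Exhaustive: evaluates all 6 combinations (times the number of CV folds).
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid, cv=3).fit(X, y)

# Sampled: evaluates only n_iter combinations, much cheaper on large grids.
rand = RandomizedSearchCV(RandomForestClassifier(random_state=0), param_grid,
                          n_iter=4, cv=3, random_state=0).fit(X, y)

print(grid.best_params_, rand.best_params_)
```

Both objects expose the same `best_params_` / `best_estimator_` interface, so switching between the two strategies is a one-line change.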
101
hyperparameter tuning
Hyperparameter Tuning vs Regularization
https://datascience.stackexchange.com/questions/116846/hyperparameter-tuning-vs-regularization
<p>While designing the architecture of a Neural Network, should I consider adding regularization (like Dropout, L1/L2, etc.) even after optimizing the problem using Hyperparameter Tuning? What should be the main focus (tuning or regularization) to achieve more generalization, considering that both of these simplify the problem?</p> <p>P.S.: Links to some recent publications would be of great help.</p>
<p>Yes, it is <strong>generally recommended to use regularization techniques</strong>, such as <strong>Dropout, L1/L2 regularization</strong>, etc., <em><strong>even after optimizing the problem using hyperparameter tuning</strong></em>. <strong>Regularization</strong> techniques help to prevent overfitting by adding constraints to the model and simplifying the model complexity, which can improve the generalization performance.</p> <p>On the other hand, <strong>hyperparameter tuning</strong> optimizes the hyperparameters of the model to find the best configuration that minimizes the error on the training set.</p> <p>In summary, both <strong>hyperparameter tuning</strong> and <strong>regularization</strong> are important <em><strong>for achieving more generalization</strong></em> in a neural network. <strong>Hyperparameter tuning</strong> helps to find the optimal values for the hyperparameters of the model, such as the learning rate, the number of hidden layers, etc., which can improve the performance of the model. <strong>Regularization</strong> techniques, on the other hand, help to reduce the complexity of the model and prevent overfitting, which can also improve the generalization performance of the model.</p> <p>Therefore, it is <em><strong>recommended to use both hyperparameter tuning and regularization techniques to achieve more generalization</strong></em> in a neural network.</p> <p>Here are some recent <strong>publications</strong> that discuss the <strong>use of regularization</strong> and <strong>hyperparameter tuning</strong> for Neural Networks:</p> <ul> <li><p><a href="https://arxiv.org/abs/1705.08741" rel="nofollow noreferrer"><strong>Understanding Regularization in Deep Learning</strong></a> by <strong>B. Zoph and Q. V. Le</strong></p> </li> <li><p><a href="https://arxiv.org/abs/1805.07623" rel="nofollow noreferrer"><strong>Hyperparameter Optimization for Neural Networks: A Survey</strong></a> by <strong>A. K. Agarwal, S. R. M. Prasad, and A. R. 
Menon</strong></p> </li> <li><p><a href="https://www.cs.toronto.edu/%7Ehinton/absps/JMLRdropout.pdf" rel="nofollow noreferrer"><strong>Dropout: A Simple Way to Prevent Neural Networks from Overfitting</strong></a> by <strong>N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov</strong></p> </li> <li><p><a href="https://dl.acm.org/doi/10.5555/2969239.2969312" rel="nofollow noreferrer"><strong>Early Stopping - But When?</strong></a> by <strong>Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner</strong></p> </li> <li><p><strong><a href="https://arxiv.org/pdf/1705.07832.pdf" rel="nofollow noreferrer">Reducing Overfitting In Neural Networks by Constraining the Complexity of the Model</a></strong> by <strong>Yarin Gal and Zoubin Ghahramani</strong></p> </li> <li><p><strong><a href="https://arxiv.org/pdf/1701.05369.pdf" rel="nofollow noreferrer">An Overview of Regularization Techniques in Deep Learning</a></strong> by <strong>Danilo Jimenez Rezende and Shakir Mohamed</strong></p> </li> </ul>
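One concrete way to combine the two ideas above is to treat the regularization strength itself as one more hyperparameter to tune. A minimal scikit-learn sketch, where `alpha` is `MLPClassifier`'s L2 penalty (the toy data and candidate values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, random_state=0)

# 'alpha' is the L2 regularization strength of the network: instead of
# choosing between "tuning" and "regularization", the amount of
# regularization is itself selected by the tuning procedure.
search = GridSearchCV(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    {"alpha": [1e-4, 1e-2, 1.0]},
    cv=3,
).fit(X, y)
print(search.best_params_["alpha"])
```

The same pattern applies to dropout rates or weight decay in deep-learning frameworks: they enter the search space alongside learning rate and layer sizes.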
102
hyperparameter tuning
Time Series Hyperparameter Tuning
https://datascience.stackexchange.com/questions/106346/time-series-hyperparameter-tuning
<p>My question is about the intuition for hyperparameter tuning of time series.</p> <p>In other models, like Linear or Logistic Regression there is labeled data and according to accuracy or precision, the parameters are tuned. But in time series, the future values are predicted. For example, the next 30 days.</p> <p>My question is, what is the point of tuning, when you do not know what the answer is (as it is going to happen in future)?</p>
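The usual answer to this question is that tuning never uses the true future: each hyperparameter candidate is scored on held-out *past* data that simulates the future via a rolling-origin split, and the winner is then used for the real forecast. A minimal sketch with scikit-learn's `TimeSeriesSplit` (the toy random-walk series, lag features, and candidate values are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Toy series: predict the next value from the previous three (lag features).
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))
X = np.column_stack([series[0:-3], series[1:-2], series[2:-1]])
y = series[3:]

# Each split trains on the past and scores on a later "future" fold,
# so candidates are compared on data they have never seen.
best = None
for alpha in [0.1, 1.0, 10.0]:
    score = cross_val_score(Ridge(alpha=alpha), X, y,
                            cv=TimeSeriesSplit(n_splits=5)).mean()
    if best is None or score > best[1]:
        best = (alpha, score)
print(best[0])
```

The key property of `TimeSeriesSplit` is that every training fold strictly precedes its test fold, so the backtest mimics the real forecasting situation.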
103
hyperparameter tuning
hyperparameter tuning with validation set
https://datascience.stackexchange.com/questions/60547/hyperparameter-tuning-with-validation-set
<p>From what I know, and correct me if I am wrong, the use of cross-validation for hyperparameter tuning is not advisable when I have a huge dataset. So, in this case it is better to split the data into training, validation and test sets, and then perform the hyperparameter tuning with the validation set.</p> <p>In the case that I am programming I would like to use scikit, the yeast dataset available at: <a href="http://archive.ics.uci.edu/ml/datasets/yeast" rel="nofollow noreferrer">http://archive.ics.uci.edu/ml/datasets/yeast</a>; and for example to tune the number of epochs.</p> <p>First, I have separated my training, validation and test set by using the train_test_split twice according to one answer that I saw here. The loss plot that I got is the following for 1500 max iterations:</p> <p><a href="https://i.sstatic.net/xsCPd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xsCPd.png" alt="enter image description here"></a></p> <p>Then I wanted to use my validation set with a list of different values for the hyperparameter of max iterations. The graph I obtained is the following (with some warning messages of non-convergence for max_iter values less than 1500):</p> <p><a href="https://i.sstatic.net/4G6mD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4G6mD.png" alt="enter image description here"></a></p> <p>So, I have the first question here. It seems that for a value of max_iter of 3000 the accuracy is 64% approximately, so I should choose that value for the max_iter hyperparameter; is that correct?
I can also see from the graph that the red line for 3000 iterations has a lower loss than the other compared options.</p> <p>My program so far is the following:</p> <pre><code>import numpy as np
import pandas as pd
from sklearn import model_selection, linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.neural_network import MLPClassifier
import matplotlib.pyplot as plt
from sklearn.model_selection import GridSearchCV

def readFile(file):
    head=["seq_n","mcg","gvh","alm","mit","erl","pox","vac","nuc","site"]
    f=pd.read_csv(file,delimiter=r"\s+")
    f.columns=head
    return f

def NeuralClass(X,y):
    X_train,X_test,y_train,y_test=model_selection.train_test_split(X,y,test_size=0.2)
    print(len(X)," ",len(X_train))
    X_tr,X_val,y_tr,y_val=model_selection.train_test_split(X_train,y_train,test_size=0.2)
    mlp=MLPClassifier(activation="relu",max_iter=1500)
    mlp.fit(X_train,y_train)
    print(mlp.score(X_train,y_train))
    plt.plot(mlp.loss_curve_)
    max_iter_c=[500,1000,2000,3000]
    for item in max_iter_c:
        mlp=MLPClassifier(activation="relu",max_iter=item)
        mlp.fit(X_val,y_val)
        print(mlp.score(X_val,y_val))
        plt.plot(mlp.loss_curve_)
    plt.legend(max_iter_c)

def main():
    f=readFile("yeast.data")
    list=["seq_n","site"]
    X=f.drop(list,1)
    y=f["site"]
    NeuralClass(X,y)
</code></pre> <p>Second question, is my approach valid? I have seen a lot of information over the web and it all points to cross validation for hyperparameter tuning, but I want to perform it with a validation set.</p> <p>Any help?</p> <p>PS. I have tried early stopping and the results are poor compared to the ones obtained with the method I programmed.</p> <p>Thanks</p>
<p>It seems to me that you're manually iterating through the hyper-parameters.</p> <p><code>scikit-learn</code> has a number of helper functions that make it easy to iterate through all of the parameters using various strategies: <a href="https://scikit-learn.org/stable/modules/grid_search.html#grid-search" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/grid_search.html#grid-search</a></p> <p>Secondly, if I were 'manually' tuning hyper-parameters I'd split my data into 3: train, test and validation (the names aren't important).</p> <p>I'd change my hyper-parameters, train the model using the <code>training data</code>, test it using the <code>test data</code>. I'd repeat this process until I had the 'best' parameters and then finally run it with the <code>validation data</code> as a sanity check (should have similar scores).</p> <p>With <code>scikit-learn</code>'s helper functions, I just split the data into two parts. Use <code>GridSearchCV</code> with one part and then at the end, using the best parameters (stored in the attribute <code>best_estimator_</code>), run a sanity check with the second part.</p> <pre><code># define parameter sweep
param_grid = {
    'max_iter': [100, 1000, 10000]
}

# define grid search
clf = GridSearchCV(mlp, param_grid, cv=5)

# perform search
clf.fit(X, y)

# best estimator
clf.best_estimator_
</code></pre> <p>For info, every estimator and their respective scores are also available as attributes (see <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV</a>) which means you can check the result of every CV if you wanted to make sure you got the best parameters.</p> <p>As you can see you can avoid thinking too hard about whether CV was the right approach by using a final validation set as a
sanity check.</p>
104
hyperparameter tuning
How to avoid numerous Hyperparameter tuning in ML?
https://datascience.stackexchange.com/questions/112244/how-to-avoid-numerous-hyperparameter-tuning-in-ml
<p>Suppose I have developed a dynamic system for forecasting the future of some specific stocks. As time passes, the train set will change dynamically. For a better understanding, consider this example:</p> <ul> <li><p><strong>First Round:</strong><br /> train set = [0 : 150] (The first 150 samples are in the training set)<br /> test set = [150 : 152]</p> </li> <li><p><strong>Second Round:</strong><br /> train set = [1 : 151]<br /> test set = [151 : 153]</p> </li> <li><p><strong>Third Round:</strong><br /> train set = [2 : 152] (152 is exclusive)<br /> test set = [152 : 154]</p> </li> <li><p>and so on.</p> </li> </ul> <p>For each round, I use a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html" rel="nofollow noreferrer"><code>RandomizedSearchCV</code></a> to tune hyperparameters of a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html?highlight=randomforest#sklearn.ensemble.RandomForestRegressor" rel="nofollow noreferrer"><code>Random Forest</code></a> to predict the returns of some stocks using specific features. I am focusing on tuning hyperparameters in this question.</p> <p>As I mentioned, I perform hyperparameter tuning in each round, <strong>which costs a lot of time! (specifically when the train set is enormous)</strong> So I'm seeking a way to avoid this repeated hyperparameter tuning each round. I'm interested to know how scientists perform hyperparameter tuning (considering it is a time-consuming process).<br /> Shall I perform hyperparameter tuning just once, right before the first round?</p>
<p>A good hyperparameter value may be considered a random variable with some variation. I wouldn't worry too much about not finding the best parameter for one specific test set. If you are sure some parameter should take multiple values over time, I would re-tune not for every test set but at some interval. Please take into account that cross validation &quot;contaminates&quot; your assessment: you will definitely get a more optimistic metric score on data you used for CV.</p> <p>Imagine you have 100 test folds. You may use the first 10 of them for CV, then assess the next 10 test folds with the best model and hyperparameters you got in the previous step. These 10 folds will give you a more realistic score. Then repeat the bundle (10 CV folds + 10 assessment folds).</p> <p>Also, if you really use a test set of size 2, your estimates for the hyperparameters may vary too much. You may compare this with a bigger test fold size.</p>
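The "re-tune at some interval" idea can be sketched as a rolling loop that runs the search only once every `retune_every` rounds and reuses the best parameters in between. Everything here (toy data, window sizes, grids, the interval of 10) is an illustrative assumption, not from the question:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=300, n_features=5, random_state=0)

retune_every = 10                    # re-run the search once per 10 rounds
best_params = {"n_estimators": 10}   # fallback before the first search

for round_idx in range(30):
    train = slice(round_idx, round_idx + 150)  # rolling training window
    if round_idx % retune_every == 0:
        # Expensive step, now amortized over 10 rounds instead of every round.
        search = RandomizedSearchCV(
            RandomForestRegressor(random_state=0),
            {"n_estimators": [10, 30], "max_depth": [3, None]},
            n_iter=2, cv=3, random_state=0,
        ).fit(X[train], y[train])
        best_params = search.best_params_
    # Cheap step, done every round: refit with the cached best parameters.
    model = RandomForestRegressor(random_state=0, **best_params).fit(X[train], y[train])
```

The interval itself can be chosen by monitoring performance: shrink it if scores drift between searches, grow it if the best parameters keep coming out the same.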
105
hyperparameter tuning
Why does model with hyperparameter tuning underperform?
https://datascience.stackexchange.com/questions/112115/why-does-model-with-hyperparameter-tuning-underperform
<p>I have a dataset where I applied an XGB model with grid search to tune the hyperparameters, plus class balancing. I compared the model without any hyperparameter tuning to a model with hyperparameter tuning applied. The model without any hyperparameters set outputs an accuracy of around 90 (train) and 81 (test), but with tuned parameters it has an accuracy of 70 (train) and 69 (test). Hyperparameters: n_estimators, max_depth, eta, gamma, L1 and L2 regularization, scale_pos_weight. May I know why the model has not improved? Despite the roughly 10% gap between train and test accuracy for the model without tuned hyperparameters, I think overfitting should not be the issue.</p>
106
hyperparameter tuning
Is hyperparameter tuning done on training or validation data set?
https://datascience.stackexchange.com/questions/121388/is-hyperparameter-tuning-done-on-training-or-validation-data-set
<p>Is hyperparameter tuning done on the training or validation data set? The post <a href="https://stats.stackexchange.com/questions/366862/is-using-both-training-and-test-sets-for-hyperparameter-tuning-overfitting">here</a> gives mixed opinions on whether the training set should be used for hyperparameter tuning, and I would like to know whether hyperparameter tuning can be done on the training data set.</p> <p>Moreover, I want to know the consequences of why we should/should not hyperparameter-tune on the training dataset.</p> <p>Thanks in advance!</p>
<p>Tuning hyperparameters on the training set is not as serious a mistake as evaluating on the training set, but it still increases the risk of overfitting.</p> <p>Alternatively, use cross-validation to tune the hyperparameters. You might even get less biased hyperparameters, at the cost of higher implementation and computational demand (you have to train your model multiple times).</p>
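The cross-validation option mentioned in the answer is often done as *nested* CV: an inner search picks the hyperparameters and an outer loop scores the whole tuning procedure, so the reported score is not biased by the tuning itself. A minimal sketch on toy data (the estimator and grid are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=150, random_state=0)

# Inner loop: picks C on each outer training fold.
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)

# Outer loop: evaluates "tune-then-fit" as one procedure on held-out folds.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())
```

Passing the `GridSearchCV` object itself to `cross_val_score` is what makes the nesting happen: each outer fold re-runs the inner search from scratch on its own training data.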
107
hyperparameter tuning
Ordering of Train/Val/Test set use in hyperparameter tuning
https://datascience.stackexchange.com/questions/124881/ordering-of-train-val-test-set-use-in-hyperparameter-tuning
<p>The way I read a lot of ML advice on these datasets sounds like &quot;You train a model with randomly chosen hyperparameters first on the training set, then you ignore this bit of the work and hyperparameter-tune on a validation set, then choose the final model you want and test it on the test set&quot;. This doesn't make sense to me, but I swear I read something like this quite a bit.</p> <p>But likewise, hyperparameter tuning is expensive. You can end up with many models (e.g. &quot;N&quot; models, all with different hyperparameters). If we train them all on the training set, then that is super expensive (biggest dataset training time 'multiplied by' N models). Then we could check them on the validation set. If we just &quot;choose the best performance&quot; and that's it, without doing any new training, this version of hyperparameter tuning seems expensive, and I don't understand why we'd need a test set.</p> <p>Other options involve seemingly merging the train and val sets to do a final round of tuning.</p> <p>I've no real idea what's going on.</p> <p>Can someone explain the hyperparameter tuning steps in simple terms, and how the datasets are used? Nothing seems to make sense to me.</p>
<p>You are asking about a model's ability to <strong>generalize</strong> to unseen examples.</p> <p>Consult <a href="https://work.caltech.edu/textbook.html" rel="nofollow noreferrer">Abu-Mostafa, et al.</a>, Learning From Data, § 5.3 Data Snooping.</p> <blockquote> <p>If a data set has affected any step in the learning process, its ability to assess the outcome has been compromised.</p> </blockquote> <p>There is a giant space of models that you might choose from, based on your knowledge of the underlying generative process (e.g. &quot;sales of widgets&quot;). You might choose a linear model for regression, or a quadratic or other polynomial model, or perhaps <a href="https://en.wikipedia.org/wiki/Support_vector_machine" rel="nofollow noreferrer">SVM</a>, <a href="https://en.wikipedia.org/wiki/Random_forest" rel="nofollow noreferrer">random forest</a>, or a multitude of other hypothesis families and loss functions.</p> <p>Having decided on a family of hypotheses, you next must settle on the hyperparameters that specify a particular hypothesis, such as &quot;max tree depth&quot; for RF, together with per-node learned parameters. Commonly there will be more than half a dozen hyperparameters. Their values matter, they affect model performance.</p> <p>So we can sweep through a grid of proposed hyperparameter settings, or better we can quickly explore hyperparameter space by choosing random points.</p> <p>The reason we do this is so we can generalize. To get 100% accuracy on already-seen training data is trivial; any RDBMS could parrot back Y for such X values. But for <em>novel</em> X values, being able to interpolate or otherwise generalize requires a more sophisticated approach. The generative process in the world produced observables that adhere to some pattern, and it is the model's job to learn that pattern. 
For example, proposing &quot;interpolation&quot; suggests a continuous smoothly differentiable generating function, to which we can usefully apply interpolation techniques.</p> <hr /> <h2>no cheating</h2> <p>We <strong>evaluate</strong> our model's ability to generalize to unseen data by scoring it against unseen test data, which superficially resembles scoring against folds of training data.</p> <p>If any of the model's parameters or hyperparameters have settings based on some test data, then that data <em>cannot</em> be used to assess the model's ability to generalize to unseen data. We need <em>new</em>, <em>unseen</em> data to accomplish such an assessment.</p>
108
hyperparameter tuning
Hyperparameter tuning one-class svm
https://datascience.stackexchange.com/questions/74691/hyperparameter-tuning-one-class-svm
<p>I have a problem where I am trying to apply a one-class svm to detect outliers. I am training on a dataset of <em>true</em> cases using a one-class radial svm and then predicting for both false and true cases. It is worth noting that the <em>true</em> cases I am training on do contain a proportion of misclassified cases. This is the nature of the problem and not something that can be corrected for or identified easily.</p> <p>I have applied hyperparameter tuning through a grid search with cross-validation to get the best values of <code>nu</code> and <code>gamma</code> for the model. However, when I do this and predict on all cases, I get poor separability between true and false cases based on the resulting decision values. I actually get far better separability from using a fixed value for <code>C</code> and <code>gamma</code> of 1 and 0.05 respectively. My intuition is that this lack of separability is down to the misclassified cases in the <em>true</em> training data. As such, my question is should hyperparameter tuning be applied to a one-class svm on data with misclassified values? If so, should <code>nu</code> and <code>gamma</code> both be tuned? Any papers discussing this problem would be great as I haven't found any resources relating to it.</p> <p>I am aware that nu is related to C by <code>nu = A+B/C</code> where A, B are constants that are difficult to calculate and that</p> <blockquote> <p>The parameter nu is an upper bound on the fraction of margin errors and a lower bound of the fraction of support vectors relative to the total number of training examples</p> </blockquote> <p>So <code>nu</code> of 0.01 would mean at most 1% of your training examples would be misclassified. A good discussion on <code>nu</code> is <a href="https://stackoverflow.com/questions/11230955/what-is-the-meaning-of-the-nu-parameter-in-scikit-learns-svm-class">here</a>. 
I am searching values <code>0.0001 0.0010 0.0100 0.0500 0.1000 0.1500 0.2000 0.2500 0.3000 0.3500 0.4000 0.4500 0.5000</code>. I am also aware that</p> <blockquote> <p>Gamma is a parameter of the RBF kernel and can be thought of as the ‘spread’ of the kernel and therefore the decision region. When gamma is low, the ‘curve’ of the decision boundary is very low and thus the decision region is very broad. When gamma is high, the ‘curve’ of the decision boundary is high, which creates islands of decision-boundaries around data points.</p> </blockquote> <p>A good post on gamma with intuitive visualisations is <a href="https://chrisalbon.com/machine_learning/support_vector_machines/svc_parameters_using_rbf_kernel/" rel="nofollow noreferrer">here</a>. I am searching across gamma values of <code>1x10^-04 1x10^-03 1x10^-02 1x10^-01 1x10^+00 1x10^+01 1x10^+02 1x10^+03 1x10^+04 1x10^+05</code></p> <p>So should I hold <code>nu</code> constant at a high value to assume a high number of misclassified cases? Also should I search across different values of gamma?</p>
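One pragmatic way to act on this intuition is to hold <code>nu</code> fixed near the suspected contamination rate and sweep only <code>gamma</code>, scoring the separability of the decision values on a labelled holdout (e.g. with ROC AUC). A sketch on synthetic data, where the data, the <code>nu=0.05</code> choice, and the gamma candidates are all illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
inliers = rng.normal(0, 1, size=(200, 2))     # stand-in for "true" cases
outliers = rng.uniform(-6, 6, size=(40, 2))   # stand-in for "false" cases

# Train only on (possibly noisy) true cases; evaluate separability on a
# labelled holdout of held-out true cases plus known outliers.
X_eval = np.vstack([inliers[150:], outliers])
y_eval = np.r_[np.ones(50), np.zeros(40)]     # 1 = true case, 0 = outlier

scores = {}
for g in [1e-3, 1e-2, 1e-1, 1.0]:
    oc = OneClassSVM(nu=0.05, gamma=g).fit(inliers[:150])
    # decision_function is larger for points the model considers inliers.
    scores[g] = roc_auc_score(y_eval, oc.decision_function(X_eval))
best_gamma = max(scores, key=scores.get)
print(best_gamma)
```

Scoring by AUC of the decision values, rather than by classification accuracy, directly targets the "separability" the question cares about and is less sensitive to a handful of mislabelled training cases.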
109
hyperparameter tuning
Feature selection or hyperparameter tuning first for 30 feature data
https://datascience.stackexchange.com/questions/126768/feature-selection-or-hyperparameter-tuning-first-for-30-feature-data
<p>I have about 30 variables and trying to create a Random Forest model. All the variables are expected to be predictors of outcome. I want to find the best model based on a C-stat score with any number of features. Shall I do feature selection first or hyperparameter tuning? I read that people prefer feature selection first, but they do have a few hundred features. I only have 30.</p> <p>I am leaning towards hyperparameter tuning before going into feature selection.</p>
<p>Feature selection and hyper-parameter tuning have different purposes.</p> <p>If you consider the entire workflow from the data to the ML model and what the model is doing, you can get some intuition about what to do first. The model will learn the patterns that are in your data, and the hyper-parameters partly dictate which patterns will be learned, especially in a random forest. So, the hyper-parameter tuning will depend on the features that you have.</p> <p>Feature selection happens before model training, and its results directly affect the optimal hyper-parameters of the model itself. So, in theory you should do feature selection before hyper-parameter tuning: there may be patterns that you do not want your model to learn which can be removed, and redundancies within the data can be removed so that hyper-parameter tuning runs faster.</p>
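With only 30 features, another option is to sidestep the ordering question entirely and tune the selector and the model jointly in one search, by putting both into a pipeline. A scikit-learn sketch on synthetic 30-feature data (all names and candidate values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=200, n_features=30,
                           n_informative=8, random_state=0)

# The number of kept features ('select__k') becomes just another
# hyperparameter, searched together with the forest's settings.
pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("rf", RandomForestClassifier(n_estimators=50, random_state=0))])
search = GridSearchCV(pipe, {"select__k": [10, 20, 30],
                             "rf__max_depth": [3, None]}, cv=3).fit(X, y)
print(search.best_params_)
```

Because the selector sits inside the pipeline, it is refit on each CV training fold, so the selection step does not leak information from the validation folds.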
110
hyperparameter tuning
Hyperparameter Tuning Time Series in Production
https://datascience.stackexchange.com/questions/56530/hyperparameter-tuning-time-series-in-production
<p>I have time series data that is handled using GBDT to predict the next value. I always train daily on the previous 30 days of data, but over time the data to train on and predict has grown because the number of combinations has increased.</p> <p>My question is, how often do we need to re-tune our model? And how many evaluations are considered enough for hyperparameter tuning: 50? 100? Or is just 10 enough?</p> <p>Right now I do it daily, but it is getting more and more costly: previously it took around 10 minutes, but now more than an hour. The system needs to do other things hourly, so this model tuning becomes a bottleneck for the system.</p>
<p>Some ideas:</p> <ul> <li><strong>Number of previous observations to use:</strong> depends on the process you are modelling. If the target is related in some way to many of the previous values, you may need to use more data in your tuning process. On the other hand, if the target is largely unrelated to previous observations you can use fewer. You will need to check the relationship - lag plots and auto-correlation plots may be useful.</li> <li><strong>Frequency of parameter tuning:</strong> again, depends on your model and process. It may be a good idea to monitor the prediction accuracy of your model, and to only perform more parameter tuning if the model accuracy drops below a certain threshold.</li> </ul>
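The "only re-tune below a threshold" idea from the second bullet can be as simple as comparing recent error to the error measured right after the last tuning. The function name, error values, and 10% tolerance below are illustrative placeholders:

```python
import numpy as np

def should_retune(recent_errors, baseline_error, tolerance=0.10):
    """Trigger re-tuning only when the mean of recent errors drifts more
    than `tolerance` (relative) above the post-tuning baseline error."""
    return bool(np.mean(recent_errors) > baseline_error * (1 + tolerance))

# After the last tuning the model scored MAE 2.0; lately it averages 2.5,
# which exceeds 2.0 * 1.1 = 2.2, so re-tuning is triggered.
print(should_retune([2.4, 2.5, 2.6], baseline_error=2.0))
```

Gating the expensive search behind a check like this turns the daily hour-long tuning into an occasional one, while still reacting when the data distribution actually shifts.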
111
hyperparameter tuning
Proper workflow for model selection and hyperparameter tuning using cross validation
https://datascience.stackexchange.com/questions/73990/proper-workflow-for-model-selection-and-hyperparameter-tuning-using-cross-valida
<p>I have been trying to teach myself about machine learning and wanted to make sure I had the right idea about model selection, hyperparameter tuning, and cross validation.</p> <p>So given a data set, my understanding is that the general workflow should be: 1. Split into train and test. 2. Use cross validation on the training set to select a model. 3. After picking a model, perform hyperparameter tuning with cross validation.</p> <p>Is that correct? Also, for step 3, should tuning be done with the whole data set or just the test set?</p>
<p>Like a Russian Matryoshka doll, there can be many layers of data partitioning and model selection you can use with a dataset. For a "simple" approach, you can first partition your data into train and test sets. Within your training set, you can use cross validation to choose the best hyperparameters for each model. Once you choose your best hyperparameters, you can see how well the model with those hyperparameters performs on the test set. You can repeat this process to build models with many approaches, such as a linear model, random forest, SVM, etc. The model that performs best on the test set will be your "best" model to deploy.</p> <p>To be clear: you do NOT tune on the test set. This would lead to overfitting. You would simply use the test set to compare a few different models. However, the more you look at the test set, the more risk of overfitting.</p>
112
hyperparameter tuning
Getting worse results after Hyperparameter Tuning(Grid/Random Search/TPOT)
https://datascience.stackexchange.com/questions/61487/getting-worse-results-after-hyperparameter-tuninggrid-random-search-tpot
<p>I have a problem with hyperparameter tuning. I usually get almost the same results (or worse) as before tuning; the default parameters of the classifier (regressor) usually give me the best score. Could anybody recommend other techniques, or give me a tip on what I should improve?</p>
113
hyperparameter tuning
Hyperparameter tuning results yield no improvement over spot-check
https://datascience.stackexchange.com/questions/73304/hyperparameter-tuning-results-yield-no-improvement-over-spot-check
<p>There is a balanced binary classified dataset as seen below.</p> <p><a href="https://i.sstatic.net/NLgSH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NLgSH.png" alt="Target Variable count"></a></p> <p>Things I have tried using <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html" rel="nofollow noreferrer">RandomForestClassifier</a> as chosen model:</p> <ul> <li><a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.TimeSeriesSplit.html" rel="nofollow noreferrer">TimeSeriesSplit</a> with <code>n_splits</code>=3 and 10</li> <li><a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html#sklearn.preprocessing.MinMaxScaler" rel="nofollow noreferrer">MinMaxScaler</a>, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html#sklearn.preprocessing.MaxAbsScaler" rel="nofollow noreferrer">MaxAbsScaler</a>, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler" rel="nofollow noreferrer">StandardScaler</a>, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html#sklearn.preprocessing.RobustScaler" rel="nofollow noreferrer">RobustScaler</a>, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html#sklearn.preprocessing.QuantileTransformer" rel="nofollow noreferrer">QuantileTransformer</a>, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html#sklearn.preprocessing.PowerTransformer" rel="nofollow noreferrer">PowerTransformer</a></li> </ul> <p>Snippet of RandomizedSearchCV set-up:</p> <pre><code>scorer = {...,
          'precision': make_scorer(precision_score, average='weighted'),
          ...,
         }
RandomizedSearchCV(model, grid, scoring=scorer,
                   cv=TimeSeriesSplit(n_splits=10),
                   refit='precision', verbose=10)
</code></pre> <p>However, despite all these efforts, the precision scores (precision being the main metric I have chosen to use) are all very similar. Initial spot-check precision results were ~.55-.56. Hyperparameter tuning results are in the same range too, <strong>no improvement</strong> at all! Other metrics of interest like F0.5 score and recall show no improvement either, with little variation.</p> <p>The biggest noticeable difference so far is between <code>n_splits</code> of 3 and 10. With all else equal, the range of precision is bumped up to ~.57-.59 with <code>n_splits=10</code>. Hyperparameter tuning results are also within ~.57-.59!</p> <p>I am stumped. Does anyone have ideas on how to find out what may be the issue(s)? I can provide more information if needed.</p> <p>Edit 1 (grid):</p> <pre><code># Number of trees in random forest
n_estimators = [x for x in range(400, 2000, 200)]
# Number of features to consider at every split
max_features = ['log2']
# Maximum number of levels in tree
max_depth = [x for x in range(10, 110, 10)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
</code></pre>
114
hyperparameter tuning
validation after hyperparameter tuning
https://datascience.stackexchange.com/questions/90539/validation-after-hyperparameter-tuning
<p>I tuned my hyperparameters with random search and used <code>cv=5</code>. Is it important to validate the hyperparameters with the model and test data, or is it okay to use the accuracy returned by the random search?</p>
<p>In machine learning there is a fundamental difference between what are known as <em>hyperparameters</em> and <em>parameters</em>.</p> <p><strong>Parameters</strong></p> <p>They are the variables that define your <em>model</em>, or in other words the relationship between your inputs and the output you are trying to predict.</p> <p>For example, in simple linear regression, the goal is to be able to make predictions for a target variable <span class="math-container">$y$</span> given some new input value <span class="math-container">$x$</span> of the input variable on the basis of a set of training data comprising <span class="math-container">$N$</span> input values <span class="math-container">$\mathbf{x} = (x_1, \dots, x_N)^{\mathbf{T}}$</span>. The relationship between input and output is given by</p> <p><span class="math-container">$$ y = w_0 x + w_1 $$</span></p> <p>where <span class="math-container">$w_0$</span> and <span class="math-container">$w_1$</span> are the <strong>parameters</strong> our algorithm tries to &quot;learn&quot; during training to produce the most accurate possible predictions for <span class="math-container">$y$</span>. FYI, the term <span class="math-container">$w_0$</span> is called the <em>weight</em> and the term <span class="math-container">$w_1$</span> is called the <em>bias</em>.</p> <p><strong>Hyperparameters</strong></p> <p>These are different from the parameters above in the sense that they act on the <em>training</em> process itself. Hyperparameters can be seen as knobs you tweak to adjust the learning of your model in the best possible way.</p> <p>An example of a hyperparameter is the <em>learning rate</em> <span class="math-container">$\eta$</span> used to adjust how far apart from each other values are computed when looking for the best possible parameters of a model using Gradient Descent. 
Another example is the <em>number of decision trees to have in the forest</em> when using the Random Forest algorithm.</p> <p><strong>Hyperparameter tuning</strong></p> <p>Unlike model parameters, hyperparameters are set by the user before training a machine learning model. To be fair, the random search cross-validation method you mentioned is quite efficient at finding suitable hyperparameter values for a given machine learning problem, so you can reasonably use the results from your cross validation for the hyperparameters.</p> <p>The next step should be to test your model on held-out test data, to check that the model <em>parameters</em> determined during training do indeed set your model up correctly.</p>
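<p>The parameter/hyperparameter distinction above can be made concrete with a toy gradient-descent sketch: <code>w0</code> and <code>w1</code> are <em>learned</em> from data, while the learning rate <code>eta</code> is a <em>hyperparameter</em> we choose before training (all values here are illustrative):</p>

```python
import numpy as np

# Generate noisy data from y = 3.0 * x + 0.5 (true w0 = 3.0, w1 = 0.5)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.05, 100)

def fit(eta, steps=2000):
    """Learn parameters (w0, w1); eta is the hyperparameter we set."""
    w0, w1 = 0.0, 0.0
    for _ in range(steps):
        pred = w0 * x + w1
        w0 -= eta * 2 * np.mean((pred - y) * x)   # gradient of MSE w.r.t. w0
        w1 -= eta * 2 * np.mean(pred - y)         # gradient of MSE w.r.t. w1
    return w0, w1

w0, w1 = fit(eta=0.1)
print(round(w0, 2), round(w1, 2))   # recovers values close to 3.0 and 0.5
```
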
115
hyperparameter tuning
XGBoost regressor hyperparameter tuning with hyperopt leads to overfit
https://datascience.stackexchange.com/questions/94600/xgboost-regressor-hyperparameter-tuning-with-hyperopt-leads-to-overfit
<p>Using hyperopt for hyperparameter tuning on an XGBoost regressor, I am getting overfitting on the training set.</p> <p>Is there any suggestion how to solve it?</p> <p>I have used cross validation with early_stopping_rounds and it still hasn't improved.</p> <p>I have tried to tune gamma, colsample_bytree, subsample, max_depth, min_child_weight and eta, and still couldn't reach good enough results without overfitting.</p> <p>Would appreciate any help.</p> <p>Thanks,</p> <p>Roi.</p>
116
hyperparameter tuning
Hyperparameter tuning XGBoost
https://datascience.stackexchange.com/questions/84609/hyperparameter-tuning-xgboost
<p>I'm trying to tune hyperparameters with Bayesian optimization. It is a regression problem with the objective function: objective = 'reg:squaredlogerror'<br /> <span class="math-container">$\frac{1}{2}[\log(pred+1)-\log(true+1)]^2$</span></p> <p>My dataset consists of 20k vectors, each vector has length 12 (twelve features). Every vector has a corresponding Y value.</p> <p>I want to find the set of hyperparameters that minimizes the loss function. This is how it is implemented in code:</p> <pre class="lang-py prettyprint-override"><code>def evaluate_model(learning_rate, max_depth, nr_estimators, min_child_weight, min_split_loss, reg_lambda):
    model = get_model(learning_rate, max_depth, nr_estimators, min_child_weight, min_split_loss, reg_lambda)
    model.fit(X_train, Y_train)
    pred = model.predict(X_val)
    error = np.array([])
    for i in range(len(pred)):
        prediction = np.maximum(pred[i], 1)
        error = np.append(error, (1/2)*(np.log(prediction+1)-np.log(Y_val[i]+1))**2)
    err = np.mean(error)
    return -err
</code></pre> <p>My question is whether anyone sees any problem with how I've constructed the evaluate_model function. Does this optimize the squared log error when the Bayesian hyperparameter optimization is run? The maximum(pred[i],1) is there in case a negative prediction is produced. Also, I get bad results even after the hyperparameter optimization.</p> <p>These are the hyperparameters I evaluate:</p> <pre class="lang-py prettyprint-override"><code>pbounds = {'learning_rate': (0, 1),
           'max_depth': (3, 10),
           'nr_estimators': (100, 5000),
           'min_child_weight': (1, 9),
           'min_split_loss': (0, 10),
           'reg_lambda': (1, 10)}
</code></pre> <p>The optimization is run for 100 iterations and 10 init points. The package I've used for the Bayesian optimization is bayes_opt.</p>
<p>Another way is to use <code>mean_squared_log_error</code> from the same metrics module.</p> <p>First clip the negative values in the predictions to 1, then compute the mean squared log error:</p> <pre><code>pred = np.clip(pred, 1, None)
err = mean_squared_log_error(Y_val, pred)
</code></pre>
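<p>A self-contained illustration of this answer, with made-up numbers (note that <code>np.clip</code> takes its bounds as <code>(a_min, a_max)</code>, where <code>None</code> means "no upper bound"):</p>

```python
import numpy as np
from sklearn.metrics import mean_squared_log_error

y_val = np.array([10.0, 20.0, 30.0])
pred = np.array([-5.0, 18.0, 33.0])   # one invalid (negative) prediction

# Clip negatives up to 1 before taking logs, then score
pred = np.clip(pred, 1, None)
err = mean_squared_log_error(y_val, pred)
print(round(err, 4))
```

This matches the hand-rolled loop in the question, since <code>mean_squared_log_error</code> averages <code>(log1p(y) - log1p(pred))**2</code> over the samples.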
117
hyperparameter tuning
Is it a good practice to use hyperparameter tuning in production pipeline?
https://datascience.stackexchange.com/questions/108451/is-it-a-good-practice-to-use-hyperparameter-tuning-in-production-pipeline
<p>I'm studying TensorFlow Extended and I can see that its training pipeline includes a &quot;Tuner&quot; component for hyperparameter tuning. As a consequence, I'm wondering whether including tuning is a good practice in a production pipeline (which, as in most cases, is invoked iteratively from time to time with additional new training instances). I can see three possibilities:</p> <ul> <li>A separate hyperparameter-tuning experiment before production, and no tuning in the production pipeline (this is what I used to do before)</li> <li>No initial tuning experiment, but let TFX do it all the time (this seems supported by TFX)</li> <li>Some kind of mixture: do some level of tuning in an initial separate experiment but still perform tuning for other parameters in production</li> </ul> <p>My problem with the 2nd approach is that hyperparameter tuners are usually not fit for examining all possible components (like the number of layers, for instance), and anyway, some hyperparameters tend to be stable in searches, so why search these in each training run?</p>
<p>Hyperparameter tuning helps the model find the settings that give optimal results. If you deploy a fixed model, hyperparameter tuning should be done during the training process only, and not in the production pipeline.</p> <p>But if you do model retraining as part of the production pipeline, then hyperparameter tuning should be done there to get the best results.</p> <p>Your second concern is really about how to make hyperparameter tuning less computationally intensive:</p> <ol> <li>Tune stepwise to reduce the number of iterations required. For example, for a random forest, first find the optimal depth and fix it for the next step.</li> <li>Instead of grid search, use a smarter hyperparameter tuner such as Optuna or Hyperopt to reduce the time taken.</li> <li>If you already know some of the best hyperparameters and know they are stable, you don't need to tune those.</li> </ol>
118
hyperparameter tuning
Disadvantages of hyperparameter tuning on a random sample of dataset
https://datascience.stackexchange.com/questions/44109/disadvantages-of-hyperparameter-tuning-on-a-random-sample-of-dataset
<p>I often work with very large datasets where it would be impractical to check all relevant combinations of hyperparameters when constructing a machine learning model. I'm considering randomly sampling my dataset and then performing hyperparameter tuning using the sample. Then, I would train/test the model using the full dataset with the chosen hyperparameters. </p> <p>What are the disadvantages of this approach? </p>
<p>One good practice is to create a split of the dataset for each tuning/training step of your pipeline. Since you have large datasets, you should have enough data to split the original dataset into multiple subsets and still have a relevant number of rows for each step. As an example, you can divide your dataset into 60% training, 20% hyperparameter tuning and 20% for the test.</p> <p>It is important to avoid optimizing the hyperparameters with the same data you train on, because this can lead to overfitting both tuning steps of your model to the same source of data.</p> <p>Also, be careful how you sample the original data source. When dealing with highly skewed categorical features, random sampling can lead to categories in the test set which are not observed during training, which can cause some models to break. Also, numerical features should have a similar distribution between the training and the test set.</p>
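<p>The 60/20/20 partition described above can be sketched with two calls to <code>train_test_split</code> (stratifying to keep class proportions similar across splits, which addresses the skew caveat):</p>

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# First carve off 20% for the test set, then split the remaining 80%
# into 75/25, giving a 60/20/20 train/tune/test partition overall.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_train, X_tune, y_train, y_tune = train_test_split(
    X_rest, y_rest, test_size=0.25, stratify=y_rest, random_state=0)

print(len(X_train), len(X_tune), len(X_test))  # 600 200 200
```
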
119
hyperparameter tuning
How does batch normalization make a model less sensitive to hyperparameter tuning?
https://datascience.stackexchange.com/questions/114953/how-does-batch-normalization-make-a-model-less-sensitive-to-hyperparameter-tunin
<p>Question 22 of <a href="https://www.projectpro.io/article/100-data-science-interview-questions-and-answers-for-2021/184" rel="nofollow noreferrer">100+ Data Science Interview Questions and Answers for 2022</a> asks <em>What is the benefit of batch normalization?</em></p> <p>The first bullet of the answers to this is <em>The model is less sensitive to hyperparameter tuning.</em></p> <p>The wikipedia page <a href="https://en.wikipedia.org/wiki/Batch_normalization" rel="nofollow noreferrer">batch normalization</a> similarly claims:</p> <blockquote> <p>Furthermore, batch normalization seems to have a regularizing effect such that the network improves its generalization properties, and it is thus unnecessary to use dropout to mitigate overfitting.</p> </blockquote> <p>In both cases I suspect they are referring to improved test error, with the former involving improved test error even having done some hyperparameter tuning.</p> <p>Why does batch normalization have a regularizing effect? (Or does it?)</p>
<p>Regularization is not the primary goal of batch normalization. The main goal of batch normalization is to speed up learning. Regularization is a side effect of batch normalization; it does not replace dropout.</p> <p>As normalizing inputs improves learning, normalizing the aggregated inputs to the activation function of a neuron in a hidden layer improves learning as well. More details <a href="https://www.youtube.com/watch?v=tNIpEZLv_eg" rel="nofollow noreferrer">here</a>, and then <a href="https://www.youtube.com/watch?v=em6dfRxYkYU" rel="nofollow noreferrer">here</a>.</p> <p>If during batch training you normalize the aggregated inputs to hidden activations for the batch rather than for the whole dataset, then the mean and standard deviation of the aggregated inputs will be different for each batch. This adds noise to the training, and makes the network rely less on small details of the distribution of the inputs.</p> <p>However, the regularization effect of batch normalization depends on the batch size. It is lower for larger batch sizes, and is lower than the effect of dropout, which shuts down some neurons completely for a training step, so it probably cannot replace dropout. More details <a href="https://www.youtube.com/watch?v=nUUqwaxLnWs" rel="nofollow noreferrer">here</a>.</p>
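<p>The batch-size dependence is easy to see numerically: the per-batch statistics that batch norm uses fluctuate more for small batches, which is where the training noise comes from (a toy numpy sketch, not a full batch-norm implementation):</p>

```python
import numpy as np

# Pretend these are the pre-activation values of one hidden unit
rng = np.random.default_rng(0)
activations = rng.normal(5.0, 2.0, size=1000)

def batch_means(x, batch_size):
    # Split into batches and compute the per-batch mean, as batch norm would
    return x.reshape(-1, batch_size).mean(axis=1)

means_small = batch_means(activations, 10)    # 100 batches of 10
means_large = batch_means(activations, 200)   # 5 batches of 200

# Smaller batches -> noisier per-batch statistics -> stronger regularizing effect
print(round(means_small.std(), 3), round(means_large.std(), 3))
```
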
120
hyperparameter tuning
Which data hyperparameter tuning using for fit the model
https://datascience.stackexchange.com/questions/115204/which-data-hyperparameter-tuning-using-for-fit-the-model
<pre><code>X = all features from the dataset
y = all targets from the dataset
X_train = features after the train_test_split
y_train = targets after the train_test_split
</code></pre> <p>So my question is: which one should I choose if I would like to do hyperparameter tuning? I have imbalanced data, so I would like to build a pipeline that contains SMOTE and the algorithm. I read that you should do oversampling on each fold of cross validation. Since randomized search CV also cross-validates, I decided to run SMOTE inside the pipeline. But I am unsure which data I should fit after I run the code:</p> <pre><code>fit(X, y) or fit(X_train, y_train)
</code></pre>
<p>The recommended approach is to use cross validation on the training dataset (X_train, y_train) for hyperparameter tuning, with oversampling applied on each fold of the cross validation.</p> <p>The code would look something like this:</p> <pre class="lang-py prettyprint-override"><code>from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, RandomizedSearchCV, StratifiedKFold

X_train, X_test, y_train, y_test = train_test_split(X, y)

pipeline = Pipeline([(&quot;smote&quot;, SMOTE()), (&quot;rf&quot;, RandomForestClassifier())])
kf = StratifiedKFold()

# param_distributions holds the search space, e.g. {&quot;rf__max_depth&quot;: [5, 10, None]}
rscv = RandomizedSearchCV(estimator=pipeline, param_distributions=param_distributions, cv=kf)
rscv.fit(X_train, y_train)
</code></pre>
121
hyperparameter tuning
XGBoost Binary Classification for authentication hyperparameter tuning
https://datascience.stackexchange.com/questions/129989/xgboost-binary-classification-for-authentication-hyperparameter-tuning
<p>I'm currently working on biometric authentication, specifically keystroke dynamics. I used XGBoost as a binary classifier so I can calculate the equal error rate (EER) and ROC-AUC. Now I want to do hyperparameter tuning but I don't know how to set it up. My current approach was wrapping XGBoost with OneVsRestClassifier and then using GridSearchCV with roc_auc_ovr scoring. Is this the correct approach?</p>
122
hyperparameter tuning
SVM C vs gamma hyperparameter tuning
https://datascience.stackexchange.com/questions/66251/svm-c-vs-gamma-hyperparameter-tuning
<p>While running SVC(), how can we tune the C vs gamma combination?</p> <p>I can see that changes in C and gamma impact the accuracy differently. Also, this is what I understand about C and gamma:</p> <ol> <li><p>C is the cost of misclassification, which means a large C gives you low bias and high variance — low bias because you penalize the cost of misclassification heavily — while a small C gives you higher bias and lower variance.</p> </li> <li><p>Gamma controls the shape of the &quot;peaks&quot; where you raise the points. A large gamma gives you a pointed bump in the higher dimensions, while a small gamma gives you a softer, broader bump. So a large gamma will give you low bias and high variance, while a small gamma will give you higher bias and low variance.</p> </li> </ol>
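<p>Because C and gamma interact (the best value of one depends on the other), they are usually searched jointly rather than one at a time. A minimal sketch with GridSearchCV on synthetic data (the grid values are placeholders):</p>

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Search C and gamma together; each cell of the grid is one (C, gamma) pair
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1, "scale"]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```
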
123
hyperparameter tuning
How to compare hyperparameter tuning in R and Python
https://datascience.stackexchange.com/questions/88363/how-to-compare-hyperparameter-tuning-in-r-and-python
<p>I tried random forest in both R (caret) and Python (scikit-learn), but the results differ drastically. The Pearson correlation between predicted and actual values was 0.2 in Python but 0.8 in R.</p> <p>I suspect this is because of differences in hyperparameter settings, so I wanted to check the detailed settings in R and Python. Since I manually designated the hyperparameters in Python explicitly, I know their values, but in R I am not sure which corresponding hyperparameters were used. In R, it only shows that mtry was chosen as 46 and the number of trees was 500.</p> <p>In Python, however, there are many more hyperparameters, such as the maximum number of levels in a tree (max_depth), the minimum number of samples required to split a node (min_samples_split), the minimum number of samples required at each leaf node, etc., which I couldn't find in R.</p> <p>Are there hidden default settings for these values in R, or does R not consider hyperparameters other than mtry and the number of trees? If so, why? I tried to search for such information but there was none available. Below are the links that I referred to.</p> <p><a href="https://arxiv.org/pdf/1804.03515.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1804.03515.pdf</a> <a href="https://topepo.github.io/caret/train-models-by-tag.html#random-forest" rel="nofollow noreferrer">https://topepo.github.io/caret/train-models-by-tag.html#random-forest</a></p>
124
hyperparameter tuning
RFECV and grid search - what sets to use for hyperparameter tuning?
https://datascience.stackexchange.com/questions/131174/rfecv-and-grid-search-what-sets-to-use-for-hyperparameter-tuning
<p>I am running machine learning models (all with scikit-learn estimators, no neural networks) using a custom dataset with a number of features and binomial output. I first split the dataset into 0.6 (train), 0.2 (validation), 0.2 (test) sets, before preprocessing and converting into dataframes.</p> <p>I use RFECV with StratifiedKFold to find the best features on the training set.</p> <p>I then use grid search with StratifiedKFold to tune the hyperparameters, before fitting the model on a combined train and validation set and evaluating on the test set for performance.</p> <p>My question is: which set should I use for the grid search? The training set (potential overfitting?), the validation set, or the combined train and validation set (overfitting?).</p> <p>Current code snippet if anyone is interested, showing hyperparameter tuning on the validation set only:</p> <pre><code>import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.feature_selection import RFECV
from sklearn.metrics import make_scorer, log_loss

estimator.fit(X_train_preprocessed_df, y_train)

custom_scorer = make_scorer(log_loss, greater_is_better=False, response_method='predict_proba')

# Initialize RFECV
rfecv = RFECV(estimator=estimator, step=1, min_features_to_select=2, verbose=0,
              cv=StratifiedKFold(5), scoring=custom_scorer)

# Fit RFECV
rfecv.fit(X_train_preprocessed_df, y_train)

# Get the selected features
selected_features = rfecv.support_
selected_feature_names = X_train_preprocessed_df.columns[selected_features]
print(f&quot;Number of Selected Features: {selected_features.sum()}&quot;)
print(&quot;Selected Features:&quot;, selected_feature_names.tolist())

# Train the model with selected features
estimator.fit(X_train_preprocessed_df.loc[:, selected_features], y_train)

# Transform the training, validation, and test data to include only the selected features
X_train_selected = X_train_preprocessed_df.loc[:, selected_features]
X_val_selected = X_val_preprocessed_df.loc[:, selected_features]
X_test_selected = X_test_preprocessed_df.loc[:, selected_features]

# Initialize GridSearchCV for hyperparameter tuning
grid_search = GridSearchCV(estimator=estimator, param_grid=param_grid, scoring=custom_scorer,
                           cv=StratifiedKFold(5), verbose=0, n_jobs=-1)

# Fit GridSearchCV on validation set
grid_search.fit(X_val_selected, y_val)

# Print the best parameters
print('Best Parameters found')
print(grid_search.best_params_)

# Print the best estimator
print('\nBest Estimator found')
print(grid_search.best_estimator_)

# Get the best estimator from GridSearchCV
best_estimator = grid_search.best_estimator_

# Combine training and validation sets for final training
X_combined = pd.concat([X_train_selected, X_val_selected])
y_combined = pd.concat([y_train, y_val])

# Train the model with the best estimator on the combined set
best_estimator.fit(X_combined, y_combined)

# Evaluate on the combined set
y_val_pred = best_estimator.predict(X_combined)
val_score = custom_scorer(best_estimator, X_combined, y_combined)
print(f&quot;Validation Score: {val_score}&quot;)

# Evaluate on the test set to get the final performance
y_test_pred = best_estimator.predict(X_test_selected)
test_score = custom_scorer(best_estimator, X_test_selected, y_test)
print(f&quot;Test Score: {test_score}&quot;)
</code></pre>
<p>The best approach would be to do the feature selection inside the grid search (look also at randomized search, which I find more effective).</p> <p>For this, using a scikit-learn pipeline is convenient and rigorous, as in theory you should also include the other preprocessing steps inside the loop, like encoding categorical features, normalizing, selecting features, ...<br /> But the full process may become slow to run (nested loops).</p> <p>To train the GridSearchCV you take your train and validation sets together, as the different validation folds will be created inside the grid search via cross-validation.</p> <p>For the code, you can find an example <a href="https://stats.stackexchange.com/questions/464312/how-to-combine-recursive-feature-elimination-and-grid-random-search-inside-one-c">here</a>.</p>
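<p>A minimal sketch of that layout on synthetic data: RFECV sits inside the pipeline, so the feature selection is re-fit on every cross-validation fold, and the grid search is trained on train+validation together (all names and grid values are placeholders):</p>

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=8, n_informative=4, random_state=0)
X_trval, X_test, y_trval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Preprocessing and feature selection live inside the pipeline,
# so they are re-fit on each CV fold without leaking information.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("rfecv", RFECV(LogisticRegression(max_iter=1000), min_features_to_select=2, cv=3)),
    ("clf", LogisticRegression(max_iter=1000)),
])

search = GridSearchCV(pipe, {"clf__C": [0.1, 1, 10]}, cv=StratifiedKFold(3))
search.fit(X_trval, y_trval)        # train + validation together; folds made inside
final_score = search.score(X_test, y_test)   # test set touched once, at the end
print(round(final_score, 3))
```
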
125
hyperparameter tuning
Robustness of hyperparameter tuning
https://datascience.stackexchange.com/questions/74710/robustness-of-hyperparameter-tuning
<p>I use a Bayesian hyperparameter (HP) optimization approach (<a href="https://docs.ray.io/en/master/_modules/ray/tune/suggest/bohb.html" rel="nofollow noreferrer">BOHB</a>) to tune a deep learning model. However, the resulting model is not robust when repeatedly applied to the same data. I know, I could use a seed to fix the parameter initialization, but I wonder if there are HP optimization approaches that already account for robustness.</p> <p>To illustrate the problem, let's consider a one-layer neural network with only one HP: the hidden size (<em>h</em>). The model performs well with a small <em>h</em>. With a larger <em>h</em>, the results start to fluctuate more, maybe due to a more complex loss landscape; the random initialization of the parameters can lead to a good performance, or to a very bad performance if the optimizer gets stuck in a local minimum (which happens more often due to the complex loss landscape). The loss vs <em>h</em> plot could look something like this:</p> <p><a href="https://i.sstatic.net/Q7O4R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q7O4R.png" alt="enter image description here" /></a></p> <p>I would prefer the 'robust solution', while the 'best solution' is selected by the HP optimizer algorithm. Are there HP optimization algorithms that account for the robustness? Or how would you deal with this problem?</p>
<p>As I understand them, Bayesian optimization approaches are already somewhat robust to this problem. The evaluated performance function is usually(?) considered noisy, so that the search would want to check nearby the &quot;best solution&quot; <span class="math-container">$h$</span> to improve certainty; if it then finds lots of poorly performing models, its surrogate function should start to downplay that point. (See e.g. <a href="http://krasserm.github.io/2018/03/21/bayesian-optimization/" rel="nofollow noreferrer">these</a> <a href="http://neupy.com/2016/12/17/hyperparameter_optimization_for_neural_networks.html#bayesian-optimization" rel="nofollow noreferrer">two</a> blog posts.)</p> <p>If the instability is large due to random effects (e.g. initializations of weights that you mention), then just repeating the model fit and taking an average (or worst, or some percentile) of the performances should work well. If it's really an effect of &quot;neighboring&quot; <span class="math-container">$h$</span> values, then you could similarly fit models near the selected <span class="math-container">$h$</span> and consider their aggregate performance. Of course, both of these add quite a bit of computational expense; but I think this might be the closest to &quot;the right&quot; solution that doesn't depend on the internals of the optimization algorithm.</p>
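<p>The "repeat the fit and aggregate" idea can be illustrated with a toy objective (the loss shapes and seed effects below are assumed, purely for illustration — not the question's real model): scoring each candidate by its <em>worst</em> repeat picks the broad basin over the sharp but fragile optimum.</p>

```python
import math

# Toy validation loss: a sharp dip near h=60 whose quality fluctuates
# strongly across re-runs, and a broad stable basin near h=20.
def loss(h, z):
    sharp = 0.30 - 0.28 * math.exp(-((h - 60) ** 2) / 4)
    broad = 0.30 - 0.20 * math.exp(-((h - 20) ** 2) / 400)
    scale = 0.15 if abs(h - 60) < 5 else 0.01   # instability near the sharp dip
    return min(sharp, broad) + scale * z

seed_effects = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]  # stand-ins for repeated runs

# Score each candidate h by its WORST repeat, not its best single run
robust = {h: max(loss(h, z) for z in seed_effects) for h in range(5, 80, 5)}
best_h = min(robust, key=robust.get)
print(best_h)  # 20 -- the broad basin wins over the sharp but fragile optimum
```

Replacing <code>max</code> with a mean or a high percentile gives milder versions of the same trade-off.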
126
hyperparameter tuning
Does hyperparameter tuning of Decision Tree then use it in Adaboost individually vs Simultaneously yield the same results?
https://datascience.stackexchange.com/questions/102103/does-hyperparameter-tuning-of-decision-tree-then-use-it-in-adaboost-individually
<p>So, my predicament is as follows: I performed hyperparameter tuning on a standalone Decision Tree classifier and got the best results. Now comes the turn of standalone AdaBoost, and here is where my problem lies. If I use the tuned Decision Tree from earlier as the base_estimator in AdaBoost and then perform hyperparameter tuning on AdaBoost only, will it yield the same results as tuning the untuned AdaBoost and an untuned Decision Tree base_estimator simultaneously, i.e. searching the hyperparameters of both AdaBoost and the Decision Tree together?</p>
<p>No, generally optimizing two parts of a modeling pipeline separately will not work as well as searching over all the parameters simultaneously.</p> <p>In your particular case, this is easier to see: the optimal single tree will probably be much deeper than the optimal trees in an AdaBoost ensemble. A single tree (probably) needs to split quite a bit to avoid being dramatically underfit, whereas AdaBoost generally performs best with &quot;weak learners&quot;, and in particular often a &quot;decision stump&quot;, i.e. a depth-1 tree, is selected.</p>
127
hyperparameter tuning
Hyperparameter tuning
https://datascience.stackexchange.com/questions/126432/hyperparameter-tuning
<p>Jane trains three different classifiers: Logistic Regression, Decision Tree, and Support Vector Machines on the training set. Each classifier has one hyperparameter (regularisation parameter, depth of tree, etc.) that needs to be set, so she chooses a set of 10 reasonable values for each, performs a sweep over those values (retraining the classifier each time), and chooses the value that gives the best performance on the test set. She then reports performance for the three classifiers with their best hyperparameter settings on the test dataset. Is there a problem with her approach? I think the answer involves some problem related to overfitting, doesn't it?</p>
<p>This is called data leakage.</p> <p>She should choose hyperparameters based on the training data, and then report performance when evaluated on the test data.</p> <p>Being able to make good predictions on already-seen data is uninteresting; any relational database could accomplish that trick, it is trivial. What we care about is the ability of a model to generalize to unseen data, and to make good predictions on that.</p>
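<p>The leakage-free version of Jane's workflow looks like this (a minimal sketch on synthetic data): hyperparameters are chosen by cross-validation on the training data only, and the test set is touched exactly once, for the final report.</p>

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=400, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Hyperparameters chosen via cross-validation on the TRAINING data only...
search = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)

# ...and the test set is used exactly once, for the reported score.
test_score = search.score(X_test, y_test)
print(round(test_score, 3))
```
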
128
hyperparameter tuning
Cross validation and hyperparameter tuning workflow
https://datascience.stackexchange.com/questions/104435/cross-validation-and-hyperparameter-tuning-workflow
<p>After reading a lot of articles on cross validation, I am now confused. I know that cross validation is used to get an estimate of model performance and is used to select the best algorithm out of multiple ones. After selecting the best model (by checking the mean and standard deviation of CV scores) we train that model on the whole of the dataset (train and validation set) and use it for real world predictions.</p> <p>Let's say out of the 3 algorithms I used in cross validation, I select the best one. What I don't get is in this process, when do we tune the hyperparameters? Do we use Nested Cross validation to tune the hyperparameters during the cross validation process or do we first select the best performing algorithm via cross validation and then tune the hyperparameter for only that algorithm?</p> <p><strong>PS</strong>: I am splitting my dataset into train, test and valid where I use train and test sets for building and testing my model (this includes all the preprocessing steps and nested cv) and use the valid set to test my final model.</p> <p><strong>Edit 1</strong> Below are two ways to perform Nested cross validation. 
Which one is the correct way, i.e. which method does not lead to data leakage/overfitting/bias?</p> <p><strong>Method 1</strong>: Perform Nested CV for multiple algorithms and their hyperparameters simultaneously:-</p> <pre><code>from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.datasets import make_regression
import numpy as np
import pandas as pd

# create some regression data
X, y = make_regression(n_samples=1000, n_features=10)

# setup models, variables
results = pd.DataFrame(columns = ['model', 'params', 'mean_mse', 'std_mse'])
models = [SVR(), RandomForestRegressor(random_state = 69)]
params = [{'C':[0.01,0.05]},{'n_estimators':[10,100]}]

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.3)

# estimate performance of hyperparameter tuning and model algorithm pipeline
for idx, model in enumerate(models):
    # perform hyperparameter tuning
    clf = GridSearchCV(model, params[idx], cv = 3, scoring='neg_mean_squared_error')
    clf.fit(X_train, y_train)
    # this performs a nested CV in SKLearn
    score = cross_val_score(clf, X_train, y_train, cv = 3, scoring='neg_mean_squared_error')
    row = {'model' : model, 'params' : clf.best_params_, 'mean_mse' : score.mean(), 'std_mse' : score.std()}
    # append the results in the empty dataframe
    results = results.append(row, ignore_index = True)
</code></pre> <p><strong>Method 2</strong>: Perform Nested CV for a single algorithm and its hyperparameters:-</p> <pre><code>from sklearn.datasets import load_iris
from matplotlib import pyplot as plt
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold, train_test_split
import numpy as np

# Load the dataset
iris = load_iris()
X_iris = iris.data
y_iris = iris.target

train_x, test_x, train_y, test_y = train_test_split(X_iris, y_iris, test_size = 0.2, random_state = 69)

# Set up possible values of parameters to optimize over
p_grid = {&quot;C&quot;: [1, 10], &quot;gamma&quot;: [0.01, 0.1]}

# We will use a Support Vector Classifier with &quot;rbf&quot; kernel
svm = SVC(kernel=&quot;rbf&quot;)

# Choose cross-validation techniques for the inner and outer loops,
# independently of the dataset.
# E.g &quot;GroupKFold&quot;, &quot;LeaveOneOut&quot;, &quot;LeaveOneGroupOut&quot;, etc.
inner_cv = KFold(n_splits=4, shuffle=True, random_state=69)
outer_cv = KFold(n_splits=4, shuffle=True, random_state=69)

# Nested CV with parameter optimization
clf = GridSearchCV(estimator=svm, param_grid=p_grid, cv=inner_cv)
clf.fit(train_x, train_y)
nested_score = cross_val_score(clf, X=X_iris, y=y_iris, cv=outer_cv)
nested_scores_mean = nested_score.mean()
nested_scores_std = nested_score.std()
</code></pre>
<p>Suppose you have two models to choose from, <span class="math-container">$m_1$</span> and <span class="math-container">$m_2$</span>. For a given problem, there is a best set of hyperparameters for each of the two models (where they perform as well as possible), say <span class="math-container">$m_1^*$</span>, <span class="math-container">$m_2^*$</span>. Now say <span class="math-container">$Acc(m_1^*) &gt; Acc(m_2^*)$</span>, i.e. model 1 is better than model 2.</p> <p>Now suppose you have tuned model 2 (or you have &quot;okay&quot; hyperparameters by coincidence) but you use inferior hyperparameters for model 1. You could end up finding <span class="math-container">$Acc(m_1^s) &lt; Acc(m_2^*)$</span> (i.e. &quot;choose model 2&quot;), while the <em>true</em> best choice would be: &quot;use tuned model 1&quot;.</p> <p>Thus, in order to make an informed decision, you would need to &quot;tune&quot; both models and compare the performance of the tuned models with their best hyperparameters. What I often do is define <code>test</code> and <code>train</code> data, tune the candidate models using cross validation (on the <code>train</code> data only!), and assess the performance of the tuned models on the <code>test</code> set.</p> <p>In addition you may want to do feature engineering / feature generation. This should be done <em>before</em> tuning the models, since different data may lead to different optimal hyperparameters, e.g. in the case of a random forest, where the number of split candidates per split can be contingent on the number and the quality of features.</p>
129
hyperparameter tuning
Is there a point in hyperparameter tuning for Random Forests?
https://datascience.stackexchange.com/questions/116761/is-there-a-point-in-hyperparameter-tuning-for-random-forests
<p>I have a binary classification task with substantial class imbalance (99% negative, 1% positive). I want to develop a Random Forest model to make predictions, and after establishing a baseline (with default parameters), I proceed to hyperparameter tuning with scikit-learn's GridSearchCV.</p> <p>After setting some parameters (e.g. <code>max_depth</code>, <code>min_samples_split</code>, etc.), I noticed that the best parameters, once GridSearch was done, were the highest <strong>max</strong> parameters (<code>max_depth</code>) and the smallest <strong>min</strong> parameters (<code>min_samples_split</code>, <code>min_samples_leaf</code>). In other words, GridSearchCV favored the combination of parameters that fits the training set most closely, i.e. <strong>overfits it</strong>. I always thought that cross-validation would protect against this scenario.</p> <p>Therefore, my question is: what is the point of GridSearch if the outcome is overfitting? Have I misunderstood its purpose?</p> <p>My code:</p> <pre><code>rf = RandomForestClassifier(random_state=random_state)

param_grid = {
    'n_estimators': [100, 200],
    'criterion': ['entropy', 'gini'],
    'max_depth': [5, 10, 20],
    'min_samples_split': [5, 10],
    'min_samples_leaf': [5, 10],
    'max_features': ['sqrt'],
    'bootstrap': [True],
    'class_weight': ['balanced']
}

rf_grid = GridSearchCV(estimator=rf,
                       param_grid=param_grid,
                       scoring=scoring_metric,
                       cv=5,
                       verbose=False,
                       n_jobs=-1)

best_rf_grid = rf_grid.fit(X_train, y_train)
</code></pre>
<p>The grid search chooses the hyperparameters whose average score across the <em>test folds</em> is the best. That might also correspond to the best training fold scores and/or the highest-capacity hyperparameters, but might not. A &quot;more overfit&quot; model can still be better if its test and production scores are better.</p> <p>In your case, I notice that your grid parameters are all more conservative than the default. So it might just be a matter of needing to expand the grid range before you start to see really-overfit models underperform on the test folds.</p> <p>Also, you don't say what your <code>scoring</code> metric is. Make sure it's relevant to your business problem; and in a 1% positive-class setting, probably that's not accuracy.</p>
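One hedged way to check this is to widen the grid past the defaults and compare the mean train-fold and test-fold scores from `cv_results_`; the data and grid below are illustrative, not the asker's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Imbalanced toy data, roughly in the spirit of the question.
X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)

# Include genuinely unconstrained settings (max_depth=None, leaves of 1)
# so that overfit combinations are actually in the search space.
grid = {'max_depth': [5, None], 'min_samples_leaf': [1, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid,
                      scoring='f1', cv=5, return_train_score=True)
search.fit(X, y)

# A large gap between train-fold and test-fold scores flags combinations
# that overfit, regardless of which one the search ultimately selects.
gap = (np.asarray(search.cv_results_['mean_train_score'])
       - np.asarray(search.cv_results_['mean_test_score']))
```

If the high-capacity combinations still win on the test folds despite a large gap, the selection is legitimate in the sense described above.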
130
hyperparameter tuning
Hyperparameter Tuning in Machine Learning
https://datascience.stackexchange.com/questions/29962/hyperparameter-tuning-in-machine-learning
<p>What is the difference between Hyper-parameter Tuning and k-NN algorithm? Is k-NN also a type of Hyper-parameter tuning?</p>
<p>In the kNN algorithm you only try to find a suitable value of the parameter k, while other models may have many parameters that can be modified. Normal parameters are optimized by the loss function during training; hyperparameter tuning lets you set the various parameters that must be fixed before training so as to get the best model. Its two methods, <strong><em>grid search and random sampling</em></strong>, tend to work well.</p> <p><strong>Grid method</strong>: impose a grid on the possible space of a hyperparameter, then go over each cell of the grid one by one and evaluate your model against the values from that cell. The grid method tends to waste resources trying out parameter values that make no sense at all.</p> <p><strong>Random sampling method</strong>: with random sampling, we have a high probability of finding a good set of parameters quickly. After sampling at random for a while, we can zoom into the area indicative of a good set of parameters.</p> <p>Random sampling allows an efficient search of the hyperparameter space. But sampling at random does not guarantee uniformity over the range of valid values; therefore, it is important to pick an appropriate scale.</p> <p>You can read this: <a href="http://datalya.com/blog/2017/hyperparameter-tuning-of-deep-learning-algorithm" rel="nofollow noreferrer">Hyperparameter Tuning of Deep Learning Algorithm</a></p>
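The point about scale can be sketched with scikit-learn's `RandomizedSearchCV`: drawing a regularization strength log-uniformly covers each order of magnitude evenly, where a uniform draw would almost never try small values. The dataset and ranges below are illustrative:

```python
from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Random sampling of C with a log-uniform prior over [1e-4, 1e2]:
# each decade (1e-4..1e-3, ..., 1e1..1e2) is equally likely to be tried.
search = RandomizedSearchCV(
    LogisticRegression(max_iter=5000),
    {'C': loguniform(1e-4, 1e2)},
    n_iter=10, cv=3, random_state=0)
search.fit(X, y)
best_C = search.best_params_['C']
```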
131
hyperparameter tuning
Is hyperparameter tuning more affected by the input data, or by the task?
https://datascience.stackexchange.com/questions/57364/is-hyperparameter-tuning-more-affected-by-the-input-data-or-by-the-task
<p>I'm working on optimizing the hyperparameters for several ML models (FFN, CNN, LSTM, BiLSTM, CNN-LSTM) at the moment, and running this alongside another experiment examining which word embeddings are best to use on the task of binary text classification.</p> <p>My question is: should I decide on which embeddings to use before I tune the hyperparameters, or can I decide on the best hyperparameters and then experiment with the embeddings? The task remains the same in both cases.</p> <p>In other words, is hyperparameter tuning more affected by the task (which is constant) or by the input data?</p>
<blockquote> <p>In other words, is hyperparameter tuning more affected by the task (which is constant) or by the input data?</p> </blockquote> <p>It's correct that the task is constant, but hyper-parameters are usually considered specific to a particular learning algorithm, or to a method in general. In a broad sense the method may include the type of algorithm, its hyper-parameters, which features are used (in your case which embeddings), etc. </p> <p>The performance depends on both the data and the method (in a broad sense), and since hyper-parameters are part of the method, there's no guarantee that the optimal hyper-parameters stay the same when any part of the method is changed, <em>even if the data doesn't change</em>.</p> <p>So for optimal results it's better to tune the hyper-parameters for every possible pair of ML model and word embeddings. You can confirm this experimentally: it's very likely that the selected hyper-parameters will differ when you change any part of the method.</p>
132
hyperparameter tuning
Validation set after hyperparameter tuning
https://datascience.stackexchange.com/questions/89323/validation-set-after-hyperparameter-tuning
<p>Let's say I'm comparing a few models, and for my dataset I'm using a train/validation/test split rather than cross validation. Let's say I'm completely done with parameter tuning for one of them and want to evaluate it on the test set. Should I train a new model on the training and validation datasets combined, with the best configuration, or should I just run the same model on the test data?</p>
133
hyperparameter tuning
How to combine preprocessor/estimator selection with hyperparameter tuning using sklearn pipelines?
https://datascience.stackexchange.com/questions/106568/how-to-combine-preprocessor-estimator-selection-with-hyperparameter-tuning-using
<p>I'm aware of how to use <code>sklearn.pipeline.Pipeline()</code> for simple and slightly more complicated use cases alike. I know how to set up pipelines for homogeneous as well as heterogeneous data, in the latter case making use of <code>sklearn.compose.ColumnTransformer()</code>.</p> <p>Yet, in practical ML one must oftentimes experiment not only with a large set of model hyperparameters, but also with a large set of potential preprocessor classes and different estimators/models.</p> <p>My question is a dual one:</p> <ol> <li>What would be the preferred way to set up a pipeline where the selection of text vectorizers is treated as an additional hyperparameter for grid or randomized search?</li> <li>Additionally, what would be the preferred way to set up a pipeline where the selection of multiple models can also be treated as an additional hyperparameter? What about optimizing the model-specific hyperparameters in this case?</li> </ol> <p>In the first case a common use case is text vectorization: treating the choice of <code>CountVectorizer()</code> or <code>TfidfVectorizer()</code> as a hyperparameter to be optimized.</p> <p>In the second case a practical use case could be selecting between various algorithms or, in the case of multiclass classification, whether to use <code>OneVsOneClassifier()</code> or <code>OneVsRestClassifier()</code>.</p> <p>I understand that this might be exactly what AutoML solutions have been developed for. I have heard of out-of-the-box AutoML solutions that can do automatic model selection with hyperparameter tuning, but I have no experience with any of them, so I don't know if they indeed provide an answer to the general topics I described in this post.</p>
<p>Some pure scikit approaches:</p> <ul> <li><p>When pre-processing relates to data balancing &amp; sampling strategies, consider using <a href="https://imbalanced-learn.org/stable/" rel="nofollow noreferrer">Imbalanced-Learn</a> components (e.g. <code>RandomUnderSampler</code>) that you embed right into your pipelines. This lets you hyper-tune their parameters too.</p> </li> <li><p>Rely on the <strong>passthrough</strong> functionality of grid search when deciding whether certain pre-processing steps are needed at all. This however cannot express the case where the step is a required pre-condition of another step (e.g. StandardScaler + MLPClassifier).</p> </li> <li><p>Consider using <a href="https://scikit-optimize.github.io/stable/" rel="nofollow noreferrer">scikit-optimize</a>'s <strong>BayesSearchCV</strong> strategy to walk through the parameter space based on previous runs, rather than exhaustively or randomly like <em>GridSearchCV</em> or <em>RandomizedSearchCV</em> do. When tuning many parameters this may converge faster.</p> </li> </ul> <p>⚠️ In practice I find it is often extremely time- &amp; computation-expensive to have full end-to-end pipelines that try to learn everything (parameters, metrics, model types, normalization stages, features, architectures, etc.), so hyper-tune what matters most.</p>
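For the vectorizer/model-selection part of the question, plain scikit-learn also works: a pipeline step can itself be listed as a searchable parameter, and a list of sub-grids keeps model-specific parameters with their model. A small sketch with an illustrative toy corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

texts = ["good movie", "bad movie", "great film", "awful film"] * 10
labels = [1, 0, 1, 0] * 10

pipe = Pipeline([('vec', CountVectorizer()), ('clf', LogisticRegression())])

# Each dict is its own sub-grid: the vectorizer choice is searched in both,
# while C stays with LogisticRegression and alpha with MultinomialNB.
param_grid = [
    {'vec': [CountVectorizer(), TfidfVectorizer()],
     'clf': [LogisticRegression(max_iter=1000)],
     'clf__C': [0.1, 1.0]},
    {'vec': [CountVectorizer(), TfidfVectorizer()],
     'clf': [MultinomialNB()],
     'clf__alpha': [0.5, 1.0]},
]

search = GridSearchCV(pipe, param_grid, cv=2)
search.fit(texts, labels)
```

The same pattern covers `OneVsOneClassifier` vs `OneVsRestClassifier`: list the two wrapped estimators as alternatives for the `'clf'` step.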
134
hyperparameter tuning
Hyperparameter tuning for stacked models
https://datascience.stackexchange.com/questions/41336/hyperparameter-tuning-for-stacked-models
<p>I'm reading the following kaggle post for learning how to incorporate model stacking </p> <p><a href="http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/" rel="nofollow noreferrer">http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/</a> in ML models. The structure behind constructing the 5 folds and creating out of sample predictions on the training data makes sense for the purpose of building the meta model or the model on top of the base models. However i'm not sure how it uses hyper parameter tuning especially for the base models. </p> <p>So the concept of getting out of sample predictions makes sense to me. We essentially for each of the 5 folds use the other 4 folds to train and then predict on the fifth. So how do we actually hyper parameter tune the base models on this same dataset without adding bias, it's seems to me that this is not possible? </p> <p>Note i'm making the assumption that there is no more data available to use. I'd appreciate any help!</p>
<p>Do not confuse dividing the data into k folds with cross validation.</p> <p>You can use the 4 folds (the training data) to optimize the base classifiers. You can also find the best hyperparameters by applying cross validation on your training data, re-train using all the training data (the 4 folds), and then predict on the last fold to generate the meta data.</p> <p>After you finish generating the meta data, you can use all the data (the 5 folds) to train the base learners. The closer the final set of base classifiers is to the ones used to generate the meta data, the better; this is why the less data you have, the larger the k you should use, and vice versa.</p> <p>Finally, you use the meta data to train the meta classifier.</p> <p>Here is one last note to think about: using cross validation to optimize base learners may not be very beneficial, and here is why. If you have access to N different training algorithms and you can use cross validation to optimize them and choose the best one (the one that leads to a very low bias error), then there is a high probability that you don't need stacking at all. Stacking is most beneficial when none of these N algorithms can reach zero bias error (if you have variance error, use bagging instead), which is a case that may confuse the cross validation procedure. The other issue is that using cross validation to optimize the algorithms may lead to different algorithms with similar biases, which reduces the benefit of the stacking technique.</p> <p>You should know that stacking is one of the trickiest ensembles, which is why it is not as well studied in the literature compared to bagging and boosting. On the other hand, it has proven to be very important, especially for practical purposes.</p>
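A minimal sketch of the out-of-fold recipe with scikit-learn (`cross_val_predict` generates the meta data; the base and meta models here are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
base_models = [DecisionTreeClassifier(random_state=0),
               RandomForestClassifier(n_estimators=50, random_state=0)]

# Out-of-fold predictions: each row of meta_X was produced by a model that
# never saw that row during training, so nothing leaks into the meta learner.
meta_X = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method='predict_proba')[:, 1]
    for m in base_models])

# Re-fit the base models on all the data, then train the meta classifier
# on the out-of-fold meta data.
for m in base_models:
    m.fit(X, y)
meta_clf = LogisticRegression().fit(meta_X, y)
```

Any per-model hyperparameter search would happen inside the training folds of `cross_val_predict`'s splits, as the answer describes.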
135
hyperparameter tuning
Hyperparameter tuning and cross validation
https://datascience.stackexchange.com/questions/61620/hyperparameter-tuning-and-cross-validation
<p>I have some confusion about proper usage of cross-validation to</p> <ol> <li>tune hyperparameters and</li> <li>evaluate estimator performance and generalizeability.</li> </ol> <p>As I understand it, this would be the process you would follow:</p> <ol> <li>Split your full dataset into a training and test set (Python's <code>train_test_split</code>)</li> <li>Use cross-validation to build a model and tune hyperparameters on the <strong>training</strong> set (<code>GridSearchCV</code>)</li> <li>Evaluate the best estimator and assess generalizeability using cross-validation on the <strong>test</strong> set (<code>cross_val_score</code>)</li> </ol> <p>I've looked through <a href="https://scikit-learn.org/stable/modules/cross_validation.html" rel="nofollow noreferrer">sklearn's cross-validation</a> documentation, and it recommends still having a test set for final evaluation.</p> <blockquote> <p>A test set should still be held out for final evaluation, but the validation set is no longer needed when doing CV.</p> </blockquote> <p><a href="https://scikit-learn.org/stable/modules/grid_search.html#model-selection-development-and-evaluation" rel="nofollow noreferrer">sklearn's grid-search</a> information recommends:</p> <blockquote> <p>When evaluating the resulting model it is important to do it on held-out samples that were not seen during the grid search process: it is recommended to split the data into a development set (to be fed to the GridSearchCV instance) and an evaluation set to compute performance metrics.</p> <p>This can be done by using the train_test_split utility function.</p> </blockquote> <p>My issue is that I often see conflicting work (for example, just <code>cross_val_score</code> on the entire dataset, only <code>GridSearchCV</code> on the entire dataset, or just a <code>train_test_split</code> variant), and I am hoping to understand what are best practices and clarify where I may be wrong.</p> <p><strong>Edit:</strong> This Stack Overflow <a 
href="https://stackoverflow.com/a/49165571/9660800">answer</a> seems to answer my question.</p>
<p>Some popular ways of splitting the data so that the user can validate a model:</p> <ol> <li>Train-test (most popular)</li> <li>Train-test-validation</li> <li>Train-test-development</li> <li>Train-test-dev-val</li> </ol> <p>Every approach has its own pros and cons. There is no one-size-fits-all recipe for getting a perfect model. The choice is typically made by the developer considering the following factors:</p> <ol> <li>Size of the data</li> <li>Diversity of the data</li> <li>Computation budget</li> <li>Efficiency</li> <li>Necessity</li> </ol> <p>That said, I would recommend k-fold CV on top of the basic train-test split as the best way to go.</p> <p>Thank you.</p>
136
hyperparameter tuning
Hyperparameter Tuning with Simulated Data
https://datascience.stackexchange.com/questions/47608/hyperparameter-tuning-with-simulated-data
<p>I'm trying to create a SVM classifier which can predict some fault, and to train it I'm using <strong>simulated examples</strong> of the fault. Of course, the simulations are not perfect, but they appear to be good enough since I get reasonable results when predicting on the real examples. </p> <p>What is the best way to tune the hyperparameters in order to maximise the performance on the real data? Using <code>GridSearchCV</code> does a good job of maximising the accuracy on a subset of the simulated data, but that doesn't always mean good results on the real data. </p> <p>Any general tips on working with simulated training data are also welcome. </p>
137
hyperparameter tuning
Why does hyperparameter tuning occur on validation dataset and not at the very beginning?
https://datascience.stackexchange.com/questions/111372/why-does-hyperparameter-tuning-occur-on-validation-dataset-and-not-at-the-very-b
<p>Despite doing/using it a few times, I'm still slightly confused by the use of a validation set for hyper parameter tuning.</p> <p>As far as I can tell, I choose a model, train it on training data, assess performance on training data, then do hyper parameter tuning assessing model performance on validation data, then choose the best model and test this on test data.</p> <p>In order to do this, I basically need to pick a model at random for training data. What I don't understand is I don't know which model is going to be best at the start anyway. Let's say I think neural networks and random forests may be useful for my problem. So why don't I start searching with a general e.g. Neural Network architecture, random forest architecture, and from the very beginning, assess which model is best on a small portion of data varying all hyper parameters at the start anyway.</p> <p>Basically why choose a human based &quot;guess&quot; to do the training, then hyperparameter tune in validation phase? Why not &quot;start with total uncertainty&quot;, and do a broad search, assess performance of a wide range of hyperparameters from a general neural network or random forests or ... architecture, from the very beginning?</p> <p>Thanks!</p>
<p>You perform hyperparameter tuning using the train dataset. The validation dataset is used to make sure the model you trained is not overfit. The issue here is that the model has already &quot;seen&quot; the validation dataset, and it is possible that the model won't perform as expected against new/unseen data. That's why you need an additional dataset, namely the test dataset.</p>
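The three roles can be sketched with two chained splits; the 60/20/20 proportions and the kNN model below are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# First carve out 20% as the final, untouched test set ...
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
# ... then split the remainder into train (60%) and validation (20%).
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, random_state=0, stratify=y_tmp)

# Hyperparameters are chosen by performance on the validation set only.
val_scores = {}
for k in (1, 5, 15):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    val_scores[k] = model.score(X_val, y_val)
best_k = max(val_scores, key=val_scores.get)

# The test set is touched once, for the final estimate on unseen data.
final = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
test_score = final.score(X_test, y_test)
```

Nothing stops the search loop from iterating over several model families at once, as the asker suggests; the validation set plays the same role either way.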
138
hyperparameter tuning
Train/val/test approach for hyperparameter tuning
https://datascience.stackexchange.com/questions/118574/train-val-test-approach-for-hyperparameter-tuning
<p>When looking to train a model, does it make sense to have a 60-20-20 train-val-test split: first doing hyperparameter tuning over the training dataset, using the validation set to pick the best model, then training over train+val, with the final test occurring on the test set?</p>
<p>I would say that this depends heavily on the type of data that you have and the task at hand. If the available dataset is sufficiently large, you can add a larger validation and test set. If you only have limited data available, you might consider decreasing the size of the validation and test set in order to improve the model by providing it with more data for training.</p> <p>But generally speaking, without having any further information about your case, the approach is fine.</p>
139
hyperparameter tuning
Hyperparameter Tuning Guidelines Across Lots of Models
https://datascience.stackexchange.com/questions/114064/hyperparameter-tuning-guidelines-across-lots-of-models
<p>I am performing a set of experiments in which I need to tune a fairly small set of hyperparameters, but over a very large space of models trained on relatively related datasets, and I'm trying to limit the number of trials I need to run in order to find <em>somewhat</em> optimal hyperparameter settings for this space.</p> <p>To clarify, I am deconstructing a number of classification datasets which have a binary label space but also have a categorical variable, for example, sentiment classification of product reviews which can then be further split by the type of product; so one model would be sentiment classification of Video Game reviews and another would be sentiment classification of Movie reviews, and these datasets would be split into separate binary classification models where I try to predict the sentiment only on that subdomain.</p> <p>What I am asking is: are there general best practices to avoid evaluating <strong>every</strong> combination of hyperparameters over <strong>every</strong> dataset? I know that there are options such as random over grid search, or even something fancier like Bayesian optimisation, or narrowing down the space by performing one technique and identifying groups of poor parameters, etc. But even with a small pool of combinations that I'm willing to try, with each dataset and each task the number of trials explodes.</p> <p>Does it make sense to evaluate every combination over every task, or is it reasonable to pick a random subset of tasks, find the best parameter settings through some technique, and just use those parameters going forward, or is this considered to be a bit of an optimistic assumption?
My idea behind this is that the datasets are sufficiently related that it is reasonable to assume a set of hyperparameters that generalises well to, say, five tasks will likely generalise to others.</p> <p>I know this question may be largely down to preference, but I'm essentially trying to find out best practices and take some shortcuts without taking ones that are potentially detrimental to my research process.</p>
140
hyperparameter tuning
Is it a problem to use the test dataset for the hyperparameter tuning, when I want to compare 2 classification algorithms on the 10 different dataset?
https://datascience.stackexchange.com/questions/123982/is-it-a-problem-to-use-the-test-dataset-for-the-hyperparameter-tuning-when-i-wa
<p>I know that we should use the validation set to perform hyperparameter tuning, and that the test dataset is no longer really a test if it is used for hyperparameter tuning. But is this a problem if I want to compare the performance of 2 algorithms (e.g., Random Forest and XGBoost) across 10 different datasets, where each time I am using the test data for tuning? I believe that if they are trained and tested under the same conditions, the final performance analysis should be an accurate representation of which algorithm performs better on these datasets. Or am I mistaken?</p>
<p>Seems like there is something flawed in the procedure here. If you use the test data set for tuning, then what do you use for testing performance?</p> <p>In general, the models should not get any information from the test set. If models are exposed to the test set you will generally tend to conclude the more flexible model has better performance when it may tend to overfit the training data and underperform simpler models if the test set is isolated during the model fitting procedure.</p>
141
hyperparameter tuning
Hyperparameter tuning with Bayesian-Optimization
https://datascience.stackexchange.com/questions/89047/hyperparameter-tuning-with-bayesian-optimization
<p>I'm using LightGBM for a regression problem and here is my code.</p> <pre><code>def bayesion_opt_lgbm(X, y, init_iter=5, n_iter=10, random_seed=32, seed=100,
                      num_iterations=50,
                      dtrain=lgb.Dataset(data=X_train, label=y_train)):

    def lgb_score(y_preds, dtrain):
        labels = dtrain.get_labels()
        return 'r2', r2_score(labels, y_preds), True

    def hyp_lgb(num_leaves, feature_fraction, bagging_fraction, max_depth,
                min_split_gain, min_child_weight):
        params = {'application': 'regression',
                  'num_iterations': 'num_iterations',
                  'early_stopping_round': 50,
                  'learning_rate': 0.05,
                  'metric': 'lgb_r2_score'}
        params['num_leaves'] = int(round(num_leaves))
        params['feature_fraction'] = max(min(feature_fraction, 1), 0)
        params['bagging_fraction'] = max(min(bagging_fraction, 1), 0)
        params['max_depth'] = int(round(max_depth))
        params['min_split_gain'] = min_split_gain
        params['min_child_weight'] = min_child_weight
        cv_results = lgb.cv(params, train_set=dtrain, nfold=5,
                            stratified=False, seed=seed,
                            categorical_feature=[], verbose_eval=None,
                            feval=lgb_r2_score)
        print(cv_results)
        return np.max(cv_results['r2-mean'])

    bounds = {'num_leaves': (80, 100),
              'feature_fraction': (0.1, 0.9),
              'bagging_fraction': (0.8, 1),
              'max_depth': (5, 10, 15, 20),
              'min_split_gain': (0.001, 0.01),
              'min_child_weight': (10, 20)}

    optimizer = BayesianOptimization(f=hyp_lgb, pbounds=bounds, random_state=32)
    optimizer.maximaze(init_points=init_iter, n_iter=n_iter)

bayesion_opt_lgbm(X_train, y_train)
</code></pre> <p><em>When I run my code I get an error like the one below. Please help me find what I am missing.</em></p> <pre><code>TypeError                                 Traceback (most recent call last)
TypeError: float() argument must be a string or a number, not 'tuple'

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
&lt;ipython-input-57-86f7d803c78d&gt; in &lt;module&gt;()
     40 #Optimize
     41 optimizer.maximaze(init_points= init_iter, n_iter = n_iter)
---&gt; 42 bayesion_opt_lgbm(X_train, y_train)
     43

2 frames
/usr/local/lib/python3.6/dist-packages/bayes_opt/target_space.py in __init__(self, target_func, pbounds, random_state)
     47         self._bounds = np.array(
     48             [item[1] for item in sorted(pbounds.items(), key=lambda x: x[0])],
---&gt; 49             dtype=np.float
     50         )
     51
ValueError: setting an array element with a sequence.
</code></pre>
<p>The <code>pbounds</code> must all be pairs; you cannot specify a list of options for <code>max_depth</code>.</p> <p>The package cannot deal with discrete hyperparameters very directly; see section 2, &quot;Dealing with discrete parameters&quot;, of <a href="https://github.com/fmfn/BayesianOptimization/blob/master/examples/advanced-tour.ipynb#2.-Dealing-with-discrete-parameters" rel="nofollow noreferrer">their &quot;advanced tour&quot; notebook</a> about this.</p>
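A hedged sketch of the workaround (plain Python, without running the library itself): give `pbounds` a continuous `(low, high)` pair and discretize inside the objective. The returned score below is a placeholder, not a real CV result:

```python
# BayesianOptimization requires every pbounds entry to be a (low, high)
# pair, so a discrete choice like max_depth gets a continuous range
# instead of the tuple (5, 10, 15, 20) from the question.
pbounds = {'max_depth': (5, 20), 'min_child_weight': (10, 20)}

def objective(max_depth, min_child_weight):
    # Discretize inside the objective before training the model.
    max_depth = int(round(max_depth))
    min_child_weight = int(round(min_child_weight))
    # ... run lgb.cv with these values and return the CV score; the line
    # below is a placeholder so this sketch stays self-contained.
    return -abs(max_depth - 10) - abs(min_child_weight - 15)
```

The optimizer then explores a purely continuous box while the model only ever sees integer values.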
142
hyperparameter tuning
Worse performance after Hyperparameter tuning
https://datascience.stackexchange.com/questions/97440/worse-performance-after-hyperparameter-tuning
<p>I first construct a base model (using default parameters) and obtain the MAE.</p> <pre><code># BASELINE MODEL
rfr_pipe.fit(train_x, train_y)
base_rfr_pred = rfr_pipe.predict(test_x)
base_rfr_mae = mean_absolute_error(test_y, base_rfr_pred)
</code></pre> <p>MAE = 2.188</p> <p>Then I perform GridSearchCV to get the best parameters and the average MAE.</p> <pre><code># RFR GRIDSEARCHCV
rfr_param = {'rfr_model__n_estimators': [10, 100, 500, 1000],
             'rfr_model__max_depth': [None, 5, 10, 15, 20],
             'rfr_model__min_samples_leaf': [10, 100, 500, 1000],
             'rfr_model__max_features': ['auto', 'sqrt', 'log2']}

rfr_grid = GridSearchCV(estimator=rfr_pipe, param_grid=rfr_param,
                        n_jobs=-1, cv=5,
                        scoring='neg_mean_absolute_error')
rfr_grid.fit(train_x, train_y)

print('best parameters are:-', rfr_grid.best_params_)
print('best estimator is:- ', rfr_grid.best_estimator_)
print('best mae is:- ', -1 * rfr_grid.best_score_)
</code></pre> <p>MAE = 2.697</p> <p>Then I fit the &quot;best parameters&quot; obtained to get an optimized MAE, but the results are always worse than the base model MAE.</p> <pre><code># OPTIMIZED RFR MODEL
opt_rfr = RandomForestRegressor(random_state=69, criterion='mae',
                                max_depth=None, max_features='auto',
                                min_samples_leaf=10, n_estimators=100)

opt_rfr_pipe = Pipeline(steps=[('rfr_preproc', preproc),
                               ('opt_rfr_model', opt_rfr)])

opt_rfr_pipe.fit(train_x, train_y)
opt_rfr_pred = opt_rfr_pipe.predict(test_x)
opt_rfr_mae = mean_absolute_error(test_y, opt_rfr_pred)
</code></pre> <p>MAE = 2.496</p> <p>Not just once but every time, and in most of the models (linear regression, random forest regressor)! I guess there is something fundamentally wrong with my code or else this problem wouldn't arise every time. Any idea what might be causing this?</p>
<p>Apparently the reason I was getting worse performance was that I was using cross validation during hyperparameter tuning but not when I built the base model. Hence the issue.</p> <p>Another mistake was not scaling my data!</p> <p>Typical noob mistakes!</p>
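One hedged way to avoid that asymmetry is to score the default model under exactly the same cross-validation protocol and preprocessing pipeline as the tuned one; the data, pipeline, and grid here are illustrative, not the asker's:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=200, noise=10, random_state=0)
pipe = Pipeline([('scale', StandardScaler()),
                 ('rfr', RandomForestRegressor(n_estimators=50,
                                               random_state=0))])

# Baseline: default hyperparameters, but the same pipeline and the
# same 5-fold CV the grid search will use.
base_mae = -cross_val_score(pipe, X, y, cv=5,
                            scoring='neg_mean_absolute_error').mean()

# Tuned: identical protocol, so the two MAE numbers are comparable.
search = GridSearchCV(pipe, {'rfr__min_samples_leaf': [1, 10]},
                      cv=5, scoring='neg_mean_absolute_error')
search.fit(X, y)
tuned_mae = -search.best_score_
```

Because the default setting (`min_samples_leaf=1`) is itself in the grid, the tuned score can never look worse than the baseline under this protocol.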
143
hyperparameter tuning
Hyperparameter tuning in multiclass classification problem: which scoring metric?
https://datascience.stackexchange.com/questions/30876/hyperparameter-tuning-in-multiclass-classification-problem-which-scoring-metric
<p>I'm working with an imbalanced multi-class dataset. I try to tune the parameters of a <code>DecisionTreeClassifier</code>, <code>RandomForestClassifier</code> and a <code>GradientBoostingClassifier</code> using a randomized search and a Bayesian search.</p> <p>For now, I used just <code>accuracy</code> for the scoring, which is not really applicable for assessing my models' performance (which I'm not doing here). Is it also unsuitable for parameter tuning?</p> <p>I found that for example <code>recall_micro</code> and <code>recall_weighted</code> yield the same results as <code>accuracy</code>. This should be the same for other metrics like <code>f1_micro</code>.</p> <p><strong>So my question is</strong>: Is the scoring relevant for tuning? I see that <code>recall_macro</code> leads to lower results since it doesn't take the number of samples per class into account. So which metric should I use?</p>
<p>You should use the same metric to evaluate and to tune the classifiers. If you will evaluate the final classifier using accuracy, then you must use accuracy to tune the hyperparameters. If you think you should use macro-averaged F1 as the final evaluation of the classifier, use it to tune them as well.</p> <p>As an aside, for multiclass problems I have not yet heard any convincing argument not to use accuracy, but that is just me.</p>
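A minimal sketch of keeping the tuning metric aligned with the evaluation metric, here macro-averaged F1 on illustrative imbalanced multi-class data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Imbalanced three-class toy data, in the spirit of the question.
X, y = make_classification(n_samples=400, n_classes=3, n_informative=4,
                           weights=[0.7, 0.2, 0.1], random_state=0)

# Tune with the same scorer you will report; swapping in
# scoring='accuracy' (or the equivalent micro-averaged metrics)
# can select different hyperparameters on imbalanced classes.
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      {'max_depth': [3, 5, None]},
                      scoring='f1_macro', cv=5)
search.fit(X, y)
```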
144
hyperparameter tuning
Metric to use to choose between different models - Hyperparameters tuning
https://datascience.stackexchange.com/questions/85456/metric-to-use-to-choose-between-different-models-hyperparameters-tuning
<p>I'm building a Feedforward Neural Network with Pytorch and doing hyperparameters tuning using Ray Tune. I have train, validation and test set, train and validation used during the training procedure. I have different versions of the model (different learning rates and numbers of neurons in the hidden layers) and I have to choose the best model. But I'm unsure on which metric should I use to choose the best model. Basically I don't know the XXXX in this line of code:</p> <pre><code>analysis.get_best_config(metric='XXXX', mode='min') </code></pre> <p>Should it be test loss or validation loss?</p>
<p>Essentially, the function of your <code>testset</code> is to evaluate the performance of your model on new data. It mimics the situation of your model being put into production. The <code>validation set</code> is used for optimizing your algorithm.</p> <p>Personally I would recommend tuning your algorithm using your <code>validation set</code> and using the hyperparameters of the training epoch with the lowest <strong>validation loss</strong>. The accuracy of the model using these hyperparameters on your <code>testset</code> can be used to estimate the performance of your model in the 'real world'.</p>
145
hyperparameter tuning
Hyperparameter Tuning in Random Forest Model
https://datascience.stackexchange.com/questions/81961/hyperparameter-tuning-in-random-forest-model
<p>I'm new to the machine learning field, I'm learning ML models by practice, and I'm facing an issue while using a machine learning model.</p> <p>While implementing the <code>RandomForestClassifier</code> model with hyperparameter tuning, it takes too much time to predict the output. I'm also using <code>GridSearchCV</code> on it, so it takes even more time.</p> <p>Is there any way to solve this problem?</p> <p>Or, can the <code>Google Colab</code> or <code>Kaggle Notebook</code> editors perform better than a <code>Jupyter Notebook</code>?</p>
<p>In Google Colab, you can enable the GPU in the runtime settings:</p> <pre><code>Runtime &gt; Change runtime type and select GPU as Hardware accelerator. </code></pre>
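Note that a GPU will not speed up scikit-learn's `RandomForestClassifier`. What usually helps is speeding up the search itself: a rough sketch on synthetic data (the parameter grid here is made up) using `RandomizedSearchCV`, which evaluates a fixed number of candidates instead of the full grid, plus `n_jobs=-1` to use all CPU cores.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)

# hypothetical search space: 27 combinations in total
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=5,      # try only 5 random candidates instead of all 27
    cv=3,
    n_jobs=-1,     # parallelise fits across CPU cores
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```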
146
hyperparameter tuning
Minimizing overfitting when doing hyperparameter Tuning
https://datascience.stackexchange.com/questions/60230/minimizing-overfitting-when-doing-hyperparameter-tuning
<p>Generally, when using Sklearn's GridSearchCV (or RandomizedSearchCV), we get the model with the best cross-validation score, even if that model overfits a little. How can we estimate the generalization error efficiently and force GridSearchCV to return the model with the minimum generalization error?</p>
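One practical way to get at this, sketched below on synthetic data (the 0.05 gap tolerance is an arbitrary choice for illustration): `GridSearchCV` can report training scores alongside validation scores via `return_train_score=True`, and you can then prefer candidates whose train/validation gap is small rather than blindly taking `best_params_`.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)

grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    {"max_depth": [2, 5, None]},
    cv=5,
    return_train_score=True,   # also record training-fold scores
)
grid.fit(X, y)

train = grid.cv_results_["mean_train_score"]
valid = grid.cv_results_["mean_test_score"]
gap = train - valid            # large gap = overfitting

# pick the best validation score among candidates with a small gap
# (if none qualifies, this falls back to index 0 -- it is only a sketch)
ok = gap < 0.05
best = int(np.argmax(np.where(ok, valid, -np.inf)))
print(grid.cv_results_["params"][best])
```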
147
hyperparameter tuning
Hyperparameter tuning does not improve accuracy?
https://datascience.stackexchange.com/questions/73526/hyperparameter-tuning-does-not-improve-accuracy
<p>I am working on the Titanic dataset; I achieved 92% accuracy using random forest. However, the accuracy score dropped to 89% after I tuned it using <code>GridSearchCV</code>. Now, I am wondering whether this is caused by the imbalanced dataset, since only 342 out of 891 passengers survived the disaster. Would appreciate the clarification.</p>
<p>Welcome! You haven't given us enough information to be able to diagnose this issue completely, but you should check your grid search code to see how each cross-validated model is being trained and note which parameters are different from those used with the 92% model. </p> <p>If it has something to do with the unbalanced data, it's because you're not stratifying your sampling of the dataset with respect to the labels (making sure training and validation sets have the same ratio of classes; in this case, 38% survive, 62% don't survive).</p> <p>I would guess that something is going on with your cross validation process. If I had to guess specifics (and again, we can't say for sure given what you've posted here), I would say that something you're doing in the CV probably results in those models not using as much training data as the 92% model.</p>
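A minimal sketch of the stratification advice above, with synthetic data standing in for the Titanic set (the 62/38 weights mimic its class ratio): `stratify=y` for the hold-out split and an explicit `StratifiedKFold` for the grid search keep the class ratio the same in every subset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     train_test_split)

# ~62% class 0 / ~38% class 1, like survived vs. not survived
X, y = make_classification(n_samples=891, weights=[0.62, 0.38],
                           random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=0  # preserve class ratio
)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [50, 100]},
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)
grid.fit(X_tr, y_tr)
print(grid.best_score_, grid.score(X_te, y_te))
```

(For classifiers, `GridSearchCV` with an integer `cv` already uses stratified folds by default; making it explicit just documents the intent.)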
148
hyperparameter tuning
How to optimally visualise hyperparameter tuning?
https://datascience.stackexchange.com/questions/114645/how-to-optimally-visualise-hyperparameter-tuning
<p>I am working on a basic neural network and want to show the performance of the model with respect to different parameters.<a href="https://i.sstatic.net/qvcKB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qvcKB.png" alt="Here is what my dataframe looks like." /></a></p> <p>I need help with suggestions for impactful visualizations. I cannot make contour plots, as far as I understand, because for that I would have to define the loss as a function of the parameters. Please suggest some interesting plots.</p>
<p>You can use a &quot;Parallel Coordinates&quot; plot, in which</p> <ul> <li>A model is represented by a line</li> <li>Model performance is highlighted in colour</li> <li>Model hyper-parameters are presented in axes along with their respective values.</li> </ul> <p>I attach below an example which you can generate either using Tensorboard or tools like Weights &amp; Biases.</p> <p>You can find <a href="https://www.tensorflow.org/tensorboard/hyperparameter_tuning_with_hparams" rel="nofollow noreferrer">here</a> and <a href="https://docs.wandb.ai/ref/app/features/panels/parallel-coordinates" rel="nofollow noreferrer">here</a> resources on how to implement this representation for each of the tools above, respectively.</p> <p>Hope this helps, good luck!</p> <p><a href="https://i.sstatic.net/Y1QdY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y1QdY.jpg" alt="enter image description here" /></a></p>
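If you would rather stay inside pandas/matplotlib than use TensorBoard or W&amp;B, a rough sketch of the same idea is below. The column names and values are made up to mirror a typical tuning dataframe; performance is binned into a categorical column so each line (model) gets a colour. Note that in real use you would normalise the axes, since hyperparameters live on very different scales.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import pandas as pd
from pandas.plotting import parallel_coordinates

# hypothetical tuning results: one row per trained model
df = pd.DataFrame({
    "learning_rate": [0.1, 0.01, 0.001, 0.1, 0.01],
    "hidden_units":  [32, 32, 64, 64, 128],
    "dropout":       [0.0, 0.2, 0.2, 0.5, 0.0],
    "accuracy":      [0.81, 0.86, 0.84, 0.78, 0.88],
})

# bin performance so it can drive the line colours
df["bucket"] = pd.cut(df["accuracy"], bins=[0, 0.8, 0.85, 1.0],
                      labels=["low", "mid", "high"])

ax = parallel_coordinates(df.drop(columns="accuracy"), "bucket",
                          colormap="viridis")
ax.figure.savefig("hparams.png")
```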
149
imbalanced datasets
CART classification for imbalanced datasets with R
https://datascience.stackexchange.com/questions/70116/cart-classification-for-imbalanced-datasets-with-r
<p>Hey guys, I need your help for a university project. The main task is to analyze the effects of over-/under-sampling on an imbalanced dataset. But before we can even start with that, our task sheet says that we 1) have to find/create imbalanced datasets and 2) fit those with a binary classification model like CART. So my questions would be: where do I find such imbalanced datasets? And how do I fit those datasets with CART, and how does that help in regard to over-/under-sampling?</p> <p>That's my whole first try.</p> <pre><code># Load the required libraries
library(dplyr)    # mutate
library(caTools)  # sample.split
library(rpart)    # CART

# CART - load the dataset
setwd("C:\\Users\\..\\Dropbox\\Uni\\Präsentation\\Datensätze")
add &lt;- "data1.csv"
df &lt;- read.csv(add)
head(df)   # first 6 rows
nrow(df)   # number of rows in the dataset

# CART - select the relevant columns
df &lt;- mutate(df, x = as.numeric(x), y = as.numeric(y), label = factor(label))
set.seed(123)
sample = sample.split(df$x, SplitRatio = 0.70)
train = subset(df, sample == TRUE)
test  = subset(df, sample == FALSE)

# grow the tree (classify the label from the remaining columns)
fit &lt;- rpart(label ~ ., data = train, method = "class")
printcp(fit)
plotcp(fit)
summary(fit)

# plot tree
plot(fit, uniform = TRUE, main = "Bla Bla Bla")
# text(fit, use.n = TRUE, all = TRUE, cex = .8)

# prune the tree --&gt; to avoid overfitting the data
pfit &lt;- prune(fit, cp = fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"])
plot(pfit, uniform = TRUE, main = "Pruned Classification Tree for Us")
</code></pre> <p>Why do I need to make such a decision tree and how does it help with over-/under-sampling?</p> <p>Help is much appreciated</p>
150
imbalanced datasets
Model assessment for EXTREMELY imbalanced dataset
https://datascience.stackexchange.com/questions/74014/model-assessment-for-extremely-imbalanced-dataset
<p>I am dealing with an extremely imbalanced dataset, with about 10,000 negative samples for each positive sample. I am now trying to come up with an adequate measurement of model accuracy but none seem to fit. Many places recommend the PR curve over the ROC curve for imbalanced datasets (e.g. <a href="https://datascience.stackexchange.com/questions/73757/the-most-informative-curve-for-imbalance-datasets">The most informative curve for imbalance datasets</a>) but it looks like all these recommendations are aimed at less imbalanced sets.</p> <p>Since the denominator for precision takes into account the number of false positives, more negative samples in the dataset means a smaller precision value.</p> <p>I noticed that my PR curve tends to look nice and informative as long as I keep the positive/negative ratio to around 1/10-20, but as more negative samples are taken into account the curve looks less and less like it should.</p> <p>My question is whether there's a better way to assess model performance for super-imbalanced datasets, or maybe I am missing something in my interpretation of the PR curve and its purpose.</p>
<p>I would only look at the <strong>precision/recall scores of the minority class</strong> (the positive class in your case).</p> <p>Checking the performance on the majority class seems quite meaningless, since it is quite easy to achieve very high scores there.</p> <p>Then, how to balance precision against recall with an <em>F-beta</em> score will depend on your specific use case.</p>
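A tiny worked example of the F-beta trade-off mentioned above, on made-up labels (3 positives, the model finds only one): `beta < 1` leans towards precision, `beta > 1` towards recall.

```python
from sklearn.metrics import fbeta_score, precision_score, recall_score

# toy minority-class labels: 3 positives, model recovers only the first
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

p = precision_score(y_true, y_pred)          # 1.0  (no false positives)
r = recall_score(y_true, y_pred)             # 1/3  (two positives missed)

f05 = fbeta_score(y_true, y_pred, beta=0.5)  # favours precision -> higher
f2 = fbeta_score(y_true, y_pred, beta=2.0)   # favours recall   -> lower
print(f05, f2)
```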
151
imbalanced datasets
What cost function and penalty are suitable for imbalanced datasets?
https://datascience.stackexchange.com/questions/3699/what-cost-function-and-penalty-are-suitable-for-imbalanced-datasets
<p>For an imbalanced data set, is it better to choose an L1 or L2 regularization?</p> <p>Is there a cost function more suitable for imbalanced datasets to improve the model score (<code>log_loss</code> in particular)? </p>
<p>So you ask <strong>how does class imbalance affect classifier performance under different losses?</strong> You can make a numeric experiment. </p> <p>I do binary classification by logistic regression. However, the intuition extends on the broader class of models, in particular, neural networks. I measure performance by cross-validated ROC AUC, because it is insensitive to class imbalance. I use an inner loop of cross validation to find the optimal penalties for L1 and L2 regularization on each dataset.</p> <pre><code>from sklearn.datasets import make_classification from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression, LogisticRegressionCV import matplotlib.pyplot as plt cvs_no_reg = [] cvs_lasso = [] cvs_ridge = [] imb = [0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01, 0.005, 0.002, 0.001] Cs = [1e-5, 3e-5, 1e-4, 3e-4, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1] for w in imb: X, y = make_classification(random_state=1, weights=[w, 1-w], n_samples=10000) cvs_no_reg.append(cross_val_score(LogisticRegression(C=1e10), X, y, scoring='roc_auc').mean()) cvs_ridge.append(cross_val_score(LogisticRegressionCV(Cs=Cs, penalty='l2'), X, y, scoring='roc_auc').mean()) cvs_lasso.append(cross_val_score(LogisticRegressionCV(Cs=Cs, solver='liblinear', penalty='l1'), X, y, scoring='roc_auc').mean()) plt.plot(imb, cvs_no_reg) plt.plot(imb, cvs_ridge) plt.plot(imb, cvs_lasso) plt.xscale('log') plt.xlabel('fraction of the rare class') plt.ylabel('cross-validated ROC AUC') plt.legend(['no penalty', 'ridge', 'lasso']) plt.title('Sensitivity to imbalance under different penalties') plt.show() </code></pre> <p><a href="https://i.sstatic.net/be35G.png" rel="noreferrer"><img src="https://i.sstatic.net/be35G.png" alt="enter image description here"></a></p> <p>You can see that under high imbalance (left-hand side of the picture) <strong>L1 regularization performs better</strong> than L2, and both better than no regularization. 
</p> <p>But if the imbalance is not so serious (the smallest class share is 0.03 and higher), all the 3 models perform equally well.</p> <p>As for the second question, <strong>what is a good loss function for imbalanced datasets</strong>, I will answer that <strong>log loss is good enough</strong>. Its useful property is that it doesn't make your model turn the probability of a rare class to zero, even if it is <em>very very</em> rare.</p>
152
imbalanced datasets
RFECV for feature selection for imbalanced dataset
https://datascience.stackexchange.com/questions/104897/rfecv-for-feature-selection-for-imbalanced-dataset
<p>I am new in machine learning and just learned about feature selection. In my project, I have a dataset with 89% being a majority class and 11% as the minority class. Also, I have 24 features. I opted to use Recursive Feature Elimination with Cross-Validation (RFECV in the scikit-learn package) to find the optimal number of features in the dataset. I also set the 'scoring' parameter to 'f1' since I am dealing with an imbalanced dataset. Furthermore, the estimator I used is the Random Forest classifier. After fitting the data, I had around 12 features with an f1 score of 0.94.</p> <p>Is using RFECV appropriate for imbalanced datasets?</p>
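For reference, a minimal sketch of the setup the question describes, on synthetic data (all sizes and estimator parameters here are made up to keep it fast): RFECV with a random forest estimator, `scoring='f1'`, and stratified folds so every fold keeps the 89/11 ratio.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# ~89% majority / ~11% minority, 24 features, as in the question
X, y = make_classification(n_samples=400, n_features=24,
                           n_informative=8, weights=[0.89, 0.11],
                           random_state=0)

selector = RFECV(
    RandomForestClassifier(n_estimators=10, random_state=0),
    step=1,                     # drop one feature per elimination round
    cv=StratifiedKFold(5),      # preserve class ratio in each fold
    scoring="f1",               # sensible for the imbalanced target
)
selector.fit(X, y)
print(selector.n_features_)     # optimal number of features found
```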
153
imbalanced datasets
The most informative curve for imbalance datasets
https://datascience.stackexchange.com/questions/73757/the-most-informative-curve-for-imbalance-datasets
<p>For the imbalanced datasets:</p> <ol> <li>Can we say the Precision-Recall curve is more informative, thus accurate, than ROC curve? </li> <li>Can we rely on F1-score to evaluate the skillfulness of the resulted model in this case?</li> </ol>
<p>Precision-recall curves are argued to be more useful than ROC curves in &quot;<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4349800/" rel="nofollow noreferrer">The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets</a>&quot; by Saito and Rehmsmeier. They argue that ROC might lead to the wrong visual interpretation of specificity.</p> <p>The F1-score balances precision and recall equally. In some domains it might be more useful to weight precision more heavily (beta &lt; 1) or recall more heavily (beta &gt; 1) via the more general F-beta score.</p>
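A quick numerical illustration of the first point, on synthetic scores (the distributions are made up): with 1% positives, the ROC AUC can look comfortable while average precision (the PR-curve summary) stays low, because every true positive comes with many false positives.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)

# 50 positives among 5050 samples (~1% positive class)
y = np.concatenate([np.ones(50), np.zeros(5000)])
scores = np.concatenate([rng.normal(1.0, 1.0, 50),    # positives score higher
                         rng.normal(0.0, 1.0, 5000)]) # negatives

auc = roc_auc_score(y, scores)           # looks comfortable
ap = average_precision_score(y, scores)  # much lower: many FPs per TP
print(auc, ap)
```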
154
imbalanced datasets
Prior probability shift vs oversampling/undersampling imbalanced datasets
https://datascience.stackexchange.com/questions/116527/prior-probability-shift-vs-oversampling-undersampling-imbalanced-datasets
<p>I'm trying to understand what prior probability shift (label drift) in data means.</p> <p>If I understand it correctly, it means that the distribution of labels in the training dataset differs from the distribution of labels in the production environment. This difference causes an ML model trained on such data and deployed to production to make poor predictions.</p> <p>That makes sense.</p> <p>But then I remembered that one of the techniques for training an ML model on an imbalanced dataset is oversampling the minority class or undersampling the majority class (so changing the label distribution in the training dataset). But these techniques cause the distribution of labels in the training dataset to differ from the production environment (the imbalanced data setting). That sounds exactly like the label drift setting!</p> <p>So is my understanding of prior shift in data wrong (most probably yes)?</p> <p>Are undersampling/oversampling techniques for imbalanced datasets flawed (I don't think so)?</p> <p>Am I missing something else?</p> <p>Thank you for the explanation.</p> <p>Tomas</p>
155
imbalanced datasets
Multi class Imbalanced datasets under-sampling imblearn
https://datascience.stackexchange.com/questions/49575/multi-class-imbalanced-datasets-under-sampling-imblearn
<p>I have an imbalanced dataset and I am looking to under-sample. Even though the oversampling process itself takes less time, training the model on the oversampled data takes a lot of time. I have taken a look at the <a href="https://imbalanced-learn.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">imbalanced-learn</a> website. There are several <a href="https://imbalanced-learn.readthedocs.io/en/stable/api.html#module-imblearn.under_sampling" rel="nofollow noreferrer">under sampling</a> methods. I am looking for a method that under-samples the classes while keeping as much information as possible intact. I tried the <code>.ClusterCentroids()</code> method and found it takes way too long to balance the classes.</p> <p>I have tried other methods mentioned on the website. However, even with <code>sampling_strategy</code> set to equal values, e.g. <code>sampling_strategy={0: 2000, 1: 2000, 2: 2000}</code>, the resulting dataset is not balanced, such as with the <code>.CondensedNearestNeighbour()</code> and <code>.AllKNN()</code> methods. Would anyone be able to help me create a class-balanced dataset using these methods?</p> <p>Thanks</p> <p>Michael</p>
<p>If you're looking for a fast workaround, you have to increase the <code>n_neighbors</code> parameter in <code>AllKNN</code>, but I <strong>wouldn't recommend</strong> using this type of undersampling algorithm for what you want to do!</p> <hr> <p><strong>Explanation:</strong></p> <p><a href="https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.under_sampling.AllKNN.html#imblearn.under_sampling.AllKNN" rel="nofollow noreferrer">AllKNN</a> is an under-sampling technique based on <a href="https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.under_sampling.EditedNearestNeighbours.html#imblearn.under_sampling.EditedNearestNeighbours" rel="nofollow noreferrer">Edited Nearest Neighbours</a>. These techniques try to under-sample your majority classes by removing samples that are <strong>close to the minority class</strong>, in order to make your classes more separable. The way they work is that they remove samples from the majority class that have at least 1 nearest neighbor in the minority class. The thing is that if the classes are separable enough and the majority samples have no minority nearest neighbors, they can't be removed!</p> <p>If you want a technique that undersamples your data in order to get exactly the same number of samples from the minority and the majority classes, I'd recommend using <strong>a different technique</strong> (<code>ClusterCentroids</code>, which you've used, is one such). ENN-based undersampling techniques aren't built for that. You can also read <a href="https://github.com/djib2011/python_ml_tutorial/blob/master/notebooks/23_preprocessing_2.ipynb" rel="nofollow noreferrer">this tutorial</a>, which compares different resampling algorithms in imblearn.</p> <p>As a final remark, if possible, I'd recommend oversampling... </p>
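If the goal is simply the exact counts that `sampling_strategy={0: 2000, 1: 2000, 2: 2000}` asks for, plain random undersampling guarantees them (unlike the ENN-family cleaners). A NumPy-only sketch on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)

# imbalanced toy labels and features
y = np.repeat([0, 1, 2], [6000, 3000, 2500])
X = rng.normal(size=(len(y), 4))

# exact per-class sample counts, as in the question
target = {0: 2000, 1: 2000, 2: 2000}

# draw without replacement from each class, then shuffle
idx = np.concatenate([
    rng.choice(np.flatnonzero(y == c), size=n, replace=False)
    for c, n in target.items()
])
rng.shuffle(idx)

X_bal, y_bal = X[idx], y[idx]
print(np.bincount(y_bal))  # exactly 2000 per class
```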
156
imbalanced datasets
ROC vs PR-score and imbalanced datasets
https://datascience.stackexchange.com/questions/131105/roc-vs-pr-score-and-imbalanced-datasets
<p>I can see everywhere that when the dataset is imbalanced, PR-AUC is a better performance indicator than ROC-AUC. From my experience, if the positive class is the most important one, and there is a higher percentage of the positive class than the negative class in the dataset, then PR-AUC seems to be very biased. Actually, the higher the percentage of the positive class, the higher the PR-AUC (the PR-AUC score is inflated). Would it make sense to say that PR-AUC is good for imbalanced datasets when the positive class is a small percentage compared to the negative class, and the ROC score is a better performance indicator when the positive class outweighs the negative class?</p> <p>For example, I have a model tested on a dataset where the positive class is 88%, higher than the negative class. PR-AUC is 99%, and ROC is 87%. On the contrary, when this model is tested on a dataset where the percentage of the negative class is higher than the positive class, PR-AUC is now 67% and ROC is 76%. This second case aligns with the literature (see my first comment). I have tested on many test sets, and I can agree that PR-AUC is less biased when the negative class outweighs the positive class, but it looks biased when the positive class outweighs the negative class (it gives me 99% performance). Please consider that training is done using an undersampling technique to deal with the imbalance.</p> <p>Thank you in advance.</p>
<p>The way to go is to take costs into account during learning: <a href="https://scikit-learn.org/dev/auto_examples/model_selection/plot_cost_sensitive_learning.html" rel="nofollow noreferrer">https://scikit-learn.org/dev/auto_examples/model_selection/plot_cost_sensitive_learning.html</a></p> <p>For a start, you can use the target imbalance as a proxy for cost imbalance and weight the classes accordingly.</p> <p>Then choose a metric that is relevant for your problem. Notably, most of the time you will be making a decision, and the curves you mention consider all binary thresholds. You are probably better off selecting one threshold.</p>
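A minimal sketch of the first suggestion (synthetic 90/10 data; `class_weight='balanced'` uses the inverse class frequencies as a stand-in for true costs): the weighted model shifts towards predicting the rare class more often, typically trading precision for recall.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(class_weight="balanced",
                              max_iter=1000).fit(X_tr, y_tr)

# weighting makes misclassifying the rare class cost more during fitting
r_plain = recall_score(y_te, plain.predict(X_te))
r_weighted = recall_score(y_te, weighted.predict(X_te))
print(r_plain, r_weighted)
```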
157
imbalanced datasets
Doesn&#39;t over(/under)sampling an imbalanced dataset cause issues?
https://datascience.stackexchange.com/questions/93730/doesnt-over-undersampling-an-imbalanced-dataset-cause-issues
<p>I'm reading a lot about how to use different metrics specifically for imbalanced datasets (e.g. two classes present, but 80% of the data is one class) and how to tackle the issue of imbalanced datasets.</p> <p>One trick is to oversample, i.e. to take more (or even duplicate some) data belonging to the underrepresented class. I've tried this and did achieve better results (before, my models would easily just predict a single class for everything, achieving 80% accuracy lol).</p> <p>However, I was wondering, will this model work well with real-life data? One of the 'laws' of data science/machine learning is that your training data has to have the same/similar attributes as the live data you're intending to use your model on. However, by oversampling, I create a dataset that's 50% one class and 50% the other, as opposed to the &quot;natural&quot; real-life data having 80% of one class and 20% of the other.</p> <p>So I guess the question in short is: Will oversampling my imbalanced dataset from an 80/20 class distribution to a 50/50 class distribution impact the usability of my model on real-life data? Why?</p>
<p>Yes, the classifier will expect the relative class frequencies in operation to be the same as those in the training set. This means that if you over-sample the minority class in the training set, the classifier is likely to over-predict that class in operational use.</p> <p>To see why, it is best to consider probabilistic classifiers, where the decision is based on the posterior probability of class membership p(C_i|x), but this can be written using Bayes' rule as</p> <p><span class="math-container">$p(C_i|x) = \frac{p(x|C_i)p(C_i)}{p(x)}\qquad$</span> where <span class="math-container">$\qquad p(x) = \sum_j p(x|C_j)p(C_j)$</span>,</p> <p>so we can see that the decision depends on the prior probabilities of the classes, <span class="math-container">$p(C_i)$</span>, so if the prior probabilities in the training set are different from those in operation, the operational performance of our classifier will be suboptimal, even if it is optimal for the training set conditions.</p> <p>Some classifiers have a problem learning from imbalanced datasets, so one solution is to oversample the classes to ameliorate this bias in the classifier. There are two approaches. The first is to oversample by <em>just</em> the right amount to overcome this (usually unknown) bias and no more, but that is <em>really</em> difficult. The other approach is to balance the training set and then post-process the output to compensate for the difference in training set and operational priors. We take the output of the classifier trained on an oversampled dataset and multiply by the ratio of operational and training set prior probabilities,</p> <p><span class="math-container">$q_o(C_i|x) \propto p_t(x|C_i)p_t(C_i) \times \frac{p_o(C_i)}{p_t(C_i)} = p_t(x|C_i)p_o(C_i)$</span></p> <p>Quantities with the o subscript relate to operational conditions and those with the t subscript relate to training set conditions. 
I have written this as <span class="math-container">$q_o(C_i|x)$</span> as it is an un-normalised probability, but it is straightforward to renormalise by dividing by the sum of <span class="math-container">$q_o(C_i|x)$</span> over all classes. For some problems it may be better to use cross-validation to choose the correction factor, rather than the theoretical value used here, as it depends on the bias in the classifier due to the imbalance.</p> <p>So in short, for imbalanced datasets, use a probabilistic classifier and oversample (or reweight) to get a balanced dataset, in order to overcome the bias a classifier may have for imbalanced datasets. Then post-process the output of the classifier so that it doesn't over-predict the minority class in operation.</p>
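The post-processing step above can be sketched in a few lines of NumPy (the probabilities below are made up): multiply each class probability by the ratio of operational to training priors, then renormalise each row.

```python
import numpy as np

def correct_priors(proba, train_priors, op_priors):
    """Reweight class probabilities for a change of priors and renormalise."""
    q = proba * (np.asarray(op_priors) / np.asarray(train_priors))
    return q / q.sum(axis=1, keepdims=True)

# classifier trained on a balanced 50/50 set,
# deployed where the positive class is only 20%
proba = np.array([[0.5, 0.5],
                  [0.2, 0.8]])
adjusted = correct_priors(proba, train_priors=[0.5, 0.5],
                          op_priors=[0.8, 0.2])
print(adjusted)
# first row shifts to [0.8, 0.2]; second row becomes [0.5, 0.5]
```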
158
imbalanced datasets
Ranking problem and imbalanced dataset
https://datascience.stackexchange.com/questions/85453/ranking-problem-and-imbalanced-dataset
<p>I know about the problems that an imbalanced dataset causes in classification problems, and I know the solutions for that, including undersampling and oversampling.</p> <p>I have to work on a ranking problem (ranking hotels, evaluated with an NDCG50 score, <a href="https://github.com/achilleasatha/expedia_sortranking/blob/master/xgboost_listwise.ipynb" rel="nofollow noreferrer">this link</a>), and the dataset is extremely imbalanced. However, the examples I saw on the internet use the dataset as it is and pass it to train_test_split without oversampling/undersampling.</p> <p>I am kind of confused: is it true that in ranking problems the imbalanced data does not matter, so we do not need to fix it before passing the data to the model?</p> <p>And if that is the case, why?</p> <p>Thanks</p>
<p>You are completely right, imbalance of labels does have an impact on ranking problems and people are using techniques to counter it.</p> <p>The example in your notebook applies list-wise gradient boosting. Since pairwise ranking can be made list-wise by injecting the NDCG into the gradient, I will focus on pair-wise rank loss for the argument. I will base myself on this paper (<a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf" rel="nofollow noreferrer">https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf</a>).</p> <p><span class="math-container">$C = -\bar{P}_{ij}$</span>log<span class="math-container">$P_{ij} - (1 - \bar{P}_{ij})$</span>log<span class="math-container">$(1 - P_{ij})$</span></p> <p>with <span class="math-container">$P_{ij}\equiv P(U_{i}\rhd U_{j})\equiv {1\over{1 + e^{-\sigma(s_{i} - s_{j})}}}$</span> and <span class="math-container">$\bar{P}_{ij} = {1\over2}(1 + S_{ij})$</span> for <span class="math-container">$S_{ij}$</span> being either <span class="math-container">$-1$</span> or <span class="math-container">$1$</span>.</p> <p>This is actually just a classification problem, with target 0 meaning article i is less relevant than article j and target 1 the opposite case.</p> <p>Imagine now that you are working with queries which have a lot of matching documents but only a couple of documents have been tagged as relevant. 
Often such sparse tagging does not mean that ONLY these documents were relevant; it is often just a consequence of the limitations of relevance estimation (<a href="https://www.cs.cornell.edu/people/tj/publications/joachims_etal_05a.pdf" rel="nofollow noreferrer">https://www.cs.cornell.edu/people/tj/publications/joachims_etal_05a.pdf</a>).</p> <p>Hence, it is not uncommon to down-sample highly rated documents.</p> <p>Another reason for applying imbalance methods such as reweighting labels is, for example, bias due to position (see for example <a href="https://ciir-publications.cs.umass.edu/getpdf.php?id=1297" rel="nofollow noreferrer">https://ciir-publications.cs.umass.edu/getpdf.php?id=1297</a>). The loss is reweighted based on the observed position of the documents when their relevance was tagged.</p>
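The pairwise cost above can be written out directly for a single document pair; a small sketch (scores and labels are made up):

```python
import numpy as np

def pair_cost(s_i, s_j, S_ij, sigma=1.0):
    """Cross-entropy cost C for the pair (i, j); S_ij in {-1, 1}."""
    P_ij = 1.0 / (1.0 + np.exp(-sigma * (s_i - s_j)))  # model's P(i > j)
    P_bar = 0.5 * (1.0 + S_ij)                         # target probability
    return -P_bar * np.log(P_ij) - (1 - P_bar) * np.log(1 - P_ij)

# cost falls as the model scores the truly better document i higher
print(pair_cost(2.0, 0.0, S_ij=1))  # correctly ordered pair: small cost
print(pair_cost(0.0, 2.0, S_ij=1))  # inverted pair: large cost
```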
159
imbalanced datasets
Cross validation schema for imbalanced dataset
https://datascience.stackexchange.com/questions/76107/cross-validation-schema-for-imbalanced-dataset
<p>Based on a previous <a href="https://datascience.stackexchange.com/questions/76056/for-imbalanced-classification-should-the-validation-dataset-be-balanced">post</a>, I understand the need to ensure that the validation folds during the CV process have the same imbalanced distribution as the original dataset when training a binary classification model with an imbalanced dataset. My question is regarding the best training schema.</p> <p>Let’s assume that I have an imbalanced dataset with 5M samples where 90% are pos class vs 10% neg class, and I am going to use 5-folds CV for model tuning. Also, let’s assume I will hold out a random 100K samples for test (90K samples w/ pos class vs 10K samples w/ neg class). Now I have two options:</p> <p><strong>Option 1)</strong></p> <ul> <li>Step 1: Pull <em>a randomly selected 200K imbalanced data</em> for training (180K samples pos class vs 20K samples neg class)</li> <li>Step 2: During each CV iteration: <ul> <li>The training fold will have 160K samples (144K pos vs 16K neg)</li> <li>and the validation fold will have 40K samples (36K pos vs 4K neg)</li> </ul> </li> <li>Step 3: Apply data balancing for the training fold (e.g., Downsampling, Upsampling, SMOTE, etc.) and fit a model</li> <li>Step 4: Validate the model on the imbalanced validation fold</li> </ul> <p>However, given that I have enough data, I want to avoid using any data balancing algorithm for the training folds.</p> <p><strong>Option 2)</strong></p> <ul> <li>Step 1: Pull <em>a randomly selected 200K balanced data</em> for training (100K samples pos class vs 100K samples neg class)</li> <li>Step 2: During each CV iteration: <ul> <li>The training fold will have 160K samples (80K pos vs 80K neg)</li> <li>and the validation fold will have 40K samples (20K pos vs 20K neg)</li> </ul> </li> <li>Step 3: Fit a model for the already balanced training fold</li> <li>Step 4: Can I apply down sampling to the balanced validation dataset to restore it to its imbalanced state? 
If so, how can I do that in sklearn?</li> </ul> <p>I am also clear that I have a 3rd option, which is based on the 1st option above, where the model could be trained on an imbalanced dataset. Therefore, a data balancing algorithm can be avoided.</p> <p>My questions are:</p> <ol> <li>Is option 2 better than option 1?</li> <li>How can I apply downsampling to a balanced validation dataset (Option 2, Step 4)?</li> </ol>
<p>I'm not sure if there's a question here, but I'll add some comments.</p> <p><strong>Firstly</strong>, if you can get it in the wild, always work with balanced data. However, if you are going to manually create a &quot;balanced&quot; data set yourself, make sure that the selection criteria that you use to create that data is appropriate. As an example, choosing the 100k most recent positive and negative outcomes may not be appropriate because the time frame of the positive outcomes may extend well beyond that of the more common negative outcomes. So in this 200k data set, your negative 100k outcomes may relate to data from the last year while the data relating to your 100k positive outcomes may relate to the last ten years.</p> <p><strong>Secondly</strong>, if you are going to balance your data be aware of how the balancing technique works and try to understand its weaknesses / limitations. Be mindful that rebalancing a data set will result in a new data set, and remember that you will have to check that the new data set is still appropriate to use. As an example, you will need to check that the distribution of input variables is still roughly the same as before. Applying this thinking to your options above, can you be certain that the data in each fold will be roughly similar?</p> <p><strong>Lastly</strong>, if you are going to use a modelling framework which can handle imbalanced data then make sure you understand why it can handle the imbalanced data. In particular, if the framework applies some weighting / balancing technique in the background you should be aware of this and be able to explain it.</p>
160
imbalanced datasets
upsampling imbalanced dataset in decision tree
https://datascience.stackexchange.com/questions/71978/upsampling-imbalanced-dataset-in-decision-tree
<p>I have an imbalanced dataset with 3 output labels: one class with 98 percent and the other two classes with 1 percent each. I need to run a decision tree on this dataset. Should I be upsampling this dataset by duplicating rows? Would this affect the impurity, entropy or information gain for nodes?</p>
<p>Don't simply duplicate data.</p> <p>You should instead try:</p> <ol> <li>Oversampling using standard techniques, e.g. SMOTE</li> <li>Undersampling</li> <li>Class weights</li> <li>Must try - <strong>combining</strong> a few of these</li> </ol> <p>Check these links: </p> <p><a href="https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/" rel="nofollow noreferrer">MachineLearningMastery</a> and <a href="https://www.jeremyjordan.me/imbalanced-data/" rel="nofollow noreferrer">jeremyjordan</a></p>
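On the class-weights option: for a decision tree, giving a row weight k changes the (weighted) impurity and information gain in the same way as duplicating that row k times, so weights answer the question without inflating the dataset. A rough sketch on synthetic data mimicking the 98/1/1 split:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# 98% / 1% / 1% three-class problem, as in the question
X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           weights=[0.98, 0.01, 0.01], random_state=0)

# 'balanced' weights each class by inverse frequency in the
# impurity computation, so minority leaves are no longer drowned out
tree = DecisionTreeClassifier(class_weight="balanced",
                              max_depth=8, random_state=0)
tree.fit(X, y)
print(sorted(set(tree.predict(X))))  # minority classes can now be predicted
```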
161