Prostate cANcer graDe Assessment (PANDA) Challenge
1st Place Solution [PND]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to everyone, and thanks to the hosts for preparing this competition!</p> <h4>I published <a href="https://docs.google.com/presentation/d/1Ies4vnyVtW5U3XNDr_fom43ZJDIodu1SV6DSK8di6fs/edit?usp=sharing" target="_blank">slides</a>!</h4> <h4>Our code is <a href="https://github.com/kentaroy47/Kaggle-PANDA-1st-place-solution" target="_blank">here</a>!</h4> <h1>Proposed Denoising Method</h1> <p>We're very surprised that we finished 1st, and our simple label-denoising method (surprisingly) boosted the private LB.</p> <p>The competition was all about handling noisy labels, so we worked hard on finding good ways of denoising.</p> <p>Here is our simple denoising method by <a href="https://www.kaggle.com/kyoshioka47" target="_blank">@kyoshioka47</a>:</p> <h2>Getting cleaned labels</h2> <ul> <li>Train k-folds with Effnet-B1 (almost identical to Qishen's kernel)<ul> <li>Model specifics in fam_taro's ( <a href="https://www.kaggle.com/yukkyo" target="_blank">@yukkyo</a> ) part</li></ul></li> <li>Predict hold-out sets with the trained model. We get <code>pred</code> with this step.</li> <li>Remove the training data which has a high disparity between the ground truth and <code>pred</code>. The remaining labels will be called cleaned labels.</li> </ul> <p>We calculate the <code>disparity</code> as the absolute difference in ISUP between GT and pred. Data with disparity larger than 1.6 was simply removed. </p> <p>Here is the pseudo code. 
<code>probs_raw</code> is the raw prediction result (ISUP).</p> <pre><code># Base arutema method
def remove_noisy(df, thresh):
    gap = np.abs(df["isup_grade"] - df["probs_raw"])
    df_removed = df[gap &gt; thresh].reset_index(drop=True)
    df_keep = df[gap &lt;= thresh].reset_index(drop=True)
    return df_keep, df_removed

df_keep, df_remove = remove_noisy(df, thresh=1.6)
show_keep_remove(df, df_keep, df_remove)
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2F6f6bbe0dfdb2bd5ba10057a1ba32f040%2Farutema.png?generation=1595474367096439&amp;alt=media" alt=""></p> <h2>Retraining</h2> <p>Retrain the model using the denoised labels. <br> We get CV 0.94, LB 0.90, PB 0.934 with a simple Qishen Eff-b0 model with k-folds.<br> Ensembling with different models further boosted this to 1st place.</p> <p>We tried CleanLab too, but it did not perform well on CV/LB, so we stuck with this.</p> <h1>1. Our final submission</h1> <ul> <li>Select 1 (public LB 0.910, private LB 0.922)<ul> <li>Resnext50_32x4d(poteman)</li></ul></li> <li>Select 2 (public LB 0.904, private LB 0.940)<ul> <li>Effnet-B0(arutema47) + Effnet-B1(fam_taro)<ul> <li>Simple average (<code>1 : 1</code>)</li></ul></li></ul></li> </ul> <p>Surprisingly, even with several weight patterns, the PB stayed at 0.940.</p> <h1>2. Resnext50_32x4d( <a href="https://www.kaggle.com/poteman" target="_blank">@poteman</a> ), public 0.910, private 0.922</h1> <p>This was our best LB model.</p> <ul> <li>Split kfold: stratified kfold with imghash (threshold 0.90)</li> <li>iafoss tile method<ul> <li>tile size 256, tile num 64</li></ul></li> <li>model: resnext50_32x4d</li> <li>head: 3 * reg_head + 1 * softmax head</li> </ul> <h1>3. 
Effnet-B1(fam_taro), public 0.901, private 0.932</h1> <ul> <li>Split kfold<ul> <li>stratified 5-fold with gleason-score and imghash similarity (threshold 0.90)<ul> <li>convert <code>negative</code> to <code>0+0</code></li> <li>how to group by imghash similarity<ul> <li>This is based on <a href="https://www.kaggle.com/appian" target="_blank">@appian</a> 's kernel<ul> <li><a href="https://www.kaggle.com/appian/panda-imagehash-to-detect-duplicate-images" target="_blank">https://www.kaggle.com/appian/panda-imagehash-to-detect-duplicate-images</a></li></ul></li> <li><a href="https://www.kaggle.com/yukkyo/imagehash-to-detect-duplicate-images-and-grouping" target="_blank">https://www.kaggle.com/yukkyo/imagehash-to-detect-duplicate-images-and-grouping</a></li></ul></li></ul></li> <li>In my opinion, the split method is an important point for our denoising method.<ul> <li>Because we use out-of-fold predictions</li> <li>If you put duplicate images in different folds, I don't think denoising will work for them</li></ul></li></ul></li> <li>Data<ul> <li>iafoss tile method</li> <li>tile size 192, tile num 64</li></ul></li> <li>Model: Effnet-B1 + GeM<ul> <li>label: isup-grade and first score of gleason (10-dim bin)</li></ul></li> <li>Make the final sub in 3 steps<ul> <li>Local train &amp; predict</li> <li>Remove noisy labels<ul> <li>extended <a href="https://www.kaggle.com/kyoshioka47" target="_blank">@kyoshioka47</a> method</li> <li>Change the threshold for each isup-grade and data-provider</li></ul></li> <li>Re-train</li></ul></li> <li>What did not work for me<ul> <li>Remove noise by confident-learning</li> <li>CycleGAN augmentation (karolinska radboud)</li> <li>test with AdaBN &amp; freezing BN at train</li> <li>CutMix, Mixup (before denoising)</li></ul></li> </ul> <pre><code>def remove_noisy2(df, thresholds):
    gap = np.abs(df["isup_grade"] - df["probs_raw"])
    df_keeps = list()
    df_removes = list()
    for label, thresh in enumerate(thresholds):
        df_tmp = df[df.isup_grade == label].reset_index(drop=True)
        gap_tmp = gap[df.isup_grade == label].reset_index(drop=True)
        df_remove_tmp = df_tmp[gap_tmp &gt; thresh].reset_index(drop=True)
        df_keep_tmp = df_tmp[gap_tmp &lt;= thresh].reset_index(drop=True)
        df_removes.append(df_remove_tmp)
        df_keeps.append(df_keep_tmp)
    df_keep = pd.concat(df_keeps, axis=0)
    df_removed = pd.concat(df_removes, axis=0)
    return df_keep, df_removed

def remove_noisy3(df, thresholds_rad, thresholds_ka):
    df_r = df[df.data_provider == "radboud"].reset_index(drop=True)
    df_k = df[df.data_provider != "radboud"].reset_index(drop=True)
    dfs = [df_r, df_k]
    thresholds = [thresholds_rad, thresholds_ka]
    df_keeps = list()
    df_removes = list()
    for df_tmp, thresholds_tmp in zip(dfs, thresholds):
        df_keep_tmp, df_remove_tmp = remove_noisy2(df_tmp, thresholds_tmp)
        df_keeps.append(df_keep_tmp)
        df_removes.append(df_remove_tmp)
    df_keep = pd.concat(df_keeps, axis=0)
    df_removed = pd.concat(df_removes, axis=0)
    return df_keep, df_removed

# Change thresh for each label and each data provider
thresholds_rad = [1.3, 0.8, 0.8, 0.8, 0.8, 1.3]
thresholds_ka = [1.5, 1.0, 1.0, 1.0, 1.0, 1.5]
df_keep, df_removed = remove_noisy3(df, thresholds_rad=thresholds_rad, thresholds_ka=thresholds_ka)
show_keep_remove(df, df_keep, df_removed)
</code></pre> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1670024%2F8f930b3a7b3e30fe877ab049d0ed3b13%2F2020-07-23%2017.14.29.png?generation=1595492134245391&amp;alt=media" alt=""></p>
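The "predict hold-out sets" step that produces <code>probs_raw</code> can be sketched as a plain out-of-fold loop. In this minimal sketch, <code>train_fn</code> and <code>predict_fn</code> are hypothetical stand-ins for the actual EfficientNet training and inference code (here toy functions so the snippet runs end to end); the 1.6 threshold is the one from the write-up.

```python
import numpy as np
import pandas as pd

def oof_predictions(df, train_fn, predict_fn, n_splits=5, seed=0):
    """Score every row with a model that never saw it during training."""
    probs_raw = np.zeros(len(df))
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(df)), n_splits)
    for va_idx in folds:
        tr_idx = np.setdiff1d(np.arange(len(df)), va_idx)
        model = train_fn(df.iloc[tr_idx])
        probs_raw[va_idx] = predict_fn(model, df.iloc[va_idx])
    return probs_raw

# Toy stand-ins: "training" memorises the mean ISUP grade,
# "prediction" returns it for every hold-out row.
train_fn = lambda d: d["isup_grade"].mean()
predict_fn = lambda model, d: np.full(len(d), model)

df = pd.DataFrame({"isup_grade": [0, 1, 2, 3, 4, 5] * 5})
df["probs_raw"] = oof_predictions(df, train_fn, predict_fn)

# Disparity-based filtering with the 1.6 threshold from the write-up
gap = np.abs(df["isup_grade"] - df["probs_raw"])
df_keep = df[gap <= 1.6]
print(len(df), "->", len(df_keep))
```

Because every prediction comes from a fold that excluded the row, the disparity reflects genuine model disagreement rather than memorisation, which is why the split method (keeping duplicates in the same fold) matters so much.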
Prostate cANcer graDe Assessment (PANDA) Challenge
4th place solution [NS Pathology]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you very much to the organizers, and thanks to <a href="/sinpcw">@sinpcw</a> for fighting alongside me! Our solution is simple. We ended up with the following models in our final ensemble submission.</p> <ul> <li>efficientnet_b5 x 4</li> <li>seresnext101-64x4d x 2</li> <li>seresnext101-32x4d x 2</li> <li>resnest101e x 2</li> <li>gem+efficientnet-b3 x 1</li> </ul> <p>We used hard voting for the ensemble method, not soft voting. This definitely improved our score! We also used the average if the prediction getting the most votes in our hard voting did not get more than 1/3 of the total votes. But this method only worked on the public LB.</p> <p>In training, we used the technique of tiling the images from the following link. This allows us to ensure that the tissue is evenly distributed across all tiles. It also makes it possible to perform data augmentation by changing the scaling factor. We use 512x512x16 tiles from the middle resolution layer. We've also tried using 1024x1024x16 from the highest resolution layer, but there was no improvement. <a href="https://www.kaggle.com/hirune924/image-loader-test">https://www.kaggle.com/hirune924/image-loader-test</a></p> <p>We also use syncBN. This was important when training the larger models. The blue line is normal BN; the brown line is syncBN. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1626653%2F3adeecd96e2f4fa58855db72e7a93999%2FsyncBN.png?generation=1595467440685526&amp;alt=media" alt=""></p> <h3>O2U-Net</h3> <p>(This method seemed to work on the private LB, but we didn't use it in the end because we couldn't see the effect on the public LB.) We also tried using O2U-Net to remove the label noise, but it didn't work on the public LB. However, data cleansing of the Radboud data only did seem to work on the private LB: a seresnext50 trained on the noise-removed dataset for Radboud only achieves 0.933 on the private LB (0.915 without data cleansing). <a href="https://openaccess.thecvf.com/content_ICCV_2019/papers/Huang_O2U-Net_A_Simple_Noisy_Label_Detection_Approach_for_Deep_Neural_ICCV_2019_paper.pdf">https://openaccess.thecvf.com/content_ICCV_2019/papers/Huang_O2U-Net_A_Simple_Noisy_Label_Detection_Approach_for_Deep_Neural_ICCV_2019_paper.pdf</a></p> <p>I share a notebook that calculates the noise level based on the loss recorded by O2U-Net. <a href="https://www.kaggle.com/hirune924/o2unet-loss-aggregate">https://www.kaggle.com/hirune924/o2unet-loss-aggregate</a> The effect of data cleansing on the private LB is also described in the 1st place solution. <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169143">https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169143</a></p> <h3>Useful tools</h3> <p>PyTorch Lightning <a href="https://github.com/PyTorchLightning/pytorch-lightning">https://github.com/PyTorchLightning/pytorch-lightning</a> Hydra <a href="https://hydra.cc/">https://hydra.cc/</a> Neptune ai <a href="https://neptune.ai/">https://neptune.ai/</a> KAMONOHASHI <a href="https://github.com/KAMONOHASHI">https://github.com/KAMONOHASHI</a></p>
Prostate cANcer graDe Assessment (PANDA) Challenge
65th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizer! Congratulations to the winners! I created this topic just to share my solution (0.920 private LB). I know my solution is not as great as the top solutions, but I want to record something here, and I hope you can give some advice.😄 </p> <ul> <li><p>Image Pre-process</p> <ul><li>Filter polluted tiles (precompute and save the useful (passing the filter) tile ids of each image). <ul><li>Change each tile from RGB to HSV; observe some polluted tiles and you will find the thresholds to filter them :D</li> <li>How to check each tile (already HSV)? If you use the thresholds above to filter at tile level, you will waste many useful tiles that have tissue information in about half of the tile. So I randomly sample 20 squares (side length = tile-size//10); if the number of squares not filtered out by the thresholds above reaches 20 x pass-ratio, I treat this tile as useful and keep it.</li> <li>If the number of tiles from an image is &lt; 36 (I choose 36 tiles, size 224x224), I use np.random.choice to supplement until reaching 36.</li></ul></li> <li>Patch Image (game changer 1) <ul><li>Like what <a href="https://www.kaggle.com/iafoss/panda-16x128x128-tiles">PANDA 16x128x128 tiles</a> did. 
Thanks <a href="/iafoss">@iafoss</a> !!!</li></ul></li></ul></li> <li><p>Models</p> <ul><li>efn-b0 x 3 (I will call them b0-1, b0-2, b0-3 below) + efn-b4</li></ul></li> <li>Optimizer <ul><li>AdamW (default parameters)</li></ul></li> <li>Scheduler <ul><li>OneCycle(epochs=30, steps-per-epoch=int(np.ceil(len(train_dl)/acc-grad-step))) <ul><li>As for pct-start, I tried 0.1, 1/30, and 2/30; 0.1 worked best.</li></ul></li></ul></li> <li>Loss (game changer 2) <ul><li>BCEWithLogitsLoss (thanks <a href="/haqishen">@haqishen</a> !!!), details in <a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">Train EfficientNet-B0 w/ 36 tiles_256 [LB0.87]</a></li></ul></li> <li>Train Details <ul><li>efn-b0 (use tiles) <ul><li>batch-size=4 with gradient accumulation (also 4), so the batch size can be treated as 4x4 (not actually 4x4, of course)</li></ul></li> <li>efn-b4 (use full image) <ul><li>batch-size=16, resize lv1 tiff to 448x448</li></ul></li> <li>apex amp is too hard to install well on Kaggle, so I use GradScaler from torch.cuda.amp (torch version should be &gt;= 1.5; I forgot the specific version).</li></ul></li> <li>Ensemble <ul><li>0.35 x (b0-1 + b0-2) + 0.25 x b0-3 + 0.05 x b4</li></ul></li> <li>Models Details <ul><li>b0-1 (local score: 0.885) and b0-2 (local score: 0.873) are the 2 highest local-score folds of my 5 stratified folds; mixing in other folds (local scores all &lt; 0.87) hurt the LB, so I kept only the 2 highest.</li> <li>b0-3 (local score: 0.872) was trained on re-sampled data, and I kept only 5% as the valid set so that the model can see more data.</li> <li>b4 (local score: 0.794) was trained on full images (resize lv1 tiff to 448x448; data is the same as the best local-score fold of the 5 stratified folds). 
The reason I trained a full-image model is that I couldn't find an efficient way to make the model learn the context relationships between tiles, so I trained one on the whole image.</li></ul></li> <li>TTA <ul><li>only use different patch modes, same as in <a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">Train EfficientNet-B0 w/ 36 tiles_256 [LB0.87]</a></li></ul></li> </ul> <hr> <p>As for BCE to label: first apply pred.sigmoid(), then count how many values are &gt; 0.5; that count is the predicted grade. Pseudo code: <code>x, = np.where(pred[i] &gt; 0.5); label[i] = len(x)</code>. The reason I supplemented b0-3 and b4 is that I listened to the iWildCam 2020 1st place solution from Megvii; they got 1st and shared their solution. Thanks Megvii !!! I never dreamed of getting 0.920 and shaking up ~70 places (the main reason is probably that the test set is too small for QWK, and I was lucky enough :) ). What's more, thanks <a href="/lopuhin">@lopuhin</a> ; I used <a href="https://www.kaggle.com/lopuhin/panda-2020-level-1-2">PANDA: Level 1 and 2 images</a>, which he created, on Colab :D Feel free to discuss and post your thoughts about my solution. Thanks for reading :D</p>
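The BCE-to-label decoding described above (count how many sigmoid outputs exceed 0.5) can be written out as a small sketch; the shapes and example values are illustrative, not the author's exact code.

```python
import numpy as np

def decode_bce(logits):
    """Count per-threshold positives: sigmoid(x) > 0.5 iff x > 0,
    and the number of positives is the predicted ISUP grade."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return (probs > 0.5).sum(axis=-1)

# 5 binary targets encode grades 0..5, e.g. grade 3 -> [1, 1, 1, 0, 0]
logits = np.array([[2.1, 1.3, 0.4, -0.7, -3.0],      # three positives
                   [-1.0, -2.0, -0.5, -4.0, -1.2]])  # no positives
print(decode_bce(logits))  # [3 0]
```

This ordinal encoding is why BCE was a game changer here: adjacent grades share most of their binary targets, so the loss penalises an off-by-one error less than an off-by-five error, unlike plain softmax cross-entropy.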
Prostate cANcer graDe Assessment (PANDA) Challenge
8Th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Despite some evidence of randomness, I'd like to share the ideas we used:</p> <ul> <li>10-model ensemble based on local CV and decent LB.</li> <li>Different prediction types between models (regression, bins, and ordinal regression).</li> <li>Some of them used bags of tiles and others stacked the tiles into squares.</li> <li>EfficientNets (I trained only b0, but my partner had a few b4s).</li> <li>My models were trained in two steps. First a model with an attention layer is made (shared by me in some thread). Then this attention layer and model are reused to predict weights for tiles, and a model is retrained with a lower number of tiles (9 or 16). I have some 9-tile models that were both fast and reached 0.90+ CV. On top of that, it allowed us to inspect a larger number of tiles during inference (128 tiles) and just select the best 9 or 16.</li> <li>My partner used a NetVlad layer in his model, which maybe he'll talk about in this thread.</li> <li>Ensembling with mean + round was better than majority voting on the LB (and is what we used), but actually our best solution uses majority voting (which we didn't select).</li> <li>We also built a CV without duplicates and without "suspicious slides".</li> </ul> <p>In the last weeks, after making the team, and given how obvious it seemed that the shake-up would be big, I started to mistrust the LB and tried to bring diversity to the ensemble. As long as a model was at LB &gt; 0.88, that was good enough if the CV was among the best ones.</p> <p>Learned a lot during this competition. Thanks to the organizers.</p> <p>Note: We also have a solution at 0.936 that we didn't select :(</p>
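The two-step idea above — score many candidate tiles (128) with a learned attention weight, then keep only the best 9 or 16 — reduces at inference time to a top-k selection. The sketch below is an illustration under assumed shapes and names (`select_top_tiles`, dummy tiles), not the author's code.

```python
import numpy as np

def select_top_tiles(tiles, attn_scores, k=16):
    """Keep the k tiles with the highest attention weights.
    tiles: (n_tiles, H, W, C); attn_scores: (n_tiles,)."""
    order = np.argsort(attn_scores)[::-1][:k]
    return tiles[order]

# 128 dummy tiles whose pixel value equals their index
tiles = np.arange(128, dtype=float)[:, None, None, None] * np.ones((1, 4, 4, 3))
scores = np.linspace(0.0, 1.0, 128)  # tile 127 scores highest
top = select_top_tiles(tiles, scores, k=16)
print(top.shape, int(top[0, 0, 0, 0]))  # (16, 4, 4, 3) 127
```

The design win is that the expensive backbone only has to process 9 or 16 tiles per slide at train time, while inference can still scan 128 candidates cheaply through the small attention scorer.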
Prostate cANcer graDe Assessment (PANDA) Challenge
6th place solution : noise robust learning [BarelyBears]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we would like to thank the organizers for such an interesting and realistic problem. Implementations are available: <a href="https://github.com/analokmaus/kaggle-panda-challenge-public">https://github.com/analokmaus/kaggle-panda-challenge-public</a></p> <h1>TL;DR</h1> <p>Label noise is the biggest challenge in this competition. We used <strong>online uncertainty sample mining (OUSM)</strong> and <strong>mixup</strong> to robustly fit CNN models, and blended 4 models with different settings to stabilize the results. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1973217%2F7c9a14d7f3d3de7bf65ce0b4c363fe69%2Fkaggle-panda-challenge.001.png?generation=1595831415189859&amp;alt=media&amp;width=500" alt=""></p> <h1>Tile-based multi instance learning model</h1> <p>The first challenge in this comp was how to deal with those extremely large images. Thanks to <a href="/iafoss">@iafoss</a>'s great notebook, we used an almost identical model with various backbones and tile sizes. We modified the classifier part to ordinal regression (<a href="https://arxiv.org/abs/1901.07884">CORAL loss</a>).</p> <h1>Preprocessing and data augmentation</h1> <p>Data augmentation for a tile-based CNN model can be applied in two ways: slide level and tile level. Slide-level augmentations are applied to whole slide images before tiles are extracted. Since the point is to create slightly different tile sets, we used shift, scale, and rotate. 
Tile-level augmentations aim to improve feature extractor performance; we used shift, scale, rotate, flip, and random dropout. The idea of random dropout is to randomly fill a tile with the mean pixel value and so regularize the model.</p> <h1>Postprocessing</h1> <p>We used 4x TTA during inference and optimized thresholds to maximize QWK. </p> <h1>Validation strategy</h1> <p>As written in the task description, the label quality in the train data differs a lot from that in the test data, so from the very beginning we assumed this part would be critical in this comp. Roughly speaking: train data: noisy and big; public test data: clean but small; private test data: clean but small. So our strategy was <strong>IGNORE CV, CARE ABOUT PUBLIC LB, AND TRUST METHODOLOGY.</strong> For us, the results were unstable due to the small size of the test data, but not so 'lottery'.</p> <h1>Handling noisy labels</h1> <p>We read tens of papers about handling noisy labels and implemented some of them, such as:</p> <ul> <li>loss functions (DMI loss, DAC loss, Symmetric loss, <strong>OUSM loss</strong>, etc.)</li> <li>training procedures (CleanNet, Iterative Self-training, <strong>mixup</strong>, etc.)</li> </ul> <p>The common idea among them is that noisy samples should have different features from correct samples, and thus should have a bigger loss. OUSM (Online Uncertainty Sample Mining) is an approach in which samples with high loss are excluded from each mini-batch. According to <a href="https://arxiv.org/abs/1901.07759">previous research</a>, this method works on a skin lesion classification problem where a similar kind of label noise exists. In the PANDA competition, it gave us a stable boost from around 0.87 to 0.90 on the public LB. Then we trained models with different random seeds and collected samples which are often judged as noise (with a big loss). 
We excluded the 10% of 'most likely to be noisy' samples from each label because, due to the imbalance in the label distribution, grade &gt;= 2 samples are more likely to be judged as noise. This new dataset should be less noisy than the original one, and models trained on it achieve 0.91 on the public LB. Apart from OUSM, mixup also showed good performance on the public LB. This is consistent with the <a href="https://arxiv.org/abs/1710.09412">original paper</a>, which reported performance improvement under label corruption.</p> <h1>Pipeline overview</h1> <p>Our pipeline is a simple average of the following models:</p> <ul> <li>5-fold 224x64 tile-based model, se-resnext50 (OUSM)</li> <li>5-fold 224x64 tile-based model, se-resnext50 (OUSM with different params)</li> <li>5-fold 224x64 tile-based model, se-resnext101 (OUSM)</li> <li>5-fold 256x36 tile-based model, efficientnet-b0 (mixup)</li> </ul> <p>This model scored 0.903 on the public LB and 0.932 on the private LB. Compared to <a href="/haqishen">@haqishen</a>'s model with no denoising, our final model showed +0.018 on public and +0.017 on private.</p>
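The core of OUSM — dropping the highest-loss samples from each mini-batch before averaging — can be sketched in a few lines. The framework-free numpy version below is an illustration, not the team's implementation; in practice the same selection is applied to a differentiable per-sample loss tensor so that the excluded samples contribute no gradient.

```python
import numpy as np

def ousm_loss(per_sample_loss, k=2):
    """Online Uncertainty Sample Mining: exclude the k highest-loss
    samples in the mini-batch so presumed-noisy labels do not
    dominate the gradient, then average the rest."""
    losses = np.asarray(per_sample_loss, dtype=float)
    keep = np.argsort(losses)[: len(losses) - k]  # indices of lowest losses
    return losses[keep].mean()

batch_losses = [0.2, 0.1, 3.0, 0.4, 5.0, 0.3]
print(ousm_loss(batch_losses, k=2))  # 3.0 and 5.0 are excluded -> 0.25
```

The follow-up step in the write-up reuses the same signal offline: samples that land in the excluded set across many seeds are treated as label noise and removed from the training set entirely.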
Prostate cANcer graDe Assessment (PANDA) Challenge
[11th place solution] I have survived in this storm
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Summary</h2> <ul> <li>Tile extraction is based on my <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/146855" target="_blank">public pipeline</a> with <strong>128x128x128</strong> tiles from the intermediate resolution layer</li> <li>Label noise removal gives a <strong>~0.005 public and 0.01+ private LB boost</strong></li> <li>tile cutout + tile selection augmentations</li> <li><a href="https://arxiv.org/pdf/1509.07107v2.pdf" target="_blank">kappa loss</a></li> <li>majority voting ensemble of 8 <strong>ResNeXt50</strong> based models (<strong>0.917 public and 0.930 private LB</strong>)</li> <li>more advanced tile selection could give a <strong>~0.004 boost</strong> at private LB on average (and a maximum private LB score of <strong>0.941</strong>)</li> </ul> <h2>Introduction</h2> <p>To begin with, I would really like to express my gratitude to the organizers and the Kaggle team for making this competition possible. I really enjoyed working on it and learned many new things. Sharing some of my ideas in this competition, such as the <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/146855" target="_blank">tile pooling based pipeline</a> used by many participants, gave me 3 kernel gold medals, so I have reached the kernel grandmaster rank. And I have received my first solo competition gold medal. 
Also, I would like to say congratulations to all winners and people who received medals.</p> <p>However, this day is quite sad for many participants, especially ones who worked very hard throughout the entire competition and went down on the private LB. The <strong>choice of the metric by the organizers could have been made more wisely</strong>: a 500+500 test set is definitely not enough for QWK. It is not really normal when the LB score changes by 0.005+ when a different seed is used. Things deteriorated when the third digit became available for the LB score: many people got seduced into overfitting LB noise.</p> <p>Below I outline the main things that worked for me. I have tried many more, but most of them never worked, and I couldn't get any further improvement of my LB during the last month.</p> <h2>Main challenges</h2> <p>This competition to a large extent was about dealing with noisy data and train/test bias: as reported by the organizers, the Radboud train data has only about <strong>0.853</strong> QWK, and I expect that the Karolinska train data has 0.95-0.96 QWK. Beyond this, since the Radboud data is graded by students, and the Karolinska data is graded by only a single expert, while the test data is graded by 3 experts, there could be train/test bias because of the subjective opinion of the people grading the train set. Therefore, <strong>solely relying on CV was not really a good strategy in this competition</strong>: at some point I saw a consistent decrease (~10 different models) of LB score when I ran training for longer, while CV was increasing. It confirms the hypothesis about the bias, and the trick was to train models only for a limited number of epochs (even if CV could be increased), 32-48 depending on the setup, to <strong>prevent learning the bias</strong>.</p> <p>Meanwhile, LB was also not the best thing to trust because of severe noise, but some ppl tried to fit the random seed as a hyperparameter 😄. 
The right thing, in my opinion, in this competition was to find the balance between CV and LB, and <strong>trust your intuition and the experience gained in previous competitions</strong>.</p> <h2>Noise</h2> <p>It is the most important part of this competition, in my opinion. After the organizers disclosed that there is a substantial level of noise, especially in the Radboud train data, I explored a number of techniques to deal with the noise: progressive label distillation, JoCoR (Joint Training with Co-Regularization), Co-teaching, negative learning, excluding hard examples from the batch, etc. However, most of them didn't really work well here. The additional challenge is the bias between train and test and the unstable LB. The thing I found to be the best for this data is removal of the uncertain examples from the training set based on the out-of-fold predictions. I excluded ~1400 Radboud and ~300 Karolinska images, so my clean training set contains about 8700 items. Relabeling the excluded images didn't improve the performance. 
At the end of the competition <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/161909" target="_blank">some ppl discovered this trick as well</a>, so I got nervous about my LB position 😬 <br> <strong>This trick gave a ~0.005 public LB boost and a 0.01+ private LB boost.</strong></p> <h2>Pipeline</h2> <p>The method I have used is mainly based on my <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/146855" target="_blank">tile pooling pipeline</a> with several additional tricks:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2Fe6fe32d759a28480343001aa3c661723%2FTILE.png?generation=1588094975239255&amp;alt=media" alt=""></p> <p>Based on my public kernel, one could reach ~0.90 public and 0.91 private LB averaged (over different submissions) using the 36x256x256 tile setup and the kappa loss (see below) without any other changes.</p> <p><a href="https://arxiv.org/pdf/1509.07107v2.pdf" target="_blank"><strong>kappa loss</strong></a>: I have used one minus<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1212661%2F99945116e2c9228e352645ee5f0bdfcc%2F2.png?generation=1595472932908419&amp;alt=media" alt=""></p> <p>(both predictions and labels are centered based on the mean value of the labels). In my experiments I found that kappa loss &gt; sorted <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/155424" target="_blank">binning loss</a> &gt; binning loss &gt; MSE &gt; CE. The only issue with the loss is that bs should be sufficiently large: I needed to pretrain models on low resolution and then continue training on intermediate resolution with bs = 6-8 (<strong>progressive resizing</strong>), while at bs = 1-2 I couldn't get convergence. The predicted value is limited within [-0.5, 5.5] as <code>yp = 6*sigmoid(p) - 0.5</code>. 
In addition, I have a CE aux head for prediction of the Gleason score with 0.08 weight.</p> <p><strong>tile cutout</strong>: <strong>Instead of using all tiles, why not randomly select part of them</strong> (let's say 96 out of 128)? So, I can use a large bs, and the model is regularized in the same way as if cutout were used. It gave me quite a good boost on CV and a fair boost on LB.</p> <p><strong>128x128x128 tiles from intermediate resolution</strong>: It appeared that many smaller tiles work better than 36x256x256. I think that it helps to select the tissue areas more effectively and at the same time prevents overfitting. </p> <p><strong>tile selection augmentation</strong>: The idea is quite simple: instead of <a href="https://www.kaggle.com/iafoss/panda-16x128x128-tiles" target="_blank">generating a single tile set</a>, I can generate 4 by adding sz/2 padding to x, y, or both before cutting the image into tiles and selecting the ones having the most tissue. So, each tile in these 4 datasets will be different, but it is important not to mix tiles between them. During training I select the dataset at random, so effectively I have x4 data. It is an approximation of tile selection with a random offset each time, which would be even more effective (based on my experience in the Severstal competition), though too slow to be used with intermediate-res images. I also tried TTA based on tile selection (as well as selection of the tile set with the largest tissue area out of 4), but I couldn't get any statistically significant improvement.</p> <p><strong>The above tricks gave a ~0.005 boost over my baseline if I consider multiple submissions</strong>. 
Though, the score could change quite a bit from submission to submission.</p> <p><strong>Advanced tile selection</strong>: In addition to my main pipeline, I also tried to use the method proposed by <a href="https://www.kaggle.com/akensert" target="_blank">@akensert</a> <a href="https://www.kaggle.com/akensert/panda-optimized-tiling-tf-data-dataset" target="_blank">here</a> with 128x128 tiles (but didn't use it for my final sub). It gave ~0.934 private LB single-model performance on average for 8 different single 4-fold model subs (<strong>and a maximum of 0.941 private LB</strong>) and ~0.910 average public LB (~0.912 maximum). Too bad that I didn't create an ensemble based on this method. The trick I used in the model for training with such tiles is <strong>n-pooling</strong>: at test time I apply the pooling only to nonempty tiles (n for a particular image, while the batch may contain some extra empty tiles for padding), and training is done with random selection of 96 tiles with repetitions (so I don't consider white tiles, which could change the mean statistics at pooling).</p> <p><strong>High resolution</strong>: I tried to train several models at high res/2 resolution, 128x256x256 tiles. With tile cutout I could use batches of size 4 (and include 64 random tiles). However, the results were slightly worse than the ones for 128x128x128 tiles from the intermediate resolution layer. It indicates that <strong>going to higher resolution would likely provide only a minor boost</strong>, even if I tried to optimize my pipeline for training with small batches. An idea I had is based on having two conv parts for intermediate- and high-res tiles. A first pass through the low-res model selects the tiles having the highest uncertainty. Next, the selected tiles (but in high res) are passed through the second conv part. The produced feature maps are downscaled twice and replace the low-res feature maps that had high uncertainty. 
Finally, pooling and the head are applied to produce the final prediction. This method would allow keeping the overall statistics of the tiles while only correcting the ones the model is not confident about. However, the high level of noise in the training set, a noisy LB inconsistent with the train labeling, and the small potential gain, which would likely be overshadowed by the noise, prevented me from going in this direction. Also, a more complicated pipeline is more likely to break in such a competition, where there is no reliable way to evaluate performance.</p> <p><strong>Augmentation</strong>: I have used Albumentations with the following parameters:</p> <pre><code>Compose([
    HorizontalFlip(),
    VerticalFlip(),
    RandomRotate90(),
    ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.3, rotate_limit=15,
                     p=0.9, border_mode=cv2.BORDER_CONSTANT),
    OneOf([  # off in most cases
        MotionBlur(blur_limit=3, p=0.1),
        MedianBlur(blur_limit=3, p=0.1),
        Blur(blur_limit=3, p=0.1),
    ], p=0.2),
    OneOf([  # off in most cases
        OpticalDistortion(p=0.3),
        GridDistortion(p=.1),
        IAAPiecewiseAffine(p=0.3),
    ], p=0.3),
    OneOf([
        HueSaturationValue(10, 15, 10),
        CLAHE(clip_limit=2),
        RandomBrightnessContrast(),
    ], p=0.3),
], p=1)
</code></pre> <p><strong>Model</strong>: All my models are based on <strong>ResNeXt50</strong>, similar to my public kernel, with batch norm in the head replaced with Group-norm. The optimizer, best-model selection based on CV, and other things are similar to my public kernel, and I was using 32-48 epochs, depending on the setup. In addition, I tried ResNet34, ResNeXt101, and EfficientNet, all of which performed worse. I think ResNet34 may not be capable enough for this task, while ResNeXt101 is too large to train on my computer with a sufficient bs.
However, I would say that <strong>the model is the minor thing in this competition, and the main role is played by considering the noise and by optimizing the pipeline: there is no magic model, but there are hard work and a solid understanding of the task and the data</strong>.</p> <h2>Final ensemble</h2> <p>The submission that gave me the 11th place (<strong>0.930 private LB/0.917 public LB</strong>) is based on a majority-voting ensemble of 8 models (4 fold) with 6 TTA. They are trained with different train/val splits and other modifications in the training procedure. On average, each of the models trained in such a manner gave <strong>~0.930 private and ~0.910 public LB</strong> single-model 4-fold performance (with a <strong>maximum of 0.938 and 0.916</strong>, respectively). So, I got a quite fair score, neither good nor bad luck (and my LB position almost hasn't changed). However, the large number of models was a way to survive in this storm. Another ensemble of mine, of 11 models, not selected as a final one, got 0.934 private LB. And as I mentioned above, more advanced tiling gives about a <strong>0.004 boost</strong> on private LB (with a similar public score to my main approach based on 128x128x128 tiles), with an average of <strong>~0.934</strong> and a maximum of <strong>0.941 private LB</strong>, but unfortunately, I hadn't built an ensemble based on it for my final submissions.</p> <p><strong>The code snippets are available at:</strong> <a href="https://github.com/iafoss/PANDA" target="_blank">https://github.com/iafoss/PANDA</a></p> <p>And I would like to congratulate all participants and wish you the best of luck in the next competitions. I hope some of my tricks will be useful to you.</p>
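The majority-voting ensembling described in the final-ensemble section can be sketched as below. This is a minimal illustration, not the author's actual code; it assumes each model outputs an integer ISUP grade in 0-5:

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_samples) array of integer ISUP grades.
    Returns the most frequent grade per sample (ties go to the lowest grade)."""
    predictions = np.asarray(predictions)
    voted = np.empty(predictions.shape[1], dtype=int)
    for i in range(predictions.shape[1]):
        voted[i] = np.bincount(predictions[:, i], minlength=6).argmax()
    return voted

# e.g. 3 models (each already aggregated over its 6 TTA runs) on 4 slides
preds = [[0, 3, 5, 2],
         [0, 3, 4, 2],
         [1, 3, 5, 2]]
print(majority_vote(preds))  # -> [0 3 5 2]
```

With integer labels, majority voting is often more robust to a single badly miscalibrated model than averaging the raw predictions.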
Prostate cANcer graDe Assessment (PANDA) Challenge
3rd public/20th private solution-segmentation + simple tiles and multiheaded attention
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, thanks to the organizers <a href="/wouterbulten">@wouterbulten</a>, as I understand it is not easy to collect a dataset like this. Second, thanks to my teammates <a href="/rvslight">@rvslight</a> <a href="/aksell7">@aksell7</a> <a href="/ruozha001">@ruozha001</a> who worked hard with me. Third, congrats to <a href="/iafoss">@iafoss</a> for his solo gold, thanks to him for sharing the incredible tile idea, and thanks to all the participants who worked hard on this competition.</p> <p>We suffered in the shakeup, dropping from 3rd to 20th place, but I think our approach is quite interesting, and our selected sub was pretty good and balanced on both public (0.921) and private (0.927) with just 4 models.</p> <p>First I will briefly describe important details of my method using simple tiles, which can generate a 0.927 single-model, single-fold private score. My main idea is to keep things simple, apply attention, and use enough augmentation to avoid overfitting to label noise. My pure pytorch code is released on github at <a href="https://github.com/Shujun-He/PANDA">https://github.com/Shujun-He/PANDA</a> (see folder layer1test4maxmeanwuncertainty for the pure pytorch pipeline; I will clean it up and update later). Later, I will detail the segmentation part of our solution.</p> <p>Our best private score (not selected) was achieved by ensembling 5 models (2 simple tiles and 3 segmented tiles) and using median avg (middle 3).
The best simple-tile (given by iafoss' tile function) setting was 36x256x256, and any number above 36 also works.</p> <h1>Model architecture</h1> <p>Since iafoss released his tile idea, I immediately thought of using attention so the network can learn the importance of different tiles and make predictions based on the set of tiles for each WSI. Here sometimes a full-blown transformer encoder layer is used and sometimes just nn.MultiheadAttention + Mish activation. Also, resnext50 proved to be much better in this competition. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3355848%2F6845534ba574bd21bfa2006e0f7471b1%2Farch.PNG?generation=1595536578888614&amp;alt=media" alt="">. </p> <p>Mathematically, each tile becomes a feature vector after being passed through the backbone, and the transformer encoder layer just operates on the set of feature vectors. 2D positional encoding could be added here, but I did not think it was important based on my reading about prostate cancer diagnosis.
Here we usually used d_model=512 and nhead=8, the same as the default setting of the original Transformer paper.</p> <p>I actually used multitask learning by adding multiple attention classifiers on top of the backbone:</p> <pre><code>class MultiheadAttentionClassifier(nn.Module):
    def __init__(self, num_classes, out_features, ninp, nhead, dropout, attention_dropout=0.1):
        super(MultiheadAttentionClassifier, self).__init__()
        self.attention = nn.MultiheadAttention(ninp, nhead, dropout=attention_dropout)
        self.classifier = nn.Linear(ninp*2, num_classes)
        self.dropout = nn.Dropout(dropout)
        self.mish = Mish()

    def forward(self, x):
        x = x.permute(1, 0, 2)
        x, _ = self.attention(x, x, x)
        x = self.mish(x)
        x = x.permute(1, 0, 2)
        max_x, _ = torch.max(x, dim=1)
        x = torch.cat([torch.mean(x, dim=1), max_x], dim=-1)
        x = self.dropout(x)
        x = self.classifier(x)
        return x
</code></pre> <p>This always resulted in much better CV convergence than just using the isup grade, and lb was always higher than CV, so I stuck with the multitask learning.</p> <h1>Augmentation</h1> <p>Augmentation-wise, I use cutout (replacing the cutout region with just white pixels) 50% of the time, and the other 50% I change the gamma. The tiles always have a 50% chance of being rotated/flipped/transposed. In our N=64 runs, I used a new augmentation which I call whiteout, where I simply turn some tiles white so the model can learn to be invariant to white tiles.
</p> <pre><code>def whiteout(tensor, n=6):
    # turn n randomly chosen tiles fully white (pixel value 1)
    to_drop = np.random.choice(tensor.shape[1], size=n, replace=False)
    tensor[:, to_drop] = 1
    return tensor
</code></pre> <p>Later I found that after whiteout, even when using masked pooling (blocking white tiles), the model gives almost identical results, indicating that our model is invariant to white tiles.</p> <p>MultiheadAttentionClassifier with masked pooling and masked attention:</p> <pre><code>class MultiheadAttentionClassifier(nn.Module):
    def __init__(self, num_classes, out_features, ninp, nhead, dropout,
                 nlayers=1, attention_dropout=0.1):
        super(MultiheadAttentionClassifier, self).__init__()
        encoder_layers = nn.TransformerEncoderLayer(ninp, nhead, ninp*2, attention_dropout)
        self.attention = nn.TransformerEncoder(encoder_layers, nlayers)
        self.classifier = nn.Linear(ninp*2, num_classes)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask):
        x = self.dropout(x)
        x = x.permute(1, 0, 2)
        src_key_padding_mask = mask == 0
        x = self.attention(x, src_key_padding_mask=src_key_padding_mask)
        x = x.permute(1, 0, 2)
        # push masked (white) tiles to a large negative value before the max
        max_x, _ = torch.max(x + src_key_padding_mask.unsqueeze(-1) * (-1e9), dim=1)
        mean_x = torch.sum(x * mask.unsqueeze(-1), dim=1)
        tile_count = torch.sum(mask, dim=1).unsqueeze(-1)
        mean_x = mean_x / tile_count
        x = torch.cat([mean_x, max_x], dim=-1)
        x = self.dropout(x)
        x = self.classifier(x)
        return x
</code></pre> <h1>Progressive upsampling</h1> <p>One thing that really sped up my training was the use of progressive upsampling. Training is usually 45 epochs, with the first ten epochs on half-resolution tiles (downsized with cv2.resize). At epochs 25 and 36, the learning rate is reduced 10 times. This is a cool idea for people with limited computing power, and for people who have a lot, it speeds up training even more.</p> <h1>Segmentation model</h1> <p>To be updated.
But to put it simply, we basically used masks on the lowest-resolution images to train a segmentation model distinguishing whether a particular tile has cancer in it or not. Subsequently, tiles were selected based on which ones were more likely to contain cancer according to the trained segmentation model. This method should be better at predicting classes 2, 3, 4, 5, which was the case in CV at least. Somehow this method did not work so well on the private test set; however, ensembling it with my models that use simple tiles still gave a boost.</p> <p>Combining the segmentation tiles with simple tiles worked well in public and also in private (just not as much as the boost in private given by denoising). Based on lb and cv, we thought that segmentation tiles would have better performance on 2, 3, 4, 5 while simple tiles would be better at 0, 1, so the combination logically made sense.</p> <p>I had some worries that this method may be too biased towards predicting cancer, which is probably the reason it did not work well in private (judging from the single-model scores of segmentation tiles). Surprisingly, on the private test set, when we made a mistake, where we used simple tiles instead of segmentation tiles on a model trained on segmentation tiles, we received a higher private lb in that submission than using segmentation tiles; it was of course not submitted since we identified the error.</p> <h1>Conclusion</h1> <p>In the end, we had multiple moments where we had selected a 0.932 run, which would have resulted in a gold medal rather than a high silver. However, we changed it based on some reasoning that I still don't think is wrong. So, just unlucky.</p> <p>About top solutions, I see most of them using some type of denoising method or just getting lucky based on some public kernels. Of course, using a large ensemble (~10 models) helps as well. What is really surprising to me is how denoising did not bring any recognizable improvement on the public lb.
I cannot help but think that there is some unintended difference between the public and private test sets, because there is no reason denoising shouldn't work for the public lb. In fact, I tried to do some denoising, but the results were not convincing and I stopped, which I do not consider a mistake, because there was no way to validate that anyone's denoising method was indeed working properly.</p>
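The "median avg (middle 3)" ensembling mentioned at the start of this write-up can be sketched as follows. This is a minimal sketch (not our actual code), assuming 5 model predictions per sample:

```python
import numpy as np

def median_avg(preds):
    """preds: (5, n_samples) model predictions.
    Sort the 5 predictions per sample and average the middle 3
    (i.e. drop the min and the max before averaging)."""
    preds = np.sort(np.asarray(preds, dtype=float), axis=0)
    return preds[1:-1].mean(axis=0)

preds = [[1.0, 4.0],
         [2.0, 6.0],
         [2.2, 2.0],
         [2.4, 9.0],
         [8.0, 5.0]]
print(median_avg(preds))  # -> [2.2 5. ]
```

Compared to a plain mean, this trimmed average discards the single most optimistic and most pessimistic model per sample, which helps when one model in the ensemble occasionally produces an outlier.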
University of Liverpool - Ion Switching
17th Private & 17th Public Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the kagglers who participated in this competition, and especially to those researchers who were desperately trying to overcome the 0.946 barrier on the public LB 😊 </p> <p>I am really enthusiastic and satisfied since this is our first kaggle competition.</p> <p>We were using a dataset with <a href="https://www.kaggle.com/cdeotte/data-without-drift">removed drift</a> and <a href="https://www.kaggle.com/teejmahal20/a-signal-processing-approach-kalman-filtering">Kalman filtering</a>.</p> <p>Nearly 4% of the train data was cut as too noisy.</p> <p>Our solution is based on <a href="https://www.kaggle.com/siavrez/wavenet-keras">WaveNet</a> and <a href="https://www.kaggle.com/nxrprime/wavenet-with-shifted-rfc-proba-and-cbr">Wavenet with SHIFTED-RFC Proba and CBR</a>, backed by a callback that provided early stopping in case macro F1 did not improve for 40 epochs, with rollback to the best weights.</p> <p>We modified this estimator with an ensemble technique inspired by the <a href="https://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo method</a>. The base prediction is the rounded median of the predictions.</p> <p>An important step was to split the data into 5 groups w.r.t. the average number of open channels.
This idea led us to the observation that our WaveNet ensemble's predictions provide a lower bound for the group with the highest average number of open channels.</p> <p>The final estimator is <strong>max(predictions)</strong> for the group with the highest average number of open channels and <strong>round(median(predictions))</strong> for everything else.</p> <p>We are going to publish a <a href="https://www.kaggle.com/biruk1230/17th-place-simple-wavenet-solution-ion-switching">full kernel</a> shortly.</p>
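The final estimator described above can be sketched as below. This is an illustrative sketch, not the published kernel: round(median) everywhere, but max for the group with the highest average number of open channels:

```python
import numpy as np

def final_estimator(preds, high_channel_mask):
    """preds: (n_models, n_samples) open-channel predictions.
    high_channel_mask: boolean mask marking samples in the group with the
    highest average number of open channels."""
    preds = np.asarray(preds, dtype=float)
    # default: round the median across models
    out = np.round(np.median(preds, axis=0)).astype(int)
    # high-open-channel group: models underestimate, so take the max
    out[high_channel_mask] = preds[:, high_channel_mask].max(axis=0).astype(int)
    return out

preds = [[3, 9, 0],
         [4, 10, 1],
         [4, 8, 0]]
mask = np.array([False, True, False])
print(final_estimator(preds, mask))  # -> [ 4 10  0]
```

Taking the max in the high-channel group follows directly from the observation that the ensemble tends to produce a lower bound there.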
Tweet Sentiment Extraction
69th solution(silver medal) - Noise modeling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was my first competition, and I learned a lot from many amazing kagglers' discussions and kernels. Without them, I could not have gotten this high in my first competition. So many thanks to the people who shared their kernels, ideas and opinions. This time, I could not share as much as I got from them. I will definitely try to be a kaggler just like them and contribute to the community.</p> <p>It turns out my private score took advantage of selecting a high-performance starter kernel. Huge appreciation to sazuma, but at the same time a bit of disappointment, because most of my private dataset performance didn't come from my modeling trials but from his work. However, I think my modeling approach deserves to be shared, and I believe there was a chance to improve beyond the current score if it had been properly re-implemented with some tweaks. I couldn't seize the chance because of some mistakes found in my implementation and a lack of remaining time.</p> <h1>1. Text Extraction Model</h1> <h3>1. Roberta based <code>start_idx</code>, <code>end_idx</code> extraction 10-fold model + Pseudo Label</h3> <ul> <li>Adopted from sazuma's starter kernel <a href="https://www.kaggle.com/shoheiazuma/tweet-sentiment-roberta-pytorch">Tweet Sentiment RoBERTa PyTorch</a></li> <li>The public score of the 10-fold model is 0.714, boosted to 0.716 by pseudo labels using the <a href="https://www.kaggle.com/kazanova/sentiment140">Sentiment140 dataset</a> - it turns out the 10-fold model got 0.717 and pseudo labeling degrades it to 0.716. The number of pseudo labels is 200,000, which is about 9-10 times the size of one fold's training set.
The submission with 1,500,000 pseudo labels had the same jaccard score on the private dataset. It is likely that pseudo labeling has no advantage for this competition.</li> </ul> <h1>2. Noise Model</h1> <p>As discussed in <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/154415">The magic</a>, noise is my next modeling target for this competition. Some random weird noise such as "onna" (gonna) and "e nice" (nice) appears in the dataset. (It is now revealed that the noise is not random; see <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159254">Some call it magic</a> for how it is generated. Not confirmed by the competition host, but it seems reasonable, and the lesson learned is the importance of deep-dive data exploration.)</p> <p>To make my model output the noised <code>selected_text</code>s, post-processing is applied with the following procedure:</p> <ol> <li>Train the model with cleansed <code>selected_text</code> labels</li> <li>Predict for the test samples</li> <li><em>"Selectively"</em> inject the noise into the predictions</li> </ol> <p>For 1, we need cleansed text. The cleansing is done with simple preprocessing: completing the partial words by matching them against the space-separated <code>text</code>. Also, add three more columns, <code>shifted</code>, <code>front_shifts</code>, <code>back_shifts</code>. For example:</p> <ul> <li>"onna" cleansed to "gonna": +1 char-level shift at the front -&gt; <code>front_shifts</code>=1, <code>back_shifts</code>=0, <code>shifted</code>=1</li> <li>"o sad" cleansed to "soo sad.": +2 char-level shifts at the front and +1 char-level shift at the back -&gt; <code>front_shifts</code>=2, <code>back_shifts</code>=1, <code>shifted</code>=1</li> <li>"happy": no cleansing needed, no shifts needed -&gt; <code>shifted</code>=0</li> </ul> <p>For 3, I designed two roberta-based models:</p> <h2>a. Noise label row classifier</h2> <ul> <li>Decides which rows to inject noise into</li> <li>Trained with <code>text</code> as input, <code>shifted</code> as label</li> <li><p>10-fold CV gives 88% accuracy</p> <h2>b.
Noise resurrector</h2></li> <li><p>Predicts how many character-level shifts occurred at each end to make the cleansed words</p></li> <li>Trained with <code>text</code> and <code>selected_text</code> as input, <code>front_shifts</code> and <code>back_shifts</code> as labels</li> <li>10-fold CV gives 0.5 full-matching accuracy, a jaccard improvement of 0.2 after noise resurrection compared to before (the cleansed one)</li> <li>L1 loss significantly boosted its performance (as mentioned below, maybe this is because of some outliers due to my buggy preprocessing code)</li> </ul> <p>Ideally, if the model trained with cleansed data predicts well, I could boost my total jaccard by +0.02, since about 10% of the train data has <code>shifted</code>=1. However, it didn't gain that much, only +0.001. Possible reasons are:</p> <ul> <li>The model jaccard is around 0.717, so the noise was often injected into predictions that were already wrong.</li> <li>My preprocessing code is naive. In the example "o sad", the answer could be "sad." but my preprocessing code makes it "soo sad." because it just fills in the incomplete words.</li> <li>Lack of hyperparameter tuning. I wish I had more time.</li> </ul> <h1>3. 10-folded ensemble</h1> <ul> <li>A majority-vote ensemble works better for me than logits averaging.</li> <li>I ensembled the pseudo-labeled model's output set from 1, one output set from a model trained on cleansed <code>selected_text</code>, and the two post-processed versions of each of them.</li> <li>Public LB is 0.712 but private LB is 0.718. (Lesson learned: never trust the public LB 100%.) <em>It was kind of a risky bet to choose this ensemble model, but I made it, haha. If I had chosen another, my score would be ~0.716</em> 🤣</li> </ul> <h1>4.
Additional techniques</h1> <ul> <li>Reinitializing the pretrained model's final layers. Described here: <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/157806">How to reinitialize specific layer weights?</a></li> <li>Adding extra tokens to the RoBERTa-base pretrained model. Tested on the public LB, it gave +0.001.</li> </ul> <p><strong>Hope you enjoyed the competition, and thank you for reading. Any feedback or questions are welcome!</strong></p>
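The noise-resurrection post-processing (step 3 of the noise model) can be sketched as below. This is a hypothetical illustration of the idea, not the author's implementation: given a cleansed predicted span and the predicted shift counts, trim the span's boundaries in the original tweet to re-create the noisy selected_text:

```python
def resurrect_noise(text, cleansed_pred, front_shifts, back_shifts):
    """Re-inject the label noise: drop `front_shifts` characters from the
    front and `back_shifts` characters from the back of the predicted span."""
    start = text.find(cleansed_pred)
    if start == -1:            # span not found: leave the prediction as-is
        return cleansed_pred
    end = start + len(cleansed_pred)
    return text[start + front_shifts: end - back_shifts]

# the two examples from the write-up
print(resurrect_noise("i am gonna be late", "gonna", 1, 0))    # -> onna
print(resurrect_noise("im soo sad. today", "soo sad.", 2, 1))  # -> o sad
```

In the full pipeline this would only be applied to rows the noise-label classifier flags as `shifted`=1, using the shift counts predicted by the noise resurrector.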
Prostate cANcer graDe Assessment (PANDA) Challenge
Silver medal solution -> 24 place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello to all participants, organizers and people who are here to learn from the successful solutions or to exchange opinions.</p> <p>This was an interesting competition with a lot of challenge even for experienced computer vision programmers. The main challenge was the fact that the provided data was limited and noisy. I am happy with the result, this being my 3rd consecutive Kaggle medal (after gold at Bengali and bronze at M5). I will mention some of the things I tried here, with a comment about how each technique worked for me.</p> <p><strong>Pre-processing techniques</strong></p> <p>a) Tiles design</p> <ul> <li>256x256x36 tiles <em>(best results)</em></li> <li>256x256x49 tiles <em>(a lot of white space, and the image was too big to sustain with the amount of provided training data)</em></li> <li>128x128x144 tiles <em>(lower results than the 256x256x36 tiles; probably some patterns are interrupted by making the tiles smaller)</em></li> <li>512x512x9 tiles <em>(also lower results than 256x256x36; the tiles being so big, they had a lot of white space when selecting them)</em></li> </ul> <p>Additional comment: <em>256x256x36 seemed to be the sweet spot between making the tiles too big, where they will have a lot of empty pixels, and 128x128x144, where the tile patterns are interrupted</em></p> <p>b) Data selection</p> <p>Due to noisy labels, I designed a system to eliminate data which had the biggest probability of being labeled wrong.
I used the best 5-fold ensemble to predict on the training data, averaged the predictions, and eliminated data where the difference between prediction and real label was bigger than a specific threshold. The thresholds tested were:</p> <ul> <li>2 (137 samples eliminated)</li> <li>3 (39 samples eliminated)</li> <li>4 (7 samples eliminated)</li> </ul> <p>Results comment: <em>Best CV results were obtained when I eliminated data with abs(prediction - true label) &gt; 3, removing 39 samples</em></p> <p><strong>Model architectures</strong></p> <ul> <li>Efficientnet B0</li> <li>Efficientnet B1</li> <li>Efficientnet B2</li> <li>Efficientnet B3</li> <li>Efficientnet B4</li> <li>SE_Resnext50</li> </ul> <p>Result comment: <em>Best results came from the B2 architecture; B3-B4 led to overfitting, and the rest simply did not work for me</em></p> <p><strong>Data Augmentation</strong></p> <ul> <li>Transpose</li> <li>Vertical Flip</li> <li>Horizontal Flip</li> <li>RandomRotate90</li> <li>Rotate at a random angle</li> <li>Shift the starting position pad when making tiles</li> </ul> <p>Result comment: <em>All the augmentations were applied at 2 levels: tile level + after the tile assembly. In the best results I used: Transpose + Vertical Flip + Horizontal Flip</em></p> <p><strong>Optimizer + scheduler</strong></p> <ul> <li>Adam + GradualWarmupScheduler + CosineAnnealingLR</li> <li>RangerLars + GradualWarmupScheduler + CosineAnnealingLR</li> <li>RangerLars + OneCycleLR</li> <li>Adam + GradualWarmupScheduler + ReduceLROnPlateau</li> </ul> <p>Result comment: Although I tried to use RangerLars with a variety of scheduler combinations, the best result was obtained with the good old Adam (Adam + GradualWarmupScheduler + CosineAnnealingLR)</p> <p><strong>TTA</strong></p> <p>TTA composed of:</p> <ul> <li>Transpose</li> <li>Vertical Flip</li> <li>Horizontal Flip</li> <li>Shift the starting position pad when making tiles</li> </ul> <p>Result comment: <em>I usually recommend using similar techniques for TTA to what you use for training augmentation; otherwise, at prediction time the model will see images it was never trained on.
This is why I used: Transpose + Vertical + Horizontal Flip</em></p> <p>Congratulations to all participants and to the organizers!!!</p> <p>See you at the next computer vision competition!</p>
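The data-selection step described above can be sketched as follows (a minimal sketch, not the author's code): drop samples whose averaged out-of-fold ensemble prediction disagrees with the label by more than the threshold:

```python
import numpy as np

def keep_mask(labels, oof_preds, threshold=3):
    """True for samples to keep: |averaged 5-fold prediction - label| <= threshold.
    threshold=3 is the write-up's best setting (it removed 39 samples)."""
    return np.abs(np.asarray(oof_preds) - np.asarray(labels)) <= threshold

labels = np.array([0, 5, 1, 4])          # ISUP grades
preds = np.array([0.2, 1.3, 1.1, 3.6])   # averaged out-of-fold predictions
print(keep_mask(labels, preds))  # -> [ True False  True  True]
```

The training set would then be re-filtered with `train_df[keep_mask(...)]` before retraining, which is how a small number of likely mislabeled slides gets removed.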
TReNDS Neuroimaging
[Public: 0.15881, Private:0.15864] GNN Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to the winners and to all the participants who got medals! And thanks to the host and the Kaggle Team for holding this tough but interesting competition.</p> <p>Maybe, as stated in <a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/162739">this post</a>, my final submission was not in time, so I missed a silver medal 😅 </p> <p>But I think my solution may be useful for someone, and I would like to share a summary here.</p> <h2>Overview</h2> <ul> <li>Graph Neural Network approach (single model, 5-fold averaging)</li> <li>developed a model for each target (<code>age</code>, <code>domain1_var1</code>, <code>domain1_var2</code>, <code>domain2_var1</code>, <code>domain2_var2</code>)</li> <li><a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/147128">converted fnc data into symmetric matrices</a> (code is <a href="https://www.kaggle.com/ttahara/technique-for-converting-fnc-to-matrix">here</a>) and used them as adjacency matrix weights</li> <li>extracted (53, 512)-dim features with a pretrained resnet10-3D from <a href="https://github.com/Tencent/MedicalNet">MedicalNet</a> (just using the feature extractor, <strong>not training on competition data</strong>)</li> </ul> <p>This approach works well for <code>age</code> but not for the others, so I will show the architecture for <code>age</code>.</p> <h2>Model Architecture (for age)</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F473234%2Ff34744551fb5dcc574d3e14f037120d3%2Fsolution_pipline.png?generation=1593505309611553&amp;alt=media" alt="architecture"></p> <h2>Important things
(I think)</h2> <h3><strong>Independent</strong> edge parameters</h3> <p>In general GNNs, edge parameters are shared in each layer. But I think relations between components cannot be represented well by shared parameters.</p> <p>To reduce the number of parameters, I implemented this in a node-wise manner, in accordance with the following paper:</p> <p><a href="https://dl.acm.org/doi/10.1145/3308558.3313444">Dressing as a Whole: Outfit Compatibility Learning Based on Node-wise Graph Neural Networks</a></p> <h3>Regularization by dropping <strong>edges</strong></h3> <p>This is a simple but powerful method for regularizing GNNs. I think my model couldn't have achieved this score without DropEdge.</p> <p>If you want to know the details, see this:</p> <p><a href="https://iclr.cc/virtual_2020/poster_Hkx1qkrKPr.html">DropEdge: Towards Deep Graph Convolutional Networks on Node Classification</a></p> <p><br> I learned many things about GNNs through this competition.</p> <p>Thank you to the host and the Kaggle team again for giving me this opportunity 😃 </p>
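The fnc-to-matrix conversion listed in the overview can be sketched as below. This is an illustrative sketch; the exact ordering of the competition's fnc columns is an assumption here (the linked kernel has the actual mapping):

```python
import numpy as np

def fnc_to_matrix(fnc_row, n_components=53):
    """Unpack the 1378 flattened upper-triangular FNC correlations into a
    symmetric (53, 53) weighted adjacency matrix (diagonal left at zero)."""
    mat = np.zeros((n_components, n_components))
    iu = np.triu_indices(n_components, k=1)  # strictly upper-triangular pairs
    mat[iu] = fnc_row
    return mat + mat.T                       # mirror to make it symmetric

row = np.arange(1, 53 * 52 // 2 + 1, dtype=float)  # dummy 1378 FNC features
m = fnc_to_matrix(row)
print(m.shape, np.allclose(m, m.T))  # (53, 53) True
```

The resulting matrix is what the GNN would consume as edge weights, with the 53 ICA components as graph nodes.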
TReNDS Neuroimaging
35th Place Solution- Magic PP - Site 2 correction
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Part 1</strong> First of all, congratulations to my teammate <a href="/prateekagnihotri">@prateekagnihotri</a> for his first silver medal, and a lot of thanks for coming up with excellent stacking models. Congratulations to all the teams that did well on the private lb. Also, credit should be given to <a href="/david1013">@david1013</a> for the high-scoring public kernel that was used in the final blend. The best part was that we did all this in the last 10 days, and with just tabular data. We wanted to explore fMRI, but due to limited time, we couldn't go there. 1. <strong>0.0003 public lb boost - Preprocessing for known site 2 samples</strong> We tried to perform a gaussian fit on the IC and fnc features for both site 1 and site 2. The std was nearly the same, so it looked like we could approximate the site 1 features by just adding a mean shift to the site 2 features. <code> Mean Shift = Mean(Site 1 samples) - Mean(Site 2 samples) Generated Site 1 sample = Site 2 sample + Mean Shift </code> And then the prediction for known site 2 samples was done using the generated site 1 sample.</p> <p>2. <strong>Calculating the number of site 2 samples</strong> We simply solved this equation for all the features that showed a significant shift and plotted the histogram of the x values obtained from all features: <code> Mean(Unknown Samples) = x * Mean(Site1 samples) + (1-x) * Mean(Site2 samples) Here, x is the percentage of site 1 samples in the unknown dataset. </code> It was clear that 20-40% of the dataset belongs to site 2.
</p> <p>3. <strong>0.0002 public lb boost - Postprocessing for unknown site 2 samples</strong> Our site classifier was an ensemble of:</p> <ul> <li>LightGBM on all features</li> <li>Stacked Ensemble on all features with generated samples. The additional site 2 samples were generated from known site 1 using mean shifts.</li> <li>Bayesian Model on the top 30 features.</li> </ul> <p>The individual probabilities were on different scales, especially for model 1, due to class imbalance. So, ensembling was done by ranking the samples according to their probabilities and then taking the weighted mean of all the ranks. The top 500 ranks were assigned probability 1. The next 500 were assigned 0.9, and the next 0.75. Similarly at the bottom, samples were assigned 0, 0.1, 0.25. All the rest of the samples were assigned probability 0.5.</p> <p>On the private set, the last one actually worked better than the blend, but we didn't select that.</p> <p>With the probabilities, the final predictions were calculated as: <code>Prediction = p1 * site1_pred + p2 * site2_pred</code> where site1_pred was the actual prediction by the model and site2_pred was the prediction made assuming the sample comes from site 2, done in a manner similar to (1).</p> <p>4. <strong>Models</strong> Blend Ensemble of: 1. Public LB with PP 2. 3-level stacking (43&gt;7&gt;1) 3. 3-level stacking (29&gt;5&gt;1)</p> <p>Models used in Stacks 2 &amp; 3: Ridge, Bayesian Ridge, Bagging Regressor, SVR, NuSVR, Kernel Ridge, Neural Networks, Lasso, E-Net, GLM</p> <p>All three scored 0.1585 on the public lb; the blend ensemble gives <strong>0.1584</strong></p> <p>I am new to writing detailed solutions. So, pardon me for any mistakes!!</p> <p><em>More details about the stacking model and some more tricks that worked will be posted soon.</em></p> <p><strong>Full Detailed Solution with code will be uploaded soon</strong></p>
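The site-2 correction from part 1 can be sketched as below (a minimal sketch with synthetic data, not our actual code): since the stds matched across sites, a per-feature mean shift is enough to map site 2 samples onto site 1:

```python
import numpy as np

def site2_to_site1(site1_feats, site2_feats):
    """Approximate site-1 versions of site-2 samples by adding the
    per-feature mean shift between the two sites."""
    mean_shift = site1_feats.mean(axis=0) - site2_feats.mean(axis=0)
    return site2_feats + mean_shift

rng = np.random.default_rng(0)
site1 = rng.normal(loc=1.0, scale=0.5, size=(100, 3))  # same std,
site2 = rng.normal(loc=0.3, scale=0.5, size=(100, 3))  # shifted mean
corrected = site2_to_site1(site1, site2)
# after correction, the feature means match site 1 exactly
print(np.allclose(corrected.mean(axis=0), site1.mean(axis=0)))  # True
```

Note this only corrects the first moment; it works here precisely because the fitted gaussians differed mainly in their means, not their spreads.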
University of Liverpool - Ion Switching
22nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>My first silver medal!</h2> <h2>And I became a Kaggle Expert!</h2> <p>Thank you very much!!!!!</p> <p>I'm a little late in publishing my solution. I'm sorry.</p> <p>I've tried a few ideas, but I'd like to highlight the one that worked the best for my score in particular. <br> The other ideas I have are ones that others will surely be trying.</p> <p>I was very inspired by this notebook (<a href="https://www.kaggle.com/jt120lz/open-channel-clear-plot">https://www.kaggle.com/jt120lz/open-channel-clear-plot</a>). Thank you, liuze!!!!</p> <h3>train data</h3> <p>The following graph shows the signal replaced by the average value per open_channels. For example, the red line in Batch5 shows the average value of open_channels = 5 for the signal in Batch5.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2F9150a9269a3cd46e5afcf5c4d3e33391%2F(1).png?generation=1590758184980770&amp;alt=media" alt=""></p> <p>As you can see, Batch4 and Batch9 clearly behave differently than the other batches. <br> I've corrected this discrepancy. This is the only thing to do with the train data. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2Fe3a463a9006a1131b65f4a541c29e977%2F(2).png?generation=1590758231124352&amp;alt=media" alt=""></p> <h3>test data</h3> <p>The group is defined as follows. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2F88a196c4a2d0fcde8d75749aa8f2dec9%2FScreenshot%20at%2011-09-03.png?generation=1590804592985105&amp;alt=media" alt=""></p> <p>The problem is the test data: it doesn't have open_channels information, so the above method can't be used.</p> <p>However, what needs to be done is immediately apparent when you look at the kde plot. Compare the test data (group 4) with the modified train data (group 4). <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2F8897b4164ad3cdfe4cb97fa0f70bd67e%2F2.png?generation=1590804832825827&amp;alt=media" alt=""></p> <p>It's clear that we have work to do. Just fix the above misalignment. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2640832%2F468f71b3681175bfd8932379ebe7d980%2F.png?generation=1590804860802524&amp;alt=media" alt=""></p> <p>That's nice.</p> <h2>code</h2> <p>All you have to do is process the following! Thanks!!</p> <pre><code># --- train ---
off_set_4 = 0.952472 - (-1.766044)
off_set_9 = 0.952472 - (-1.770441)

# batch4
idxs = df_tr['batch'] == 4
df_tr['signal'][idxs] = df_tr['signal'][idxs].values + off_set_4

# batch9
idxs = df_tr['batch'] == 9
df_tr['signal'][idxs] = df_tr['signal'][idxs].values + off_set_9

# --- test ---
off_set_test = 2.750
df_te['signal'] = df_te['signal'].values
idxs = df_te['group'] == 4
df_te['signal'][idxs] = df_te['signal'][idxs].values + off_set_test
</code></pre>
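The hard-coded constants in the offsets (e.g. 0.952472 and -1.766044) come from comparing mean signal levels at a common open_channels value between a well-behaved batch and a drifted one. A toy sketch of that derivation, with invented numbers:

```python
from statistics import mean

# Toy signal samples keyed by (batch, open_channels); numbers are invented,
# not the real competition data.
samples = {
    (5, 5): [0.95, 0.96, 0.94],     # well-behaved reference batch
    (4, 5): [-1.77, -1.76, -1.77],  # drifted batch (like batch 4 or 9)
}

ref = mean(samples[(5, 5)])
drift = mean(samples[(4, 5)])
offset = ref - drift  # analogous to off_set_4 = 0.952472 - (-1.766044)

# shifting the drifted batch by the offset re-aligns its mean with the reference
corrected = [v + offset for v in samples[(4, 5)]]
```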
TReNDS Neuroimaging
36th Place Solution (with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone, first of all I would like to thank my great teammates <a href="/joatom">@joatom</a>, @mks2192 and <a href="/kpriyanshu256">@kpriyanshu256</a> for such an interesting competition journey. And congratulations to all the winners!</p> <p>It's been a great competition; my team has spent a lot of time on it, and we're glad to share that all the hard work paid off.</p> <h2>Our Approach</h2> <p>Our final solution is an ensemble of the following models or submissions: * <a href="https://www.kaggle.com/joatom/trends-tabular-nn-0-159">simple tabular nn</a> as mentioned by <a href="/joatom">@joatom</a> in the comments below. * GCN-Tabular Learner by <a href="/kpriyanshu256">@kpriyanshu256</a> * <a href="https://www.kaggle.com/rohitsingh9990/trends-pycaret-training-inference">TReNDS - PyCaret (Training + Inference)</a> by <a href="/rohitsingh9990">@rohitsingh9990</a> * <a href="https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging">RAPIDS SVM on TReNDS Neuroimaging</a> by <a href="/aerdem4">@aerdem4</a> * and a bunch of other models trained on tabular data only.</p> <p>Both the <a href="https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging">RAPIDS SVM</a> and <a href="https://www.kaggle.com/rohitsingh9990/trends-pycaret-training-inference">TReNDS - PyCaret</a> notebooks are already publicly available, so there is not much to explain.</p> <h3>Brief Overview of <code>simple tabular nn</code>:</h3> <ul> <li>A detailed explanation of this approach is posted by <a href="/joatom">@joatom</a> in the comments below. 
<blockquote> <p>Note: This is the most important approach; on its own, when ensembled with the public RAPIDS SVR kernel, it can easily give a silver medal.</p></blockquote></li> </ul> <h3>Brief overview of our <code>TReNDS GCN-Tabular Learner</code>:</h3> <ul> <li>The scenario of using a GCN appears if we treat the correlation values between the fnc entities (e.g. SCN(53)) as edge weights. </li> <li>A little EDA showed that there are ((53*52)/2) columns (excluding the Id), i.e. a completely connected graph, and there are in total 53 unique entities. There are also 53 spatial maps for each patient. </li> <li>We assume <code>ICN_numbers.csv</code> gives a mapping of each entity to the corresponding map. Hence, we create a graph of 53 nodes for each patient. To create features for the nodes, we rely on domain-specific knowledge ( <a href="https://www.kaggle.com/kpriyanshu256/trends-image-features-53-100">https://www.kaggle.com/kpriyanshu256/trends-image-features-53-100</a> ). </li> <li>We extracted some features from each map using nilearn ( <a href="https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_schaefer_2018.html#nilearn.datasets.fetch_atlas_schaefer_2018">https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_schaefer_2018.html#nilearn.datasets.fetch_atlas_schaefer_2018</a> and <a href="https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_talairach.html#nilearn.datasets.fetch_atlas_talairach">https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_talairach.html#nilearn.datasets.fetch_atlas_talairach</a> ). </li> <li>We have used this for the GCN. The features from the GCN are collected and combined with the <code>fnc.csv</code> and <code>loading.csv</code> features. The combined features are passed through dense layers to get the output. 
</li> <li>Notebook : <a href="https://www.kaggle.com/kpriyanshu256/trends-gcn-tab-oof/">https://www.kaggle.com/kpriyanshu256/trends-gcn-tab-oof/</a></li> </ul> <h3>Post-Process</h3> <p>Finally, after ensembling, we did a little post-processing that we came up with on the last day of the competition, which gave us a boost of approx 0.002.</p> <p><code> Before transformation, the age in the training set is rounded to nearest year for privacy reasons. However, age is not rounded to year (higher precision) in the test set. Thus, heavily overfitting to the training set age will very likely have a negative impact on your submissions. </code> This is the information available on the data page, and we were wondering how we could use it to get the best out of it. So what we did was multiply the <code>age</code> we get from our ensembled models by a factor of <code>1.01</code>.</p> <p>Unfortunately we tried this on the last day of the competition and didn't have any submissions left to improve it further; I wish we had tried it earlier.</p> <p>Link to our final submission kernel: <a href="https://www.kaggle.com/rohitsingh9990/36th-place-trends-ensemble?scriptVersionId=37747366">https://www.kaggle.com/rohitsingh9990/36th-place-trends-ensemble?scriptVersionId=37747366</a></p>
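The age post-processing amounts to one line on the ensemble output; a minimal sketch (the dict stands in for the real submission frame, and all values are made up):

```python
# Training ages are rounded to whole years, test ages are not, so the
# ensemble's age output is scaled by 1.01 as a simple de-rounding correction.
ensemble_preds = {
    "age": [57.3, 63.1, 48.9],
    "domain1_var1": [51.2, 49.7, 50.1],  # the other targets stay untouched
}

ensemble_preds["age"] = [a * 1.01 for a in ensemble_preds["age"]]
```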
Plant Pathology 2020 - FGVC7
20th Place: Bilinear CNN with FL + LS
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1234220%2F9d8ab256053f1510a99a6924c08cca2a%2FBiLinearModel.PNG?generation=1590573890707261&amp;alt=media" alt=""> Kaggle Notebook: <a href="https://www.kaggle.com/jimitshah777/bilinear-efficientnet-focal-loss-label-smoothing">https://www.kaggle.com/jimitshah777/bilinear-efficientnet-focal-loss-label-smoothing</a></p> <p>This was one of the ideas I used during my experimentation.</p> <p><strong>Apart from this I tried various other things, like:</strong></p> <ul> <li>Different CNN models (B0...B7)</li> <li>Different image sizes</li> <li>Different Focal Loss parameters</li> <li>Different Label Smoothing parameters</li> <li>Combination of Focal Loss + Label Smoothing</li> <li>Various augmentations</li> <li>Early stopping</li> <li>Late stopping</li> <li>Combination of various CNNs as the backbone for the Bilinear CNN</li> <li>And a combination of everything that worked</li> </ul> <p>Finally I submitted an ensemble of all my solutions which had a public score greater than 0.970.</p> <p><strong>Final Note:</strong> Congratulations to all the winners! It was a great experience for me. I felt like I was learning something new day after day, submission after submission. Best of luck in future competitions...!</p>
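As a rough illustration of combining Focal Loss with Label Smoothing on a multi-label head, here is a plain-Python sketch; the gamma and smoothing values are illustrative defaults, not the competition settings:

```python
import math

def focal_loss_with_smoothing(y_true, p, gamma=2.0, eps_ls=0.1):
    """Binary focal loss computed on label-smoothed targets.

    y_true: hard 0/1 labels per class; p: predicted probabilities per class.
    gamma and eps_ls are illustrative, not the values used in the write-up.
    """
    total = 0.0
    for t, q in zip(y_true, p):
        t = t * (1.0 - eps_ls) + 0.5 * eps_ls  # label smoothing toward 0.5
        # the (1 - q) ** gamma / q ** gamma factors down-weight easy examples
        total += -(t * (1.0 - q) ** gamma * math.log(q)
                   + (1.0 - t) * q ** gamma * math.log(1.0 - q))
    return total / len(y_true)
```

A confident correct prediction incurs a much smaller loss than a confident wrong one, while the smoothed targets keep the model from being pushed toward extreme probabilities.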
TReNDS Neuroimaging
7th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I have a 3D Conv Net that originated from @shentao 's great baseline. I want to thank him for that. I made some small architectural changes on resnet10 and started to boost from median predictions. It converged much faster that way, so I could make the augmentation less severe. It achieved 0.1695 CV without using any other features.</p> <p>I used its OOF in my models. Then I have 4 almost equally performing models on top of these OOF predictions and loading + fnc features: - LGB: Before feeding the features, I oversample the loading features 50x and then use a very low feature sampling fraction like 0.015. - MLP: Different dropout rates for 3D CNN OOF, fnc and loading features. - SVM: Same as <a href="https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging">https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging</a> - Ridge: An overfitting ridge contributed the most, compared to a robust BayesianRidge or HuberRegressor.</p> <p>I narrowed the CV-LB gap by using Adversarial Validation predictions as sample weights. At the end, I had 0.1562 CV and 0.1566 LB. I was expecting a shake-up, thinking that there would be even more site2 in the private LB (Kaggle usually does it that way), but I guess that wasn't the case and this trick lost its importance.</p>
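Why oversampling the loading features 50x and then using a tiny feature fraction works: with many copies, the chance that at least one copy of a loading feature enters each tree is far higher than for an fnc feature. A back-of-envelope check (column counts assumed from the competition data; treating column draws as independent is only a rough approximation of LightGBM's without-replacement sampling):

```python
# ~26 loading columns duplicated 50x vs ~1378 fnc columns, per-tree
# feature fraction 0.015 (all counts assumed, not taken from the write-up).
n_loading, n_fnc, copies, frac = 26, 1378, 50, 0.015

n_total = n_loading * copies + n_fnc   # columns the booster actually sees
n_sampled = n_total * frac             # expected columns drawn per tree

# chance that at least one copy of a given loading column enters a tree
p_loading = 1 - (1 - frac) ** copies
# chance that a given fnc column enters a tree
p_fnc = frac
```

So a loading feature has roughly a coin-flip chance of appearing in each tree versus about 1.5% for an fnc feature, which rebalances the two blocks without dropping any fnc information.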
TReNDS Neuroimaging
3rd Place Solution (+ github code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to thank the hosts for organizing this interesting competition. This competition was a lot of fun because we got to try many techniques! And I'm so happy with my first solo gold!</p> <h1>Solution overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1227363%2F881ec03b0fff4b96bbb71a2521d174f2%2Fsolution%20(2).png?generation=1593524751624771&amp;alt=media" alt=""></p> <h1>Solution Details</h1> <p>I think that the point of this competition is stacking and diversity of models. So, I built many models as shown below.</p> <h3>1. Simple models by table features.</h3> <p>First, I trained simple models on the fnc and loading features. - SVM (rbf kernel and linear kernel) - NuSVM - KNN - Ridge - BaggingRegressor(Ridge)</p> <h3>2. NN models by fMRI data.</h3> <h3>2-1. 3D voxel</h3> <p>I applied sample-wise and component-wise normalization.</p> <ul> <li>ResNet18 replaced by 3D modules.</li> <li>Simple 3dConvNet (fewer parameters than ResNet18) <ul><li>trained by all labels</li> <li>trained by age, domain1_var1 and domain1_var2</li></ul></li> <li>(2 + 1)D CNN (suggested in <a href="https://openaccess.thecvf.com/content_cvpr_2018/papers/Tran_A_Closer_Look_CVPR_2018_paper.pdf">A Closer Look at Spatiotemporal Convolutions for Action Recognition</a>) <ul><li>better than ResNet18</li></ul></li> </ul> <h3>2-2. masked data</h3> <p><a href="https://nilearn.github.io/modules/generated/nilearn.input_data.NiftiLabelsMasker.html#nilearn.input_data.NiftiLabelsMasker">Masker of Nilearn</a> can extract signals from a brain parcellation. 
Then, I used the <a href="https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_schaefer_2018.html#nilearn.datasets.fetch_atlas_schaefer_2018">schaefer_2018</a> parcellation and extracted 400 ROIs from the fMRI (reducing shape [53, 52, 64, 53] to [53, 400]). Finally, I built some models on these [53, 400] signals.</p> <ul> <li>CNN models (input channel is 400) <ul><li>1D ResNet18</li> <li>1D densenet121</li> <li>1D ResNest 14d</li></ul></li> <li>Transformer <ul><li>No PositionalEncoding</li> <li>Self Attention of components by components (feature axis is 400)</li></ul></li> <li>GIN (Graph Isomorphism Network) <ul><li>Implemented with <a href="https://www.dgl.ai/">Deep Graph Library</a></li> <li>I used a network architecture like in <a href="https://arxiv.org/pdf/2001.03690.pdf">this paper</a>.</li></ul></li> </ul> <h3>3. LGBM and XGBoost with table features, NN features and voxel statistical features.</h3> <ul> <li>I computed voxel statistical features (mean, max, kurt, skew) of each component. <ul><li>I used NiftiMasker for extracting the non-zero area.</li></ul></li> <li>LGBM(XGBoost) model with hidden features of the (2 + 1)D CNN</li> <li>LGBM(XGBoost) model with hidden features of the Simple 3dConvNet</li> <li>LGBM(XGBoost) model with hidden features of the GIN</li> </ul> <p>I selected 1024 features for each model by feature importance and reduced the number of leaves to avoid over-fitting (n_leaves = 2).</p> <h3>4. Stacking</h3> <p>I just looked at the distribution of predictions in the training and test data and erased predictions if there was too large a discrepancy. And I made interaction features between domains (sum, abs(diff), multiply). 
After the preprocessing, I used Linear SVM and LGBM for stacking and finally blended the predictions.</p> <h1>Some Insights</h1> <ul> <li>3DCNN with table features had better CV, but this model was useless when stacking.</li> <li>Removing noisy predictions had an important role in the shake-up.</li> <li>The MONAI framework is too slow. <a href="https://github.com/PhoenixDL/rising">Rising</a> is faster and I used it. But I don't know if it helped to boost the score...</li> </ul> <h1>What didn't work</h1> <ul> <li>Pseudo labels</li> <li>Ridge stacking, XGBoost stacking, second-level stacking</li> <li>GNN with fnc attention</li> <li>Selecting fewer than 1024 features for lgbm and xgboost</li> </ul> <p>[Updated] I uploaded the code to <a href="https://github.com/shimacos37/kaggle-trends-3rd-place-solution">https://github.com/shimacos37/kaggle-trends-3rd-place-solution</a> with a Dockerfile. This code uses several tools. 1. I used <a href="https://github.com/PyTorchLightning/pytorch-lightning">pytorch-lightning</a> to wrap the training and evaluation code and <a href="https://github.com/facebookresearch/hydra">hydra</a> to manage parameters. 1. I used <a href="https://www.wandb.com/">Weights &amp; Biases</a> to manage experiments and GCS to save results. Weights &amp; Biases is better for me than the others (tensorboard, mlflow, etc ...)</p>
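The voxel statistical features from step 3 (mean, max, kurt, skew per component) can be sketched with plain moments; the actual pipeline presumably used pandas/scipy equivalents after masking the non-zero area with NiftiMasker:

```python
from statistics import mean

def voxel_stats(values):
    """Mean / max / skew / excess kurtosis for one component's voxel values.

    Plain-moment sketch on a flat list of voxel intensities; function and
    variable names here are illustrative, not from the original code.
    """
    m = mean(values)
    std = mean([(v - m) ** 2 for v in values]) ** 0.5
    z = [(v - m) / std for v in values]
    return {
        "mean": m,
        "max": max(values),
        "skew": mean([u ** 3 for u in z]),
        "kurt": mean([u ** 4 for u in z]) - 3.0,  # excess kurtosis
    }
```

Computed once per ICA component, these four numbers compress each 3D map into a handful of tabular features for LGBM/XGBoost.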
Prostate cANcer graDe Assessment (PANDA) Challenge
7th Place Solution(simple but messy)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you very much to the organizers.</p> <p>This challenge is very similar to APTOS-2019, which I worked on for months as a course assignment. So I simply used the pipeline of my course assignment (based on <a href="https://www.kaggle.com/c/aptos2019-blindness-detection/discussion/107947">Lex Toumbourou's solution</a>, thanks a lot) with some revised details. Also thanks a lot to <a href="https://www.kaggle.com/iafoss/panda-16x128x128-tiles">Iafoss</a> and <a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">Qishen Ha</a> for their useful notebooks.</p> <p>Our model is simple but messy.</p> <h2>Tiles</h2> <p>We tried 256x256x32, 192x192x64 and 154x154x100, but they didn't show much difference on the Public LB. </p> <p>We also proposed a new tile approach. It can contain more pathological parts without destroying the shape features. 
Large-size tiles ensure that the shape features will not be lost, while small-size tiles ensure that the blank area is not that large.</p> <pre><code>def get_tiles_combine(img, mode=0):
    # 1536x1536 white canvas for the combined tiles (6 rows of 256 px)
    images = np.ones((1536, 1536, 3)) * 255
    h, w, c = img.shape
    result_all = []
    pad_h = (256 - h % 256) % 256 + ((256 * mode) // 2)
    pad_w = (256 - w % 256) % 256 + ((256 * mode) // 2)
    # pad with white so the slide splits evenly into tiles
    img2 = np.pad(img, [[pad_h // 2, pad_h - pad_h // 2],
                        [pad_w // 2, pad_w - pad_w // 2], [0, 0]],
                  'constant', constant_values=255)
    windows = [256, 256, 256, 256, 192, 192, 128]
    x_start = 0
    for i in range(len(windows)):
        result = []
        window_size = windows[i]
        # score every candidate tile; a lower pixel sum means more tissue
        for x in range((h + pad_h) // window_size):
            for y in range((w + pad_w) // window_size):
                tile = img2[x * window_size:(x + 1) * window_size,
                            y * window_size:(y + 1) * window_size]
                result.append([x, y, tile.sum()])
        result.sort(key=lambda ele: ele[2])
        result = result[:1536 // window_size]
        for y in range(min(1536 // window_size, len(result))):
            xx = result[y][0]
            yy = result[y][1]
            result_all.append([xx, yy])
            images[x_start:x_start + window_size,
                   y * window_size:(y + 1) * window_size] = \
                img2[xx * window_size:(xx + 1) * window_size,
                     yy * window_size:(yy + 1) * window_size].copy()
            # blank out the used region so it is not selected again
            img2[xx * window_size:(xx + 1) * window_size,
                 yy * window_size:(yy + 1) * window_size] = 255
        x_start = x_start + windows[i]
    return images
</code></pre> <h2>Models</h2> <p>We simply used Efficientnet-B0. 
We tried B1-B3, DenseNet and ResNeXt, but they didn't show much difference on the Public LB and need more GPU memory.</p> <p>Following APTOS-2019, we used GeM pooling:</p> <pre><code>def gem(x, p=3, eps=1e-6):
    return F.avg_pool2d(x.clamp(min=eps).pow(p), (x.size(-2), x.size(-1))).pow(1. / p)

class GeM(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super(GeM, self).__init__()
        self.p = Parameter(torch.ones(1) * p)
        self.eps = eps

    def forward(self, x):
        return gem(x, p=self.p, eps=self.eps)

    def __repr__(self):
        return self.__class__.__name__ + '(' + 'p=' + '{:.4f}'.format(self.p.data.tolist()[0]) + ', ' + 'eps=' + str(self.eps) + ')'
</code></pre> <p>We also designed a more complex model based on b0. It has a lower Public LB (average about 0.85), so we didn't add it to our final models. But it got the highest single-model Private LB (max 0.926, average about 0.920 - what a pity!). We will do some more experiments on this model.</p> <h2>Loss and label</h2> <p>BCE Loss and label smoothing: 3-&gt;[0.95,0.95,0.95,0.95,0.05,0.05]</p> <p>We also tried regression with mse loss and smooth L1 loss, but they didn't show any improvement.</p> <h2>Ensemble</h2> <p>8 models with 6 * TTA:</p> <blockquote> <p>1: fold_1 b0 256-tile Public LB:0.879, Private LB:0.904.</p> <p>2: fold_3 b0 256-tile Public LB:0.879, Private LB:0.899.</p> <p>3: fold_4 b0 256-tile Public LB:0.886, Private LB:0.883.</p> <p>4: fold_4 b0 combine-tile Public LB:0.879, Private LB:0.910.</p> <p>5: fold_4 b0 256-tile Public LB:0.880, Private LB:0.909.</p> <p>6: fold_4 b0 256-tile Public LB:0.891, Private LB:0.920.</p> <p>7: fold_4 b0 combine-tile Public LB:0.881 Private LB:0.917.</p> <p>8: fold_0 b0 256-tile Public LB:0.872, Private LB:0.906.</p> </blockquote> <p>The final model has Public LB:0.894, Private LB:0.932.</p>
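The label-smoothing mapping 3 -&gt; [0.95, 0.95, 0.95, 0.95, 0.05, 0.05] is consistent with an ordinal binary encoding of the ISUP grade followed by smoothing; a sketch of one plausible implementation (function name and the exact formula are assumptions, not from the original code):

```python
def isup_to_soft_target(grade, n_bins=6, eps=0.05):
    """Ordinal binary encoding of the ISUP grade plus label smoothing.

    One plausible reading of the write-up's mapping: grade g turns on the
    first g + 1 of 6 bins, then every target is pulled toward 0.5 by eps,
    so 1.0 becomes 0.95 and 0.0 becomes 0.05.
    """
    hard = [1.0] * (grade + 1) + [0.0] * (n_bins - grade - 1)
    return [t * (1.0 - 2.0 * eps) + eps for t in hard]
```

With BCE on these targets, summing the sigmoid outputs at inference recovers an (offset) ordinal grade estimate.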
TReNDS Neuroimaging
30th place solution - and some thoughts
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>30th place solution</h1> <p>First of all thanks to the organizers for putting together this challenging and interesting competition!</p> <p>I've seen many good, well-written and well-developed solutions for the challenge ... mine is neither. But I just wanted to chip in quite late, as I might have used some feature engineering that hasn't been used as widely. Unfortunately I don't have the time right now to dig into it in detail, but will just give you a rough description of what I did.</p> <h2>Reflections</h2> <p>A personal note on how I perceived the challenge. I was really excited to see this challenge; it should lie in my area of expertise, so why not give it a try. So I did: wrote a (by my standards) successful EDA notebook, got into the data and somehow got to 13th place. But, as people noted, there was not much going on in terms of discussion etc., so I might have gotten a bit complacent with my position and did not develop much further. Also I might have gotten a bit annoyed, because I couldn't get my errors on <code>domain1_var1</code> and the <code>domain2</code> vars down as much as I liked.</p> <p>Then life happened: vacation, deadlines, an online hackathon. Not much space for the challenge, and suddenly there are 10 days left, I have a deadline coming up, and find myself around 50th place on the leaderboard. So some panic sets in and also some craziness, so I rewrote the analysis and preprocessing from scratch and ran all the different models. 
Stacked them up and, well, suddenly it's place 34, and 30 on the private leaderboard - my best competition so far.</p> <h2>Preprocessing and Features</h2> <h3>Targets</h3> <ol> <li>I decided to perform stratified cross-validation on my data, so I used <code>KMeans(n_clusters=15)</code> on the target variables, setting the missing values to 0, so that this would be taken into account for the stratification approach.</li> <li>Then I imputed the missing data using <code>sklearn.impute.IterativeImputer</code> to get values there. This seemed to be a good balance for me between discarding the data or just filling in a constant. However, I didn't really look into the imputations afterwards...</li> </ol> <h3>Preprocessing and Feature Engineering</h3> <ul> <li><code>loading.csv</code> - Discarded <code>IC_20</code> - Seemed to be very different between train and test (discussions and kernels)</li> <li><code>fnc.csv</code> - Reordered the data for use with <code>nilearn.connectome.vec_to_sym_matrix</code></li> <li>Extracting data from the 4D feature maps. I have a kernel out there showing how this can be done with different <code>Masker</code> objects from <code>nilearn.input_data</code>. I extracted the sets <code>schaeffer</code> (400 brain regions), <code>msdl</code>, and <code>basc</code> (197 regions) here.</li> <li>Graph-based features: Several publications also focus on using graph-theory-based descriptions of functional connectivity as features for machine learning classifiers. So I used the "Brain Connectivity Toolbox", <code>bct.py</code>, to derive features. These were just a ton of global and local descriptions, also at different thresholdings of the connectivity matrices (derived from <code>fnc</code>). Around 1700 features in total (with sometimes really bad numerical issues), which I somehow hacked away in the preprocessing before my stacking data. 
This will be named <code>fnc_graph</code>.</li> <li>Combined data: I also created the set <code>loadfnc</code>, combining the <code>fnc</code> and <code>loading</code> data, and <code>loadmsdl</code>, combining <code>msdl</code> and <code>loading</code>. Finally, I also used a <code>loadnoise</code> set, where I added a random intercept and Gaussian noise to the <code>loading</code> data (differently for each subset). Whether that helped at all, I unfortunately couldn't test.</li> </ul> <h2>Stacking, stacking, stacking</h2> <p>I stacked tons of models (32 for each feature) using different regression approaches, and sometimes different preprocessing. <code>ConnectivityMeasure</code> (shortened to <code>CM</code>) is a class from <code>nilearn.connectome</code> that can be used to transform an <code>n x t</code> matrix into an <code>n x n</code> connectivity matrix, using different kinds of connectivity. The nice thing is, it also fits into sklearn pipelines, as a vectorized version of the matrix is possible.</p> <p>| Data Set | Preprocessing SVR, LGBM | Preprocessing Regression |
| -------- | ----------------------- | ------------------------ |
| <code>basc</code> | CM(tangent), PCA(whiten=True), RobustScaler() | CM(correlation), PCA(whiten=True) |
| <code>msdl</code> | CM(tangent), PCA(whiten=True), RobustScaler() | CM(correlation), PCA(whiten=True) |
| <code>schaeffer</code> | CM(tangent), PCA(whiten=True), RobustScaler() | CM(correlation), PCA(whiten=True) |
| <code>fnc</code> | PCA(whiten=True) | abs(fnc) &lt; 0.15 = 0, PCA(whiten=True) |
| <code>fnc_pca</code> | None | abs(fnc) &lt; 0.15 = 0, None |
| <code>loading</code> | RobustScaler() | RobustScaler() |
| <code>fnc_graph</code> | Numerical fixes, PCA(whiten=True), RobustScaler() | Numerical fixes, PCA(whiten=True), RobustScaler() |
| <code>loadmsdl</code> | PCA(whiten=True), RobustScaler() | PCA(whiten=True), RobustScaler() |
| <code>loadfnc</code> | RobustScaler() | RobustScaler() |
| <code>loadnoise</code> | 
RobustScaler() | RobustScaler() |</p> <h3>SVR</h3> <p>I used sklearn's <code>SVR</code> with both a linear and an rbf kernel on the datasets <code>basc</code>, <code>msdl</code>, <code>schaeffer</code>, <code>fnc_pca</code>, <code>loading</code>, and <code>fnc_graph</code>. To figure out the best parameters I applied <code>skopt.BayesianSearchCV</code> with 35 iterations (objective mean absolute error). Parameters optimized were <code>C</code>, <code>epsilon</code> and <code>n_components</code> for the PCAs.</p> <p>So here are 2 x 6 = 12 models.</p> <h3>Regression</h3> <p>In this competition I somehow came to like the <code>LassoLars</code> regression of sklearn. So that's what I am using here. The feature selection of it seemed to actually help. Running models on <code>basc</code>, <code>msdl</code>, <code>schaeffer</code>, <code>fnc</code>, <code>loading</code>, <code>fnc_graph</code>, <code>loadmsdl</code>, <code>loadfnc</code>, and <code>loadnoise</code>. This time optimizing mean squared error using <code>BayesianSearchCV</code> for <code>alpha</code> and <code>n_components</code>.</p> <p>So 9 models.</p> <h3>LightGBM</h3> <p>Same datasets as for <code>Regression</code>. 
Optimizing tree parameters and PCA, best model defined by mean absolute error.</p> <p>Another 9 models.</p> <h3>2D CNN</h3> <p>I also tried to get some more spatial information into the model, so I set up a small 2D CNN having:</p> <ul> <li>Conv2D Layer, with ReLU activation</li> <li>Maxpooling (2, 2)</li> <li>Flatten</li> <li>Dropout()</li> <li>Dense(1)</li> </ul> <p>Where the number of <code>filters</code>, <code>kernel_size</code>, <code>Dropout</code>, <code>learning_rate</code>, and the <code>loss</code> (mae, mse) were found through <code>BayesianSearchCV</code>.</p> <h2>Training and Prediction</h2> <p>I used the same approach for all models and the final stacking model:</p> <ol> <li>Optimize hyperparameters on 5 folds.</li> <li>Retrain the model on the CV data</li> <li>Evaluate on the hold-out set</li> <li>Retrain on all data</li> <li>Predict on the test set.</li> </ol> <p>The final stacking model was again a <code>LassoLars</code> regression, on the outputs of the 31 models. I actually preprocessed the predictions by slapping a <code>RobustScaler</code> in just for good measure.</p> <h2>Thoughts</h2> <p>I learned quite a lot from the competition, but have to say that I am not really satisfied with what I did (my best competition so far...), and see a lot of room for improvement.</p> <h3>Work smarter</h3> <p>I think the most annoying part for me is that I just stacked tons of models, in the end not even thinking much about <em>why</em> I was doing it. I just wanted that 0.001 leaderboard boost, to get a little edge. But if I had invested my time more into careful tuning, preprocessing, and careful model selection, I think I would have gotten more out of the last weekend of the competition than I did here. 
In the end, I was mostly waiting for models to finish running and to start the next set of long calculations.</p> <h3>Evaluate, evaluate, evaluate</h3> <p>So far, my intuition on evaluating locally and avoiding overfitting has gotten me quite far (I got my first silver medal basically because of an incredibly heavy shake-up of the leaderboard, pushing me a couple of hundred places or so to the top). Here I think I was actually quite lucky - see the forum posts where people discuss the lack of a shake-up.</p> <h3>Get a team</h3> <p>Next time I am in this situation, I think I will team up. Also, I apologize to the people who contacted me and whom I didn't get back to. It was mostly bad timing, but I think there is so much to gain in terms of insights when you can discuss your solutions :)</p> <p>And of course much more. </p>
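Given cluster labels from the targets (the n_clusters=15 step in the Targets section), stratified folds can be built by spreading each cluster evenly over the folds; a minimal round-robin sketch (the actual pipeline presumably used sklearn's StratifiedKFold on the cluster labels instead):

```python
from collections import defaultdict

def stratified_folds(cluster_labels, n_folds=5):
    """Round-robin fold assignment within each target cluster.

    Each cluster's members are dealt out 0, 1, 2, ... across the folds,
    so every fold sees a similar mix of target clusters.
    """
    folds = [0] * len(cluster_labels)
    seen = defaultdict(int)
    for i, c in enumerate(cluster_labels):
        folds[i] = seen[c] % n_folds
        seen[c] += 1
    return folds
```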
TReNDS Neuroimaging
No Conv Net (Private: 4th, Public: 2nd)
<h2>Some thoughts</h2> <p>We will try to make this discussion post as comprehensive as possible and answer every question in the comment section. We had to shorten some parts, as this would otherwise get way too long. </p> <p>It is very interesting that 3d CNNs seem to perform better on Private LB compared to our approach. We thought they would overfit more. It seems like the Private LB distribution is closer to the Train distribution than to the Public LB distribution. </p> <h2>Credits</h2> <p>First of all thank you to the organizers for this great competition and huge shoutouts to <a href="/churkinnikita">@churkinnikita</a> and <a href="/simakov">@simakov</a> for their well-deserved win. It was a very memorable moment for us when they overtook our Public 1st place. </p> <p>Thanks to <a href="/aerdem4">@aerdem4</a> for the amazing kernels and Kudos to everyone who participated in this competition.</p> <h2>About us</h2> <p>Marius and I are lab buddies. We wrote our medical doctor thesis together in the neurophysiology department. That's why we have worked with neuroimaging data before (Fortunately this is not Zillow)</p> <h2>Approach</h2> <p>We wanted to create a comprehensive method for predicting from fmri images, to be able to directly evaluate which parts contributed the most to the model's prediction, and to not finetune for Site1 and Site2 but build 1 model for both of them.</p> <p>In the brain it's all about connectivity. Normally fmri images are 4 dimensional with 1 axis being time. But here these were ICA components - different brain networks.
Marius had the idea to consider this axis like a time axis and use the same tools that neuroimaging uses to derive features from time, but applied to the ICA components. The keyword is Parcellations. Parcellations are atlases of the brain; they subclassify the brain into smaller networks using functional and/or anatomical features. It is then possible to calculate the mean of the fmri data points for every component in the atlas. Thereby we reduce the shape of 53*63*52*53 to Number of Parcellation components (a few hundred depending on the atlas) x Number of ICA components. In the following we will refer to this array as PICA. This reduces inter-subject and Site variation.</p> <p>We used several atlases to diversify our model to better generalise (we used up to 10 different ones):</p> <h3>Basc:</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F6c4fcf78897c8f7d99bbd8780ff0e85f%2Fbasc.PNG?generation=1593499876369065&amp;alt=media" alt=""></p> <h3>Schaefer:</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F877ec3c26039269f93033eca98b0139c%2Fschfer300.PNG?generation=1593500180123531&amp;alt=media" alt=""></p> <p>We even calculated our own Parcellations using unsupervised learning approaches such as KMeans and Ward. We will link some papers here, as describing this would be beyond the scope of this post: </p> <p>Bertrand Thirion, Gael Varoquaux, Elvis Dohmatob, Jean-Baptiste Poline. Which fMRI clustering gives good brain parcellations? Frontiers in Neuroscience, 2014.</p> <p>Vincent Michel, Alexandre Gramfort, Gael Varoquaux, Evelyn Eger, Christine Keribin, Bertrand Thirion. A supervised clustering approach for fMRI-based inference of brain states. Pattern Recognition, Elsevier, 2011.</p> <h3>Kmeans</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F98a2bbf5e44f65de6a5a2de89cbbea0d%2Fjbmkkmeans.PNG?generation=1593499927440943&amp;alt=media" alt=""></p> <h3>Ward</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2Fff709e43fa29f7a401ffb83df3c6936d%2Fjbmkward.PNG?generation=1593499951002666&amp;alt=media" alt=""></p> <p>We have seen this idea of atlases emerge in the discussion. But participants did not seem to know what more to do with them. We had several approaches for this; we will only list the most important ones here, as the rest would be beyond the scope of this post: </p> <h3>Calculate connectivity of different components</h3> <p>Normally connectivity is derived by looking at the interaction of 2 components derived from parcellation over time. Here we used the ICA components as our “time” feature. We looked at the interaction of 2 components over the ICA components using matrices. </p> <p>We will visualise what this looks like using the basic Yeo parcellation with only 17 components. We used matrices of up to 444 components, as our RAM could not handle more.</p> <h3>Yeo</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2Ff388f5f2ec9fed7d49ee10a484be1964%2Fyeoparcellation.PNG?generation=1593500079292710&amp;alt=media" alt=""></p> <h3>1. Correlation Matrix</h3> <p>The x and y axis are the different yeo components, and the matrix is the correlation of these 2 components over the 53 ICA Networks.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F6400fd5cc5ea394eccb2ee5ad1a67cec%2Fcorrelation.JPG?generation=1593500547921516&amp;alt=media" alt=""></p> <h3>2.
Covariance Matrix</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2F86c806d53a82c1773842b3fd7501c874%2Fcovariance.JPG?generation=1593500586579723&amp;alt=media" alt=""></p> <p>From now on only links as this gets too long :D</p> <h3>3. Partial Correlation Matrix</h3> <p><a href="https://en.wikipedia.org/wiki/Partial_correlation">https://en.wikipedia.org/wiki/Partial_correlation</a></p> <h3>4. Tangent Matrix</h3> <p>G. Varoquaux et al. “Detection of brain functional-connectivity difference in post-stroke patients using group-level covariance modeling, MICCAI 2010.</p> <h3>5. Precision Matrix</h3> <p><a href="https://en.wikipedia.org/wiki/Precision_(statistics)">https://en.wikipedia.org/wiki/Precision_(statistics)</a></p> <p>… you get the idea</p> <h3>What to do with these Matrices</h3> <p>We then used these matrices as input for our model (SVR, Ridge, Lasso, Elastic Net) and stacked their predictions using SVR and Ridge with a simple blend in the end. </p> <p>We also fed the raw parcellation data to some networks.</p> <p>And we used the public notebooks for IC and FNC features</p> <h2>Results</h2> <p>with Lofo: CV: age: 0.13378; d1v1: 0.14976; d1v2: 0.15016; d2v1: 0.1795; d2v2 0.175050 overall: 0.154682; Private LB: 0.15714; Public LB: 0.15615</p> <p>without Lofo: CV: age: 0.13477; d1v1: 0.15027; d1v2: 0.15077; d2v1: 0.18045; d2v2 0.17545 overall: 0.15539; Private LB: 0.15689; Public LB: 0.15626</p> <h2>Some more Tricks</h2> <ol> <li>Leave one Network Out</li> <li>Only use a single Network</li> <li>Calculate our Parcellations using Site 2 </li> <li>...</li> </ol> <h2>What we think contributed the most to not overfit to Site1 / Site2:</h2> <ol> <li>using Brain Atlases derived from Site1 + Site2</li> <li>using Brain Atlases derived from completely different experiments (e.g. 
Basc)</li> <li>The sheer amount of features used in our models</li> <li>The amount of base models (around 70)</li> <li>The amount of parcellations used</li> </ol> <h2>What did not work</h2> <ol> <li>Calculating our own Parcellations solely on Site 1 -&gt; overfit to Site 1. </li> <li>We tried a lot with 3d CNNs but were not able to improve CV using them. - We were especially interested in remapping the parcellated brain back to 3d. But this did not give any improvement. We are excited to see how others dealt with this. </li> <li>Autoencoders in many combinations</li> <li>....</li> </ol> <h2>Discussion:</h2> <p>We have not seen this approach anywhere in the literature. We can now calculate exactly which components contributed to the model's prediction: which brain regions are important and which interactions between different brain regions are relevant for ageing and the other assessment values. We hope that the organizers will allow us to further develop this idea and discuss the results with our medical supervisors. </p> <h1>Cheers!</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2685252%2Fba6aa2e139b3492443b0e175f30fc755%2Fpotato.JPG?generation=1593500378556395&amp;alt=media" alt=""></p> <p>PS. One of the most memorable moments of this competition for us will be the Japanese dominance on the LB</p>
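<p>The PICA idea above (treat the 53 ICA components like a "time" axis and correlate parcel signals across them) can be sketched with plain numpy. The parcel count and the random data here are stand-ins; in practice a library such as nilearn would supply the parcellations and the parcel means:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_parcels, n_ica = 17, 53  # e.g. a Yeo-17-style parcellation and the 53 ICA maps

# Hypothetical PICA array: mean fMRI value of each parcel in each ICA component,
# i.e. the 53*63*52*53 volume reduced to (n_parcels, n_ica).
pica = rng.normal(size=(n_parcels, n_ica))

# Correlate every pair of parcels across the 53 ICA components
# to get a connectivity matrix (rows of the input are the variables).
corr = np.corrcoef(pica)   # shape (n_parcels, n_parcels)
cov = np.cov(pica)         # covariance variant of the same idea

# Flatten the upper triangle into a feature vector for SVR / Ridge / Lasso.
iu = np.triu_indices(n_parcels, k=1)
features = corr[iu]        # 17 * 16 / 2 = 136 features
```

<p>The partial-correlation, tangent and precision variants listed above follow the same recipe with a different matrix estimator.</p>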
Prostate cANcer graDe Assessment (PANDA) Challenge
2nd Place Solution [Save the Prostate]
<p>First of all, thank you very much to the organizers. Second, I would like to thank my team. We had such a positive, encouraging working environment. Most of the ideas you will read below were generated collaboratively and are shared by all members. </p> <h1>Simple Resnet34 (DrHB)</h1> <h1>Image Preprocessing</h1> <p>I used medium resolution; the only preprocessing I did was to remove the white background and store the medium resolution on an SSD drive: </p> <p><code>
# function taken from R Guo
def crop_white(image, value: int = 255):
    assert image.shape[2] == 3
    assert image.dtype == np.uint8
    ys, = (image.min((1, 2)) &lt; value).nonzero()
    xs, = (image.min(0).min(1) &lt; value).nonzero()
    if len(xs) == 0 or len(ys) == 0:
        return image
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
</code></p> <h1>Cleaning data</h1> <p>Like in the APTOS competition, it was essential to clean images from pen marks, etc. I used the excellent work from this post: <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/151323">https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/151323</a> This also reduced the gap between CV and LB</p> <h1>Image Augmentations</h1> <p>Augmentation occurred at two levels (slide and tile): </p> <h3>1) Full slide</h3> <p>After the biopsy slide is opened, we do random padding and apply one of the following transformations (similar to R Guo).
</p> <p><code>
def get_transforms_train():
    transforms = A.Compose([
        A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.05, rotate_limit=10,
                           border_mode=cv2.BORDER_CONSTANT, p=0.5, value=(255, 255, 255)),
        A.OneOf([
            A.Flip(p=0.5),
            A.RandomRotate90(p=0.5),
        ], p=0.3)
    ])
    return transforms
</code></p> <h3>2) Tile</h3> <p>For each tile I used standard fastai <code>GPU</code> augmentations: <code>rotate=(-10, 10)</code>, <code>flip vertically (p=0.5)</code>. For the padding I used <code>reflection</code>; it gave a slight boost on CV </p> <h1>Model</h1> <p>I decided to use a very simple resnet34 model, but throughout the competition I ended up making a few modifications </p> <h3>1) Making square features:</h3> <p>The main idea builds on <a href="/iafoss">@iafoss</a>. After the resnet encoder we reshape the features to look like a square in the following way: <code>x = x.view(x.shape[0], x.shape[1], x.shape[2]//int(np.sqrt(N)), -1)</code>. Here <code>N</code> represents the number of tiles. After this we pass all the features to a <code>SqueezeExcite</code> block</p> <h3>2) SqueezeExcite block</h3> <p>After reshaping the features, we added 1 SE block to enable the network to learn features for individual slides based on tiles. </p> <p>experiment done by <a href="/cateek">@cateek</a></p> <p><code>
# code adopted from
# https://github.com/rwightman/pytorch-image-models/tree/master/timm/models
def make_divisible(v, divisor=8, min_value=None):
    min_value = min_value or divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_v &lt; 0.9 * v:
        new_v += divisor
    return new_v

def sigmoid(x, inplace: bool = False):
    return x.sigmoid_() if inplace else x.sigmoid()

class SqueezeExcite(nn.Module):
    def __init__(self, in_chs, se_ratio=0.25, reduced_base_chs=None,
                 act_layer=nn.ReLU, gate_fn=sigmoid, divisor=1, **_):
        super(SqueezeExcite, self).__init__()
        self.gate_fn = gate_fn
        reduced_chs = make_divisible((reduced_base_chs or in_chs) * se_ratio, divisor)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv_reduce = nn.Conv2d(in_chs, reduced_chs, 1, bias=True)
        self.act1 = act_layer(inplace=True)
        self.conv_expand = nn.Conv2d(reduced_chs, in_chs, 1, bias=True)

    def forward(self, x):
        x_se = self.avg_pool(x)
        x_se = self.conv_reduce(x_se)
        x_se = self.act1(x_se)
        x_se = self.conv_expand(x_se)
        x = x * self.gate_fn(x_se)
        return x
</code></p> <h3>3) Pooling Layer</h3> <p>Once the features passed through the SqueezeExcite layer, I did normal pooling. Our experiments showed that the batch normalization layer was messing with the last layer's features, so we removed it and saw a slight jump on local CV. </p> <p><code>
self.pool = nn.Sequential(AdaptiveConcatPool2d(), Flatten(),
                          nn.Linear(2*nc, 512), nn.ReLU(inplace=True),
                          nn.Dropout(0.4), nn.Linear(512, 7))
</code></p> <h3>4) Final Head</h3> <p>I used two heads: one for classification, the second for regression. I noticed that training with two losses makes training much smoother (with the sigmoid trick below) and yields a higher local CV (0.88 -&gt; 0.90). In the final prediction, I use only the output of the regression head. </p> <p>One small modification that I did before calculating the loss is that the regression head used a sigmoid to scale outputs between (-1, 6). This enables much smoother training without bumps and faster convergence.</p> <p><code>
# idea taken from fastai
def sigmoid_range(x, low, high):
    return torch.sigmoid(x) * (high - low) + low
</code></p> <h1>Training</h1> <p>I trained in two phases. In the first phase the model was trained with 49 tiles and later finetuned with 81 tiles. Both phases used standard one-cycle.</p> <h1>Final Model</h1> <p>I trained 5 folds, which resulted in a CV of 0.911 and PB of 0.922.</p> <p>Our best ensemble was a simple average of 4 models. </p> <p><code> @drhb resnet34 5 FOLD (CV -0.911) + <a href="/rguo97">@rguo97</a> 5 FOLD (two stage attention model CV 0.92 ) + <a href="/xiejialun">@xiejialun</a> FOLD (EFNET) (CV 0.915-0.917) + <a href="/cateek">@cateek</a> Se 1 FOLD (CV -0.91) Final Standing </code></p> <p><a href="/xiejialun">@xiejialun</a> <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169303">https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169303</a> <a href="/rguo97">@rguo97</a> <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169108#940504">https://www.kaggle.com/c/prostate-cancer-grade-assessment/discussion/169108#940504</a></p> <p><code>LB: 0.914 PB: 0.937</code></p>
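<p>The dual-head idea described above, with the regression output squashed into (-1, 6) by a <code>sigmoid_range</code>, can be sketched as follows. The feature size and the plain linear heads are illustrative stand-ins for the real pooled encoder features:</p>

```python
import torch
import torch.nn as nn

def sigmoid_range(x, low, high):
    # fastai-style trick: squash raw outputs into (low, high)
    return torch.sigmoid(x) * (high - low) + low

class TwoHead(nn.Module):
    def __init__(self, nc=512, n_classes=6):
        super().__init__()
        self.cls_head = nn.Linear(nc, n_classes)  # classification head
        self.reg_head = nn.Linear(nc, 1)          # regression head

    def forward(self, feats):
        cls_logits = self.cls_head(feats)
        # Scale the regression output into (-1, 6) so it covers the
        # ISUP grades 0..5 with some slack on both ends.
        reg = sigmoid_range(self.reg_head(feats), -1.0, 6.0)
        return cls_logits, reg.squeeze(-1)

model = TwoHead()
feats = torch.randn(4, 512)  # stand-in for pooled tile features
cls_logits, reg = model(feats)
```

<p>Training would combine a classification loss on <code>cls_logits</code> with a regression loss on <code>reg</code>; at inference only the regression head is used, as described above.</p>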
Tweet Sentiment Extraction
4th place solution overview
<h1>Main parts of the solution:</h1> <ul> <li>Pre-processing</li> <li>Predicting 3 (start, end) candidates and assigning scores to them</li> <li>Scoring them with an external model and adding one more score to each candidate</li> <li>Blending scores for candidates from different models</li> <li>Selecting the best candidate</li> <li>Post-processing</li> </ul> <h1>Pre-processing and post-processing</h1> <p>The “magic” part. I came up with an algorithm which is similar to those already described in great detail in other solutions, so I will explain it only briefly.</p> <p>Pre-processing is the procedure that reduces the amount of ‘noise’ in the initial data, and the goal of post-processing is to bring it back. Both are based on counting extra spaces in the part of a tweet before the selected text. An extra space is any space which is present in a tweet, but not in ‘ ’.join(tweet.split()).</p> <p>So, in pre-processing I shift the indices of the selected text to the right by the amount of extra spaces, and in post-processing to the left. The nuance here is that the length of the selected text should be calculated with the single spaces around it. So in post-processing the target end_idx is usually start_idx + len(selected_text)+2, but if selected_text ends with a dot, comma, question mark etc. only the space before the text should be taken into account and end_idx is start_idx + len(selected_text)+1.</p> <h1>Predicting 3 (start, end) candidates</h1> <h3><em>Architecture</em></h3> <p>The model in this case is a transformer. I used BERT, RoBERTa and ELECTRA.
The input for the model is the following: BERT or ELECTRA: <code>[CLS] [POSITIVE] tweet [SEP]</code> RoBERTa: <code>&lt;s&gt; [POSITIVE] tweet &lt;/s&gt;</code> ‘[POSITIVE]’ can also be ‘[NEUTRAL]’ and ‘[NEGATIVE]’; these are added sentiment tokens. </p> <p>Embeddings for them are initialized with the embeddings of the corresponding words ‘positive’, ‘neutral’ and ‘negative’. At an early stage I also tried to put [SEP] between the sentiment token and the tweet, but it worked slightly worse. Did not experiment with this setup later.</p> <p>As targets, each model gets the indices of the start and end tokens of the selected text. The model has four heads: 1) QA dense head (just a linear layer without any dropout) for predicting start and end tokens. Takes the token representation as the concatenation of the corresponding hidden states from the last two layers of the transformer. Tried here to take a weighted sum of hidden states from all layers with learnable weights, but it performed a bit worse.</p> <p>Loss is computed with KL divergence to add label smoothing: the true target token is given 0.9 probability and its two neighbors (left and right) both take 0.05. If the true target token is at the beginning of the sentence and we are calculating loss for the start logits, then the true token still gets 0.9, but the two following tokens take 0.06 and 0.04. An analogous scheme is used if the true end token is the last: its proba is 0.9, but the two previous tokens have 0.06 and 0.04.</p> <p>2) Linear layer to predict a binary target for each token: whether it should be in the selected text or not. Takes hidden states from the last layer. Experimented with other layers a lot, but it did not improve the performance. The loss is binary cross-entropy.</p> <p>3) Linear layer to predict a sentiment of each token. Also uses only the last layer of the transformer. Predicts 3 classes – neutral, positive and negative. Tokens from the selected text are labeled as having the same sentiment as the tweet, while all other tokens are assigned the neutral class.
The loss here is the usual cross-entropy for each token separately.</p> <p>4) Two linear layers with ReLU in between to predict the sentiment of the whole tweet. Concatenates mean and max pooling over all tokens in a tweet, skipping cls and sentiment tokens. Then concatenates such representations from the last two layers of the transformer and passes them through multi-sample dropout. Also utilizes the momentum exchange (arxiv 2002.11102) before calculating the loss with cross-entropy.</p> <h3><em>Training phase</em></h3> <p>During training, the total loss is calculated as the weighted sum of losses from all four heads. Training is performed on 8 folds with the AdamW optimizer, using SWA over a get_cosine_with_hard_restarts_schedule_with_warmup scheduler for 10 epochs. SWA snapshots were taken at the end of each epoch, even though these steps did not coincide with the points of minimal learning rate for the combination of parameters I used (num_warmup_steps, num_cycles). And for some reason (maybe it increases diversity between snapshots?) it worked better than taking snapshots at the end of each learning rate cycle. </p> <p>Tried to implement self-distillation from 2002.10345, which looks very interesting and promises to increase the stability of training, but it only made the performance quite significantly worse. Maybe I did something wrong in the implementation.</p> <h3><em>Inference phase</em></h3> <p>1) At inference time, the first head is used to create a set of (start, end) candidates. First of all, each pair of (start, end) indices where end &gt;= start is assigned a logit as the sum of the individual start and end logits. All cases where end &lt; start are given -999 logits. Then softmax is applied across all pairs to obtain probabilities for candidates, and the top 3 of them are selected for further processing. Tried other numbers of candidates, but 3 worked best. Let’s call the probability of a candidate from this head ‘<em>qa_prob</em>’.</p> <p>2) The output of the second head is a set of logits: one for each token. To obtain a score for each of the selected (start, end) candidates I took the sigmoid of the tokens and calculated the average log of the resultant token probabilities across the candidate's tokens. Let’s call the output number ‘<em>score_per_token</em>’.</p> <p>3) The output of the third head is used in a very similar way to the previous one. The only difference is that instead of a sigmoid, a softmax is taken over each token's logits (there are 3 of them here – by the number of sentiments) and the proba corresponding to the sentiment of the tweet is selected. Then the same averaging operation as for the previous head is applied to obtain a score for candidates. Let’s call it ‘<em>sentiment_per_token</em>’. So at the end of this stage at inference time we have 3 (start, end) candidates with 3 scores assigned to each of them.</p> <h1>External scorer</h1> <h3><em>Architecture</em></h3> <p>Used ELECTRA with the following input: <code>[CLS] ([POSITIVE]|[NEUTRAL]|[NEGATIVE]) tweet [SEP] selected_text_candidate [SEP]</code></p> <p>A single head (linear-&gt;tanh-&gt;dropout-&gt;linear) on top of the transformer is fed with the concatenation of the cls token hidden states from the last two layers to predict whether the current candidate for selected text is correct or not. Loss is computed with cross-entropy after application of momentum exchange.</p> <p>Tried to add a head for predicting a jaccard for a candidate along with or instead of the classification head, but it made results worse.</p> <h3><em>Training phase</em></h3> <p>The dataset for training is built with all tweets, each having the three candidates from the previous model; the tweet with the true selected_text is also added if it is not present among the candidates. Trained it for 3 epochs with AdamW and SWA.</p> <h3><em>Inference phase</em></h3> <p>The 3 candidates for each tweet are scored with this model. It outputs two logits which are softmaxed, and then the log of the class-1 proba is taken as the score for the candidate. Will call it ‘<em>external_score</em>’ in the following. So after this step we have 3 candidates and each of them has 4 scores.</p> <h1>Ensembling different transformers</h1> <p>BERT, RoBERTa and ELECTRA are actually ensembles of 8 (by the number of folds) models, for which the usual logit averaging is implemented. For BERT I used 4 bert-base-cased and 4 bert-large-cased models. For RoBERTa – 5 roberta-base-squad2 and 3 roberta-large. For ELECTRA – 6 electra-base-discriminator and 2 electra-large-discriminator. The external scorer is the combination of 4 electra-base-discriminator.</p> <p>Each of the three models – BERT, RoBERTa and ELECTRA – outputs 3 candidates for a given tweet. If there is an intersection between these sets of candidates, then only this intersection is considered. If the intersection is empty, then the union of the BERT and ELECTRA candidates worked best.</p> <p>The final score for each candidate is the weighted sum of <em>qa_prob</em>, <em>score_per_token</em>, <em>sentiment_per_token</em> and <em>external_score</em> inside the model type (BERT, RoBERTa or ELECTRA), and then the weighted (here the models are weighted) sum of these sums. The final prediction is the candidate with the largest score, which then goes through post-processing. In the case of two candidates with very close scores, I also tried to predict their concatenation, but it did not really bring much.</p> <p>The solution appeared to be quite sensitive to the weighting coefficients, which is a disadvantage, because they cannot be reliably selected in cross-validation.</p> <p>The eventual CV score for a wide range of weights was around 0.732. Individual performance among models was the best for ELECTRA, and RoBERTa scored better than BERT.</p> <p>Thanks for reading!</p>
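<p>The neighbor label-smoothing targets for the QA head described above (0.9 on the true token, 0.05 on each neighbor, shifted to 0.06/0.04 at sequence edges) and a KL loss against them can be sketched as follows; the function names are hypothetical and the loss drops the constant entropy term of the target only when it is convenient not to:</p>

```python
import numpy as np

def smooth_target(true_idx, seq_len):
    """Distribution over tokens: 0.9 on the true index, 0.05 on each direct
    neighbour; at a sequence edge the missing neighbour's mass moves to the
    two tokens on the existing side (0.06 and 0.04)."""
    t = np.zeros(seq_len)
    t[true_idx] = 0.9
    if true_idx == 0:                   # no left neighbour: smooth rightwards
        t[1], t[2] = 0.06, 0.04
    elif true_idx == seq_len - 1:       # no right neighbour: smooth leftwards
        t[true_idx - 1], t[true_idx - 2] = 0.06, 0.04
    else:
        t[true_idx - 1] = t[true_idx + 1] = 0.05
    return t

def kl_loss(pred_logits, target):
    # KL(target || softmax(pred_logits)), summed over the target's support
    log_p = pred_logits - np.log(np.exp(pred_logits).sum())
    mask = target > 0
    return float(np.sum(target[mask] * (np.log(target[mask]) - log_p[mask])))
```

<p>With one-hot targets this reduces to ordinary cross-entropy; the smoothing simply tolerates off-by-one boundary predictions.</p>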
TReNDS Neuroimaging
11th place solution (14th with pure SVR)
<p>Thanks again to the organizers for a great competition and to all my competitors for a great challenge. This was my first kaggle competition; I learned a lot, and trying to keep up with the top scores was a thrill (while also very stressful)!</p> <p>I finally was able to write up the (almost) whole code base of my approach. Here's how it worked and what made my approach sometimes unique:</p> <ol> <li><a href="https://www.kaggle.com/miykael/trends-exploration-of-the-targets">TReNDS: Exploration of the targets</a>: Having a closer look at the targets, I decided on the following transformations. <ul><li><strong>First</strong>, targets were scaled using a <code>RobustScaler</code>.</li> <li><strong>Second</strong>, targets 2 to 5 were transformed to a more normal distribution using a power of 1.5</li> <li><strong>Third</strong>, I took the decision to restrict all predictions to the unique values in the target set. Especially for <code>age</code> this was helping a lot.</li></ul></li> <li><a href="https://www.kaggle.com/miykael/trends-feature-exploration-engineering">TReNDS: Feature exploration &amp; engineering</a>: Concerning the feature datasets, I removed about 1% of extreme outliers (i.e.
subjects with often very strange feature values) and I've created two additional feature datasets: <ul><li><strong>Intra subject</strong> features: By computing the correlations between all 53 MRI maps within a subject, I've created 1'378 new features hopefully representing within subject characteristics.</li> <li><strong>Inter subject</strong> features: By computing the correlation between each of the 53 MRI maps and the corresponding averaged population map, I've created 53 new features hopefully representing between subject characteristics.</li></ul></li> <li><a href="https://www.kaggle.com/miykael/trends-data-scaling-and-modeling">TReNDS: Data scaling and modelling</a>: Following the lead from the kaggle discussions/notebooks I investigated a feature dataset unique scaling approach. However, instead of scaling the FNC dataset just with a factor of 500, I fine tuned the exact scaling factor for each of my 4 feature datasets. Additionally, this fine tuning was done separately for each of the 5 targets, and for each of the models I've explored (i.e. Ridge and SVR with an RBF kernel).</li> </ol> <p>These three notebooks brought me to a leaderboard position below 20. Only in the last 5 days I've discovered the <a href="https://www.kaggle.com/david1013/trends-multi-layer-model">TReNDS Multi-Layer Model</a> notebook showcasing a stacking approach. By adapting this code and optimizing it to my individually scaled dataset, I was able to push my score the ladder up to my final position.</p> <p>As an additional goody, I've also tried to profit from the particular relationship between <code>domain2_var1</code> and <code>domain2_var2</code> (as described in my first notebook) and was hoping that a multi-target prediction approach could help. To do so, I've implemented a dense neural network with two outputs and multiple loss functions. Unfortunately, this approach wasn't fruitful. 
Nonetheless, the corresponding notebook can be found here: <a href="https://www.kaggle.com/miykael/multi-loss-neural-network-for-domain2-targets">Multi-loss neural network for domain2 targets</a></p> <p>Thanks again for a great competition.</p> <p>Cheers, Michael</p>
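<p>The third target transformation above, restricting predictions to the unique values seen in the training targets, amounts to snapping each raw prediction to its nearest observed value. A sketch with synthetic stand-in data (the value grid here is hypothetical, not the real age distribution):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the training targets: values drawn from a fixed grid.
train_targets = rng.choice(np.linspace(14, 86, 120), size=400)
unique_vals = np.unique(train_targets)  # sorted unique target values

def snap_to_unique(preds, unique_vals):
    # Map each raw prediction to the closest value seen in the training targets.
    idx = np.searchsorted(unique_vals, preds)
    idx = np.clip(idx, 1, len(unique_vals) - 1)
    left, right = unique_vals[idx - 1], unique_vals[idx]
    return np.where(preds - left < right - preds, left, right)

raw = rng.normal(50, 15, size=10)       # raw model predictions (stand-in)
snapped = snap_to_unique(raw, unique_vals)
```

<p>This helps most when the target takes only a discrete set of values, as the write-up notes for <code>age</code>.</p>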
Prostate cANcer graDe Assessment (PANDA) Challenge
9th Place Solution [ResNeSt might be the key]
<p>First of all, thank you very much to the organizers.</p> <p>My solution is simple. I ended up with the following models in my final ensemble submission:</p> <ul> <li>Label Smoothing + Ordinal regression + efficientnet_b0 x 2</li> <li>Label Smoothing + Ordinal regression + GeM + ResNeSt50 x 2</li> </ul> <p>I was not confident with efficientnet_b0, since there was a difference of about .02-.03 between the CV (.90XX) and Public LB (.87XX). It seemed to me like it was overfitting the LB despite Label Smoothing, and was less stable. Whereas ResNeSt50 (Split-Attention Network) gave me a stable difference between Public LB (.85XX) and CV (.86XX) along with Label Smoothing.</p> <p>We all knew that qwk was not a stable metric for the given amount of Public Test Images. We also knew that the training data seemed to have an ample amount of noise. So my intuition was to select one lower-scoring but more stable model and one best-Public-LB-scoring model for ensembling. <br> I manually weighted the models during ensembling. I observed that giving more weight to efficientnet_b0 (the best Public LB scoring model) led to a decrease in Public LB, therefore I selected the ensemble having equal weights (simple average).</p> <p>After the release of the Private LB, the same ResNeSt50 gave a private LB score of .90XX.</p> <p>I was pretty sure I would at least land in the bronze zone, but the shake-up turned out to be very rough, eventually landing me in the gold zone (completely unexpected).</p> <p>For training, I used both techniques for tiling the images (a list of tiles and a single large image of tiles).
I used 36x256x256 (level - 1) tiles along with simple augmentations like hflip, flip, transpose, and rotation.</p> <p><strong>What more could have been done?</strong></p> <p>Since I was using kaggle and Colab for training, I was constrained to a 30-hour weekly limit and a 12-hour session limit, respectively. It is clear that I didn't have enough GPU compute to do more experiments.</p> <p>But the following seemed to be a very effective approach: </p> <p>I also tried training seresnext50 + AdaptiveConcatPool + Classification (pretty sure that Ordinal Regression would have scored better) on the single-large-image-of-tiles dataset with noisy labels removed (images having pen marks), along with <strong>Progressive Resizing</strong>, i.e. training in the following sequence:</p> <p>16x128x128 (Level-2) --&gt; 16x160x160 (Level-2) --&gt; 20x224x224 (Level-1) and so on....</p> <p>Make sure to load the weights from the previous stage while training the next stage.</p> <p>Due to the limited compute power I wasn't able to train seresnext50 on a larger image size; my final input image size was 20x224x224, which gave me a CV of .81XX and Public LB of .82XX (pretty stable). When the Private LB came, the resnext50 model trained in the very same way mentioned above gave a private LB of .89XX. Since I wasn't able to train this on larger images, I wasn't able to use it in the final ensemble.</p> <p>I am pretty sure it would have given a decent boost to my final score.</p>
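<p>The ordinal-regression setup mentioned above is commonly implemented (e.g. in public PANDA kernels) by encoding ISUP grade <code>g</code> as cumulative binary "grade &gt;= k" targets and decoding by counting passed thresholds. A sketch, with hypothetical helper names:</p>

```python
import numpy as np

def isup_to_ordinal(isup, n_grades=6):
    # ISUP grade g -> [1]*g + [0]*(5-g): five binary "grade >= k" targets
    return (np.arange(1, n_grades) <= isup).astype(np.float32)

def ordinal_to_isup(probs, threshold=0.5):
    # The prediction is the number of thresholds the model believes are passed.
    return int((probs > threshold).sum())

enc = isup_to_ordinal(3)                                      # -> [1., 1., 1., 0., 0.]
pred = ordinal_to_isup(np.array([0.9, 0.8, 0.6, 0.4, 0.1]))   # -> 3
```

<p>Training then uses a per-threshold binary loss (e.g. BCE with logits), which respects the ordering of the grades better than plain 6-way classification.</p>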
TReNDS Neuroimaging
10th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this exciting competition and also congratulations to all the winners.</p> <p>Here is a brief description of my solution.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2615556%2Fd058e1058828f2adcb7b3c3ba2abeab2%2Fsol%20(6).png?generation=1593597555606133&amp;alt=media" alt=""></p> <ul> <li>My final submission (public: 0.15665, private: 0.15741) is an average of several variations of the prediction above.</li> <li>The blending part of the pipeline above uses not only OOF predictions but predictions on training sets (Gaussian noise is added to reduce overfitting). This improved public LB score 0.1577 -&gt; 0.1569.</li> <li>I used only ridge regressor as the 2nd level model. The parameter alpha and the scales of features were optimized with Optuna.</li> </ul>
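The second-level ridge step can be sketched with a closed-form ridge fit; adding Gaussian noise to the training-set predictions mirrors the overfitting trick described above (the data, weights, and alpha here are illustrative, not the author's):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                       # stacked 1st-level predictions
y = X @ np.array([0.5, 0.3, 0.2])                   # target as a blend of them
X_noisy = X + rng.normal(scale=0.1, size=X.shape)   # Gaussian noise vs. overfitting
w = ridge_fit(X_noisy, y, alpha=1.0)
```

In practice the author used a scikit-learn-style ridge regressor with alpha and per-feature scales tuned by Optuna; the noise injection keeps the stacker from memorizing the (non-OOF) training predictions.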
Prostate cANcer graDe Assessment (PANDA) Challenge
25th place solution [Kaggle_gaggle]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you very much to the organizers, and thanks to <a href="/kyunghoonhur">@kyunghoonhur</a> for collaborating with me!</p> <p>We are happy to get an unexpected medal (our public LB rank was 124th). Individually, this is my first medal on Kaggle, and it made me even more into Kaggle.</p> <p>Based on <a href="/haqishen">@haqishen</a>'s train &amp; inference notebook, we will mention some of the things we tried and how they affected our final result.</p> <h2>Tile Size Selection</h2> <ul> <li>256x256x36 tiles : best results</li> <li>128x128x16 tiles : lower result than 256x256x36 tiles. It seemed necessary to raise the image resolution.</li> <li>256x256x16 tiles : better result than 128x128x16 tiles, but not satisfactory.</li> <li>256x256x36 tiles with as little white as possible : Since 256x256x36 tiles have lots of white space, we tried to remove white space based on <a href="/rftexas">@rftexas</a>’s <a href="https://www.kaggle.com/rftexas/better-image-tiles-removing-white-spaces">notebook</a>. It achieves a lower train loss than simple 256x256x36 tiles, but the quadratic weighted kappa score did not improve.</li> </ul> <h2>Augmentation</h2> <p>Several different augmentations were tested (Transpose, VerticalFlip, HorizontalFlip, RandomRotate, Blur, etc), but not much performance improvement was seen. We just took the basic augmentation configuration from <a href="/haqishen">@haqishen</a>’s notebook. 
Albumentations library</p> <blockquote> <p>Transpose(p=0.5) VerticalFlip(p=0.5) HorizontalFlip(p=0.5) All the augmentations were applied at 2 levels: tile level + after the tiles were concatenated</p> </blockquote> <h2>Model</h2> <p>As in other deep-learning image classification competitions, we tried the most popular model architectures (ResNet, EfficientNet). Among several ResNet-family structures, SE_ResNeXt50 showed the highest score (we could not try models larger than 50 layers because of our GPU limitation). EfficientNet showed stable and high CV scores. We unfortunately couldn't train the larger EfficientNet variants on our GPU, but some discussions let us know that deep and heavy models tend to overfit (EffNet B6). So we focused on EfficientNet B0 and B1; there was not much difference between them.</p> <h2>Optimizer &amp; scheduler</h2> <p>Adam optimizer: Adam + GradualWarmupScheduler + CosineAnnealingLR</p> <h2>Inference</h2> <p>a) TTA (Test Time Augmentation)</p> <p>Based on the tile generation method from Qishen Ha's kernel, slight augmentation was added when conducting tile extraction; this is the mode=0 / mode=1 option of our PANDA dataset generation class. The difference between mode=0 and mode=1 is the order in which tiles are placed into the concatenated input (36 x tile). So, at inference time, mode=0 tiles and mode=1 tiles were treated as augmented data for test time augmentation (TTA). Additionally, we added transform augmentation in the same way as in training (2 levels, tile + concatenated input). From several experiments, TTA had quite a positive effect on our public score as the number of augmented samples increased. 
However, since this was a code competition that limits submission time to under 9 hours, we used an intermediate number of TTA runs (not 100+ TTA) to stay within the regular submission time.</p> <blockquote> <p>16 TTA (mode=0) + 16 TTA (mode=1) Transpose(p=0.5) VerticalFlip(p=0.5) HorizontalFlip(p=0.5)</p> </blockquote> <p>b) Model Ensemble</p> <p>The hardest part of this competition was how to account for overfitting on our training data and how to anticipate the shake-up on the private data. We carefully watched our CV score and LB score and continuously compared them. In the end, from the CV-vs-LB comparison for each fold, we picked the fold with the most similar CV and LB scores.</p> <p>The ensemble [EfficientNet B0 (fold0) and EfficientNet B1 (fold0 and fold1)] showed the best score on both the public and the final (private) leaderboard.</p>
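The tiling underlying all of these variants (after Qishen Ha's kernel) pads the slide to a tile-size multiple, cuts it into tiles, and keeps the N tiles with the most tissue (i.e. the lowest pixel sums, since the background is white). A minimal numpy sketch under those assumptions; function and variable names are mine:

```python
import numpy as np

def make_tiles(img, tile_size=256, n_tiles=36):
    """Pad to a multiple of tile_size, cut into tiles, keep the tissue-rich ones."""
    h, w, c = img.shape
    pad_h = (tile_size - h % tile_size) % tile_size
    pad_w = (tile_size - w % tile_size) % tile_size
    img = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), constant_values=255)
    t = img.reshape(img.shape[0] // tile_size, tile_size,
                    img.shape[1] // tile_size, tile_size, c)
    tiles = t.transpose(0, 2, 1, 3, 4).reshape(-1, tile_size, tile_size, c)
    # background is white (255), so tiles with the most tissue have the lowest sum
    order = np.argsort(tiles.reshape(tiles.shape[0], -1).sum(-1))
    return tiles[order[:n_tiles]]
```

The mode=0 / mode=1 TTA described above then amounts to changing the order in which these selected tiles are placed into the concatenated 6x6 input.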
M5 Forecasting - Accuracy
3rd place solution - NN approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone, We would like to share our solution with you.</p> <p>Firstly, we would like to thank hosts, Kaggle for opening this great competition. Also, thanks to all participants for useful discussions. Personally, this is the first competition I've joined, and I am happy to achieve a meaningful result.</p> <p>Our goal was to achieve a high rank in this competition using deep learning (neural network) approach. I and my colleague are more familiar with deep learning methods and we believed that NN can get competitive result with other ML approaches (especially gradient boosting).</p> <hr> <h3>Summary</h3> <p>We trained modified DeepAR with Tweedie loss and make a final prediction from the ensemble of multiple models chosen using the past 14 periods of WRMSSEs.</p> <h3>Network</h3> <p>Our base network is DeepAR(<a href="https://arxiv.org/abs/1704.04110">https://arxiv.org/abs/1704.04110</a>) which consisted of multiple LSTMs. We modified the baseline network to predict 28days with rolling predictions in the training phase (original DeepAR only predict 1-day in the training phase). Our modified DeepAR generates losses from rolled 28days and this gives more stable predictions on rolled prediction.</p> <h3>Loss</h3> <p>Tweedie Loss - Thanks to <a href="/timetraveller">@timetraveller</a> for a good discussion (<a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/150614">https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/150614</a>)</p> <h3>Features</h3> <p>We used the following features. 
All features are concatenated and fed to the network.</p> <ul> <li>Sale values <ul><li>Lag 1 value</li> <li>Moving average of 7, 28 days</li></ul></li> <li>Calendar: all values are normalized to [-0.5,0.5] <ul><li>wday</li> <li>month</li> <li>year</li> <li>week number</li> <li>day</li></ul></li> <li>Event <ul><li>Event type : use embedding</li> <li>Event name : use embedding</li></ul></li> <li>SNAP : [0, 1]</li> <li>Price <ul><li>raw value</li> <li>Normalized across time</li> <li>Normalized within the same dept_id</li></ul></li> <li>Category <ul><li>state_id, store_id, cat_id, dept_id, item_id : use embedding</li></ul></li> <li>Zero sales <ul><li>Continuous zero-sale days until today</li></ul></li> </ul> <h3>Training Scheme</h3> <p>We randomly sampled a 28-day slice from each sequence and made 64 slices for each batch. Each batch is fed to the network and the network predicts the next 28 days. Training ran for 300 epochs (one epoch = 30490/64 iterations). We used the Adam optimizer with cosine annealing for the learning rate schedule. We used all periods including validation (~1942) for training.</p> <h3>CV</h3> <p>It was hard to select a CV period as WRMSSE fluctuates severely depending on the period. Besides, we found the 1914~1942 period to be more unreliable, as its characteristics differ more from other periods (many items start selling there after long zero-sale stretches). As the level-12 values are intermittent and sporadic, we concluded that the network also struggles to fit the training values, and therefore that we didn't need to worry much about overfitting. Instead, we concentrated on selecting, from the trained models, the ones with low WRMSSE values over specific periods.</p> <p>We evaluated WRMSSE for the past 14 periods (1914, 1886, 1858, ... 1550) and selected models with a low mean WRMSSE. For each model, we evaluated every 10 epochs (from epoch 200 to 300) and selected the top-3 epochs. 
From the 8 trained models, we built an ensemble of the 8*3 selected checkpoints for the final prediction. More precisely, we made the final prediction using 24 + 19 models (dropout applied in 24 of them, not applied in the other 19). This was just a heuristic strategy.</p> <h3>What we tried, but failed</h3> <ul> <li>Models <ul><li>We tried other network architectures including CNN, Transformers, etc. We could not find any model superior to the others, so we chose the most basic model, DeepAR. </li></ul></li> <li>Loss <ul><li>Classification loss (CE loss)</li> <li>WRMSSE as a loss</li></ul></li> <li>Forecast Reconciliation <ul><li>Used different levels, but worse than using lv12</li></ul></li> </ul> <hr> <p>We are happy to have achieved our goal with this method. More importantly, we've learned many things from this competition. Thanks to all again!</p>
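The Tweedie loss mentioned above, for a power parameter 1 &lt; p &lt; 2, can be written down directly. A minimal numpy sketch (the actual training used a framework tensor version; predictions are assumed strictly positive):

```python
import numpy as np

def tweedie_loss(y_true, y_pred, p=1.5):
    """Mean Tweedie negative log-likelihood (up to a constant) for 1 < p < 2.

    Well suited to zero-inflated, non-negative targets like unit sales."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    a = y_true * np.power(y_pred, 1.0 - p) / (1.0 - p)
    b = np.power(y_pred, 2.0 - p) / (2.0 - p)
    return np.mean(-a + b)
```

Per sample the minimizer is y_pred = y_true, and the p parameter interpolates between Poisson (p=1) and Gamma (p=2) behavior, which is why it handles the many exact zeros at level 12 gracefully.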
TReNDS Neuroimaging
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>We( <a href="/yasufuminakama">@yasufuminakama</a>, <a href="/berserker408">@berserker408</a> ) would like to thank kaggle &amp; host for the interesting competition and to all the participants for giving us a lot of ideas. And congrats to winners!</p> <h1>Solution overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1695531%2Fb512c7d9fbdd8d728b42b10cd34ba8f8%2FTReNDS_8th_place_solution.png?generation=1593492256129741&amp;alt=media" alt=""></p> <h1>3D map features</h1> <p>We extracted 512 3D map features using 3D-Resnet10. As I posted to External Data Thread, we used the one shared by <a href="/shentao">@shentao</a> ( see <a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/147797">https://www.kaggle.com/c/trends-assessment-prediction/discussion/147797</a> ). More details, you can check his discussion. Thanks a lot to him. As written in <a href="https://www.kaggle.com/c/trends-assessment-prediction/data">https://www.kaggle.com/c/trends-assessment-prediction/data</a>, <code> The scores (see train_scores.csv) are not the original age and raw assessment values. They have been transformed and de-identified to help protect subject identity and minimize the risk of unethical usage of the data. Nonetheless, they are directly derived from the original assessment values and, thus, associations with the provided features is equally likely. Before transformation, the age in the training set is rounded to nearest year for privacy reasons. However, age is not rounded to year (higher precision) in the test set. 
Thus, heavily overfitting to the training set age will very likely have a negative impact on your submissions. </code> We needed to avoid overfitting to the training data, and the 3D map features helped a lot with that.</p> <h1>Models</h1> <p>We used fillna data (missing values filled by SVM+Ridge).</p> <h2>NN</h2> <p>The NN base structure is below. ``` class CFG: hidden_size1=64 hidden_size2=128 hidden_size3=6 dropout=0.5</p> <p>class TabularNN(nn.Module): def __init__(self): super().__init__() self.mlp1 = nn.Sequential( nn.Linear(26, CFG.hidden_size1), nn.BatchNorm1d(CFG.hidden_size1), nn.Dropout(CFG.dropout), nn.PReLU(), nn.Linear(CFG.hidden_size1, CFG.hidden_size1//2), nn.BatchNorm1d(CFG.hidden_size1//2), nn.Dropout(CFG.dropout), ) self.mlp2 = nn.Sequential( nn.Linear(1378, CFG.hidden_size2), nn.BatchNorm1d(CFG.hidden_size2), nn.Dropout(CFG.dropout), nn.PReLU(), nn.Linear(CFG.hidden_size2, CFG.hidden_size2//2), nn.BatchNorm1d(CFG.hidden_size2//2), nn.Dropout(CFG.dropout), ) self.mlp3 = nn.Sequential( nn.Linear(512, CFG.hidden_size3), nn.BatchNorm1d(CFG.hidden_size3), nn.Dropout(CFG.dropout), nn.PReLU(), nn.Linear(CFG.hidden_size3, CFG.hidden_size3//2+1), nn.BatchNorm1d(CFG.hidden_size3//2+1), nn.Dropout(CFG.dropout), ) self.head = nn.Sequential( nn.Linear(CFG.hidden_size1//2+CFG.hidden_size2//2+CFG.hidden_size3//2+1, CFG.hidden_size1//2+CFG.hidden_size2//2+CFG.hidden_size3//2+1), nn.BatchNorm1d(CFG.hidden_size1//2+CFG.hidden_size2//2+CFG.hidden_size3//2+1), nn.Dropout(0.1), nn.Linear(CFG.hidden_size1//2+CFG.hidden_size2//2+CFG.hidden_size3//2+1, 5), ) def forward(self, cont_x): loading_x = cont_x[:, :26] fnc_x = cont_x[:, 26:1404] img_x = cont_x[:, 1404:] loading_x = self.mlp1(loading_x) fnc_x = self.mlp2(fnc_x) img_x = self.mlp3(img_x) x = torch.cat((loading_x, fnc_x, img_x), 1) x = self.head(x) return x <code> The criterion was optimized per target using Optuna as below. This improved results. 
</code> def weighted_nae(inp, targ, W=torch.FloatTensor([0.3, 0.175, 0.175, 0.175, 0.175])): return torch.mean(torch.matmul(torch.abs(inp - targ), W / torch.mean(targ, axis=0)))</p> <p>W_DICT = {'age': torch.FloatTensor([1.0, 0.28405462973430023, 0.15685327697616772, 0.06391563565928776, 0.014662813285223892]), 'domain1_var1': torch.FloatTensor([0.6638609260536665, 1.0, 0.14338389219696276, 0.1384448250515486, 0.1682642506073942]), 'domain1_var2': torch.FloatTensor([0.5528250413216551, 0.2767068350188637, 1.0, 0.7764813978084125, 0.1654236853816111]), 'domain2_var1': torch.FloatTensor([0.7856514193916039, 0.15941727005223633, 0.43970552609312813, 1.0, 0.16377864561256422]), 'domain2_var2': torch.FloatTensor([0.502415239502587, 0.2892349530842958, 0.2382098399995868, 0.15870234342595627, 1.0])} ``` Averaged NN scored CV: 0.15786, LB: 0.15781 without post-processing. We prepared more averaged NN prediction and used it for final blend.</p> <h2>other models</h2> <p>We referred four public kernels, and retrain them with our 3D map features. <a href="https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging">https://www.kaggle.com/aerdem4/rapids-svm-on-trends-neuroimaging</a> ( <a href="/aerdem4">@aerdem4</a> ) <a href="https://www.kaggle.com/tunguz/rapids-ensemble-for-trends-neuroimaging">https://www.kaggle.com/tunguz/rapids-ensemble-for-trends-neuroimaging</a> ( <a href="/tunguz">@tunguz</a> ) <a href="https://www.kaggle.com/david1013/trends-multi-layer-model">https://www.kaggle.com/david1013/trends-multi-layer-model</a> ( <a href="/david1013">@david1013</a> ) <a href="https://www.kaggle.com/andypenrose/baggingregressor-rapids-ensemble">https://www.kaggle.com/andypenrose/baggingregressor-rapids-ensemble</a> ( <a href="/andypenrose">@andypenrose</a> ) Thanks to these four kernels' authors.</p> <h2>site2 detect model</h2> <p>This part is similar to the public kernel except adding the 3D map features. The valid AUC can reach 0.98. Thanks to Bojan. 
<a href="https://www.kaggle.com/tunguz/adversarial-trends-site">https://www.kaggle.com/tunguz/adversarial-trends-site</a> ( <a href="/tunguz">@tunguz</a> )</p> <h1>post-processing</h1> <p>This trick is quite common in regression competitions. And it improved our score by ~0.0003. <code> sub.loc[(sub.target=='age')&amp;(sub.site==1),'Predicted'] *= 1.01 sub.loc[(sub.target=='age')&amp;(sub.site==2),'Predicted'] *= 1.02 sub.loc[(sub.target=='domain1_var1'),'Predicted'] *= 1.01 sub.loc[(sub.target=='domain2_var1')&amp;(sub.site==2),'Predicted'] *= 1.01 </code></p> <h1>What did not work</h1> <ul> <li>pseudo labeling</li> </ul> <h1>Final result</h1> <p>Public score: 0.15683 (Public 10th) , Private score: 0.15729 (Private 8th)</p>
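The competition metric everyone above is optimizing — the feature-weighted normalized absolute error, with weight 0.3 on age and 0.175 on each of the four domain scores — looks like this in numpy (my own restatement of the metric used in the team's torch loss):

```python
import numpy as np

def weighted_nae(y_true, y_pred, w=(0.3, 0.175, 0.175, 0.175, 0.175)):
    """Feature-weighted normalized absolute error; lower is better.

    y_true, y_pred: (n_samples, 5) arrays ordered as
    age, domain1_var1, domain1_var2, domain2_var1, domain2_var2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    nae = np.abs(y_true - y_pred).sum(axis=0) / y_true.sum(axis=0)
    return float(np.dot(nae, w))
```

Normalizing by the sum of true values per target explains why small multiplicative post-processing nudges (like the 1.01/1.02 factors above) can shave off a few ten-thousandths.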
ALASKA2 Image Steganalysis
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Methods:</strong> I combined a model trained in the spatial domain (YCbCr) and a model trained in the DCT domain with a second-level model trained using their bottleneck features. The spatial domain model was seresnet18 initialized with imagenet weights. To make training work I removed the stride in the first conv layer and the max-pooling layer. It's important for the model to stay at higher resolutions longer to capture the weak stego signals. Channel-attention with the se-block also gives significant improvement when comparing the results of resnet18 and seresnet18. To model in the DCT domain, I transformed the 8x8=64 DCT components as input "channels", so the original 3x512x512 becomes (64x3=192)x64x64. The 192 raw DCT values was one-hot encoded before entering the CNN following the idea of this paper (<a href="http://www.ws.binghamton.edu/fridrich/Research/OneHot_Revised.pdf">http://www.ws.binghamton.edu/fridrich/Research/OneHot_Revised.pdf</a>). The CNN has six 3x3 conv layers with residual connections and se-blocks, spatial size is constant 64x64 until the last GAP layer. For augmentations, besides rotate90 and flips, cutmix worked pretty well. Each time a stego image was met during training, I also randomly re-assigned +1 and -1 to the DCT values in the modified positions. The 2nd-level models was a regular fully-connected net.</p> <p><strong>Results:</strong> I made totally four 65000x4/10000x4 (train/validation) splits. On each split, I trained one set of the models described above. The final submission was a simple average of the probabilities. 
Except for the split4 spatial domain model, which was trained with mixup and 10 classes, all the other models were trained with cutmix and 12 classes. The validation results are shown below. For this competition, we should not use public LB for model selection.</p>

| | YCC | DCT | combined |
| --- | --- | --- | --- |
| split3 | 0.9395 | 0.8706 | 0.9439 |
| split4 | 0.9381 | 0.8749 | 0.9434 |
| split6 | 0.9409 | 0.8674 | 0.9452 |
| split7 | 0.9418 | 0.8719 | 0.9453 |
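Treating the 8x8 DCT components as input channels means gathering, for each of the 64 frequency positions, its value from every 8x8 block — turning one 512x512 plane into 64 channels of 64x64 (x3 color planes gives the 192 channels described above). A minimal numpy sketch of the rearrangement (the subsequent one-hot encoding of raw DCT values is omitted):

```python
import numpy as np

def dct_to_channels(plane, block=8):
    """Rearrange an (H, W) DCT plane into (block*block, H//block, W//block):
    channel k holds frequency position (k // block, k % block) of every block."""
    h, w = plane.shape
    x = plane.reshape(h // block, block, w // block, block)
    return x.transpose(1, 3, 0, 2).reshape(block * block, h // block, w // block)
```

Channel 0 is then the DC coefficient of every block, and each channel is spatially coherent, which lets a small constant-resolution CNN operate on it directly.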
ALASKA2 Image Steganalysis
[12th place] Solution Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the winners and participants. Thank you to the organizers for this interesting competition! </p> <p>Wow, <a href="https://www.kaggle.com/felipekitamura" target="_blank">@felipekitamura</a> and I are super surprised that we somehow ended up in the gold medal range! Our solution doesn't really have a lot of novelty, but here it is, in brief:</p> <p>4-model ensemble:</p> <ul> <li>EfficientNet-B8 (0.930 -&gt; 0.920)</li> <li>EfficientNet-B5, change initial stride 2 to stride 1 (0.926 -&gt; 0.920)</li> <li>EfficientNet-B4, change initial stride 2 to stride 1 (0.928 -&gt; 0.920)</li> <li>EfficientNet-B0, change first two stride 2 to stride 1 (0.930 -&gt; 0.921)</li> </ul> <p>All models were trained on original 512x512 RGB input with flip augmentation.</p> <ul> <li>Initialization: ImageNet</li> <li>4 classes</li> <li>Multisample dropout, dropout 0.2</li> <li>Concat pooling (concatenate global max and global average pooling)</li> <li>Vanilla cross-entropy loss</li> <li>AdamW optimizer, cosine decay, initial LR 5.0e-4, 50 epochs. B4 trained slightly differently, using OneCycleLR (30% warmup, peak LR 3.0e-4, 30 epochs) with original striding, then fine-tuned for 30 epochs with initial stride 1</li> <li>Single 90/10 split</li> <li>Models were trained on 4x Quadro RTX 6000 24GB using a batch size of around 32-40. B8/B5 models were trained on 8x V100 32GB. We didn't start seriously participating in this competition until the last 2 weeks, so without these hardware resources it would have been very difficult to do well. 
This is probably the competition where I used the most GPU compute, as I usually don't even bother with multi-GPU training.</li> </ul> <p>It was tough to get local CV because I kept running into a bug with the wAUC computation… When I tried to fix it, all the wAUCs for the models fell into the 0.34-0.35 range. Still unsure what was going on!</p> <p>Best single model public LB 0.930 (private 0.921) -&gt; simple ensemble LB 0.936 (private 0.928).</p> <p>Other things we tried:</p> <ul> <li>TTA: many other teams had improvements with TTA, but we did not. For example, the B0 model went from 0.930 to 0.926 on public LB with TTA. <strong>However</strong>, this improved private LB performance from 0.920 to 0.924! </li> <li>Mixup: we experimented with a mixup strategy where we mixed together the cover and stego images of the same source. It took much longer to train, though, so we abandoned the idea. </li> <li>ResNet/ResNeSt: didn't work as well. Removing the max pooling helped, but EfficientNets were better. MixNet also performed well, but EfficientNets were the best overall. </li> </ul>
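Concat pooling (concatenating global max and global average pooling, as used above) doubles the channel dimension fed to the classifier head. A minimal numpy sketch of the operation on NCHW feature maps:

```python
import numpy as np

def concat_pool(feats):
    """Concat pooling: (N, C, H, W) -> (N, 2C) = [global max ; global avg]."""
    gmp = feats.max(axis=(2, 3))   # global max pool
    gap = feats.mean(axis=(2, 3))  # global average pool
    return np.concatenate([gmp, gap], axis=1)
```

Max pooling preserves the strongest local evidence of embedding noise while average pooling summarizes it globally, which is presumably why the combination helps for weak stego signals.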
ALASKA2 Image Steganalysis
18th Place Brief Solution Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank Kaggle for organizing such an interesting competition. Unfortunately, I had not paid much attention to it until just a couple of weeks ago, and am regretting not spending more time on it. Especially since models for the competition took <strong>really</strong> long time to train.</p> <p>Like many others, I decided to seriously join the competition after discovering the wonderful <a href="https://www.kaggle.com/shonenkov">Alex Shonenkov</a> starter <a href="https://www.kaggle.com/shonenkov/train-inference-gpu-baseline">kernel</a>. Most of my work was building upon his work, and experimenting with different network backbones, training schedules, ensembles, and to a lesser degree augmentations.</p> <p>My final ensemble consists of the following three EfficientNet model architectures:</p> <ul> <li><p>B1 - I have only trained one of these networks successfully. The best result can be found in <a href="https://www.kaggle.com/tunguz/best-b1-inference/">this kernel</a>, and all the weights are in <a href="https://www.kaggle.com/tunguz/eb1-weights">this dataset</a>. The best single model achieves 0.918 locally, 0.925 on public LB and 0.911 on private LB.</p></li> <li><p>B2 - I have trained several of these networks, and managed to get quite a bit of an improvement over Alex's original model. However, these networks have only made a minor contribution to my final blend. 
The best result can be found in <a href="https://www.kaggle.com/tunguz/best-b2-inference">this kernel</a>, and all the weights are in <a href="https://www.kaggle.com/tunguz/eb2-weights/">this dataset</a>. The best single model achieves 0.923 locally, 0.929 on public LB and 0.916 on private LB.</p></li> <li><p>B4 - These networks have been the main workhorses behind my solution. Unfortunately, they are really slow to train, and I pretty much pushed all of my compute resources to get the best solution with them. Did not have any additional capacity (compute + time) to train bigger models, which I suspect would have performed even better. The best result can be found in <a href="https://www.kaggle.com/tunguz/best-b4-inference">this kernel</a>, and all the weights are in <a href="https://www.kaggle.com/tunguz/alaska2-eb4-model-weights">this dataset</a>. The best single model achieves 0.930 locally, 0.940 on public LB and 0.925 on private LB.</p></li> </ul> <p><strong>Training Schedule</strong></p> <p>This is probably the biggest source of improvement for my networks over the base one that was found in the public kernel. I started with the same training schedule as in the original kernel, but then I <strong>re-trained</strong> all the weights - twice. The second time I decreased the LR to 0.0005, and increased patience to 2. I added JpegCompression to the augmentations, but otherwise did not change anything. I also increased training to 50 epochs, which took a really long time to go through, since each epoch would take about 2.5 hours on a V100 GPU.</p> <p><strong>Train/Val Split</strong></p> <p>For most of the models I stuck to the original 80/20 split, but used different seeds. The choice of split seems to have been very important in this competition - the final models varied between 0.933 and 0.940 on LB. I have also trained a couple of models with the 95/5 split, but since they were B2 networks, they did not contribute significantly to the final ensemble. 
</p> <p><strong>Final Blend</strong></p> <p>Since I've used different splits for my training, it was not possible to do a consistent local validation for the ensemble, so I was forced to rely on LB. Again, since I was only able to work on this competition for less than two weeks, that did not leave me a lot of submissions on which to base my validation strategy, so I had to rely on my intuition a lot. For instance, I did not use an equally weighted average for the blend of the same model on different splits, but based the weights on the individual model's performance on LB. It is highly likely that the weights I had finally used are suboptimal, but I feel fortunate that my two submissions were the best ones on private LB. </p>
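The final blend weighted models by their LB performance rather than equally. A minimal sketch of such a score-weighted average (the scores and predictions here are hypothetical placeholders, not the author's actual weights):

```python
import numpy as np

def blend(preds, scores):
    """Weighted average of model predictions, weights proportional to scores.

    preds: (n_models, n_samples) array-like, scores: per-model LB scores."""
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(preds, dtype=float)

# hypothetical: three models whose public LB scores were 0.925 / 0.929 / 0.940
p = blend([[0.2, 0.8], [0.4, 0.6], [0.3, 0.9]], [0.925, 0.929, 0.940])
```

Any monotone transform of the scores (e.g. rank-based weights) works the same way; as the author notes, weights chosen by hand against a small public LB are likely suboptimal.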
ALASKA2 Image Steganalysis
[31st place] with public kernel
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to <a href="https://www.kaggle.com/shonenkov" target="_blank">@shonenkov</a> and his <a href="https://www.kaggle.com/shonenkov/train-inference-gpu-baseline" target="_blank">public kernel</a><br> I have set up the size of batches for my hardware and have used B7</p>
ALASKA2 Image Steganalysis
[4th place] Definitely it's overfitting to the public LB. (0.948->0.930)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Definitely it's overfitting to the public LB. But a solo gold is good enough for me. Maybe the best outcome for me, because there is no need to submit my messy source code :) Congrats to everyone in this competition. We have learned a lot from it.</p> <p>Here is a brief overview of my solution.</p> <p>■ Using Tensorflow and TPU</p> <p>■ Preparation of the data Save DCT (512x512x16) as png into .tfrec files. (lossless) Save the quality factor into the .tfrec file. Save a draft payload into the .tfrec file. (If there is any change in an 8x8 area, count it as 1)</p> <p>Transform DCT into YCbCr in the TF way at training time. By using this method, I can train one epoch in 15~20 mins for Effnet B0/B1, and 30~45 mins for B6/B7. The reason for using YCbCr is <a href="https://www.kaggle.com/c/alaska2-image-steganalysis/discussion/150359#845167">here</a>.</p> <p>■ Augmentation All of the augmentations are implemented in the TF way. (maybe there is something wrong here)</p> <p>Flip LR/UD Flip +/- (IMO it's available by using YCbCr) Multiply by a random number (0.98~1.02). (We are detecting the change of wavelets) Randomly shuffle the 24x24 / 32x32 / 40x40 blocks. IMO this can make the model focus on the wavelets of each block rather than the contents of the image. We keep the border (16 pixels) because UERD always changes this area. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2006644%2F86c070756b810c74293e6ada0707b630%2Fshuffle.png?generation=1595300985030345&amp;alt=media" alt=""></p> <p>No Rot90, because the quantization table is not diagonally symmetric. 
(maybe I was wrong)</p> <p>■ Models Changing one pixel by +/-1 in DCT will cause a change of 8x8 block in YCbCr, There are a lot of patterns to learn, so I tried B6/B7/B8 and the final result is the ensemble of them. The <a href="https://www.kaggle.com/wuliaokaola/alaska2-best-b6-inference">best single model</a> is B6(Local/public/private: 0.940/0.940/0.929)</p> <p>Multiclass(4) + Aux loss (payload, mae) Quality factor as input. Not target. Using small learning rate at final stage. TTA: LR/UD/+-, 8 per image. (see Augmentation)</p> <p>■ Training I just focused on one fold and refined it on all data. Regrettably, it is overfitting to the public LB. By observing the public/private LB, I think many of others are like me except the winner. Congratulations again.</p> <p>If I trained more folds maybe I could ...... :) (It will take a long time and forget it now) </p> <p>■Update 7/22 Add a link of <a href="https://www.kaggle.com/wuliaokaola/alaska2-best-b6-inference">best single model</a>.</p>
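The interior block-shuffle with a fixed 16-pixel border can be sketched in numpy (the author worked in TF; this is an illustrative reimplementation, with names of my choosing):

```python
import numpy as np

def shuffle_blocks(img, block=32, border=16, rng=None):
    """Randomly permute interior block x block tiles; the border stays fixed."""
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    core = out[border:-border, border:-border]      # view into out
    nh, nw = core.shape[0] // block, core.shape[1] // block
    coords = [(i, j) for i in range(nh) for j in range(nw)]
    perm = rng.permutation(len(coords))
    src = core.copy()
    for (di, dj), k in zip(coords, perm):
        si, sj = coords[k]
        core[di * block:(di + 1) * block, dj * block:(dj + 1) * block] = \
            src[si * block:(si + 1) * block, sj * block:(sj + 1) * block]
    return out
```

The permutation preserves the pixel multiset and leaves the border untouched, so block-local embedding statistics survive while global image content is scrambled.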
Tweet Sentiment Extraction
[48th place] what worked and what did not work
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Solution is built on top of the amazing kernel from Abhishek: <br> <a href="https://www.kaggle.com/abhishek/roberta-on-steroids-pytorch-tpu-training">https://www.kaggle.com/abhishek/roberta-on-steroids-pytorch-tpu-training</a></p> <p>What worked: <br> - for neutral: selected_text = text - increased batch size (64) - label smoothing - modified CE loss to penalize distance from true span (see this kernel: <a href="https://www.kaggle.com/laevatein/tweat-the-loss-function-a-bit">https://www.kaggle.com/laevatein/tweat-the-loss-function-a-bit</a>) - post-processing to account for offset induced by empty space, a.k.a. the magic, see below =&gt; +0.004 LB - pseudo-labelling (surprise!) =&gt; +0.002. I produced pseudo-labelled samples from the original dataset by predicting the sentiment and selected text using only the samples in the training split for each fold iteration, in order to avoid leakage in the validation split. I took the most confident 4000 pos/neg and 2000 neutral, based on sum of start/end logits. For neutral pseudo-samples selected_text = text. <br> - ensembling: average logits of 3 seeds 5-fold models + 3 seeds models running on the entire training dataset (adding 1 epoch) =&gt; +0.001 LB</p> <p>What did not work: well, all the rest, in particular: <br> - augmentation (synonym, word swapping, character replacement, ...) 
- pos/neg upsampling - fancy heads on top of transformers - ...</p> <pre><code>def fix_offset(x):
    txt = x['text']
    if txt == '':
        return ''
    pred = x['selected_text']
    new_txt = ' '.join(txt.strip().split())
    start = new_txt.find(pred)
    end = start + len(pred)
    to_end = len(new_txt) - end
    return txt[start: len(txt) - to_end]
</code></pre>
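The pseudo-labelling step described above (keep the most confident 4000 pos/neg and 2000 neutral samples ranked by the sum of start/end logits, with selected_text = text for neutral) might look like the sketch below; the column names are hypothetical, not from the author's code:

```python
import pandas as pd

def select_pseudo(df, n_polar=4000, n_neutral=2000):
    """Keep the most confident pseudo-labelled rows, ranked by the sum
    of the best start and end logits (column names are assumptions)."""
    df = df.copy()
    df["conf"] = df["start_logit"] + df["end_logit"]
    polar = df[df["sentiment"] != "neutral"].nlargest(n_polar, "conf")
    neutral = df[df["sentiment"] == "neutral"].nlargest(n_neutral, "conf")
    # for neutral pseudo-samples, selected_text = text (as in the post)
    neutral = neutral.assign(selected_text=neutral["text"])
    return pd.concat([polar, neutral], ignore_index=True)
```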
ALASKA2 Image Steganalysis
Private LB 13th Solution - so near yet so far :)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulation to all, and especially to the prize winner and gold medalists.<br> Secondly, ouch, narrowly missing out on gold.</p> <p>So here are some essential points of our solutions:</p> <ul> <li><p>DL framework: Pytorch</p></li> <li><p>Used Architecture: Effnet(timm/geffnet) B0/B2/B3/B4/B5, <a href="https://github.com/clovaai/rexnet" target="_blank">Rexnet</a> 1.3/1.5/2.0 </p></li> <li><p>Model customization: <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> and <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> did some amazing work in adding convolution head and SE head to our chosen architecture, and dropping the last two blocks from before the final FC output also helps - I will let them expand on these. </p></li> <li><p>Upsampling Cover: We find that upsampling cover image to match amount to Stegos helps to improve both CV and LB</p></li> <li><p>Method of ensemble: we have trained about 80+ models, we have taken the top models with top 6CV in each fold, and did unweighted gmean </p></li> </ul> <p>We were particularly careful about overfitting the LB so throughout the competition we didn't use it for feedback. 
But in general, we observed that for the same fold of data, better CV usually (not always) yields better LB.</p> <p>Best Single Model - B5 with upsampling covers, added convolution heads, drop blocks -&gt; Private LB 0.927, Public LB 0.928</p> <p>Congratulations to my dear teammates <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> <a href="https://www.kaggle.com/strideradu" target="_blank">@strideradu</a> and <a href="https://www.kaggle.com/yl1202" target="_blank">@yl1202</a> for the effort, I really had a great time in this short and yet intensive competition! :)</p>
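The unweighted gmean ensembling described above can be sketched as a generic geometric-mean blend of per-model probabilities (this is the standard formula, not the team's actual pipeline):

```python
import numpy as np

def gmean_ensemble(prob_list, eps=1e-12):
    """Unweighted geometric mean over models.
    prob_list: (n_models, n_samples) array-like of probabilities.
    Clipping avoids log(0); for a rank-based metric like weighted AUC
    no renormalisation is needed."""
    probs = np.clip(np.asarray(prob_list, dtype=float), eps, 1.0)
    return np.exp(np.mean(np.log(probs), axis=0))
```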
iWildCam 2020 - FGVC7
LB #8 Documentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: iWildCam 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone! Thanks to the organizers for putting together such a fun competition and really interesting set of data. Here is our submission (rank #8 or #10 depending on whether you count the [Deleted] entries) and some other things we tried / wanted to try.</p> <p>Our submission was a pretty simple ensemble of a resnet 152 and a resnet 101.</p> <p>We used all of the provided Megadetector annotations to create a dataset of cropped images with species labels. We then split the data into a training and validation set (just 2 fold), making sure the validation set came from distinct locations and keeping the class distributions approximately the same. We used a ‘mini’ version of each of these (~5-10% of the images) as a mini-training and mini-validation set, which we did our hyperparameter tuning on. </p> <p>The resnets were pretrained on ImageNet and finetuned on the camera trap crops with pretty heavy data augmentation (random flip, random rotate, random zoom, random contrast and lighting modification, random warp, and mixup). Both models were further fine-tuned for a couple epochs with a class-balanced subset of crops which contained no more than 100 examples per class.</p> <p>Within each image, each Megadetector detection is cropped, sent through the classifiers, and then weighted by Megadetector’s detection confidence. 
We tuned how much to weight this using a simple ‘diminishing factor’ <em>d</em>: (detector_confidence + d) / (d + 1) * crop_prediction.</p> <p><strong>(Here’s the interesting part that didn’t really go anywhere)</strong> We then multiply each image’s prediction vector by a set of ‘geographical priors’ we constructed in a similar fashion to host Elijah’s paper <a href="https://arxiv.org/abs/1906.05272">Presence-Only Geographical Priors for Fine-Grained Image Classification</a>. Since we didn’t have latitude and longitude information, we used the datetime (encoded to sin/cos as in the paper) and the mean normalized difference vegetation index (basically infrared minus red) of each satellite image to construct the geographical priors model. We used a ‘diminishing factor’ for these priors too, but only ever saw about a ~1% increase in our validation score, and a 0.2% increase in our test score. But we thought this was interesting and are excited to see what others tried with the satellite imagery.</p> <p>We tried a couple other methods in addition to mean NDVI that didn't work as well. First, we tried just using resnet-extracted image features for each satellite image as features for the model (replacing latitude and longitude in the paper’s model) - but, since the data is ‘presence-only’ the model relies upon generated negative examples, which we couldn’t really generate reasonably for the image features. We tried just selecting random satellite images from the dataset to use as ‘negatives’, but this is such a limited number of data points it didn’t seem to work. And just generating a random length-2048 feature vector as a negative seemed silly, since it seems unlikely that a satellite image would ever generate those features.</p> <p>Then we tried the mean NDVI thing. Cool. 
As a sidenote, we eliminated all the cloud, shadow, and 0-value pixels from the satellite images using the QA data, so the NDVI we calculate is only for visible land.</p> <p>Then we tried making that more complicated. Instead of using the mean NDVI (one value per image), we modeled a normal distribution of pixel-wise NDVI values over each satellite image, created a cumulative distribution function for those values for each image, and sampled 100 points from the CDF for features. That allowed us to generate more complex negative examples by generating random normal distributions with a mean between -1 and 1 (the values that the NDVI can take). This seemed like it was helping, as the validation loss of the geo priors model went down…but when we actually used it for priors for classification our score went down.</p> <p>Anyway, this didn’t really help much, but I thought it was cool. Would love to chat more if anyone has any thoughts on this (or maybe the hosts can tell me why my ideas are bad :P )</p> <p><strong>(Back to the boring part)</strong> We then perform a moving average of the image-wise prediction vectors over each sequence. So, each image’s prediction becomes the average of the images nearest to it. We tuned this window size.</p> <p>We then tried a majority vote ‘with dissenters’ within sequences (this may have a real name). It’s just a majority vote for the final class of each sequence, but if an image has a differing classification with a confidence higher than our ‘dissenter threshold’, we let it keep its prediction. This was meant to help sequences that were mostly empty save for a few frames. We tuned this threshold as well as a ‘handicap’ for non-empty images, such that a majority vote can only be for ‘empty’ if it has at least ‘handicap’ more empty images than the 2nd place class in the vote.</p> <p><strong>What we wanted to try</strong> Sara’s paper! 
<a href="https://arxiv.org/abs/1912.03538">Context R-CNN: Long Term Temporal Context for Per-Camera Object Detection</a> We actually implemented a modification of this which just uses the memory banks and attention networks during classification (still using Megadetector for detections). Unfortunately we entered this thing pretty late and didn’t have time to train it….but we’ll give it a go as a ‘Late Submission’ :)</p> <p>Well, that was a spiel. If you read through hopefully it’s either interesting, or you’re reading this a year from now as a starting point for iWildcam 2021. Heh</p> <p>So long, and thanks for all the ocellated turkey….</p>
TReNDS Neuroimaging
1st private (1st public) place + code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Our team: Nikita Churkin <a href="/churkinnikita">@churkinnikita</a>, Dmitry Simakov <a href="/simakov">@simakov</a>.</p> <p>First of all, thanks to the organizers, Kaggle team and all participants: it was an important and very exciting challenge for our team. In addition, we want to thank everyone who wished us luck in kaggle discussions.</p> <p>The core of our approach is: I. Generate powerful features from 3D fMRI data. II. Train-test “matching”. III. Training of a diverse ensemble (different models and feature subsets). IV. Combining all together.</p> <p>Let us describe the details.</p> <h2>I. Generate features from 3D fMRI data.</h2> <p>We tried to use 3D CNN models at first, but did not spend so much time on it. A small network (resnet10 without pretraining) with AdamW, L1 regularization, <a href="https://github.com/HolyBayes/pytorch_ard">ard</a> layers, snapshots and CutOut3D augmentation worked better as a regressor. 3D autoencoders (simple or UNet-like) + PCA (or ICA) also worked, but were slightly worse for us. </p> <p>The score overall was quite low (only age and domain2-var2 were higher than median prediction) and composing with other features was difficult (the 3D CNN model trains 10 epochs, whereas the MLP on loadings and fnc trains 70 epochs). So we (mostly <a href="/simakov">@simakov</a>) decided that it was too hard a problem for many of the top20 participants and they must have used something different (it is great that we were wrong, your solutions are cool and interesting!).</p> <p>Our research showed that (obviously) the most important problem was the high dimensionality of the 3D data. 
Most related papers use some dimensionality reduction method to solve this problem. We already had data after such a procedure (see data description). But we wanted to move further. If we understood right, the data preparation pipeline consisted of two stages: intra-subject level and inter-subject level (it will be cool to hear more about it from <a href="/miykael">@miykael</a>). For the second one, we had to work with all subjects, but they did not fit in memory. Nipype can probably work with such data, but it is too complicated for fast usage (and <a href="/miykael">@miykael</a>'s great notebooks don't help ). So, we used the classics – sklearn! Sklearn has some algorithms for mini-batch dimensionality reduction: dictionary-learning and Incremental PCA with the useful method <code>partial_fit</code>. In theory, we prefer to use as many components as possible (there is probably a lot of information in 3D scans), use as many features (channels) as possible (for better interactions among them), and have the smallest batch size (RAM limit). These algorithms require that batch-size &gt;= n_components.</p> <p>The most useful scheme was Incremental PCA with n_components 200, batch-size 200. We split the channels into groups of 10 and flattened inside them (6 groups in total). All training took 9 hours on a 24-thread processor with 64gb RAM. As a result, we had 1200 PCA features and they improved our model by 2.3e-3. We also tried dictionary-learning for diversity with n-components 100, batch-size 100 and n-iters 10 (the training procedure took 2 days). We fitted models on train + test data without any additional preprocessing (different settings generated lower scores or required much more RAM). 
</p> <p>We refused to use 3D CNN models in our submissions – it took too long and we already saw great scores on the CV and Public LB.</p> <p>In addition, we computed some simple statistics, like mean, std, and quantiles inside each feature of the 3D scans and similar stats for the FNC matrix.</p> <h2>II. Train-test matching</h2> <h3>Preprocessing for constructed features.</h3> <p>The idea is actually simple: add a bias to different columns in the test set to make it closer to the train set. Linear models showed good performance in the competition, so it was expected that adding biases would help a lot (at least for linear models). There are lots of ways to figure out possible biases: we used minimization of the Kolmogorov-Smirnov test's statistic between train[col] and test[col] + b.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F597945%2F4bafd16ce91cf310ef6e47fe0240c188%2Foffsets.png?generation=1593547322071002&amp;alt=media" alt=""></p> <p>You can see that the resulting distributions are not perfectly matched but all gaps mostly disappeared: it was enough to boost our public/private score significantly for any single model or the whole ensemble. We used biases for site2 data (known points + classifier output) and separate biases for other test data.</p> <p>The same logic can be applied (as postprocessing) to the submission.</p> <h3>Design of a good site2 classifier</h3> <p>At the beginning, we thought that the classifier was the most important part for surviving the shakeup. As mentioned before, offsets for site2 (both known and with classifier) improved the public LB score by 1e-4. They had the same effect on the private LB. So, we could have taken first place even without the classifier and offsets for site2.</p> <p>Nevertheless, our approach:</p> <p>Raw loading, fnc and pca features were used for the site2 classifier. We applied StandardScaler to train + test data (it boosts the score by ~0.02 roc-auc). Regression ElasticNet was used for modeling. 
Then, we made pseudo-labels inside the training fold (hard labels for only site2 with threshold 0.3) and retrained the model with pseudo-labels (only +0.004 gain). Final metrics were: 0.973 roc-auc, 0.76 F1-score (subject to site2 class). Our model detected ~1400 new site2 observations, but 200-400 of them were False Positive errors.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F597945%2F3f8ad9c1dd6686b70ea3190bd73e078b%2Fscheme.png?generation=1593547171811243&amp;alt=media" alt=""></p> <p>If we apply offsets before training the classifier, roc-auc becomes 0.67 and our model poorly distinguishes site1 from site2.</p> <h3>Postprocessing.</h3> <p>Postprocessing used the same logic as the preprocessing: we try to make the distribution of site2 predictions (revealed + from classifier) closer to the distribution of site1 predictions. We calculate the KS statistic for site2 and non-site2 predictions and find the best shift. But the effect of postprocessing was small: only 5e-5 for both public and private. </p> <h2>III. Training of a diverse ensemble (different models and feature subsets).</h2> <p>The main and only “model” for domain1_var2 was median prediction.</p> <p>And for the other targets we used ML models to create a "score" for every label and every feature subset. Scores are just OOF predictions on the data of only one source (like, for example, FNC features or dictionary learning features). Random Forests and GBM performed poorly in making scores, learning on raw features or stacking.</p> <p>But we wanted to use something tree-based in our ensemble (to add diversity) and remembered <a href="https://arxiv.org/abs/1109.0887">RGF </a>.</p> <p>RGF showed nice performance in construction of scores and all levels of stacking (especially for age prediction).</p> <p>Making scores allowed us to effectively use more sophisticated and/or slow algorithms (like MLP or RGF) on top of, in some way, all data at once. 
Scores were built by blending some fitted linear models (like Ridge or OMP) and RGF.</p> <p>Our stacking approach involved different models on different datasets (including scores) for each target. Final stacking for complicated labels such as domain2-var1 and domain1-var1 included only a blend of basic linear models + NuSVM, KernelRidge and Gaussian Process trained on different datasets. The final ensemble for domain2-var2 contained more models but nothing truly different from the complicated case (just a few more dataset-model packages).</p> <p>Because a detailed listing of every target-dataset-model combination is boring, we are going to explain the detailed stacking scheme only for age.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F597945%2Feae94538eefadc9324d5d2952ec94d0e%2Fstack.png?generation=1593547091734113&amp;alt=media" alt=""></p> <h2>TLDR:</h2> <p>1) Incremental PCA for 3d images. 2) Offsets for test features (like we did in ION). 3) Large (4-5 levels) stack. Our path can be summarised in this plot:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F597945%2Fdb5df8ddc55fcc420de046a016722a95%2FProgress2.png?generation=1593547263683229&amp;alt=media" alt=""></p> <p>UPDATE: Our code: <a href="https://github.com/DESimakov/TReNDS/">https://github.com/DESimakov/TReNDS/</a></p>
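The train-test matching step — minimising the Kolmogorov-Smirnov statistic between train[col] and test[col] + b — can be sketched with a self-contained brute-force grid search (the authors' actual optimiser may differ; the grid is an assumption):

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between
    the two empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    both = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, both, side="right") / len(a)
    cdf_b = np.searchsorted(b, both, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def best_offset(train_col, test_col, grid):
    """Return the bias b from `grid` minimising KS(train, test + b)."""
    stats = [ks_stat(train_col, test_col + b) for b in grid]
    return grid[int(np.argmin(stats))]
```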
TReNDS Neuroimaging
9th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I joined that competition pretty late so I started with a brute force attack. </p> <p>My solution is basically a big 3-level stack ensemble of tabular models. My final prediction has 24 baseline models.</p> <ul> <li>6x Ridge</li> <li>1x Lasso</li> <li>3x RAPIDS SVR</li> <li>2x XGboost</li> <li>2x KNN</li> <li>9x MLP</li> <li>1x 2D CNN</li> </ul> <p>Each model is trained using different datasets including the original features and additional engineered features extracted from 3D brain scans.</p> <p>For the second level stacked predictions I used 4 models: - BaggingRegressor( base_estimator=Ridge ) - KNeighborsRegressor - NuSVR - RandomForestRegressor</p> <p>My final prediction is a weighted average of those 4 models. My final local CV is 0.1559 and LB: 0.15724 (Public)/ 0.15742 (Private). I believe the gap between local and LB scores is due to some site2 differences.</p> <p>Each fMRI scan has 4 dimensions (53, 52, 63, 53), which makes things hard when it comes to extracting features. Basically the first axis represents 53 different scans (not time related), so the dimension of each brain scan is only (52, 63, 53).</p> <p>To extract features from the fMRI data I used many approaches, but the ones that performed best were those where I calculated metrics comparing images from two different scans. So, most of my features are 2-way interactions of all 53 scans for each patient. That way I created thousands of features to describe each fMRI. </p> <p>To make the MLP work better I had to use BatchNorm in all layers and Dropout of around 0.7~0.8 to generalize well. That was a key point in my models since most of the features have a very small signal or are just random noise.</p> <p>My 2D CNN was trained using projections of each 3D image along the 3 axes: x, y, z. So each fMRI scan is reduced to a 2D representation. That model scored 0.17x locally.</p> <ul> <li>Autoencoder embeddings didn't work for me.</li> <li>3D CNNs scored very badly for me and I regret not using them in my stacking since top solutions used them.</li> <li>Post-processing site2 worked just a little bit on the Public LB.</li> <li>Feature selection overfit the train set, so I didn't use it.</li> <li>I'm surprised that simple KFold CV worked pretty well for stacking in this small dataset (5877 rows)</li> <li>RAPIDS t-SNE features didn't work.</li> </ul>
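One way to build the 2-way scan interaction features described above is to compare every pair of the 53 spatial maps with some metric. Correlation is used here purely as an example of such a metric — the author computed many different ones:

```python
import numpy as np

def pairwise_scan_features(scans):
    """scans: (53, D) array of flattened spatial maps for one subject.
    Return the upper triangle of the scan-by-scan correlation matrix
    as a feature vector: one value per unordered pair of scans."""
    corr = np.corrcoef(scans)
    iu = np.triu_indices(corr.shape[0], k=1)
    return corr[iu]
```

For 53 scans this yields 53 * 52 / 2 = 1378 features per metric, which is how a handful of metrics quickly produces thousands of features per patient.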
Tweet Sentiment Extraction
14th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>&nbsp;&nbsp;&nbsp;&nbsp;First of all, I want to thank the Kaggle team for holding this interesting competition. This is my first time taking part in a Kaggle competition and my first time participating in the discussion, and I am very happy and lucky to get the gold medal. My only regret is that I had used the fast submission method before, so in the last hours I had to upload my model weights. The upload speed was very slow, so I gave up on the model ensemble.</p> <p>&nbsp;&nbsp;&nbsp;&nbsp;My solution is not special. I tried many tricks, but most of them couldn't improve my score, such as adversarial training, language model pretraining, heavy dropout, training two models and concatenating embeddings, joint training with an added classification loss, and training different models (albert, xlnet, bert, roberta large, roberta base squad). </p> <h1>Summary of my useful solution</h1> <ul> <li>preprocess<br> Preprocessing gives me 0.712-&gt;0.717.</li> <li>pseudo labeling<br> Pseudo labeling gives me 0.717-&gt;0.720. Pseudo labeling should use a higher learning rate and vote through three models to generate labels. My base model's learning rate is 2e-5; when I use the pseudo-labeled data, it changes to 4e-5.</li> <li>postprocess<br> Postprocessing gives me 0.720-&gt;0.729. 
</li> </ul> <h1>postprocess code</h1> <pre><code>def get_orgin_tweet_idx(origin_tweet, selected_text):
    selected_text = selected_text.strip()
    selected_start = selected_text.split()[0]
    for num, i in enumerate(origin_tweet):
        if origin_tweet[num:num + len(selected_start)] == selected_start:
            if ' '.join(origin_tweet[num:].split())[:len(selected_text)] == selected_text:
                return num

def get_pp_result5(orign_tweet, selected_text):
    selected_text = selected_text.strip()
    standard_orig_text = ' '.join(orign_tweet.split())
    standard_selected_text = ' '.join(selected_text.split())
    if standard_orig_text == standard_selected_text:
        return selected_text
    if selected_text == None or selected_text.strip() == '':
        return orign_tweet
    idx = get_orgin_tweet_idx(orign_tweet, selected_text)
    if idx == None:
        return orign_tweet
    if idx == 0:
        return selected_text
    if selected_text[-1].isalpha():
        selected_text = selected_text + ' '
    else:
        if selected_text.endswith('.') and orign_tweet[0] == ' ':
            lst = selected_text.split()[-1]
            tp_idx = -2
            for sst in range(len(lst) - 1, -1, -1):
                if lst[sst] == '.':
                    tp_idx = sst
            if tp_idx != 0:
                selected_text = ' '.join(selected_text.split()[:-1] + [selected_text.split()[-1][:tp_idx + 1]])
            else:
                selected_text = selected_text
    if orign_tweet[idx - 1] == ' ':
        selected_text = ' ' + selected_text
    if orign_tweet[:idx - 1].split(' ').count('') &lt; 2:
        if orign_tweet[:idx - 1].split(' ').count('') == 1 and orign_tweet[idx - 1] == '.':
            return '.' + selected_text.strip()
        return selected_text
    else:
        if orign_tweet[idx - 1] == '.':
            selected_text = '.' + selected_text.strip()
        selected_text = orign_tweet[idx - 1 - (orign_tweet[:idx - 1].split(' ').count('') - 1):idx - 1] + selected_text
        selected_text = selected_text[:len(selected_text) - (orign_tweet[:idx - 1].split(' ').count('') - 1)]
        return selected_text
</code></pre> <p>For any questions, you can contact me, and I am willing to communicate and learn. </p>
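The three-model voting used to generate pseudo labels is not spelled out in the post; a plausible sketch is a strict majority vote that drops examples on which no label wins a majority:

```python
from collections import Counter

def majority_pseudo_labels(model_preds):
    """model_preds: per-example lists of predicted spans, one entry per
    model, e.g. [["good day", "good day", "good"], ...].
    Returns the span agreed on by a strict majority of models, or None
    (such examples would then be dropped from the pseudo set).
    The exact voting rule in the post is an assumption here."""
    labels = []
    for votes in model_preds:
        top, cnt = Counter(votes).most_common(1)[0]
        labels.append(top if cnt > len(votes) // 2 else None)
    return labels
```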
TReNDS Neuroimaging
20th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Everyone,</p> <p>Now that final validation has ended, I'll share my solution.</p> <p>First, channel-wise 3D convolutions are applied to the fMRI spatial maps. All channels share the convolution weights to prevent overfitting. The output features are fed into an Edge Update GNN together with the FNC correlations.</p> <p>The outputs from the GNN are averaged and concatenated with the sMRI loadings. Finally, a conventional MLP is applied and predictions for age and the other target variables are obtained.</p> <p>Code: <a href="https://github.com/toshi-k/kaggle-trends-assessment-prediction">https://github.com/toshi-k/kaggle-trends-assessment-prediction</a> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F172060%2F7b73d84ca14bf27a964417366d92e5bb%2Fconcept.png?generation=1593698635618709&amp;alt=media" alt=""></p>
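The weight sharing across channels — the same 3D convolution kernel applied to all 53 spatial maps — can be emulated in numpy by treating channels like a batch dimension. The author's actual model does this in PyTorch; this is only a sketch of the idea (valid convolution, no padding):

```python
import numpy as np

def channelwise_conv3d(vol, kernel):
    """vol: (C, X, Y, Z) spatial maps; kernel: (k, k, k).
    Apply the SAME kernel to every channel, i.e. the weights are
    shared across the 53 maps, which is the overfitting-control
    trick described in the post."""
    k = kernel.shape[0]
    # all k*k*k patches per channel: shape (C, X-k+1, Y-k+1, Z-k+1, k, k, k)
    win = np.lib.stride_tricks.sliding_window_view(vol, (k, k, k), axis=(1, 2, 3))
    # contract the window axes against the kernel
    return np.tensordot(win, kernel, axes=([4, 5, 6], [0, 1, 2]))
```

Because the kernel has no channel index, the parameter count is independent of the number of input maps.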
Plant Pathology 2020 - FGVC7
The 2nd Solutions(Single fold and single model)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Plant Pathology 2020 - FGVC7 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Settings - Pseudo Label (&gt;=0.95 confidence) - Single fold</p> <p>Data augmentation - HorizontalFlip - VerticalFlip - ShiftScaleRotate - IAAEmboss, IAASharpen, Blur - IAAPiecewiseAffine - Random Erasing</p> <p>Backbone - Pnasnet5large (0.98182 in private LB, 0.98189 in public LB, final submission) - ResneSt101 (0.97481 in private LB, 0.98240 in public LB)</p> <p>Imagesize - Resize the image to (545, 545)</p> <p>Loss and Learning rate - CrossEntropyLoss - CosineSchedule over 30 epochs with Warmup</p>
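The &gt;=0.95-confidence pseudo-label filter listed in the settings can be sketched as a simple threshold on the softmax maximum (a generic implementation, not the author's code):

```python
import numpy as np

def confident_pseudo(probs, threshold=0.95):
    """probs: (n, n_classes) softmax outputs on unlabelled images.
    Return the indices of rows whose max probability clears the
    threshold, together with their hard pseudo labels."""
    conf = probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)
```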
M5 Forecasting - Uncertainty
3rd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2><strong>Edit</strong></h2> <p>Finally our joint paper (with my colleague <a href="https://www.kaggle.com/voglinio" target="_blank">Costas Voglis</a>) is published by <a href="https://www.sciencedirect.com/science/article/pii/S0169207022000012" target="_blank">IJF</a>. Preprint can be downloaded <a href="https://www.researchgate.net/profile/Konstantinos-Vogklis/publication/358384908_Blending_gradient_boosted_trees_and_neural_networks_for_point_and_probabilistic_forecasting_of_hierarchical_time_series/links/620f897e08bee946f38ae1ff/Blending-gradient-boosted-trees-and-neural-networks-for-point-and-probabilistic-forecasting-of-hierarchical-time-series.pdf?_sg%5B0%5D=r6g6AvzEhj6_qRn6ABHj1aeUTxNzwqQ-HtA8IS0-fugvZt_pAYiPmJw6p0MqWmVp-4LM-UfjKR2DKxqXdpW7eg.qFxWw0NEbdLkFygkRNhfdd6W8L8xYJNTu0wwAP1_pIPOxtGsjQhciK5NxlRNYbewoFhcueVqfCHLOoYEPHh1fg&amp;_sg%5B1%5D=IhTK6Cj3EgiJrJvROVZgvg5Q7N2nofb2-aW8IrOzzKlxORwdvsRZZFwvi_VHUhjpA_FAAwx5E69ZNX6yZ4b2WEAp7EeGTg0ZHpeFtrxQ9PT9.qFxWw0NEbdLkFygkRNhfdd6W8L8xYJNTu0wwAP1_pIPOxtGsjQhciK5NxlRNYbewoFhcueVqfCHLOoYEPHh1fg&amp;_sg%5B2%5D=xcZL4-Ogyb4bZpCB_W5Pix-Yk-p8dx7bf_f8Rgbencr-bwcUPjvl2rM0CDa9FXslXLG5aMvJ9EhVKAc.HbdPkuNdOrYa8DPxIKxV0zexUyd3eShvk6Btv7r8lPgl_QYsZNTL7FVjvG_mlr44foKRNTcz7KUgjjI1mpobQA&amp;_iepl=" target="_blank">here</a>.</p> <h2><strong>Thanks</strong></h2> <p>I would like to thank the organizers for putting out such an exceptional competition on this scale. I also need to mention that my involvement in the M5 competition was both tasked and sponsored by my employer (Nodalpoint Systems, Athens, Greece). 
Due to other engagements, including a <a href="https://medium.com/the-downlinq/spacenet-6-announcing-the-winners-df817712b515" target="_blank">3rd prize in the SpaceNet 6 challenge</a> hosted at Topcoder (team SatShipAI), I was able to get involved in both M5 tracks (Accuracy &amp; Uncertainty) no sooner than the beginning of May, i.e. more than 2 months after the competitions had started.</p> <h2><strong>Background</strong></h2> <p>Together with my colleague <a href="https://www.kaggle.com/voglinio" target="_blank">Costas Voglis</a>, we formed a team (Nodalpoints) for the Accuracy track of M5, ending up in the <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/164685" target="_blank">21st position</a>. For several reasons, we preferred to participate in the Uncertainty track separately, having first <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/133467#864904" target="_blank">confirmed</a> that this was indeed not a violation of the competition terms. Costas himself ended up 26th in the Uncertainty track.</p> <h2><strong>Solution</strong></h2> <p>My solution is an extension of my own contribution to the team Nodalpoints submission to the M5 Accuracy track; in other words, the <strong>starting point</strong> of my solution is <strong>my own part</strong> of the predictions submitted to the M5 accuracy track by the team Nodalpoints. </p> <p>After many failures at the beginning trying to tackle this competition with a classic RNN approach, probably due to the hierarchical nature of the outcome, I realized that I wouldn’t get good results if I continued treating it as a time series problem, and I should treat / transform it to a regression problem instead (as already suggested in several forum threads). 
</p> <p>All necessary code is now available in a <a href="https://github.com/IoannisNasios/M5_Uncertainty_3rd_place" target="_blank">GitHub repo</a>.</p> <h2><strong>Accuracy Modeling:</strong></h2> <p>• <strong>1 LightGBM model per store</strong> (10 stores), trained for a different number of rounds for every store (700-1600), using a total of 70 features and all available data. <br> • <strong>3 Keras models</strong> (with categorical embeddings) of almost identical architecture, trained on the last 17*28=476 days of data using only 19 features.<br> • Ensembling, weighted geometric mean:<br> ((Lgbm ** 3) * KerasSimpleAverageOf3Models) ** (1/4)<br> <strong>Keypoint:</strong> The Keras models fail on the last day’s prediction (an outlier – see note below), probably due to the NBA finals on that day (and because I only used 1 year’s data for training – mainly to speed up). For that day, I used the LightGBM predictions alone (no ensemble).</p> <p>(Note: from the sales_train_evaluation.csv file, mean sales per day over the last 2 years (28*26 days) is 1.25 with a standard deviation of 0.22. The Keras mean prediction for the last day is 3.9, which is over 6 sigma away from the mean, thus an outlier.)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F219107%2F1e1d28c6d07b48e924d4dc3e18f510d1%2Fsales_per_day.png?generation=1594722839639810&amp;alt=media" alt=""></p> <p>This M5 accuracy solution is the outcome of a regression problem. 
But there is another dimension which is not yet mined: for every item we get predictions for 28 days, and these regression predictions can benefit from the fact that this is actually a time-series problem, via a simple <strong>exponential weighted mean</strong> (row-wise):</p> <p><code>Acc_subm.iloc[:,1:] = Acc_subm.iloc[:,1:].ewm(com=0.04, axis=1).mean().values</code></p> <p>This is a post-process that should also have been done in the Accuracy track, but because it was a last-minute finding (2 hours before the competitions closed), I only inserted it in the Uncertainty track (it cost us 3 places in the Accuracy one).</p> <h2><strong>Uncertainty Predictions:</strong></h2> <p>Furthermore, for the Uncertainty track, we had to calculate the median and four prediction intervals, not just for every item but for all 12 levels. Having the median from the Accuracy track as a starting point for level 12, <strong>with simple aggregations we obtain the median for the other 11 levels</strong>. Going from the median to the complete set of 9 quantile estimates was done mostly by <strong>tuning coefficient multipliers</strong>. Tuning was done using only 1 fold (trying to overfit the public LB), but for more accurate results more folds should be used.</p> <p>The higher the aggregation level, the more confident we are in the point prediction, and thus the lower (closer to 1) the coefficient multipliers we use. To estimate the multipliers for each level, the <strong>normal distribution was used in levels 1 – 9 and a skew-normal distribution for levels 10-12</strong>. Also, <strong>due to right-skewness</strong> in our sales data on every aggregation level, the last of the 9 quantiles (=99.5%) was further multiplied by a factor (1.02 or 1.03).</p> <p>In the Accuracy track, I worked on 3 different folds, and the final model ensembling weights were selected from the mean score of those folds. Looking at every fold alone, I noticed that there was a multiplier that could be used to maximize accuracy. 
These multipliers were 1.04, 0.97 and 1.0 for the respective folds. This means that the <strong>final submission in the Accuracy track will be very sensitive to multipliers</strong>, and this will affect the Uncertainty track, too. Needing to minimize this volatility for the Uncertainty track, I decided to amplify the above tuned coefficient multipliers by a really small amount. <strong>All tuned multipliers (except for the median) were moved away from 1 by 0.5%.</strong></p> <p><strong>Level 12</strong> (the only non-aggregated one) was the most difficult for accurate estimations, and the above calculations were not enough here. Statistical calculations played a major role for this level. <strong>Sales quantiles</strong> were calculated over the last 364 days and over the last 28 days for every item and later weighted averaged (weights = [1, 1.75], respectively). Also, <strong>weekly sales quantiles</strong> were calculated over the last 13*28 days and 3*28 days, and these results were simply averaged. 
Finally, Level 12 was calculated as:<br> all quantiles excluding the median<br> <strong>0.2*coefficient tuning + 0.7 * sales quantiles + 0.1 * weekly sales quantiles</strong><br> median (not sure if I should have done this)<br> 0.8*coefficient tuning + 0.2 * sales quantiles</p> <p>As with Level 12, for <strong>level 11 (item_id – state_id aggregation)</strong>, sales quantiles were calculated too, and the final predictions were calculated as:<br> all quantiles excluding the median (which was left as-is)<br> <strong>0.91*coefficient tuning + 0.09 * sales quantiles</strong></p> <h2><strong>Flowchart</strong></h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F219107%2F5f625ee90b2de7015c1a7eddcce93ae7%2FFlowchart.jpeg?generation=1594723487531854&amp;alt=media" alt=""></p> <h2><strong>Late submissions</strong></h2> <p>Late submissions to both the M5 Accuracy and Uncertainty competitions show, as expected, a strong correlation between the two scores.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F219107%2F9f61b260d3b094b1135e3dd67b023035%2Facc-unc.png?generation=1594723317372193&amp;alt=media" alt=""></p> <h2><strong>Summary</strong></h2> <ul> <li>I used accuracy predictions as a starting point for uncertainty estimations.</li> <li>Better accuracy results give better uncertainty estimations.</li> <li>Weaker Keras models strongly increase overall accuracy performance.</li> <li>Ensembling statistically calculated sales quantiles with tuned coefficient multipliers for level 12 was the key in the uncertainty competition.</li> <li>Talking about multipliers: if I had used the magic number 0.97, I would have ended up at the top of the leaderboard in the uncertainty track and in 6th place in the accuracy one.</li> <li>I played around with recurrent implementations, but they never reached an appropriate maturity level to be included in the final blend.</li> </ul>
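The row-wise exponential weighted mean used as the Uncertainty-track post-process (`ewm(com=0.04, axis=1)`) can be sketched on dummy data. Note that `axis=1` on `ewm` is deprecated in recent pandas, so this sketch transposes instead:

```python
import pandas as pd

# Dummy 4-day predictions for one series (the real frame holds 30490 rows x 28 days).
preds = pd.DataFrame([[1.0, 2.0, 1.5, 3.0]], index=["item_1"])

# Row-wise EWM with com=0.04 (alpha = 1/1.04 ~ 0.96): each day keeps most of its
# own value, with a small smoothing contribution from the preceding days.
smoothed = preds.T.ewm(com=0.04).mean().T
```

With such a small `com`, the smoothing is deliberately mild: the first day is unchanged and later days shift only slightly toward their predecessors.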
M5 Forecasting - Accuracy
21st Place Solution (Nodalpoints)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>M5 Forecasting - Accuracy Writeup</h1> <h2>Thanks</h2> <p>Ouranos and I would like to thank the organizers for putting out such an exceptional competition on this scale. We also need to mention that our involvement in the M5 competition was both tasked and sponsored by our company (Nodalpoint LTD, Athens, Greece). This gave us almost a month and a half to be active in the competition.</p> <h2>Warm up (Costas)</h2> <p>First I would like to share a personal opinion regarding the hierarchical nature of this competition. I began working on hierarchical stuff a month and a half ago, focusing on the R packages 'forecast' and 'hts' (hierarchical time series) and Professor Hyndman's M4 winning solution repository. I was modelling time series at every level, with level-1 being the most aggregated and level-12 the most disaggregated (item_ids) in the competition. </p> <p>Soon after forecasting the 30490 low-level time series, I had some decent results on levels 10, 11, 12 (better than the best performing LightGBM model), but when aggregated to the top levels they produced worse results. So even if the LightGBM model failed to capture the individual sales count for each item, through daily aggregation the errors cancelled each other out, producing better high-level time series. That realization made me hold back from the classical time series approach and focus on other directions for the last month. 
</p> <p>I am currently experimenting to see how this approach (hts and M4-winning forecasting per time series) would eventually score by itself and whether it could further help our blend.</p> <h2>Solution</h2> <p>We followed the most common strategy: “treating the problem as a regression one”. So, no classic time series models or even a proper recurrent one. One of the key ingredients of our approach was the validation set selection:</p> <ul> <li>Validation set 1: ['d_1914', 'd_1915', …, 'd_1941'] (=public LB data)</li> <li>Validation set 2: ['d_1886', 'd_1887', …, 'd_1913']</li> <li>Validation set 3: ['d_1578', 'd_1579', …, 'd_1605'] (exactly one year before the private LB)</li> </ul> <p>All modelling and blending were based on improving the weighted mean and standard deviation on these three folds. We used 4 models in our final blend:</p> <h3>Model 1 (lgb_nas) Tabular regression using 10 LightGBM models, one per store</h3> <ul> <li><p>Features We used publicly available features (thanks to the exceptional work of Konstantin Yakovlev). <br> The features were divided into the following categories:</p> <ol><li><p>Categorical item_id, dept_id, cat_id</p></li> <li><p>Price related price_max, price_min, price_std, price_mean, price_norm, price_unique, price_momentum, momentum_m, momentum_y</p></li> <li><p>Calendar related event_name_1, event_name_2, event_type_1, event_type_2, snap_CA, snap_TX, snap_WI, tm_d, tm_w, tm_m, tm_y, tm_wm, tm_w_end</p> <ol><li><p>Lag related</p> <ol><li><p>Lag only (15 features): sales_lag_28, sales_lag_29, …, sales_lag_42</p></li> <li><p>Rolling only (8 features): rolling_mean_7, rolling_mean_14, rolling_mean_30, <br> rolling_std_30, rolling_mean_60, rolling_std_60, rolling_mean_180, rolling_std_180 </p></li> <li><p>Lag and roll (12 features): rolling_mean_tmp_i, j for i in [1, 7, 14] and j in [7, 14, 30, 60]. 
These features were kind of tricky because they need recursive evaluation during prediction.</p></li></ol></li> <li>Mean encodings enc_cat_id_mean, enc_cat_id_std, enc_dept_id_mean, enc_dept_id_std, enc_item_id_mean, enc_item_id_std</li></ol></li> <li><p>Model LightGBM model using the following configuration: <code> { <br> 'boosting_type': 'gbdt', 'objective': 'tweedie', 'tweedie_variance_power': 1.1, 'metric': 'rmse', 'subsample': 0.6, 'subsample_freq': 1, 'learning_rate': 0.02, 'num_leaves': 2**11-1, 'min_data_in_leaf': 2**12-1, 'feature_fraction': 0.6, 'max_bin': 100, 'n_estimators': …, <br> 'boost_from_average': False, 'verbose': -1, 'num_threads': 12 } <br> </code></p></li> </ul> <p>We trained 10 models, one for each store, using a variable number of estimators:</p> <p><code> rounds_per_store1={ 'CA_1': 700, 'CA_2': 1100, 'CA_3': 1600, 'CA_4': 1500, 'TX_1': 1000, 'TX_2': 1000, 'TX_3': 1000, 'WI_1': 1600, 'WI_2': 1500, 'WI_3': 1100 } </code></p> <h3>Model 2 (lgb_cos): Tabular regression using 1 LightGBM for all data</h3> <ul> <li><p>Features We used publicly available features, divided into the following categories:</p> <ol><li><p>Categorical item_id, dept_id, cat_id, store_id</p></li> <li><p>Price related price_max, price_min, price_std, price_mean, price_norm, price_unique, price_momentum, momentum_m, momentum_y</p></li> <li><p>Calendar related event_name_1, event_name_2, event_type_1, event_type_2, snap_CA, snap_TX, snap_WI, tm_d, tm_w, tm_m, tm_y, tm_wm, tm_w_end</p></li> <li><p>Lag related</p> <ol><li><p>Lag only (15 features): sales_lag_28, sales_lag_29, …, sales_lag_42. </p></li> <li><p>Rolling only (8 features): rolling_mean_7, rolling_mean_14, rolling_mean_30, rolling_std_30, rolling_mean_60, rolling_std_60, rolling_mean_180, rolling_std_180 </p></li> <li><p>Lag and roll (12 features): rolling_mean_tmp_i, j for i in [1, 7, 14] and j in [7, 14, 30, 60]. 
These features were kind of tricky because they need recursive evaluation during prediction.</p></li></ol></li> <li><p>Mean encodings enc_cat_id_mean, enc_cat_id_std, enc_dept_id_mean, enc_dept_id_std, enc_item_id_mean, enc_item_id_std</p></li></ol></li> </ul> <p>• Model A single LightGBM model using the following configuration: <code> { 'boosting_type': 'gbdt', 'objective': 'tweedie', 'tweedie_variance_power': 1.1, 'metric': 'rmse', 'subsample': 0.5, 'subsample_freq': 1, 'learning_rate': 0.03, 'num_leaves': 2047, 'min_data_in_leaf': 4095, 'feature_fraction': 0.5, 'max_bin': 100, 'n_estimators': 1300, 'boost_from_average': False, 'verbose': -1, 'num_threads': 8 } </code></p> <h3>Model 3 (keras_nas): NN regression using Keras model with embeddings</h3> <p>We created a neural network model using Keras/Tensorflow that uses low-dimensional embeddings for the categorical inputs: * Features: We used the following subset of the publicly available features. All continuous features were scaled to [0, 1] using min/max values from the training set. * Continuous * sell_price, * sell_price_rel_diff, * rolling_mean_28_7, * rolling_mean_28_28, <br> * rolling_median_28_7, * rolling_median_28_28, * logd: np.log1p(sales.d-sales.d.min()) * snap_CA, * snap_TX, * snap_WI, * Categorical * wday, * event_name_1, * event_type_1, * event_name_2, * event_type_2, * dept_id, * store_id, * cat_id, * state_id</p> <h3>Model 4 (fastai_cos): NN regression using the fastai tabular package</h3> <ul> <li><p>Features Same as Model 2 (lgb_cos). Embeddings for item_id were reduced to 10. Used fillna(-1) to fill in non-available entries. 
No scaling was used.</p></li> <li><p>Model</p> <ul><li>Architecture Following the fastai tabular interface, the model was created using <code> tabular_learner(data, layers=[128, 64, 32, 16], ps=[0.05, 0.05, 0.05, 0.05], emb_drop=0.04, emb_szs={'item_id': 10}) </code> The model above was trained for 8 epochs using a custom Tweedie loss function </li> <li>Loss function Custom Pytorch implementation of Tweedie loss, which clipped zeros at 1.0e-6 to avoid NaNs <h3> </h3></li></ul></li> </ul> <p>Final blend For the final blending, we chose a weighted geometric mean based on the average metric on the three validation sets. We noticed that the <strong>keras_nas</strong> NN predictions had an unexpectedly large peak on the last day (day 1969). See for example the plot below: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F281494%2F1d938189adcd75e76bcdcfae6c296ae5%2FScreenshot%202020-07-07%20at%209.47.17%20AM.png?generation=1594104456786778&amp;alt=media" alt=""></p> <p>After confirming this behavior to be dominant in almost all cases, we decided to exclude that model’s prediction for the last day. So our final ensembling looks like: </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F281494%2F0c7aa88490d093ba1805dee8ba0380f4%2FScreenshot%202020-07-07%20at%2010.15.31%20AM.png?generation=1594106672787168&amp;alt=media" alt=""></p> <p>In the table below we present the score of each model and the final ensemble for each validation set. </p> <p>| |lgb_nas |lgb_cos |keras_nas |fastai_cos |ensemble|
| --- | --- | --- | --- | --- | --- |
|weights days 1-27 |3.5 |1.0 |1.0 |0.5 |0.531|
|weight day 28 |3.0 |0.5 |0.0 |1.5 |0.531|
|Val. set 1 |0.474 |0.470 |0.715 |0.687 |0.531|
|Val. set 2 |0.641 |0.671 |0.577 |0.631 |0.519|
|Val. set 3 |0.652 |0.661 |0.746 |0.681 |0.598|
|Average |0.589 |0.602 |0.679 |0.667 |0.549|
|Std |0.08 |0.09 |0.05 |0.02 |0.03|</p> <p>Correlation matrix, flatten()-ing the 30490 x 28 predictions:</p> <p>| Final pred. |lgb_nas |lgb_cos |keras_nas |fastai_cos|
| --- | --- | --- | --- | --- |
| lgb_nas |1.0000 |0.9941 |0.9420 |0.9251|
| lgb_cos |0.9941 |1.0000 |0.944 |0.9281|
| keras_nas |0.942 |0.9441 |1.0000 |0.8952|
| fastai_cos |0.9251 |0.9281 |0.8952 |1.0000|</p> <h2>Summary</h2> <ul> <li>Validation sets are very important. We were lucky enough to select those three because it was relatively fast to calculate predictions for them (it tripled the computation). </li> <li>In time series modelling you need to be extra careful not to leak information from the train to the validation set.</li> <li>Model diversity was really important. Ouranos spent many hours trying to stabilize the keras_nas outcomes. A tedious process with a lot of trial-and-error that resulted in a handful of features that gave great diversity.</li> <li>Our local validation was <strong>0.549</strong> and our final private score was <strong>0.552</strong> without any magic multiplier. We did try out various multipliers, but 1.0 was the best choice. This was by far the most important result of our methodology: we managed to emulate the performance on the private test. We were confident about the final score, but we just didn't know where all the competitors would end up.</li> <li>Talking about multipliers: if we'd used the magic number 0.97, we would have ended up at the top of the leaderboard. </li> <li>We regret not adding classical methods (R packages hts, forecast) to the ensemble. </li> <li>We've also played around with recurrent implementations, but they never reached the level of maturity needed to include them in the final blend.</li> <li>Our best submission was our last one!!</li> </ul>
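The weighted geometric blend from the table above can be sketched as follows. The arrays are dummy predictions; the weights are the "days 1-27" weights from the table:

```python
import numpy as np

def geometric_blend(preds: dict, weights: dict) -> np.ndarray:
    """Weighted geometric mean: prod(p_i ** w_i) ** (1 / sum(w_i))."""
    total = sum(weights.values())
    out = np.ones_like(next(iter(preds.values())), dtype=float)
    for name, p in preds.items():
        out *= p ** weights[name]
    return out ** (1.0 / total)

# Dummy per-model predictions for 3 items (the real shape is 30490 x 28).
preds = {
    "lgb_nas":    np.array([2.0, 1.0, 4.0]),
    "lgb_cos":    np.array([2.2, 1.1, 3.8]),
    "keras_nas":  np.array([1.8, 0.9, 4.2]),
    "fastai_cos": np.array([2.1, 1.0, 4.0]),
}
weights = {"lgb_nas": 3.5, "lgb_cos": 1.0, "keras_nas": 1.0, "fastai_cos": 0.5}
blend = geometric_blend(preds, weights)
```

Compared with an arithmetic mean, the geometric mean is pulled less by a single model's high outlier, which matters for count-like sales predictions.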
Global Wheat Detection
5th Place Solution (EfficientDet)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the host and the competitors.<br> It was an interesting competition for me.</p> <h1>TL;DR</h1> <ol> <li>EfficientDetB3 5Fold -&gt; pseudo labeling of the 10 test images</li> <li>EfficientDetB5 All Training (EMA) -&gt; Prediction MultiScale (1280, 768, 1024 + 4Flip) 0.766</li> <li>EfficientDetB5 All + EfficientDetB4 All + EfficientDetB5 All (With Pseudo) Public 0.7752</li> </ol> <h1>Solution</h1> <h2>1st Phase</h2> <p>EfficientDetB3 for pseudo labeling of the 10 test images.<br> The test images come from the "test source", and I wanted to gain knowledge of that source.</p> <h2>2nd Phase</h2> <p>I used EMA as the ensemble technique in the 2nd phase.</p> <ol> <li>AdamW, 100 epochs, image size 640, batch size 4 (for bn parameter tuning)</li> <li>AdamW, 40 epochs, image size 1024, batch size 1 (bn layers frozen)</li> </ol> <p>Cosine annealing was used.</p> <p>Augmentation</p> <ul> <li>Mixup</li> <li>Mosaic</li> <li>Scale</li> <li>Hue</li> <li>Random Brightness</li> <li>Cutout</li> <li>GridMask</li> </ul> <h2>3rd Phase (Kernel)</h2> <h3>Prediction</h3> <p>EfficientDetB5 All + Predict MultiScale (768, 1024, 1280 * 4Flip) -&gt; Pseudo labeling of all test images. </p> <h3>Pseudo Labeling</h3> <p>Training parameters were as follows:</p> <ul> <li>EfficientDet B5, image size 1024 </li> <li>Epoch 5</li> <li>Use EMA</li> <li>Mixed Precision (AMP)</li> </ul> <h3>Ensemble</h3> <p>EfficientDetB5 All + EfficientDetB4 All + EfficientDetB5 All (With Pseudo) using WBF</p> <h1>Why Shakedown?</h1> <ol> <li>Failed threshold tuning (a lower threshold was better on private). <br> With a threshold of 0.1 the score goes to 0.695, but I did not select that submission.</li> <li>Using heavier augmentation for robustness </li> </ol>
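EMA here refers to keeping an exponential moving average of the model weights during training and using the averaged weights for inference. A minimal framework-agnostic sketch (the decay value is an assumption, not from the write-up):

```python
import numpy as np

class EMA:
    """Keep a shadow copy of parameters updated as an exponential moving average."""
    def __init__(self, params: dict, decay: float = 0.999):
        self.decay = decay
        self.shadow = {k: v.copy() for k, v in params.items()}

    def update(self, params: dict) -> None:
        for k, v in params.items():
            self.shadow[k] = self.decay * self.shadow[k] + (1.0 - self.decay) * v

# Toy usage: parameters drift during "training"; the shadow lags smoothly behind,
# acting like a cheap temporal ensemble of recent checkpoints.
params = {"w": np.zeros(3)}
ema = EMA(params, decay=0.9)
for step in range(100):
    params["w"] = params["w"] + 0.1   # pretend optimizer step
    ema.update(params)
```

In a real detector training loop the same `update` would be called after each optimizer step over the network's state dict, and the shadow weights would be loaded for evaluation.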
COVID19 Global Forecasting (Week 4)
~1st Place Solution LGBM with some adjustments
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Summary</h2> <p>LGBMs with quantile regression were trained on time-series and geo features for short term predictions. Aggressive 1w average smoothing was used for long term predictions. Due to the large number of small locations, the top 30 countries/states had to be adjusted manually.</p> <strong>Notebooks</strong> <p><a href="https://www.kaggle.com/gaborfodor/c19w5-create-submission">https://www.kaggle.com/gaborfodor/c19w5-create-submission</a> <a href="https://www.kaggle.com/gaborfodor/c19w5-check-submission">https://www.kaggle.com/gaborfodor/c19w5-check-submission</a> <a href="https://www.kaggle.com/gaborfodor/c19w5-train-lgbs">https://www.kaggle.com/gaborfodor/c19w5-train-lgbs</a></p> <h2>Feature Extraction</h2> <ul> <li>Population</li> <li>Latitude, Longitude</li> <li>Day of week</li> <li>Share of total cases for each day of week</li> <li>Rolling mean/std for 1w, 2w, 3w</li> <li>Cumulative totals</li> <li>Confirmed - Fatality rate</li> <li>Trend from last 2-3 weeks</li> <li>Normalized features by population</li> <li>Nearby features based on the closest 5-10-20 locations <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F1a011ee06339075039995790bd86b046%2FScreenshot%202020-05-31%20at%2010.32.51.png?generation=1591101401062776&amp;alt=media" alt=""></li> </ul> <p>Features were rescaled and rounded to 1-2 decimals to decrease overfitting.</p> <h3>External data</h3> <p>I started by searching for public US county-level demographic data (age, income, population density, covid19 lockdown info etc.) 
I found a few useful sources, though I did not have time to clean and merge them. The only external data I used was the geo-encoded lat-lon coordinates for each location.</p> <h2>Modeling</h2> <p>For each target/quantile/forecast lag, a separate model was trained with location-based 5-fold CV and early stopping based on pinball loss. Models were only trained to predict the next 1-14 days.</p> <ul> <li>Trained a bunch of LGBMs with random parameters to blend</li> <li>Sample weighting based on location weights and time decay </li> </ul> <h2>Post processing</h2> <ul> <li>Clipped negative predictions at 0 </li> <li>Made sure the 0.05 (0.95) quantile predictions are not higher (lower) than the median</li> <li>Smoothed daily predictions (<code>Y[t] * 0.66 + Y[t-1] * 0.33</code>)</li> <li>For the US country total, used the state-level rollups for the median</li> <li>Manually inspected and adjusted the top 30 countries</li> <li>Flat long-term predictions based on the last predicted weekly average</li> <li>A small daily decay was added to the 0.05 quantile and the median</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F1788c82f1c31a3e3900073134256424a%2FScreenshot%202020-06-02%20at%2014.39.01.png?generation=1591101619319599&amp;alt=media" alt=""></p>
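The first three post-processing bullets can be sketched as follows (dummy arrays; `q05`/`q50`/`q95` are hypothetical names for the three quantile tracks of one location):

```python
import numpy as np

def postprocess(q05, q50, q95):
    # Clip negative predictions at 0.
    q05, q50, q95 = (np.clip(q, 0, None) for q in (q05, q50, q95))
    # The 0.05 (0.95) quantile must not be higher (lower) than the median.
    q05 = np.minimum(q05, q50)
    q95 = np.maximum(q95, q50)

    # Smooth daily predictions: Y[t] * 0.66 + Y[t-1] * 0.33.
    def smooth(y):
        out = y.astype(float).copy()
        out[1:] = y[1:] * 0.66 + y[:-1] * 0.33
        return out

    return smooth(q05), smooth(q50), smooth(q95)

# Two forecast days with a negative lower bound and crossed quantiles.
q05, q50, q95 = postprocess(np.array([-1.0, 5.0]),
                            np.array([2.0, 4.0]),
                            np.array([1.0, 10.0]))
```

Enforcing the quantile ordering matters because crossed quantiles are both implausible and heavily punished by the pinball metric.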
COVID19 Global Forecasting (Week 5)
~1st Place Solution LGBM with some adjustments
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Summary</h2> <p>LGBMs with quantile regression were trained on time-series and geo features for short term predictions. Aggressive 1w average smoothing was used for long term predictions. Due to the large number of small locations, the top 30 countries/states had to be adjusted manually.</p> <strong>Notebooks</strong> <p><a href="https://www.kaggle.com/gaborfodor/c19w5-create-submission">https://www.kaggle.com/gaborfodor/c19w5-create-submission</a> <a href="https://www.kaggle.com/gaborfodor/c19w5-check-submission">https://www.kaggle.com/gaborfodor/c19w5-check-submission</a> <a href="https://www.kaggle.com/gaborfodor/c19w5-train-lgbs">https://www.kaggle.com/gaborfodor/c19w5-train-lgbs</a></p> <h2>Feature Extraction</h2> <ul> <li>Population</li> <li>Latitude, Longitude</li> <li>Day of week</li> <li>Share of total cases for each day of week</li> <li>Rolling mean/std for 1w, 2w, 3w</li> <li>Cumulative totals</li> <li>Confirmed - Fatality rate</li> <li>Trend from last 2-3 weeks</li> <li>Normalized features by population</li> <li>Nearby features based on the closest 5-10-20 locations <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F1a011ee06339075039995790bd86b046%2FScreenshot%202020-05-31%20at%2010.32.51.png?generation=1591101401062776&amp;alt=media" alt=""></li> </ul> <p>Features were rescaled and rounded to 1-2 decimals to decrease overfitting.</p> <h3>External data</h3> <p>I started by searching for public US county-level demographic data (age, income, population density, covid19 lockdown info etc.) 
I found a few useful sources, though I did not have time to clean and merge them. The only external data I used was the geo-encoded lat-lon coordinates for each location.</p> <h2>Modeling</h2> <p>For each target/quantile/forecast lag, a separate model was trained with location-based 5-fold CV and early stopping based on pinball loss. Models were only trained to predict the next 1-14 days.</p> <ul> <li>Trained a bunch of LGBMs with random parameters to blend</li> <li>Sample weighting based on location weights and time decay </li> </ul> <h2>Post processing</h2> <ul> <li>Clipped negative predictions at 0 </li> <li>Made sure the 0.05 (0.95) quantile predictions are not higher (lower) than the median</li> <li>Smoothed daily predictions (<code>Y[t] * 0.66 + Y[t-1] * 0.33</code>)</li> <li>For the US country total, used the state-level rollups for the median</li> <li>Manually inspected and adjusted the top 30 countries</li> <li>Flat long-term predictions based on the last predicted weekly average</li> <li>A small daily decay was added to the 0.05 quantile and the median</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F18102%2F1788c82f1c31a3e3900073134256424a%2FScreenshot%202020-06-02%20at%2014.39.01.png?generation=1591101619319599&amp;alt=media" alt=""></p>
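Early stopping in the modeling step is based on pinball loss; a minimal sketch of that metric for one quantile level:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Mean pinball (quantile) loss for a quantile level q in (0, 1)."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

# At q=0.95, under-prediction costs 0.95 per unit but over-prediction only 0.05,
# which pushes the 0.95-quantile model to predict on the high side.
loss = pinball_loss(np.array([10.0]), np.array([8.0]), q=0.95)
```

At q=0.5 the loss reduces to half the mean absolute error, so the median model behaves like an MAE-trained point forecaster.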
COVID19 Global Forecasting (Week 4)
~5th Place Simple Linear QuantReg Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The task here is a time series forecast to predict Covid19 daily numbers of Cases and Fatalities for countries and also for some US regions. But we are also challenged to predict the 5% and 95% quantiles of those numbers, and the metric chosen for that is the Pinball loss, which isn't an easy task to optimize. The Pinball loss strongly penalizes forecasts where the actual value falls outside the predicted 5-95 quantile range, so models need to take care when predicting the lower and upper limits.</p> <p>For this challenge I built a simple linear model using the <a href="https://www.statsmodels.org/devel/generated/statsmodels.regression.quantile_regression.QuantReg.html">QuantReg</a> regressor from the statsmodels lib. QuantReg proved very good at estimating the quantile forecasts out of the box. </p> <p>To train the model I used data from 2020-04-01 to 2020-04-26 and to validate from 2020-04-27 to 2020-05-10. Note that I used only 2 weeks for validation while the Private LB will be calculated over 30 days, so I expect some drift in my forecasts for the last 15 days of private data.</p> <p>Two independent models were trained, one for daily Cases and one for Fatalities. Both models use similar features built from lags and rolling window statistics. Given the quantile band forecasts, it is logical that a good feature could be the rolling window of the standard deviation. I added rolling windows of std() with window sizes 7, 14, 21 and 28, but due to a typo in my code only windows 7 and 28 are being used in the final submission. 
I tried other kinds of rolling window functions, but everything else decreased my validation scores. All my predictions in the validation period and in the Private dataset are made recursively (the prediction for day t becomes the lag(1) of day t+1).</p> <p>Also, to avoid issues with the 5 and 95 quantiles close to the end of the 30-day period, I added a linear margin every day in order to always increase the 95 quantile and decrease the 5 quantile forecast.</p> <p>The validation score for daily Cases is 0.2619 and for Fatalities 0.2341, ~0.248 on average for the two-week validation period. Right now, about 15 days of Private data are already scored and my Pinball score is 0.2301, pretty close to my calculated scores. </p> <p>Here is an example of my forecasts for US Cases and Fatalities: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F54836%2F9f43bc1d7f00db208c3af8d362aafe3f%2FScreenshot%20from%202020-05-29%2015-50-52.png?generation=1590778310937696&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F54836%2F4accb58cccc80b5beb662b047397a22e%2FScreenshot%20from%202020-05-29%2015-31-44.png?generation=1590777149092734&amp;alt=media" alt=""></p> <p>Link to my notebook <a href="https://www.kaggle.com/titericz/test-1?scriptVersionId=33803268">here</a>.</p>
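The recursive scheme above ("the prediction for day t becomes the lag(1) of day t+1") can be sketched as follows; `predict_one` stands in for the fitted QuantReg model and is a toy assumption here:

```python
import numpy as np

def recursive_forecast(history, predict_one, horizon):
    """Roll the forecast forward: each prediction is appended to the running
    series and becomes the lag(1) feature of the next day."""
    series = list(history)
    preds = []
    for _ in range(horizon):
        y_hat = predict_one(series)   # features built from the running series
        preds.append(y_hat)
        series.append(y_hat)          # pred for day t -> lag(1) of day t+1
    return np.array(preds)

# Stand-in model: predicts 1.01x the last observed value.
predict_one = lambda s: s[-1] * 1.01
forecast = recursive_forecast([100.0, 102.0, 103.0], predict_one, horizon=30)
```

The trade-off of the recursive scheme is that errors compound over the horizon, which is one reason the write-up expects drift in the last 15 days of private data.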
COVID19 Global Forecasting (Week 5)
~5th Place Simple Linear QuantReg Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The task here is a time series forecast to predict Covid19 daily numbers of Cases and Fatalities for countries and also for some US regions. But we are also challenged to predict the 5% and 95% quantiles of those numbers, and the metric chosen for that is the Pinball loss, which isn't an easy task to optimize. The Pinball loss strongly penalizes forecasts where the actual value falls outside the predicted 5-95 quantile range, so models need to take care when predicting the lower and upper limits.</p> <p>For this challenge I built a simple linear model using the <a href="https://www.statsmodels.org/devel/generated/statsmodels.regression.quantile_regression.QuantReg.html">QuantReg</a> regressor from the statsmodels lib. QuantReg proved very good at estimating the quantile forecasts out of the box. </p> <p>To train the model I used data from 2020-04-01 to 2020-04-26 and to validate from 2020-04-27 to 2020-05-10. Note that I used only 2 weeks for validation while the Private LB will be calculated over 30 days, so I expect some drift in my forecasts for the last 15 days of private data.</p> <p>Two independent models were trained, one for daily Cases and one for Fatalities. Both models use similar features built from lags and rolling window statistics. Given the quantile band forecasts, it is logical that a good feature could be the rolling window of the standard deviation. I added rolling windows of std() with window sizes 7, 14, 21 and 28, but due to a typo in my code only windows 7 and 28 are being used in the final submission. 
I tried other kinds of rolling-window functions, but everything else decreased my validation scores. All my predictions in the validation period and in the Private dataset are made recursively (the prediction for day t becomes the lag(1) of day t+1).</p> <p>Also, to avoid issues with the 5 and 95 quantiles close to the end of the 30-day period, I added a linear margin every day in order to always increase the 95-quantile and decrease the 5-quantile forecast.</p> <p>The validation score for daily Cases is 0.2619 and for Fatalities 0.2341, ~0.248 on average for the two-week validation period. Right now, about 15 days of Private data are already scored and my Pinball score is 0.2301, pretty close to my calculated scores. </p> <p>Here is an example of my forecasts for US Cases and Fatalities: <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F54836%2F9f43bc1d7f00db208c3af8d362aafe3f%2FScreenshot%20from%202020-05-29%2015-50-52.png?generation=1590778310937696&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F54836%2F4accb58cccc80b5beb662b047397a22e%2FScreenshot%20from%202020-05-29%2015-31-44.png?generation=1590777149092734&amp;alt=media" alt=""></p> <p>Link to my notebook <a href="https://www.kaggle.com/titericz/test-1?scriptVersionId=33803268">here</a>.</p>
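The Pinball (quantile) loss that drives all of this is short enough to sketch. A pure-Python version (the function name is mine, not from the notebook) showing the asymmetry for the 0.95 quantile:

```python
def pinball_loss(y_true, y_pred, q):
    """Mean pinball loss for quantile q: max(q*d, (q-1)*d) with d = y - yhat."""
    losses = []
    for y, yhat in zip(y_true, y_pred):
        d = y - yhat
        losses.append(max(q * d, (q - 1) * d))
    return sum(losses) / len(losses)
```

Under-predicting the 0.95 quantile by 2 costs 1.9, while over-predicting by the same amount costs only 0.1 — which is why forecasts that fall outside the band are punished so hard.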
COVID19 Global Forecasting (Week 4)
W5 LB scorecard & analysis update 0611
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Please upvote if you think this is useful to follow, so I know whether it's worth continuing the updates rather than creating lots of spam :) And for those of you who click downvote: this is not a state-of-the-art scorecard, please contribute a new scorecard, you can probably do better than me at this, I'm just making a good try :)</p> <p>An idea behind the scorecard is also to get more use out of the analysis and see some patterns that help create better models and predictions for those in need, e.g. identify models that are better in the short, medium and long term, then use and combine them accordingly.<br> Explanation of the columns:<br> Score diff - the % difference between 2 LB scores (1d = 1 day difference)<br> Dist. to the top - distance to the leader.<br> Models - which techniques (LGB, math, Fastai etc.) are used in the top-20 models; can be interesting to have and follow. Because a team has two submissions, I just add what I see and complement the team's total as I go.<br> Diff from last day - the change in the distance to the leader since yesterday: approaching the leader or losing momentum.<br> KPI "1d(day) best mover…" - Total move (score difference) the group made against the leader and the individual contribution. Track the individual movement within the group against the leader, to see performance in that timeline regardless of the group's speed as a whole.<br> Prediction Top20, 1, 3 &amp; 5 day ahead predictions, calculated as:<br> TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dscore) / 5)) / 2) * PredDayAhead</p> <p>One can calculate this comparison in many different ways; this was a quick way to do it.</p> <p>Also check the notebooks below for more analysis.<br> <a href="https://www.kaggle.com/gaborfodor/w5-top-submissions" target="_blank">https://www.kaggle.com/gaborfodor/w5-top-submissions</a> <br> <a href="https://www.kaggle.com/david1013/covid-19-week-5-daily-predictions-top-teams/" target="_blank">https://www.kaggle.com/david1013/covid-19-week-5-daily-predictions-top-teams/</a></p>
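The day-ahead prediction above averages the 1-day score delta with the mean daily delta over the last 5 days, then extrapolates it linearly. A small sketch (function and argument names are mine):

```python
def predict_score(today, yesterday, prev5d, days_ahead):
    """Extrapolate a LB score: average of the 1-day delta and the
    5-day mean daily delta, applied linearly for days_ahead days."""
    daily_trend = ((today - yesterday) + (today - prev5d) / 5) / 2
    return today + daily_trend * days_ahead
```

With a score falling 0.01 per day both short- and medium-term, the 1-day-ahead prediction simply continues that slope.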
COVID19 Global Forecasting (Week 5)
W5 LB scorecard & analysis update 0611
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Please upvote if you think this is useful to follow, so I know whether it's worth continuing the updates rather than creating lots of spam :) And for those of you who click downvote: this is not a state-of-the-art scorecard, please contribute a new scorecard, you can probably do better than me at this, I'm just making a good try :)</p> <p>An idea behind the scorecard is also to get more use out of the analysis and see some patterns that help create better models and predictions for those in need, e.g. identify models that are better in the short, medium and long term, then use and combine them accordingly.<br> Explanation of the columns:<br> Score diff - the % difference between 2 LB scores (1d = 1 day difference)<br> Dist. to the top - distance to the leader.<br> Models - which techniques (LGB, math, Fastai etc.) are used in the top-20 models; can be interesting to have and follow. Because a team has two submissions, I just add what I see and complement the team's total as I go.<br> Diff from last day - the change in the distance to the leader since yesterday: approaching the leader or losing momentum.<br> KPI "1d(day) best mover…" - Total move (score difference) the group made against the leader and the individual contribution. Track the individual movement within the group against the leader, to see performance in that timeline regardless of the group's speed as a whole.<br> Prediction Top20, 1, 3 &amp; 5 day ahead predictions, calculated as:<br> TodayScore + (((TodayScore - YesterdayScore) + ((TodayScore - prev5dscore) / 5)) / 2) * PredDayAhead</p> <p>One can calculate this comparison in many different ways; this was a quick way to do it.</p> <p>Also check the notebooks below for more analysis.<br> <a href="https://www.kaggle.com/gaborfodor/w5-top-submissions" target="_blank">https://www.kaggle.com/gaborfodor/w5-top-submissions</a> <br> <a href="https://www.kaggle.com/david1013/covid-19-week-5-daily-predictions-top-teams/" target="_blank">https://www.kaggle.com/david1013/covid-19-week-5-daily-predictions-top-teams/</a></p>
M5 Forecasting - Accuracy
123rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I will share my solution in <a href="https://www.kaggle.com/xxbxyae/123rd-place-solution">this notebook</a>. <br> (latest version is just running...)</p> <ul> <li>try: Basic FE (lag/diff/roll, calendar feats)</li> <li>model: single LightGBM <ul><li>objective: 'poisson' or 'tweedie' (poisson was better.)</li> <li>hyperparameter tuning with Optuna (not selected as final sub)</li></ul></li> <li>CV: Custom time series split from a public kernel.</li> <li>not try: weight tuning, FE with marketing knowledge. <ul><li>We had to select only 1 submission as final, so I selected the "robust" solution.</li></ul></li> <li>result <ul><li>public : 0.61444</li> <li>private : 0.60825 (123rd place)</li></ul></li> </ul> <p>IMHO, the real question was "how to reduce uncertainty" in this task. <br> (I know that this competition's name is "Accuracy"...🤔) </p> <p>Congratulations to the winners! And thanks to the organizers and all participants! 🐱 </p>
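The custom time-series split mentioned above (taken from a public kernel) is essentially walk-forward validation. A minimal sketch under my own assumptions about the fold layout, using M5's 28-day horizon:

```python
def time_series_splits(n_days, n_folds, horizon=28):
    """Walk-forward CV: each fold validates on the next `horizon` days
    after a progressively longer training window."""
    splits = []
    for k in range(n_folds, 0, -1):
        valid_end = n_days - (k - 1) * horizon
        valid_start = valid_end - horizon
        splits.append((list(range(valid_start)), list(range(valid_start, valid_end))))
    return splits
```

The last fold validates on the final 28 days, mimicking the public/private evaluation windows; earlier folds slide the window back in time so no fold ever trains on the future.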
Global Wheat Detection
[42nd place solution]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all, especially to the prize winners and gold medalists, and thanks to the organizers for this interesting competition.<br> So here are some essential points of our solution (I will update with more detail later)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2545270%2F4c4ce442608f2f655f7182babf7a7bca%2FUntitled%20Diagram%20(2).png?generation=1596588070622865&amp;alt=media" alt=""><br> <strong>1. Validation strategy</strong><br> I use a fixed validation set from the beginning of the competition (663 images). The validation data is from usask_1 and a part of the other sources.<br> <strong>2. Augmentation</strong></p> <p>2.1. My custom augmentation: <br> - crop and pad: I realized that the boxes at the border after cropping are not good, so I remove all the boxes at the border and remove the image content inside those boxes too.<br> - crop and resize<br> - resize and pad<br> - <a href="https://www.cs.tau.ac.il/~turkel/imagepapers/ColorTransfer.pdf" target="_blank">Color Transfer between Images</a><br> - mixup: Instead of mixing up 2 images from the training set, I use 1 training image and 1 image without wheat heads. <br> - Rotation: randomly rotate images and use pseudo labels as ground truth.</p> <p>2.2. Albumentation:<br> - A.HorizontalFlip(p=0.5),<br> - A.VerticalFlip(p=0.5),<br> - A.RandomRotate90(p=0.5), <br> I don't use mosaic augmentation because it does not cooperate with my custom augmentation.<br> <strong>3. SPIKE dataset:</strong> I use WBF on the pseudo labels and the provided ground truth labels.<br> <strong>4. 
Results</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2545270%2F9312433df689b4ebe397f0674b067b9b%2FUntitled.png?generation=1596632145633431&amp;alt=media" alt=""></p> <p>P/s: Is there a chance for us to move into gold zone after verifying solution? </p>
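The crop-and-pad trick from 2.1 — drop boxes cut by the crop border and blank out their pixels — might look like this numpy sketch (the function and coordinate conventions are my own guesses, not the author's code):

```python
import numpy as np

def crop_and_clean(img, boxes, x0, y0, size):
    """Crop a size x size patch at (x0, y0). Boxes ([x1, y1, x2, y2] in
    original coords) cut by the crop border are dropped and their pixels
    zeroed, so no partially visible wheat heads remain unlabeled."""
    patch = img[y0:y0 + size, x0:x0 + size].copy()
    kept = []
    for x1, y1, x2, y2 in boxes:
        bx1, by1, bx2, by2 = x1 - x0, y1 - y0, x2 - x0, y2 - y0
        if 0 <= bx1 and 0 <= by1 and bx2 <= size and by2 <= size:
            kept.append([bx1, by1, bx2, by2])  # fully inside: keep
        elif bx2 > 0 and by2 > 0 and bx1 < size and by1 < size:
            # cut by the border: erase its image content
            patch[max(by1, 0):min(by2, size), max(bx1, 0):min(bx2, size)] = 0
    return patch, kept
```

Erasing the pixels (rather than just dropping the label) matters because otherwise the model would be trained to treat visible wheat heads as background.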
COVID19 Global Forecasting (Week 4)
4th Place Neural Network Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I think the models we build are too late to be useful in this pandemic. They can hopefully be reused in another pandemic (though hopefully that never happens). Therefore I wanted to have a re-usable Machine Learning solution. So far, it performs okay but it is early to talk about it. Anyway, I wanted to explain it before I forget the details. I have 2 notebooks, one for training and one for inference: <a href="https://www.kaggle.com/aerdem4/covid-19-w5-training">https://www.kaggle.com/aerdem4/covid-19-w5-training</a> <a href="https://www.kaggle.com/aerdem4/covid-19-w5-pipeline">https://www.kaggle.com/aerdem4/covid-19-w5-pipeline</a></p> <p>It is a CNN model trained on the last 2 weeks of data. In the previous covid-19 competitions, my NN had diverged over time. I have learned from my mistakes and didn't want to do recursive predictions for this one. I have trained a different model for each n-days-ahead horizon. Here is the model architecture:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F471945%2Fa274824b8ac4631b021d4bcbbcc7aac2%2Fmodel.PNG?generation=1590745456988152&amp;alt=media" alt=""></p> <p>Some tricks that worked:</p> <ul> <li>Cumulative sum over relu as the last layer guarantees that I have non-negative predictions and quantiles are always in order.</li> <li>Training directly with pinball loss. In initial epochs, I have trained for the 0.15, 0.50 and 0.85 quantiles. Then late epochs are trained with the actual quantiles (0.05, 0.50, 0.95). Without this, the model starts being cautious very early and gets stuck in a local minimum.</li> <li>Bagging 5 NN models for each day to reduce randomness</li> <li>Smoothing predictions for each day n: <code>y(n) = 0.2*y(n-1) + 0.6*y(n) + 0.2*y(n+1)</code></li> <li>Rounding predictions to integers. This probably helps a little with zero cases.</li> </ul>
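Two of the tricks above are easy to demonstrate: the ReLU-then-cumsum output head that keeps the three quantiles ordered by construction, and the 0.2/0.6/0.2 smoothing (the array layout is my assumption):

```python
import numpy as np

def ordered_quantiles(raw):
    """Last-layer trick: ReLU makes the increments non-negative, and the
    cumulative sum turns them into quantiles that can never cross
    (q05 <= q50 <= q95 by construction)."""
    return np.cumsum(np.maximum(raw, 0.0), axis=-1)

def smooth(preds):
    """y(n) = 0.2*y(n-1) + 0.6*y(n) + 0.2*y(n+1); endpoints unchanged."""
    out = preds.astype(float).copy()
    out[1:-1] = 0.2 * preds[:-2] + 0.6 * preds[1:-1] + 0.2 * preds[2:]
    return out
```

Because ordering is enforced architecturally, no penalty term or post-hoc sorting is needed to keep the pinball loss from seeing crossed quantiles.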
COVID19 Global Forecasting (Week 5)
4th Place Neural Network Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I think the models we build are too late to be useful in this pandemic. They can hopefully be reused in another pandemic (though hopefully that never happens). Therefore I wanted to have a re-usable Machine Learning solution. So far, it performs okay but it is early to talk about it. Anyway, I wanted to explain it before I forget the details. I have 2 notebooks, one for training and one for inference: <a href="https://www.kaggle.com/aerdem4/covid-19-w5-training">https://www.kaggle.com/aerdem4/covid-19-w5-training</a> <a href="https://www.kaggle.com/aerdem4/covid-19-w5-pipeline">https://www.kaggle.com/aerdem4/covid-19-w5-pipeline</a></p> <p>It is a CNN model trained on the last 2 weeks of data. In the previous covid-19 competitions, my NN had diverged over time. I have learned from my mistakes and didn't want to do recursive predictions for this one. I have trained a different model for each n-days-ahead horizon. Here is the model architecture:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F471945%2Fa274824b8ac4631b021d4bcbbcc7aac2%2Fmodel.PNG?generation=1590745456988152&amp;alt=media" alt=""></p> <p>Some tricks that worked:</p> <ul> <li>Cumulative sum over relu as the last layer guarantees that I have non-negative predictions and quantiles are always in order.</li> <li>Training directly with pinball loss. In initial epochs, I have trained for the 0.15, 0.50 and 0.85 quantiles. Then late epochs are trained with the actual quantiles (0.05, 0.50, 0.95). Without this, the model starts being cautious very early and gets stuck in a local minimum.</li> <li>Bagging 5 NN models for each day to reduce randomness</li> <li>Smoothing predictions for each day n: <code>y(n) = 0.2*y(n-1) + 0.6*y(n) + 0.2*y(n+1)</code></li> <li>Rounding predictions to integers. This probably helps a little with zero cases.</li> </ul>
Global Wheat Detection
1st place solution [MIT-Compliant]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for this interesting competition!<br> Congratulations to all, this is my first solo gold medal. As there are lots of great public notebooks posted, I will keep my solution simple and short.<br> Special thanks to <a href="https://www.kaggle.com/rwightman" target="_blank">@rwightman</a> for the amazing repo (<a href="https://github.com/rwightman/efficientdet-pytorch" target="_blank">https://github.com/rwightman/efficientdet-pytorch</a>)</p> <p><strong>Summary</strong></p> <ul> <li>Custom mosaic data augmentation</li> <li>MixUp</li> <li>Heavy augmentation</li> <li>Data cleaning</li> <li>EfficientDet</li> <li>Faster RCNN FPN</li> <li>Ensemble multi-scale model: Weighted-Boxes-Fusion, special thanks to <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> </li> <li>Test time augmentation (HorizontalFlip, VerticalFlip, Rotate90)</li> <li>Pseudo labeling</li> </ul> <p><strong>Data Processing</strong></p> <ul> <li><p>Custom mosaic augmentation:<br> Mosaic is a data augmentation method that combines 4 training images into one for training (instead of 2 in CutMix). 
Instead of randomly cropping a piece of the image, I create a custom mosaic augmentation as below to keep the border information:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1393170%2F6c0205c6ac7299bc0ea644811521e4c3%2Fmosaic.png?generation=1596590064167930&amp;alt=media" alt=""></p></li> <li><p>MixUp: <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1393170%2Fda4e1aaafcc421760391e099b4ffca11%2Fmixup.png?generation=1596590747904905&amp;alt=media" alt=""></p></li> <li><p>Heavy augmentation:<br> RandomCrop, HorizontalFlip, VerticalFlip, ToGray, IAAAdditiveGaussianNoise, GaussNoise, MotionBlur, MedianBlur, Blur, CLAHE, Sharpen, Emboss, RandomBrightnessContrast, HueSaturationValue</p></li> </ul> <p>Examples after applying mosaic, mixup and augmentation:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1393170%2Fe226aefe53eab2e2a87a304cd2a84c5e%2Fmixup.png?generation=1596590604527225&amp;alt=media" alt=""></p> <ul> <li>External data<br> I use 2 external datasets:<br> *wheat spikes (<a href="https://www.kaggle.com/c/global-wheat-detection/discussion/164346)" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/164346)</a>, for the license please refer to (<a href="https://www.kaggle.com/c/global-wheat-detection/discussion/164346#928613" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/164346#928613</a>)<br> *wheat 2017 (<a href="https://plantimages.nottingham.ac.uk/" target="_blank">https://plantimages.nottingham.ac.uk/</a>) at post (<a href="https://www.kaggle.com/c/global-wheat-detection/discussion/148561#863159)" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/148561#863159)</a>. <br> I create annotations (bounding boxes) for all images and crop them to 1024x1024. 
I contacted the author to ensure the rights, so there is no license issue.</li> <li>Data Cleaning<br> *Deleted tiny bounding boxes (width or height &lt; 10px)<br> *Fixed too-big bounding boxes</li> </ul> <p><strong>Model</strong></p> <ul> <li>5 folds, stratified-kfold, split by source (usask_1, arvalis_1, arvalis_2…)</li> <li>Optimizer: Adam with initial LR 5e-4 for EfficientDet and SGD with initial LR 5e-3 for Faster RCNN FPN</li> <li>LR scheduler: cosine-annealing</li> <li>Mixed precision training with nvidia-apex</li> </ul> <p><strong>Performance</strong><br> Valid AP/Public LB AP</p> <ul> <li>EfficientDet-d7 image-size 768: Fold0 0.709/0.746, Fold1 0.716/0.750, Fold2 0.707/0.749, Fold3 0.716/0.748, Fold4 0.713/0.740</li> <li>EfficientDet-d7 image-size 1024: Fold1,3</li> <li>EfficientDet-d5 image-size 512: Fold4</li> <li>Faster RCNN FPN-resnet152 image-size 1024: Fold1</li> <li>Ensembling the 9 models above using WBF achieves <strong>0.7629 Public LB/0.7096 Private LB</strong> (old testset)</li> </ul> <p><strong>Pseudo labeling</strong></p> <ul> <li>Base: EfficientDet-d6 image-size 640 Fold1 0.716 Valid AP</li> <li>Round1: Train EfficientDet-d6 for 10 epochs with trainset + hidden testset (output of ensembling), loading weights from the base checkpoint<br> Result: [old testset] 0.7719 Public LB/0.7175 Private LB and [new testset] 0.7633 Public LB/0.6787 Private LB</li> <li>Round2: Continue training EfficientDet-d6 for 6 epochs with trainset + hidden testset (output of pseudo labeling round1), loading weights from the pseudo labeling round1 checkpoint<br> Result: [old testset] 0.7754 Public LB/0.7205 Private LB and [new testset] 0.7656 Public LB/0.6897 Private LB</li> </ul> <p>Github source code at <a href="https://github.com/dungnb1333/global-wheat-dection-2020" target="_blank">https://github.com/dungnb1333/global-wheat-dection-2020</a></p>
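A plain 4-image mosaic (the generic method, not the author's exact custom variant that preserves border information) can be sketched in a few lines; box coordinates get the same per-quadrant (dy, dx) offsets:

```python
import numpy as np

def mosaic4(imgs, s):
    """Stitch four s x s images into one 2s x 2s mosaic
    (top-left, top-right, bottom-left, bottom-right)."""
    out = np.zeros((2 * s, 2 * s, 3), dtype=imgs[0].dtype)
    offsets = [(0, 0), (0, s), (s, 0), (s, s)]  # (dy, dx) per quadrant
    for img, (dy, dx) in zip(imgs, offsets):
        out[dy:dy + s, dx:dx + s] = img
    return out, offsets
```

Each source image's boxes are shifted by its quadrant's (dy, dx) before training, which is why mosaic effectively quadruples the box density per training sample.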
Global Wheat Detection
[34th Place Solution using single EfficientDet + Pseudo Labeling]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congratulations to all, especially to the prize winners and gold medalists, and thanks to the organizers for this interesting competition!</p> <p>So here is our solution to this competition (very, very basic):</p> <h2>Training</h2> <ul> <li>Single EfficientDetD5 (Thanks to <a href="https://www.kaggle.com/shonenkov" target="_blank">@shonenkov</a> for his great notebook)</li> <li>40 epochs using the full dataset (No CV)</li> <li>Image size: 512 * 512</li> <li>Augmentations: Random Crop, Random Cutout, Hue Saturation Value, Brightness, Random Brightness Contrast, Random Horizontal and Vertical flips and random Cutmix.</li> <li>Training notebook: <a href="https://www.kaggle.com/kaushal2896/gwd-efficientdet-training-on-entire-dataset" target="_blank">https://www.kaggle.com/kaushal2896/gwd-efficientdet-training-on-entire-dataset</a></li> </ul> <h2>Pseudo Labeling</h2> <ul> <li>Pseudo-labeled with the above model and trained for 10 more epochs (Thanks to <a href="https://www.kaggle.com/nvnnghia" target="_blank">@nvnnghia</a> for his work)</li> <li>Used the same augmentations even while pseudo labeling: Random Crop, Random Cutout, Hue Saturation Value, Brightness, Random Brightness Contrast, Random Horizontal and Vertical flips and random Cutmix.</li> <li>Same image size: 512 * 512.</li> <li>Used both TTA and WBF for final predictions.</li> <li>Pseudo Labeling notebook: <a href="https://www.kaggle.com/kaushal2896/efficientdet-pseudo-labeling-wbf-tta" target="_blank">https://www.kaggle.com/kaushal2896/efficientdet-pseudo-labeling-wbf-tta</a></li> </ul> <h2>What didn't work for us?</h2> <ul> <li>Custom 
augmentations like mixup.</li> <li>Training on larger image sizes without Random Crop augmentation.</li> <li>Same solution with EfficientDetD6.</li> <li>Training for a higher number of epochs.</li> <li>FasterRCNN with ResNet50 and ResNet101.</li> </ul> <p>Here are the different approaches I've tried during this competition: <a href="https://github.com/Kaushal28/Global-Wheat-Detection" target="_blank">https://github.com/Kaushal28/Global-Wheat-Detection</a></p>
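Weighted-Boxes-Fusion shows up in nearly every solution in this thread. This toy numpy sketch captures the core idea — cluster overlapping boxes, then replace each cluster with its score-weighted average — and is not the full ensemble-boxes implementation:

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def simple_wbf(boxes, scores, iou_thr=0.55):
    """Greedy WBF: assign boxes (highest score first) to the first
    cluster whose top box overlaps enough, then fuse every cluster
    into one score-weighted average box."""
    clusters = []  # each entry: ([boxes], [scores]), highest score first
    for i in np.argsort(scores)[::-1]:
        for cb, cs in clusters:
            if iou(cb[0], boxes[i]) > iou_thr:
                cb.append(boxes[i]); cs.append(scores[i])
                break
        else:
            clusters.append(([boxes[i]], [scores[i]]))
    fused = []
    for cb, cs in clusters:
        w = np.array(cs)[:, None]
        fused.append(((np.array(cb) * w).sum(0) / w.sum(), float(np.mean(cs))))
    return fused
```

Unlike NMS, which keeps one box and discards the rest, fusion averages the cluster so every model's localization contributes to the final box.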
COVID19 Global Forecasting (Week 4)
My Solution for week 5
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everybody, this week I did not have time to adapt my previous approach, so I simply used a public baseline and then added some code for quantiles. Basically I used the Poisson distribution to estimate the 0.05 and 0.95 quantiles.</p>
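The Poisson trick works because the distribution's only parameter is its mean: given a point forecast lam for a day's count, the 0.05/0.95 quantiles come from inverting the CDF. A stdlib-only sketch (scipy.stats.poisson.ppf would do the same):

```python
import math

def poisson_quantile(lam, q):
    """Smallest k with P(X <= k) >= q for X ~ Poisson(lam),
    accumulating the pmf term by term."""
    cdf, k, pmf = 0.0, 0, math.exp(-lam)
    while cdf + pmf < q:
        cdf += pmf
        k += 1
        pmf *= lam / k  # P(k) = P(k-1) * lam / k
    return k
```

For a point forecast of 10 daily cases this yields a [5, 15] band, which tracks how Poisson uncertainty (standard deviation sqrt(lam)) scales with the forecast itself.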
COVID19 Global Forecasting (Week 5)
My Solution for week 5
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everybody, this week I did not have time to adapt my previous approach, so I simply used a public baseline and then added some code for quantiles. Basically I used the Poisson distribution to estimate the 0.05 and 0.95 quantiles.</p>
Global Wheat Detection
[30th place solution] Weird TTA trick? Detect by quadrant provided a great boost!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all! It's been a real pleasure working alongside you during this competition. Before YOLOv5 came into the mix, I was sitting in 2nd place on the public leaderboard thanks to a neat little trick that gave me a bigger boost than expected.</p> <p>I was working off an EfficientDetB5 model which gave around 0.72 on the public leaderboard without any TTA. With my TTA trick I got a massive boost to 0.7535. I'll explain it below, and would love to get your opinion on why it might have been so useful. Also, here is a <a href="https://www.kaggle.com/alexandersoare/gwd-detect-by-quadrant-tta-trick" target="_blank">notebook with a working example</a>.</p> <p>The motivation was that I didn't have time to train at 1024x1024, but I still wanted to work with full resolution while training at 640x640. So the idea was:</p> <p>1) Break the test images up into 640x640 quadrants (so there would be some overlap between them).<br> 2) Do inference on all 4 quadrants and stitch the results together.<br> 3) Do inference on the full image resized to 640x640 (to help fill gaps - more on this later).<br> 4) Ensemble with WBF.</p> <p>Here's a diagram explaining the quadrants + full image resized.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4256010%2Ffd3e335bf416544a894ff550af5af5eb%2Fdiag1.png?generation=1596729863641670&amp;alt=media" alt=""></p> <p><strong>About the stitching</strong></p> <p>I didn't want to keep any boxes that got too close to the edge of the quadrants because they could cut through the middle of a wheat head. 
So I discarded them.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F4256010%2Fdef474abb689561acb28b252830e14ca%2Fdiag2.png?generation=1596730533838943&amp;alt=media" alt=""></p> <p>And now you can see why I also did inference on the rescaled full image. I wanted to capture any boxes that may have been discarded with the method above.</p> <p><strong>About the training</strong></p> <p>I found that with this method, mosaic augmentation during training didn't help my score. I stuck with doing random resized crops of the training images. The sizes of the crops were uniformly distributed in [640, 1024]. That way I had the benefit of training near full resolution a lot of the time.</p> <p><strong>Question to Kagglers</strong></p> <p>I'm actually not familiar with the concept of multi-scale inference. I understand it's a TTA technique. What exactly is happening in it? Is it similar to what I did?</p>
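The quadrant TTA boils down to four overlapping crops plus a coordinate shift back to full-image space before WBF. A tiny sketch with my own helper names:

```python
def quadrant_offsets(full=1024, crop=640):
    """Top-left (dy, dx) corners of four overlapping crop x crop
    quadrants covering a full x full image (256px overlap here)."""
    m = full - crop
    return [(0, 0), (0, m), (m, 0), (m, m)]

def to_full_coords(box, offset):
    """Shift a [x1, y1, x2, y2] box from quadrant coordinates back to
    full-image coordinates before WBF ensembling."""
    dy, dx = offset
    return [box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy]
```

The 256px overlap means every wheat head near an internal seam is fully contained in at least one quadrant, which is what makes discarding border-touching boxes safe.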
COVID19 Global Forecasting (Week 4)
9th place solution (a lot of LGBMs blended)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 4) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My approach was quite straightforward: I have trained a lot of very simple LGBM models, blended their predictions, and applied an additional ~linear transformation to optimize the final metric.</p> <p><a href="https://www.kaggle.com/ilialar/w5-solution-2">Notebook with my final solution</a></p> <p><strong>Overview</strong></p> <ul> <li><p>Each model takes as input numbers of new cases and fatalities during the last N days divided by the maximum value of cases/fatalities for this region up to the last day in input and predicts the number of cases or fatalities on Kth day from the last input day (again divided by max value up to date).</p></li> <li><p>Each model is trained using all available data: all regions and all possible N+K days frames. But different days had different weights – the later the higher. The weight for each particular day is proportional to <code>(0.9)**(-n)</code> which means, that last days in the train set had much greater significance during training.</p></li> <li><p>Each model is just <code>lightgbm.LGBMRegressor()</code> with default parameters. I have trained a lot of different models iterating over N (<code>range(5, 40, 5)</code>) and K (<code>range(1,32)</code>). </p></li> <li><p>Then I have used all models to make all possible predictions for each day in the test or validation sets (I have used the last 30 days for validation) and computed min, max, and average prediction for each day.</p></li> <li><p>It turned out, these predictions were quite pessimistic – predicted values were very high. 
That’s why I have applied an additional linear transformation and a deviation factor, and computed the final predictions using the formulas below (coefficients are optimized on validation). </p></li> </ul> <pre><code>pred_0.5  = a[0] * mean + a[1] * min + a[2] * max
pred_0.05 = pred_0.5 + (b[0] * mean + b[1] * min + b[2] * max) * dev(lamb_min)
pred_0.95 = pred_0.5 + (c[0] * mean + c[1] * min + c[2] * max) * dev(lamb_max)
</code></pre> <p>Where <code>a</code>, <code>b</code>, and <code>c</code> are just linear coefficients, and <code>dev(lamb) = np.array([lamb * n for n in range(num_days_to_predict)])</code> is used to make predictions for later days less (or more) confident.</p> <p>What I have not done but probably should have (I joined quite late and just did not have enough time):</p> <ul> <li>Optimize hyperparameters of LGBM</li> <li>Smooth predictions (I see that I have a lot of outliers on some days) by, for example, using the 5% and 95% percentiles of predicted values instead of min and max.</li> <li>Use other information as inputs or somehow utilize information about connections between different regions.</li> </ul>
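The final transformation, with dev(lamb) widening the bands linearly with forecast horizon, can be sketched directly (array shapes and coefficient layout are my assumptions):

```python
import numpy as np

def quantile_bands(mean, mn, mx, a, b, c, lamb_min, lamb_max):
    """Blend per-day min/mean/max LGBM predictions into 5/50/95%
    quantiles; dev() widens the bands linearly with forecast horizon."""
    dev = lambda lamb: lamb * np.arange(len(mean))
    p50 = a[0] * mean + a[1] * mn + a[2] * mx
    p05 = p50 + (b[0] * mean + b[1] * mn + b[2] * mx) * dev(lamb_min)
    p95 = p50 + (c[0] * mean + c[1] * mn + c[2] * mx) * dev(lamb_max)
    return p05, p50, p95
```

On day 0 the deviation factor is zero, so all three quantiles collapse onto the median; by day 30 the bands are at their widest, matching the growing uncertainty of later forecasts.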
COVID19 Global Forecasting (Week 5)
9th place solution (a lot of LGBMs blended)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: COVID19 Global Forecasting (Week 5) <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My approach was quite straightforward: I have trained a lot of very simple LGBM models, blended their predictions, and applied an additional ~linear transformation to optimize the final metric.</p> <p><a href="https://www.kaggle.com/ilialar/w5-solution-2">Notebook with my final solution</a></p> <p><strong>Overview</strong></p> <ul> <li><p>Each model takes as input numbers of new cases and fatalities during the last N days divided by the maximum value of cases/fatalities for this region up to the last day in input and predicts the number of cases or fatalities on Kth day from the last input day (again divided by max value up to date).</p></li> <li><p>Each model is trained using all available data: all regions and all possible N+K days frames. But different days had different weights – the later the higher. The weight for each particular day is proportional to <code>(0.9)**(-n)</code> which means, that last days in the train set had much greater significance during training.</p></li> <li><p>Each model is just <code>lightgbm.LGBMRegressor()</code> with default parameters. I have trained a lot of different models iterating over N (<code>range(5, 40, 5)</code>) and K (<code>range(1,32)</code>). </p></li> <li><p>Then I have used all models to make all possible predictions for each day in the test or validation sets (I have used the last 30 days for validation) and computed min, max, and average prediction for each day.</p></li> <li><p>It turned out, these predictions were quite pessimistic – predicted values were very high. 
That’s why I have applied an additional linear transformation and deviation factor and computed the final predictions using the formulas below (coefficients are optimized on validation). </p></li> </ul> <pre><code>pred_0.5  = a[0] * mean + a[1] * min + a[2] * max
pred_0.05 = pred_0.5 + (b[0] * mean + b[1] * min + b[2] * max) * dev(lamb_min)
pred_0.95 = pred_0.5 + (c[0] * mean + c[1] * min + c[2] * max) * dev(lamb_max)
</code></pre> <p>Where <code>a</code>, <code>b</code>, and <code>c</code> are just linear coefficients, and <code>dev(lamb) = np.array([lamb * n for n in range(num_days_to_predict)])</code> is used to make predictions for later days less (or more) confident.</p> <p>What I have not done but probably should have (I joined quite late and just did not have enough time):</p> <ul> <li>Optimize the hyperparameters of LGBM</li> <li>Smooth predictions (I see that I have a lot of outliers on some days), for example by using the 5% and 95% percentiles of predicted values instead of min and max</li> <li>Use other information as inputs or somehow utilize information about connections between different regions</li> </ul>
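<p>The sliding-window dataset construction described above can be sketched as follows (a minimal numpy-only illustration; <code>build_frames</code> and the toy series are hypothetical, while the running-max scaling, the K-day-ahead target, and the <code>(0.9)**(-n)</code> weighting follow the write-up):</p>

```python
import numpy as np

def build_frames(series, N, K, decay=0.9):
    """Build (X, y, w) for one region: inputs are the last N days scaled by the
    running max, the target is the value K days after the last input day (same
    scaling), and later frames get exponentially larger sample weights."""
    X, y, w = [], [], []
    for start in range(len(series) - N - K + 1):
        last_input_day = start + N                       # frame covers days [start, start + N)
        scale = max(series[:last_input_day].max(), 1.0)  # max value up to the last input day
        X.append(series[start:last_input_day] / scale)
        y.append(series[last_input_day + K - 1] / scale)
        w.append(decay ** -last_input_day)               # (0.9)**(-n): later days weigh more
    return np.array(X), np.array(y), np.array(w)

series = np.arange(100, dtype=float)  # toy daily counts for one region
X, y, w = build_frames(series, N=10, K=5)
```

<p>Each (N, K) pair would then get its own default <code>lightgbm.LGBMRegressor()</code> fitted with <code>model.fit(X, y, sample_weight=w)</code>, iterating N over <code>range(5, 40, 5)</code> and K over <code>range(1, 32)</code> as in the write-up.</p>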
Global Wheat Detection
Thank you to all and rank "997th"'s approach
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to say thank you to all the people giving awesome notebooks with their very cool insights. This was my first Kaggle competition, so it is special for me. Before joining this competition I knew object detection only in theory and I was a "Keras guy"; this competition helped me implement a full model written in PyTorch, and I am very happy about that. The starter notebooks and discussions were really beneficial.<br> I wanted to share my approach, although it's not that good (you can tell from the rank). I faced some problems: I have a laptop with only 4 GB of RAM and a 2 GB GPU, so I had to depend solely on Kaggle kernels, which have limited runtime, and that made things a little hard for me.<br> So here is my approach:</p> <ol> <li>I used 512x512 sized images to train a base Faster R-CNN (ResNet50) model using all the data and no data augmentation (30 epochs).</li> <li>I used Stratified KFolds to make 5 folds of the data.</li> <li>Each fold of data had the same augmentations, including horizontal/vertical flipping, random shadow, Gaussian blur, and cutouts.</li> <li>I used the base model to retrain on each fold, now using 1024x1024 images with augmentations.</li> <li>Lastly, I used a WBF ensemble to combine the results from each of the models.<br> I will be really happy if you comment on how I could make this model better or why it didn't work.<br> So that was my part. I hope I can use this knowledge in other competitions. Stay safe, and see you soon in other competitions.
I am so eager to see the top solutions!!<br> Special thanks to <a href="https://www.kaggle.com/pestipeti" target="_blank">@pestipeti</a> <a href="https://www.kaggle.com/kaushal2896" target="_blank">@kaushal2896</a> <a href="https://www.kaggle.com/matthewmasters" target="_blank">@matthewmasters</a> for the notebooks, the discussions, and clearing my doubts.<br> Thank you!!!😀 </li> </ol>
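<p>Step 5 above fuses overlapping boxes from the five fold models with WBF. The author most likely used the <code>ensemble_boxes</code> package for this; the sketch below is a simplified, numpy-only greedy fusion in the same spirit, with every name, box, and threshold being an illustrative assumption rather than the actual pipeline:</p>

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_boxes(boxes, scores, iou_thr=0.55):
    """Cluster boxes greedily by IoU (highest score first), then replace each
    cluster by the confidence-weighted average of its member boxes."""
    clusters = []  # each cluster: [list of boxes, list of scores]
    for i in np.argsort(scores)[::-1]:
        for c in clusters:
            if iou(boxes[i], np.average(c[0], axis=0, weights=c[1])) > iou_thr:
                c[0].append(boxes[i])
                c[1].append(scores[i])
                break
        else:
            clusters.append([[boxes[i]], [scores[i]]])
    fused = np.array([np.average(c[0], axis=0, weights=c[1]) for c in clusters])
    fused_scores = np.array([np.mean(c[1]) for c in clusters])
    return fused, fused_scores

# Two fold models predicting nearly the same wheat head, plus one distinct box:
boxes = np.array([[10, 10, 50, 50], [12, 11, 52, 49], [200, 200, 240, 240]], dtype=float)
scores = np.array([0.9, 0.7, 0.8])
fused, fused_scores = fuse_boxes(boxes, scores)
```

<p>The real WBF algorithm additionally rescales cluster scores by how many models agreed on a box, which is what makes it more robust than NMS for ensembling fold models.</p>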
TReNDS Neuroimaging
24th place - 3D fMRI Feature Engineering Partial Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was the best competition I joined by far in my opinion. I really enjoyed working on this topic and I might even want to work on neuroimaging as a job. I was a little bit disappointed because most of the teams didn't even bother looking at <code>fMRI_train</code> and <code>fMRI_test</code>. Those teams lost a large amount of information by skipping that part of the dataset. Our solution heavily relies on 3D fMRI data, and I'm updating my data analysis kernel with all the feature engineering code described here. There is a more detailed explanation in the kernel here. (It takes 8 hours to create those features, so the kernel is still committing.)</p> <p><a href="https://www.kaggle.com/gunesevitan/trends-neuroimaging-data-analysis-3d-features">https://www.kaggle.com/gunesevitan/trends-neuroimaging-data-analysis-3d-features</a></p> <h2>3D Feature Engineering</h2> <p>Features are based on the differences of fMRIs between different target groups. The biggest differences in those fMRIs are the spread and intensity of active/inactive areas. I plotted the youngest and oldest subject's fMRIs side by side for 53 components, and tried to extract some features. Some of those features worked, most of them didn't. </p> <p><img src="https://i.ibb.co/44mHnBn/fmri.jpg" alt=""></p> <p>Data in <code>fMRI_train</code> and <code>fMRI_test</code> are represented in a 4-dimensional space: 3 spatial dimensions and one component dimension. The 4th dimension is time in most cases, but it is 53 different components in this dataset.
A 3D brain mask is applied to the 4D samples in order to convert them into a restructured 2D representation.</p> <p><img src="https://nilearn.github.io/_images/masking1.jpg" alt=""></p> <pre><code>fmri_mask = nl.image.load_img('../input/trends-assessment-prediction/fMRI_mask.nii')
sample = h5py.File(f'../input/trends-assessment-prediction/fMRI_train/10001.mat', 'r')['SM_feature'][()]
sample = np.moveaxis(sample, [0, 1, 2, 3], [0, 3, 2, 1])
masked_sample = sample[:, fmri_mask.get_data() == 1].astype(np.float32)
</code></pre> <p>This code loads the fMRI mask <code>(53, 63, 52)</code> and a random sample <code>(53, 52, 63, 53)</code>, reorients the axes of the sample to <code>(53, 53, 63, 52)</code>, and applies the mask to it. The final shape of the sample is <code>(53, 58869)</code>, which is <code>(n_components, n_voxels)</code>. All of our features are extracted in this space. The same fMRIs look like this when they are masked.</p> <p><img src="https://i.ibb.co/3BmNgRH/fmri-masked.jpg" alt=""> </p> <p>The features created in this space are:</p> <ul> <li>Skew and kurtosis of each of the 53 components</li> <li>Skew of <code>np.diff(component)</code></li> <li>Active and inactive regions are created for the 53 components with the lines below, and their skew and shapes are extracted:<ul> <li><code>active = component[component &gt; component_mean + component_std]</code></li> <li><code>inactive = component[component &lt; component_mean - component_std]</code></li></ul></li> <li>Active mean / inactive mean and active std / inactive std are created for the 53 components</li> <li>Component skew - active skew and component skew - inactive skew are created for the 53 components</li> <li>More active and inactive regions are created for the 53 components with the lines below, and their shapes are extracted:<ul> <li><code>more_active = component[component &gt; component_mean + (2 * component_std)]</code></li> <li><code>more_inactive = component[component &lt; component_mean - (2 * component_std)]</code></li></ul></li> <li>For the final features, each component (signal) is divided into 15 equal parts, which correspond to bounding boxes in 3D space. The active region shape and mean of each box are extracted with the lines below. I intended to capture active region spreads and intensities in fixed locations with those features.</li> </ul> <pre><code>box_count = 15
for b, box in enumerate(np.array_split(component, box_count)):
    features[f'{icn_order[i]}_box{b}_mean'] = box.mean()
    box_active = box[box &gt; component_mean + component_std]
    features[f'{icn_order[i]}_box{b}_active_spread'] = box_active.shape[0]
</code></pre> <p>At the end I created 2279 new features, and they helped my not-so-special models reach 0.1581.</p> <pre><code>Number of Bounding Box Features: 1590
Number of Statistical Features:  689
</code></pre> <p>I also tried smoothing with a Gaussian filter before feature engineering, but it either had no effect or lost too much information, as seen below. (The first image has no smoothing, the second image has smoothing with factor 5, and the third image has smoothing with factor 10.)</p> <p><img src="https://i.ibb.co/CJqjVbf/smoothing.jpg" alt=""></p> <h2>Models</h2> <p>My models were very simple. I used 2 ridge regressions and 2 RAPIDS SVRs. Two of those models included FNC features and two of them didn't. I used 10-fold KFold with shuffle as cross-validation and trained each model with 5 different shuffle seeds. I also tried to drop 5% of the FNC features and bounding box features in every different iteration for diversity. Finally, I added ±0.1 random noise to <code>age</code> to increase cardinality, since it was rounded. The models are shared in the kernel here.</p> <p><a href="https://www.kaggle.com/gunesevitan/trends-neuroimaging-linear-model-ensemble">https://www.kaggle.com/gunesevitan/trends-neuroimaging-linear-model-ensemble</a></p>
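<p>The per-component statistical features described above can be illustrated with a numpy-only sketch (the helper name, the manual skew formula, and the toy random component are assumptions; the mean±std thresholding for active/inactive regions mirrors the write-up):</p>

```python
import numpy as np

def component_features(component):
    """Per-component statistical features for one masked component
    (a 1-D array of n_voxels values), using mean +/- std thresholds."""
    mean, std = component.mean(), component.std()
    n = component.shape[0]
    skew = ((component - mean) ** 3).mean() / (std ** 3 + 1e-12)  # numpy-only skew
    active = component[component > mean + std]
    inactive = component[component < mean - std]
    return {
        'skew': skew,
        'active_spread': active.shape[0] / n,
        'inactive_spread': inactive.shape[0] / n,
        'active_over_inactive_mean': active.mean() / (abs(inactive.mean()) + 1e-12),
    }

rng = np.random.default_rng(0)
fake_component = rng.normal(size=58869)  # same length as one masked component
feats = component_features(fake_component)
```

<p>In practice, these per-component dictionaries would be computed for all 53 components of every subject and concatenated into the tabular feature set fed to the ridge and SVR models.</p>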
Global Wheat Detection
[From 447 to 135th place Solution] EfficientDet Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners and to all the players who learnt new things!</p> <p>Since the start of the competition we used Faster R-CNN, then shifted to EfficientDet, then shifted to YOLOv5, and then used a combination of EfficientDet + YOLOv5. When we learned that YOLOv5 was prohibited, we turned back to EfficientDet and kept optimizing it.</p> <p><strong>Training</strong><br> Data Cleaning: </p> <ul> <li>Deleted tiny bounding boxes (width or height under a certain threshold)</li> <li>Deleted huge bounding boxes (width or height over a certain threshold)</li> </ul> <p>Data Augmentation: </p> <ul> <li>Online augmentations</li> <li>Flips &amp; rotations (multiples of 90°)</li> <li>Crop &amp; resize</li> <li>Hue, saturation, contrast, brightness</li> <li>Motion blur</li> <li>Noise injection (Gaussian noise)</li> <li>Cutout (made different versions of that)</li> <li>CutMix</li> </ul> <p>Training Process: </p> <ul> <li>Used EfficientDet-D5 (&amp; EfficientDet-D7)</li> <li>Used both 512x512 and 1024x1024 images</li> <li>Stratified 5-fold splitting</li> <li>By-source splitting (for some fine-tuning)</li> <li>Manual tweaks when training the models (&amp; early stopping)</li> <li>Changed augmentation distributions every certain number of epochs</li> </ul> <p><strong>Inference</strong><br> A picture is worth a thousand words:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1739631%2F7dffedb28611b84b7c76165e8873215d%2Fstar.png?generation=1596624192299446&amp;alt=media" alt=""></p> <p>Here you find my inference <a
href="https://www.kaggle.com/ffares/ensemble-models-tta-efficientdet-512-1024" target="_blank">kernel</a>; please upvote the notebook if you like it, and most importantly feel free to give your feedback so I can learn from my mistakes. </p> <p>I want to thank the hosts of this competition, Kaggle, and all the participants who submitted compliant solutions. Special thanks go to <a href="https://www.kaggle.com/shonenkov" target="_blank">@shonenkov</a> <a href="https://www.kaggle.com/nvnnghia" target="_blank">@nvnnghia</a> <a href="https://www.kaggle.com/pestipeti" target="_blank">@pestipeti</a> and many others who kept sharing during the competition.</p> <p>We will update with further details.</p>
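<p>The bounding-box cleaning step above can be sketched as a simple width/height filter (the write-up only says "a certain threshold" for tiny and huge boxes, so the values below are illustrative guesses, not the team's actual thresholds):</p>

```python
import numpy as np

def filter_boxes(boxes, min_size=10, max_size=700):
    """Keep only boxes (x1, y1, x2, y2) whose width and height both fall
    within [min_size, max_size]."""
    boxes = np.asarray(boxes, dtype=float)
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    keep = (w >= min_size) & (h >= min_size) & (w <= max_size) & (h <= max_size)
    return boxes[keep]

boxes = [[0, 0, 4, 4],          # tiny: dropped
         [100, 100, 180, 160],  # normal: kept
         [0, 0, 900, 1000]]     # huge: dropped
clean = filter_boxes(boxes)
```

<p>Removing degenerate boxes like these before training matters for detectors because anchor matching against near-zero-area or image-sized boxes produces noisy regression targets.</p>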
Global Wheat Detection
[Update] The 9th place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Introduction</h1> <p>(This post was jointly written by the winning team, see details below)</p> <p>First, we wish to express our congratulations to the winners and to all the nominees as well for their remarkable achievements and continued excellence and look forward to exciting times in object detection algorithms.</p> <p>In addition, many thanks to Kaggle and the organizers for holding this interesting competition. And special thanks to my colleagues <a href="https://www.kaggle.com/orkatz2" target="_blank">@orkatz2</a> (Or Katz) and <a href="https://www.kaggle.com/solomonk" target="_blank">@solomonk</a> (Shlomo Kashani, author of the book “Deep Learning Interviews”).</p> <h1>Code</h1> <ul> <li>training code: <a href="https://github.com/amirassov/kaggle-global-wheat-detection" target="_blank">https://github.com/amirassov/kaggle-global-wheat-detection</a></li> <li>submission code: <a href="https://www.kaggle.com/amiras/pseudo-ensemble-detectors-3-st-universenet-r10" target="_blank">pseudo ensemble: detectors (3 st)+universenet r10</a></li> </ul> <h1>Summary</h1> <p>Our solution is based on the excellent MMDetection framework (<a href="https://github.com/open-mmlab/mmdetection)" target="_blank">https://github.com/open-mmlab/mmdetection)</a>. We evaluated and tested numerous models in an attempt to find the ones which are better suited for wheat detection. Why? Because there is not always a direct correlation between the COCO mAP score, as reported by the authors of the original models, and the actual mAP on the wheat corpus. Therefore, experimentation was extremely important. 
Amongst the many models we evaluated in MMDet are: </p> <ul> <li>GFocal</li> <li>ATSS</li> <li>UniverseNet</li> <li>DetectoRS</li> <li>SOLO-v2 (used for segmentation, but we experimented with it anyway)</li> </ul> <p>We then trained an ensemble of the top-performing models, which were: </p> <ul> <li>DetectoRS with the ResNet50 backbone (<a href="https://github.com/joe-siyuan-qiao/DetectoRS" target="_blank">https://github.com/joe-siyuan-qiao/DetectoRS</a>)</li> <li>UniverseNet+GFL with the Res2Net101 backbone (<a href="https://github.com/shinya7y/UniverseNet" target="_blank">https://github.com/shinya7y/UniverseNet</a>)</li> </ul> <p>To increase the score, a single round of pseudo labelling was applied to each model. Additionally, for a much better generalization of our models, we used heavy augmentations.</p> <h1>Jigsaw puzzles</h1> <p>In the original corpus provided by the organizers, the training images were cropped from an original set of larger images. Therefore, we collected and assembled the original puzzles, resulting in a corpus of 1330 puzzle images. The puzzle collection algorithm we adopted was based on <a href="https://github.com/lRomul/argus-tgs-salt/blob/master/mosaic/create_mosaic.py" target="_blank">https://github.com/lRomul/argus-tgs-salt/blob/master/mosaic/create_mosaic.py</a>. But we were unsuccessful in collecting the bounding boxes for the puzzles, mainly because of bounding boxes located on or near the image borders. For this reason, we generated crops from the puzzles offline in addition to the training images and generated boxes for them using pseudo labelling.</p> <h1>Validation approach</h1> <p>We used MultilabelStratifiedKFold with 5 folds (<a href="https://github.com/trent-b/iterative-stratification" target="_blank">https://github.com/trent-b/iterative-stratification</a>) stratified by the number of boxes, the median of box areas, and the source of images.
We guaranteed that there wasn't any leak between the folds, so that the images of one puzzle were used only in that one particular fold.</p> <p>Referring to the paper <a href="https://arxiv.org/abs/2005.02162" target="_blank">https://arxiv.org/abs/2005.02162</a>, one can see wheat heads from different sources. We assumed that the wheat heads of the usask_1 and ethz_1 sources are very different from the test sources (UTokyo_1, UTokyo_2, UQ_1, NAU_1). Therefore, we did not use these sources for validation.</p> <p>However, our validation scores did not correlate well with the Kaggle LB. We only noticed global improvements (for example, DetectoRS is better than UniverseNet). Local improvements such as augmentation parameters, WBF parameters, etc. did not correlate. We therefore shifted our attention mainly to the LB scores. </p> <p>We trained our models only on the first fold.</p> <h1>Augmentations</h1> <p>Due to the relatively small size of our training set, and a different test set distribution, our approach relied heavily on data augmentation. During training, we utilized an extensive data augmentation protocol:</p> <ul> <li>Various augmentations from the albumentations library (<a href="https://albumentations.ai" target="_blank">https://albumentations.ai</a>):<ul> <li>HorizontalFlip, ShiftScaleRotate, RandomRotate90</li> <li>RandomBrightnessContrast, HueSaturationValue, RGBShift</li> <li>RandomGamma</li> <li>CLAHE</li> <li>Blur, MotionBlur</li> <li>GaussNoise</li> <li>ImageCompression</li> <li>CoarseDropout</li></ul></li> <li>RandomBBoxesSafeCrop. Randomly select N boxes in the image and find their union.
Then we crop the image keeping this union intact.</li> <li>Image colorization (<a href="https://www.kaggle.com/orkatz2/pytorch-pix-2-pix-for-image-colorization" target="_blank">https://www.kaggle.com/orkatz2/pytorch-pix-2-pix-for-image-colorization</a>) </li> <li>Style transfer (<a href="https://github.com/bethgelab/stylize-datasets" target="_blank">https://github.com/bethgelab/stylize-datasets</a>). A random image from a small test set (10 images) was used as the style. </li> <li>Mosaic augmentation. <code>a, b, c, d</code> -- randomly selected images. Then we just do the following:</li> </ul> <pre><code>top = np.concatenate([a, b], axis=1)
bottom = np.concatenate([c, d], axis=1)
result = np.concatenate([top, bottom], axis=0)
</code></pre> <ul> <li>Mixup augmentation. <code>a, b</code> -- randomly selected images. Then: <code>result = (a + b) / 2</code></li> <li>Multi-scale Training. In each iteration, the scale of the image is randomly sampled from <code>[(768 + 32 * i, 768 + 32 * i) for i in range(25)]</code>.</li> <li>All augmentations except colorization and style transfer were applied online.<br> Examples of augmented images:</li> </ul> <table> <thead> <tr> <th><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F413252%2F54b1605a052bc34520cce5f7ca81f86f%2F0.jpg?generation=1596619282971124&amp;alt=media" alt=""></th> <th><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F413252%2Fe7bad4c91782a5614ee5731f3610e376%2F6.jpg?generation=1596619343754665&amp;alt=media" alt=""></th> </tr> </thead> <tbody> <tr> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F413252%2F4dfab25fb6ed84e7ef545c42361382c5%2F27.jpg?generation=1596619635969807&amp;alt=media" alt=""></td> <td><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F413252%2Fcb53ccda22bf16bef104caf9e25459df%2F47.jpg?generation=1596619666130434&amp;alt=media" alt=""></td>
</tr> </tbody> </table> <h1>External data:</h1> <p>SPIKE dataset: <a href="https://www.kaggle.com/c/global-wheat-detection/discussion/164346" target="_blank">https://www.kaggle.com/c/global-wheat-detection/discussion/164346</a></p> <h1>Models</h1> <p>We used DetectoRS with ResNet50 and UniverseNet+GFL with Res2Net101 as main models. DetectoRS was a little more accurate but much slower to train than UniverseNet:</p> <ul> <li>Single DetectoRS Public LB score without pseudo labeling: 0.7592</li> <li>Single UniverseNet Public LB score without pseudo labeling: 0.7567</li> </ul> <p>For DetectoRS we used:</p> <ul> <li>LabelSmoothCrossEntropyLoss with parameter 0.1</li> <li>Empirical Attention (<a href="https://github.com/open-mmlab/mmdetection/tree/master/configs/empirical_attention" target="_blank">https://github.com/open-mmlab/mmdetection/tree/master/configs/empirical_attention</a>)</li> </ul> <h1>Training pipeline</h1> <p>In general, we used a multi-stage training pipeline:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F413252%2F6b28b3ab6458d9a0763d34c78abf15a0%2FSelection_125.png?generation=1596629231468089&amp;alt=media" alt=""></p> <h1>Model inference</h1> <p>We used TTA6 (Test Time Augmentation) for all our models:</p> <ul> <li>Multi-scale Testing with scales [(1408, 1408), (1536, 1536)]</li> <li>Flips: [original, horizontal, vertical]</li> </ul> <p>For TTA we used the standard MMDet algorithm with NMS, which looks like this for two-stage detectors (DetectoRS):<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F413252%2F6fff06749b880222e0c06bea777b8e84%2FSelection_126.png?generation=1597392868005836&amp;alt=media" alt=""></p> <p>For one-stage detectors (UniverseNet), the algorithm is similar, only without the part with RoiAlign, Head, etc.</p> <h1>Pseudo labelling</h1> <ul> <li>Sampling positive examples.
We predicted the test image and received its scores and bounding boxes. Then we calculated <code>confidence = np.mean(scores &gt; 0.75)</code>. If the confidence was greater than 0.6, we accepted this image and used it for pseudo labelling.</li> <li>Sources [usask_1, ethz_1] and augmentations like mosaic, mixup, colorization, and style transfer weren’t used for pseudo labelling.</li> <li>1 epoch, 1 round, 1 stage.</li> <li>Data: original data + 3 x pseudo test data</li> </ul> <h1>Ensemble</h1> <p>We used WBF (<a href="https://github.com/ZFTurbo/Weighted-Boxes-Fusion" target="_blank">https://github.com/ZFTurbo/Weighted-Boxes-Fusion</a>) for the ensemble. The distributions of DetectoRS and UniverseNet scores are different, so we applied scaling using <code>rankdata</code> (<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rankdata.html" target="_blank">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rankdata.html</a>):</p> <pre><code>scaled_scores = 0.5 * (rankdata(scores) / len(scores)) + 0.5
</code></pre> <p>WBF parameters:</p> <ul> <li>weights=[0.65, 0.35] respectively for models [DetectoRS, UniverseNet]</li> <li>iou_thr=0.55</li> <li>score_thr=0.45</li> </ul> <h1>Our final submission pipeline (0.7262 on Private LB and 0.7734 on Public LB):</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F413252%2Fc2f34231c3dbffc0e8335b8d9cb15898%2FSelection_120.png?generation=1596619828315578&amp;alt=media" alt=""></p> <h1>Some observations from our submissions:</h1> <ul> <li>Pseudo crops from jigsaw puzzles (DetectoRS R50): 0.7513 -&gt; 0.7582</li> <li>Tuning of pseudo labeling parameters for sampling positive examples (ensemble): 0.7709 -&gt; 0.7729</li> <li>Pseudo labeling (DetectoRS R50): 0.7582 -&gt; 0.7691</li> <li>Pseudo labeling (UniverseNet Res2Net50): 0.7494 -&gt; 0.7627</li> <li>SPIKE dataset (DetectoRS R50): 0.7582 -&gt; 0.7592</li> <li>Deleting [usask1, ethz1] from pseudo labeling (DetectoRS R50): 0.7678 -&gt; 0.7691</li> </ul> <h1>What we didn’t do:</h1> <ul> <li>MMDetection and YOLOV5 ensemble. That was the main goal but since YOLOV5 was disqualified we gave up this option.</li> <li>MMDetection and EfficientDet ensemble. We only managed to make a submission for the first time on the last day but we could not get a good enough result. We believe that if we had a few more days we could improve the result by tune the WBF parameters.</li> <li>We stopped improving YOLOV5 about a month before the end of the competition. 
</li> </ul> <h1>What didn’t work:</h1> <ul> <li>Wheat Ears Detection Dataset: CV improved, but LB did not.</li> <li>More than 1 round of pseudo labeling with the same model.</li> <li>Scale-aware testing like this: <a href="https://drive.google.com/file/d/14VwSjMeRZUtisZtqQPmbll6w4zvZXIAQ" target="_blank">https://drive.google.com/file/d/14VwSjMeRZUtisZtqQPmbll6w4zvZXIAQ</a></li> <li>Rotate90 TTA for DetectoRS and UniverseNet</li> </ul> <h1>Acknowledgements</h1> <p>Thanks to the internet service provider HOSTKEY (<a href="https://www.hostkey.com" target="_blank">https://www.hostkey.com</a>), which gave me a grant for access to GPU servers.</p> <p><strong>The code will be published soon.</strong></p>
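<p>The pseudo-label acceptance rule and the rank-based score scaling described above can be sketched together (a numpy-only illustration; ordinal ranks via a double <code>argsort</code> stand in for <code>scipy.stats.rankdata</code>, whose default additionally averages ties, and the 0.75 / 0.6 thresholds are the ones from the write-up):</p>

```python
import numpy as np

def accept_for_pseudo(scores, score_thr=0.75, conf_thr=0.6):
    """confidence = fraction of predicted boxes scoring above score_thr;
    the image is accepted for pseudo labelling if that fraction exceeds conf_thr."""
    confidence = np.mean(np.asarray(scores) > score_thr)
    return confidence > conf_thr

def scale_scores(scores):
    """Map one model's scores into (0.5, 1.0] by rank so that two models with
    different score distributions become comparable before WBF."""
    scores = np.asarray(scores)
    ranks = scores.argsort().argsort() + 1  # ordinal ranks, 1 .. len(scores)
    return 0.5 * (ranks / len(scores)) + 0.5

detector_scores = np.array([0.95, 0.9, 0.8, 0.3])
accepted = accept_for_pseudo(detector_scores)  # 3 of 4 boxes score above 0.75
scaled = scale_scores(detector_scores)
```

<p>Scaling by rank rather than raw value is what lets the fixed WBF weights [0.65, 0.35] mean the same thing for both detectors despite their different score distributions.</p>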
Global Wheat Detection
[80th place solution😂] Source-separated Pseudo Labeling
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to the host for holding this competition! </p> <p>And great thanks to our team members, <a href="https://www.kaggle.com/datahobbit" target="_blank">@datahobbit</a>, <a href="https://www.kaggle.com/ratthachat" target="_blank">@ratthachat</a>, <a href="https://www.kaggle.com/yashchoudhary" target="_blank">@yashchoudhary</a>, <a href="https://www.kaggle.com/nitindatta" target="_blank">@nitindatta</a>; it was a pleasure teaming up. Everyone played a great part in the solution, which <a href="https://www.kaggle.com/ratthachat" target="_blank">@ratthachat</a> elaborates on! It was fun having discussions and everybody was a great analyst/coder. We did a great job and made it into the TOP 2% ;)</p> <p>This is the solution of our team Wheat Fields!</p> <p>(I will come back to update with the details.)</p> <h2>Models</h2> <ul> <li>Model: EffDet-D5, 2-fold ensemble</li> </ul> <p>We evaluated our CV based on a data-source-separated CV with the original weightings.</p> <h2>Source-separated pseudo labeling</h2> <h3>Idea</h3> <p>We were afraid that pseudo labeling with the entire test set as a prior would overfit to the UTokyo data, which should be 70% of the test data. With this source split, we can train one model which overfits to UTokyo and another model for the rest, maximizing the PB.</p> <h3>Step 1</h3> <p>Predict on the test set using the best CV model and split the test set into two: TestSetB: UTokyo, images with <code>number of wheats &lt; 39</code>, and TestSetC: images with <code>number of wheats &gt;= 39</code>.
We save the labels for pseudo labelling as well.</p> <h3>Step 2</h3> <p>Train two models: Model-B trained with pseudo [TestSetB, train data] and Model-C trained with pseudo [TestSetC, train data].</p> <h3>Inference</h3> <p>Predict TestSetB with Model-B and TestSetC with Model-C.</p> <h1>Codes!</h1> <p>I uploaded my training code here.<br> <a href="https://github.com/kentaroy47/kaggle-wheat-arutema47" target="_blank">https://github.com/kentaroy47/kaggle-wheat-arutema47</a></p>
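<p>The source split in Step 1 can be sketched as follows (a minimal illustration; the function name and the toy predictions are hypothetical, while the threshold of 39 wheat heads comes from the write-up):</p>

```python
def split_by_wheat_count(predictions, threshold=39):
    """Split test images into two pseudo-label pools by predicted wheat-head
    count: below the threshold (assumed UTokyo-like) versus the rest.
    `predictions` maps image_id -> list of predicted boxes."""
    set_b, set_c = [], []
    for image_id, boxes in predictions.items():
        (set_b if len(boxes) < threshold else set_c).append(image_id)
    return set_b, set_c

preds = {'img_a': [None] * 12, 'img_b': [None] * 60, 'img_c': [None] * 38}
test_b, test_c = split_by_wheat_count(preds)
```

<p>Model-B is then trained on [TestSetB pseudo labels + train data] and Model-C on [TestSetC pseudo labels + train data], and at inference time each model predicts only its own split.</p>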
Prostate cANcer graDe Assessment (PANDA) Challenge
Lightweight siamese network solution (ResNet18 -> PB 0.8966)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Prostate cANcer graDe Assessment (PANDA) Challenge <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>In this post I present the ideas for the <a href="https://www.kaggle.com/c/prostate-cancer-grade-assessment/overview">PANDA</a> Kaggle competition. Please refer to the description of the competition for more insights. <a href="https://github.com/chiamonni/">@chiamonni</a> and <a href="https://github.com/mawanda-jun/">@mawanda-jun</a> worked on this project (<a href="https://github.com/chiamonni/PANDA_Kaggle_competition">repo</a>).</p> <h1>Contents</h1> <ul> <li>Problem overview</li> <li>Dataset approach</li> <li>Network architecture</li> <li>Results</li> </ul> <h1>Problem overview</h1> <p>The Prostate cANcer graDe Assessment (PANDA) Challenge requires participants to recognize 5 severity levels of prostate cancer in prostate biopsies, plus its absence (6 classes).</p> <p><img src="https://storage.googleapis.com/kaggle-media/competitions/PANDA/Screen%20Shot%202020-04-08%20at%202.03.53%20PM.png" alt="Illustration of the biopsy grading assignment"></p> <p>Therefore, this is a classification task.</p> <p>The main challenges Kagglers faced were related to:</p> <ul> <li><strong>dimensionality</strong>: images were quite large and sparse (~50K x ~50K px);</li> <li><strong>uncertainty</strong>: labels were given by experts, who sometimes interpreted the cancer severity in different ways.</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2Ffe60ff9e1ab7555ba3c4f3f1866631b2%2F192863a82b5a954ba0fa56b910574e1a.jpeg?generation=1596380531799397&amp;alt=media" alt="cancer image"></p> <h1>Dataset approach</h1>
<p>I decided to analyze each image and extract relevant "crops" to be stored directly on disk, in order to reduce compute time while reading the images from disk. Therefore, I used the 4x reduced images (level 1 of the original dataset) and extracted square patches of 256px with the "akensert" <a href="https://www.kaggle.com/akensert/panda-optimized-tiling-tf-data-dataset">method</a>. Then, I stored the crops together in a single image, as a strip of crops.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2Fcc3ab4fed6daa78c122b54d8a80a9d31%2F0b6e34bf65ee0810c1a4bf702b667c88.jpeg?generation=1596380612573127&amp;alt=media" alt="akensert crops"></p> <p>Each image came with a different number of crops. So, I made a binned histogram counting how many times each number of crops occurred. The "akensert" method is the first mentioned; the "cropped" one is a simple "strided" crop, in which I kept each square that was covered by at least 20% non-zero pixels.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2F1aee3408042bed534ee52feb5148860a%2Fnumber_crops_personal_akensert.png?generation=1596380665002903&amp;alt=media" alt="number of crops"></p> <p>From the graph it is clear that the "akensert" method is more reliable (the curve is tighter) than the first method I explored. In addition, I decided to select 26 random crops from each image:</p> <ul> <li>in the case they were fewer than 13, I doubled them and filled the remainder with empty squares;</li> <li>in the case they were more, I randomly selected 26.</li> </ul> <p>I thought about this method as a regularization. In fact, the labels could have been assigned wrongly, and selecting only a part of the crops could lead to a better generalization capability of my model.
In addition, in about 40% of the dataset I forced my model to understand the severity of the cancer from only a part of the whole image, which I think helped it generalize the problem better.</p> <h2>Dataset augmentation</h2> <p>I found out that modifying the color of the images (with random contrast/saturation/etc. augmentations) was not giving me any particular advantage. In addition, I found out that simple flipping/rotation really helped me close the gap between CV and LB. I also added a random occlusion augmentation, which covered each crop with a rectangle of size ranging in [0, 224) and really helped generalize the model performance w.r.t. the LB. As a side note, I think those augmentations really helped my model perform so well on the private leaderboard (I gained +3% accuracy).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2Fe3889ccbe4626e95b3a1bc5eeabf6946%2Ftest.jpeg?generation=1596380705355303&amp;alt=media" alt="test"></p> <p>An example of the resulting augmentations, with 8x8 crops.</p> <h1>Network architecture</h1> <p>For the network architecture I took inspiration from the method used by experts, that is:</p> <ol> <li>look closely at the tissue;</li> <li>characterize each tissue part by the most prevalent severity of cancer patterns;</li> <li>take the two most prevalent ones and declare the cancer class.</li> </ol> <p>Therefore, I created a siamese network which received each crop one at a time with shared weights. The output of each siamese branch was then <strong>averaged</strong> with the others as a sort of pooling, and then brought to the <a href="https://www.kaggle.com/haqishen/train-efficientnet-b0-w-36-tiles-256-lb0-87">binned</a> output.
See the image below for further insight.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2101276%2Fc96851ac147d872029fa0cd09de4daab%2Fnetwork_architecture.png?generation=1596380747042850&amp;alt=media" alt="network architecture"></p> <p>Since my computing resources were limited in memory (8GB VRAM, Nvidia 2070S), I trained this network with a <a href="https://github.com/facebookresearch/semi-supervised-ImageNet1K-models">ResNet18 semi-weakly pretrained</a> model.</p> <h1>Cross-validation</h1> <p>Since my model was performing so consistently between CV and LB, I decided not to do any cross-validation. In fact, I simply trained the model with a 70/30 train/validation split of the whole training set.</p> <h1>Hyperparameter selection</h1> <p>The best hyperparameters I selected, together with the trained weights, are under the folder <code>good_experiments</code>.</p> <h1>Results</h1> <p>The aforementioned architecture resulted in:</p> <ul> <li>CV: 0.8504</li> <li>LB: 0.8503</li> <li>PB: 0.8966</li> </ul> <p>Those results are quite interesting, since most of the competition participants used an EfficientNetB0, which is far bigger and more accurate in most benchmarks. I would have liked to train this particular architecture on a bigger machine, with more interesting backbones, hopefully with even better results.</p>
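The fixed-length crop sampling described in this write-up (select 26 crops; if there are fewer than 13, double them and pad with empty squares; if there are more than 26, sample 26) can be sketched in plain Python. This is my own minimal reconstruction, with nested lists standing in for image crops; `select_crops` and its parameters are hypothetical names, not the author's code.

```python
import random

def select_crops(crops, n_select=26, crop_size=256):
    """Return a fixed-length list of crops per slide, as described above."""
    crops = list(crops)
    if len(crops) < n_select // 2:
        # fewer than 13 crops: double them ...
        crops = crops + crops
    if len(crops) < n_select:
        # ... and fill the remaining slots with empty (all-zero) squares
        empty = [[0] * crop_size for _ in range(crop_size)]
        crops = crops + [empty] * (n_select - len(crops))
    # more than n_select crops: keep a random subset (acts as regularization)
    return random.sample(crops, n_select)
```

Sampling a different random subset each epoch is what forces the model to grade the slide from only part of the tissue.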
ALASKA2 Image Steganalysis
[74th place] First Medal - How I missed the Silver Medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the organizers for this interesting competition! Congrats to all the winners and participants.</p> <p>In fact, I haven't tried a lot; we realized the right approach way too late.</p> <p>However, it can be helpful for someone, so I simply write it down. (I did other experiments - pretrained other models, optimizers, etc. - but I'm going to write down the main issues.)</p> <h1>EfficientNet-B0, EfficientNet-B1, EfficientNet-B2</h1> <p>It was good to do simple experiments, but it was hard to exceed 0.930 LB.</p> <h1>EfficientNet-B7</h1> <p>And then I trained B7. It had the best performance at about epoch 30 and achieved 0.933 LB. (I haven't tested many epochs due to limited GPU resources.)</p> <h1>EfficientNet-B6</h1> <p>Trained just before the close of the competition; it achieved 0.930 LB at about epoch 28.</p> <h1>Augmentation</h1> <ul> <li>flip augmentation</li> <li>rotate +- 10 (B6 only)</li> <li>cutmix (our teammate - B3, @<a href="https://www.kaggle.com/kani23">kani23</a> )</li> </ul> <p>When I applied rotate and cutmix, the validation score was a little higher, like this: <code>[RESULT]: Train. Epoch: 21, summary_loss: 0.82028, final_score: 0.87635, time: 4017.75592 [RESULT]: Val. Epoch: 21, summary_loss: 0.75268, final_score: 0.90341, time: 325.94613</code></p> <h1>TTA</h1> <p>Above 0.930 LB, it didn't help on the public LB, but in private there was actually a very big difference. <code>EfficientNet-B7 + no TTA : 0.932 lb / (private 0.916)</code> <code>EfficientNet-B7 + TTA : 0.932 lb / (private 0.920)</code></p> <p>Unfortunately, I ensembled with the results without TTA. 
So, it didn't score better than my single model. 😭 </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3492127%2Ff14ba2a956412e823537089b0d2c05d2%2F41.png?generation=1595355611359172&amp;alt=media" alt=""></p> <p>Anyway, I learned a lot and I will try harder in the future. Thanks!</p> <p>p.s. Thanks to our teammate @<a href="https://www.kaggle.com/kani23">kani23</a>. p.s.2. <a href="https://www.kaggle.com/loveall">@loveall</a> - You cannot use our results for your paper, because you have contributed nothing. And please don't take someone else's kernels anymore.</p>
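The TTA comparison above relies on averaging predictions over lossless transforms (flips and 90-degree rotations, which do not disturb the stego signal the way interpolation would). A minimal sketch of that idea, assuming a `model` callable that maps one image array to a probability; the names here are mine, not the author's:

```python
import numpy as np

def d8_views(img):
    """All 8 flip/rotate90 variants of an HxW image (the lossless TTA set)."""
    views = []
    for k in range(4):
        r = np.rot90(img, k)          # rotate by k * 90 degrees
        views.append(r)
        views.append(np.fliplr(r))    # and its horizontal flip
    return views

def predict_tta(model, img):
    """Average a model's probability over the 8 lossless views."""
    return float(np.mean([model(v) for v in d8_views(img)]))
```

The write-up's private-LB gap (0.916 vs 0.920) suggests this cheap averaging was worth keeping in the ensemble inputs.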
TReNDS Neuroimaging
12th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the host for organizing this competition and thank you to Kaggle for the computing resources.</p> <p>Congratulations to all the winners and medal finishers. Thanks for sharing your solutions!</p> <h2>Parcellation</h2> <p>My initial approach was to use 3D CNNs but I couldn't get a good performance out of them so I switched to a parcellation-based approach.</p> <p>I wanted to use a parcellation with a moderate number of brain regions to keep the number of features manageable. I ended up using the <a href="https://nilearn.github.io/modules/generated/nilearn.datasets.fetch_atlas_aal.html">AAL</a> parcellation which has 116 regions.</p> <h2>Age prediction</h2> <p>For each of the 3D maps, I extracted the mean intensity in each region (116 regions x 53 maps = 6148 features).</p> <p>I used the loading, fnc and these maps-derived features to train a ridge model. I got a decent score for age prediction by just tuning the alpha parameter and two weight variables. The weight variables are used to scale down the fnc and maps-derived features.</p> <p>Then I extracted the variance for each region and added them to the feature list. This led to a 0.0007 LB boost.</p> <p>I tried using skew and kurtosis as features but this didn't improve the score. My final age prediction is a weighted average of 3 ridge models.</p> <h2>Domain variables predictions</h2> <p>Unfortunately, these maps-derived features didn't seem to help much in predicting the domain variables. 
I only trained a single model for each of the domain variables.</p> <h2>Site effects</h2> <p>I built a site classifier using logistic regression and ridge models. I later stumbled upon the <a href="https://github.com/Warvito/neurocombat_sklearn">Combat</a> harmonization method and I used it to adjust the features for site effects.</p>
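The ridge setup described above (concatenating the loading, fnc and maps-derived feature groups while scaling down the latter two with tuned weight variables) can be sketched with a closed-form ridge in numpy. This is my own illustration: the weight values are placeholders, not the author's tuned values.

```python
import numpy as np

def ridge_fit_predict(X_tr, y_tr, X_te, alpha=10.0):
    """Closed-form ridge regression: w = (X^T X + alpha I)^-1 X^T y."""
    d = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ y_tr)
    return X_te @ w

def scale_groups(loading, fnc, maps, w_fnc=0.15, w_maps=0.1):
    """Concatenate feature groups, scaling down the fnc and maps-derived
    blocks so the loading features are not drowned out (weights are
    placeholder values here)."""
    return np.hstack([loading, w_fnc * fnc, w_maps * maps])
```

Because ridge penalizes all coefficients equally, pre-scaling a feature block effectively shrinks that block's influence, which is why the two weight variables act like per-group regularization knobs.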
Jigsaw Multilingual Toxic Comment Classification
21st Place solution : Magic of Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Jigsaw Multilingual Toxic Comment Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Big thanks to <a href="/riblidezso">@riblidezso</a> who shared this amazing notebook: <a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large</a>.</p> <p>In short, ensembling and second-step fine-tuning by language are magic (and overfitting too).</p> <h1>PB score timeline</h1> <ul> <li>Ensemble Gmean of top-score public submissions -&gt; 0.9473 (8 June)</li> <li>Ensemble LGBM solution and Gmean submission -&gt; 0.9478 (10 June)</li> <li>(Magic) Add external data to the validation dataset, validate by language in the second-step training of XLM, and ensemble by language -&gt; 0.9500 (21 June)</li> </ul> <p>Unfortunately, we only boosted the score to 0.9500 in the last 2 days, with no more TPU left to train on new external data.</p> <h1>Validate Dataset</h1> <p>We separated the validation data into 3 sets by language and found another 3 languages from external data to use in the second-step training of <a href="/riblidezso">@riblidezso</a>'s notebook. 
The out-of-fold scores looked promising, so we blended them into our main submission and boosted the score to 0.9500.</p> <h1>Ref</h1> <ul> <li>Gmean notebook: <a href="https://www.kaggle.com/paulorzp/gmean-of-low-correlation-lb-0-952x">https://www.kaggle.com/paulorzp/gmean-of-low-correlation-lb-0-952x</a></li> <li>XLM-Roberta: <a href="https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large">https://www.kaggle.com/riblidezso/train-from-mlm-finetuned-xlm-roberta-large</a></li> <li>LGBM solution: <a href="https://www.kaggle.com/miklgr500/lgbm-solution">https://www.kaggle.com/miklgr500/lgbm-solution</a></li> <li>External data: <a href="https://www.kaggle.com/blackmoon/russian-language-toxic-comments">https://www.kaggle.com/blackmoon/russian-language-toxic-comments</a>, <a href="https://www.kaggle.com/shonenkov/open-subtitles-toxic-pseudo-labeling">https://www.kaggle.com/shonenkov/open-subtitles-toxic-pseudo-labeling</a></li> <li>(Magic) Validate and fine-tune by language: <a href="https://www.kaggle.com/medrau/train-from-mlm-finetuned-val-per-lang">https://www.kaggle.com/medrau/train-from-mlm-finetuned-val-per-lang</a></li> </ul>
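The Gmean blending referenced above takes the geometric mean of several submissions' probabilities, which rewards predictions the submissions agree on. A minimal sketch; the clipping constant is my own guard against log(0), not from the write-up:

```python
import numpy as np

def gmean_blend(pred_lists):
    """Geometric mean across submissions, element-wise per sample."""
    preds = np.clip(np.asarray(pred_lists, dtype=float), 1e-8, 1.0)
    # gmean = exp(mean(log(p))) over the submission axis
    return np.exp(np.log(preds).mean(axis=0))
```

Compared with an arithmetic mean, the geometric mean is pulled down sharply by any single low score, which is why it pairs well with low-correlation submissions.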
University of Liverpool - Ion Switching
Changing the Target Approach ( Top 10% solution )
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: University of Liverpool - Ion Switching <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone from the Ion Switching competition, and congratulations to the winners!</p> <p>We weren't first in that competition, but we are still proud of our ideas, our method and what we have achieved. After all, it's all about trying. Therefore, we present here the method that brought us a bronze medal.</p> <p>This competition was definitely not easy, as the data was generated with a Hidden Markov Model (HMM). We worked with both the competition data and the <a href="https://www.kaggle.com/cdeotte/data-without-drift">data without drift</a> created by <a href="https://www.kaggle.com/cdeotte">Chris Deotte</a> for this competition. We thank him on behalf of all the other kagglers who used his processed data as well.</p> <p>We considered the open_channels as states of a Markov chain process. Our main work was based on 2 features:</p> <ul> <li>the probability that open_channels at time T would change from T-1 (just an indicator of the change, not its value!);</li> <li>open_channels shifted by 1 (the value at the previous timestep).</li> </ul> <p>Our method was to estimate those and to make an iterative prediction on test. We did all predictions based on an aggregation by subbatches of 100_000.</p> <p>We aggregated subbatches by resemblance, based on criteria such as:</p> <ul> <li>frequencies and amplitudes obtained from the Fourier decomposition;</li> <li>mean, std.</li> </ul> <p>And that gave us the same groups as <a href="https://www.kaggle.com/c/liverpool-ion-switching/discussion/153269">here</a></p> <ol> <li><p>Modeling the probability of leaving the previous state (probability to change): This was a binary classification on whether <code>open_channels != open_channels_shift_1</code>. We used a simple light gradient boosting method (lightgbm) which achieved precision and recall above 0.95 on nearly all batches, and very often around 0.98+. This binary variable can be predicted in a very efficient way.</p></li> <li><p>Iteratively predicting open_channels: Taking into account both open_channels shifted by 1 and the probability to change was really important to get 94.2+ on cross-validation. The problem here was the cost in computation time, as we were making iterative predictions on test. For the value of open_channels_shift_1 at the start of the test mini-batches of 100_000, we just took them from well-performing public kernels which had exactly the same values.</p></li> <li><p>Final model: Our final model is a lightgbm on 5 features:</p> <ul><li>IsChanged (predicted by binary classification)</li> <li>open_channels_shift_1 (iteratively predicted)</li> <li>signal</li> <li>signal_pct_change</li> <li>signal shifted ahead / back by 1, 2, and 3 timesteps</li></ul></li> </ol> <p>This model achieved 94.2+ on cross-validation and 94.389 on the private leaderboard.</p>
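The iterative test-time prediction in step 2 can be sketched as a simple loop: each step's prediction becomes the `open_channels_shift_1` feature for the next step. The real model was a LightGBM with more features; `model` here is a stand-in callable and the names are mine:

```python
def predict_batch_iteratively(model, signals, first_prev):
    """Predict open_channels one timestep at a time, feeding each
    prediction back as the previous-value feature of the next step."""
    prev = first_prev  # seeded from a well-performing public kernel
    preds = []
    for s in signals:
        pred = model(s, prev)  # features: current signal + previous prediction
        preds.append(pred)
        prev = pred
    return preds
```

This is why the approach is slow on test: predictions within a subbatch cannot be vectorized, because step t depends on the prediction at step t-1.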
ALASKA2 Image Steganalysis
[34th place] My first silver solution :P
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks a lot to the hosts for organizing such an interesting competition! Also, congrats to all the participants, especially to the winners.</p> <p>Thanks to <a href="https://www.kaggle.com/shonenkov" target="_blank">@shonenkov</a> with his fantastic kernel <a href="https://www.kaggle.com/shonenkov/train-inference-gpu-baseline" target="_blank">here</a>, I (and maybe many others) had a great approach to the problem. I mainly used his kernel as the baseline of my solution, with some modifications. Here are the details.</p> <h1>Augmentation</h1> <ul> <li>Horizontal + vertical flip</li> <li>Rotate 90</li> <li>Cutout</li> <li>Crop + padding</li> </ul> <p>From my understanding, when applying DCT changes, the errors will be spread across all the pixels. We use a DL model to catch those changes in the pixel domain. So, any augmentation without interpolation (by interpolation I mean image processing like resize, non-90 rotation, etc.) can be used (discussion).</p> <h1>Modeling</h1> <p>I've tried quite a few backbones for this competition:</p> <ul> <li>Se-ResNeXt, DenseNet, NASNet, etc. They might work as well, but converge very slowly in comparison to EfficientNet</li> <li>EfficientNet B3, B4, B5 (B6, B7 or B8 seemed to take forever for me :((… ). They worked quite well!</li> </ul> <p>I used 2 kinds of input: RGB and YCrCb. 
Their performances are toe to toe, but together they boost my score.</p> <p>Optimizer: RAdam along with Lookahead</p> <h1>Inference</h1> <ul> <li>TTAx8: flip + transpose</li> <li>Weighted ensemble: 7 models, comprising B3, B4, B5 with RGB, YCrCb input:</li> </ul> <pre><code>------------------------------------ EfficientNet-B3 RGB, YCrCb, fold 0 EfficientNet-B3 RGB, whole dataset EfficientNet-B4 RGB, YCrCb, fold 4 EfficientNet-B5 RGB, YCrCb, fold 2 ------------------------------------ </code></pre> <h1>Result</h1> <ul> <li>My best single model score: 0.92</li> <li>My best ensemble score: 0.924</li> </ul> <p>I will update this post if I remember something interesting to share. Thank you all again!</p>
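The weighted ensemble of the 7 models above can be sketched as a normalized weighted average of per-model probabilities. The actual weights are not given in the write-up, so the sketch just normalizes whatever weights you pass in; the function name is my own:

```python
import numpy as np

def weighted_ensemble(preds, weights):
    """Weighted average of per-model prediction vectors.

    preds:   list of per-model probability arrays, one per model
    weights: per-model weights (normalized to sum to 1 internally)
    """
    preds = np.asarray(preds, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # make weights a convex combination
    return (w[:, None] * preds).sum(axis=0)
```

Normalizing the weights keeps the blended output on the same probability scale as the individual models, which matters if you later threshold or rank the scores.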
TReNDS Neuroimaging
2nd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to the competition organizers for an interesting and leak-free competition! And congrats to the winners!</p> <h1>Overview</h1> <p>My solution is a 2nd-stage stacking, followed by weight optimization. <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F317344%2F4117c544c79b1e8f34a3b8287e07f0a7%2F2nd%20place%20pipeline.png?generation=1593488007825961&amp;alt=media" alt=""></p> <h1>OOF correlation heatmap</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F317344%2Fa4f1dddd12d18ef0d623d34af62cea07%2FOOF%20corr%20hm.png?generation=1593484113552540&amp;alt=media" alt=""></p> <p>These names stand for the following <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F317344%2F625b974272f82c8b1f52da3036664979%2Foofs.png?generation=1593484633888789&amp;alt=media" alt=""></p> <h1>Findings</h1> <ul> <li>SVM and linear models work well on tabular data, but tree-based models don't work as well as KNN</li> <li>3D ResNet18 is the best CNN here for me. Compared to that, ResNet50, ResNeXt50 and ResNeXt101 don't work well.</li> <li>Splitting the 3D fMRI into 3~6 pieces gave me a boost, but 8 pieces are useless.</li> <li>I'm loved by 2nd place</li> </ul> <h1>Appendix</h1> <p>脳 means brain in Japanese, and is pronounced / nou /, so <strong>脳 brainer</strong> stands for <strong>No Brainer</strong>. Actually, I joined this competition 2 weeks ago.</p>
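The "weight optimization" applied after stacking is not detailed in the write-up; one common way to do it is a simple random search over normalized blend weights on the out-of-fold predictions, sketched below. This is my own illustration, not the author's code:

```python
import numpy as np

def optimize_weights(oofs, y, metric, iters=300, seed=0):
    """Random-search blend weights on OOF predictions to minimize a metric.

    oofs:   array of shape (n_models, n_samples) of OOF predictions
    metric: callable metric(y_true, y_blend) -> float, lower is better
    """
    oofs = np.asarray(oofs, dtype=float)
    rng = np.random.default_rng(seed)
    best_w, best_score = None, np.inf
    for _ in range(iters):
        w = rng.random(len(oofs))
        w /= w.sum()                                  # convex combination
        score = metric(y, np.tensordot(w, oofs, axes=1))
        if score < best_score:
            best_w, best_score = w, score
    return best_w, best_score
```

Tuning weights on OOF predictions rather than the leaderboard is what keeps this step from degenerating into pure LB overfitting.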
M5 Forecasting - Uncertainty
4th Place Solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I published my solution according to <a href="https://mofc.unic.ac.cy/m5-competition/">the guidelines of this competition</a>.</p> <p><a href="https://github.com/marisakamozz/m5">https://github.com/marisakamozz/m5</a></p> <p>If you would like to know more about my solution, please see <a href="https://github.com/marisakamozz/m5/blob/master/ModelSummary.md">Model Summary</a>.</p> <p>Here is my <a href="https://www.slideshare.net/MasakazuMori2/kaggle-m5-forecasting-uncertainty-4th-place-solution">Presentation Slide</a>.</p> <p>This is my model structure.</p> <p><img src="https://github.com/marisakamozz/m5/raw/master/model.png" alt="Model Structure"></p> <p>Thank you.</p>
ALASKA2 Image Steganalysis
[8th Place] Brief solution.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Before posting</h1> <p>Before posting, thanks for hosting this great competition. And also, thanks to our great teammates: <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>, <a href="https://www.kaggle.com/youhanlee" target="_blank">@youhanlee</a>, <a href="https://www.kaggle.com/wonhyukahn" target="_blank">@wonhyukahn</a>, <a href="https://www.kaggle.com/bhjang" target="_blank">@bhjang</a>.</p> <p>Sorry for this brief solution, but details will be updated soon.</p> <h1>Pipeline</h1> <p>Our pipeline is below:</p> <blockquote> <p>EDA -&gt; CV strategy -&gt; Data -&gt; Augmentations -&gt; Models -&gt; Ensemble</p> </blockquote> <h1>1. EDA</h1> <blockquote> <p>will be updated soon</p> </blockquote> <h1>2. CV strategy</h1> <blockquote> <ul> <li>Stratified KFold based on quality factor and stego information insertion amount</li> </ul> </blockquote> <p>The information insertion amount can be calculated as below:</p> <blockquote> <p>a = difference between cover and stego DCT coefficients per channel<br> b = number of non-zero DCT coefficients per channel<br> information insertion amount = a / b</p> </blockquote> <p>Then, after binning the insertion amount into 10 bins, the data was divided into k folds for each quality factor. At an early stage we used 5 folds, but later 10 folds.</p> <p>Because it took so long to train the models, we didn't train all folds. One of the k folds was selected and used as hold-out.</p> <h1>3. 
Data</h1> <blockquote> <ul> <li>Image: From RGB to ReconstructedRGB</li> <li>Label: From binary to multi-label</li> </ul> </blockquote> <p>For images, we used reconstructed RGB instead of RGB. When a JPEG image is loaded using cv2, small values are rounded and truncated. Since stego information is very fine-grained, it was important to preserve these values. Thus, we created images without rounding and truncation using the jpegio library.</p> <p>For labels, as mentioned earlier in discussions, multi-label classification was used rather than a binary label. We used 10 targets (Cover, 3 quality factors * 3 stego algorithms).</p> <h1>4. Augmentations</h1> <blockquote> <ul> <li>Flip, Rotate90, Cutout, GridShuffle, GridDropout</li> </ul> </blockquote> <p>I ran a lot of augmentation experiments.<br> First and most importantly, we removed augmentations that change the original pixel values. Thus, we used <strong>Flip, Rotate90, Cutout, GridShuffle, and GridDropout</strong>. </p> <p>The best result was GridShuffle. This augmentation came from thinking about the answer to the following question: how can our model focus on fine stego information, not the shape of the object in the image? <br> To answer this question, we tried shuffling pixels, then shuffling 8x8 tiles. In a JPEG image, 8x8 tiles are important because the DCT information is contained in 8x8 tiles. Stego information is inserted at the DCT level, and since the DCT information is continuous within 8x8 tiles, we decided to mix these 8x8 tiles. <br> If the tile unit size is set to 8x8, a total of 64*64 tiles are generated. Mixing all these tiles breaks the <strong>texture information</strong> along with the shape information, which hinders the model from being trained. 
Thus, we chose suitably sized tiles that are not too small and trained the model.<br> The GridDropout is similar.</p> <p>The code is below:</p> <ul> <li>GridShuffle(Albumentations)</li> </ul> <blockquote> <p>grid_shuffle = OneOf([<br> RandomGridShuffle((2, 2), p=1.0),<br> RandomGridShuffle((2, 4), p=1.0),<br> RandomGridShuffle((2, 8), p=1.0),<br> RandomGridShuffle((2, 16), p=1.0),<br> RandomGridShuffle((2, 32), p=1.0),<br> RandomGridShuffle((4, 4), p=1.0),<br> RandomGridShuffle((4, 8), p=1.0),<br> RandomGridShuffle((4, 16), p=1.0),<br> RandomGridShuffle((4, 32), p=1.0),<br> RandomGridShuffle((8, 8), p=1.0),<br> RandomGridShuffle((8, 16), p=1.0),<br> RandomGridShuffle((8, 32), p=1.0),<br> RandomGridShuffle((16, 16), p=1.0),<br> RandomGridShuffle((16, 32), p=1.0),<br> RandomGridShuffle((32, 32), p=1.0),<br> ], p=0.7)</p> </blockquote> <h1>5. Models</h1> <blockquote> <ul> <li>Framework: Pytorch</li> <li>Backbone: Efficientnet b0, b0(double resolution), b3, b4, b7, resnext</li> <li>Scheduler: ReduceOnPlateau</li> <li>Loss: CrossEntropy Loss</li> <li>Optimizer: AdamP, AdamW</li> <li>Epochs: 100+</li> <li>Use TTA 8x</li> </ul> </blockquote> <h1>6. ETC</h1> <blockquote> <ul> <li>Freeze backbone and finetune fc layer</li> <li>Use all data as training</li> <li>Flipped Stego Images</li> </ul> </blockquote> <h1>7. Scores</h1> <ul> <li>Best single model on the public leaderboard - Public LB: 945, Private LB: 925</li> <li>Best single model on the private leaderboard - Public LB: 940, Private LB: 928</li> <li>Ensemble - Public LB: 944, Private LB: 929</li> </ul>
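The information insertion amount defined in section 2 (a / b over DCT coefficients) can be sketched in numpy. One detail is my assumption, not stated in the write-up: the non-zero count below is taken on the cover coefficients.

```python
import numpy as np

def insertion_amount(cover_dct, stego_dct):
    """Per-channel stego payload proxy: total absolute DCT difference
    divided by the number of non-zero DCT coefficients."""
    a = np.abs(np.asarray(cover_dct) - np.asarray(stego_dct)).sum()
    b = np.count_nonzero(cover_dct)     # assumption: count on the cover side
    return a / max(b, 1)                # guard against an all-zero channel

def insertion_bin(amount, edges):
    """Bin the amount into one of len(edges)+1 buckets for stratification."""
    return int(np.digitize(amount, edges))
```

Stratifying folds on this quantity (together with the quality factor) keeps easy and hard stego examples balanced across folds.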
M5 Forecasting - Accuracy
Top 3% simple solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h3>Code is here: <a href="https://github.com/btrotta/kaggle-m5">https://github.com/btrotta/kaggle-m5</a></h3> <p>The code is quite short (&lt;300 lines) and uses only fairly basic features in a LightGBM model. </p> <h3>Modelling approach</h3> <p>I didn't use any "magic" adjustment factors. I also didn't use any custom metrics, just RMSE. I think the evaluation metric is noisy, especially for products with a short history, because random fluctuations in the day-to-day sales history can cause products to be weighted very differently even if they have a similar long-term average. So I thought trying to optimise for this metric would lead to overfitting.</p> <p>Rather than using a recursive approach, I trained separate models for each day of the forecasting horizon, and for each n I recalculated the features so that the n-day-ahead model is trained on data that has been lagged by n days. Based on discussions in the forum (specifically, this post <a href="https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/144067#">https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/144067#</a>), I decided that the recursive approach was only performing well on the training period by coincidence.</p> <p>I noticed that in the test period, there are very few new products (i.e. products that have not been sold before the test period). 
So I excluded from the training set rows before the first sale date of a product in a store, and also excluded these rows when calculating aggregate features.</p> <p>I used 3 years of data to calculate the features (to reduce noise and capture seasonal trends) and 1 year to actually train the model. I excluded December from the training period because of the effect of Christmas.</p> <h3>Features</h3> <p>The feature engineering is mainly common sense: as well as the obvious date features, just lagged sales at various levels of aggregation. For the aggregated features, I took the mean of sales at 3 levels of aggregation:</p> <ul> <li>item and store</li> <li>item (aggregated over all stores)</li> <li>dept id and store id</li> </ul> <p>The idea of this was that the higher levels of aggregation provide a less noisy view of item-level and store-level trends.</p> <p>Specifically, the features are:</p> <ul> <li>dept id and store id (categorical)</li> <li>day of week, month, snap (i.e. is today a snap day for the current store)</li> <li>days since product first sold in that store</li> <li>price relative to price 1 week and 2 weeks ago</li> <li>item-level holiday adjustment factor (for each holiday and each item, calculate the average change in sales in the week leading up to the holiday and the holiday itself)</li> <li>long-term mean and variance of sales at the 3 levels of aggregation</li> <li>long-term mean and variance of sales at the 3 levels of aggregation for each day of week</li> <li>average of last 7, 14, and 28 days of sales at the 3 levels of aggregation</li> <li>average sales lagged 1-7 days at the 3 levels of aggregation</li> </ul>
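The n-day-ahead lagging described in the modelling approach (shift the target by the forecast horizon before computing rolling means, so the n-day-ahead model never sees future information) can be sketched with pandas. Column names (`item_id`, `store_id`, `sales`) follow the M5 data, but the helper itself is my own sketch, not the repo's code:

```python
import pandas as pd

def add_lag_features(df, horizon, windows=(7, 14, 28)):
    """Per (item, store) series: lag sales by `horizon` days, then take
    rolling means of the lagged series at several window sizes."""
    g = df.groupby(["item_id", "store_id"])["sales"]
    df[f"lag_{horizon}"] = g.shift(horizon)
    for w in windows:
        # shift first, then roll: the window never touches future days
        df[f"roll_mean_{w}"] = g.transform(
            lambda s: s.shift(horizon).rolling(w).mean()
        )
    return df
```

Recomputing these features once per horizon n is what replaces the recursive feed-back of predictions in this solution.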
Global Wheat Detection
185th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers and the kaggle team for hosting this competition! Also great thanks to <a href="https://www.kaggle.com/rwightman" target="_blank">@rwightman</a> for his efficientdet-pytorch repo, <a href="https://www.kaggle.com/zfturbo" target="_blank">@zfturbo</a> for his weighted-box-fusion repo, to <a href="https://www.kaggle.com/shonenkov" target="_blank">@shonenkov</a> for his amazing public notebook, based on which I started this competition, and to everyone who has been active in the discussion forum during the competition. Finally, congratulations to all the fair prize winners and to the fair medal winners.</p> <p><strong>Summary:</strong></p> <ul> <li>5-fold d2 trained on 1024x1024 + 5-fold d4 trained on 512x512</li> <li>Heavy augmentations (added more augmentations on top of Alex's public kernel augmentations, for example noise augment and scale-and-crop; this gave me a 0.01x-0.2x boost)</li> <li>Used weighted-boxes-fusion for the ensemble</li> <li>5xTTA</li> <li>Pseudo labeling (1 epoch, 1 round, training data + 2x pseudo test data)</li> </ul> <p>All models ensembled with 5xTTA and WBF achieve 0.7563 public LB / 0.6956 private LB. Adding pseudo labeling improves the score to 0.7595 public LB / 0.7050 private LB. </p> <p><strong>Did not work for me:</strong></p> <ul> <li>Cleaning the bounding boxes</li> <li>Gradient accumulation</li> <li>Cutmix and Mixup scores were not as good as the models without them</li> <li>WBF threshold tweaking</li> <li>Tweaking efficientdet loss function hyper-parameters</li> </ul>
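Weighted-boxes-fusion merges overlapping boxes from several models into score-weighted average boxes instead of discarding all but one, as NMS does. The actual solution used ZFTurbo's ensemble-boxes library; below is a deliberately simplified, self-contained toy version of the idea, with my own greedy clustering and names:

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse_boxes(boxes, scores, iou_thr=0.55):
    """Toy WBF: greedily cluster overlapping boxes (highest score first),
    then return the score-weighted average box and mean score per cluster."""
    order = np.argsort(scores)[::-1]
    clusters = []  # each cluster: [list_of_boxes, list_of_scores]
    for i in order:
        for c in clusters:
            if iou(boxes[i], c[0][0]) > iou_thr:   # compare to cluster seed
                c[0].append(boxes[i])
                c[1].append(scores[i])
                break
        else:
            clusters.append([[boxes[i]], [scores[i]]])
    fused = []
    for bs, ss in clusters:
        w = np.asarray(ss) / sum(ss)
        fused.append((np.average(np.asarray(bs), axis=0, weights=w),
                      float(np.mean(ss))))
    return fused
```

The real library also rescales scores by how many models voted for a cluster; this sketch keeps only the weighted-averaging core.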
ALASKA2 Image Steganalysis
3rd place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ALASKA2 Image Steganalysis <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for hosting this interesting, leakage-free competition. I learned a lot from this competition. I also thank the other participants; I am very impressed with their unique solutions, and I will read all the posted solutions.</p> <p>Here is a description of my solution.</p> <h2>Validation</h2> <p>I split the file names randomly into 80% and 20%, stratifying based on the quality factor. Looking at the distribution of predictions for the test set, it seems that the percentage of cover images is relatively high. However, I thought that the AUC-based metric is robust to such imbalanced data, so I didn't take any special care of it at the validation level.</p> <h2>Methods</h2> <p>I have created an ensemble of two models: one inputting 3-channel color images, and the other doing feature engineering and inputting DCT coefficients.</p> <h3>3-channel color image model (validation score: 0.9382 (RGB), 0.9378 (YUV), 0.9364 (Lab))</h3> <ul> <li>I made 3 models that input RGB, YUV, and Lab, respectively. <ul><li>I used cv2 for loading and conversion. 
I also tried to use non-quantized YCrCb directly from the DCT coefficients in the beginning, but the accuracy was almost the same as the YUV read with cv2, so I chose to use the faster cv2.</li></ul></li> <li>4-class classification</li> <li>EfficientNet-b5 <ul><li>I also tried RegNet, ResNeSt, HRNet, PyConv, and others, but EfficientNet was more accurate for this task than the other models which took a similar amount of time in my implementation.</li></ul></li> <li>Augmentation is Flip &amp; Rotate90 (8 types in total) and CutMix <ul><li>I tried to do CutMix with the data in a different folder with the same file name, but the accuracy became worse. Did it become too difficult?</li></ul></li> <li>SGD + Cosine Annealing, 50 epochs per cycle <ul><li>Hyperparameters are almost identical to the EfficientNet experiment in the <a href="https://arxiv.org/abs/1905.13214">RegNet paper</a>.</li> <li>When I tried b0 and b2, I found that the accuracy of multiple cycles of Cosine Annealing was better than one cycle with the same total number of epochs. Especially, the gain at the second cycle was large. In the end, I ran 4 cycles of 50 epochs.</li></ul></li> </ul> <h3>DCT coefficient model (validation score: 0.9022)</h3> <p>I thought it would be difficult to differentiate myself with a model that uses RGB, so I attempted to create a network that directly handles the DCT coefficients by trial and error, but this was the only one that worked.</p> <ul> <li>Input features are <ul><li>one-hot encoding of the value {&lt;= -16, -15, -14, …, -2, -1, 1, 2, …, 14, 15, 16 &lt;=} </li> <li>positional encoding (I'm not sure if it helped) <ul><li>quantization matrix / 50</li> <li>a matrix such that matrix[i, j] = cos(pi * (i % 8) / 16) * cos(pi * (j % 8) / 16)</li></ul></li></ul></li> <li>Model architecture is <ul><li>EfficientNet-b2</li> <li>Change the first stride 2 to 1</li> <li>Change the dilation of Conv2d to 8. 
This modification is applied to Conv2d layers from the first (patched) stride 2 Conv2d to the next stride 2 Conv2d.</li></ul></li> <li>Augmentation is Flip &amp; Rotate90</li> <li>SGD + Cosine Annealing, 50 epochs, 1 cycle</li> <li>This model takes a long time to train, so I was not able to do a proper ablation study, but there was a difference of about 0.01 depending on whether the stride was changed or not.</li> </ul> <h3>Ensemble with MLP (validation score: 0.9399, private: 0.930)</h3> <p>Since outputs from the DCT coefficient model are quite different from outputs from the 3-channel color image models, mixing them in well improves the accuracy; but since it is considerably weaker than the 3-channel models as a single model, it doesn't work well when simply averaged. So I used an MLP.</p> <ul> <li>Input the feature maps of DCT Binary, RGB, YUV, Lab after Average Pooling of each network. (There was a mistake, and softmax was applied to each feature map.)</li> <li>To cancel the difference in scale between the networks, I used BatchNorm as follows. <code> x = torch.cat([bn(feat.reshape(-1, 1)).reshape(feat.shape) for feat, bn in zip(feats, self.bns)], dim=1) </code></li> </ul> <h3>Holdout stacking with LightGBM to push the last 0.001 (validation score: 0.9405, private: 0.931)</h3> <p>The holdout validation set has 15000 x 4 (Cover and Stegos) x 8 (flip / rotate90) = 480000 examples, which is large enough to train LightGBM by splitting the holdout set into 5 folds. I used the MLP predictions and the t-SNE of each CNN feature map as features. The hyperparameters are determined by <a href="https://optuna.readthedocs.io/en/latest/reference/generated/optuna.integration.lightgbm.LightGBMTunerCV.html#optuna.integration.lightgbm.LightGBMTunerCV">LightGBMTunerCV</a>, further splitting the train data in each fold into 80% and 20%.</p>
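To make the DCT-model input features above concrete, here is a minimal numpy sketch; the function name and exact channel layout are my assumptions, not the author's published code:

```python
import numpy as np

def dct_onehot_features(dct, qm):
    """Build the DCT-model input described above (sketch; names are mine).

    dct: (H, W) int array of JPEG DCT coefficients for one channel
    qm:  (8, 8) quantization matrix
    Returns an (H, W, 34) float array: 32 one-hot channels for the clipped
    coefficient value plus two positional-encoding channels.
    """
    h, w = dct.shape
    # One-hot of the value clipped to [-16, 16]; the {<= -16} and {>= 16}
    # bins absorb the tails, and zero coefficients get an all-zero one-hot.
    vals = np.clip(dct, -16, 16)
    bins = [v for v in range(-16, 17) if v != 0]  # 32 bins, zero excluded
    onehot = np.stack([(vals == v).astype(np.float32) for v in bins], axis=-1)
    # Positional encoding 1: quantization matrix / 50, tiled over 8x8 blocks.
    q = np.tile(qm, (h // 8, w // 8)).astype(np.float32) / 50.0
    # Positional encoding 2: matrix[i, j] = cos(pi*(i%8)/16) * cos(pi*(j%8)/16).
    i, j = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    basis = (np.cos(np.pi * (i % 8) / 16) * np.cos(np.pi * (j % 8) / 16)).astype(np.float32)
    return np.concatenate([onehot, q[..., None], basis[..., None]], axis=-1)
```

Both positional-encoding channels repeat with the 8x8 JPEG block grid, which is presumably why the write-up pairs them with the dilation-8 convolutions.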
SIIM-ISIC Melanoma Classification
176th Place Write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank Kaggle and the organizers for hosting such an interesting competition, and thank my team <a href="https://www.kaggle.com/seshurajup" target="_blank">@seshurajup</a> <a href="https://www.kaggle.com/shravankoninti" target="_blank">@shravankoninti</a> <a href="https://www.kaggle.com/jakkulasasikiran" target="_blank">@jakkulasasikiran</a> <a href="https://www.kaggle.com/mounikabellamkonda" target="_blank">@mounikabellamkonda</a>. This was great learning and collaboration.</p> <p>Special thanks to <a href="https://www.kaggle.com/shravankoninti" target="_blank">@shravankoninti</a>; we worked hard together for the last two months.</p> <h2>[Summary]</h2> <ul> <li>First, we trained a simple model using <strong>E1 256X256</strong> with 5-fold <strong>CV: 0.88</strong></li> <li>Then we analyzed the predictions, mostly the <strong>FP</strong> and <strong>FN</strong> cases</li> </ul> <p><strong>FP</strong> <code>Target = 1 and Predict &lt; 0.2</code><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2058044%2F6d599f0c9466d695b86a10bad076b821%2FFP.png?generation=1597827587126099&amp;alt=media" alt=""><br> <strong>FN</strong> <code>Target = 0 and Predict &gt; 0.8</code><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2058044%2Fc123c7520e87fcb326cb52a2cc3dd1c6%2FFN.png?generation=1597827662475852&amp;alt=media" alt=""></p> <ul> <li>we removed <strong>FP</strong> and <strong>FN</strong> based on the threshold </li> <li>retrained the same model, excluding 
<strong>FP</strong> and <strong>FN</strong>; CV increased to <strong>0.9012</strong></li> <li>using the same <strong>E1</strong> model to find <strong>FP</strong> and <strong>FN</strong> on the <strong>2017-2018-2019</strong> data</li> <li>we removed <strong>FP</strong> and <strong>FN</strong> on <strong>2017-2018-2019</strong> </li> <li>the same model trained again on <strong>2017-2018-2019-2020</strong> gives CV:0.92+ LB:0.95+ PLB:0.925+</li> </ul> <h2>[PyTorch]</h2> <p>For the PyTorch setup, we used the DeepFake winners' solution scripts: <a href="https://github.com/selimsef/dfdc_deepfake_challenge" target="_blank">link</a>. Big thanks to <a href="https://www.kaggle.com/selimsef" target="_blank">@selimsef</a></p> <ul> <li>For CV we used the <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">Triple Stratified Leak-Free KFold CV</a>, thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> </li> <li>For training, we used PyTorch 1.6 native AMP. Notebook: <a href="https://www.kaggle.com/gopidurgaprasad/siim-pytorch-1-6-native-amp" target="_blank">link</a></li> <li>Optimizer: <code>Adam</code></li> <li>Schedule: <code>LearningRateScheduler</code> from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s notebook</li> <li>CUTMIX + MIXUP, which worked some of the time </li> <li>Sampling: Upsample (US) + Downsample (DS) + No sampling (NS) for the same model; at the end we take a simple average of those. 
<br> <code>sampler = BalanceClassSampler(labels=train_dataset.__get_labels__(), mode="downsampling")</code><br> you can find the WeightedClassSampler in this notebook: <a href="https://www.kaggle.com/gopidurgaprasad/pytorch-weightedclasssampler" target="_blank">link</a></li> <li>Loss: simply <code>BinaryCrossentropy</code></li> <li>Augmentations:</li> </ul> <pre><code>A.Compose([ A.OneOf([ A.VerticalFlip(), A.HorizontalFlip(), A.Flip(), A.Rotate() ], p=0.5), A.Transpose(p=0.2), A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15), A.RandomRotate90(p=0.1), A.OneOf([ A.RandomGridShuffle(p=0.1), A.Cutout(num_holes=8, max_h_size=size//8, max_w_size=size//8, fill_value=0, p=0.2), A.CoarseDropout(max_holes=4, max_height=size//8, max_width=size//8, p=0.2), A.GridDropout(p=0.2), RandomEraser(p=0.2), BitMask(size=size, p=0.1), ], p=0.11), A.RandomBrightness(limit=(-0.2,0.2), p=0.1), A.OneOf([ IsotropicResize(max_side=size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC), IsotropicResize(max_side=size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_LINEAR), IsotropicResize(max_side=size, interpolation_down=cv2.INTER_LINEAR, interpolation_up=cv2.INTER_LINEAR), ], p=1), A.PadIfNeeded(min_height=size, min_width=size, border_mode=cv2.BORDER_CONSTANT) ]) </code></pre> <h4>PyTorch Models</h4> <table> <thead> <tr> <th>Model</th> <th>SIZE</th> <th>RESIZE</th> <th>SAMPLE</th> <th>FOLDS</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>E4</td> <td>512</td> <td>380</td> <td>DS</td> <td>5</td> <td>0.9175</td> </tr> <tr> <td>Sk50</td> <td>512</td> <td>512</td> <td>DS</td> <td>5</td> <td>0.9178</td> </tr> <tr> <td>E6</td> <td>768</td> <td>528</td> <td>DS</td> <td>10</td> <td>0.9266</td> </tr> <tr> <td>E5</td> <td>512</td> <td>456</td> <td>DS</td> <td>5</td> <td>0.9288</td> </tr> <tr> <td>E7</td> <td>512</td> <td>512</td> <td>DS</td> <td>5</td> <td>0.9224</td> </tr> <tr> <td>Dpn92</td> <td>512</td> <td>512</td> <td>DS</td> <td>5</td> 
<td>0.9218</td> </tr> <tr> <td>E5</td> <td>512</td> <td>456</td> <td>DS</td> <td>10</td> <td>0.9229</td> </tr> <tr> <td>D201</td> <td>512</td> <td>512</td> <td>DS</td> <td>5</td> <td>0.9266</td> </tr> <tr> <td>D201</td> <td>512</td> <td>512</td> <td>US</td> <td>5</td> <td>0.9240</td> </tr> <tr> <td>E6</td> <td>768</td> <td>528</td> <td>US</td> <td>5</td> <td>0.9203</td> </tr> <tr> <td>E4</td> <td>512</td> <td>380</td> <td>US</td> <td>10</td> <td>0.9242</td> </tr> <tr> <td>E5</td> <td>512</td> <td>456</td> <td>NS</td> <td>5</td> <td>0.9153</td> </tr> <tr> <td>D201</td> <td>512</td> <td>224</td> <td>NS</td> <td>5</td> <td>0.9185</td> </tr> <tr> <td>B5</td> <td>512</td> <td>456</td> <td>US</td> <td>10</td> <td>0.9234</td> </tr> <tr> <td>D201</td> <td>512</td> <td>224</td> <td>US</td> <td>5</td> <td>0.9068</td> </tr> </tbody> </table> <h4>SimpleAverage</h4> <ul> <li>simply average all models CV:<strong>0.9372</strong> LB:<strong>0.9544</strong> PLB<strong>0.9388</strong></li> </ul> <h4>OptimizeAUC</h4> <ul> <li>for optimizing OOF AUC we are using OptimizeAUC function from <a href="https://www.amazon.in/dp/B089P13QHT/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1#:~:text=This%20book%20is%20for%20people,learning%20and%20deep%20learning%20problems." 
target="_blank">Approaching (Almost) Any Machine Learning Problem</a> by <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a>. Big thanks to him.</li> <li>you can find the OptimizeAUC function in this notebook: <a href="https://www.kaggle.com/gopidurgaprasad/optimize-auc-using-oof" target="_blank">link</a></li> <li>CV:<strong>0.94741</strong> LB:<strong>0.9492</strong> PLB:<strong>0.9344</strong></li> </ul> <p><strong>PyTorch Training Notebook:</strong> <a href="https://www.kaggle.com/gopidurgaprasad/siim-final-d201-ns" target="_blank">link</a></p> <h2>[TF]</h2> <ul> <li>we used the <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">Triple Stratified KFold with TFRecords</a> again; big thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a></li> <li>thanks to <a href="https://www.kaggle.com/shravankoninti" target="_blank">@shravankoninti</a> for running those TF models one by one every day</li> </ul> <table> <thead> <tr> <th>Model</th> <th>SIZE</th> <th>FOLDS</th> <th>CV</th> </tr> </thead> <tbody> <tr> <td>E7</td> <td>512</td> <td>15</td> <td>0.8826</td> </tr> <tr> <td>E6</td> <td>512</td> <td>15</td> <td>0.9194</td> </tr> <tr> <td>E5</td> <td>512</td> <td>15</td> <td>0.9243</td> </tr> <tr> <td>E7</td> <td>512</td> <td>10</td> <td>0.9163</td> </tr> <tr> <td>E6</td> <td>512</td> <td>10</td> <td>0.9027</td> </tr> <tr> <td>E5</td> <td>512</td> <td>10</td> <td>0.9275</td> </tr> <tr> <td>E4</td> <td>512</td> <td>10</td> <td>0.9114</td> </tr> <tr> <td>E3</td> <td>512</td> <td>10</td> <td>0.9133</td> </tr> <tr> <td>E2</td> <td>512</td> <td>10</td> <td>0.9154</td> </tr> <tr> <td>E1</td> <td>512</td> <td>15</td> <td>0.9142</td> </tr> <tr> <td>E7</td> <td>768</td> <td>5</td> <td>0.9076</td> </tr> <tr> <td>E6</td> <td>768</td> <td>5</td> <td>0.9161</td> </tr> <tr> <td>E5</td> <td>768</td> <td>5</td> <td>0.9063</td> </tr> <tr> <td>E4</td> <td>768</td> <td>5</td> <td>0.9144</td> </tr> <tr> <td>E3</td> 
<td>768</td> <td>5</td> <td>0.9019</td> </tr> <tr> <td>E2</td> <td>768</td> <td>5</td> <td>0.8917</td> </tr> <tr> <td>E1</td> <td>768</td> <td>5</td> <td>0.8866</td> </tr> <tr> <td>E7</td> <td>384</td> <td>5</td> <td>0.9212</td> </tr> <tr> <td>E6</td> <td>384</td> <td>5</td> <td>0.9237</td> </tr> <tr> <td>E5</td> <td>384</td> <td>5</td> <td>0.9278</td> </tr> </tbody> </table> <h4>SimpleAverage</h4> <ul> <li>simply average all TF models CV:<strong>0.9301</strong> LB:<strong>0.9503</strong> PLB<strong>0.9376</strong></li> </ul> <h4>OptimizeAUC</h4> <ul> <li>CV:<strong>0.9446</strong> LB:<strong>0.9538</strong> PLB<strong>0.9348</strong></li> </ul> <h2>[PyTorch + TF]</h2> <h4>SimpleAverage</h4> <ul> <li>simply average all PyTorch and TF models CV:<strong>0.9402</strong> LB:<strong>0.9538</strong> PLB:<strong>0.9399</strong></li> <li>This is our <strong>best-selected submission</strong></li> </ul> <h4>OptimizeAUC</h4> <ul> <li>CV:<strong>0.949354</strong> LB:<strong>0.9496</strong> PLB:<strong>0.9362</strong></li> </ul> <h2>🎊 Congratulations to all Winners and Learners 🎊</h2>
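The OptimizeAUC idea used above can be sketched as a direct search for blend weights that maximize out-of-fold AUC. This is a minimal version of the concept; the restart scheme and function names here are my own, not the book's exact implementation:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def neg_auc(weights, preds, y):
    # Negative AUC of the weighted blend (scipy minimizes; we want max AUC).
    return -roc_auc_score(y, preds @ weights)

def optimize_auc_weights(oof_preds, y):
    """Search blend weights that maximize out-of-fold AUC (sketch).

    oof_preds: (n_samples, n_models) out-of-fold predictions.
    Returns (weights, best_auc).
    """
    n_models = oof_preds.shape[1]
    # Start Nelder-Mead from each single model and from the uniform blend,
    # then keep the best local optimum found.
    starts = list(np.eye(n_models)) + [np.ones(n_models) / n_models]
    results = [minimize(neg_auc, w0, args=(oof_preds, y), method="Nelder-Mead")
               for w0 in starts]
    best = min(results, key=lambda r: r.fun)
    return best.x, -best.fun
```

Starting from each single model guarantees the blend is at least as good, on OOF data, as the best individual model; note from the PLB numbers above, though, that maximizing OOF AUC can still generalize worse than a simple average.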
SIIM-ISIC Melanoma Classification
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners, especially my teammates Qishen <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> and Gary <a href="https://www.kaggle.com/garybios" target="_blank">@garybios</a> who are both computer vision competition veterans. I learned a lot from you. So happy to see you both on top of another LB after the deepfake drama.</p> <p>Big shoutout to my colleague Chris <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> who I feel served as an unofficial host of this competition with all your datasets, notebooks and tutorials. The competition wouldn't have been this popular without all your contributions. We used your resized jpg images and triple stratified leak-free folds. Thanks Chris.</p> <h2>How we survived the shake</h2> <p>The test set is very small, with a very small proportion of positive samples. So the public LB has huge variance. Even the 2020 train data is not big enough for validation purposes, due to its few positive samples. </p> <p>To have stable validation, we used 2018+2019+2020's data for both train and <strong>validation</strong>. We track two CV scores, <code>cv_all</code> and <code>cv_2020</code>. The former is much more stable than the latter.</p> <p>The second key to LB survival is ensembling. Our single model's LB-CV correlation is essentially 0, but the bigger the ensemble, the more stable the LB. 
In the last few days, our ensemble's LB was steadily increasing as we added better models.</p> <p>Our final ensemble 1 optimizes <code>cv_all</code> and final ensemble 2 optimizes <code>cv_2020</code>.</p> <p>Ensemble 1: <code>cv_all=0.9845, cv_2020=0.9600, public=0.9442, private=0.9490</code> (1st place)<br> Ensemble 2: <code>cv_2020=0.9638, public=0.9494, private=0.9481</code> (3rd place)</p> <p>Our best single model has <code>cv_2020=0.9481</code></p> <p>All scores above are 5-fold, TTAx8.</p> <h2>TPU vs GPU, TF vs torch</h2> <p>TPU with TF seems to be dominating the public notebooks in this competition. It seems to be faster than torch/GPU on EfficientNets. However, our experiments showed that its high LB scores are due to better luck with the public LB. Our torch/GPU models have much better CV, thanks to the better PyTorch ecosystem and flexibility for faster experiments.</p> <h2>Models</h2> <p>Our ensembles consist of EfficientNet B3-B7, se_resnext101, resnest101. There are models with or without meta data. Input size ranges from 384 to 896. (All inputs are from Chris's resized jpgs. For example, for 896 input we read 1024 jpgs and resize to 896.) </p> <h2>Meta data</h2> <p>In some (not all) of our models, we used 14 meta-data features from two public notebooks (<a href="https://www.kaggle.com/nroman/melanoma-pytorch-starter-efficientnet" target="_blank">this</a> and <a href="https://www.kaggle.com/awsaf49/xgboost-tabular-data-ml-cv-85-lb-787#Image-Size" target="_blank">this</a>), as illustrated below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1120704%2Fae8a6c8597a1f1ab4649fc72627c9378%2Fmeta_NN.png?generation=1597734243315450&amp;alt=media" alt=""></p> <h2>Targets</h2> <p>We find that using diagnosis as targets with cross entropy loss instead of a binary target with BCE loss can boost the score by ~0.01. </p> <p>2020 data's diagnosis sets and 2019 data's diagnosis sets are different. We manually map 2020's to 2019's. 
There are 9 labels.</p> <pre><code>2020 seborrheic keratosis -&gt; BKL 2020 lichenoid keratosis -&gt; BKL 2020 solar lentigo -&gt; BKL 2020 lentigo NOS -&gt; BKL 2020 cafe-au-lait macule -&gt; unknown 2020 atypical melanocytic proliferation -&gt; unknown 2020 nevus -&gt; NV 2020 melanoma -&gt; MEL </code></pre> <p>For prediction, we simply take the <code>MEL</code> class's softmax probability.</p> <h2>Augmentations</h2> <pre><code>transforms_train = A.Compose([ A.Transpose(p=0.5), A.VerticalFlip(p=0.5), A.HorizontalFlip(p=0.5), A.RandomBrightness(limit=0.2, p=0.75), A.RandomContrast(limit=0.2, p=0.75), A.OneOf([ A.MotionBlur(blur_limit=5), A.MedianBlur(blur_limit=5), A.GaussianBlur(blur_limit=5), A.GaussNoise(var_limit=(5.0, 30.0)), ], p=0.7), A.OneOf([ A.OpticalDistortion(distort_limit=1.0), A.GridDistortion(num_steps=5, distort_limit=1.), A.ElasticTransform(alpha=3), ], p=0.7), A.CLAHE(clip_limit=4.0, p=0.7), A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=20, val_shift_limit=10, p=0.5), A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=15, border_mode=0, p=0.85), A.Resize(image_size, image_size), A.Cutout(max_h_size=int(image_size * 0.375), max_w_size=int(image_size * 0.375), num_holes=1, p=0.7), A.Normalize() ]) transforms_val = A.Compose([ A.Resize(image_size, image_size), A.Normalize() ]) </code></pre> <h2>Post processing</h2> <p>When ensembling different folds, or different models, we first rank all the probabilities of each model/fold, to ensure they are evenly distributed. In pandas, it can be done by <code>df['pred'] = df['pred'].rank(pct=True)</code></p> <h2>Code</h2> <p><a href="https://github.com/haqishen/SIIM-ISIC-Melanoma-Classification-1st-Place-Solution" target="_blank">https://github.com/haqishen/SIIM-ISIC-Melanoma-Classification-1st-Place-Solution</a></p>
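The rank post-processing described above extends naturally to blending several folds or models; here is a hedged sketch (the DataFrame column names are my assumption) of averaging after converting each model's probabilities to percentile ranks:

```python
import pandas as pd

def rank_average(pred_dfs):
    """Rank-normalize each model's predictions before averaging (sketch).

    pred_dfs: list of DataFrames, each with image names in 'image_name' and
    probabilities in 'pred'. Ranking to percentiles puts every model's scores
    on the same uniform [0, 1] scale, so no single model's probability
    calibration dominates the blend.
    """
    blended = None
    for df in pred_dfs:
        # Index by image so addition aligns rows even if orders differ.
        ranked = df.set_index("image_name")["pred"].rank(pct=True)
        blended = ranked if blended is None else blended + ranked
    return (blended / len(pred_dfs)).reset_index(name="pred")
```

Because AUC depends only on the ordering of predictions, replacing probabilities with their percentile ranks loses nothing for the metric while making the per-model distributions comparable.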
SIIM-ISIC Melanoma Classification
[2nd place] Solution Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Code available here: <a href="https://github.com/i-pan/kaggle-melanoma" target="_blank">https://github.com/i-pan/kaggle-melanoma</a></p> <p>Wow. Did not expect this. But super excited to finally get my solo gold medal and complete the journey to Grandmaster. I'm grateful to benefit from the shakeup this time after suffering in PANDA. Thank you to all the organizers, and congratulations to all the other winners and participants. </p> <p>This year, I accomplished 2 milestones: 1) I graduated medical school and became a doctor; and 2) I became a Kaggle Competitions Grandmaster. I actually just started my intern year as a doctor working 70-80 hours/week so did not have too much time to dedicate to this competition. The key was having a pipeline that allowed me to quickly iterate on experiments so I could start them in the morning, go to work, then analyze the results when I came back. Long story short, to all those who are starting out, keep going. Put in the time and effort. Compete, read winners' solutions, and learn, over and over again. </p> <p>I have to thank my teammates in previous competitions <a href="https://www.kaggle.com/felipekitamura" target="_blank">@felipekitamura</a>, <a href="https://www.kaggle.com/alexandrecc" target="_blank">@alexandrecc</a>, and <a href="https://www.kaggle.com/jamesphoward" target="_blank">@jamesphoward</a>. And I also have to thank the Kaggle community as I have learned so much from participating.</p> <p>Also huge thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for all the insights he shared. 
I used his triple stratified dataset to split my data for CV. </p> <p><strong>Environment</strong><br> PyTorch 1.6 with automatic mixed precision<br> 4x NVIDIA GTX 1080 Ti 11GB provided by HOSTKEY (<a href="http://www.hostkey.com" target="_blank">www.hostkey.com</a>) as part of their Kaggle competitions grant<br> 4x Quadro RTX 6000 24GB </p> <p><strong>Batch Size</strong><br> In the beginning, I was training models on GTX 1080 Ti GPUs provided by HOSTKEY. I wasn’t able to get multi-GPU training to work, so I was just experimenting with batch sizes of 8-16 on a single 11GB 1080 Ti. My CV scores were not that high, usually a 5-fold average of around 0.92, with similar results on LB. The TPU kernels were doing so well (on public LB, at least), that I thought it was in part due to increased batch size. At that point, I switched over to my 4x Quadro RTX6000 24GB setup to leverage more GPU memory and multi-GPU training. I aimed for a batch size of 64 while maximizing image resolution for a particular backbone (and if that would not fit, I settled for BS 32 with gradient accumulation 2).</p> <p><strong>Backbones &amp; Image Resolution</strong><br> I tried several backbones in the EfficientNet, SE-ResNeXt, and ResNeSt families. I also tried BiT-ResNet (recently released by Google). EfficientNet performed better on CV so I decided to stick with EfficientNets for the remainder of the competition. I used the 1080 Ti GPUs to experiment with different backbones, all other hyperparameters held constant. Some of you asked whether this necessarily transfers when I switch over to a larger GPU and increase the batch size/change the image resolution. I did briefly compare backbones on the larger GPU and the results seemed to be consistent (EfficientNet &gt; all others). 
</p> <p>I experimented with backbones of different sizes from pruned EfficientNet-B3 to EfficientNet-B8, using the implementation from <a href="https://github.com/rwightman/pytorch-image-models" target="_blank">https://github.com/rwightman/pytorch-image-models</a>. For smaller backbones, I tried larger resolutions (up to 1024 x 1024) and for EfficientNet-B8 I went down to 384 x 384. The image size for each backbone was selected so that I could use a batch size of 64 (16/GPU) during training. I found that smaller EfficientNets at higher resolutions were not as good. The best models for me were EfficientNet-B6 (initialized with noisy student weights) and EfficientNet-B7, at image resolutions of 512 x 512 and 640 x 640, respectively, so I only used these moving forward.</p> <p><strong>Base Model</strong><br> My base model was your standard convolutional neural network backbone with a linear classification head. I used generalized mean pooling with trainable parameter p (not sure if this was any better than average or max, as I just stuck with GeM from the beginning). I also used multisample dropout following <a href="https://www.kaggle.com/qishenha" target="_blank">@qishenha</a>'s implementation in one of his winning NLP solutions. </p> <p><strong>From 2 Class to 3 Class</strong><br> I felt that more granular classes would result in better feature representations that could help improve performance. The majority of melanomas are dark (the exception being rare amelanotic melanomas), so differentiating them from benign nevi is probably the most challenging task. The 2019 data all had auxiliary diagnoses, including nevi, whereas a large fraction of the 2020 data was unknown. </p> <p>I trained a model on 2019 data only using the diagnosis as the target. Then, I applied this model to the 2020 data. My main focus was on labeling the unknowns as nevus or not nevus, as I know they are not melanoma. 
To find the threshold at which I would label an image as nevus, I used the 5th percentile of the 2019 model’s predictions on the 2020 data which had a known label of nevus.</p> <p>Now, all of the 2019 and 2020 data has a label of other, benign nevus, or melanoma, and I trained my model on these 3 classes using vanilla cross-entropy loss. I did not try label smoothing.</p> <p><strong>Upsampling</strong><br> There was a lot of discussion over whether or not to upsample malignant images or not. I did upsample malignant images for 2020 data. Because I used 2019 data and the percentage of melanomas in that dataset was much higher, I wanted to make sure that the 2019 melanomas did not overwhelm the 2020 melanomas. To that end, I upsampled the 2020 melanomas 7 times so that there was about an equal number of melanomas from both datasets.</p> <p><strong>Training</strong><br> I used AdamW optimizer and cosine annealing with warm restarts scheduler, initial learning rate 3.0E-4. I did 3 snapshots, 2 epochs each for EfficientNet-B6 and 3 epochs each for EfficientNet-B7. I noticed that it did not take long for the models to start overfitting. I found that this gave better results than using one cycle, so I stuck with it. Out of the 3 snapshots, I just took the one that did the best on that validation fold. Every experiment I ran, I did 5-fold CV using Chris Deotte’s triple stratified data splits. Single validation folds were not stable for me, so in order to really understand my model performance and the effects of my adjustments I had to look at 5-fold CV average. I only validated on 2020 data. </p> <p>In the beginning, I was using metadata by using embeddings for age, sex, and location. Each embedding was mapped to a 32-D vector and concatenated to the final feature vector before input into the linear classification layer. I did not want to spend too much time tuning this because I was afraid I would overfit to the distribution of the training set. 
I just used mean/mode imputation for missing values. </p> <p>It wasn’t until the last several days of the competition that I decided to train models without metadata, so I could be eligible for the without context special prize. It turns out that these models were actually my highest scoring private LB solutions!</p> <p><strong>Augmentations</strong><br> I knew that augmentation would be important given the small percentage of melanomas in the 2020 data. I used the RandAugment strategy, implemented here: <a href="https://github.com/ildoonet/pytorch-randaugment" target="_blank">https://github.com/ildoonet/pytorch-randaugment</a>. I used N=3 augmentations with magnitude M/30 where M was sampled from a Poisson distribution with mean 12 for extra stochasticity. For those unfamiliar with RandAugment, M is essentially the “hardness” of the augmentation (angle for rotation, % zoom, gamma for contrast adjustment, etc.). For augmentations like flips, M is not relevant. I tried other augmentations such as mixup, cutmix, and grid mask, but those did not help.</p> <p>I also used square cropping during training and inference. During training, a square was randomly cropped from the image if it was rectangular (otherwise, the entire image was used), where the length of the square image was the size of the shortest side (i.e., 768x512 would be cropped to 512x512). During inference, I spaced out 10 square crops as TTA and took the average as the final prediction (again, unless the image was already square - then no TTA was applied). I found that this gave me better results than rectangular crops or using the whole image. </p> <p><strong>Pseudolabeling</strong><br> Pseudolabeling was key to my solution. Given the limited number of 2020 melanomas, I felt that pseudolabeling would help increase performance. 2019 melanomas were helpful but still different from 2020 melanomas. 
I took my 5-fold EfficientNet-B6 model, trained without metadata, and obtained soft pseudolabels (3 classes) for the test set. When combining the test data with the training data (2019+2020), I upsampled images with melanoma prediction &gt; 0.5 7 times (same factor as I did for 2020 training data). I used <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s implementation (<a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173733" target="_blank">https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173733</a>) of cross entropy in PyTorch (without label smoothing) so I could use soft pseudolabels. </p> <p><strong>CV vs LB</strong><br> I knew early on that it would be easy to fit public LB, given the small number of melanomas that would be in the public test. At the same time, the CV for my different experiments was much tighter than the LB, so I was nervous to fully trust CV as there may have been differences between training and test data. With that in mind, I favored solutions that had reasonably high CV and LB. </p> <p>There is a fair amount of luck that goes into picking the right solution, but you should be able to justify to yourself why you are picking a certain solution over another (going by CV score, LB score, some combination of CV/LB, or some hypotheses about the private test set that would favor one solution over another). </p> <p>My 2nd place solution was an ensemble of 3 5-fold models:</p> <ul> <li>EfficientNet-B6, 512x512, BS64, no metadata (CV 0.9336 / public 0.9534)</li> <li>EfficientNet-B7, 640x640, BS32, gradient accumulation 2, no metadata (CV 0.9389 / public 0.9525)</li> <li>Model 1, trained on combined training and pseudolabeled test data (CV 0.9438 / public 0.9493)</li> </ul> <p>Note that the CV score does not account for the 5-fold blend effect. 
</p> <p>My highest scoring private LB solution was actually model 3 alone, which I did not select.</p> <p>I trained other models with metadata, but best private LB score was 0.945 (public LB 0.959), with similar CV. </p>
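The generalized mean (GeM) pooling with trainable p mentioned in the Base Model section can be written in PyTorch as follows; this is the standard formulation from the image-retrieval literature, not necessarily the author's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized mean pooling with a trainable exponent p (sketch)."""

    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)  # learned alongside the backbone
        self.eps = eps

    def forward(self, x):                       # x: (N, C, H, W) feature maps
        x = x.clamp(min=self.eps).pow(self.p)   # clamp keeps the power well-defined
        x = F.avg_pool2d(x, x.size()[-2:])      # mean over the full spatial grid
        return x.pow(1.0 / self.p)              # (N, C, 1, 1)
```

p = 1 recovers plain average pooling and large p approaches max pooling, so the head can learn where to sit in between.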
SIIM-ISIC Melanoma Classification
139th Place Summary
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to the winners. </p> <p>I would like to thank my teammate Sebastien (@Anhmeow) for his amazing contribution in getting this position. Our best solution was a rank ensemble of 8 different models, which we created in parallel and combined using stacking. </p> <p>We had created 80 models in total and selected only those models which were less correlated.</p> <ol> <li>EfficientNet B4 - Image size - 512 - Sebastien</li> <li>EfficientNet B4 - Image Size - 768 - Sebastien</li> <li>EfficientNet B6 - Image Size - 768 - Sebastien</li> <li>EfficientNet B5 - Image Size - 384 - Manoj</li> <li>EfficientNet B6 - Image Size - 512 - Including external data - Manoj</li> <li>EfficientNet B5 - Image size - 384 - without external data - Manoj</li> <li>EfficientNet B5 - Image size - 384 - Including external data - Manoj</li> <li>EfficientNet B6 - Image size - 512 - without external data - Sebastien</li> </ol> <p>The mean ensemble of these models scored around 0.9432 in private LB while the rank ensemble scored 0.9406. </p> <p>Special thanks to <a href="https://www.kaggle.com/Cdeotte" target="_blank">@Cdeotte</a> for providing wonderful notebooks and discussion threads.</p> <p>What we tried but couldn't complete:</p> <ol> <li><p>GANs - Inspired by this paper <a href="https://www.paperswithcode.com/paper/melanoma-detection-using-adversarial-training-1" target="_blank">https://www.paperswithcode.com/paper/melanoma-detection-using-adversarial-training-1</a> and Rohit's post. 
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F413189%2F3dd7a16f80823987e41fcd2d9f252904%2FGAN%20Sample.png?generation=1597751688287238&amp;alt=media" alt=""></p></li> <li><p>Chipnet - Even though the performance was impressive, we couldn't complete it due to paucity of time. </p></li> </ol>
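Selecting the less-correlated models out of the 80 candidates can be sketched by greedily filtering on pairwise correlations of out-of-fold predictions; the threshold and function name here are my own illustration, not the team's actual procedure:

```python
import numpy as np

def select_uncorrelated(oof_preds, max_corr=0.95):
    """Greedy selection: keep a model only if its OOF predictions are not
    too correlated with every model already kept (sketch).

    oof_preds: (n_samples, n_models) array of out-of-fold predictions.
    Returns the kept model indices, in order.
    """
    corr = np.corrcoef(oof_preds.T)  # (n_models, n_models) correlation matrix
    kept = []
    for j in range(oof_preds.shape[1]):
        if all(abs(corr[j, k]) < max_corr for k in kept):
            kept.append(j)
    return kept
```

Dropping near-duplicate models keeps the ensemble diverse, which is the point of combining 8 models rather than all 80.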
SIIM-ISIC Melanoma Classification
21st Public - 53rd Private - Trust Your CV
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Melanoma Model Ensemble!</h1> <p>Thank you Kaggle, SIIM, and ISIC for an exciting competition. Thank you Kagglers for wonderful shared content and great discussions! </p> <p>Early on I decided to build a large ensemble instead of optimizing a single model. The AUC metric seemed very unstable with this unbalanced dataset and using ensembles, heavy TTA, and crop augmentation helped stabilize it.</p> <h1>My Final 3 Submissions</h1> <p>My main (first) submission was an ensemble that maximized CV where all models used the same <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/165526" target="_blank">triple stratified leak-free CV</a> with <code>seed = 42</code>. (How to CV explained <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175614" target="_blank">here</a>). Next I believed public notebooks could add diversity too, so my final 3 submissions were:</p> <ul> <li><code>Sub_1</code> - 9 models of mine ensemble - CV 0.9505 LB 0.9578 - Private 0.9418</li> <li><code>Sub_2</code> - 5 mine plus 10 public single models - CV ??? LB 0.9662 - Private 0.9425</li> <li><code>Sub_3 = 0.75 * Sub_1 + 0.25 * Sub_2</code> - CV ??? LB 0.9603 - Private 0.9429</li> </ul> <h1>Crop Augmentation</h1> <p>Crop Augmentation was key to prevent overfitting during training when using external data, upsampling, and large EfficientNet backbones. 
Crop augmentation also helped stabilize AUC particularly when used in TTA.</p> <p>Previously we saw that using different image sizes adds diversity (explained <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/160147" target="_blank">here</a>). What many people don't realize is that you can use TFRecord sized 512x512 but train on random 256x256 crops (different each epoch). Then training goes fast because your EfficientNet only processes size 256x256 but you are getting features from 512x512 resolution. This allows us to train quickly on large sizes such as 1024x1024 and 768x768 (using crops of 512 and 384 respectively).</p> <p>In the picture below, we read the top row from TFRecords, then random crop, then train with the bottom row.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fb9fd40b45aa428e23a9a470322aa8c7b%2Fcrop.png?generation=1597700942674639&amp;alt=media" alt=""></p> <h1>My Final Models</h1> <p>The following 8 models have ensemble CV 0.9500, Public LB 0.9577, and Private LB 0.9420 . 
Then ensembling meta data with geometric mean: <code>image_ensemble**0.9 * tabular_model**0.1</code>, increases CV to 0.9505, Public LB to 0.9578, and Private LB to 0.9418.</p> <table> <thead> <tr> <th>CV</th> <th>LB</th> <th>read size</th> <th>crop size</th> <th>effNet</th> <th>ext data</th> <th>upsample</th> </tr> </thead> <tbody> <tr> <td>0.936</td> <td>0.956</td> <td>512</td> <td>384</td> <td>B5</td> <td>2018</td> <td>1,1,1,1</td> </tr> <tr> <td>0.935</td> <td>0.937</td> <td>768</td> <td>512</td> <td>B6</td> <td>2019 2018</td> <td>3,3,0,0</td> </tr> <tr> <td>0.935</td> <td>0.949</td> <td>768</td> <td>512</td> <td>B7</td> <td>2018</td> <td>1,1,1,1</td> </tr> <tr> <td>0.933</td> <td>0.950</td> <td>1024</td> <td>512</td> <td>B6</td> <td>2018</td> <td>2,2,2,2</td> </tr> <tr> <td>0.927</td> <td>0.942</td> <td>768</td> <td>384</td> <td>B4</td> <td>2018</td> <td>0,0,0,0</td> </tr> <tr> <td>0.920</td> <td>0.941</td> <td>512</td> <td>384</td> <td>B5</td> <td>2019 2018</td> <td>10,0,0,0</td> </tr> <tr> <td>0.916</td> <td>0.946</td> <td>384</td> <td>384</td> <td>B345</td> <td>no</td> <td>0,0,0,0</td> </tr> <tr> <td>0.910</td> <td>0.950</td> <td>384</td> <td>384</td> <td>B6</td> <td>2018</td> <td>0,0,0,0</td> </tr> </tbody> </table> <p>The above models use a variety of different augmentation, losses, optimizers, and learning rate schedules. External data explained <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/164910" target="_blank">here</a>. 
Upsample explained <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/169139" target="_blank">here</a>.</p> <h1>My Training</h1> <p>If you download my popular notebook <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">here</a> to your local machine or cloud provider, then you can run the code quickly using multiple GPUs by adding the following one line of code</p> <pre><code>DEVICE = "GPU" strategy = tf.distribute.MirroredStrategy() </code></pre> <p>Most of my models including my most accurate single model with CV 0.936 and LB 0.956 were trained using four Nvidia V100 GPUs. Thank you Nvidia for the use of GPUs!</p> <h1>Ensemble Pseudo Code</h1> <p>In the past 2 months, I trained 50+ diverse models. How do we ensemble 50+ models? Train all models using the same triple stratified folds <code>seed = 42</code> from my notebook <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">here</a>. Then to create an ensemble, start with the model that has largest CV and repeatedly try adding one model to increase CV. Whichever one additional model increases the CV the most (and at least 0.0003), keep that model and then iterate through all models again. Repeat this process until CV score stops increasing (by at least 0.0003).</p> <pre><code> # START ENSEMBLE USING MODEL WITH LARGEST CV Repeat until CV does not increase by 0.0003+ : # TRY ADDING EVERY MODEL ONE AT A TIME AND REMEMBER # HOW MUCH EACH INCREASES THE ENSEMBLE CV SCORE for k in range( len(models) ): for w in [0.01, 0.02, ..., 0.98, 0.99]: # TRY ADDING MODEL k WITH WEIGHT w TO ENSEMBLE trial = w * model[k,] + (1-w) * ensemble auc_trial = roc_auc_score(true, trial) # ADD ONE NEW MODEL TO ENSEMBLE THAT INCREASED CV THE MOST # CHECK NEW CV SCORE. 
IF IT INCREASED REPEAT LOOP </code></pre> <h1>Ensemble Starter Notebook</h1> <p>I posted my solution code <a href="https://www.kaggle.com/cdeotte/forward-selection-oof-ensemble-0-942-private" target="_blank">here</a> showing how to ensemble OOF files using forward selection. The notebook uses 39 of my Melanoma models' OOF files. Forward selection chooses 8 of them and achieves CV 0.950, Public LB 0.958, Private LB 0.942.</p>
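The forward-selection pseudocode above can be turned into a small runnable sketch. This is an illustration, not Chris's actual code: the AUC helper and function names are mine, while the 0.0003 stopping threshold and the 0.01-step weight grid come from the write-up.

```python
import numpy as np

def auc(y_true, y_score):
    """Rank-based AUC (Mann-Whitney U); assumes no ties in y_score."""
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def forward_selection(oof_preds, y_true, min_gain=0.0003):
    """oof_preds: (n_models, n_samples) out-of-fold predictions.
    Start from the best single model, then repeatedly add the one
    (model, weight) pair that raises the ensemble AUC the most,
    stopping once the best gain falls below min_gain."""
    best = int(np.argmax([auc(y_true, p) for p in oof_preds]))
    ensemble, chosen = oof_preds[best].copy(), [best]
    score = auc(y_true, ensemble)
    for _ in range(100):  # safety cap on rounds
        best_gain, best_trial, best_k = 0.0, None, None
        for k, p in enumerate(oof_preds):
            for w in np.arange(0.01, 1.0, 0.01):
                trial = w * p + (1 - w) * ensemble
                gain = auc(y_true, trial) - score
                if gain > best_gain:
                    best_gain, best_trial, best_k = gain, trial, k
        if best_gain < min_gain:
            break
        ensemble, score = best_trial, score + best_gain
        chosen.append(best_k)
    return ensemble, chosen, score
```

Because every model is scored against the same fixed folds, the OOF vectors are directly comparable and the selected weights transfer to the test predictions.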
SIIM-ISIC Melanoma Classification
27th place solution - 2nd with context prize (edited)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is a summary of the "Yuval and nosound" model (27th place)</p> <p>You can find more in:</p> <ul> <li><a href="https://github.com/yuval6957/SIIM-Transformer" target="_blank">our github repository</a> </li> <li><a href="https://github.com/yuval6957/SIIM-Transformer/blob/master/SIIM%20paper.pdf" target="_blank">Paper</a></li> <li><a href="https://github.com/yuval6957/SIIM-Transformer/blob/master/SIIM%20presentation.pdf" target="_blank">Presentation</a></li> <li><a href="https://www.youtube.com/watch?v=qa6zimKQcno&amp;t=15s" target="_blank">video</a></li> </ul> <h2>1. Summary</h2> <p>Our solution is based on a two-step model + Ensemble:</p> <ol> <li>Base model for feature extraction per image</li> <li>Transformer model - combining all the output features from a patient and predicting per image. </li> <li>The 2nd stage also included some post-processing and ensembling.</li> </ol> <h3>Base Model:</h3> <p>As base models we used models from the <a href="https://arxiv.org/abs/1905.11946" target="_blank">EfficientNet</a> family[6]:</p> <ul> <li>EfficientNet b3 </li> <li>EfficientNet b4 </li> <li>EfficientNet b5 </li> <li>EfficientNet b6 </li> <li>EfficientNet b7 </li> </ul> <p>All models were pre-trained on imagenet using the noisy student algorithm. The models and weights are from <a href="https://github.com/rwightman/gen-efficientnet-pytorch" target="_blank">gen-efficientnet-pytorch</a>[2].</p> <p>The input to these models is the image and meta-data such as age, sex, and anatomic site.
The meta-data is processed by a small fully connected network and its output is concatenated to the input of the classification layer of the original EfficientNet network. This vector then goes through a linear layer with output size 256 to create the “features”, and then, after an activation layer, to the final linear classification layer. </p> <p>This network has 8 outputs and tries to classify the diagnosis label (there are actually more than 8 possible diagnoses, but some don’t have enough examples). </p> <h3>Transformer Models:</h3> <p>The input to the Transformer models is a stack of features from all images belonging to the same patient + the metadata for these images.</p> <p>The transformer is a stack of 4 transformer encoder layers with self attention as described in <a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a> [1]. Each transformer encoder layer uses 4 self attention heads. </p> <p>The output of the transformer is an N*C matrix, where N is the number of input feature vectors (the number of images) and C is the number of classes (8 in this case). Hence, the transformer predicts the class of each feature vector simultaneously, using the information from all other feature vectors.</p> <p>The metadata is added “transformer style”, i.e. each parameter is transformed to a vector (size 256) using an embedding matrix and then added to the feature vector. For continuous values (like age) the embedding matrix was replaced by a 2 layer fully connected network. </p> <h3>Ensembling the output of all networks:</h3> <p>The data was split into 3 folds, 3 times (using 3 different seeds for splitting), and the inference was done using 16 (or 12) TTAs, giving 144 predictions from each model. These were averaged and then the outputs of all the models were averaged.
All averaging was done on the outputs before softmax, and therefore it is actually geometric averaging.</p> <h3>Training</h3> <p>The heavy lifting was the training and inference of the base models. This was done on a server with 2 GPUs – a Tesla V100 and a Titan RTX – that worked in parallel on different tasks. Training one fold of one model took 3H (B3) to 11H (B7, B6 large images) on the Tesla and 20% more on the Titan; this sums up to about one day for B3 and 3.5 days for B7. Inferencing all the training data for 12 TTA’s + test data for 16 TTA’s to get the features for the next level took another 4h - 14h. The transformer training took less than 1H for the full model (3 folds * 3 seeds). </p> <p>The total time it took to train all models and folds is about 2.5 weeks for one Tesla (~1.5 weeks using the 2 GPUs).</p> <h2>2. Models and features</h2> <h3>Base models</h3> <p>As base models we tried various types of models (pre-trained on Imagenet):</p> <ul> <li>Densenets – 121, 161, 169, 201</li> <li>EfficientNet B0, B3, B4, B5, B6, B7, with and without noisy student pre-training </li> <li>ResNet 101</li> <li>Xception </li> </ul> <p>In the end we used EfficientNet as it was best when judging accuracy/time.</p> <p>The noisy student version performed better than the normal one.</p> <p>We also tried different image sizes and ended up using <code>400*600</code> images in most cases, except one where we used <code>600*900</code> with the B6 network.</p> <h4>Metadata</h4> <p>As was described above, the metadata was processed by a small fully connected nn and its output was concatenated to the output of the EfficientNet network (after removing the original top layer).</p> <p>We also tried a network without metadata, using the metadata as targets, i.e. this network predicted the diagnosis, but also the sex, age and anatomic site. The final predictions (including transformer) when using this approach weren't as good as with the metadata-as-input approach.
</p> <h4>Model’s output</h4> <p>Although the task at hand is to predict melanoma yes/no, it is better to let the network choose the diagnosis among a few possible options. This lets the network “understand” more about the image. The final prediction is the value of the Melanoma output after doing softmax on the output vector. </p> <h4>Features</h4> <p>The final layer in this model is a linear layer with 256 inputs and 8 outputs; we use the input to this layer as features. </p> <h4>Augmentation</h4> <p>The following augmentations were used during training and inference:</p> <p>Random resize + crop</p> <p>Random rotation</p> <p>Random flip</p> <p>Random color jitter (brightness, contrast, saturation, hue)</p> <p><a href="https://arxiv.org/abs/1708.04552" target="_blank">Cutout</a>[3] - erasing a small rectangle in the image</p> <p>Hair - Randomly adding “hair like” lines to the image</p> <p>Metadata augmentation - adding random noise to the metadata as was done in the <a href="https://www.sciencedirect.com/science/article/pii/S2215016120300832?via%3Dihub" target="_blank">1st place solution in ISIC 2019 challenge</a> [7].</p> <h4>TTA</h4> <p>For inference each image was augmented differently 16 times and the final prediction was the average. These augmentations were also used for extracting 16 different feature vectors per test image.</p> <p>The same was done to extract 12 feature vectors for the train images (12 and not 16 because of time limits).</p> <h3>Transformer Network</h3> <p>The input to the transformer network is the features from all the images from one patient.</p> <p>The inspiration for this kind of model came from a previous competition in which we participated, <a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection" target="_blank">RSNA Intracranial Hemorrhage Detection</a>.
In that competition, all the top solutions (including ours) used a two stage network approach (although none of them were transformers).</p> <p>Using a transformer seems appropriate in this case because transformers are built to seek relationships between the embedding (feature) vectors in their input.</p> <p>As this is not a full seq2seq task, we only used encoder layers. The transformer is a stack of 4 encoder layers with 4 attention heads in each layer. (We also tested higher numbers of layers and attention heads - no performance improvement.)</p> <h4>metadata</h4> <p>The metadata was incorporated in the network by adding “metadata vectors” to the input vectors - each value was transformed to a vector of size 256 and added. The discrete values’ transformation was done using a trainable embedding matrix and the continuous values using a small nn.</p> <h4>output</h4> <p>The output of this network is a matrix of size N*C, where N is the number of images and C is the number of classes, which means it decides on all the images of the patient at once. </p> <h4>Limit the input size</h4> <p>A transformer can be trained on different numbers of feature vectors by using padding. But when the range of numbers is very large, from a couple of hundred images for some patients to a handful for others, this may cause some implementation issues (like in calculating the loss). To simplify these issues, we limited N to 24 feature vectors, and for each patient we randomly divided the images into groups of size up to 24. </p> <p>This might degrade the prediction as the most “similar” images might accidentally fall into different groups, but as we use TTA, this issue is almost solved. </p> <h4>Augmentation</h4> <p>From the base model we extract a number of feature vectors (12 for train and 16 for test) using different augmentations for the images and metadata.
In the training and inference steps of the transformer model we randomly choose one of these vectors.</p> <p>Another augmentation is the random grouping as stated above.</p> <h2>3. Training and Inferencing</h2> <p>The original 2020 competition data is highly unbalanced: there are only 2-3% positive targets in the train and test data. Although we were able to train the base model using uneven sampling, the best way to get good training was to add the data from the ISIC 2019 competition, which has a much higher percentage of melanoma images. </p> <p>We split the training data into 3 folds, keeping all the images from the same patient in the same fold, and making sure each fold has a similar number of patients with melanoma. The ISIC 2019 data was also split evenly between the folds. The same folds were kept for the base and the transformer models. </p> <p>To get more diversity we had 3 different splits using 3 seeds. </p> <h3>Preprocessing</h3> <p>All images were resized to an aspect ratio of 1:1.5, which was the most popular aspect ratio of the images in the original dataset. We prepared 3 image datasets of sizes <code>300*450</code>, <code>400*600</code>, <code>600*900</code>. Most of the models were trained using the <code>400*600</code> dataset, as <code>300*450</code> gave inferior results and the <code>600*900</code> didn’t improve the results enough.</p> <p>For the metadata we had to set the same terminology for the 2020 and 2019 datasets.</p> <h3>Loss Function</h3> <p>The loss function we used was cross entropy. Although the task is to predict only melanoma, we found it is better to predict the diagnosis, which was split into 8 different classes, one of which was melanoma. The final prediction was the value for the melanoma class, after a softmax function on all classes.
We also tried a binary cross entropy on the melanoma class alone and a combination of the two, but using cross entropy gave the best results.</p> <p>The same loss was used for the base model and the transformer, but in the transformer we needed to regularize for the different number of predictions in each batch resulting from the different number of images per patient. </p> <p>We also tried using focal loss, which didn’t improve the results, but we left one of the transformer models trained with focal loss in the ensemble (a model with cross entropy loss gave similar CV and LB).</p> <h4>Training the transformer model</h4> <p>The transformer model was trained in two steps. For the first step we used the data from both competitions (2019, 2020). For the 2019 competition we don’t have information about the patient, and each image got a different dummy patient_id, meaning the transformer didn’t learn much from these images. In the 2nd stage we fine-tuned the transformer using only the 2020 competition’s data.</p> <p>In both steps we used a sampler that oversampled the larger groups.</p> <h4>Inference</h4> <p>As stated above, the inference was done using TTA.
For the base model we used 12-16 different augmentations and for the transformer model 32.</p> <h3>Ensembling</h3> <p>For our final submissions we used 2 ensembles:</p> <h5>Without Context Submission:</h5> <ol> <li>EfficientNet B3 noisy student image size 400*600</li> <li>EfficientNet B4 noisy student image size 400*600</li> <li>EfficientNet B5 noisy student image size 400*600</li> <li>EfficientNet B6 noisy student image size <strong>600*900</strong></li> <li>EfficientNet B7 noisy student image size 400*600</li> </ol> <h5>With Context</h5> <p>All the “without context” models +</p> <ul> <li>Transformer on features from A.</li> <li>Transformer on features from B.</li> <li>Transformer on features from C using focal loss</li> <li>Transformer on features from D.</li> <li>Transformer on features from E.</li> </ul> <h2>4. Interesting findings</h2> <h3>CV and LB</h3> <p>Although the number of images in the competition was very large, the number of different patients wasn’t large enough to give a stable and reliable CV and LB, and the correlation between the two was low. In the end we trusted neither, and we submitted the models we felt were the most robust.</p> <h3>What didn’t work</h3> <ol> <li>Using different sizes of images as suggested in <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/160147" target="_blank">CNN Input Size Explained</a> didn’t show any improvement.</li> <li>Using Mixup[5] and CutMix[4] augmentation didn’t work</li> <li>Larger transformers didn’t improve the results. </li> </ol> <h2>5. Better “Real World” model</h2> <p>This model can be much simpler if we don’t ensemble and take only one base model; the EfficientNet B5 is probably the best compromise.</p> <p>In the real world scenario the images from previous years will probably be tagged already. In that case we can use a full transformer with encoder and decoder which will perform a seq2seq operation. </p>
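The per-patient grouping described in the "Limit the input size" section (random groups of at most 24 feature vectors per patient) can be sketched with NumPy. The function name and feature dimension below are illustrative, not taken from the authors' code:

```python
import numpy as np

def make_patient_groups(features, max_group=24, rng=None):
    """Shuffle one patient's image features and split them into
    groups of at most `max_group` rows. Re-drawing the shuffle on
    every TTA pass lets images that were separated in one pass
    share a group in another."""
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.permutation(len(features))
    return [(features[idx[i:i + max_group]], idx[i:i + max_group])
            for i in range(0, len(features), max_group)]
```

Each returned group is then fed to the transformer as one N*d input (with padding up to `max_group` where needed), and the per-image predictions are written back using the returned indices.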
TReNDS Neuroimaging
16th Place “Simple” Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we (<a href="https://www.kaggle.com/kannahashimoto">@Kanna Hashimoto</a>, <a href="https://www.kaggle.com/toseihatori">@hatry</a>) would like to thank you all for organizing a very interesting themed competition. </p> <h1>Solution overview</h1> <ul> <li>Adversarial Validation for site1/site2 <ul><li>Our policy was that we should only model using features that do not affect the site's classification. (It may have been meaningless as a result.)</li> <li>We did adversarial validation to classify site1 (all training data) vs site2 (test data which is clearly revealed to be site2) using only the loading features. It resulted in an AUC of 0.9.</li> <li>We then excluded the top 3 features that contribute to the classification of site1 and site2. As a result, AUC decreased to 0.6 ~ 0.7.</li></ul></li> <li>stacking <ul><li>local cv of the 1st models is 0.163 ~ 0.1575</li> <li>2nd model local cv is 0.15674, LB is 0.15766, private is 0.15786.</li></ul></li> <li>We did not use 3d CNN models.</li> </ul> <h1>features</h1> <ul> <li>loading, fnc features <ul><li>We created multiplication and subtraction variables.</li></ul></li> <li>Correlation between images <ul><li>We used the correlation coefficients for the 53 fMRI 3d images for each user ID. Removing the 0 pixels improved the local cv a bit.</li></ul></li> <li><p>histogram features, statistical features</p> <ul><li>Image histograms and statistics were also useful.
Just computing histograms and statistics on the whole image improved the local cv, but features computed on images that had been split into smaller cubes were more effective.</li></ul></li> </ul> <h1>Models</h1> <ul> <li>1st models <ul><li>Ridge + Optuna + random seed averaging</li> <li>MLP + random seed averaging</li> <li>LightGBM + random seed averaging</li></ul></li> <li>2nd model <ul><li>Bayesian Ridge + Optuna</li></ul></li> </ul>
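The adversarial-validation step described above (label train rows as site1, test rows as site2, fit a classifier, and drop the most site-predictive features) can be sketched as below. This assumes scikit-learn is available; the classifier choice and function name are mine, not the team's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def adversarial_validation(train_X, test_X, n_drop=3):
    """Fit a classifier to distinguish train (site1, label 0) from
    test (site2, label 1); return the cross-validated site-AUC and
    the indices of the n_drop most site-predictive features."""
    X = np.vstack([train_X, test_X])
    y = np.r_[np.zeros(len(train_X)), np.ones(len(test_X))]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    pred = cross_val_predict(clf, X, y, cv=3, method="predict_proba")[:, 1]
    site_auc = roc_auc_score(y, pred)
    clf.fit(X, y)  # refit on everything for feature importances
    top = np.argsort(clf.feature_importances_)[::-1][:n_drop]
    return site_auc, top
```

After dropping the returned features and re-running, the site-AUC should fall substantially, as the team observed (0.9 down to 0.6-0.7).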
TReNDS Neuroimaging
54th Private & 44th Public Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: TReNDS Neuroimaging <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all the kagglers who participated in this competition!</p> <p>This one feels quite pleasant for me and my teammate since it has brought us to the Competitions Expert tier with a good ranking! :)</p> <p>We've started out with RAPIDS-based blending of classical ML models inspired by <a href="https://www.kaggle.com/tunguz/rapids-ensemble-for-trends-neuroimaging">this notebook</a>. We were trying to pick coefficients via <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.lsq_linear.html">SciPy linear least-squares bounded problem solver</a>, but we have faced the overfitting bound soon enough to give up this technique.</p> <p>The game-changer for us was <a href="https://www.kaggle.com/david1013/trends-multi-layer-model">Multi-Layer model</a> generously shared by @david1013. We started with blending 60 output .csv's with different seeds. An important thing is that our validation has shown that this ensemble underestimates most of the values, especially <strong>age</strong>. So we tried out an element-wise maximum of 60 predictions, and it was much better w.r.t. both the CV and public LB.</p> <p>However, <strong>age</strong> value remained lower than expected. Running out of time, we tried to pick a good multiplier for it as a part of post-processing. The best we have tried was <strong>1.005</strong>, so we've stopped on this predictor.</p> <p>Thanks!</p>
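The post-processing described above — an element-wise maximum over the seeded predictions (which tend to underestimate), followed by a small multiplier for age — can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def postprocess_age(seed_preds, multiplier=1.005):
    """seed_preds: (n_seeds, n_samples) age predictions that tend to
    underestimate. Take the element-wise maximum over seeds, then
    scale slightly up with the tuned multiplier."""
    return np.max(seed_preds, axis=0) * multiplier
```

The element-wise max works here precisely because the errors are one-sided: when every seed underestimates, the largest prediction is the least biased one.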
SIIM-ISIC Melanoma Classification
24th place solution and my way to get it
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello everyone, congrats to all winners, and thanks Kaggle and SIIM for this competition. </p> <p>This is my second medal-awarded competition (finally I'm an expert, yeah!!!), and my second medical image analysis competition. My first one was <a href="https://www.kaggle.com/c/aptos2019-blindness-detection" target="_blank">APTOS Blindness Detection</a> last year, and these are some similar problems I faced here:</p> <p><strong>Imbalanced classes</strong></p> <p>A common problem for many medical image analysis tasks is imbalanced data distribution (few images of positive cases). There were 5 classes (or 5 degrees of disease, because the problem could also be interpreted as regression) and 2 classes strongly prevailed. In this competition it was a binary classification case and the first class was much larger. The obvious way to fix the class balance was to use external data, and here is the second challenge:</p> <p><strong>Use of external data</strong> </p> <p>In both competitions we had to deal with quite a large amount of external data; you can use it to balance classes or just use all of it. In APTOS many people pretrained their models on old external images and fine-tuned on the actual data (and so did I). Inept use of external data could decrease your score, as the old (external) and actual datasets were quite different. In SIIM the old and actual data were much less different, so we were more free to use it.<br> I used 2018+2019 external data throughout the training pipeline, validating on 2020 data only.
</p> <h2>My approach</h2> <p>At the beginning I tried various light models (like EffNet b0-b3, ResNet18 with image size 256x256) using PyTorch and various augmentation techniques both for metadata and for images, experimenting with focal loss and label smoothing. But the maximum I achieved was 0.933 on the public LB. In discussions, participants talked about experiments with sizes like 512x512 and quite large models like EffNetB6; this kind of experiment was hard for me because I used PyTorch and GPUs only.</p> <p>All my way in deep learning I had used PyTorch, but topics related to tensorflow and TPU appeared in discussions and public notebooks more and more, so I could not ignore it anymore. Finally I found that using TPU I can train heavier models on larger images! </p> <p><strong>Making an ensemble</strong></p> <p>The <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/154683" target="_blank">2019 1st place solution</a> is a blend of various versions of EfficientNet with various augmentation techniques and image sizes, so I decided to implement something like that.</p> <p>I used a pipeline like <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">Triple Stratified KFold</a>, using 2018+2019+2020 data for training and only 2020 data for validation. </p> <table> <thead> <tr> <th>Model</th> <th>Image Size</th> <th>Initial Weights</th> <th>Public Score</th> <th>Private Score</th> </tr> </thead> <tbody> <tr> <td>EfficientNetB5</td> <td>512</td> <td>ImageNet</td> <td>0.9524</td> <td>0.9200</td> </tr> <tr> <td>EfficientNetB5</td> <td>768</td> <td>ImageNet</td> <td>0.9469</td> <td>0.9185</td> </tr> <tr> <td>EfficientNetB5</td> <td>1024</td> <td>ImageNet</td> <td>0.9479</td> <td>0.9285</td> </tr> <tr> <td>EfficientNetB6</td> <td>384</td> <td>ImageNet</td> <td>0.9497</td> <td>0.9277</td> </tr> <tr> <td>EfficientNetB6</td> <td>512</td> <td>ImageNet</td> <td>0.9488</td> <td>0.9258</td> </tr> <tr> <td>EfficientNetB6</td>
<td>768</td> <td>ImageNet</td> <td>0.9502</td> <td>0.9277</td> </tr> <tr> <td>EfficientNetB6</td> <td>1024</td> <td>ImageNet</td> <td>0.9478</td> <td>0.9287</td> </tr> <tr> <td>EfficientNetB7</td> <td>512</td> <td>ImageNet</td> <td>0.9480</td> <td>0.9239</td> </tr> <tr> <td>EfficientNetB7</td> <td>768</td> <td>ImageNet</td> <td>0.9379</td> <td>0.9083</td> </tr> <tr> <td>EfficientNetB5</td> <td>512</td> <td>NoisyStudent</td> <td>0.9480</td> <td>0.9194</td> </tr> <tr> <td>EfficientNetB6</td> <td>512</td> <td>NoisyStudent</td> <td>0.9435</td> <td>0.9242</td> </tr> <tr> <td>EfficientNetB7</td> <td>512</td> <td>NoisyStudent</td> <td>0.9510</td> <td>0.9218</td> </tr> <tr> <td>EfficientNetB5</td> <td>768</td> <td>NoisyStudent</td> <td>0.9543</td> <td>0.9295</td> </tr> <tr> <td>EfficientNetB6</td> <td>768</td> <td>NoisyStudent</td> <td>0.9424</td> <td>0.9218</td> </tr> <tr> <td>ResNet152</td> <td>512</td> <td>ImageNet</td> <td>0.9184</td> <td>0.8901</td> </tr> <tr> <td>InceptionResNetV2</td> <td>768</td> <td>ImageNet</td> <td>0.9338</td> <td>0.9161</td> </tr> <tr> <td>Blending</td> <td>---</td> <td>---</td> <td>0.9602</td> <td>0.9406</td> </tr> </tbody> </table> <p>Being based on <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/160147" target="_blank">this idea</a>, I mixed various models with various training image sizes. 
</p> <p><strong>Public submissions</strong></p> <p>We can't ignore them, since a public submission landed in the silver medal zone!<br> I also made two submissions mixing this ensemble with subs from some public kernels, like these ones: <br> <a href="https://www.kaggle.com/truonghoang/stacking-ensemble-on-my-submissions" target="_blank">This one</a>, because it consists of models like VGG and SeResNext; I think they are quite uncorrelated with my effnets</p> <p><a href="https://www.kaggle.com/datafan07/analysis-of-melanoma-metadata-and-effnet-ensemble" target="_blank">This one</a>, because it has good work on metadata analysis</p> <p>I did not use kernels like "public blending" or "public minimax ensemble", which were in fact combinations of others, trying instead to choose from public submissions the ones that correlate as little as possible with my ensemble and with each other. </p> <p>In the final days of the competition I fell from ~80th position to ~240th, which was nerve-wracking, but after the shakeup I, like many others, was thrown quite high (about 200 positions)! <br> And I'm really not sure about the reasons for it! On the one hand, blending all my models without any public submissions was at about 800th position on the Public LB but something like 140-150 on the Private LB; on the other, a simple blending of public subs reached the silver zone. I think this story teaches us not to trust the Public LB and to choose the most stable models for the final sub. </p> <p>I think it was quite a lottery; many conducted successful experiments but chose the wrong final submissions. As for me, I think I was saved by a large ensemble and careful use of public submissions.</p> <p>Special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his great <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">kernel</a> and TFRecords datasets covering almost all possible approaches. They helped me a lot throughout the competition!</p>
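The idea of keeping only public submissions that correlate as little as possible with your own blend can be sketched with rank correlation. The greedy threshold, function names, and 0.9 cutoff below are illustrative, not from the author's code:

```python
import numpy as np

def rank(v):
    """Turn scores into 0..n-1 ranks (assumes no ties)."""
    order = np.argsort(v)
    r = np.empty(len(v))
    r[order] = np.arange(len(v))
    return r

def blend_uncorrelated(base, candidates, max_corr=0.9):
    """Greedily add candidate submissions whose rank correlation with
    the current blend stays below max_corr, then average everything.
    Returns the blended scores and the number of candidates kept."""
    members = [np.asarray(base, dtype=float)]
    for sub in candidates:
        current = np.mean(members, axis=0)
        corr = np.corrcoef(rank(sub), rank(current))[0, 1]
        if corr < max_corr:
            members.append(np.asarray(sub, dtype=float))
    return np.mean(members, axis=0), len(members) - 1
```

Rank correlation is a natural fit here because AUC only depends on the ordering of predictions, not their absolute scale.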
M5 Forecasting - Accuracy
103rd place (silver medal) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Accuracy <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>As a rookie, I'm here to learn from all you wise kagglers. I've learned a lot by reading your public kernels. Thank you all!</p> <p>Although this competition seems like "a lottery", it's more important to summarize the experience than to focus on ranking.</p> <p>In return, this is my solution, which is mostly based on public kernels. Thanks to <a href="https://www.kaggle.com/ragnar123/">ragnar</a> and all other kernel authors!</p> <h3>Accuracy-Competition</h3> <ul> <li><p>FE: adding a <code>weekends</code> feature.</p></li> <li><p>I used 4 single models:</p> <ul><li><p>time series split 3 fold cv - rmse loss function</p></li> <li><p><a href="https://www.kaggle.com/rikdifos/timeseriessplit-cv-poisson">time series split 3 fold cv - poisson loss function</a></p></li> <li><p>time series split 3 fold cv - self-defined loss function</p></li> <li><p>rolling prediction</p></li></ul></li> <li><p>Then I calculate the mean of all predictions and recode very tiny predictions to 0. Finally, I use a magic multiplier close to 1.</p></li> </ul> <p>All code can be downloaded from my github repo: <a href="https://github.com/songxxiao/m5_compete">https://github.com/songxxiao/m5_compete</a></p>
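The "time series split 3 fold cv" used above can be sketched as an expanding-window splitter. This hand-rolled helper mirrors what sklearn's `TimeSeriesSplit` does; the poisson loss variant would simply pass `objective='poisson'` to LightGBM on top of these folds:

```python
import numpy as np

def time_series_splits(n_samples, n_folds=3):
    """Expanding-window CV: each fold trains only on data strictly
    before its validation block, so no future information leaks in."""
    fold = n_samples // (n_folds + 1)
    for k in range(1, n_folds + 1):
        yield np.arange(0, k * fold), np.arange(k * fold, (k + 1) * fold)
```

Compared with random K-fold, this respects the temporal order of daily sales, which matters when lag and rolling features are in the model.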
M5 Forecasting - Uncertainty
18th place (silver medal) solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: M5 Forecasting - Uncertainty <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to all the selfless public kernel authors! My work is basically based on your notebooks. It's more important to focus on knowledge than results, right? This sharing is my way of thanking all community members. It's you who made Kaggle such a friendly, wise community. </p> <h3>Accuracy-Competition</h3> <ul> <li><p>FE: adding a <code>weekends</code> feature.</p></li> <li><p>I used 4 single models:</p> <ul><li><p>time series split 3 fold cv - rmse loss function</p></li> <li><p><a href="https://www.kaggle.com/rikdifos/timeseriessplit-cv-poisson">time series split 3 fold cv - poisson loss function</a></p></li> <li><p>time series split 3 fold cv - self-defined loss function</p></li> <li><p>rolling prediction</p></li></ul></li> <li><p>Then I calculate the mean of all predictions and recode very tiny predictions to 0. Finally, I use a magic multiplier close to 1.</p></li> </ul> <h3>Uncertainty-Competition</h3> <ul> <li><p>Use the <a href="https://www.kaggle.com/szmnkrisz97/point-to-uncertainty-different-ranges-per-level">public kernel</a> released by <a href="https://www.kaggle.com/szmnkrisz97">KrisztianSz</a> to convert point predictions to uncertainty. I used my accuracy final submission as input to get single submission 1.</p></li> <li><p>Use the <a href="https://www.kaggle.com/ulrich07/quantile-regression-with-keras">public kernel</a> released by <a href="https://www.kaggle.com/ulrich07">Ulrich GOUE</a> as single submission 2.</p></li> <li><p>Finally, I averaged the above 2 submission files and recoded all negative predictions to 0.</p></li> </ul> <p>All code can be downloaded from my github repo: <a href="https://github.com/songxxiao/m5_compete">https://github.com/songxxiao/m5_compete</a></p>
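<p>The final averaging-and-clipping step can be sketched like this (toy values; the real submission files have one row per series-quantile pair):</p>

```python
import numpy as np

def average_and_clip(sub1, sub2):
    """Element-wise average of two quantile submissions, with negative
    predictions recoded to 0 (sales quantities cannot be negative)."""
    avg = (np.asarray(sub1, dtype=float) + np.asarray(sub2, dtype=float)) / 2.0
    return np.clip(avg, 0.0, None)

# Toy values standing in for one row of each submission file
point_to_uncertainty = [1.2, 0.4, -0.1]   # from the ratio-based kernel
keras_quantiles      = [1.0, 0.2, -0.3]   # from the quantile-regression kernel
final = average_and_clip(point_to_uncertainty, keras_quantiles)
```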
SIIM-ISIC Melanoma Classification
105 place solution (0.9413 on LB).
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Summary</h2> <ol> <li>An ensemble of 6 models gave a Private LB of 0.9413; the ensemble code is <a href="https://www.kaggle.com/mpsampat/final-simple-oof-ensembling-methods-6-models" target="_blank">here</a></li> <li>The models are just simple modifications of <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">baseline</a></li> <li>We will share these later</li> </ol> <h2>Details:</h2> <ol> <li><p>We used <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">baseline</a> as a starter. </p></li> <li><p>Then we trained various different models. In the end we had 12 different models. (We lost some models because I forgot to save oof.csv :( ). </p></li> <li><p>Then, for the ensemble, it seemed like the number of models combined was important. </p></li> <li><p>So we ranked the 12 models by their OOF_AUC. </p></li> <li><p>The ensemble was done as per the <a href="https://www.kaggle.com/steubk/simple-oof-ensembling-methods-for-classification" target="_blank">starter</a> kernel provided by <a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a></p></li> <li><p>This was modified for 4, 5, 6, 8, 9, and 12 models. </p></li> <li><p>This is the version with <a href="https://www.kaggle.com/mpsampat/final-simple-oof-ensembling-methods-6-models" target="_blank">6 models</a> which gave us the Private LB score of 0.9413. </p></li> <li><p>Then we found that the OOF_AUC with Bayesian Optimization looked promising. </p></li> <li><p>The difference between OOF_AUC_Bayesian_Optimization and LB was not very large. </p></li> <li><p>For an ensemble of 6 models we should have run 12×11×10×9×8×7 combinations; </p></li> <li><p>but I did not have time to run this, so I picked the top 6 models and a few different combinations of the top 6 models. </p></li> <li><p>The summary is in this <a href="https://docs.google.com/spreadsheets/d/1Gw7xJwkxxp-61YHHOEkS2DImxD4jkQ5HYRi-0j72QJM/edit?usp=sharing" target="_blank">spreadsheet</a></p></li> <li><p>Heuristics used to select models: </p></li> <li><p>We have 4 measures of ensemble AUC: <br> OOF_avg_auc, OOF_rank_auc, OOF_pow_auc, OOF_bo_auc</p></li> <li><p>I took the average of these 4 measures. <strong>If a model is good, then my heuristic is that all 4 OOF_AUC measures should be high.</strong></p></li> <li><p>I also prefer the highest OOF_bo_AUC or the highest OOF average <strong>with the lowest number of models.</strong></p></li> <li><p>We did not finish a full extensive analysis, but the ensemble of 6 models scored very high. <br> Lastly, the difference between OOF_bo_AUC and Public LB is quite low for this combination.</p></li> </ol>
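<p>The core of the procedure above (rank the 12 models by their OOF AUC, then average the top k on the test set) can be sketched as below. The AUC helper uses the rank (Mann-Whitney) formulation and ignores ties; <code>top_k_average</code> is my own illustrative name, not from the write-up:</p>

```python
import numpy as np

def auc(y_true, y_score):
    """ROC-AUC via the rank (Mann-Whitney U) formulation; ignores ties."""
    y_true = np.asarray(y_true)
    order = np.argsort(y_score)
    ranks = np.empty(len(order), dtype=float)
    ranks[order] = np.arange(1, len(order) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def top_k_average(oof_preds, test_preds, y_true, k):
    """Rank models by OOF AUC, keep the top k, average their test preds."""
    scores = [auc(y_true, p) for p in oof_preds]
    top = np.argsort(scores)[::-1][:k]
    return np.mean(np.asarray(test_preds)[top], axis=0)

y_oof = [0, 0, 1, 1]
oof = [[0.1, 0.2, 0.8, 0.9],   # a good model
       [0.9, 0.8, 0.2, 0.1],   # a bad model
       [0.2, 0.1, 0.7, 0.8]]   # another good model
# Reuse the OOF preds as stand-in test preds for the toy example
ensemble = top_k_average(oof, oof, y_oof, k=2)
```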
SIIM-ISIC Melanoma Classification
From 66x online tutorials to winning the First Provisional Silver Medal
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi everyone. After completing 66x online tutorials and going through shake-ups in my initial competitions, I was able to secure my first silver medal. Special thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his amazing notebook. I blended models of images of all sizes :). Special thanks to amazing mentors <a href="https://www.kaggle.com/dmitrylarko" target="_blank">@dmitrylarko</a> and 4x grandmaster <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a>. Good luck to everyone.</p>
SIIM-ISIC Melanoma Classification
222th Place Solution: Correlation CV vs Public LB 0.16, vs Private LB 0.73
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I thank the organizers for holding this competition and Kagglers for posting truly useful datasets, notebooks, and discussions!!!</p> <p>Many competitors have noticed that the correlation of CV with the Public LB is unstable. Some competitors have addressed it by</p> <ul> <li><a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175412" target="_blank">Using external data for validation</a></li> <li><a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/175344" target="_blank">Ensemble</a></li> </ul> <p>Seeing these posts, I wondered how unstable the correlations of my single models' CV with the Public LB and Private LB are. I investigated that by using late submissions.</p> <p>Note that, throughout this post, "CV" means "OOF CV", not "Averaged CV over the folds".</p> <p>I used <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/165526" target="_blank">Triple Stratified Leak-Free KFold CV dataset</a> with 5 folds, and didn't include external data in the validation phase. (Thanks <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for providing a very useful dataset and clear explanation. 
I could spend much time training diverse models because of your datasets!)</p> <p>The trained models are </p> <ul> <li>Efficient Net without meta data</li> <li>Efficient Net with meta data</li> <li>ResNest without meta data</li> </ul> <p>As I wrote in the title, my single models' CVs correlate more with the Private LB than with the Public LB!</p> <ul> <li>CV vs Public LB: 0.16</li> <li>CV vs Private LB: 0.73</li> <li>(Public LB vs Private LB: 0.53)</li> </ul> <p>The actual values of CV, Public LB, and Private LB are below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1772718%2F91caae5b19967358979bb6e9fb2817eb%2F2020-08-20_00h07_39.png?generation=1597849679012485&amp;alt=media" alt=""></p> <p>I hypothesize that the reason for the huge gap between Public LB and Private LB is the very small number of positive samples in the Public test set. <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>'s <a href="https://www.kaggle.com/cpmpml/number-of-public-melanoma-is-78-or-77" target="_blank">notebook</a> and <a href="https://www.kaggle.com/sirishks" target="_blank">@sirishks</a>'s <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/167215" target="_blank">analysis</a> show that the numbers of positive samples in the Public and Private test sets are 77 or 78 and 182.<br> (Or maybe I was just lucky enough to get such a relatively stable correlation of CV with the Private LB. I'm also not sure whether my training settings are suitable for getting a stable correlation.)</p> <h1>Details of my solution</h1> <h3>Training settings</h3> <p>The training settings of Efficient Net wo/w meta data are different from those of ResNest wo meta data (to obtain diverse models). The way meta data is used is identical to the <a href="https://arxiv.org/abs/1910.03910" target="_blank">1st place solution of ISIC 2019</a>. 
Some settings are also based on this solution.</p> <h4>Common settings</h4> <ul> <li>Pytorch with GPU and TPU</li> <li><a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/165526" target="_blank">Triple Stratified Leak-Free KFold CV dataset</a> with 5 folds</li> <li>External data are <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/164910" target="_blank">whole 2017 and 2018, malignant of 2019</a>, plus <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/169139" target="_blank">additional 580 malignant samples</a> prepared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> (big thanks again!)</li> <li>RAdam optimizer</li> <li>BCE loss</li> <li>16x TTA for the validation and test phases (4 different scales, each with horizontal flip, vertical flip, and both of them; the actual TTA images look like the ones below; this is also inspired by the ISIC 2019 1st place solution)</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1772718%2Ff4a76e46576a1c13fa5132f48c4813fc%2F2020-08-19_23h50_09.png?generation=1597849603207907&amp;alt=media" alt=""></p> <h4>Efficient Net</h4> <ul> <li>b0 ~ b6 with different image sizes (combinations of model size with image size are the same as in the <a href="https://arxiv.org/abs/1905.11946" target="_blank">original paper</a>, like b0-224, b1-240, …, b6-528.)</li> <li>Pretrained weights on ImageNet</li> <li>Augmentations (actual snippet below)</li> </ul> <pre><code>p = .5 train_transforms = A.Compose([ A.RandomResizedCrop(img_size, img_size, p=p), A.ShiftScaleRotate(rotate_limit=(-90, 90), p=p), A.HorizontalFlip(p=p), A.VerticalFlip(p=p), A.HueSaturationValue(p=p), A.RandomBrightnessContrast(p=p), A.OneOf([A.IAAAffine(scale=1.0, translate_percent=(.5, .5), translate_px=None, rotate=0.0, shear=(50, 50), order=1, cval=0, mode='reflect', p=p), A.IAAAffine(scale=1.0, translate_percent=(.5, .5), translate_px=None, 
rotate=0.0, shear=(30, 30), order=1, cval=0, mode='reflect', p=p), A.IAAAffine(scale=1.0, translate_percent=(.3, .3), translate_px=None, rotate=0.0, shear=(0, 0), order=1, cval=0, mode='reflect', p=p)], p=p), A.Cutout(num_holes=1, max_h_size=32, max_w_size=32, fill_value=0, always_apply=False, p=p), A.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], ), ToTensorV2() ]) val_transforms = A.Compose([ A.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], ), ToTensorV2() ]) </code></pre> <ul> <li>12 epochs for training wo meta data</li> <li>30 epochs for training w meta data (initial weights are the best-AUC ones obtained by training wo meta data)</li> <li>No LR scheduler</li> </ul> <h4>ResNest</h4> <p>First of all, thanks <a href="https://www.kaggle.com/arroqc" target="_blank">@arroqc</a> for posting the <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173272" target="_blank">topic pointing out that ResNest is faster than Efficient Net as implemented in Pytorch!</a>. I really suffered from the same problem at that time, and I guess training ResNest really helped to increase my CV in limited time!</p> <ul> <li>ResNest50, 101, 200 with different image sizes (combinations of model size with image size are 50-224, 101-256, 200-320, which are based on <a href="https://github.com/zhanghang1989/ResNeSt#pretrained-models" target="_blank">those of the pretrained models</a>)</li> <li>Pretrained weights on ImageNet</li> <li>3 dense heads with ReLU and Dropout (probability=0.3). The numbers of neurons of the heads are 1024, 512, 256. (This is based on <a href="https://www.kaggle.com/ajaykumar7778" target="_blank">@ajaykumar7778</a>'s <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/172882#961932" target="_blank">post</a>. Thanks for sharing your ideas! 
I guess it made my models diverse and increased the CV of my ensemble!)</li> <li>Image is resized from 1024x1024 resolution</li> <li>Augmentations (actual snippet below)</li> </ul> <pre><code>p = .5 train_transforms = A.Compose([ A.RandomResizedCrop(img_size, img_size, scale=(0.5, 1.0), p=1), A.ShiftScaleRotate(rotate_limit=(-90, 90), p=p), A.HorizontalFlip(p=p), A.VerticalFlip(p=p), A.HueSaturationValue(p=p), A.RandomBrightnessContrast(p=p), A.IAAAffine(scale=1.0, translate_percent=(.0, .0), translate_px=None, rotate=0.0, shear=(2.0, 2.0), order=1, cval=0, mode='reflect', p=p), A.Cutout(num_holes=1, max_h_size=32, max_w_size=32, fill_value=0, always_apply=False, p=p), A.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], ), ToTensorV2() ]) val_transforms = A.Compose([ A.Resize(int(img_size*1.25), int(img_size*1.25), interpolation=cv2.INTER_AREA, always_apply=False, p=1), A.CenterCrop(img_size, img_size, always_apply=False, p=1.0), A.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], ), ToTensorV2() ]) </code></pre> <ul> <li>CyclicLR scheduler (This is also based on <a href="https://www.kaggle.com/ajaykumar7778" target="_blank">@ajaykumar7778</a>'s <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/172882#961932" target="_blank">post</a>. Thanks again for sharing your ideas! It seemed like this helped to stabilize training!)</li> </ul> <h2>Ensemble for Submission</h2> <p>My final submission relied hugely on OOF CV. I ensembled models with relatively high OOF CV by taking a weighted sum of rank-transformed probabilities, with optimal weights found by Optuna. (The code for rank AUC is based on <a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a>'s <a href="https://www.kaggle.com/steubk/simple-oof-ensembling-methods-for-classification" target="_blank">notebook</a>. Thanks for sharing! 
This ensemble really boosted my CV!)</p> <p>Only a portion of the models were used in the ensemble: </p> <ul> <li>b0 and b1 w meta</li> <li>b3, 5 and 6 wo/w meta</li> <li>b4 wo meta</li> <li>resnest 50, 101 and 200 wo meta</li> </ul> <p>(I selected these models just by looking at OOF CV, because only 1 hour remained at that point! I believe there were wiser ways to select models!)</p> <p>Ensemble scores are CV: 0.9534, Public LB: 0.9449, Private LB: 0.9391</p> <h2>Things That Did Not Work Well</h2> <ul> <li>Oversampling malignant images to a 1:1 balance (inspired by <a href="https://arxiv.org/abs/1710.05381" target="_blank">this paper</a>)</li> <li>Label smoothing with alpha 0.05 (inspired by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">notebook</a>)</li> <li>Increasing epochs to 100 with a StepLR scheduler (step size = 25, gamma = 1/5; inspired by the 2019 ISIC 1st place solution)</li> </ul> <p>Thanks for reading my post, and sorry for my poor English!</p> <p>As this is my first hardworking comp, I have learned a huge amount of things! For me, the unstable CV vs Public LB correlation was a really good learning opportunity, because I believe that not overfitting to known test data is important in real-world problems.</p> <p>P.S. I don't have my own rich computational resources, so I used cloud GPU/TPU on GCP, AWS, and Azure. It cost around $1041 😅. In the next comp, a cost-effective approach is definitely needed! </p>
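<p>A minimal stand-in for the weighted rank ensemble described above. The author searched the weights with Optuna; to keep this sketch dependency-free I substitute random Dirichlet draws over the weight simplex, which play the same role of maximizing OOF AUC (all helper names are mine):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def to_rank(p):
    """Replace probabilities by normalized ranks (AUC is rank-invariant)."""
    return np.argsort(np.argsort(p)) / (len(p) - 1)

def auc(y, s):
    """Pairwise ROC-AUC: fraction of (pos, neg) pairs ranked correctly."""
    y, s = np.asarray(y), np.asarray(s)
    pos, neg = s[y == 1][:, None], s[y == 0][None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

def search_weights(oof_preds, y, n_trials=200):
    """Randomly search the weight simplex for the rank blend with the
    best OOF AUC; Optuna would explore the same space more cleverly."""
    ranks = np.stack([to_rank(p) for p in oof_preds])
    best_w, best_score = None, -1.0
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(len(oof_preds)))
        score = auc(y, w @ ranks)
        if score > best_score:
            best_score, best_w = score, w
    return best_w, best_score

y = [0, 0, 1, 1]
oof = [[0.1, 0.2, 0.8, 0.9],   # strong model
       [0.9, 0.8, 0.2, 0.1]]   # anti-correlated model
w, score = search_weights(oof, y)
```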
SIIM-ISIC Melanoma Classification
Missed Opportunity - 4th place - Solo Gold
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners.</p> <p><strong>Scores:</strong><br> <strong>public LB=0.9618</strong><br> <strong>private LB=0.9482</strong> </p> <p>I am gutted.</p> <p><a href="https://www.kaggle.com/sheriytm/melanoma-clf-xgb-missed-opportunity-4th-place" target="_blank">https://www.kaggle.com/sheriytm/melanoma-clf-xgb-missed-opportunity-4th-place</a></p> <p><strong>UPDATE</strong></p> <p>In a late submission, the mean ensemble of the 27 models that gave my private LB score in the above ensemble script scores:<br> <strong>public LB=0.9618</strong><br> <strong>private LB=0.9487</strong> </p>
SIIM-ISIC Melanoma Classification
126th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: SIIM-ISIC Melanoma Classification <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to thank Kaggle and the organizers for hosting such an interesting competition.<br> And I'd like to thank my teammates <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> <a href="https://www.kaggle.com/phoenix9032" target="_blank">@phoenix9032</a> <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> <a href="https://www.kaggle.com/shivamcyborg" target="_blank">@shivamcyborg</a> for such an interesting competition journey. And congratulations to all the winners!</p> <p>It's been a great competition, and my team spent a lot of time and computational resources on this competition.</p> <h1>Brief summary of our solution:</h1> <ul> <li>Our final solution is an ensemble of around 150 models (yes, you heard it right, <strong>150 models</strong>), 140 trained in Pytorch and 10 trained in TF.</li> <li>With a simple rankblend of Pytorch and TF models at a 50-50 ratio, we were able to get 0.9528 public LB and 0.9409 private LB.</li> </ul> <h2>Pytorch models overview:</h2> <ul> <li><p>For CV, we used the <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">Triple Stratified Leak-Free KFold CV</a> shared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>.</p></li> <li><p>For the scheduler we used both <code>CosineAnnealingLR</code> and <code>ReduceLROnPlateau</code>.</p></li> <li><p>For the optimizer we used the <code>Lookahead optimizer</code> on top of Adam.</p></li> <li><p>Loss: focal, rankloss, bce</p></li> <li><p>We trained 140 models with different base models 
ranging from <code>EffB0-EffB7</code>, <code>Resnet</code>, <code>Resnext</code>, <code>DenseNet</code> and a lot more, on various image sizes (256, 384, 512, 768) and even a few with 1024 image size.</p></li> <li><p>Thanks to <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> and the huge experience he brought to the team, we also trained a few double scale models, transformer models, and models with bigger heads.</p></li> <li><p>For augmentations we used 2 types of augmentations: basic augs and advanced augs.</p></li> <li><p>Most of the Pytorch models were trained 2 times, first with basic augs and then finetuned with <br> advanced augs.</p></li> </ul> <h3>Pytorch Augmentations:</h3> <p><strong>Basic Augs:</strong></p> <pre><code>train_transform_basic = transforms.Compose([ DrawHair(), transforms.RandomResizedCrop(size=IMG_SIZE, scale=(0.7, 1.0)), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.ColorJitter(brightness=32. / 255.,saturation=0.5), transforms.ToTensor(), transforms.RandomErasing(), transforms.Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225]) ]) </code></pre> <p><strong>Advanced Augs:</strong></p> <pre><code>SKIN_TONES = [ [75, 57, 50], [180,138,120], [90,69,60], [105,80,70], [120,92,80], [135,100,90], [150,114,100], [165,126,110], [195,149,130], ] train_transform_albument = A.Compose([ ColorConstancy(p=1), A.OneOf([ A.RandomSizedCrop(min_max_height=(int(IMG_SIZE * 0.8), int(IMG_SIZE * 0.8)), height=IMG_SIZE, width=IMG_SIZE), A.RandomSizedCrop(min_max_height=(int(IMG_SIZE * 0.85), int(IMG_SIZE * 0.85)), height=IMG_SIZE, width=IMG_SIZE), A.RandomSizedCrop(min_max_height=(int(IMG_SIZE * 0.9), int(IMG_SIZE * 0.9)), height=IMG_SIZE, width=IMG_SIZE), A.RandomResizedCrop(height=IMG_SIZE, width=IMG_SIZE, interpolation=cv2.INTER_LINEAR), A.RandomResizedCrop(height=IMG_SIZE, width=IMG_SIZE, interpolation=cv2.INTER_CUBIC), A.RandomResizedCrop(height=IMG_SIZE, width=IMG_SIZE, 
interpolation=cv2.INTER_LANCZOS4), ], p=0.7), A.OneOf([ DrawScaleTransform(p=0.2, img_size=IMG_SIZE), MicroscopeTransform(p=0.1), DrawHairTransform(n_hairs=6) ], p=0.5), A.HorizontalFlip(p=0.5), A.VerticalFlip(p=0.5), A.RandomRotate90(p=0.5), A.HueSaturationValue(hue_shift_limit=20, sat_shift_limit=30, val_shift_limit=20, p=0.5), A.OneOf([ A.Cutout(num_holes=1, max_h_size=64, max_w_size=64, fill_value=random.choice(SKIN_TONES)), ], p=0.5), A.OneOf([ A.Cutout(num_holes=1, max_h_size=64, max_w_size=64, fill_value=random.choice(SKIN_TONES)), ], p=0.5), A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) </code></pre> <blockquote> <p>Note: the 140 models were trained with different combinations of the augs mentioned above.</p> </blockquote> <h2>Tensorflow model overview:</h2> <ul> <li>The Tensorflow training pipeline is very similar to this kernel by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, but with different augs.</li> </ul> <h3>Analysis</h3> <ul> <li>We also used GradCAM to visualize our Pytorch model results, as shown below:</li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3982638%2F119ac2b52aec821096c4253e157c4582%2FScreenshot%202020-08-18%20at%201.21.27%20PM.png?generation=1597737186953608&amp;alt=media" alt="!"></p> <h3>Ensemble Techniques:</h3> <ul> <li>Stacking: for stacking we used both LGBM with Bayesian Optimization and Neural Networks.</li> <li>RankBlend</li> </ul> <h1>What Didn't Work:</h1> <ul> <li>We didn't find much improvement using meta features</li> <li>Tiles: breaking images into 3x3 tiles.</li> </ul> <h1>Not pursued but started:</h1> <ul> <li>We tried SAGAN using self-attention to generate images and also the DCGAN used in this paper to generate 256*256 images. However, we didn't pursue this because of a lack of time and a lack of variety in the images. 
Here are some images: </li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3982638%2F908f5f953c1813b72ed81bda3c11d96e%2Fimage%20(1).png?generation=1597740337658687&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3982638%2F106e926e20ec5d29102ed90a2910c8d9%2Fimage.png?generation=1597740394187819&amp;alt=media" alt=""></p> <ul> <li><p>We tried breaking images into 3x3 tiles and then trained models on top of them. Here are some images:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3982638%2F259b42655ce883c7acef46da77d307b8%2FScreenshot%202020-08-18%20at%202.08.46%20PM.png?generation=1597740065473900&amp;alt=media" alt=""></p></li> <li><p>We also tried to solve this problem as multiclass classification with Melanoma, Benign, and Nevus classes.</p></li> <li><p>And a lot more…</p></li> </ul> <p><strong>[END]: In the end I would like to say that we tried a lot of things in this competition; I can't even list all of them here. It's been a great learning opportunity for each one of us, and we tried to make the most out of it. The results are not what we expected, but I am glad that we worked hard and learned a lot of new things.</strong></p>
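<p>The 50-50 rankblend of the two framework families mentioned in the summary can be sketched as follows (illustrative helper names; real inputs would be the test predictions of the 140 Pytorch and 10 TF models):</p>

```python
import numpy as np

def rank_normalize(p):
    """Map predictions to ranks in [0, 1] so differently calibrated
    model families can be mixed on an equal footing."""
    return np.argsort(np.argsort(p)) / (len(p) - 1)

def rank_blend_families(pytorch_preds, tf_preds, w_pt=0.5):
    """Average ranks within each framework family first, then mix the
    two family averages at the given ratio (50-50 by default)."""
    pt = np.mean([rank_normalize(p) for p in pytorch_preds], axis=0)
    tf = np.mean([rank_normalize(p) for p in tf_preds], axis=0)
    return w_pt * pt + (1 - w_pt) * tf

blend = rank_blend_families(pytorch_preds=[[0.9, 0.1, 0.5]],
                            tf_preds=[[0.8, 0.2, 0.4]])
```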
Global Wheat Detection
2nd Place Solution with Code [MIT-Compliant]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Global Wheat Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>[update] The code is now available at <a href="https://github.com/liaopeiyuan/TransferDet" target="_blank">https://github.com/liaopeiyuan/TransferDet</a> .</p> <p>Now that the results are out, I've decided to share a bit more than <a href="https://www.kaggle.com/c/global-wheat-detection/discussion/172433" target="_blank">my previous post</a>.</p> <p>First of all, huge thanks to <a href="https://www.kaggle.com/rwightman" target="_blank">@rwightman</a> for his PyTorch implementation of EfficientDet and <a href="https://www.kaggle.com/shonenkov" target="_blank">@shonenkov</a> for his amazing starter kernel. I've also learned a lot from the discussion posts.</p> <p>This is an interesting challenge with a very noisy dataset and a huge domain shift between train and test, and within train and test as well. The kernel running time limit also gives the possibility of doing semi-supervised learning, which can in a way reduce the aforementioned problem.</p> <p>The first choice I made was the way I validate and perform model selection. This is a tough choice because 1. object detection is inherently a hard task and 2. the training data are noisy and unstable.<br> Eventually my decision was "relative detection loss" on a 20% stratified split with bad boxes removed (by manual inspection), as hinted <a href="https://www.kaggle.com/c/global-wheat-detection/discussion/166564#927359" target="_blank">here</a>. 
My reasoning is as follows:</p> <p>We know that train and test distributions are wildly different, so mAP to me would make little sense, because it's a heavily processed result (with hyperparameters in between: NMS settings, which mAP to look at, etc.). The detection loss we use to train our networks, on the other hand, can capture the overall classification and regression performance. In addition, I didn't use this loss to compare across models; instead I accumulated a list of procedures that I considered "comparable under loss," by pure empirical results and a little bit of instinct, and used them to select minor choices such as augmentations. For model selection overall, I took the public LB with a grain of salt, and in a way, because I ran out of time to train diverse models, I didn't have to make those huge model-ensemble choices. I believe that I can investigate this more once the entire set is released and cleaned.</p> <p>Now, on to data insights. We can all agree that the data, even with blatantly bad boxes removed, are really noisy, in two forms: noisy anchors and noisy boxes. </p> <ul> <li>For noisy boxes, this is the direct result of different understandings of wheat heads by annotators (including an automatic one, Yolov3). So I ran a few experiments to ascertain the noise. I degraded the boxes' x,y coordinates by 10% and 5% respectively (kind of the norm in articles on robust object detection), and trained with my previously best pipeline. Interestingly, for both degradations we've seen a drop in model performance, so in a way the data is more informative than a 5% shift. My models were extremely costly to run, so I didn't run any more tests or design experiments that could reveal more about bbox noise, but I had a basic idea of how to proceed. </li> <li>Noisy anchors, on the other hand, are a little tricky, because the problem is not just about disagreements between labellers. 
Remark that our training data are 1024 crops of larger pictures, and (correct me if I'm mistaken) annotation is only done on crops, with the wheat heads on edges only getting annotated if 1/3 of the head is visible. This is kind of crucial because APs in this competition have a huge impact on the resulting mAP performance. Models that are too confident over the border (identifying all wheat heads no matter whether more than 1/3 is visible, sometimes even mistakenly including leaves) or too pessimistic both degrade final performance. This also explains my jigsaw trick, which I will explain in the following paragraphs. Finally, the source images are really different. One could train a simple CNN classifier and classify the source with validation accuracy 99.99%. Even so, you can learn a metric learning model on, say, 4 sources and achieve 80%+ homogeneity when extracting embeddings on the remaining 3 sources that the model has never seen.</li> </ul> <p>Therefore, all the models I developed are trying to address these issues. I first tried DetectoRS, but only managed to get past 0.72 LB. So I moved on to the EfficientDet pipeline. I ran some simple tests from D4 to D7, and concluded that D6 with COCO-pretrained weights was the best. Here are my reasonings.</p> <p>D0-D5: not large enough<br> D7: Remark that D7 got its precision from a larger input size (and hence larger anchors). I really couldn't fit 1536-resized images on my GPU and get reasonable results.<br> D6 with noisy student/AdvProp pretrained cls backbone and randomly initialized FPN/box, cls net: Object detection is quite hard. I couldn't make these models converge.</p> <p>The next thing I did was trying to improve the baseline performance of my EfficientDet-D6 model. I tried over 10 different techniques by dissecting the timm-efficientdet repo line by line and hacking into it, but it turned out simplicity and elegance prevailed. 
The best performance is achieved by the default setting of the D6 model (with anchors unchanged; I tried KNN clustering of anchors on the train set but it gave worse results), with the default Huber loss and focal loss. Augmentation-wise, it's just an empirically deduced recipe:</p> <pre><code>A.Compose(
    [
        A.RandomSizedCrop(min_max_height=(800, 1024), height=1024, width=1024, p=0.5),
        A.OneOf([
            A.HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2,
                                 val_shift_limit=0.2, p=0.9),
            A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.9),
        ], p=0.9),
        A.ToGray(p=0.01),
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.RandomRotate90(p=0.5),
        A.Transpose(p=0.5),
        A.JpegCompression(quality_lower=85, quality_upper=95, p=0.2),
        A.OneOf([
            A.Blur(blur_limit=3, p=1.0),
            A.MedianBlur(blur_limit=3, p=1.0)
        ], p=0.1),
        A.Resize(height=1024, width=1024, p=1),
        A.Cutout(num_holes=8, max_h_size=64, max_w_size=64, fill_value=0, p=0.5),
        ToTensorV2(p=1.0),
    ],
    p=1.0,
    bbox_params=A.BboxParams(
        format='pascal_voc',
        min_area=0,
        min_visibility=0,
        label_fields=['labels']
    )
)
</code></pre> <p>and for <code>__getitem__</code>:</p> <pre><code>if self.test or random.random() &gt; 0.33:
    image, boxes = self.load_image_and_boxes(index)
elif random.random() &gt; 0.5:
    image, boxes = self.load_cutmix_image_and_boxes(index)
else:
    image, boxes = self.load_mixup_image_and_boxes(index)
</code></pre> <p>Remark that mixup is the 1:1 version; I didn't get the beta version to work (I tried to reweight the boxes during loss calculation and got worse results). Then I tuned the WBF-over-TTA pipeline (8x TTA, because 16x has duplicates) a bit with basic sanity checks and exclusion of boxes that have a low score or are too big or small. The overall baseline gave a public LB of around 0.753.</p> <p>Then I realized that, similar to the TGS competition I participated in a few years back, we can get more data by doing jigsaws. 
You can check out one of the popular public kernels for how to restore the original full-sized images. After this, I realized two more problems: </p> <ol> <li><p>Boxes over the edge are fractured, in the sense that if a wheat head appears half in one crop and half in another, there will be two boxes in the stitched image. This generates a huge amount of noise and will certainly degrade model performance.</p></li> <li><p>This one is more subtle. Remark that in the data-insight section I talked about the conundrum of boxes over the edge. If we now crop 1024 patches from the larger image, the resulting distribution is actually different from the one represented by the original crops at their special positions. I didn't really document the changes in model performance, but the effect is significant enough to matter for training.</p></li> </ol> <p>My solutions are as follows:</p> <ol> <li><p>I tried to correct every single label manually but got bored after 30 minutes. So I wrote an automatic procedure that does it for me:</p> <ul> <li>Get all boxes that have one coordinate divisible by 1024</li> <li>Prune those over the edge of the big picture</li> <li>Generate the segments over the edge and match each to its box (i.e., we only care about the side on the edge)</li> <li>Calculate pairwise IoU over the edges</li> <li>Pair boxes using a greedy approach</li> <li>Fuse the paired boxes<br> I then used the corrected data to train my models. And, to encourage AP gains, the images have a 0.5 chance of being fed into the network without cropping, e.g., resizing a 2048x2048 image down to 1024x1024 instead of cropping from it. 
I did some ablation experiments, and this does improve model performance.</li></ul></li> <li><p>What I did here is basically treat images obtained from the jigsaw as "pseudo-labels," or simply as images that come from a different distribution.</p> <ul> <li>The first model I developed is based on the paper "A Simple Semi-Supervised Learning Framework for Object Detection". Basically, the resulting loss is a weighted sum of the losses calculated on original images and jigsaw images separately. </li> <li>Then I remembered the trick introduced in the paper "Adversarial Examples Improve Image Recognition", where images from different distributions go through different BatchNorms. So I copied the batchnorm stats from the baseline and replicated them, and during second-stage training original data and jigsaw data pass through different norms, with the stats calculated separately. During inference, I use the stats of the original data. I tried extended scenarios such as a 7-split norm by source, or averaging different splits of norms during inference, all of which gave improvements locally but not on the public LB, so I didn't investigate further.</li></ul></li> </ol> <p>I also designed a detection variant of FixMatch, but I couldn't fit it on my GPU with image size 1024.</p> <p>Finally, the semi-supervised learning part was pretty mundane, as fancy techniques (STAC, SplitBatchNorm, Pi-Model, Mean Teacher, etc.) did not fit in the kernel evaluation time frame. All I did was freeze the backbone to allow training with 16 GB of VRAM, and try different hyperparameters to generate the pseudo-labels. You can see the details in my open-sourced kernels. Most choices are more empirical than insightful.</p> <p>Sorry for this messy solution journal; working alone gives you a bad habit of trying too many things and not documenting them well enough. Also, since I worked alone, I didn't have enough time to investigate some of the ideas above in depth. 
I will seek to improve the solution journal in the upcoming days, and maybe write a tech report if I have time.</p>
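<p>As an appendix: the greedy edge-pairing step from the box-fusion procedure above could look roughly like the sketch below. This is an illustrative reconstruction, not the actual code; the helper names and the IoU threshold are hypothetical.</p>

```python
def edge_iou(seg_a, seg_b):
    """1-D IoU of two intervals (start, end) lying along the same crop border."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def greedy_pair(segs_left, segs_right, thr=0.5):
    """Greedily match edge segments from the two sides of a crop border."""
    pairs, used = [], set()
    for i, a in enumerate(segs_left):
        best, best_iou = None, thr
        for j, b in enumerate(segs_right):
            if j not in used:
                iou = edge_iou(a, b)
                if iou > best_iou:
                    best, best_iou = j, iou
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs
```

<p>Each matched pair would then be fused into a single box spanning the border.</p>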
<h1>SIIM-ISIC Melanoma Classification: 11th Place Solution Writeup</h1>
<p>First of all, I would like to thank my teammates who all did a great job: <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a>, <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a>, <a href="https://www.kaggle.com/skgone123" target="_blank">@skgone123</a> and <a href="https://www.kaggle.com/coolcoder22" target="_blank">@coolcoder22</a>. This was a great learning and collaboration experience and exchange of ideas! Also thanks to Kaggle and competition organizers for setting up this interesting competition!</p> <p>In our solution, we were trying to build a large ensemble of diverse models with a goal to be more robust against overfitting and resist the future shakeup. Results have shown that our strategy worked well: trusting CV and diversity was important to achieve a good result on private LB.</p> <h2>Data</h2> <p>We used 5-fold cross-validation with the data partitioning scheme and <code>tfrec</code> files that were kindly provided by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> in his <a href="https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords" target="_blank">great notebook</a>. We used 2020 data for validation and different combinations of 2017/18/19/20 data for training. </p> <p>We also applied image segmentation to detect and crop the lesions and constructed the cropped data set. Several models in the ensemble were trained using the cropped images.</p> <p>The malignant images in the training folds were upsampled for some of the models. 
</p> <h2>Image Processing</h2> <p>We considered a wide range of augmentations in different models:</p> <ul> <li>horizontal/vertical flips</li> <li>rotation</li> <li>circular crop (a.k.a <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/159476" target="_blank">microscope augmentation</a>)</li> <li>dropout</li> <li>zoom/brightness adjustment</li> <li>color normalization</li> </ul> <p>On the TTA stage, we used the same augmentations as on the training stage and usually varied the number of augmentations between 10 and 20.</p> <h2>Models</h2> <p>We were focusing on EfficientNet models and trained a variety of architectures on different image sizes. Most of the final models were using <code>B5</code> with <code>512x512</code> images. We experimented with adding attention layers and meta-data for some models and played with the learning rates and label smoothing parameters. We also explored Densenet and Inception architectures but observed worse performance.</p> <p>Some models were initialized from the pre-trained weights. To get the pre-trained weights, we fitted CNNs on the complete train + test + external data to predict <code>anatom_site_general_challenge</code> as a surrogate label. Initializing from the pre-trained weights instead of the Imagenet weights improved our CV. I set up a <a href="https://www.kaggle.com/kozodoi/pre-training-on-full-data-with-surrogate-labels" target="_blank">notebook</a> demonstrating the pre-training approach.</p> <p>The best single model was <code>EN-B5</code> trained on <code>384x384</code> with attention and meta features, which achieved private LB of <code>0.9380</code>.</p> <h2>Ensembling</h2> <p>By the end of the competition, the size of our ensemble reached 91 models. 
To address potential train/test inconsistencies we filtered models using two criteria: </p> <ul> <li>Removing models where the mean correlation of predictions with the other models demonstrated a large gap between OOF/test predictions</li> <li>Removing models that ranked high in the adversarial validation model</li> </ul> <p>This reduced our set to 58 models with OOF AUC in between <code>0.8751</code> and <code>0.9377</code>. Based on this set of models, we decided to go with three diverse submissions:</p> <ol> <li>Conservative solution aimed at being robust against overfitting: ranked average of the top-performing models. This reached: <code>CV 0.9474, Public 0.9521, Private 0.9423</code></li> <li>Solution that achieved the best CV. We blended two ensembles: (i) average of top-k model predictions and (ii) stacking with meta-features and prediction stats across the models (min, max, range). The <code>k</code> and ensemble weights were optimized on CV: <code>CV 0.9532, Public 0.9576, Private 0.9459</code></li> <li>Solution that achieved the best public LB. Here, we blended (2) with some of the best public LB submissions including public notebooks. This resulted in: <code>Public 0.9690, Private 0.9344</code></li> </ol> <p>The second solution delivered the best private LB performance and secured us the 11th place. </p> <h2>Conclusion</h2> <p>As expected, aiming for a high public LB score in this competition was dangerous due to the small size of the public test set and high class imbalance. Trusting CV and building a diverse set of models helped us to survive the shakeup.</p>
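<p>The "ranked average" used in the conservative blend can be sketched as follows (a minimal illustration, not the team's actual code):</p>

```python
import numpy as np

def rank_average(preds):
    """Rank-transform each model's predictions to (0, 1], then average across models."""
    preds = np.asarray(preds, dtype=float)
    ranks = np.argsort(np.argsort(preds, axis=1), axis=1) + 1  # 1-based ranks per model
    return (ranks / preds.shape[1]).mean(axis=0)
```

<p>Rank averaging discards each model's calibration and keeps only its ordering, which is all that AUC cares about.</p>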
<h1>TReNDS Neuroimaging: 15th Place (3D ResNet Part)</h1>
<p>Thanks to Kaggle and the host for putting on a really interesting competition. It was really great being on a super talented team with <a href="/cpmpml">@cpmpml</a> <a href="/philippsinger">@philippsinger</a> <a href="/ericfreeman">@ericfreeman</a> and <a href="/fakeplastictrees">@fakeplastictrees</a> - I learned a lot from them in such a short period of time and am thankful that they teamed up. Also congratulations to all the gold medal finishers, I'm really looking forward to reading your solutions!</p> <h2>3D (4D?) Resnet</h2> <p>My main contribution to our team was through building a 3D Resnet. I decided to use Keras, and most of the architecture was built off <a href="https://github.com/JihongJu/keras-resnet3d/blob/master/resnet3d/resnet3d.py">this code I found on GitHub</a>; I just had to modify it a bit to work with TF 2.0. 
The models themselves did not fare well on the LB, but blending them with a good tabular model provided a nice jump.</p> <p>We initially had a 0.1589 LB tabular model -&gt; blending 20/80 with the 3D Resnet improved it to 0.1587 LB -&gt; adding mixup augmentation improved it to 0.1578 LB.</p> <p>I also converted all the fMRI and tabular data to <a href="https://www.tensorflow.org/tutorials/load_data/tfrecord">tfrecords</a> format to optimize loading from disk.</p> <p>The main part of the model itself looked like this:</p> <pre><code>input = Input(shape=input_shape, name='fMRI')

# first conv
conv1 = conv_bn_relu3D(filters=64,
                       kernel_size=(7, 7, 7),
                       strides=(2, 2, 2),
                       kernel_regularizer=l2(reg_factor))(input)
pool1 = MaxPooling3D(pool_size=(3, 3, 3), strides=(2, 2, 2), padding="same")(conv1)

# repeat blocks
block = pool1
filters = 64
for i, r in enumerate(repetitions):
    block = residual_block3d(block_fn,
                             filters=filters,
                             kernel_regularizer=l2(reg_factor),
                             repetitions=r,
                             is_first_layer=(i == 0))(block)
    filters *= 2

# last activation
block_output = bn_relu(block)

# average pool and classification
pool2 = AveragePooling3D()(block_output)
pool_dense = Dense(256*4*4, activation='sigmoid')(pool2)
flatten1 = Flatten()(pool_dense)

# dense output
dense_out = Dense(units=num_outputs)(flatten1)
</code></pre> <h2>Augmentations</h2> <h3>4D Mixup</h3> <p>The biggest gain I made with this model was by adding a customized version of mixup for 4D. It was the same idea as mixup with images: I combined two random images and then made the target for the "new" brain the weighted average of the labels. I did this in the tensorflow dataset pipeline per batch. 
While it did slow down training a good bit (200 seconds per epoch became 300 seconds per epoch), it improved the CV and LB score by ~0.001 after blending.</p> <p>The augmentation code was based off <a href="/cdeotte">@cdeotte</a>'s notebook for 2D images <a href="https://www.kaggle.com/cdeotte/cutmix-and-mixup-on-gpu-tpu">here</a>: </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F644036%2Ff05fcc9a899b7cd0efbe56523c17cb24%2F4dmixup.png?generation=1593481515794349&amp;alt=media" alt=""></p> <pre><code>def mixup4D(image, label, prob=1.0):
    """4D mixup of fMRI images of size (BATCH, 53, 63, 52, 53)."""
    imgs = []; labs = []
    for j in range(BATCH):
        P = tf.cast(tf.random.uniform([], 0, 1) &lt;= prob, tf.float32)
        k = tf.cast(tf.random.uniform([], 0, BATCH), tf.int32)
        a = tf.random.uniform([], 0, 1) * P
        # MAKE MIXUP IMAGE
        img1 = image[j,]
        img2 = image[k,]
        label1 = label[j,]
        label2 = label[k,]
        imgs.append((1 - a) * img1 + a * img2)
        # Average labels
        labs.append((1 - a) * label1 + a * label2)
    image2 = tf.reshape(tf.stack(imgs), (BATCH, 53, 63, 52, 53))
    label2 = tf.reshape(tf.stack(labs), (BATCH, 5))
    return image2, label2
</code></pre> <h3>4D Cutmix</h3> <p>After mixup worked so well, I took a lot more time to get 4D cutmix working... but it didn't show any CV or LB improvements. I'm assuming I could've gotten it to work better if I had perfected the cube shape, but experimentation took too long to fully flesh out. 
I'm guessing that with my implementation, most of the time the cube consisted of area not within the mask region.</p> <h3>Mirror Flip</h3> <pre><code>def random_mirror_flip(image, label):
    imgs = []
    for j in range(AUG_BATCH):
        uniform_random = random_ops.random_uniform([], 0, 1.0)
        mirror_cond = math_ops.less(uniform_random, .5)
        result = control_flow_ops.cond(
            mirror_cond,
            lambda: flip_left_right(tf.reverse(image[j,], axis=[0, 2])),
            lambda: image[j,]
        )
        imgs.append(result)
    image2 = tf.reshape(tf.stack(imgs), (AUG_BATCH, 53, 63, 52, 53))
    return image2, label
</code></pre> <p>I created a custom augmentation for mirror flip. After reading some literature this seemed to be a common technique, but it didn't improve my CV or LB score. I think this is because left/right carries more meaning in the brain than in MRI images of other organs; my intuition is that the mirror flip just made the model lose its knowledge of the different brain regions.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F644036%2Fbb1f1e7d95f685db2a3cb7e17b6b52b8%2Fmirror_flips.png?generation=1593482089335977&amp;alt=media" alt=""></p> <p>That's all. It's a bummer to miss out on gold but I am grateful to have learned so much in this competition.</p>
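<p>The 20/80 blending with the tabular model mentioned at the top is just a weighted average; a minimal sketch (function name mine):</p>

```python
import numpy as np

def blend(tabular_pred, resnet_pred, w_tabular=0.8):
    """80/20 blend of tabular-model and 3D ResNet predictions, as in the write-up."""
    return w_tabular * np.asarray(tabular_pred) + (1 - w_tabular) * np.asarray(resnet_pred)
```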
<h1>TReNDS Neuroimaging: 6th Place Solution</h1>
<p><strong>Amazing team for professional growth, victories and interesting communication:</strong> Sergey Bryansky <a href="/sggpls">@sggpls</a>, Andrei Lupu <a href="/andreils">@andreils</a>, Möbius <a href="/arashnic">@arashnic</a>, Yury Bolkonsky <a href="/bolkonsky">@bolkonsky</a> and Kirill Shipitsyn <a href="/kirillshipitsyn">@kirillshipitsyn</a> 👍 😃 </p> <p><strong>Thanks to all participants and organizers for such an interesting competition!</strong></p> <p><strong>FE:</strong> we converted the 4D images to 3D and trained a 3D CNN autoencoder for each spatial map, giving (1, 1008*53) features per case, then used PCA with different kernels to compress them to (1, 2048) each.</p> <p>We used these features + the originals + site-classifier features for training our pipeline. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1192776%2F16850a2fdcc31314682948022d477deb%2Fscheme.PNG?generation=1593498901941299&amp;alt=media" alt=""></p> <p>We also used a target binning technique and feature selection, which improved our final score <strong>(0.15663 public LB, 6th place; 0.15703 private LB, 6th place).</strong> 🔥 </p>
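<p>A toy sketch of the PCA compression step (dimensions shrunk for illustration; the team compressed per-map features of size 1008*53 down to 2048, trying several kernels):</p>

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
features = rng.normal(size=(20, 100))  # stand-in for flattened autoencoder features

# kernel and n_components here are illustrative, not the team's actual settings
pca = KernelPCA(n_components=8, kernel="rbf")
compressed = pca.fit_transform(features)
print(compressed.shape)  # (20, 8)
```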
<h1>SIIM-ISIC Melanoma Classification: 24th Place Solution</h1>
<p><strong>Trust your CV</strong><br> I entered this competition after PANDA and am surprised by the result.<br> The most important thing here should be your CV. Always trust it. A single positive sample in the public test set can have a 0.0064 effect on the public LB, and this effect is very likely to accumulate during ensembling and selecting.<br> I used <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s JPEG files and his triple stratified CV. They helped me quickly start training and build a strong CV. Thanks for his great work.</p> <p><strong>Best Single Model</strong><br> I used PyTorch and 2x RTX 2080 to generate my results and used no meta data.<br> The best single model is EfficientNet-B5 on 512x512 resolution. I first trained the model on 2019 data with 8 classes, then finetuned it with 2020 + 2018/2017 data. This method improved CV by 0.01 and private LB by 0.005, but gave no improvement on public LB.<br> Augmentations: ImageCompression, Flips, ShiftRotateScale, HueSaturationValue, RandomBrightnessContrast, CutOut.<br> This model achieves 0.944 on CV (before TTA) and 0.9447-0.9462 on private LB (depending on the averaging method). Sadly I picked the lowest one. xD.</p> <p><strong>Ensemble</strong><br> I ensembled this model with some of my previous models. Most of these overfitted.<br> The ensembles score 0.9452-0.9492 on private LB. Made the wrong choice again. xD.</p>
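<p>The sensitivity claim above (a single positive sample moving the public LB) follows from how AUC works: with few positives, one sample's rank controls a large share of the positive-negative pairs. A toy illustration (toy sizes, not the real test set):</p>

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y = np.array([1, 1] + [0] * 8)            # 2 positives, 8 negatives
good = np.array([0.9, 0.8] + [0.1] * 8)   # both positives ranked on top
bad = good.copy()
bad[1] = 0.0                              # one positive drops to the bottom

print(roc_auc_score(y, good))  # 1.0
print(roc_auc_score(y, bad))   # 0.5: one sample moved AUC by 0.5
```

<p>With the real public test set the swing per sample is much smaller, but the same mechanism applies.</p>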
<h1>SIIM-ISIC Melanoma Classification: 302nd Place Solution Writeup - Always Trust Your CV</h1>
<p>This competition was very tough for me because I thought I was stuck at the bottom of the LB and everybody seemed to be way ahead of me. But I stuck to my gut, trusted my CV, and in the end it paid off.</p> <h3>Data preprocessing:</h3> <p>To prevent leaks, I used MD5 hashes to remove all duplicates in both the original and external data.</p> <h3>Data augmentation:</h3> <ul> <li>ShiftScaleRotate</li> <li>Cutout </li> <li>Flip </li> <li>BrightnessContrast.</li> </ul> <h3>Validation Strategy:</h3> <p>At first I combined all data and did Triple Stratified KFold, but the CV scores were way too high compared to the public LB. Then I experimented with Triple Stratified KFold on the original training set and noticed that the CV and LB gaps were much smaller (0.917 - 0.919). So I concluded the test data must have a similar distribution to the original training data.<br> My final strategy: Triple Stratified KFold on the original training set, including external data during training of each fold.</p> <h3>Model (Image size):</h3> <ul> <li>B4 (384)</li> <li>B5 (456)</li> <li>B6 (512)</li> </ul> <h3>Ensemble:</h3> <p>For each model, take the average of 3 folds. Then combine all models' predictions using a power average with p = 2.</p> <p>Overall validation scores (average of 3 models): 0.938 CV, 0.389 Public LB, 0.376 Private LB.<br> I'm super happy because my CV, public LB and private LB are in line 😁😁.<br> Training kernel (TPU): <a href="https://www.kaggle.com/quandapro/isic-training-tpu" target="_blank">https://www.kaggle.com/quandapro/isic-training-tpu</a>.</p>
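<p>The power average with p = 2 used in the ensemble can be written as (a minimal sketch, function name mine):</p>

```python
import numpy as np

def power_average(preds, p=2):
    """Power mean with exponent p across models' prediction vectors (p=2 in the write-up)."""
    preds = np.asarray(preds, dtype=float)
    return (preds ** p).mean(axis=0) ** (1.0 / p)
```

<p>With p &gt; 1, a confident high prediction from any single model pulls the blend up more than a plain mean would.</p>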