Lyft Motion Prediction for Autonomous Vehicles
23rd solution (single model based on Resnet18)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft Motion Prediction for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This was a nice dataset to play with. It is like HPC simulations: one has to relax and let things run for a long time. To my surprise, one can go very far using simple concepts and published work. My solution is simply an implementation of two papers from Uber:<br> 1- Uncertainty-aware Short-term Motion Prediction of Traffic Actors for Autonomous Driving, <a href="https://arxiv.org/abs/1808.05819" target="_blank">https://arxiv.org/abs/1808.05819</a><br> 2- Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks, <a href="https://arxiv.org/abs/1809.10732" target="_blank">https://arxiv.org/abs/1809.10732</a></p> <p>Now to some details which I think contributed to my best performing model:</p> <p>1- Sample the data to break the serial correlation in time. I only trained on frames with 50+10+1 gaps. I wanted to avoid data leakage, so I kept a gap of history length (10) + future prediction (50) + (1) extra. Also, if a frame is too crowded (i.e.
too many agents), I sampled a subset of the agents (20 in my code below).</p> <pre><code>def get_dataset_masks(
        zarr_path, zarr_dt, th_agent_prob: float,
        min_frame_history: int, min_frame_future: int,
        scene_mask=None, chop_data=False,
        chop_idx_list=[100, 200], chop_agents=False):
    """Modified from create_chopped_dataset.

    Returns:
        mask: numpy array of bool to pass to AgentDatasetExtended
    """
    if scene_mask is None:
        scene_mask = np.ones(len(zarr_dt.scenes), dtype=np.bool)
    else:
        assert len(zarr_dt.scenes) == len(scene_mask), "mask should be equal length"
    agents_mask_path = Path(zarr_path) / f"agents_mask/{th_agent_prob}"
    if not agents_mask_path.exists():  # don't check in root but check for the path
        assert 0
    agents_mask_original = np.asarray(convenience.load(str(agents_mask_path)))
    agents_mask = np.zeros(len(zarr_dt.agents), dtype=np.bool)
    # for scene in zarr_dt.scenes.get_mask_selection(scene_mask):
    for scene_idx in range(len(zarr_dt.scenes)):
        if scene_mask[scene_idx] == 0:
            continue
        scene = zarr_dt.scenes[scene_idx]
        if chop_data:
            for num_frame_to_copy in chop_idx_list:
                kept_frame = zarr_dt.frames[scene["frame_index_interval"][0] + num_frame_to_copy - 1]
                agents_slice = get_agents_slice_from_frames(kept_frame)
                # In create_chopped_dataset: no mask for min_frame_history
                mask = agents_mask_original[agents_slice][:, 1] &gt;= min_frame_future
                num_agents_per_frame = mask.sum()
                max_agents_per_frame = 20
                if chop_agents and num_agents_per_frame &gt; max_agents_per_frame:  # or 10
                    removed_indices = np.random.choice(
                        np.where(mask)[0],
                        num_agents_per_frame - max_agents_per_frame,
                        replace=False)
                    mask[removed_indices] = False
                agents_mask[agents_slice] = mask.copy()
        else:
            first_frame = zarr_dt.frames[scene["frame_index_interval"][0]]
            last_frame = zarr_dt.frames[scene["frame_index_interval"][1] - 1]
            agents_slice = get_agents_slice_from_frames(first_frame, last_frame)
            past_mask = agents_mask_original[agents_slice][:, 0] &gt;= min_frame_history
            future_mask = agents_mask_original[agents_slice][:, 1] &gt;= min_frame_future
            mask = past_mask * future_mask
            agents_mask[agents_slice] = mask.copy()
    return agents_mask
</code></pre> <p>2- I had a clean implementation of the Multiple-Trajectory Prediction (MTP) loss. I trained on the MoN loss and validated on the NLL loss.</p> <pre><code>def uber_like_loss_new(gt, pred, confidences, avails):
    batch_size, num_modes, future_len, num_coords = pred.shape
    # ensure that your model outputs logits
    gt = gt[:, None, :, :]  # add modes
    avails = avails[:, None, :, None]  # add modes and coords
    l2_error = torch.sum(((gt - pred) * avails) ** 2, dim=-1)  # reduce coords and use availability
    l2_error = torch.sum(l2_error, dim=-1)  # reduce future_len
    best_mode_target = torch.argmin(l2_error, dim=1).detach()
    classification_loss = torch.nn.functional.cross_entropy(
        confidences, best_mode_target, reduction='none')
    alpha = 1.0
    MoN_error = classification_loss + alpha * l2_error[torch.arange(batch_size), best_mode_target]
    MoN_error = MoN_error.reshape(-1, 1)
    error = torch.nn.functional.log_softmax(confidences, dim=1) - 0.5 * l2_error
    max_value, _ = torch.max(error, dim=-1, keepdim=True)  # errors are negative at this point, so max() gives the minimum one
    nll_error = -torch.log(torch.sum(torch.exp(error - max_value), dim=-1, keepdim=True)) - max_value
    return MoN_error, nll_error
</code></pre> <p>3- I extracted some metadata from the AgentDataset.
Here is a minimal implementation:</p> <pre><code>class AgentDatasetExtended(AgentDataset):
    def __init__(
            self, cfg, zarr_dataset, rasterizer, perturbation, agents_mask,
            min_frame_history, min_frame_future, transform, l5kit_version):
        assert perturbation is None, "AgentDataset does not support perturbation (yet)"
        super(AgentDatasetExtended, self).__init__(
            cfg, zarr_dataset, rasterizer, perturbation, agents_mask,
            min_frame_history, min_frame_future)
        self.min_frame_future = min_frame_future
        self.min_frame_history = min_frame_history
        self.transform = transform
        self.l5kit_version = l5kit_version

    def __getitem__(self, index: int) -&gt; dict:
        """ Differs from parent returning the indices of the frame, agent and scene """
        if index &lt; 0:
            if -index &gt; len(self):
                raise ValueError("absolute value of index should not exceed dataset length")
            index = len(self) + index
        index = self.agents_indices[index]
        track_id = self.dataset.agents[index]["track_id"]
        frame_index = bisect.bisect_right(self.cumulative_sizes_agents, index)
        scene_index = bisect.bisect_right(self.cumulative_sizes, frame_index)
        if scene_index == 0:
            state_index = frame_index
        else:
            state_index = frame_index - self.cumulative_sizes[scene_index - 1]
        data_dic = self.get_frame(scene_index, state_index, track_id=track_id)
        # track_id = self.dataset.agents[index]["track_id"]
        # centroid = self.dataset.agents[index]["centroid"]
        # yaw = self.dataset.agents[index]["yaw"]
        velocity = self.dataset.agents[index]["velocity"]
        label_probabilities = self.dataset.agents[index]["label_probabilities"]
        # data_dic['track_id2'] = np.int64(track_id)
        # data_dic['centroid2'] = centroid
        # data_dic['yaw2'] = yaw
        data_dic['velocity'] = velocity
        data_dic['label_probabilities'] = label_probabilities
        data_dic['ego_translation'] = self.dataset.frames[frame_index]["ego_translation"]
        data_dic['ego_rotation'] = self.dataset.frames[frame_index]["ego_rotation"]  # matrix
        data_dic['timestamp'] = self.dataset.frames[frame_index]["timestamp"]
        data_dic['hour'] = datetime.fromtimestamp(data_dic['timestamp'] / 1e9).hour
        data_dic['weekday'] = datetime.fromtimestamp(data_dic['timestamp'] / 1e9).weekday()
        if self.transform:
            data_dic = self.transform(data_dic)
        return data_dic
</code></pre> <p>4- I had a second head with 47 inputs (probably too many), which is fed all sorts of metadata (label_probabilities, yaw, extent, acceleration, ego_translation, ego_centroid_diff, hour, weekday) and then passed through two fully connected layers before merging with the pooling layers of the backbone. This concatenated vector is then passed to a dense layer, a ReLU, and then the output layer.</p> <p>5- I applied a <code>cumsum</code> function to the NN output to force the NN to produce something like per-step differences, then I applied the image_to_world_matrix transformation before calling the loss function on the real coordinates. I wanted the output of my NN model to be (conceptually) in image space. </p> <p>There are many small additional details. I used a lookahead optimizer wrapped around Adam with a starting learning rate of 1e-4, and then decreased the LR by a factor of 0.99 every 100,000 steps over batches of 32 samples (which takes about an hour on my machine, an RTX-Titan-X). The learning rate is reduced after 20 iterations over the 320k samples. The best solution is reached after 110 of these 320k-sample passes (best model: private LB 13.580, public LB 14.265). I then averaged 7 models based on checkpointing (private LB 13.154, public LB 13.722). My image size is 336x336 with a resolution of 0.25x0.25, trained on the full data with the following frames: <code>chop_idx_list_train = [11, 11 + 61, 11 + 2 * 61, 11 + 3 * 61]</code></p> <p>Probably a few simple modifications could produce a slightly better score:<br> (a) A bigger batch of 64 samples might perform better (diversity of the samples). <br> (b) Changing the backbone to ResNet50 or MobileNet-V2</p>
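<p>Point 5 (the <code>cumsum</code> trick) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the author's actual code; the helper name and the 3x3 <code>image_to_world</code> matrix are assumptions.</p>

```python
import numpy as np

def decode_trajectory(deltas, image_to_world):
    """Hypothetical decoder: the network predicts per-step displacements in
    image space (shape (T, 2)); cumsum turns them into positions, which are
    then mapped to world coordinates with a 3x3 affine matrix so the loss
    can be computed on real coordinates."""
    positions = np.cumsum(deltas, axis=0)               # (T, 2) image-space points
    ones = np.ones((positions.shape[0], 1))
    homo = np.hstack([positions, ones])                 # homogeneous coordinates
    world = homo @ image_to_world.T                     # apply the affine transform
    return world[:, :2]
```

<p>With a pure scaling matrix of 0.25 (matching the stated raster resolution), constant unit deltas decode to a straight line advancing 0.25 per step.</p>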
Lyft Motion Prediction for Autonomous Vehicles
14th place solution. Custom Mask and LSTM encoder/decoder
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft Motion Prediction for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Great thanks to the Lyft and Kaggle teams for hosting this competition. Congrats to all the winners. And a special thanks to all my teammates for teaming up!! The time I spent in discussion with them was very insightful and most enjoyable.</p> <h1>Create Mask</h1> <p>The key point of our solution is the custom mask we made.<br> Agent data one frame apart within the same scene is very similar. So, in order to learn efficiently, we created a custom mask so that data is loaded only every N frames. For example, by sampling data every 5 frames, the training data is reduced by a factor of 5, allowing for faster learning.<br> To make it even more efficient, we balanced the data according to the target distance. About 65% of the data had a target distance of less than 6 meters, and these were easy tasks. So, in order to train efficiently on the difficult data, we downsampled the data below 6 m. This halved the data. In the end, the amount of data was roughly 80,000 iterations × a batch size of 64.</p> <h1>Model architecture</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1057275%2F98319e9931caf1a99cbfdfdbf0c78237%2Ffinal_model.jpg?generation=1606994527055759&amp;alt=media" alt=""></p> <p>In addition to a CNN, we used an LSTM encoder and LSTM decoder.</p> <ul> <li>Image size: 300x300</li> <li>Optimizer: Adam</li> <li>Scheduler: Onecyclelr</li> <li>Num history: 10</li> <li>Target: positions + cosine/sine of yaw</li> </ul> <p>First, we trained the model with the target-distance-balanced mask, then did additional training with the non-balanced mask.
We used a log10-scaled loss during this additional training, because the train/validation loss repeatedly went up and down by a large margin. After two days of model training using this method, we got a public score under 13 and a private score of about 12.</p> <h1>Didn’t work</h1> <p>history_positions more than 10 (public score around 15, slow convergence)<br> metalabeling<br> Seq2seq-like LSTM decoder</p> <hr> <h1>Discussion</h1> <h2>Stacking</h2> <p>As discussed in <a href="https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles/discussion/199531#1091632" target="_blank">this thread</a>, we found that stacking boosted the score, but we had no time left to integrate the best model into the stacking…</p> <h2>Satellite image</h2> <p>Until halfway through, I had mistakenly used satellite_debug as the 3-channel map information instead of semantic_map. Interestingly, this model got a public score of 13.3.<br> I didn’t pursue it further because I realized the mistake, but it seems that satellite maps can score reasonably well in this task. Try it if you like!!!</p> <h2>3D-CNN (R2+1D)</h2> <p>We rasterized image data in the form RGB×T×H×W, then fed it into R2+1D.<br> The validation score was equivalent to resnet18, but the rasterization process and the 3D-CNN model were so heavy that training required two to three times as much time.</p> <h2>Only sequence model</h2> <p>We trained a Seq2seq model not only with the target agent’s history positions, but also with the surrounding vehicles’ history positions. Despite having no map information, this model got a validation score of 24. We integrated this trained seq2seq model into the trained resnet18 model and fine-tuned the concatenated model. Sadly, the improvement in score was marginal.</p>
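<p>The masking idea above (every-N-frame sampling plus downsampling of easy, short-distance targets) can be sketched as follows. This is a hypothetical NumPy helper under assumed names, not the team's actual code.</p>

```python
import numpy as np

def make_custom_mask(frame_ids, target_distances, every_n=5,
                     easy_keep_prob=0.5, seed=0):
    """Keep only every N-th frame, then randomly drop 'easy' agents whose
    target distance is below 6 m with probability 1 - easy_keep_prob."""
    rng = np.random.default_rng(seed)
    mask = (frame_ids % every_n) == 0                  # sample every N frames
    easy = target_distances < 6.0                      # ~65% of the data is easy
    drop = easy & (rng.random(frame_ids.shape[0]) >= easy_keep_prob)
    return mask & ~drop
```

<p>With <code>easy_keep_prob=0.5</code>, the under-6 m population is roughly halved, matching the "This halved the data" observation above.</p>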
Lyft Motion Prediction for Autonomous Vehicles
6th place: Micro-inputs, Lots of Data + Distance Order to Ensemble!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft Motion Prediction for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First up: great competition! The data was about as stable as I’ve ever encountered on Kaggle – kudos to the organisers. Congratulations to all of the winners - and to everyone who worked hard and completed the competition. </p> <p>It was a pleasure to work with <a href="https://www.kaggle.com/rytisva88" target="_blank">@rytisva88</a> and <a href="https://www.kaggle.com/sheriytm" target="_blank">@sheriytm</a> on this – thank you both! </p> <p>I’m sure that my teammates will post separately, so I’ll stick with some observations from my own workflow here. Very interested to hear how everyone else approached the problem, as there was a lot of scope for different approaches! </p> <p>I ended up with two small input sizes in the interest of speed: 128 + 5 channels, 196 + 5 channels. </p> <ul> <li>The 128+5 model got to 11.9 after 68.8M samples. </li> <li>The 196+5 model reached 11.15 after 71.4M samples. </li> </ul> <p>Reading others’ comments and solutions it seems that some of the items that I thought were key to doing well were not, in fact! I’m sure cherry picking ideas that worked from different teams could lead to something very interesting.</p> <p><strong>Data, data, data:</strong></p> <p>There seemed to be one major key to success in this competition: how much data you could access, and how you sampled it. 
</p> <p>Early on it became clear that grouping scenes while training gave a significant uplift: when a single agent from each scene was selected before moving to the next iteration, scores improved.</p> <p>Taking this idea and following it through to train_full.zarr gave the next breakthrough: training on a chopped version of train_full.zarr yielded further improvements. </p> <p>The next jump in performance came from training on multiple chops of train_full.zarr. (You can adapt the l5kit code to create lightweight chops comprising just a small number of historic frames. This allowed for big savings on RAM/disk space.)</p> <p>We debated why there were such performance gains from training using chopped versions of train_full.zarr rather than randomly accessing indices. From early experiments it seemed like ensuring scene diversity was important: in the same way that we might enforce class balance while training, enforcing scene diversity (and perhaps more fundamentally, driver diversity?) seemed to matter. </p> <p><strong>Raster size v data coverage tradeoff:</strong></p> <p>Covering as much data as possible mattered, and training for as long as possible mattered. A single chop of train_full.zarr (approx 825K samples) could be shown to the model 12 times before it stopped learning. Obviously if you could show the model different samples you would do a lot better! But this seemed to be the hard limit.</p> <p>Keeping the model as small as possible meant it could iterate through these samples much more quickly. The inputs for the models that I ended up using were raster size 128 and 196, history_num_frames = 5, condensed into five input channels: </p> <p>sum(agent_history), agent_current, sum(ego_history), ego_current, sum(semantic_map)</p> <p>I was originally using Resnet18, but then switched to Resnest50 on <a href="https://www.kaggle.com/rytisva88" target="_blank">@rytisva88</a>'s recommendation and it gave an improvement of about -1 in nll. 
(Incidentally, <a href="https://www.kaggle.com/rytisva88" target="_blank">@rytisva88</a> had the best performing single model in our group).</p> <p>Summing the semantic map meant that red and green traffic signals were treated the same. This seemed to work fine, surprisingly. Perhaps because only yellow gave additional information not already contained in the traffic movement.</p> <p><strong>Acceleration:</strong></p> <p>Eyeballing the predicted trajectories, it became clear that the models were ultimately making a bet on acceleration: typically, modes 0, 1, 2 represented trajectories arising from different agent speeds. </p> <p>Once it became clear that this was key it was possible to look at sampling the data such that we balanced these cases. </p> <p>A scene containing lots of agents was indicative of traffic. Gridlocked traffic obviously does not move much and leads to a lot of duplication in inputs. The models implemented a sampling scheme whereby scenes with a large number of agents were undersampled. The sampling proportion was: min(1, 7/agent_count). Thus, scenes containing 14 agents had 50% of those agents selected for each training iteration, etc.</p> <p><strong>Ensembling:</strong></p> <p>This was initially a tricky one: averaging models based on confidence values didn’t work. However, when the importance of acceleration was taken into account, the route to ensembling made sense: order the model modes by distance covered, then average the results. When this was implemented all models could be ensembled very quickly, with positive results. We also looked at incorporating curvature here, but it didn’t make any difference: distance was the key.</p> <p>The final ensemble optimized weights on the validation set. The weighting scheme incorporated distance and confidence values. 
</p> <p><strong>Code:</strong></p> <p>Code for my contribution to our team solution can be found <a href="https://github.com/ciararogerson/Kaggle_Lyft" target="_blank">here</a></p> <p><strong>Ideas that didn’t work: many! Here are a few…</strong></p> <p><em>Traffic lights:</em></p> <p>I couldn’t get additional traffic light information to add anything: many different angles were tried! The most promising was probably traffic light persistence: we included an additional channel in the model where, instead of traffic light lane lines, we drew lines containing the number of frames since the traffic light had turned to its current colour (up to a maximum of 100). Ultimately this didn’t add anything.</p> <p><em>Day/hour:</em></p> <p>Adding channels for day/hour values proved better than concatenating them directly before the dense layers of the model, but it still didn’t help much.</p> <p><em>Interpolation:</em></p> <p>Having the penultimate model layer output 25 sets of (x, y) points, followed by an interpolation between these points to make the final 50-point trajectory, did not work.</p> <p><em>Weighted loss function:</em></p> <p>Given the importance of capturing acceleration, we tried a version of the loss function that weighted the last 10 points of the trajectory equal to the first 40. This didn’t help.</p>
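<p>The distance-ordered ensembling described above can be sketched like this (an illustrative NumPy version; the names and shapes are assumptions). Each model's modes are sorted by total path length before mode-wise averaging, so "slow" modes are averaged with "slow" modes and "fast" with "fast".</p>

```python
import numpy as np

def ensemble_by_distance(model_preds, model_confs):
    """model_preds: list of (M, T, 2) trajectory arrays, one per model;
    model_confs: list of (M,) confidence arrays (each summing to 1)."""
    sorted_preds, sorted_confs = [], []
    for preds, confs in zip(model_preds, model_confs):
        steps = np.diff(preds, axis=1)                      # (M, T-1, 2) step vectors
        dist = np.linalg.norm(steps, axis=-1).sum(axis=-1)  # (M,) path length per mode
        order = np.argsort(dist)                            # slowest to fastest mode
        sorted_preds.append(preds[order])
        sorted_confs.append(confs[order])
    mean_preds = np.mean(sorted_preds, axis=0)              # average matching modes
    mean_confs = np.mean(sorted_confs, axis=0)
    return mean_preds, mean_confs / mean_confs.sum()
```

<p>Sorting before averaging is what makes the blend meaningful: averaging mode 0 of one model with mode 0 of another is only sensible once "mode 0" means the same thing (here, the shortest trajectory) in every model.</p>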
Lyft Motion Prediction for Autonomous Vehicles
1st Place Solution & L5Kit Speedup
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft Motion Prediction for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>1st Place Solution &amp; L5Kit Speedup</h1> <h2>Introduction</h2> <p>First of all, I would like to thank Lyft for this great challenge and of course for their huge dataset. I must say I was a bit overwhelmed by the sheer number of agents in the training set, but it was a fun experience to be able to train without ever using training data twice. Well, we did, when we used pre-trained models, but more on that later.</p> <p>I also want to thank my teammates <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>, <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> and <a href="https://www.kaggle.com/nvnnghia" target="_blank">@nvnnghia</a> for the great collaborative teamwork during the competition. We worked well together with a pipeline consisting of two repositories (a version-controlled custom l5kit and a training repo) and used logging in neptune.ai to keep track of the experiments. </p> <h2>TL;DR</h2> <p>Our solution is an ensemble of 4 EfficientNet models, each trained with different rasterization parameters on train_full.zarr and stacked on validation.zarr. Key to training that many models on this huge dataset were several improvements to the original l5kit repository to remove the CPU bottleneck and enable efficient multi-GPU training using PyTorch. </p> <h2>Improvements/Changes to the L5Kit</h2> <p>One of the major challenges in this competition was the slow rasterizer. We profiled the l5kit a lot and pinpointed the bottlenecks to speed it up by a factor of 4+.
After that, our experiments were mostly GPU limited, and multi-GPU experiments allowed for "fast" iterations. Running through train_full (191_177_863+ samples) was possible within 2 days with medium-sized models. </p> <h3>Speedups</h3> <p>Speed before our changes, with the semantic view only (i5-3570K, single thread, so ignore the absolute times):</p> <pre><code>Raster-time per sample: 0.124 s

Hits         Time  Per Hit   % Time  Line Contents
==================================================
   9   11147507.0 1238611.9    93.6  rasterizer.rasterize(history_frames, history_agents, history_tl_faces, selected_agent)
</code></pre> <p>All the speedups were achieved with plain Python, no C++. The main speedups came from:</p> <ol> <li>Batched transformation of boxes with modifications from <a href="https://github.com/lyft/l5kit/pull/167" target="_blank">https://github.com/lyft/l5kit/pull/167</a>. <br> It has been said that the transformation is the bottleneck in the rasterizer, as it is called a lot. We used the vectorized transformation wherever possible.</li> <li>In <code>box_rasterizer.py</code>, using Python lists of small numpy arrays instead of two large numpy arrays (<code>agent_images, ego_images</code>).<br> Python lists are surprisingly fast! <code>np.concatenate</code> on large numpy arrays is slow.
Pre-allocation and writing to a single large array is even slower.<br> The lists can be cast to a numpy array by <code>out_im = np.asarray(agents_images + ego_images, dtype=np.uint8)</code><br> This way, the boxes stay in <code>uint8</code></li> <li>Moving concatenate to the GPU<br> As <code>np.concatenate</code> is slow on large arrays, we do it on the GPU.<br> Replace <code>"image": image,</code> with <code>"image_box": image_box</code>, <code>"image_sat": image_sat</code>, <code>"image_sem": image_sem</code></li> <li>Speedups from <a href="https://github.com/lyft/l5kit/pull/140" target="_blank">https://github.com/lyft/l5kit/pull/140</a><br> With the changes above, they now made a larger impact than the 7-8% stated in the PR.</li> </ol> <p>Speed after our changes, with semantic and satellite views (i5-3570K, single thread, so ignore the absolute times):</p> <pre><code>Raster-time per sample: 0.032 s

Hits         Time  Per Hit   % Time  Line Contents
==================================================
   9    2847054.0  316339.3    84.2  rasterizer.rasterize(history_frames, history_agents, history_tl_faces, selected_agent)
</code></pre> <h3>Additions/Changes</h3> <ul> <li>New rasterizer which combines satellite and semantic view (<code>py_sem_sat</code>).</li> <li>Flag: AgentID used as a value for drawing (instead of just boolean <code>no agent = 0</code>, <code>agent = 1</code>)</li> <li>Flag: AgentLabelProba used as a value for drawing (instead of just boolean <code>no agent = 0</code>, <code>agent = 1</code>)</li> <li>Flag: Velocity used as a value for drawing (instead of just boolean <code>no agent = 0</code>, <code>agent = 1</code>)</li> <li>Option to draw all objects on the semantic layer, combinable with the label probas from above.</li> <li>Filter options for agents: <code>th_extent_ratio, th_area, th_distance_av</code> to give a larger training dataset (or a harder one, depending on how you want to phrase it)</li> <li>Multiple parallel rasterizers for ensembling models that use different raster sizes</li>
<li>Satellite view fix for non-square shapes (it had some weird transformation in it before)</li> </ul> <p><img src="https://i.imgur.com/G3g9lEJ.png" alt="rasterized_image"></p> <p>We will make our L5Kit code changes public in the next few days, after cleanup. Most likely as PRs to the original repo.</p> <h2>Journey &amp; Experiments</h2> <p>With all the speed improvements from our custom L5Kit we were able to eliminate the previous CPU bottleneck and run experiments on an efficient multi-GPU setup. We mostly trained on 8xV100 nodes using PyTorch's awesome distributed data parallel (DDP), which makes it possible to effectively scale a training script to multiple GPUs. This efficient training setup was key to running quite a few experiments (see the list of things that did not work below). In all our experiments, we used the chopped validation set for local validation, as discussed <a href="https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles/discussion/188695" target="_blank">here</a>. This gave us a very good prediction of the LB score (in the sub-13 range, the score was usually ~0.2 higher than local validation, with a standard deviation of around 0.3). </p> <p>Starting from the provided baseline and the multi-mode prediction from <a href="https://www.kaggle.com/corochann/lyft-training-with-multi-mode-confidence" target="_blank">here</a>, we experimented with raster size, pooling layers, head modifications, different backbones, learning rate schedulers, and subsampling and oversampling, to name a few. A major challenge in this competition was that experiments had to be run for at least 50% of a full epoch of train_full.zarr, or, when comparing different sample sizes or learning rate schedulers, even to the end, to be really comparable.
Subsampling the train set or stepwise decreases of the LR quickly yielded a low training and validation loss, but in the end the performance was worse.</p> <p>Quite quickly, we decided to always use all the data, and settled on a setup using a shuffled train_full.zarr (with a fixed seed, to be able to resume training) with a linear-decay learning rate scheduler. To keep the training samples fixed, we also settled on an AgentDataset with <code>min_frame_history=1</code> and <code>min_frame_future=10.0</code> to best resemble the test dataset. As opposed to the many discussions about a much lower training loss, we never experienced that. Without any augmentation, the training loss is of course slightly lower, especially at the end of the epoch, but usually within 80% of the validation score. We found that larger backbones help to reach lower loss levels, but starting at the size of an EfficientNetB7, the nets seem to become overconfident in their predictions and the LB score was sometimes shaky. </p> <p>The first half of the test set is public, the second half is the private test. We couldn't identify any major differences between the two sets. With that knowledge it was possible to hide one's true score (by setting some public rows to zeros); we wonder how many teams did that and were afraid of surprises on the private LB because of it. </p> <h2>Best Single Model</h2> <p><img src="https://i.imgur.com/YC1WGD1.png" alt="model"></p> <p>Our best single model is actually one that had not fully finished training on the last day.
It's just an EfficientNetB3 with a linear-layer head and dropout attached, trained on an extended train_full.zarr dataset (<code>min_frame_history=1, min_frame_future=5.0, th_distance_av=80, th_extent_ratio=1.6, th_area=0.45</code>)</p> <ul> <li>history_num_frames: 30.0</li> <li>raster_size: [448, 224]</li> <li>pixel_size: [0.1875, 0.1875]</li> <li>rasterizer: py_sem_sat</li> <li>batch_size: 64</li> <li>dropout: 0.3</li> </ul> <p>The model was pre-trained for 4 epochs on different lower-resolution images (starting with raster sizes 112 and 64 and a pixel size of 0.75, 0.75), so you can argue this is also an ensemble of some kind. The 5th epoch was trained with the parameters above and a very customized learning rate schedule that starts with a cosine anneal and then transitions to a stepwise LR scheduler, as seen in the graph below. The final evaluation score is 9.697.</p> <p><img src="https://i.imgur.com/4Q2PkvX.png" alt="best_single"></p> <p>The model achieves a public LB score of 9.776 and a private LB score of 9.070. With that, it would have ranked 1st in this competition on its own.</p> <p>Pretraining is VERY important! We discovered this through a mistake where we loaded an EfficientNet without pretrained ImageNet weights. It performed significantly worse, by ~1.00.</p> <h2>Ensemble</h2> <p>Ensembling proved to be challenging in this competition, as traditional methods do not work well with the metric in use. We tried a lot of manual blending and forms of post-processing without success. We then started building stacking models taking the raw predictions of the models as input, but this was also, surprisingly, not working well. At about the same time as <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> posted it <a href="https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles/discussion/193908#1077899" target="_blank">here</a>, we had the idea of using the features/embeddings from each model as input to an ensemble head.
This worked much better, and we could improve upon our single models, especially as we could introduce some diversity into the blend (different image sizes, pixel sizes, etc.). We also went with the dropout + single linear layer approach here; all modifications we tried to the head performed worse.</p> <p><img src="https://i.imgur.com/2I3M2rD.png" alt="ensemble_arch"></p> <p>To prevent overfitting, the ensembling was done on the chopped validation set (without using the chopped-off actual validation part), in some experiments extended to validation + test set, without noticeable change in the metric but with about twice the runtime. </p> <p>Ensembles of two similar models (even just two different checkpoints) already gave a boost of about 0.2 in the metric. Naturally, with more diverse models the ensemble performed even better, and we were able to achieve our best public LB score of 9.319 and private LB score of 8.579 with an ensemble of 4 models (B3, B5, B6, B6) on two different raster sizes. </p> <h2>Bootstrapped Validation</h2> <p>We evaluated model and ensemble performance with the bootstrap method on the chopped dataset. For the more diverse ensembles we also got the lowest standard deviation from the bootstrap method; for our final submission it was 0.22754. The best single models had a standard deviation of ~0.27. We also identified our B7 and B8 models as having a slightly larger bootstrap standard deviation of ~0.36-0.39, which may also explain their partial performance degradations on the public LB. Despite their good validation scores, we therefore excluded them from our final ensembles and final submissions. </p> <p><img src="https://i.imgur.com/l4vEAaz.png" alt="bootstrap"></p> <p>We plotted CV vs public LB for our submissions and found a spread of +-0.3 for most of our single models (green corridor), with just the B7 and B8 models outside of that corridor (not shown in the picture).
Our ensembles are in an even smaller corridor with a spread of +-0.1 (orange corridor), proving their superior robustness and generalization capabilities over single models. </p> <p><img src="https://i.imgur.com/S6fYsBI.png" alt="cv_lb"></p> <p>Given this robust CV &amp; LB correlation, we were very confident that improvements in validation also led to improvements on the leaderboard, so we did not have to submit every model. So the sudden jump on the public leaderboard only happened after we decided to submit our better models and was not some sudden magic we discovered.</p> <h2>What Didn't Work</h2> <ul> <li>Flag: AgentID used as a value for drawing (instead of just boolean <code>no agent = 0</code>, <code>agent = 1</code>)</li> <li>Flag: AgentLabelProba used as a value for drawing (instead of just boolean <code>no agent = 0</code>, <code>agent = 1</code>)</li> <li>Option to draw all objects on the semantic layer, combinable with the label probas from above.</li> <li>Kalman filtering the output</li> <li>Any non-NN prediction method</li> <li>Prediction of only the difference to a constant velocity model</li> <li>Using additional information, such as velocity, history positions, label probas or anything else we could find in the head of the models. </li> <li>Heavier heads</li> <li>3D CNNs</li> <li>RNN-based approaches on history frames for backbone input or output </li> <li>Using additional features from shallower layers of the backbone</li> <li>Ensembling with k-means or other analytic clustering methods that we tried. Some success at scores down to around 14, but no improvements below that. </li> <li>Subsampling the training set. Huge speedups, but accuracy was always slightly lower</li> <li>TTA with modified yaw angles</li> <li>Unfreezing the backbone after ensembling and continuing training with a second epoch</li> </ul> <h2>What We Didn't Try</h2> <ul> <li>Augmentation</li> <li>Multi backbone, multi rasterizer training</li> </ul>
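The bootstrapped-validation estimate of the metric's standard deviation described above can be sketched as follows. This is a hedged illustration, not the team's code: `per_agent_nll` and `bootstrap_metric_std` are hypothetical names, and the losses are synthetic stand-ins for per-agent scores on the chopped validation set.

```python
import numpy as np

def bootstrap_metric_std(per_agent_nll, n_boot=1000, seed=0):
    """Resample agents with replacement and recompute the mean metric
    each time; the spread of those means estimates the metric's std."""
    rng = np.random.default_rng(seed)
    per_agent_nll = np.asarray(per_agent_nll, dtype=np.float64)
    n = len(per_agent_nll)
    scores = np.empty(n_boot)
    for i in range(n_boot):
        sample = per_agent_nll[rng.integers(0, n, size=n)]
        scores[i] = sample.mean()
    return scores.mean(), scores.std()

# Synthetic per-agent losses standing in for real validation NLL values
losses = np.random.default_rng(42).gamma(shape=2.0, scale=5.0, size=10_000)
mean_score, score_std = bootstrap_metric_std(losses)
```

A model whose bootstrap std is noticeably larger than its peers' (as reported for B7/B8 above) is a candidate for exclusion from the final ensemble.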
Google Landmark Recognition 2020
My engineering approach to the problem. #49
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Landmark Recognition 2020 <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is my first real competition. Definitely learnt quite a bit! Thanks to the organizers. After giving up on training my model, I leveraged the baseline model with a few additional measures. My approach did quite well in my test cases with 2019 competition data (~0.6) but unfortunately didn't do quite as well in this contest.</p> <ul> <li><p>Precomputed embeddings for the complete train data and stored them in a sqlite3 database. This substantially reduced evaluation time.</p></li> <li><p>Mixed in the precomputed embeddings with embeddings generated on a subset of the openimages dataset. I assigned a landmark ID of 999999 to this dataset and used it to filter out any images that matched it. </p></li> <li><p>After this step, the dataset is reduced by 10x, allowing a lot of time to do local matching etc. </p></li> </ul> <p>I did try using cv2 SIFT for local scoring but didn't see much benefit. </p>
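The sqlite3 embedding store with a sentinel distractor ID, as described above, might look like this minimal sketch. All table, column, and function names here are hypothetical (the write-up shows no code), and brute-force scanning stands in for whatever matching the author actually used.

```python
import sqlite3
import numpy as np

DISTRACTOR_ID = 999999  # sentinel landmark ID for openimages distractors

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE embeddings (image_id TEXT, landmark_id INTEGER, vec BLOB)")

def add_embedding(image_id, landmark_id, vec):
    # Store float32 embeddings as BLOBs so they survive round-trips cheaply
    conn.execute("INSERT INTO embeddings VALUES (?, ?, ?)",
                 (image_id, landmark_id, np.asarray(vec, np.float32).tobytes()))

def top_match(query_vec):
    """Return (landmark_id, cosine similarity) of the nearest stored embedding."""
    q = np.asarray(query_vec, np.float32)
    q = q / np.linalg.norm(q)
    best = (None, -1.0)
    for lid, blob in conn.execute("SELECT landmark_id, vec FROM embeddings"):
        v = np.frombuffer(blob, np.float32)
        sim = float(q @ (v / np.linalg.norm(v)))
        if sim > best[1]:
            best = (lid, sim)
    return best

rng = np.random.default_rng(0)
add_embedding("train_1", 17, rng.normal(size=8))
add_embedding("distractor_1", DISTRACTOR_ID, rng.normal(size=8))

lid, sim = top_match(rng.normal(size=8))
is_distractor = lid == DISTRACTOR_ID  # queries matching the sentinel get filtered out
```

Filtering any query whose nearest neighbor carries the sentinel ID is what shrinks the candidate set before the more expensive local matching step.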
Mechanisms of Action (MoA) Prediction
27th place - Trusting CV always works.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h4>Features Used</h4> <ol> <li><br><ul> <li>Important features selected from xgboost and RFECV without any preprocessing.</li></ul></li> <li><br><ul> <li>Variance Threshold on original features. (before scaling and pca)</li> <li>Quantile transformation</li> <li>PCA with 80 gene and 10 cell features</li></ul></li> <li><br><ul> <li>Quantile Transformation</li> <li>PCA with approx 600 gene and approx 60 cell</li> <li>Variance Threshold</li></ul></li> <li><br><ul> <li>Quantile Transformation</li> <li>PCA with approx 600 gene and approx 60 cell</li> <li>Variance Threshold</li> <li>Dropped highly correlated features after all the feature engineering</li></ul></li> <li><br><ul> <li>4000 ML-SMOTE features</li> <li>Quantile Transformation</li> <li>PCA with approx 500 gene and approx 50 cell</li> <li>Variance Threshold</li></ul></li> </ol> <h4>Little Data Augmentation</h4> <ul> <li>Used albumentations to add random dropout in the features with a 0.5 dropout rate and trained<br> models separately for this data.</li> </ul> <h4>Models Used (10-fold MultilabelStratifiedKFold with no seeds)</h4> <ol> <li><p>Two three-layer neural networks:<br> a) (BatchNorm -&gt; Dropout -&gt; WeightNorm(Dense) -&gt; ReLU) * 3<br> b) (BatchNorm -&gt; Dropout -&gt; WeightNorm(Dense) -&gt; LeakyReLU) * 3</p></li> <li><p>Two-headed neural network with skip connections trained with non-scored for transfer learning:<br> Head1 --&gt; out1 = (BatchNorm -&gt; Dropout -&gt; Dense -&gt; LeakyReLU) * 4<br> Head2 --&gt; out2 = out1 + (BatchNorm -&gt; Dropout -&gt; Dense -&gt; LeakyReLU) * 4<br> avg --&gt; Average([out1,
out2])<br> final head --&gt; avg + (BatchNorm -&gt; Dropout -&gt; Reshape -&gt; LSTM) * 1<br> (BatchNorm -&gt; Dense -&gt; LeakyReLU) * 3</p></li> <li><p>Two-headed neural network with skip connections trained with non-scored for transfer learning:<br> Head1 --&gt; out1 = (BatchNorm -&gt; Dropout -&gt; Dense -&gt; ReLU) * 4<br> Head2 --&gt; out2 = out1 + (BatchNorm -&gt; Dropout -&gt; Dense -&gt; ReLU) * 4<br> avg --&gt; Average([out1, out2])<br> final head --&gt; avg + (BatchNorm -&gt; Dropout -&gt; Dense) * 1<br> (BatchNorm -&gt; Dense -&gt; ReLU) * 3</p></li> <li><p>TabNet.</p></li> </ol> <p><strong>Schedulers:</strong> ReduceLROnPlateau, OneCycleLR<br> <strong>Optimizers:</strong> Adam, AdamW<br> <strong>Activations:</strong> ReLU, LeakyReLU</p> <p>(All the models are trained with different combinations of the above-mentioned features and parameters, making a total of 16 models.) Submitting the average of all these models achieved:<br> <strong>CV:0.01501 <br> Public: 0.01824 <br> Private: 0.01611</strong></p> <h4>Stacking</h4> <p>Stacked predictions from the ensemble of all the models by taking a simple mean of all the predictions.<br> Training models with original+stacked features achieved: <br> <strong>CV:0.01495 <br> Public: 0.01821 <br> Private: 0.01610</strong></p> <h4>Ensemble</h4> <p>Simple average of all models trained with different features (16 models) + models trained with<br> stacked features (10 models), making an ensemble of 26 models in total, achieved <br> <strong>CV:0.01475 <br> Public: 0.01819 <br> Private: 0.01608</strong></p> <h4>Best submission (0.01604)</h4> <p>Submission with a little blending of weights<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2169626%2F16d013510cef396fca059df98ea9c3b0%2Fbest.png?generation=1606808910913035&amp;alt=media" alt="best"></p> <p><strong>Thanks to kaggle and all the contributors for sharing their amazing work with notebooks and discussions.
I learned so many new things, from feature engineering techniques to building models for tabular data. Really a great competition.</strong></p>
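The recurring preprocessing recipe in the feature sets above (quantile transformation, PCA components appended, then a variance threshold) can be sketched as follows. This is an illustrative pipeline assuming scikit-learn; the shapes and parameter values are stand-ins, not the author's exact settings.

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer
from sklearn.decomposition import PCA
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # stand-in for the g-/c- feature matrix

# Quantile transformation to a normal output distribution
X_q = QuantileTransformer(n_quantiles=100, output_distribution="normal",
                          random_state=0).fit_transform(X)

# PCA components appended to the transformed features
pca_feats = PCA(n_components=10, random_state=0).fit_transform(X_q)
X_all = np.hstack([X_q, pca_feats])

# Variance threshold drops near-constant columns from the combined set
X_sel = VarianceThreshold(threshold=0.5).fit_transform(X_all)
```

In practice the transformers would be fit on train folds only and applied to validation/test, which this compact sketch glosses over.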
Mechanisms of Action (MoA) Prediction
20th place (If we had picked it)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to thank my teammates <a href="https://www.kaggle.com/shalules" target="_blank">@shalules</a>, <a href="https://www.kaggle.com/phoenix9032" target="_blank">@phoenix9032</a>, <a href="https://www.kaggle.com/rohitsingh9990" target="_blank">@rohitsingh9990</a> and everyone who shared their notebooks, specifically <a href="https://www.kaggle.com/demetrypascal" target="_blank">@demetrypascal</a> <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a>, and contributed to the discussions. I learned a lot this competition. </p> <p>This solution didn't have the best public score, so unfortunately, we didn't pick it. It was created over a month ago. Maybe not having the drug ids helped us. It's very straightforward and was really just intended to be a starting point for other blends. 
</p> <p>It's based on these 3 kernels: </p> <ol> <li><a href="https://www.kaggle.com/demetrypascal/2heads-deep-resnets-pipeline-smoothing" target="_blank">https://www.kaggle.com/demetrypascal/2heads-deep-resnets-pipeline-smoothing</a> (via <a href="https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model" target="_blank">https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model</a>) <a href="https://www.kaggle.com/demetrypascal" target="_blank">@demetrypascal</a> </li> <li><a href="https://www.kaggle.com/gogo827jz/rapids-svm-on-gpu-6000-models-in-1-hour" target="_blank">https://www.kaggle.com/gogo827jz/rapids-svm-on-gpu-6000-models-in-1-hour</a> <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> </li> <li><a href="https://www.kaggle.com/optimo/tabnetregressor-2-0-train-infer" target="_blank">https://www.kaggle.com/optimo/tabnetregressor-2-0-train-infer</a> <a href="https://www.kaggle.com/optimo" target="_blank">@optimo</a></li> </ol> <p>I changed the SVC to ridge regression and used the ridge regression output as input to TabNet and averaged it with the 2-headed ResNet. <br> <a href="https://www.kaggle.com/ericfreeman/moa-multi-input-resnet-model" target="_blank">https://www.kaggle.com/ericfreeman/moa-multi-input-resnet-model</a></p>
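The idea of feeding ridge-regression outputs to TabNet as extra meta-features can be sketched with closed-form multi-output ridge. Shapes, the regularization value `lam`, and the helper name are illustrative; this is not the author's code.

```python
import numpy as np

def ridge_fit_predict(X_tr, Y_tr, X_te, lam=1.0):
    """Closed-form multi-output ridge: W = (X^T X + lam*I)^-1 X^T Y."""
    d = X_tr.shape[1]
    W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ Y_tr)
    return X_te @ W

rng = np.random.default_rng(0)
X_tr, X_te = rng.normal(size=(300, 20)), rng.normal(size=(50, 20))
Y_tr = (rng.random(size=(300, 5)) < 0.1).astype(float)  # sparse multi-label targets

ridge_meta = ridge_fit_predict(X_tr, Y_tr, X_te)
# Ridge outputs appended to the raw features before feeding the next model
X_te_aug = np.hstack([X_te, ridge_meta])
```

To avoid leakage, the ridge meta-features for the training rows would normally come from out-of-fold predictions rather than a fit on the full training set.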
Mechanisms of Action (MoA) Prediction
Private 14th / Public 24th place solution!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I want to say thank you to the host, Laboratory for Innovation Science at Harvard. <br> Also thank you to kaggle admins and all the participants! <br> Thanks to everybody here in this competition, I learned a lot. </p> <p>Personally, I want to express my biggest gratitude to <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> and <a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a> !<br> I learned a lot, especially from both of you. Kaggle is really a great place to learn! </p> <p>This is the first competition I was fully committed to.<br> It will surely be one of the most memorable competitions of my entire life! <br> Even though the outcome is partly luck, I am really excited to have a medal.<br> The reason I'm saying 'luck' is that I was not sure whether what I did was right for the private LB. </p> <p>I will explain my solution briefly here. </p> <p>If you have time, please take a look at <a href="https://www.kaggle.com/kokitanisaka/moa-ensemble" target="_blank">my final submission notebook</a>.<br> I explained everything I did in the notebook. <br> As I did everything in one notebook, you can see everything I did. <br> I don't have a local environment, after all. </p> <h1>Overview</h1> <p>This is the flow. <br> The magic is in blending.
I blended all 6 of these models, including a stacking model.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4974940%2F1f01596426bda0efd1c21c79f94a6d6d%2Fmoasolution.png?generation=1606792790231187&amp;alt=media" alt=""></p> <h1>Whole solution</h1> <p>Everything is in <a href="https://www.kaggle.com/kokitanisaka/moa-ensemble" target="_blank">the notebook</a>. <br> If you have time, please give me feedback! <br> I still don't know whether I was doing it right or not.</p> <p>Thank you again to all of you!!!<br> I'm looking forward to seeing you guys in other competitions, other platforms, or even in person! 😊</p>
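A weighted blend of several models' probability predictions under the competition's column-wise log-loss metric, as described above, can be sketched like this. The data, weights, and helper names are synthetic stand-ins for illustration only.

```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy averaged over samples, with clipping for stability."""
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.05).astype(float)  # sparse binary target column
# Stand-ins for the out-of-fold predictions of several models
preds = [np.clip(y * 0.7 + rng.random(1000) * 0.3, 0, 1) for _ in range(3)]

weights = np.array([0.5, 0.3, 0.2])  # illustrative blend weights summing to 1
blend = sum(w * p for w, p in zip(weights, preds))
blend_loss = log_loss(y, blend)
single_losses = [log_loss(y, p) for p in preds]
```

Weights would normally be tuned on out-of-fold predictions so the blend's CV loss, not just any single model's, is minimized.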
Mechanisms of Action (MoA) Prediction
Public 46th / Private 34th Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hello, Kaggler! <br> This is our Public 46th / Private 34th Solution. </p> <p>Thanks to my wonderful teammates, <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> <a href="https://www.kaggle.com/takanobu0210" target="_blank">@takanobu0210</a> <a href="https://www.kaggle.com/masatakashiwagi" target="_blank">@masatakashiwagi</a> <a href="https://www.kaggle.com/taromasuda" target="_blank">@taromasuda</a>.<br> I enjoyed discussions with my teammates every weekend!</p> <h3>Final Submission Notebook</h3> <p><a href="https://www.kaggle.com/ttahara/34th-stacking-5-models-by-mlp-1d-cnn-wo" target="_blank">https://www.kaggle.com/ttahara/34th-stacking-5-models-by-mlp-1d-cnn-wo</a></p> <h3>Our Pipeline</h3> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2502130%2Fc5637c0994ec7a680b2f90c0060658f7%2F2020-12-02%2010.42.44.png?generation=1606873393103931&amp;alt=media" alt=""></p> <h3>What worked</h3> <ul> <li>Add stat features</li> <li>Add PCA features</li> <li>Rankgauss</li> <li>Label Smoothing</li> <li>Transfer Learning by nonscored for NN, ResNet</li> <li>Shallow model<ul> <li>Short epochs, training until just before the loss becomes NaN for NN</li> <li>n_steps=1, n_shared=1 by TabNet</li></ul></li> <li>Thresholding NN<br> <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a> customized NN, which preprocesses inputs by element-wise linear functions followed by tanh and feeds the preprocessed values into an ordinary MLP. He calls this model <strong>Thr</strong>esholding NN. </li> <li>Ensemble.
In particular, the ensemble of TabNet and NN was effective.</li> <li>2 Stage Stacking by MLP, 1D-CNN, Weight Optimization</li> </ul> <h3>What did NOT work</h3> <ul> <li>SelfSupervised TabNet</li> <li>Transfer Learning by nonscored for TabNet</li> <li>Feature Selection by TabNet's Feature Importance</li> <li>Pseudo Labeling</li> <li>Class Balanced loss, focal loss</li> <li>PostPredict for large logloss target</li> <li>Predict drug id, and label power set.<br> MoA is determined by drug id, so we tried to predict drug id directly (and converted the targets to a label power set), but it did not work.</li> <li>Using Non-scored as feature<br> We predicted non-scored MoA and added the non-scored OOF and predictions to the train and test data, but it did not work.</li> </ul> <h3>Not enough time</h3> <ul> <li>Target Encoding to g-,c- bin's feature</li> <li>XGBoost, CatBoost, CNN model for single model (Stage 1)</li> <li>GCN model for stacking model (Stage 2)</li> <li>Netflix Blending</li> <li>PostPredict by LGBM<br> We noticed that there are columns that NN can't predict, but LGBM can (e.g. cyclooxygenase_inhibitor). Therefore, we came up with the idea of repredicting only the columns that LGBM is good at. But there was not enough time.</li> </ul>
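Label smoothing, listed under "What worked" above, can be illustrated for binary multi-label targets as follows; the smoothing strength `alpha` is an illustrative value, not the team's exact setting.

```python
import numpy as np

def smooth_labels(y, alpha=0.001):
    """Standard label smoothing for binary targets:
    y_smooth = y * (1 - alpha) + alpha / 2,
    which pulls hard 0/1 labels slightly toward 0.5."""
    return y * (1 - alpha) + alpha / 2

y = np.array([0.0, 1.0, 1.0, 0.0])
y_s = smooth_labels(y, alpha=0.001)
# → positives become 0.9995, negatives become 0.0005
```

Because log loss punishes confident mistakes very hard, training against the softened targets keeps predictions away from the extremes and typically lowers the metric on sparse targets like MoA.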
Mechanisms of Action (MoA) Prediction
1st Place Winning Solution - Hungry for Gold
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Overview</h1> <p>Thanks to the Kaggle team and <a href="https://www.kaggle.com/srandazzo21" target="_blank">@srandazzo21</a> <a href="https://www.kaggle.com/mrbhbsof" target="_blank">@mrbhbsof</a> from <em>Laboratory for Innovation Science at Harvard</em> who hosted this challenging and interesting MoA competition!</p> <p>Representing the <strong>Hungry for gold🥇🥇</strong> team (with <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> <a href="https://www.kaggle.com/kibuna" target="_blank">@kibuna</a> <a href="https://www.kaggle.com/poteman" target="_blank">@poteman</a>), in this post I’m going to explain our winning solution in detail.</p> <p>Our winning blend consists of 7 single models:</p> <ul> <li>3-stage NN stacking by non-scored and scored meta-features</li> <li>2-stage NN+TabNet stacking by non-scored meta-features</li> <li>SimpleNN with old CV</li> <li>SimpleNN with new CV</li> <li>2-heads ResNet</li> <li>DeepInsight EfficientNet B3 NS</li> <li>DeepInsight ResNeSt</li> </ul> <p>The following overview diagram depicts our winning blend models.</p> <p><strong>Figure 1. Winning Weighted-Average Blend.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F0a0b63df266de28b20d55046de9776da%2Ffigure_1.png?generation=1607159993332157&amp;alt=media" alt=""></p> <p>Our two final submissions are based on weighted average. The winning blend is the best LB (Table 1), and the best CV blend (Table 3) can also achieve <strong>5th place</strong> in the private LB. 
This shows that both final submissions are performant and robust.</p> <p><strong>Table 1. Winning Blend with 7 Models (Private LB: 0.01599).</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fd4ac6db6183219b14317f1b074ce9752%2Ftable_1.png?generation=1608037787146399&amp;alt=media"></p> <p>For our winning blend, the selection of blend weights was based on two factors: the LB scores and correlations between the single models. For better understanding, the mean correlation of each model’s predictions without control groups was calculated with respect to other single models (shown in Table 2). The selection was manually done to maximize the LB score. Models with lower mean correlations were given higher weights. Also, some of the weights were decided by the previous blend submission scores. More details can be found in the Model Diversity section.</p> <p><strong>Table 2. Mean Correlation Between the Submissions of Best Single Models.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F66d2f23777fc03afbffc785902790cbd%2Ftable_2.png?generation=1607160532323286&amp;alt=media"></p> <p>The selection of single models in our Best CV blend is fully based on the OOF (Out-of-folds) predictions. We used the <a href="https://optuna.readthedocs.io/en/stable/reference/generated/optuna.samplers.TPESampler.html#optuna.samplers.TPESampler" target="_blank">TPE</a> (Tree-structured Parzen Estimator) sampler in Optuna and the <a href="https://docs.scipy.org/doc/scipy/reference/optimize.minimize-slsqp.html" target="_blank">SLSQP</a> (Sequential Least Squares Programming) method in Scipy for searching CV-optimized weights. The resulting weights from Optuna (under 3000 or 5000 trials) and SLSQP are nearly the same.</p> <p>The log loss of the best CV is <strong>0.15107 (private LB: 0.01601)</strong> from 5 models.
Interestingly, the search results eliminated the 2-stage NN+TabNet and Simple NN models, which had contributed somewhat to the winning blend.</p> <p><strong>Table 3. Best CV Blend with 5 Models (CV: 0.015107, Private LB: 0.01601).</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F79f1ac6d7492693aedd1dd287998a2b4%2Ftable_3.png?generation=1607160550745973&amp;alt=media"></p> <p>The addition of <a href="https://www.kaggle.com/c/lish-moa/discussion/195378" target="_blank">DeepInsight CNNs</a> played a significant role in our final blends because of their high diversity relative to other shallow NN models. Out of curiosity, we compared the log loss scores of our winning blend with/without those CNN models (shown in Table 4).</p> <p>We got an almost <strong>0.00008 improvement</strong> in the private LB score when including DeepInsight CNN models!</p> <p><strong>Table 4. Winning Blends with/without DeepInsight CNNs.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F14643afb45705f0ca73561f28acd6a4b%2Ftable_4.png?generation=1607160593397588&amp;alt=media"></p> <h1>Cross-Validation Strategies</h1> <p>Most of our models are based on the <a href="https://github.com/trent-b/iterative-stratification" target="_blank">MultilabelStratifiedKFold</a> (old CV), with the exception of a Simple NN model that used the <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">new double stratified CV</a> shared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> using drug_id information. We used different seeds for the CV splits in our models. The choice of CV strategy was based on how well the CV log loss aligned with the LB log loss. The <em>K</em> of CV is either 5 or 10.
We also trained the models using multiple seeds to reduce the variance.</p> <p>Notably, we noticed that it is difficult to combine the OOF files from both old and new CVs for searching optimized CV weights as the log loss scores of new CV models are much higher. We ended up selecting only old CV models for our Best CV blend, which scored <strong>0.15107</strong> on old CV and <strong>0.01601</strong> on private LB.</p> <h1>Model Details</h1> <p>In our teamwork, Nischay and Kibuna contributed to feature engineering and multi-stage stacking of the 3-stage NN, 2-stage NN+TabNet and Simple NN models while I was focusing on DeepInsight CNNs and the replication of a 2-heads ResNet model.</p> <p>To fight the risk of overfitting under this small and highly imbalanced multi-label dataset, we imposed label smoothing and weight decay as the regularization methods in the model training process. Label smoothing worked very well and prevented our models from being too confident about their predictions, which greatly reduced the chance of overfitting.</p> <p>To make our models more visually understandable, we have provided a model architecture diagram for each of our single models to explain the inner training process and its NN topology. In this section we are going to introduce each model in detail.</p> <h2>3-Stage NN</h2> <p>This is our best single model (CV: <strong>0.01561</strong>, Public LB: <strong>0.01823</strong>, Private LB: <strong>0.01618</strong>) and a great addition to our final blend on both CV and LB scores. Nischay and Kibuna did a great job producing good results based on the idea of multi-stage stacking with model predictions as meta-features and different engineered features. Control group rows are removed from the training set.</p> <p><strong>Figure 2.
3-stage NN Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Ffb1fcba2f3331b3d3ac90cda457c23bd%2Ffigure_2.png?generation=1607164173264589&amp;alt=media" alt=""></p> <p><strong>Figure 3. 3-FC Base NN Block Topology.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fe46b018f07394a19a29398de96eea718%2Ffigure_3_7.png?generation=1607164188073162&amp;alt=media" alt=""></p> <p>Figure 3 depicts the detailed topology of the model’s base NN block, which is similar to the public notebooks. Some changes in the training setup were made to boost performance. In the multi-stage training, the same architecture was used apart from tuned dropout values.</p> <p><strong><em>STAGE 1</em></strong></p> <p>Firstly, <a href="https://umap-learn.readthedocs.io/en/latest/" target="_blank">UMAP</a> and <a href="https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FactorAnalysis.html" target="_blank">Factor Analysis</a> were used to generate extra features from the genes and cells features. The <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html" target="_blank">Quantile Transformer</a> was applied to all features except for one-hot encoding ones. In this stage the NN model with a size of 2048 hidden neurons is trained for 15 epochs for non-scored targets excluding those having zero values (332 targets included). The non-scored predictions were reused as meta-features for the next stage.</p> <p><strong><em>STAGE 2</em></strong></p> <p>In the 2nd stage, we applied Quantile Transformer again on the non-scored meta-features to combine with the original features in the previous stage, and trained another NN for 25 epochs with a size of 2048 hidden neurons for scored targets.
Similarly, the scored predictions were reused as meta-features for the next stage.</p> <p><strong><em>STAGE 3</em></strong></p> <p>In the last stage, we also applied Quantile Transformer on the scored meta-features, and retrained an NN for 25 epochs with a size of 1024 hidden neurons based on only those meta-features, while targets were clipped between 0.0005 and 0.9995 in the training data after label smoothing.</p> <p>In each stage a similar setup was used: an Adam optimizer with a learning rate of 5e-3, a batch size of 256, a weight decay of 1e-5, and a OneCycleLR scheduler with a maximum learning rate of 1e-2.</p> <p>The final model produced a CV log loss of <strong>0.01561</strong>, which is a bit higher than the 2nd stage model due to the clipping of model predictions, but it scored really well on the LB.</p> <h2>2-Stage NN+TabNet</h2> <p>Similar to the 3-stage NN, this model also used non-scored and scored targets in multi-stage training. TabNet is only used for the 2nd stage instead of the first one, because it was not producing good results on non-scored targets. Control group rows are removed from the training set.</p> <p>Figures 4 and 5 show the model architecture and the detail of the Base NN used in the first stage.</p> <p><strong>Figure 4. 2-stage NN+TabNet Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F7cec080edde3393d3c86a779d227b211%2Ffigure_4.png?generation=1607164563571085&amp;alt=media" alt=""></p> <p><strong>Figure 5.
2-FC Base NN Block Topology.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F6f444e4b91653b33fc1274e8adf11587%2Ffigure_5.png?generation=1607164583842981&amp;alt=media"></p> <p><strong><em>Stage 1</em></strong></p> <p>For the feature engineering part, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html?highlight=pca#sklearn.decomposition.PCA" target="_blank">PCA</a> and <a href="https://umap-learn.readthedocs.io/en/latest/" target="_blank">UMAP</a> were used along with <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html" target="_blank">QuantileTransformer</a> on the genes and cells features, with a variance threshold of 0.5 to keep only important features. An NN model with a size of 2048 hidden neurons is trained for 15 epochs for non-scored targets. Those predictions are then used as meta-features for the next stage.</p> <p>The following training setup was used: an Adam optimizer with a learning rate of 5e-4, a batch size of 256, a weight decay of 1e-5, and a OneCycleLR scheduler with a maximum learning rate of 1e-2.</p> <p><strong><em>Stage 2</em></strong></p> <p>In the 2nd stage, we applied Quantile Transformer again on the non-scored meta-features to combine with the original features in the previous stage, and trained a TabNet with label smoothing (0.0001) for scored targets.
Factor Analysis, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html" target="_blank">K-Means Clustering</a> along with some statistical features like mean, skew, kurtosis, and std of genes and cells features were also used.</p> <p><a href="https://github.com/dreamquark-ai/tabnet" target="_blank">PyTorch TabNet</a> regressor training setup: a width of 32 for the decision prediction layer, and a width of 32 for the attention embedding for each mask, 1 step in the architecture, a gamma value of 0.9, an Adam optimizer with a learning rate of 2e-2 and a weight decay of 1e-5, a sparsity loss coefficient of 0, and entmax as the masking function. It was trained with a batch size of 1024 and a virtual batch size of 128 for 200 epochs before being early-stopped with a patience of 50 epochs. The final TabNet model produced a CV log loss of <strong>0.01615</strong>.</p> <h2>Simple NN</h2> <p>The model topology of the Simple NN is the same as the 3-FC Base NN used in the 3-stage NN model. It is trained with only a single stage on scored targets. Figures 6 and 7 show the model architecture and the detail of the Base NN. Control group rows are removed from the training set.</p> <p><strong>Figure 6. Simple NN Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F8276e3b36e68f6ad4557b22ea5749c46%2Ffigure_6.png?generation=1607175455595627&amp;alt=media" alt=""></p> <p><strong>Figure 7.
3-FC Base NN Block Topology.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fe46b018f07394a19a29398de96eea718%2Ffigure_3_7.png?generation=1607164188073162&amp;alt=media" alt=""></p> <p>For the feature engineering part, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FactorAnalysis.html" target="_blank">Factor Analysis</a> is used along with <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html" target="_blank">QuantileTransformer</a> on the genes and cells features except for one-hot encoding ones. Some statistical features like mean, skew, kurtosis, and std of genes and cells features were also used. An NN model with a size of 2048 hidden neurons is trained for scored targets.</p> <p>The following training setup was used: an Adam optimizer with a learning rate of 5e-4, a batch size of 256, a weight decay of 1e-5, an early stopping patience of 10 epochs, and a OneCycleLR scheduler with a maximum learning rate of 1e-2.</p> <p>We trained two Simple NN models in total, one with the old CV including extra features from K-Means Clustering, and the other one with the new CV to add more diversity.</p> <h2>2-heads ResNet</h2> <p>This model replicated the public notebook "<a href="https://www.kaggle.com/demetrypascal/fork-of-2heads-looper-super-puper-plate" target="_blank">Fork of 2heads looper super puper plate</a>" shared by <a href="https://www.kaggle.com/demetrypascal" target="_blank">@demetrypascal</a> with some minor updates for the freezing/unfreezing training loops. The original notebook was written in R (public LB: <strong>0.01833</strong>, private LB: <strong>0.01624</strong>), and we made a Tensorflow/Keras Python version for it (public LB: <strong>0.01836</strong>, private LB: <strong>0.01624</strong>).
Control group rows are removed from the training set.</p> <p>Figures 8 and 9 show the model architecture and the details of the ResNet-like Residual NN.</p> <p><strong>Figure 8. 2-heads ResNet Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fcfe379bc20d3e1c10f711d3a75c46ed5%2Ffigure_8.png?generation=1607165233701360&amp;alt=media" alt=""></p> <p>The architecture is based on a ResNet-like Residual NN with two inputs: one takes the mean and PCA components of genes and cells features, and the other one includes the <a href="https://www.kaggle.com/demetrypascal/t-test-pca-rfe-logistic-regression#Select-only-important" target="_blank">447 selected important features</a> by <a href="https://en.wikipedia.org/wiki/Student%27s_t-test" target="_blank">Student's t-test</a> based on the 25% quantile of the p-values for the test of sample means between training rows with and without MoA. All features are normalized by Standard Scaler.</p> <p><strong>Figure 9. ResNet-like Residual NN Topology.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fbb20162671344e825fe8369b6810509b%2Ffigure_9.png?generation=1607165255497473&amp;alt=media" alt=""></p> <p>Using a ResNet-like topology brings the benefit of skip connections in the FC layers, which allow information to flow from one layer to later layers. In this way, the model gains a better capability of learning with more hidden neurons and layers without the early degradation problem.</p> <p>The training process can be divided into two phases.</p> <p>Firstly, the model is trained for non-scored targets by the Adam optimizer for 50 epochs minimizing the binary cross entropy loss, with a learning rate of 1e-3, a batch size of 128, and early stopping with a patience of 10 epochs and a minimum delta of 1e-5. 
A ReduceLROnPlateau learning rate scheduler is used with a factor of 0.5 and a patience of 4 epochs. The alpha value of label smoothing is set to 0.0005.</p> <p>In the second phase, the learnt model weights from non-scored targets are reused and transferred to scored targets using loops of freezing and unfreezing. The weights except for the last layers (the FC2 module in Figure 9) are frozen for training several epochs, with the minimum delta of early stopping set to 1e-6 and a smaller learning rate of 1e-3/3. Then the whole model is unfrozen and we follow a similar loop to train the same model, with an even smaller learning rate of 1e-3/5, until it is early-stopped.</p> <p>There is also another V2 version of the model that included one-hot encoding features and used a smaller batch size of 64. It got a worse public LB but scored better on CV and private LB, which contributed as part of our best CV blend.</p> <h2>DeepInsight CNNs</h2> <p>Compared with most of the shallow NN models shared in public notebooks and our other single models that added lots of engineered features, our DeepInsight CNNs only used raw features from genes and cells!</p> <p>The idea was to fully utilize the power of feature extraction and transfer learning from pretrained convolutional neural networks and generate highly unique and diverse models in comparison with other NN models.</p> <p>As introduced in <a href="https://www.kaggle.com/markpeng/deepinsight-transform" target="_blank">my post about DeepInsight</a>, instead of doing feature extraction and selection for collected samples (<em>N</em> samples x <em>d</em> features), we are <strong>arranging similar or correlated features into the neighboring regions of a 2-dimensional feature map (<em>d</em> features x <em>N</em> samples)</strong> to ease the learning of their complex relationships and interactions. 
</p> <p>Since we are transforming features instead of samples, the layout of the learnt feature map is the same for all samples. However, the distribution of feature values (heatmap colors) in the feature map differs per class. As we know, CNNs can learn hierarchical patterns and exploit spatially local correlation by enforcing a sparse local connectivity pattern between neurons of adjacent layers. Even if the feature map is fixed, the variance of feature values and the interactions of neighboring weak features still provide learnable signal for the kernel weights.</p> <p>In this competition, we applied DeepInsight with two popular pretrained CNN models: <a href="https://arxiv.org/abs/1905.11946" target="_blank">EfficientNet</a> and <a href="https://arxiv.org/abs/2004.08955" target="_blank">ResNeSt</a> (Split-Attention Networks). It turns out that both worked very well on the MoA data!</p> <p>Figures 10 and 11 show the details of the DeepInsight CNNs.</p> <p><strong>Figure 10. DeepInsight EfficientNet B3 NS Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fa64ec9d3048c8abee264adb81f3f9848%2Ffigure_10.png?generation=1607165765201040&amp;alt=media" alt=""></p> <p><strong>Figure 11. DeepInsight ResNeSt Model.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2F8e3b1c9ddbd1818405f29b76585e9025%2Ffigure_11.png?generation=1607165780342951&amp;alt=media" alt=""></p> <p><strong><em>Training Process</em></strong></p> <p>The overall training process is the same for both CNN models. 
First of all, raw genes, cells and one-hot encoding features are normalized by the <a href="https://static-content.springer.com/esm/art%3A10.1038%2Fs41598-019-47765-6/MediaObjects/41598_2019_47765_MOESM1_ESM.pdf" target="_blank">Norm-2 Normalization (Log Scaler)</a> method mentioned in the original <a href="https://www.nature.com/articles/s41598-019-47765-6" target="_blank">DeepInsight paper</a>, which works better for CNNs than Standard Scaler, Robust Scaler and Min/Max Scaler in our experiments.</p> <p>Secondly, t-SNE is applied to transform the raw features into a non-linear 2D embedding space. Then the convex hull algorithm is used to find the smallest rectangle containing all features, and a rotation is performed to align the frame into a horizontal or vertical form. After this step, we have created a <strong>DeepInsight Feature Map</strong> that contains the extracted neighboring relationships between the raw features from genes and cells. </p> <p>Next, to reduce the chance of overfitting on this small and highly imbalanced dataset, we borrowed the idea of <a href="https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629" target="_blank">Swap Noise</a> for data augmentation, where each training sample has a 10% chance of swapping 15% of its features with another random sample in the training set. Our experimental results showed that this trick greatly reduced the chance of overfitting and boosted the prediction performance of the CNNs.</p> <p>Fourthly, the augmented raw feature values are mapped into the pixel coordinate locations of the feature map image. 
Note that the resolution of the feature map image affects the ratio of feature overlaps (features mapped to the same location are averaged), which is a trade-off between the level of lossy compression and computing resource requirements (e.g., host/GPU memory, storage).</p> <p>In the original paper, overlapped feature values are aggregated by the mean, providing only a single gray-scale channel duplicated into 3 channels for pretrained CNNs. Therefore, in the final step I had the idea of also extracting the minimum and maximum values as the other two channels to keep more statistical information about the overlapped values. This trick significantly boosted the prediction performance on both CV (<strong>-0.00013</strong>) and public LB (<strong>-0.00011</strong>) in our experiments.</p> <p>We chose to use the same value for resolution and image size, which is 300x300 for EfficientNet B3 NoisyStudent and 224x224 for the ablated ResNeSt. The last FC layer of each CNN is replaced with a new FC layer with a hidden size of 512 and the <a href="https://pytorch.org/docs/stable/generated/torch.nn.ELU.html" target="_blank">ELU</a> activation.</p> <p>For ResNeSt, we chose the ablated version (<strong><em>resnest50_fast_2s2x40d</em></strong>), which provides nearly the best prediction performance of the ResNeSt variants with a relatively smaller model size and faster inference speed. We also trained another V2 model with a different seed and a dropout rate of 0.2 on the final FC layer.</p> <p>Both CNNs use the same training process, implemented with the nice <a href="https://pytorch-lightning.readthedocs.io/en/latest/" target="_blank">PyTorch Lightning</a> framework. 
We trained the CNNs under 16-bit precision for about 35 epochs in 10 folds with the <a href="https://github.com/jettify/pytorch-optimizer#radam" target="_blank">RAdam</a> optimizer, with a learning rate of 0.000352 obtained from the <a href="https://pytorch-lightning.readthedocs.io/en/latest/lr_finder.html" target="_blank">Learning Rate Finder</a>, a batch size of 48 for EfficientNet B3 NS and 128 for the ablated ResNeSt, a patience of 12 epochs for early stopping, a Cosine Annealing LR scheduler with T_max set to 5 epochs, and a label smoothing of 0.001 for the binary cross entropy loss.</p> <p>Each of them took about 12-25 hours to train for 10 folds. Therefore, they were all trained on my local machine (with two 2080-Ti GPUs) instead of a Kaggle GPU notebook, which has a run time limit of 9 hours.</p> <p>In contrast to other models that removed control group rows from the MoA training set before training, I had a different thought about it. In a <a href="https://www.kaggle.com/c/lish-moa/discussion/180304" target="_blank">discussion thread</a> in the competition forum, the host mentioned that</p> <blockquote> <p>… However, in real applications, the null controls are one way to assess the efficacy of a drug and can be useful in modeling the other MoAs.</p> </blockquote> <p>This hint led me to think that control rows might provide useful information for the CNNs as they model the relationships between gene expression and cell viability features. Therefore, our CNNs are all trained with control group rows. 
Removing those rows produced worse scores on both CV and LB in our experiments.</p> <p>Things that I experimented with but did not work:</p> <ul> <li>EfficientNet B0, B5, B7 NS (smaller models are weaker and bigger models overfitted more quickly on this small dataset)</li> <li>PCA or Kernel-PCA with Norm-2 Normalization, Quantile Transformer or Standard Scaler</li> <li>Higher dropouts in the last FC layers</li> <li>ImageNet normalized RGB channels</li> <li>Cosine annealing with warm restarts</li> <li>Other activation functions like ReLU, SELU, GELU, etc. in the last FC layers</li> </ul> <h1>Model Diversity</h1> <p>To understand more about the diversity of our models, here are the mean correlation heatmaps of our best single models. The correlation coefficients are calculated by Pearson’s method for each pair of models. We took the mean of the target-wise correlation coefficients as the final value for each pair.</p> <p>For our best LB model submissions, even though they all scored very well on the LB, their correlations are impressively low! Especially for the DeepInsight CNNs, the correlations are only around <strong>0.63-0.73</strong> with the other NN models. This is strong proof of high diversity and a great source of ensemble boost.</p> <p><strong>Figure 12. Mean Correlation Heatmap of Best LB Models' Submission Predictions.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Ff21625b01ba2d040d4624a44c98511d1%2Ffigure_12_best_LB_correlation.png?generation=1607166479684837&amp;alt=media" alt=""></p> <p>The OOF (out-of-folds) predictions of our Best CV models also showed a similar phenomenon, and the model correlations are even lower, with coefficients of only around <strong>0.52-0.70</strong>!</p> <p><strong>Figure 13. 
Mean Correlation Heatmap of Best CV Models' OOF Predictions.</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F201774%2Fdbde5454ab2fa18bafc5895ff591ea79%2Ffigure_13_best_cv_correlation.png?generation=1607166496491195&amp;alt=media"></p> <p>In my previous experience, diverse models with correlations around <strong>0.85-0.95</strong> are usually very good sources for ensembles. In this competition our best single models proved their strong prediction performance while having much lower correlations than usual!</p> <h1>Inference Efficiency</h1> <p>To ensure that we could safely squeeze all our best models into the final blend submissions within the 2-hour execution limit, we optimized the inference scripts by loading all scalers and fitted transformers from pickle files. Also, since the computing resources of a Kaggle GPU notebook are rather limited (2-core CPU, 13GB host RAM and 16GB GPU RAM), we ran several trials to find the sweet spot for each model, maximizing inference throughput via the number of workers and the batch size.</p> <p>Regarding the batch size, our best setup is 2048 for NN-based models and 512 for CNN-based models (since CNN models are bigger and memory hungry). 
Setting bigger numbers won’t help with the inference speed due to the saturation of CPU, GPU and I/O resources.</p> <p>The total run times of the final submissions are 1923 seconds (Best LB) and 1829 seconds (Best CV) on the public test set, and both finished within 2 hours on the private test set.</p> <h1>Source Code</h1> <p>All of our training notebooks, blend notebooks and inference scripts are open-sourced in the following Github repository:<br> <a href="https://github.com/guitarmind/kaggle_moa_winner_hungry_for_gold" target="_blank">https://github.com/guitarmind/kaggle_moa_winner_hungry_for_gold</a></p> <p>Note that the structure of this repository will be updated in the next few days to follow the <em>Winning Model Documentation Guidelines</em> by the Kaggle team.</p> <p>The source code of our final submission models for training and inference can be found under the <code>final</code> folder.</p> <h1>Conclusion</h1> <p>Without any prior knowledge about MoA or the medical domain, we did our best to apply the ML/DL techniques that we know to this small and highly imbalanced dataset. We are pretty amazed by our final standing, and it turns out that general AI methods can work in almost any domain.</p> <p>The combination of shallow NN models in different architectures, in-depth feature engineering, multi-stage stacking, label smoothing, transfer learning, and the addition of diverse DeepInsight CNNs were the key winning factors of our team’s final submissions.</p> <p>Thank you for reading through this long post; we hope that you gain something interesting and useful from it!</p>
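The min/mean/max three-channel trick described in the DeepInsight section can be sketched as follows. This is a minimal NumPy illustration, not the team's actual pipeline code: it assumes each feature's 2D pixel coordinate has already been obtained from the t-SNE embedding, and the function name is hypothetical.

```python
import numpy as np

def features_to_image(x, pixel_coords, size):
    """Map one sample's feature values onto a (3, size, size) image.

    Overlapping features (mapped to the same pixel) are aggregated
    three ways -- mean, min, max -- one per channel, instead of the
    single mean channel used in the original DeepInsight paper.
    """
    img = np.zeros((3, size, size), dtype=np.float32)
    counts = np.zeros((size, size), dtype=np.float32)
    mins = np.full((size, size), np.inf, dtype=np.float32)
    maxs = np.full((size, size), -np.inf, dtype=np.float32)
    for value, (r, c) in zip(x, pixel_coords):
        img[0, r, c] += value              # running sum for the mean channel
        counts[r, c] += 1
        mins[r, c] = min(mins[r, c], value)
        maxs[r, c] = max(maxs[r, c], value)
    mask = counts > 0
    img[0][mask] /= counts[mask]           # channel 0: mean of overlaps
    img[1][mask] = mins[mask]              # channel 1: min of overlaps
    img[2][mask] = maxs[mask]              # channel 2: max of overlaps
    return img

# Two features overlap at pixel (0, 0); a third maps to (1, 1).
image = features_to_image([1.0, 3.0, 5.0], [(0, 0), (0, 0), (1, 1)], size=2)
```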
Mechanisms of Action (MoA) Prediction
Layer Normalization - 36th place
<p>Congrats to all the winners.</p> <p>I just wanted to share one trick which had a significant boost on my model and may not have been shared by anyone yet.</p> <p>It is adding LayerNorm on the gene and cell features of the model (it literally means row-wise normalization on the gene/cell features). The model looks like this:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F427399%2Ff612b7c9c69584f6c52cdfe8bbe7ac59%2FCapture.png?generation=1606951353239586&amp;alt=media" alt=""></p> <p>It gives a performance boost on the MLP:</p> <ul> <li>Private: 0.01625 -&gt; 0.01616 (-0.00009) </li> <li>Public: 0.01834 -&gt; 0.01824 (-0.00010) </li> </ul> <p>I also tried adding row-wise normalized gene and cell features to my TabNet. It did not have the same boost as the LayerNorm on the MLP, but still helped some:</p> <ul> <li>Public: 0.01848 -&gt; 0.01843 (-0.00005)</li> </ul> <p>I guess the model finds some relative signals across the columns by doing this.</p>
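A minimal PyTorch sketch of this trick (layer sizes are illustrative, not the author's exact architecture): LayerNorm is applied separately to the gene and cell blocks, so each sample's features are normalized across its own columns before the concatenated result enters the MLP.

```python
import torch
import torch.nn as nn

class LayerNormMLP(nn.Module):
    def __init__(self, n_genes=772, n_cells=100, n_targets=206, hidden=1024):
        super().__init__()
        # Row-wise normalization of each feature block: every sample's
        # gene (and cell) vector is normalized across its own columns.
        self.gene_norm = nn.LayerNorm(n_genes)
        self.cell_norm = nn.LayerNorm(n_cells)
        self.mlp = nn.Sequential(
            nn.Linear(n_genes + n_cells, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_targets),
        )

    def forward(self, genes, cells):
        x = torch.cat([self.gene_norm(genes), self.cell_norm(cells)], dim=1)
        return self.mlp(x)  # raw logits; apply sigmoid + BCE for training

model = LayerNormMLP()
out = model(torch.randn(8, 772), torch.randn(8, 100))
```

The gene/cell widths above are placeholders; the point is only that normalization happens per row, within each block, rather than per column over the whole dataset.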
Mechanisms of Action (MoA) Prediction
Silver medal in very first competition!!! (A noob first Kaggle Experience)
<p>I remember 3 months ago when I first heard of Kaggle, the biggest data science competition platform. A total noob who has just started his data science journey taking part in competitions? Seems quite stupid, doesn't it? True, competition is not a noob's play, but that's what makes Kaggle unique: its helpful and kind-hearted community working together with one motive, <strong>Learn and Share</strong>. 
<br> 2 months ago this noob started the MoA journey in the hope of learning something new regardless of the result, and truly speaking, thanks to the Kaggle community, I learned a lot, a lot.</p> <p>So, today, after 2 months of hard work and 2 full days of tension regarding final submissions and The Shake Up, our team <code>paracetamol</code> finally won a silver medal.<br> Thanks to my teammates <a href="https://www.kaggle.com/namanj27" target="_blank">@namanj27</a> <a href="https://www.kaggle.com/xiaowangiiiii" target="_blank">@xiaowangiiiii</a> <a href="https://www.kaggle.com/nasirkhalid24" target="_blank">@nasirkhalid24</a> <a href="https://www.kaggle.com/ajaykumar7778" target="_blank">@ajaykumar7778</a> for their constant support and guidance.<br> <strong>I have also made my notebook public, please do visit and review it</strong>: <a href="https://www.kaggle.com/kushal1506/moa-prediction-complete-walkthrough-eda-ensemble" target="_blank">Notebook</a>.</p> <p>Also congratulations to the winners. My thanks to <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> and <a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a> for their contributions to this competition.</p> <p>ThankYou,<br> Noob</p>
Mechanisms of Action (MoA) Prediction
Simple Preliminary 71st Place Solution Explained
<p>Thanks to the organizers for this long-awaited tabular competition; it was both fun and challenging. To be honest, I am not good at tabular competitions at all. Thanks to all the public notebook contributors, I learned a lot from all of you; you are the true heroes of this competition. I also want to say sorry to my friend and ex-teammate <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> for missing his best submission; without your sharing, this competition wouldn't be as popular. I also want to say thank you to my teammates <a href="https://www.kaggle.com/wangyijia" target="_blank">@wangyijia</a> <a href="https://www.kaggle.com/nurmannaz" target="_blank">@nurmannaz</a> for their contributions. Additionally, congrats to my previous teammate <a href="https://www.kaggle.com/tiandaye" target="_blank">@tiandaye</a> for winning his first gold medal🏅️ and my teammate <a href="https://www.kaggle.com/nurmannaz" target="_blank">@nurmannaz</a> for getting his first Kaggle medal🥈.</p> <h2>TL;DR</h2> <p>Our final solution is mainly based on public kernels in terms of features. I have no idea how to do feature engineering and I don't know how to validate new features. I tried a couple of methods; they all deteriorated the CV except PCA. With my model architectures and the heavy use of pseudo labels, we were able to train some solid models. 
Along with the public kernel <a href="https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features" target="_blank">https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features</a> by <a href="https://www.kaggle.com/kushal1506" target="_blank">@kushal1506</a>, we were able to achieve 0.01815 on the public leaderboard and 0.01610 on the private leaderboard.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2Ff19ece2ff6e5ad0f0895cf56f3e4b91f%2Fmoa_final_solution.png?generation=1606786588701522&amp;alt=media" alt=""></p> <h2>Timeline</h2> <p>I started this competition at the beginning and tried a few submissions. I figured it's quite difficult to use LightGBM/XGBoost here and NN is taking over. Then I took a break from this competition to participate in a CV competition. I rejoined the competition 13 days before it ended. There were already many great kernels and diverse methods shared in the notebooks section; I went through some of them and decided to tune some parameters and change the model architecture. I then teamed up with <a href="https://www.kaggle.com/nurmannaz" target="_blank">@nurmannaz</a>, who had a public leaderboard score of 1811. The 1811 submission itself was very important for the further development of my models using pseudo labels; however, since this is <a href="https://www.kaggle.com/nurmannaz" target="_blank">@nurmannaz</a>'s first Kaggle competition, he wasn't aware that there's a private dataset, so the submission itself was invalid. I had to rebuild everything from scratch and it was a lot of work. I began to train models and write inferences.</p> <h2>Models</h2> <p>I came up with a method modified from <a href="https://arxiv.org/abs/1507.06228" target="_blank">https://arxiv.org/abs/1507.06228</a>. 
It can be a good replacement for a simple dense layer, implemented below:</p> <pre><code>import torch
import torch.nn as nn

class Dense_Alternative(nn.Module):
    def __init__(self, h1, h2, f):
        super().__init__()
        self.nonlinear = nn.Linear(h1, h2)
        self.linear = nn.Linear(h1, h2)
        self.switch = nn.Linear(h1, h2)
        self.f = f  # nonlinear activation, e.g. nn.ReLU()

    def forward(self, x):
        # Learned gate deciding how much of the nonlinear path to use
        switch = torch.sigmoid(self.switch(x))
        nonlinear = self.f(self.nonlinear(x))
        linear = self.linear(x)
        x = (1 - switch) * linear + switch * nonlinear
        return x
</code></pre> <p>It allows the neural network to decide the weight between linear and nonlinear units.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F9df6f34914348fa6cf02f5e5d408f1c3%2Fdense_alt.png?generation=1606788885682182&amp;alt=media" alt=""></p> <h2>Validation Strategies</h2> <p>I used both Multilabel Stratified Kfold and <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s fold split based on drug ID. I hoped two different validation schemes would result in a more stable ensemble. I still don't know if it helped or not.</p> <h2>Pseudo Labels (PL)</h2> <p>PL is basically using unseen data's predictions as training data. Note that we should not use any of the samples in the unseen data for validation. To my understanding, there are two ways of using pseudo labels: one is to use the entire predictions, which is what I did in this competition; the other is to use soft labels based on probability, e.g. selecting samples above a certain probability threshold. I think PL is a very important part of my solution as it improved my CV and LB significantly. Moreover, I noticed the quality of the pseudo labels really matters. A 1810 submission can improve my CV a lot more than a 1850 submission. There are some potential risks of using pseudo labels, namely making wrong predictions even more wrong, but I will not dive into them here. </p> <p>An example of using pseudo labels: 
In this case, I first pretrained on all targets, and then added the pseudo labels when finetuning.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F288261cf066410c82f19681608e14dde%2Fpl_Explained.png?generation=1606791330584992&amp;alt=media" alt=""></p> <h2>Result</h2> <p>I tried to get more diverse models regardless of public LB and I took the mean of my selected models. It resulted in our best submission. I wish I had a bit more time; it was a rush to get all of this done in two weeks.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F12367d07d3f18922e5dc6c3b0d27fd4d%2Fscore.png?generation=1606789822769643&amp;alt=media" alt=""></p> <h2>Fin</h2> <p>Thanks for reading this. Please comment below if you have any questions. This solution is a lot worse than the best solutions, but I hope it helps. I look forward to reading all your solutions.</p>
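The pseudo-labeling recipe described above can be sketched roughly as follows (a simplified illustration with hypothetical helper names, not the author's exact pipeline): the blended test predictions become soft labels, those rows are appended to the training data, and only the original training rows remain eligible for validation.

```python
import numpy as np

def add_pseudo_labels(X_train, y_train, X_test, test_preds):
    """Append test samples with blended predictions as soft labels.

    Validation folds must be drawn from the original training rows
    only -- pseudo-labeled rows are for fitting, never for scoring.
    """
    X_all = np.vstack([X_train, X_test])
    y_all = np.vstack([y_train, test_preds])  # soft labels in [0, 1]
    train_rows = np.arange(len(X_train))      # rows eligible for validation
    return X_all, y_all, train_rows

X_tr = np.random.rand(100, 10)
y_tr = np.random.randint(0, 2, (100, 206)).astype(float)
X_te = np.random.rand(40, 10)
preds = np.random.rand(40, 206)               # blend of strong models
X_all, y_all, val_pool = add_pseudo_labels(X_tr, y_tr, X_te, preds)
```

The better the blend used for `preds`, the more the pseudo-labeled rows help, which matches the author's observation that a 1810 submission makes stronger pseudo labels than a 1850 one.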
Mechanisms of Action (MoA) Prediction
8th place solution | From multilabel to multiclass
<p>A big thank you to the Laboratory for Innovation Science at Harvard and to the Kaggle team for hosting this competition, as well as to all other participants who created such a friendly and yet challenging competitive environment! This was my first DS competition and I learned a lot from you.</p> <h3>From multilabel to multiclass</h3> <p>The main idea behind my solution was reframing the multilabel prediction problem as multiclass classification to leverage correlations between targets. Even though there were originally 206 binary targets, the training dataset contained only 328 distinct combinations of them. I gave each a class label from 0 to 327, using it as the target variable instead. The correspondence between class labels and original targets was stored in a 328x206 matrix.</p> <p>Then I set up NN models with a softmax activation in the last layer, yielding probability matrices of shape (batch_size x 328) and optimizing for categorical cross-entropy. 
To retrieve predictions for the 206 original targets, I multiplied the NN output by the 328x206 matrix.</p> <p>While in theory this "dual" approach is more limited, due to the fact that it is impossible to give perfect predictions for a new drug with an MoA combination not previously seen among the 328 classes, in practice these individual models still had decent performance by themselves (best public/private LB scores were 0.01849/0.01633 with NNs and 0.01830/0.01623 with Tabnet).</p> <p>However, their biggest power came when blending with other models, because this reframing significantly lowered the prediction correlations, as it detects different signals from the training data.</p> <h3>Final solution</h3> <p>My final solution was a simple average (no weighting) of four models: two Tabnets and two NNs (one multiclass and one multilabel for each architecture). Nothing remarkable about the feature engineering pipeline: I used QuantileTransformer, PCA and row-based statistics as was done in several public notebooks.</p> <p>The details of my top selected submission can be seen below (0.01811/0.01603 LB), but I achieved similar private scores of 0.01603 and 0.01604 with other model combinations.</p> <table> <thead> <tr> <th><strong>Model</strong></th> <th><strong>Nb folds</strong></th> <th><strong>CV (with ctrl)</strong></th> <th><strong>Public LB</strong></th> <th><strong>Private LB</strong></th> </tr> </thead> <tbody> <tr> <td>Multiclass NN</td> <td>7</td> <td>0.01361</td> <td>0.01884</td> <td>0.01659</td> </tr> <tr> <td>Multiclass Tabnet</td> <td>7</td> <td>0.01484</td> <td>0.01830</td> <td>0.01623</td> </tr> <tr> <td>Multilabel NN</td> <td>7</td> <td>0.01453</td> <td>0.01837</td> <td>0.01626</td> </tr> <tr> <td>Multilabel Tabnet</td> <td>7</td> <td>0.01480</td> <td>0.01834</td> <td>0.01620</td> </tr> </tbody> </table> <p>For instance, one of my blends with four different models (Multiclass NN - 0.01633, Multiclass Tabnet - 0.01636, Multilabel Resnet - 0.01636, 
Multilabel Tabnet - 0.01624) also achieved 0.01603 on the private LB, but I did not select it because its public LB score was worse (0.01819).</p> <p>As CV scores (MultilabelStratifiedKFold) were very different between my models, it was difficult to come up with a final validation strategy, since optimizing blending weights would give 100% to Model 1. In the end, I went with my gut feeling that I should not be overfitting too much as long as things were kept simple, and my two final submissions combined distinct models to minimize risk.</p>
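The multilabel-to-multiclass reframing can be sketched as follows (a toy example with 3 targets; in the competition there were 206 targets and 328 distinct combinations, and the helper names are mine): each distinct row of the binary target matrix becomes one class, and multiplying the softmax output by the class-to-target matrix recovers per-target probabilities.

```python
import numpy as np

def build_class_mapping(y):
    """Map each distinct target combination to a class label.

    Returns integer class labels per sample and the
    (n_classes x n_targets) matrix linking classes back
    to the original binary targets.
    """
    combos, class_labels = np.unique(y, axis=0, return_inverse=True)
    return class_labels, combos

def to_target_probs(softmax_out, combos):
    # (batch x n_classes) @ (n_classes x n_targets) -> per-target probs
    return softmax_out @ combos

y = np.array([[1, 0, 0],
              [0, 1, 1],
              [1, 0, 0],
              [0, 0, 0]])
labels, combos = build_class_mapping(y)   # 3 distinct combinations
onehot = np.eye(len(combos))[labels]      # a "perfect" softmax output
recovered = to_target_probs(onehot, combos)
```

With a perfect softmax the original targets are recovered exactly; with a real softmax output, each target's probability is the total probability mass of the classes that contain it.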
Mechanisms of Action (MoA) Prediction
30th place solution
<p>First of all I want to thank the organizers for the interesting data and competition. <br> I'm not sure that choosing logloss as the primary metric is good for practical purposes, but it's convenient for competitors modeling multilabel tasks ^^<br> Also, huge thanks to users <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a>, <a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a> and <a href="https://www.kaggle.com/namanj27" target="_blank">@namanj27</a>. I really appreciate your discussions, and NJ7's pytorch starter is a gem.<br> Personally, it's my best result in Kaggle competitions, so I'm pretty happy about it.</p> <p>The solution is relatively simple and based on average blending of 4 models: <br> 1) Shallow MLP <br> <a href="https://www.kaggle.com/alturutin/moa-mlp" target="_blank">https://www.kaggle.com/alturutin/moa-mlp</a><br> 2) "Resnet"-like multihead NN with cluster embeddings<br> <a href="https://www.kaggle.com/alturutin/moa-resnet" target="_blank">https://www.kaggle.com/alturutin/moa-resnet</a><br> 3) TabNet with hyperparameters almost identical to public notebooks<br> <a href="https://www.kaggle.com/alturutin/moa-tabnet" target="_blank">https://www.kaggle.com/alturutin/moa-tabnet</a><br> 4) Shallow MLP pretrained on nonscored data and finetuned with GELU activation to maximize CV<br> <a href="https://www.kaggle.com/alturutin/moa-mlp-gelu" target="_blank">https://www.kaggle.com/alturutin/moa-mlp-gelu</a><br> *) Ensemble notebook:<br> <a href="https://www.kaggle.com/alturutin/moa-inference" 
target="_blank">https://www.kaggle.com/alturutin/moa-inference</a><br> *) Viz, eda notebook:<br> <a href="https://www.kaggle.com/alturutin/moa-t-test-outliers-segmentation-interactions" target="_blank">https://www.kaggle.com/alturutin/moa-t-test-outliers-segmentation-interactions</a><br> Most of the code is encapsulated in dataset scripts.</p> <p>OneCycleLR scheduler for training the MLP and ResNet, and ReduceLROnPlateau for TabNet.<br> I trained 2 types of models: CV based on train data (because the organizers wrote that private is randomized) and, just in case, drug_id-based CV.</p> <p>Preprocessing is similar to public notebooks: quantile transform -&gt; feature selection, pca, feature stats, clusters.<br> I suppose the main difference from other works may be using RFE with MLP permutation importance scores. It gave a big boost on CV.<br> Also, I used prediction bias as a metric for model finetuning: for example, if the train prior mean = 0.0037, I tried to get a prediction mean &gt; 0.0037 to avoid overfitting, because models tended to overfit to 0. <br> It's possible to solve this problem using regularization, for example, label smoothing, but the estimates are not very precise and a little lucky-ish, I think :)<br> Pseudo labeling worked too and provided a small boost, with lr=1e-5 and the SGD optimizer.<br> Postprocessing is just clipping to the interval [1e-5; 1 - 1e-5]<br> Pytorch was preferred over tensorflow because it simply gave better results for similar hyperparameters.</p> <p>Finally, sorry for my english (it's not my native language), I've learned a lot in this competition. 
</p> <p>List of trained features below.</p> <p>===========================================================<br> worked:</p> <ul> <li>10 folds oof mkf (multilabel stratified)</li> <li>bagging over seeds &amp; splits</li> <li>label smoothing</li> <li>predictions clipping</li> <li>pca</li> <li>pseudo labels</li> <li>quantile transform preprocessing</li> <li>resnet with categorical embeddings</li> <li>tabnet</li> <li>average blending</li> <li>stats feature engineering</li> <li>RFE feature selection</li> <li>stacking top labels</li> <li>nonscored pretraining</li> <li>clusters from kmeans, GMM and outlier detection methods</li> </ul> <p>===========================================================<br> not worked:</p> <ul> <li>fit unsupervised on full data (train + test) -&gt; overfitting</li> <li>gbm (too long to train)</li> <li>kmeans dists</li> <li>dbscan-&gt;svm clustering</li> <li>self-norm nn</li> <li>ttest feature selection</li> <li>using drug group cv</li> <li>MLSMOTE augmentation</li> <li>noise augmentation</li> <li>feature interactions like sum &amp; multiplications</li> </ul> <p>===========================================================<br> had no time to check:</p> <ul> <li>online pseudo labeling on private dataset</li> <li>TVAE; GAN augmentation</li> <li>transformer/attention model</li> <li>convert features to image and fit CNN</li> <li>DAE embeddings</li> <li>contrastive learning (with triplet loss for example) embeddings </li> </ul>
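Two of the simple tricks above — clipping predictions to [1e-5, 1 - 1e-5] and using prediction bias against the train prior (~0.0037) as an overfitting check — can be sketched as follows. This is an illustrative reconstruction, not the author's code; the function names are made up here.

```python
import numpy as np

def clip_preds(p, eps=1e-5):
    """Postprocessing from the write-up: clip probabilities to
    [1e-5, 1 - 1e-5] so one confident mistake can't blow up the logloss."""
    return np.clip(p, eps, 1 - eps)

def looks_overfit_to_zero(preds, train_prior=0.0037):
    """Bias check from the write-up: models tended to overfit toward 0,
    so the mean prediction dropping below the train prior is a warning sign."""
    return preds.mean() < train_prior
```

The bias check is a cheap diagnostic to run on out-of-fold predictions after each experiment, alongside the CV logloss itself.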
Lyft Motion Prediction for Autonomous Vehicles
3rd Place Solution: Baseline + Set Transformer
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft Motion Prediction for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Update: YouTube video about our solution <a href="https://youtu.be/3Yz8_x38qbc" target="_blank">https://youtu.be/3Yz8_x38qbc</a></p> <h1>TL;DR</h1> <ul> <li>Baseline CNN regression</li> <li>Six 3-mode models based on: Xception41, Xception65, Xception71, EfficientNetB5</li> <li>One 16-mode Xception41 model</li> <li><a href="https://arxiv.org/abs/1810.00825" target="_blank">Set Transformer</a> as a second level clusterizer model for ensembling</li> </ul> <p>Our code is available on <a href="https://github.com/asanakoy/kaggle-lyft-motion-prediction-av" target="_blank">GitHub</a></p> <h1>Detailed solution</h1> <h2>1. Data preprocessing</h2> <p>The first level models used the same raster format as generated by l5kit, with the following settings:</p> <p>raster_size=[224, 224]<br> pixel_size=[0.5, 0.5]<br> ego_center=[0.25, 0.5]</p> <p>With history frames rendered at frame offsets 0, 1, 2, 4, 8.</p> <p>We found that a significant amount of time in the l5kit rasterizer is spent preparing coordinates for OpenCV rendering, due to the large number of operations on small numpy arrays, so we combined multiple stages like point transforms and the CV2 shift into single, numba-optimized functions.</p> <p>This allowed us to improve performance approximately 1.6x.</p> <p>Another improvement was to uncompress the zarr files; it's especially useful with random access.</p> <p>The next improvement was to save the cached raster and all relevant information for each training sample to a compressed numpy npz file. 
Especially with the full dataset, we saved each N-th frame for training, since the following frames are usually very similar.</p> <p>All optimizations combined allowed us to <strong>improve CPU-side data loading during training around 6x</strong> and train multiple models simultaneously. The cached training samples for the full dataset used around 1.3TB of space. A fast NVMe SSD drive is useful.</p> <h2>2. First level CNN models</h2> <p>We have tried many approaches but could not beat the baseline solution of using an ImageNet-pretrained CNN, avg pooling or trainable weighted pooling, and a fully connected layer to directly predict positions and confidences of trajectories.</p> <p>What made a bigger difference was the training parameters. We used SGD with a relatively high learning rate of 0.01, gradient clipping of 2 and batch size around 64-128.</p> <p>We used the modified CosineAnnealingWarmRestarts scheduler, starting from a period of 16 epochs (200000 samples each), increasing the period by 1.41421 times each cycle.</p> <p>We used the following models:</p> <ul> <li>Xception41, avg pool, batch size 64</li> <li>Xception41, avg pool, batch size 128 - similar performance to batch size of 64</li> <li>Xception41, learnable weighted pooling</li> <li>Xception41, predicts 16 modes instead of 3</li> <li>Xception65</li> <li>Xception71 - seems to be the best performing model, but was trained less compared to Xception41 due to the shortage of time</li> <li>EfficientNet B5</li> </ul> <p>Initially the EfficientNet based model performed worse compared to Xception41, but with the batch size increased from 64 to 128, the performance improved significantly, with results on par with Xception.</p> <p>Models have been trained on the full training dataset for about 5-7 days on a single GPU (2080ti, 3090 for larger models) each.</p> <p>Training for longer would likely improve the score; for example, the Xception41 model reached the following validation losses during the last 3 cycles: 11.31, 10.86, 10.37<br> <img 
src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2000545%2Fdc9904887584210bbfa46ee33f7a9dd2%2FScreen%20Shot%202020-12-20%20at%200.20.53.png?generation=1608412893460095&amp;alt=media" alt=""></p> <h2>3. Ensembling (second level model)</h2> <p>Since we cannot simply average different models predictions, we used an approach where we find 3 trajectories that best approximate input trajectories. This can be achieved by utilizing the competition loss directly, but with input trajectories as ground truth and weighted by input confidences. This can be seen as a particular case of GMM.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2000545%2Fd3c6cac01be9fef035946f893e3a0f05%2FScreen%20Shot%202020-12-20%20at%200.25.01.png?generation=1608413124490687&amp;alt=media" alt=""><br> where n index relates to input trajectories and k index relates to 3 final output trajectories.</p> <p>At first we optimized this loss with a BFGS solver and this already worked pretty well, however, it was very sensitive to the initialization and tended to get stuck in local optima.</p> <p>We tried to use stacking on a hold-out dataset, i.e., train a 2nd level model that takes trajectories from 1st level models as input and predicts 3 completely new trajectories. We compared a bunch of different architectures from MLP to LSTM and luckily came across Set Transformer, which worked almost as well as the BFGS optimizer. Then we noticed that in the paper, they also utilize the model for the GMM task, so we tried to optimize the loss mentioned above with the model instead of stacking, i.e., train it from scratch on the whole val (or test) dataset. It consistently performed by around 0.2 better than the optimizer. 
We explain such a performance boost by the model's ability to leverage global statistics of the entire provided amount of training data, in contrast to the optimizer, which works sample-wise.</p> <p>Our Set Transformer architecture is pretty simple. As an encoder, we have a 2-layer Transformer without positional encoding. In the decoding stage, we have three so-called "seed vectors" that are trainable parameters. Those vectors attend to the encoded representations of the trajectories.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2000545%2F0c90849861c770da146a1c1b6ec0f32c%2FScreen%20Shot%202020-12-20%20at%200.27.11.png?generation=1608413244195430&amp;alt=media" alt=""></p> <p>All of the input trajectories are predictions from 3-mode models. However, we also had a 16-mode model that boosted the optimizer quite a bit, but didn't help the transformer. So what we did was to add the 16-mode model predictions to the loss, but remove from the input.</p> <h1>Other interesting findings and what didn't work for us</h1> <h2>1. 
Experiments with the Set Transformer</h2> <p>When experimenting with the Set Transformer on a hold-out train set, we found these interesting results.</p> <table> <thead> <tr> <th>Method</th> <th>Val nll loss</th> <th>Inference time on the entire val</th> </tr> </thead> <tbody> <tr> <td>Optimizer</td> <td>12.06</td> <td>~ 30 min</td> </tr> <tr> <td>Transformer: supervised on train set</td> <td>12.06</td> <td><strong>&lt; 1 sec</strong></td> </tr> <tr> <td>Transformer: unsupervised on train set</td> <td>12.00</td> <td><strong>&lt; 1 sec</strong></td> </tr> <tr> <td>Transformer: unsupervised→supervised fine-tune on train set</td> <td><strong>11.98</strong></td> <td><strong>&lt; 1 sec</strong></td> </tr> <tr> <td>Transformer: unsupervised on val set</td> <td><strong>11.82</strong></td> <td>~ 30-50 min</td> </tr> </tbody> </table> <p>Supervised here denotes stacking (original NLL loss with GT trajectory as a target ), and unsupervised indicates the above-mentioned ensemble loss. </p> <p>In the table, we can see that unsupervised training from scratch on the validation data has the best performance. However, we can also utilize a train set to train a model that generalizes well to the validation set, outperforming the optimizer. Such an approach is speedy (as only a forward is needed to predict for a new observation) and may be more suitable for production purposes.</p> <h2>2. Data preprocessing not used for submitted models</h2> <p>We also added extra rasterizer outputs, related to traffic lights: in addition with the current rendering, we added additional 1/4th resolution planes with information about the current and previous traffic lights. We added the separate planes for known on and off traffic lights for forward, left and right directions for the current and previous moments in time. 
The intuition behind this was to give the model extra information: separating an unknown traffic light from a known off traffic light, whether the light changed just now or some time ago, and an easier way to distinguish different signal directions. Lower resolution is sufficient to associate the signal with lanes but can be mixed into deeper model layers, where it does not have to be compressed so much with other inputs. When trained with the extra traffic light inputs, the training and validation loss dropped faster initially but converged to the same level after a few cycles. This approach may still be useful for models used outside of the small training area.</p> <h2>3. What we tried, with performance either on par or worse compared to the simple baseline</h2> <ul> <li>Used a Kalman filter to estimate less noisy initial position, velocity, acceleration and angular rate / turn radius: slightly faster but less stable training, the same final result.</li> <li>Used transformers to model interactions between agents and the map, no improvement</li> <li>Predicting the occupancy map. May actually be useful for planning, but not very useful for the competition metrics. 
</li> <li>Use the predicted occupancy map of other agents as an input</li> <li>Predict all agents recurrently, using transformers or CNN</li> <li>Added position embeddings for points on the map</li> <li>Different heads to predict the trajectory: RNN, predicting acceleration/turn, separating trajectory parameters from velocity on the trajectory, etc.</li> <li>Different number of FC (0-3) layers before the final output FC layer</li> </ul> <h1>Conclusion</h1> <ul> <li>CNN regression baseline is very hard to beat</li> <li>Training for longer with the right parameters on the full dataset was the key</li> <li>l5kit can be optimized by a lot</li> <li>Great competition overall: lots of high quality data, nice optimizable metric, very<br> supportive host</li> <li>Though we thought it would have been better to split train/val/test by geographical<br> location and to increase the train area.</li> </ul> <p>Thanks for reading and happy kaggling!</p>
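The ensembling objective from section 3 — fitting K=3 output trajectories to the N input trajectories under the competition NLL, with inputs weighted by their confidences — can be sketched in PyTorch as below. The function signature and shapes are assumptions for illustration, not the authors' code; in their pipeline this loss was minimized either by a BFGS solver per sample or amortized by the Set Transformer.

```python
import torch

def ensemble_nll(pred_trajs, pred_conf_logits, input_trajs, input_confs):
    """Competition-style NLL with input trajectories as weighted targets.

    pred_trajs:       (K, T, 2) candidate output trajectories
    pred_conf_logits: (K,)      logits of the output confidences
    input_trajs:      (N, T, 2) trajectories from first-level models
    input_confs:      (N,)      their confidences (summing to 1)
    """
    log_conf = torch.log_softmax(pred_conf_logits, dim=0)           # (K,)
    # Squared L2 error of each output mode against each input trajectory
    err = ((input_trajs[:, None] - pred_trajs[None]) ** 2).sum(-1)  # (N, K, T)
    log_lik = log_conf[None] - 0.5 * err.sum(-1)                    # (N, K)
    nll = -torch.logsumexp(log_lik, dim=1)                          # (N,)
    return (input_confs * nll).sum()
```

Minimizing this over `pred_trajs` and `pred_conf_logits` yields the 3 merged trajectories; the `logsumexp` over modes is what makes the objective a mixture (GMM-like) likelihood rather than a plain average.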
Mechanisms of Action (MoA) Prediction
51st solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Public 28/Private 51 0.01609</p> <p><strong>Feature Engineering</strong>:</p> <p>I use four groups of features.</p> <ol> <li>PCA for cell and gene features separately</li> <li>SVD for cell and gene features separately</li> <li>GaussRankScaler</li> <li>A subset of the raw features selected by feature importance.</li> </ol> <p><strong>Model</strong></p> <ol> <li>ResNet</li> <li>Highway connections for the FC layers.</li> </ol> <p><strong>Optimizer</strong></p> <ol> <li>Adam</li> <li>Label Smoothing</li> <li>OneCycleLR</li> </ol> <p><strong>Ensemble</strong></p> <ol> <li>Two models</li> <li>Each model uses 3 seeds and 5 folds</li> <li>The final result is the average of these outputs</li> </ol> <p><strong>Methods that didn't work for me</strong></p> <ol> <li>TabNet (I didn't spend too much time on it)</li> <li>Converting each record to an image and using Conv</li> <li>Using 1D Conv</li> <li>Pre-training on the non-scored targets.</li> <li>All kinds of activation functions other than relu.</li> <li>k-means</li> <li>Binning the features</li> <li>All kinds of lr schedules</li> <li>DeepFM and some other models.</li> <li>autoencoder</li> </ol> <p>Thanks to everyone who shared their notebooks; I borrowed some insights from you, and I'm sorry I can't find and paste the links here. And congrats to all.</p>
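The GaussRankScaler mentioned above can be sketched as a rank transform followed by the inverse normal CDF. This is a generic illustration of the technique (using the standard library's NormalDist), not the author's exact implementation:

```python
import numpy as np
from statistics import NormalDist

def gauss_rank(x, eps=1e-6):
    """Map a 1-D feature to an approximately standard-normal shape:
    rank the values, rescale the ranks into the open interval (0, 1),
    then push them through the inverse normal CDF."""
    n = len(x)
    ranks = np.argsort(np.argsort(x))                 # dense ranks 0..n-1
    u = np.clip((ranks + 1) / (n + 1), eps, 1 - eps)  # keep inside (0, 1)
    inv_cdf = NormalDist().inv_cdf
    return np.array([inv_cdf(v) for v in u])
```

Because only the ranks survive, the transform is robust to outliers and skew, which is one reason it pairs well with neural nets on tabular data.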
Mechanisms of Action (MoA) Prediction
40th place solution | MLPs, ResNet, TabNet & Clustering
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to the winners!</p> <p>Here is my approach:</p> <h1>CV:</h1> <p>The most important part of any Kaggle competition, aka finding a good cross-validation framework, was a nightmare in this competition. I started with 10-fold CV, but it was not generalizing at all to the public LB and I switched to MultilabelStratified 10-fold. Generalization to the LB was weak, so I decided to <strong>only keep changes that improved both my CV and the LB</strong>. I think that this was key in this competition. I experimented with <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">Chris Deotte's CV strategy</a> but found that it was not generalizing better than my old CV, so I stuck to MultilabelStratified 10-fold. 
</p> <h1>Models:</h1> <p>Here are the models used in my final ensemble:</p> <ul> <li>MLPs (4 models):<br> I took inspiration from <a href="https://www.kaggle.com/riadalmadani/pytorch-cv-0-0145-lb-0-01839/output?scriptVersionId=45510876&amp;select=submission.csv" target="_blank">this notebook</a>.<br> In my final ensemble, I used 4 MLPs: 2 different architectures (varying numbers of layers), one with the new CV framework (from Chris Deotte), and one predicting non-scored labels as well.<br> CV (for the best MLP): 0.01571 // LB: 0.01834</li> <li>ResNet (1 model):<br> I started by re-implementing <a href="https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model" target="_blank">this notebook</a> locally, which I converted to PyTorch.<br> CV: 0.01601 // LB: 0.01854</li> <li>TabNet (1 model)<br> TabNet was doing amazingly well in this competition. I was not familiar with the model and it was a pleasant discovery. I started with the implementation described <a href="https://www.kaggle.com/hiramcho/moa-tabnet-with-pca-rank-gauss?scriptVersionId=45407015" target="_blank">here</a>, and brought it down to 0.01827 LB by just changing parameters. Setting n_shared = 1 and n_independent = 1 was key, as well as switching to the OneCycleLR scheduler and reducing the weight decay. <br> CV: 0.01569 // LB: 0.01827</li> </ul> <h1>Training:</h1> <p>A few elements were essential. I used these hyper-parameters across all models: </p> <ul> <li>Averaging 5 models with different random seeds per fold</li> <li>weight decay 8e-6</li> <li>label smoothing 1e-4</li> <li>OneCycleLR scheduler with 25 epochs, and max_lr 5e-3 and div_factor 1e3</li> </ul> <h1>Ensembling:</h1> <p>Nothing fancy, just a weighted average. <br> Best weights: <br> 0.5 * MLPs + 0.1 * ResNet + 0.4 * TabNet<br> CV (excluding the MLP trained with the new CV framework): 0.01556 // LB: 0.01821</p> <h1>Clustering:</h1> <p>We know that some drugs appear in both the training set and the test set. 
Besides, once drug_id was given, we discovered that 8 drugs were dominating in terms of frequency. So I thought that we could do some clustering, and find test drugs close to clusters formed by these 8 drugs. I first trained a t-sne model to reduce the genes+cells feature space to dimension 2, for these 8 drugs + the public test set. Then, when plotting, I noticed that 5 out of these 8 drugs had clear, separate clusters. So for test data points close (in dimension 2, and with L2-distance) to the cluster centers of these 5 drugs, I blended my ensemble's current predictions with the labels from these 5 drugs. Amazingly, that helped my public LB from 0.01821 to 0.01819, which converted to 0.01611 and 0.01609 on the private LB respectively. I did not even check how much this helped CV. To keep safe I only used this clustering with one of my two final submissions. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F965613%2Fb507a12e3054ef12d0f070eaa2df82d4%2Fknn.png?generation=1606784164930624&amp;alt=media" alt=""></p> <p>In red is the public test set, the 8 other colours are the 8 drugs with highest frequency. We can see the 5 forming clear clusters: orange (bottom left), purple (left), green (top left), light blue (top left), and blue (top). When looking at this plot, it is very natural to want to label the red points falling into these clusters with the same label as these drugs :) </p> <p>The drug_ids of the 5 clusters are: 292ab2c28, 87d714366, 8b87a7a83, d08af5d4b and d1b47f29d.</p> <h1>What did not work:</h1> <ul> <li>xgboost</li> <li>Clustering on the full training set</li> <li>Clustering in the original feature space</li> <li>2nd-stage models trained on model predictions</li> <li>Deep stacking </li> <li>DAE features</li> <li>Removing targets with too few positives</li> <li>Other optimizers than Adam </li> <li>T-SNE features for neural nets </li> </ul>
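The final ensemble above is a fixed-weight average; writing it down makes the bookkeeping explicit. The array names are illustrative, while the weights (0.5 MLPs, 0.1 ResNet, 0.4 TabNet) are the ones reported in the write-up:

```python
import numpy as np

def blend(p_mlp, p_resnet, p_tabnet, w=(0.5, 0.1, 0.4)):
    """Weighted average of per-model probability arrays of identical shape
    (n_samples, n_targets); the weights were tuned on CV and the LB."""
    assert abs(sum(w) - 1.0) < 1e-9  # weights should sum to 1
    return w[0] * p_mlp + w[1] * p_resnet + w[2] * p_tabnet
```

With weights summing to 1, each blended value stays a valid probability as long as the inputs are.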
Mechanisms of Action (MoA) Prediction
3rd source code(To fulfill requirement of prize)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>solution: <a href="https://www.kaggle.com/c/lish-moa/discussion/200540" target="_blank">https://www.kaggle.com/c/lish-moa/discussion/200540</a><br> code: <a href="https://github.com/jxzly/3rd-kaggle-moa-2020-solution" target="_blank">https://github.com/jxzly/3rd-kaggle-moa-2020-solution</a></p>
RSNA STR Pulmonary Embolism Detection
[2nd place] Solution Overview & Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Update: code available at <a href="https://github.com/i-pan/kaggle-rsna-pe" target="_blank">https://github.com/i-pan/kaggle-rsna-pe</a></p> <p>Congratulations to all the participants and the winners. Special congrats to prize winners <a href="https://www.kaggle.com/osciiart" target="_blank">@osciiart</a> and <a href="https://www.kaggle.com/jpbremer" target="_blank">@jpbremer</a> who are on track to be physician GMs. This was a tough, compute-heavy challenge given the large amount of data and short timeframe. My setup was 4 24 GB Quadro RTX 6000 GPUs. I always feel guilty during competitions like these since I have the luxury of strong compute. Models were trained using DDP in PyTorch 1.6 with automatic mixed precision. </p> <p>Even though the results aren't final yet and my submission may be removed for violating the label consistency requirements (hopefully my heuristic for fixing those worked!), I still wanted to share my solution.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F281652%2Ff6a15edbc81d867c0faecd0894d6aa27%2Fpe.png?generation=1603756913344030&amp;alt=media" alt=""></p> <p>Here is a schematic outlining my solution. It has a lot of moving parts, so I apologize for the lengthy summary. </p> <h1>Step 1: Feature Extraction</h1> <p>Last year's RSNA Intracranial Hemorrhage Detection challenge shared a lot of similarities with this year's challenge. Most of the top solutions combined 2D CNN feature extraction with sequence modeling. The backbone of my solution also relied on a similar setup. 
</p> <p>I first trained ResNeSt50 on 2D images. Images were windowed using the PE-specific window (WL=100, WW=700) that I mentioned in one of my initial posts on DICOM processing. Each "image" was 3 channels, with each channel representing an individual slice. Thus the 2D image was a stack of 3 continuous slices. The targets were the 7 PE-related labels (i.e., excluding the RV/LV ratio labels). Note: most of the PE-related labels were exam-level labels. However, I just assigned the exam labels to each slice positive for PE (negative slices had all zeros), with the understanding that there would be label noise. Also, I predicted the labels for the middle slice among the 3 slices in the image. Models were trained using vanilla binary cross-entropy loss (<code>BCEWithLogitsLoss</code> in PyTorch), 512x512 with 448x448 random crops (single center crop during inference), RandAugment data augmentation, batch size 128, 5 epochs, 2500 steps per epoch, RAdam optimizer with cosine annealing learning rate scheduler. I used generalized mean pooling and reduced the final feature vector to 512-D. Mean loss was around 0.06 (AUC 0.95-0.96 for slice-wise prediction of PE vs. no PE). Features were then extracted for all slices. </p> <h1>Step 2: Sequence Modeling</h1> <p>Many of last year's solutions used LSTMs/GRUs as the sequence model of choice. For this competition, I used <code>huggingface</code> transformers (specifically, the <code>Transformer</code> class from <code>transformers.modeling_distilbert</code>). I used one 4-layer transformer to produce slice-wise <code>pe_present_on_image</code> predictions and another transformer to predict exam-wise PE-labels. Sequence length was 512 during training, padded/truncated as necessary. During inference, I used the sequence without modifications. </p> <p>Important point: <strong>Images from non-PE exams do not contribute to the loss.</strong> At first, I was training the slice-wise transformer on all exams. 
Then, I decided to train these models on positive exams only. This lowered my CV by about 0.01-0.02. I used a custom weighted loss where I weighted the loss from each example by the proportion of positive PE slices (as described in the metric), though I'm still not sure I wrote it correctly. </p> <p>The exam-level transformer was trained using a weighted BCE loss based on the competition label weights. Exam-level validation losses ranged from 0.15-0.17. </p> <h1>Step 3: Time-Distributed CNN</h1> <p>To add some variety into my modeling, I then trained a time-distributed CNN, which is just another way of saying I stacked a transformer on top of a CNN feature extraction backbone and trained end-to-end. </p> <p>But before doing that, I performed inference using the slice-wise transformer model to get PE scores for every slice (5-fold OOF predictions). Then, when training the TD-CNN, I only trained on the top 30% of slices from each exam, sorted by PE score. These were trained on 3D volumes of size 32x416x416 cropped to 32x364x364 using the same windowing strategy (WL=100, WW=700) in batches of 16. </p> <p>I initialized the CNN backbone and the transformer head with trained models from steps 1 and 2 to help with convergence. I forced all the batch normalization layers in the backbone to <code>eval</code> mode as well- this prevents the running mean and variance in each layer from updating and only trains the coefficients. Exam-level validation losses ranged from 0.15-0.17, similar to step 2. </p> <h1>Step 4: Heart Slice Prediction</h1> <p>RV/LV ratio is a significant portion of the loss. I hand-labeled slices with heart in 1,000 CT scans and trained a model (EfficientNet-B1 pruned, AUC 0.998, 256x256-&gt;224x224 crops) to classify heart slices in each CT scan. I did this because I felt that by focusing a model on the heart, I could get better, more consistent results across scans. It actually wasn't hard to label 1,000 scans- probably a full day's worth of work. 
I just needed to find the top and bottom heart slices; everything in between thus must also contain the heart.</p> <h1>Step 5: RV/LV 3D CNN</h1> <p>I trained a 3D CNN to classify RV/LV ratio. Specifically, I used a 101-layer channel separated network, pretrained on 65 million Instagram videos (<a href="https://arxiv.org/abs/1904.02811" target="_blank">https://arxiv.org/abs/1904.02811</a>, <a href="https://github.com/facebookresearch/VMZ)" target="_blank">https://github.com/facebookresearch/VMZ)</a>. </p> <p>This model was trained only on heart slices from each exam. Also, it was only trained on positive exams. This is because RV/LV ratio was not labeled for negative exams- both RV/LV labels are 0. Thus, it didn't make sense to me to try and train my model on the entire dataset's labels directly. Models were trained using a weighted BCE loss using the competition weights. I resized the input to 64x256x256-&gt;64x224x224 crops and used mediastinal window (WL=50, WW=350). AUC for RV/LV ratio &gt; 1 was about 0.85. The validation loss ranged from 0.44-0.48. I then used these models to extract 2048-D features from each exam.</p> <p>The challenge here was that the likelihood of an exam being PE-positive greatly influenced the RV/LV labels. From step 2, I calculated 5-fold OOF exam-level predictions for all exams. Then, I trained a linear model that took as input the concatenation of the 2048-D 3D CNN feature and the 7 PE exam labels. This linear model was trained across <strong>all exams</strong> using the labels directly. This way, the model could take into account the imaging features from the scan but also adjust the predictions based on the likelihood of PE. Validation losses ranged from 0.22-0.25. </p> <p>I didn't validate my entire pipeline that often due to time constraints, instead choosing to focus on the individual component losses and optimizing each one as well as I could. 
I did periodically do a sanity-check on 200 single-fold exams to make sure that my entire pipeline was working, and my final validation loss was 0.183 for my 0.150 public LB submission. I'm still not confident that I implemented the metric correctly, but I was seeing good correlation (0.159/0.195-&gt;0.156/0.188-&gt; 0.150/0.183).</p> <p>At the end, I applied a function to enforce label consistency requirements for each exam. There was about 0.001 change in CV after applying this function. I have some thoughts about this requirement which maybe I'll save for another post. </p> <h1>Final Models</h1> <p>Overall, I used these models:<br> -2x ResNeSt50 feature extractors<br> -6x exam transformers (3 for each extractor)<br> -6x slice transformers<br> -5x ResNeSt50 TD-CNN<br> -1x EfficientNet-B1 pruned heart slice classifier<br> -5x ip-CSN-101 3D CNN RV/LV feature extractor<br> -5x RV/LV linear model </p> <p>Inference took about 7-8 hours for the entire test set. </p> <p>Code to follow once I clean it up. </p> <p>Things that didn't work as well:<br> -3D CNN for PE exam-level prediction<br> -Pseudolabeling RV/LV ratio for negative exams<br> -Using logits/probabilities instead of features for second-stage model<br> -TD-CNN for RV/LV ratio prediction<br> -Training feature extractor on positive exams only to improve slice-wise PE prediction </p>
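The PE-specific window used in steps 1 and 3 (WL=100, WW=700) maps raw Hounsfield units into [0, 1]. Below is a standard CT-windowing sketch, not the author's exact code:

```python
import numpy as np

def apply_window(hu, level=100, width=700):
    """Clip Hounsfield units to [level - width/2, level + width/2]
    and rescale linearly to [0, 1]."""
    low, high = level - width / 2, level + width / 2
    return (np.clip(hu, low, high) - low) / (high - low)
```

The mediastinal window from step 5 is the same function with level=50, width=350; the window concentrates the available dynamic range on the tissue contrast that matters for the task.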
RSNA STR Pulmonary Embolism Detection
3rd place 0.156 public lb, 0.148 private lb
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Edit: Github added.</p> <p>First of all, congrats to all the winners.</p> <p>Our solution is quite straightforward. We mostly use the same tricks from last year's RSNA-IHD with some updated modules for performance improvements, label consistency, and exam-level prediction.</p> <p>The solution can be divided into 3 main parts:</p> <p>Part one is single-image training. We used a CNN to predict the <code>pe_present_on_image</code> probability for each slice and modified the CNN model by adding an FC layer with 7 units before the final binary prediction layer. This layer acts as an embeddings-generator layer, and the final binary prediction is just a linear function of these 7 inputs. We chose 7 because they actually are fine-grained labels of <code>pe_present_on_image</code> which are combinations of slice-level labels and exam-level labels (<code>pe_present_on_image</code> and <code>rv_lv_ratio_gte_1</code> or <code>pe_present_on_image</code> and <code>central pe</code> for example). We also tried making the labels even finer-grained, but with no luck.</p> <p>Part two is sequential slice-level models. From part one, we get a 7-d embedding vector for each slice; concatenating 31 consecutive slices' embedding vectors, we get a 31-by-7 image as the embedding feature map of the center slice (in other words, for each slice, we also look at 15 slices before it and 15 slices after it). For edge cases, padding is used. The part-two model is a module combining a simple shallow CNN with no pooling and a sequential model with two bi-directional LSTM layers. 
We get the output of this model as the final prediction of each slice, and with reversing augmentation and two models (CNN and LSTM), we have 4 outputs. Concatenating them all gives a final 32-d embeddings vector for each slice.</p> <p>Part three is the exam-level CNN models. We simply stack the 32-d embedding vectors of all the slices in an exam and chose 1024-by-32 as the common image size. For exams with fewer than 1024 slices, we zero-pad, and for exams with more than 1024 slices, we truncate. This model predicts the 9 exam-level labels.</p> <p>My training source code:<br> <a href="https://github.com/moewiee/RSNA2020-Team-VinBDI-MedicalImaging" target="_blank">https://github.com/moewiee/RSNA2020-Team-VinBDI-MedicalImaging</a></p> <p>You guys can also take a look at my inference kernel:<br> <a href="https://www.kaggle.com/moewie94/rsna-2020-inference" target="_blank">https://www.kaggle.com/moewie94/rsna-2020-inference</a></p>
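The 31-slice window construction in part two can be sketched in NumPy (the function name and the exact edge padding are my assumptions; the post only says "padding is used"):

```python
import numpy as np

def build_window_features(embeddings: np.ndarray, window: int = 31) -> np.ndarray:
    """For each slice, stack the embeddings of `window` consecutive slices
    centered on it (15 before, 15 after), zero-padding at the volume edges.

    embeddings: (num_slices, emb_dim) array of per-slice CNN embeddings.
    Returns: (num_slices, window, emb_dim) per-slice feature maps.
    """
    half = window // 2
    n, d = embeddings.shape
    padded = np.zeros((n + 2 * half, d), dtype=embeddings.dtype)
    padded[half:half + n] = embeddings
    # Output row i is the window padded[i : i + window], i.e. slices i-15..i+15.
    return np.stack([padded[i:i + window] for i in range(n)])

emb = np.random.rand(200, 7).astype(np.float32)   # 200 slices, 7-d embeddings
maps = build_window_features(emb)
print(maps.shape)  # (200, 31, 7)
```

Feeding these (31, 7) maps to the shallow-CNN / BiLSTM module then gives the per-slice predictions described above.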
RSNA STR Pulmonary Embolism Detection
4th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank you to the organizers and congrats to all the participants and winners!</p> <p>My solution is very simple.<br> ・train CNN backbone (Stage-1)<br> ・extract embedding<br> ・train LSTMs (Stage-2)</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2422331%2F37a328ab2727b4b5c02b459c978f80fb%2FRSNA2020%20solution.png?generation=1604015018732684&amp;alt=media" alt=""></p> <p><strong>Stage-1:</strong><br>  Split into 3 folds and train a CNN which predicts the pe_present_on_image label for each image.<br> As a result of exploring several backbones, I decided to use rexnet200. Efficientnet-b4 and b5 got nan loss with mixed precision so I gave up on using them.<br> Full-size (512x512) jpeg images preprocessed by Ian Pan’s windowing are fed into the CNN.<br> For augmentation, I used the following:<br> ・Horizontal Flip<br> ・ShiftScaleRotate<br> ・One of (Cutout, GridDropout)<br> Since the ratio of pe_present_on_image was quite small, I also used focal loss.<br> In Stage-1, I created 6 models. (3-fold BCE and 3-fold Focal) </p> <p><strong>Stage-2:</strong><br>  The embeddings were dumped, sorted in z-axis order and stored on disk, and then LSTMs were trained using them. I implemented mini-batch training of variable-length series, since the number of images differs between studies.
Each series is filled with invalid values until the maximum series length (1083) of the training data is reached, and the invalid part is ignored when calculating the loss.<br>  For each study, embeddings were put into BiLSTMx2 and each LSTM’s outputs went into two branches: one to predict pe_present_on_image and the other to predict the exam-level labels. In the pe_present_on_image branch, the two outputs were simply added and transformed by a Linear layer. In the exam_level branch, features were aggregated using an attention layer and transformed by a Linear layer.<br> For the loss, I used simple BCE.</p> <p><strong>Inference:</strong><br> I implemented a model that connects the backbone and LSTMs for inference without dumping embeddings to disk. Inference is done with batch_size=1. There was not enough time to apply TTA.</p> <p><strong>Submit model:</strong><br> ・6-model average ensemble. <br>  3-fold with backbone trained by BCE Loss + 3-fold with backbone trained by Focal Loss.<br>  public LB: 0.159 private LB: 0.152<br> ・3-model average ensemble.<br>  3-fold with backbone trained by BCE Loss<br>  public LB: 0.158 private LB: 0.153</p> <p><strong>Thank you !</strong><br> This is my first gold medal. I’m super glad to finally be a kaggle master!<br> And this is also my first time posting a discussion. <br> I'm sorry if my English is not good enough to convey my solution.</p> <p>The code link will appear here once I clean it up.</p> <p>Update:<br> <a href="https://github.com/piwafp0720/RSNA-STR-Pulmonary-Embolism-Detection" target="_blank">repository</a><br> <a href="https://www.kaggle.com/kazumax0720/rsna-str-pulmonary-embolism-detection-inference" target="_blank">inference kernel</a></p>
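The pad-then-ignore trick for variable-length series can be sketched as follows (a NumPy illustration with hypothetical names; the author implemented it inside the LSTM training loop):

```python
import numpy as np

def masked_bce(probs, targets, lengths):
    """Mean binary cross-entropy over a padded batch of variable-length
    series; positions past each series' true length are masked out of
    the loss, so the padding values never matter."""
    _, max_len = probs.shape
    # mask[b, t] is True only for valid (non-padding) positions.
    mask = np.arange(max_len)[None, :] < np.asarray(lengths)[:, None]
    eps = 1e-7
    p = np.clip(probs, eps, 1 - eps)
    loss = -(targets * np.log(p) + (1 - targets) * np.log(1 - p))
    return (loss * mask).sum() / mask.sum()

# Two series of true lengths 3 and 5, padded to max_len=5.
probs = np.array([[0.9, 0.8, 0.7, 0.5, 0.5],
                  [0.1, 0.2, 0.9, 0.8, 0.6]])
targets = np.array([[1., 1., 1., 0., 0.],   # last two positions are padding
                    [0., 0., 1., 1., 1.]])
print(masked_bce(probs, targets, [3, 5]))
```

Changing the values in the padded tail leaves the loss unchanged, which is exactly the "invalid part is ignored" behaviour described above.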
RSNA STR Pulmonary Embolism Detection
9th place solution ( + github code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners !<br> This competition was hard on me in many ways.</p> <h1>Solution Overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1227363%2Fb7889dcd4b8229c53e2103cfb622a8e1%2FRSNA%202020%20solution%20(1).png?generation=1603760751579416&amp;alt=media" alt=""></p> <h1>Preprocess</h1> <ul> <li><p>In the train data, no CT image has PE after the 400th image. So, we used only images before the 400th image.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1227363%2Fc51f745506070907901a148f396ae5d7%2Fdownload-20.png?generation=1603800720901371&amp;alt=media" alt=""></p></li> <li><p>For stage 1 training, we preprocessed image-level labels as in the following images.</p> <ul> <li>Before<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1227363%2F41ffbb129012a32e592556611c4a155d%2Fdownload-21.png?generation=1603760930759265&amp;alt=media" alt=""></li> <li>After<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1227363%2F0a0fa0aacd34ca23e8b7a00e385d25b0%2Fdownload-22.png?generation=1603760956442464&amp;alt=media" alt=""></li></ul></li> </ul> <h1>Stage 1 training</h1> <ul> <li>We used 512 x 512 images + efficientnet-b5 and 384 x 384 images + efficientnet-b3, trained on the preprocessed labels.</li> </ul> <h1>Stage 2 training</h1> <ul> <li>Inference time was severely constrained because we used 512 x 512 images + efficientnet-b5.
So, we subsampled 400 sequences to 200 sequences and used a Deconvolution module.<ul> <li>We got the same CV score when using 400 sequences.</li></ul></li> <li>We were not able to use various models in stage 1 because of resource limits. Therefore, we trained various models in stage 2.<ul> <li>Input: b5-feature only, b3-feature only, b5-feature + b3-feature</li> <li>model: Conv1D, LSTM, GRU, Conv1D + LSTM</li> <li>output: 3 x 4 = 12 predictions</li></ul></li> </ul> <h1>Stacking</h1> <ul> <li>We trained LGBM, Conv1D and GRU.<ul> <li>We used only PE-positive exams when training pe_present_on_image with lgbm, because images from negative exams don't affect the competition metric.</li></ul></li> </ul> <h1>Postprocess</h1> <ul> <li>We've implemented a heuristic post process.<ul> <li>This postprocess increased CV and public score by 0.002 (Private 0.160 -&gt; 0.162)</li> <li>Main idea<ul> <li>replace <code>pe_present_on_image</code> with <code>1 - negative_exam_for_pe</code> when <code>1 - negative_exam_for_pe &lt;= pe_present_on_image</code></li> <li>repeat sigmoid -&gt; logit -&gt; logit += s -&gt; sigmoid until satisfying label consistency</li></ul></li></ul></li> </ul> <pre><code>label_cols = [
    "pe_present_on_image",
    "negative_exam_for_pe",
    "indeterminate",
    "chronic_pe",
    "acute_and_chronic_pe",
    "central_pe",
    "leftsided_pe",
    "rightsided_pe",
    "rv_lv_ratio_gte_1",
    "rv_lv_ratio_lt_1",
]


def postprocess(x, s=2.0):
    logit = np.log(x/(1 - x))
    logit = logit + s
    sigmoid = 1 / (1 + np.exp(-logit))
    return sigmoid


def satisfy_label_consistency(df):
    rule_breaks = consistency_check(df).index
    print(rule_breaks)
    if len(rule_breaks) &gt; 0:
        df["positive_exam_for_pe"] = 1 - df["negative_exam_for_pe"]
        df.loc[
            df.query("positive_exam_for_pe &lt;= pe_present_on_image").index,
            "pe_present_on_image",
        ] = df.loc[
            df.query("positive_exam_for_pe &lt;= pe_present_on_image").index,
            "positive_exam_for_pe",
        ]
        rule_breaks = consistency_check(df).index
    df["positive_images_in_exam"] = df["StudyInstanceUID"].map(
        df.groupby(["StudyInstanceUID"])["pe_present_on_image"].max()
    )
    df_pos = df.query("positive_images_in_exam &gt; 0.5")
    df_neg = df.query("positive_images_in_exam &lt;= 0.5")
    if "1a" in rule_breaks:
        rv_filter = "rv_lv_ratio_gte_1 &gt; 0.5 &amp; rv_lv_ratio_lt_1 &gt; 0.5"
        while len(df_pos.query(rv_filter)) &gt; 0:
            df_pos.loc[df_pos.query(rv_filter).index, "rv_min"] = df_pos.query(
                rv_filter
            )[label_cols[8:]].min(1)
            for rv_col in label_cols[8:]:
                df_pos.loc[
                    df_pos.query(rv_filter + f" &amp; {rv_col} == rv_min").index, rv_col
                ] = postprocess(
                    df_pos.query(rv_filter + f" &amp; {rv_col} == rv_min")[
                        rv_col
                    ].values,
                    s=-0.1,
                )
        rv_filter = "rv_lv_ratio_gte_1 &lt;= 0.5 &amp; rv_lv_ratio_lt_1 &lt;= 0.5"
        while len(df_pos.query(rv_filter)) &gt; 0:
            df_pos.loc[df_pos.query(rv_filter).index, "rv_max"] = df_pos.query(
                rv_filter
            )[label_cols[8:]].max(1)
            for rv_col in label_cols[8:]:
                df_pos.loc[
                    df_pos.query(rv_filter + f" &amp; {rv_col} == rv_max").index, rv_col
                ] = postprocess(
                    df_pos.query(rv_filter + f" &amp; {rv_col} == rv_max")[
                        rv_col
                    ].values,
                    s=0.1,
                )
        df.loc[df_pos.index, label_cols[8:]] = df_pos[label_cols[8:]]
    if "1b" in rule_breaks:
        pe_filter = " &amp; ".join([f"{col} &lt;= 0.5" for col in label_cols[5:8]])
        while "1b" in consistency_check(df).index:
            for col in label_cols[5:8]:
                df_pos.loc[df_pos.query(pe_filter).index, col] = postprocess(
                    df_pos.loc[df_pos.query(pe_filter).index, col], s=0.1
                )
            df.loc[df_pos.index, label_cols[5:8]] = df_pos[label_cols[5:8]].values
    if "1c" in rule_breaks:
        chronic_filter = "chronic_pe &gt; 0.5 &amp; acute_and_chronic_pe &gt; 0.5"
        df_pos.loc[df_pos.query(chronic_filter).index, label_cols[3:5]] = softmax(
            df_pos.query(chronic_filter)[label_cols[3:5]].values, axis=1
        )
        df.loc[df_pos.index, label_cols[3:5]] = df_pos[label_cols[3:5]]
    if "1d" in rule_breaks:
        neg_filter = "negative_exam_for_pe &gt; 0.5 | indeterminate &gt; 0.5"
        while "1d" in consistency_check(df).index:
            for col in label_cols[1:3]:
                df_pos.loc[df_pos.query(neg_filter).index, col] = postprocess(
                    df_pos.loc[df_pos.query(neg_filter).index, col], s=-0.1
                )
            df.loc[df_pos.index, label_cols[1:3]] = df_pos[label_cols[1:3]].values
    if "2a" in rule_breaks:
        neg_filter = "negative_exam_for_pe &gt; 0.5 &amp; indeterminate &gt; 0.5"
        while len(df_neg.query(neg_filter)) &gt; 0:
            df_neg.loc[df_neg.query(neg_filter).index, "neg_min"] = df_neg.query(
                neg_filter
            )[label_cols[1:3]].min(1)
            for neg_col in label_cols[1:3]:
                df_neg.loc[
                    df_neg.query(neg_filter + f" &amp; {neg_col} == neg_min").index,
                    neg_col,
                ] = postprocess(
                    df_neg.query(neg_filter + f" &amp; {neg_col} == neg_min")[
                        neg_col
                    ].values,
                    s=-0.1,
                )
        neg_filter = "negative_exam_for_pe &lt;= 0.5 &amp; indeterminate &lt;= 0.5"
        while len(df_neg.query(neg_filter)) &gt; 0:
            df_neg.loc[df_neg.query(neg_filter).index, "neg_max"] = df_neg.query(
                neg_filter
            )[label_cols[1:3]].max(1)
            for neg_col in label_cols[1:3]:
                df_neg.loc[
                    df_neg.query(neg_filter + f" &amp; {neg_col} == neg_max").index,
                    neg_col,
                ] = postprocess(
                    df_neg.query(neg_filter + f" &amp; {neg_col} == neg_max")[
                        neg_col
                    ].values,
                    s=0.1,
                )
        df.loc[df_neg.index, label_cols[1:3]] = df_neg[label_cols[1:3]]
    if "2b" in rule_breaks:
        while "2b" in consistency_check(df).index:
            for col in label_cols[3:]:
                df_neg.loc[df_neg.query(f"{col} &gt; 0.5").index, col] = postprocess(
                    df_neg.loc[df_neg.query(f"{col} &gt; 0.5").index, col], s=-0.1
                )
            df.loc[df_neg.index, label_cols[3:]] = df_neg[label_cols[3:]].values
    return df
</code></pre> <p>Updated: <br> We uploaded code on github (<a href="https://github.com/shimacos37/kaggle-rsna-2020-9th-solution" target="_blank">https://github.com/shimacos37/kaggle-rsna-2020-9th-solution</a>).</p>
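`consistency_check` is referenced in the postprocessing code above but not shown in the post. Below is a minimal sketch of what such a function might look like, assuming the competition's published label-consistency rules (rule ids 1a-2b, thresholds at 0.5); the authors' actual implementation is in their linked repository:

```python
import pandas as pd

LOC = ["central_pe", "leftsided_pe", "rightsided_pe"]
RV = ["rv_lv_ratio_gte_1", "rv_lv_ratio_lt_1"]
NEG = ["negative_exam_for_pe", "indeterminate"]

def consistency_check(df: pd.DataFrame) -> pd.DataFrame:
    """Return one row per violated rule id, so consistency_check(df).index
    lists the broken rules (as used by satisfy_label_consistency above)."""
    exam = df.groupby("StudyInstanceUID").first()
    exam["pos"] = df.groupby("StudyInstanceUID")["pe_present_on_image"].max()
    p = exam[exam["pos"] > 0.5]   # exams predicted PE-positive
    n = exam[exam["pos"] <= 0.5]  # exams predicted PE-negative
    counts = {
        # 1a: a positive exam needs exactly one RV/LV label above 0.5
        "1a": ((p[RV[0]] > 0.5) == (p[RV[1]] > 0.5)).sum(),
        # 1b: a positive exam needs at least one location label above 0.5
        "1b": (~(p[LOC] > 0.5).any(axis=1)).sum(),
        # 1c: chronic_pe and acute_and_chronic_pe cannot both exceed 0.5
        "1c": ((p["chronic_pe"] > 0.5) & (p["acute_and_chronic_pe"] > 0.5)).sum(),
        # 1d: a positive exam cannot also be negative or indeterminate
        "1d": (p[NEG] > 0.5).any(axis=1).sum(),
        # 2a: a negative exam needs exactly one of negative/indeterminate above 0.5
        "2a": ((n[NEG[0]] > 0.5) == (n[NEG[1]] > 0.5)).sum(),
        # 2b: a negative exam must keep all other exam-level labels below 0.5
        "2b": ((n[["chronic_pe", "acute_and_chronic_pe"] + LOC + RV] > 0.5)
               .any(axis=1)).sum(),
    }
    out = pd.DataFrame({"n_violations": pd.Series(counts, dtype=int)})
    return out[out["n_violations"] > 0]
```

This is only a sketch of the rule bookkeeping; the repair of the violating predictions is what the `postprocess`/`satisfy_label_consistency` code above does.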
RSNA STR Pulmonary Embolism Detection
6th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners. Thank you to Kaggle and RSNA for hosting this competition.<br> I'm happy about my result because I'm a medical doctor and therefore, RSNA competitions are the most important competitions for me. In RSNA 2018, I did my best but I failed to get gold. I couldn't participate in RSNA 2019 because I had to prepare for the national exam for medical doctors. In RSNA 2020, I finally got the solo gold. I'm so sad that I can't attend the RSNA conference in person because it became an online conference.</p> <p>Here I describe my solution. The overview is shown below. Actually, there is nothing special; 2D-CNN for image-level feature extraction and 1D-CNN for exam-level classification.</p> <p><img src="https://storage.googleapis.com/kaggle-forum-message-attachments/1072220/17414/figure1.png" alt="Figure 1"></p> <h1>Stage 1: 2D-CNN for feature extraction</h1> <p>First, I trained a 2D-CNN (EfficientNet B0 or B2) with 3 trainable windows (WSO, <a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/117480" target="_blank">following Yuval's solution</a>) on 2D images with the conditions shown below.
To save memory and time, I trained models with mixed-precision.</p> <ul> <li>Loss: BCE with the weights reflecting the competition metric weights</li> <li>Window: initialized with <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/182930" target="_blank">Ian's windows</a></li> <li>Image size: 512 (raw image)</li> <li>BatchSize: 80 for B0/50 for B2</li> <li>Training Steps: 8192 (= 0.5 epoch)</li> <li>Optimizer: Adam</li> <li>LR: 1e-3 declining to 1e-4 with cosine annealing</li> <li>Augmentation: ShiftScaleRotate, BrightnessContrast, Crop (448 x 448), CutOut</li> </ul> <p>Training takes 3 hours x 5 folds with a P100. I extracted the 2D-CNN's feature, the output of the global average pooling layer, and used it as input for stage 2. Feature extraction takes 2.5 hours x 5 folds.</p> <h1>Stage 2: 1D-CNN for exam-level classification</h1> <p>Next, I trained a 1D-CNN for exam-level classification. As input, I used feature sequences extracted by the 1st-stage 2D-CNN. Feature pooling, skip connections, and SE-modules are employed. For image-level prediction, I used U-Net-like upconv architectures. The training conditions are shown below.</p> <ul> <li>Exam-level loss: BCE with the weights reflecting the competition metric weights</li> <li>Image-level loss: BCE and BCE with the q_i weights</li> <li>BatchSize: 64</li> <li>Epoch: 16</li> <li>Optimizer: Adam</li> <li>LR: 1e-4 declining to 1e-5 with cosine annealing</li> <li>Augmentation: Crop (128 slices), Flip<br> Training takes 6 minutes x 5 folds.</li> </ul> <h1>Postprocessing</h1> <p>I averaged the predictions of B0 and B2. I did postprocessing to resolve label-consistency conflicts with minimal modification.
The 5-fold CV score is shown below.</p> <pre><code>negative_exam_for_pe   bce: 0.345601, auc: 0.895471
indeterminate          bce: 0.084615, auc: 0.812207
chronic_pe             bce: 0.158846, auc: 0.682470
acute_and_chronic_pe   bce: 0.081380, auc: 0.842780
central_pe             bce: 0.111196, auc: 0.949153
leftsided_pe           bce: 0.287363, auc: 0.900652
rightsided_pe          bce: 0.298690, auc: 0.911001
rv_lv_ratio_gte_1      bce: 0.228942, auc: 0.902246
rv_lv_ratio_lt_1       bce: 0.342175, auc: 0.835369
exam-level score       bce: 0.196321
q_i weighted_image_bce bce: 0.206749, auc: 0.965775
total_score            bce: 0.201473
</code></pre> <h1>Final submission</h1> <p>As final submissions, I selected</p> <ol> <li>B0 model with 5-fold averaging: public: 0.161, private: 0.157</li> <li>B0 and B2 model with 5-fold averaging and model averaging: public: 0.160, private: 0.156</li> </ol> <h1>Comparison test</h1> <p>Stage 1</p> <pre><code>                 exam BCE  exam AUC  image BCE  image AUC
B0 final model   0.314768  0.709304  0.101278   0.951922
num step X2      0.316794  0.708131  0.100873   0.951224
initial LR=1e-4  0.321354  0.698805  0.113893   0.938055
BatchSize=16     0.320763  0.687397  0.101027   0.950439
w/o CutOut       0.316906  0.703339  0.100654   0.950882
InputSize=256    0.328765  0.681211  0.134463   0.917054
w/o WSO          0.315638  0.703576  0.103514   0.947023
with MixUp       0.317532  0.693032  0.103800   0.944957
InputSize=640    0.321643  0.703420  0.108685   0.953675
B2 final model   0.315517  0.707378  0.099449   0.950943
</code></pre> <p>I did comparison tests after the deadline. Scores are calculated without any weights. 0.5 epoch is enough to converge. InputSize=512 is better than InputSize=256. InputSize=640 may be better than InputSize=512. LR=1e-3 is better than LR=1e-4. BatchSize=80 may be better than BatchSize=16.
Cutout, WSO, and MixUp may not be necessary.</p> <p>Stage 2</p> <pre><code>                 exam BCE  exam AUC  image BCE  image AUC
B0 final model   0.200486  0.867666  0.222240   0.964814
w/o pred-2       0.200840  0.867577  0.221665   0.963847
LSTM             0.203616  0.864214  0.216788   0.963465
GRU              0.195599  0.873768  0.219781   0.963367
B2 final model   0.196107  0.871995  0.220244   0.962613
</code></pre> <p>Scores are calculated with the weights of the competition metric. Training with pred-2 may not be necessary. LSTM and GRU may be able to achieve the same performance as CNN. </p> <p>What I had to do in this competition was obvious because this competition is very similar to last year's, and Yuval, last year's winner, was at the top of the LB. Actually, what I did was just implementing last year's solution and making a custom loss that minimizes the competition metric directly. It is very much a baseline model. I think my solution would be in the middle of the silver medals in usual competitions. This competition is a little bit harder than usual competitions, because of the large dataset, complicated metric, short timespan, and notebook format. Maybe that's why this baseline model could get gold.</p> <p><a href="https://github.com/OsciiArt/Kaggle_RSNA2020_6th_Solution" target="_blank">All the codes are available here.</a></p>
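The "BCE with the weights reflecting the competition metric weights" mentioned above can be sketched as below. The per-label weights are my transcription of the ones in the competition's evaluation description, so treat the exact values as assumptions:

```python
import numpy as np

# Exam-level label weights from the competition metric (transcribed from the
# published evaluation description; treat the exact values as assumptions).
EXAM_WEIGHTS = np.array([
    0.0736196319,   # negative_exam_for_pe
    0.0920245399,   # indeterminate
    0.1042944785,   # chronic_pe
    0.1042944785,   # acute_and_chronic_pe
    0.1877300613,   # central_pe
    0.0625766871,   # leftsided_pe
    0.0625766871,   # rightsided_pe
    0.2346625767,   # rv_lv_ratio_gte_1
    0.0782208589,   # rv_lv_ratio_lt_1
])

def weighted_exam_bce(probs, targets, weights=EXAM_WEIGHTS):
    """Per-label BCE averaged with the metric's label weights, so the
    training loss tracks the exam-level part of the competition score."""
    eps = 1e-7
    p = np.clip(probs, eps, 1 - eps)
    bce = -(targets * np.log(p) + (1 - targets) * np.log(1 - p))
    return float((bce * weights).sum(axis=-1).mean() / weights.sum())

# One exam: central PE present with RV/LV >= 1, all other labels negative.
targets = np.array([[0, 0, 0, 0, 1, 0, 0, 1, 0]], dtype=float)
print(weighted_exam_bce(np.clip(targets, 0.01, 0.99), targets))
```

With this weighting, an error on a heavy label such as `rv_lv_ratio_gte_1` costs noticeably more than the same error on a light label such as `leftsided_pe`, which is the point of mirroring the metric in the loss.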
RSNA STR Pulmonary Embolism Detection
1st place solution with code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the winners! Thanks to Kaggle and RSNA for hosting this competition and presenting us this interesting problem. The data size is big and of high quality and there is no shakeup. I’m glad I can win this one and I have learnt a lot during this journey.<br> Special thanks to <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a> for providing the topic introduction and useful input processing code. Also, credits should go to last year’s RSNA winners, lots of their ideas are incorporated in my solution.<br> <a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/117242" target="_blank">https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/117242</a></p> <p>My solution is described below. <br> Code: <a href="https://github.com/GuanshuoXu/RSNA-STR-Pulmonary-Embolism-Detection" target="_blank">https://github.com/GuanshuoXu/RSNA-STR-Pulmonary-Embolism-Detection</a><br> Inference kernel: <a href="https://www.kaggle.com/wowfattie/notebook6fff7ff27a?scriptVersionId=45476524" target="_blank">https://www.kaggle.com/wowfattie/notebook6fff7ff27a?scriptVersionId=45476524</a></p> <h1>Preprocessing</h1> <p>Early after I joined this competition, I noticed that increasing input image size from 512x512 to 640x640 improves the modeling performance. By browsing the training images, I further noticed that the lungs did not occupy large and consistent portions of the images. 
This is inefficient because we know input size matters and it’s not worth wasting computing time on irrelevant things in the images, and this could also give the model the unnecessary difficulty of learning large scale and shift invariance. So, it’s necessary to have a high-quality lung localizer. There are some existing pretrained lung localizers online; I did not try them because, according to my observation, it’s easy for a CNN to accurately localize the lung area from images as long as we have the bbox labels of the lungs. So, I annotated the train data and built a lung localizer with the bboxes and Efficientnet-b0 as the backbone. For simplicity I only annotated four images per study. The training and prediction were also run on only four images per study to save time. Some examples of this preprocessing are given below. The localizer is very robust even in some relatively difficult conditions. The idea of preprocessing the input is partly inspired by last year’s 2nd place solution.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F478989%2F51ad8deb361d5f8c50f2265970a30d3c%2FPicture1.png?generation=1604080349102371&amp;alt=media" alt=""></p> <h1>Training/validation split</h1> <p>Since the provided data are big and of high quality, we don't have to do cross validation; a single training/validation split is reliable enough. In this competition, I randomly set aside 1000 studies for validation and used the remaining 6200+ studies for training and hyperparameter tuning. For the final LB submission I re-trained my models with the full training set and the optimized hyperparameters.</p> <h1>Image-level modeling</h1> <p>I used the same 2-stage training strategy as in last year’s RSNA competitions. For image-level modeling, the 3-channel input was the PE windows of the current image and its two direct neighbors. Using neighboring images proved to be effective in last year's 1st and 3rd place solutions.
My experiments also confirmed that this input setting outperformed single images with 3 types of windows.</p> <p>Apart from predicting image-level labels, this year we are given various study-level labels. At first glance it appeared to me that, because the inputs of the study-level models are image embeddings, we need to use these study-level labels during image-level modeling so that the following study-level model could have sufficient knowledge to model and predict them. But after I tried lots of combinations of them and various loss masking tricks, the best performing model in both the image-level and study-level stages was still the one trained with the image-level labels only. I’m a little puzzled how the image embeddings are encoded with the study-level labels, for example, the exact position labels (center, left, right) and the more refined acute and chronic, when the image-level models were not trained using any of those labels.</p> <p>The training loss was the vanilla BCE loss with a linear lr scheduler. No special data sampling was applied. I found that a single epoch through the train data was optimal for my settings. The best augmentations were </p> <pre><code>albumentations.RandomContrast(limit=0.2, p=1.0),
albumentations.ShiftScaleRotate(shift_limit=0.2, scale_limit=0.2, rotate_limit=20, border_mode=cv2.BORDER_CONSTANT, p=1.0),
albumentations.Cutout(num_holes=2, max_h_size=int(0.4*image_size), max_w_size=int(0.4*image_size), fill_value=0, always_apply=True, p=1.0),
</code></pre> <p>My final ensemble was one seresnext50 and one seresnext101. Their respective validation performance for image-level PE prediction was</p> <pre><code>             Loss   AUC
seresnext101 0.079  0.964
seresnext50  0.080  0.962
</code></pre> <p>Other good backbones are inception_resnet_v2 and efficientnets. Densenets and resnexts performed a lot worse.
Inputs were resized to 576x576 after the lung localization; this was the largest size the models could finish running in the 9 hours.</p> <h1>Study-level modeling</h1> <p>Image embeddings of dimension 2048 served as the input to an RNN for both image-level and study-level modeling. </p> <p>One thing we needed to handle was that the number of images each study has could vary from 100+ to 1000+. As we don’t know the information of the private test data, it was hard to predefine an input sequence length for our RNN model if we wanted to predict all the images. Stacking all the images into a 3-D array and resizing it along the z-axis before generating image embeddings is an option, but it was not compatible with my inference pipeline. For convenience, I swapped the order of embedding generation and resizing; in other words, I chose to resize the features instead of the images. For example, given a study which has N images, the input feature shape is Nx2048. If the max sequence length limit in the RNN is M, the cv2.resize function is applied to resize features to Mx2048 if N&gt;M, otherwise if N&lt;M zero-padding is used. The image-level labels and the predictions are zoomed in and out in the same way during training and inference. To find the best M, I ran a search with a step size of 32, and M=128 gave the best performance. In the train set, the majority of Ns is in the range of 200-250. This means downsizing across the z-pos first before sequence modeling improves the performance. In my final models, I actually set M=192 because I believed there might be more big Ns in the private test data.</p> <p>Inspired by last year's 2nd place solution, I also computed the difference of embeddings between the current image and its two direct neighbors and concatenated them with the current features. So the input size was expanded to 2048x3.
</p> <p>The exact RNN architecture is not very important; I settled on only a single bidirectional GRU layer, with the study-level labels predicted by a concatenated attention-weighted average pooling and max pooling over the sequence. My local validation loss was around 0.18; I have no idea why it is much higher than the LB scores.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F478989%2F5aabec8cb08e9bc19b65b9efd4c0dcdd%2FPicture5.png?generation=1604080838336664&amp;alt=media" alt=""></p> <h1>Postprocessing</h1> <p>The main purpose of the postprocessing step is to satisfy the consistency requirement of the labels. Since this consistency requirement agrees with how the data was labeled, careful postprocessing could improve the performance. In my case, the local validation had a tiny improvement after postprocessing. The brief workflow of the postprocessing is </p> <pre><code>for each study:
    if the original predictions satisfy the consistency requirement
        do nothing
    else
        change the original predictions into consistent positive predictions, and compute loss between them
        change the original predictions into consistent negative predictions, and compute loss between them
        choose from the positive and negative predictions based on which causes the smaller loss
</code></pre> <p>The weights of the loss function are almost the same as the competition metric, except that the q_i of the image loss weight is replaced by a fixed 0.005 because we don’t have the ground truth of the test data. Code of this postprocessing can be found in my inference kernel.</p>
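The z-axis feature resizing described in the study-level section can be sketched as follows. The write-up uses cv2.resize for downsizing; plain NumPy linear interpolation stands in for it here, and zero-padding handles studies shorter than the limit M, as described:

```python
import numpy as np

def resize_features(feats: np.ndarray, m: int) -> np.ndarray:
    """Resize an (N, emb_dim) embedding sequence along the z-axis to length M:
    linear interpolation when N >= M (mimicking the cv2.resize call in the
    write-up), zero-padding at the tail when N < M."""
    n, d = feats.shape
    if n >= m:
        src = np.linspace(0, n - 1, num=m)       # fractional source indices
        lo = np.floor(src).astype(int)
        hi = np.minimum(lo + 1, n - 1)
        frac = (src - lo)[:, None]
        return feats[lo] * (1 - frac) + feats[hi] * frac
    out = np.zeros((m, d), dtype=feats.dtype)
    out[:n] = feats
    return out

feats = np.random.rand(230, 2048)   # a typical study: N in the 200-250 range
print(resize_features(feats, 128).shape)    # (128, 2048)
print(resize_features(feats[:100], 192).shape)   # (192, 2048)
```

The image-level labels and predictions can be stretched back and forth with the same function, which is the "zoomed in and out in the same way" step mentioned above.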
RSNA STR Pulmonary Embolism Detection
30th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the Kaggle team and organizers for this challenging competition. Huge props to my teammates <a href="https://www.kaggle.com/wuliaokaola" target="_blank">@wuliaokaola</a> <a href="https://www.kaggle.com/bacterio" target="_blank">@bacterio</a>, they worked very hard!! We have learned a lot and now we are exhausted both computationally and physically 😃, just like many of you. This solution was developed within 15 days and it's far from the top winning solutions. But hopefully there's something helpful in this!</p> <h2>TL;DR</h2> <ul> <li><p><strong>Efficientnet B1</strong> for image level, <strong>Efficientnet 3D CNN</strong> + <strong>Resnet 3D</strong> for study level. <strong>CNN + Transformer</strong> for image and study level.</p></li> <li><p>Used the TFRecords by the organizers and the windows function by <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a>.</p></li> <li><p>Applied a mask to hide the left/right side of the image</p></li> <li><p>Optimized blending weights based on OOF calculation.</p></li> </ul> <h2>Timeline</h2> <ul> <li><p>We started the competition about 15 days ago. </p></li> <li><p>Trained <strong>Efficientnet B0</strong> on image level and blended with the mean predictions by <a href="https://www.kaggle.com/osciiart" target="_blank">@osciiart</a>, aka the public notebook that scored 0.325: 0.4 * mean prediction + 0.6 * Efficientnet B0, along with the mean prediction for study levels. This gave us LB: <code>0.292</code>. </p></li> <li><p>Developed <strong>Resnet3D</strong> that scored <code>0.370</code> LB.
We ran a local validation for this model and found out that it did well on the study level but poorly on the image level. Hence, for study level, we replaced the mean prediction with our Resnet3D and achieved <code>0.253</code> LB.</p></li> <li><p>Improved <strong>Efficientnet B0</strong> with masks (masking one half of the image) and achieved <code>0.248</code> LB. </p></li> <li><p>The amazing public baseline came out and we were frustrated because it scored <code>0.233</code> and it made us panic. We tried to run inference with it but it took forever, so we decided not to incorporate it into our pipeline.</p></li> <li><p>We added <a href="https://www.kaggle.com/bacterio" target="_blank">@bacterio</a> to our team and blended with our existing results (taking the mean) and got <code>0.226</code> LB. <a href="https://www.kaggle.com/bacterio" target="_blank">@bacterio</a> had a <strong>CNN + Transformer</strong> at that time, which is completely disparate from our approach. It calmed us down a bit and we knew that if we continued improving both image level and study level, we would get better.</p></li> <li><p>We improved by training three <strong>Efficientnet B1</strong>s with similar techniques, and <a href="https://www.kaggle.com/wuliaokaola" target="_blank">@wuliaokaola</a> came out with a new architecture, <strong>Efficientnet 3D CNN</strong>, and we got <code>0.212</code> LB. </p></li> <li><p>Oh! We only had <strong>two days left</strong>. What should we do? We realized that it was quite unrealistic to add in another architecture. So we should work on upgrading existing models.
We added TTAs to <strong>Efficientnet 3D CNN</strong> and fine-tuned <strong>Efficientnet B1</strong>, <strong>Resnet 3D</strong>, <strong>Efficientnet 3D CNN</strong> and we got <code>0.204</code> LB.</p></li> </ul> <h2>Modeling</h2> <p>In this section I will show our final models.</p> <h3>Efficientnet B1(TF)</h3> <ul> <li>Trained with TPU using TFRecords provided by the organizer</li> <li>3 windows as 3 channels by <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a></li> <li>Configurations:<ul> <li>Batchsize: 768</li> <li>Epochs: 17</li> <li>Scheduler: 9 epochs <strong>1e-3</strong>, 2 epochs <strong>1e-4</strong> and 6 epochs <strong>1e-5</strong>. Three phases, pick the best model at each phase and continue with a new learning rate next phase.</li></ul></li> <li>Augmentations:<ul> <li>rotation, shear, hzoom, wzoom, hshift, wshift by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. <a href="https://www.kaggle.com/cdeotte/rotation-augmentation-gpu-tpu-0-96" target="_blank">https://www.kaggle.com/cdeotte/rotation-augmentation-gpu-tpu-0-96</a></li> <li>mask, basically a cutout, this has <strong>significantly boosted our valid loss</strong>. 
Implementation by <a href="https://www.kaggle.com/wuliaokaola" target="_blank">@wuliaokaola</a> shown below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F27df2eb5da63345c737175b23062be4f%2Fmask.png?generation=1603764000748568&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2Fb03deaf567cceb73532a54fcdca4d53d%2FScreenshot_2020-10-26%20b1_fold3%20Kaggle.png?generation=1603764244357127&amp;alt=media" alt=""></li></ul></li> <li>Results:<ul> <li>Around <code>0.22</code> image level weighted loss</li></ul></li> </ul> <h3>ResNet 3D(TF)</h3> <ul> <li>Randomly take 256 images from each study, distributed by position; if the study does not contain 256 images, tile</li> <li>Input shape: [256, 256, 256, 3], output shape: 256 (image level) + 9 (study level)</li> <li>ResNet 50 backbone</li> <li>Change Conv2D to Conv3D</li> <li>Keep the z dim as input and use it for the image level labels</li> <li>It can predict image level labels as well, but the performance was not as good as we expected</li> <li>Results:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F1ea1b91bcae244d0c00abda661385556%2FScreenshot_2020-10-26%20rsna%20sped%20cv%201026%20tta.png?generation=1603765140289343&amp;alt=media" alt=""></li> </ul> <h3>EfficientNet 3D CNN(TF)</h3> <ul> <li>Connect 256 <strong>EfficientNet B0</strong>s with a <strong>TimeDistributed</strong> layer </li> <li>Use the same identity_block and conv_block as ResNet on top</li> <li>It can predict image level labels as well, but the performance was not as good as we expected</li> <li>Results:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F617169bcd0724022099c5bf12568dbdb%2FScreenshot_2020-10-26%20rsna%20sped%20cv%201026%20tta(1).png?generation=1603765158757751&amp;alt=media"
alt=""></li> </ul> <h3>CNN + Transformer(Fastai v2 + Pytorch)</h3> <ul> <li>Resnet34 backbone</li> <li>6 stacked Reformer heads</li> <li>Trained with 256x256 images with a <strong>single window</strong> but stacking 5 <strong>consecutive slices as channels</strong></li> <li>Affine augmentations</li> <li>Results: around <code>0.245</code> on image level and <code>0.270</code> on exam level with 5xTTAs</li> </ul> <h3>Blending</h3> <ul> <li>Blending <strong>EfficientNet 3D CNN</strong> and <strong>ResNet3D</strong> for study level based on OOF:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F6c30952a10b542c019b5745f10c213dd%2FScreenshot_2020-10-26%20rsna%20sped%20cv%201026%20tta(2).png?generation=1603765267593556&amp;alt=media" alt=""><br> Here the exam level loss is around <code>0.19</code> with weight <code>[0.71, 0.05, 0.07, 0.38, 0.28, 0.78, 0.71, 0.13, 0.45]</code> for each label.</li> <li>Blended with </li> <li>Along with our existing image level models (<strong>EfficientNet B1</strong>), we can get around a 0.2 CV</li> </ul> <h2>Hardware</h2> <p>Yes, this competition was really heavy on hardware. We also ran into an IO bottleneck, i.e. the GPU could not run at full power, as shown below.
Here is a list of the hardware we primarily used:</p> <ul> <li>Kaggle TPU, GCP TPU for Tensorflow development</li> <li>V100(32G) * 2 for Fastai and Pytorch development</li> <li>RTX Titan (24G) for casual testing<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2F6872f49c2ad156431fdceab9512fbae1%2FScreenshot_2020-10-26%20Slack%20general%20KLLEE.png?generation=1603766768077129&amp;alt=media" alt=""></li> </ul> <h2>Additional Tips</h2> <ul> <li>When <strong>blending Tensorflow and Pytorch</strong> models, it's often a good practice to write them as <strong>scripts</strong> as shown below.<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5168115%2Fd91e7eea26f12e581cf6bf8109853b47%2FScreenshot_2020-10-26%20rsna%20sped%20submit%201026%20tta.png?generation=1603766413299332&amp;alt=media" alt=""></li> </ul> <h2>Fin</h2> <p>Thanks for reading this solution. Happy kaggling!</p>
RSNA STR Pulmonary Embolism Detection
10th Place Solution with code
<p><strong>code</strong> - <a href="https://github.com/OrKatz7/RSNA-Pulmonary-Embolism-Detection" target="_blank">https://github.com/OrKatz7/RSNA-Pulmonary-Embolism-Detection</a><br> <strong>Full Pipeline</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1094066%2F2c42c343c91bc3fd09f089c43fd8c17d%2FRSNA.png?generation=1603799132907836&amp;alt=media" alt=""></p> <p><strong>Overall Strategy</strong></p> <ol> <li>Train an image-level 2D CNN and save its features to the hard drive</li> <li>Train an exam-level 3D CNN and save its features to the hard drive</li> <li>Feed the 2D and 3D features into a sequence model</li> </ol> <p><strong>2D CNN Modeling</strong></p> <ol> <li>Data pre-processing - based on Ian Pan: <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/182930" target="_blank">https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/182930</a></li> <li>Augmentation - RandomBrightnessContrast, HorizontalFlip, ElasticTransform, GridDistortion, VerticalFlip, ShiftScaleRotate, RandomCrop</li> <li>CNN models - efficientnet-b3, efficientnet-b4, efficientnet-b5</li> </ol> <p><strong>3D CNN Modeling</strong><br> Based on the pipeline of <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a>: <a href="https://www.kaggle.com/boliu0/monai-3d-cnn-training" target="_blank">https://www.kaggle.com/boliu0/monai-3d-cnn-training</a></p> <p><strong>Sequence Model</strong><br> Input - slice embeddings from multiple 2D models + exam embeddings from 3D models<br> Loss - RSNA metric</p>
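The three-stage strategy above ends with the 2D slice features and 3D exam features being fed into a sequence model. A minimal sketch of how those saved features can be assembled into one sequence input, with the exam-level vector tiled across slices; shapes and names here are illustrative assumptions, not taken from the authors' repo:

```python
import numpy as np

def build_sequence_input(slice_feats, exam_feats):
    """Concatenate per-slice 2D CNN features with the exam-level 3D CNN
    feature vector (tiled across slices) into one sequence-model input.

    slice_feats: (n_slices, d2) array saved by the 2D CNN
    exam_feats:  (d3,) array saved by the 3D CNN for the whole exam
    returns:     (n_slices, d2 + d3) array
    """
    n_slices = slice_feats.shape[0]
    tiled = np.tile(exam_feats, (n_slices, 1))  # repeat the exam vector per slice
    return np.concatenate([slice_feats, tiled], axis=1)

# toy example: 200 slices, 1280-d 2D features, 512-d 3D exam features
seq_in = build_sequence_input(np.zeros((200, 1280)), np.zeros(512))
print(seq_in.shape)  # (200, 1792)
```

The resulting (n_slices, d2 + d3) matrix is what a GRU/LSTM-style sequence model would consume per exam.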
RSNA STR Pulmonary Embolism Detection
12th Place Solution
<p>First, we would like to thank all the organizers, Kaggle, all the medical institutions who contributed their data, and all the data annotators. As someone who previously tried to annotate an MRI scan during my internship, I know a bit about how cumbersome and sensitive it is to annotate medical data. So I appreciate it all!</p> <p>Overall it was a great yet challenging competition in terms of the data volume. We initially started training models from data shared by <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a>; without his generosity the barrier of entry to this competition would be too high for many participants. So a special thanks goes to him from our team.</p> <p>Although starting off with 256x256 images was great for prototyping, in computer vision problems you can always get significant boosts just by training with higher resolution images. At first we tried to create and save full resolution images using Kaggle kernels, but it wasn't fun or easy since only 5GB of disk space is allowed, so we ended up using a cloud provider for the remaining experiments.</p> <p>First, we created full resolution training images using the same windowing shared publicly and also leveraged great utilities from <a href="https://docs.fast.ai/medical.imaging" target="_blank">https://docs.fast.ai/medical.imaging</a>.
GDCM was also a requirement, because not all images were readable without it.</p> <p>We extracted both images and metadata.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F558069%2Fc18414a7c1f639ec7d082bc8632e17e3%2Fdata_prep.jpg?generation=1603759954173470&amp;alt=media" alt=""></p> <p>Later we trained CNN models for predicting image level PE.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F558069%2Fd930773d07281b83ce33642edd90bf0e%2Fcnn_models.jpg?generation=1603760506036737&amp;alt=media" alt=""></p> <p>Then we used an LSTM model to produce image level PE and exam level predictions.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F558069%2Fdf271e5837b3de4d8a0fefad7f7cd63f%2Flstm_sigmoid.jpg?generation=1603760549832762&amp;alt=media" alt=""></p> <p><a href="https://www.kaggle.com/keremt/12th-place-rsna-pe-inference?scriptVersionId=45505224" target="_blank">Inference Kernel</a></p> <p>Other Notes:</p> <ul> <li>5-fold validation scheme.</li> <li>Sequence model directly optimized on the competition metric.</li> <li>Tried EfficientNet but we had problems with overfitting.</li> <li>Didn't have time for stacking experiments.</li> </ul> <p>Code for this competition will be publicly available in this <a href="https://github.com/KeremTurgutlu/rsna-pulmonary-embolism" target="_blank">repo</a>. </p> <p>Special thanks to my teammates: <a href="https://www.kaggle.com/jesucristo" target="_blank">@jesucristo</a>, <a href="https://www.kaggle.com/josealways123" target="_blank">@josealways123</a> and <a href="https://www.kaggle.com/atikahamed" target="_blank">@atikahamed</a></p>
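The windowing step mentioned in the data preparation above (clip Hounsfield units to a window, rescale, stack several windows as channels) can be sketched roughly as follows. The specific window center/width values below are common illustrative choices for PE studies (lung, PE-specific, mediastinal), not necessarily the ones used by the authors:

```python
import numpy as np

def apply_window(hu, center, width):
    """Clip Hounsfield units to a window and rescale to [0, 255]."""
    lo, hi = center - width / 2, center + width / 2
    img = np.clip(hu, lo, hi)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

def three_channel_windows(hu):
    """Stack three windows as image channels (illustrative PE settings)."""
    return np.stack([
        apply_window(hu, -600, 1500),  # lung window
        apply_window(hu, 100, 700),    # PE-specific window
        apply_window(hu, 40, 400),     # mediastinal window
    ], axis=-1)

slice_hu = np.random.randint(-1000, 1000, size=(512, 512))
print(three_channel_windows(slice_hu).shape)  # (512, 512, 3)
```

Each DICOM slice is first converted to Hounsfield units (rescale slope/intercept from the metadata) before windowing; GDCM is needed at the decoding stage for compressed transfer syntaxes.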
RSNA STR Pulmonary Embolism Detection
28th Place - Quick writeup - Improving the baseline
<p>I want to thank my amazing team <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a> <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> <a href="https://www.kaggle.com/cateek" target="_blank">@cateek</a> and <a href="https://www.kaggle.com/proletheus" target="_blank">@proletheus</a>. This was a very challenging competition and I had a great time collaborating as a team and discussing ideas. Also thanks to the host and winners! I trust the top solutions will positively benefit physicians and those impacted by pulmonary embolism.</p> <p>Even though our best submission was based off the "baseline" kernel that was released a week before the deadline, it's a little bittersweet. It's disappointing that such a high scoring kernel was released so late; I know it frustrated many teams (ours included). Our team was working on other end-to-end and 3D based models. I took on the role of improving the baseline as our backup plan.</p> <p>This turned into more of an engineering challenge, as we had to balance the limited inference time with such a large amount of data, without exceeding GPU or local memory. It was also challenging because inference time seemed to vary randomly.
A submission might complete in 7 hours but then a nearly identical submission would go over 9 hours and fail.</p> <p>Differences between our solution and the public kernel:</p> <ul> <li>Trained a b6 in place of b0 for the stage 1 models.</li> <li>Changed the inference loop to bag predictions from all 10 (5 folds x 2) stage 1 models.</li> <li>Bagged 5x predictions from stage 2.</li> <li>Changed the code so that it would only predict the private test during inference and used offline calculated public test predictions.</li> <li>Modified the code to only loop through the dataloader once instead of twice for stage 1 predictions.</li> <li>Ahmet used some magic to tweak the stage 2 model to get some added boost, including modifying the loss function.</li> </ul> <p>What didn't work:</p> <ul> <li>ResNext101 in stage 1 models, although it had a slightly better CV score.</li> <li>Parallel GRU + LSTM for stage 2 model.</li> </ul> <p>Our solution also meets the required criteria and shouldn't have conflicting label predictions. Our best submission ignoring these restrictions would've given us a private LB score of 0.177 but we didn't select it to stay within the rules.</p> <p>I'd also like to personally thank Z by HP and NVIDIA for providing me the Z8 desktop which I put to good use on this challenge.</p>
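One of the engineering changes listed above, predicting only the private test at inference time and reusing offline-computed public test predictions, can be sketched as follows. Function and variable names are illustrative, not from the actual kernel:

```python
import numpy as np

def merge_public_private(ids, public_lookup, predict_fn):
    """Reuse precomputed public-test predictions and run the (slow) model
    only on private-test studies, to stay inside the inference time budget."""
    out = np.empty(len(ids))
    private_ids = [sid for sid in ids if sid not in public_lookup]
    private_preds = dict(zip(private_ids, predict_fn(private_ids))) if private_ids else {}
    for i, sid in enumerate(ids):
        out[i] = public_lookup.get(sid, private_preds.get(sid, np.nan))
    return out

# toy example: two studies already scored offline, one new private study
public = {"a": 0.2, "b": 0.9}
preds = merge_public_private(["a", "c", "b"], public, lambda ids: [0.5] * len(ids))
print(preds)  # [0.2 0.5 0.9]
```

Since the leaderboard rerun only scores the private test, the public rows just need any pre-filled values; skipping their GPU passes is what buys back inference time for the heavier b6 ensemble.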
RSNA STR Pulmonary Embolism Detection
Provisional 8th Place Solution - Monai x EfficientNets x LGBM
<p>Github: <a href="https://github.com/lisosia/kaggle-rsna-str" target="_blank">https://github.com/lisosia/kaggle-rsna-str</a></p> <h3>What a competition!</h3> <p>First of all congratulations to everyone, and especially to <strong>Yuji-san</strong> ( <a href="https://www.kaggle.com/yujiariyasu" target="_blank">@yujiariyasu</a> ) and <strong>Yama-san</strong> <br> ( <a href="https://www.kaggle.com/lisosia" target="_blank">@lisosia</a> ) for hopefully - fingers crossed - achieving <strong>Master status</strong>. <br> And a heartfelt thank you to the organizers and the Kaggle Team for an awesome competition.</p> <h1>Solution Overview</h1> <p>In broad scope, our solution is split into (1) image level and (2) exam level predictions, which are then (3) ensembled in a Decision Tree. </p> <p>On the image level we predict PE present on image, left, central and right. <br> On the exam level we predict left/central/right, RV/LV ratio and acute/chronic. <br> We feed the predictions into multiple Decision Trees, each fine-tuned with specific features, which then output the final predictions. <br> We skipped predicting Indeterminate for inference time reasons.</p> <p><img src="https://storage.cloud.google.com/kaggleimages/kagglersna.JPG?generation=1603760751579416&amp;alt=media" alt=""></p> <h2>PreProcess</h2> <p>Ian Pan's ( <a href="https://www.kaggle.com/vaillant" target="_blank">@vaillant</a> ) dataset was a great start, but very early on we created a 512*512 jpg image dataset without jpg compression. With jpg compression our score was always worse.
For this we used Ian Pan's script for preprocessing with the same window sizes. </p> <h2>Image Level</h2> <p>In our final solution we used 2 EfficientNet models to predict on the image level.<br> They were trained on all images to predict PE present on image, and the EfficientNet B0 also to predict left/central/right. </p> <p>Important Edit: <br> Our image level models' predictions were transformed by calibrating the predicted probability of pe_present_on_image using:</p> <pre><code>def calib_p(arr, factor):
    # set factor &gt; 1 to enhance positive prob
    return arr * factor / (arr * factor + (1 - arr))
</code></pre> <p>It is applied to equalize each fold's pe_present_on_image predictions before stacking with LGBM. The factor for each fold is determined so that the per-fold validation weighted logloss is minimized. Yama-san came up with this idea and it boosted our image level predictions together with LGBM by a lot (see below).</p> <h2>Exam Level</h2> <p>Our pipeline here is very much based on the awesome kernel of <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a>.<br> Images are center cropped, the first and last 20% of images along the z axis are removed, and the result is resized to a 100x100x100 spatial size. We now see some clever approaches for determining the heart level (for example by Ian Pan).<br> The 3D model is very bad at predicting positive exams. Therefore rv_lv_ratio gte was combined into one target. <br> We trained the exam level model only on exams with PE. The main target for us initially was to predict the RV/LV ratio, but the model is also surprisingly good at predicting left/central/right and acute/chronic, which also gave us a boost for these features. </p> <h2>LGBM</h2> <h5>Image Level LGBM</h5> <p>For image level predictions we used an LGBM which, for each image, got the previous 10 and the next 10 images as input. This essentially simulates the CNN+LSTM method many competitors used.
But in our case CNN + LGBM always outperformed CNN + LSTM.</p> <p>CV pe_present_on_image: 0.12 (raw prediction) -&gt; 0.105 (with LGBM + calibration)</p> <h5>Exam Level LGBM</h5> <p>We used the raw predictions of the Monai model and derived features from the image level models. These features mostly consist of percentiles [30, 50, 70, 80, 90, 95, 99] and the number of images over certain thresholds [0.1, 0.2, ..., 0.9], as well as the mean and max of the image level models' predictions. </p> <h2>Post Process</h2> <ol> <li>We clipped right/left/center predictions to average(right) &lt; ave(pe_present)</li> <li>We simply set Indeterminate to pos_exam * MEAN_WHEN_POS * (1-pos) * MEAN_WHEN_NOT_POS, as we had no time left in inference and it had a low weight assigned to it.</li> <li>Consistency requirement: we adjusted the exam level predictions to fit the consistency requirement with the lowest weighted difference (using the official metric weights).</li> </ol> <h2>Final Score</h2> <p>5 Fold CV:<br> (exam level unweighted)</p> <table> <thead> <tr> <th>Target</th> <th>LogLoss</th> </tr> </thead> <tbody> <tr> <td>pe_present_on_image</td> <td>0.105</td> </tr> <tr> <td>rv_lv_ratio_gte_1</td> <td>0.231</td> </tr> <tr> <td>rv_lv_ratio_lt_1</td> <td>0.334</td> </tr> <tr> <td>leftsided_pe</td> <td>0.282</td> </tr> <tr> <td>central_pe</td> <td>0.114</td> </tr> <tr> <td>rightsided_pe</td> <td>0.285</td> </tr> <tr> <td>acute_and_chronic_pe</td> <td>0.087</td> </tr> <tr> <td>chronic_pe</td> <td>0.160</td> </tr> </tbody> </table> <h2>Things that did not work</h2> <p>We tried many different architectures. Sequence models always yielded worse results for us compared to image level prediction ensembles with LGBM. I suppose this may be due to the very inconsistent number of images per exam compared to last year's challenge.
</p> <h2>Thank you!</h2> <p>2 days before the submission deadline we did not have a final submission, which made things really close at the end, with Kaggle commits being slower to start and commit. I had never witnessed this issue before, but it is good to keep in mind for future competitions. Our final best submission finished just in time. </p>
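The per-fold calibration described in the Image Level section (choosing the `calib_p` factor that minimizes the fold's validation weighted logloss) can be sketched as follows. The `calib_p` function is taken from the write-up; the candidate factor grid and the uniform weights are illustrative assumptions:

```python
import numpy as np

def calib_p(arr, factor):
    # set factor > 1 to enhance positive prob (from the write-up)
    return arr * factor / (arr * factor + (1 - arr))

def weighted_logloss(y, p, w, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return -np.sum(w * (y * np.log(p) + (1 - y) * np.log(1 - p))) / np.sum(w)

def best_factor(y, p, w, grid=np.linspace(0.5, 3.0, 26)):
    """Grid-search the calibration factor minimizing per-fold weighted logloss."""
    return min(grid, key=lambda f: weighted_logloss(y, calib_p(p, f), w))

# toy fold whose positive probabilities are systematically too low
y = np.array([1, 0, 1, 1, 0])
p = np.array([0.4, 0.2, 0.5, 0.6, 0.1])
w = np.ones_like(y, dtype=float)
factor = best_factor(y, p, w)
```

Calibrating each fold this way before stacking means the LGBM sees pe_present_on_image predictions on a comparable scale across folds.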
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
9th solution
<h1>1. Introduction</h1> <p>The COVID-19 RNA vaccine related competition was held on Kaggle from 9/11 to 10/8, for a month.<br> This competition was led by Rhiju Das, a professor of biochemistry at Stanford University, <a href="https://profiles.stanford.edu/rhiju-das" target="_blank">https://profiles.stanford.edu/rhiju-das</a>.</p> <p>There is a global effort to create a COVID-19 vaccine.<br> There are several vaccine candidates, but it is said that degradation of activity has been a problem.</p> <p>In this competition, given each RNA vaccine sequence, the goal was to predict the reactivity of each part of the sequence under various environmental conditions (pH, temperature).</p> <h1>2. Data</h1> <p>Each sample is given as a sequence consisting of A, G, U, C, structural information, and a predicted loop type, as shown below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2Fe4301e4b373db920f23f2de72acae141%2Ffig1.png?generation=1602209693538402&amp;alt=media" alt=""></p> <h1>3. Methodology</h1> <p>A brief overview of the solving process is shown in the figure below.<br> We needed to optimize inputs, encoders, and heads for a month 😊</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F3d0008a25b8d4ad33d29fd23a6510cbf%2Ffig2.png?generation=1602209716128348&amp;alt=media" alt=""></p> <h1>4. Solutions</h1> <p>First I needed to convert the sequences given as inputs to numbers before embedding them in the encoder.
Since these are different ways of expressing the same RNA, there are two possible approaches.</p> <p>a. Convert them together<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F8ba7489d06ba62c6ea4d17d97c4b589d%2Ffig3.png?generation=1602209732298186&amp;alt=media" alt=""></p> <p>b. Convert them individually<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F8e97cd0c828c25b56a961cf640b5a246%2Ffig4.png?generation=1602209750766042&amp;alt=media" alt=""></p> <p>At first I used a, but since they are just different ways of expressing the same thing, I also tested b.<br> Actually, b was better on the CV score, so I chose method b. </p> <p>These encoded sequences form a (batch size, length of sequence, 3) matrix, which becomes a (batch size, length of sequence, dimension of embedding) matrix after the embedding layer.</p> <h3>my solution</h3> <p>Besides the sequence, I added other features. Each position in an RNA sequence has a probability of being paired with other positions; the organizers gave an (N, N) adjacency matrix called bpps.
Since these are the probabilities that each token in the sequence pairs with other tokens and structures, it could be a very important feature, and many Kagglers put this information into the input.<br> <a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model</a><br> <a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation</a><br> <a href="https://www.kaggle.com/hidehisaarai1213/openvaccine-checkout-bpps" target="_blank">https://www.kaggle.com/hidehisaarai1213/openvaccine-checkout-bpps</a></p> <p>In addition, the statistical features of the sequence are also included.<br> As a result, it becomes:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F965e6b6aab8450cfbcb6696d4dcc892b%2Ffig5.png?generation=1602209787702488&amp;alt=media" alt=""></p> <p>The possibility of using GNNs was discussed from the beginning of the competition, and people started to use them. At first I didn't use a GNN; I adopted one in the middle of the competition.</p> <p>In my GNN, node and edge features were created, and additional statistical features were also used. <br> All these baselines are from other Kagglers. Thanks 😊 <br> They can be fed as multiple inputs.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2Fb66c0f42373ebe726f06380d82cb9786%2Ffig6.png?generation=1602209813885439&amp;alt=media" alt=""></p> <p>Now I needed encoders that could manipulate these embedded features well.<br> Almost all architectures were proposed in this competition: RNN (GRU, LSTM), transformer, GNN, wavenet, etc.</p> <p>In particular, a GNN based on self-supervised learning was shared, and this approach gave a very strong score.
I think most of the gold zone teams used it.</p> <p><a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn</a></p> <p>I used all of the architectures above. I opened, studied and customized all of the baselines shared by many Kagglers. These are the model structures I built.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2Fce2b04e3497574d9db6a65d17514b0f3%2Ffig7.png?generation=1602209839803512&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F1fd00b23b4e4bbc89b027ce6325661fc%2Ffig8.png?generation=1602209854399900&amp;alt=media" alt=""></p> <p>(1) is an approach for learning sequence data.<br> (2), (3), (4) are approaches for learning graph data.</p> <h1>5. Discussion</h1> <p>My major was Computational Chemistry. I took 3rd place in the molecular property prediction competition last year, and my previous roommate was doing RNA experiments, so I started the competition with a lot of background knowledge.</p> <p>The big issue of this contest is that the train and test RNA sequence lengths are different, and the data contains noise. The RNA sequence length used for training was 107, and the final ranking was obtained on sequences of length 130.</p> <p>I came up with the following strategy to be robust against this noise and the different sequences.<br> a. Make as many models as possible and ensemble them<br> b. Feature engineering<br> c. Use GNNs much more<br> d. Diversify the training data:<br> Dataset(all data), Dataset(SN_filter == 1), Dataset(signaltonoise &gt; 1)</p> <p>I created the (N, N) matrix for fixed attention in various ways.
Bpps was created using several packages, and the distance matrix was further expanded using gaussians:</p> <pre><code>Dss = []  # gaussian expansion of the distance matrix Ds
for gamma in np.arange(10):
    Dss.append(np.exp(-np.power(Ds, 2) * gamma))
Ds = np.stack(Dss, axis=3)
</code></pre> <p>As a result, the above strategy seems to have worked well.<br> It was 10th in public, 2nd on the non-filtered data (1st private), and 9th on the filtered data (2nd private), so my solution achieved good generalization, which is a very important issue in machine learning.</p> <p>The ensemble is shown below.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1155353%2F0650b58e841d94c50bf10e7b19ecada2%2Ffig9.png?generation=1602209880738874&amp;alt=media" alt=""></p> <h1>6. Acknowledgement</h1> <p>First of all, I dedicate my solution to those who suffer from Corona and to the many who struggle to overcome Corona. </p> <p>I think it is a solution that many Kagglers made together. Thanks to all Kagglers!</p> <p>Also, I sincerely thank my beloved wife who provided food and eye medicine to me. (I worked on this competition in my room on weekdays and weekends right after our honeymoon. Bad husband..)</p> <h1>7. Final words</h1> <p>I became the 195th grandmaster in the world by winning a solo gold medal in this competition. (nakama became a grandmaster on the same day as me. Congratulations.)<br> Yesterday, after the competition, I watched a movie (Thor: Ragnarok) with my wife, and there was a Grandmaster in it too.<br> I felt very good. I first learned of Kaggle three years ago and have been kaggling ever since, shouting "keep going". Finally GM!!!!!!</p>
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
12th place solution
<p>We ( <a href="https://www.kaggle.com/hidehisaarai1213" target="_blank">@hidehisaarai1213</a>, <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>, <a href="https://www.kaggle.com/rishigami" target="_blank">@rishigami</a>, <a href="https://www.kaggle.com/copasta" target="_blank">@copasta</a> ) would like to thank Kaggle &amp; the host for the interesting competition, and all the participants for giving us a lot of ideas. And congrats to the winners!</p> <h1>Solution overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1807118%2F6c9e9e79b259cecb81d3c15f1bcdeedd%2Foverview.png?generation=1602163661261585&amp;alt=media" alt=""></p> <h1>models</h1> <p>As the team name suggests, we developed many types of models.<br> Here’s a list of models.</p> <ul> <li>Pytorch RNN</li> <li>Keras One-Hot RNN</li> <li>Tree models based on NN extracted features</li> <li>MPNN</li> <li>DeepGNN</li> <li>Pytorch RNN + Domain Adversarial NN (DANN)</li> <li>DeepGNN + DANN</li> <li>Neighborhood Attention Architecture model by <a href="https://www.kaggle.com/reppic" target="_blank">@reppic</a></li> <li>AE pretrained Transformer model by <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a></li> <li>AE pretrained GraphTransformer by <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a></li> </ul> <p>We also combined different architectures (e.g.
MPNN + BERT-Encoder).<br> For the final submission, we used 67 models.</p> <h1>features</h1> <ul> <li>bpps sum/max/nb</li> <li>additional structure/loop type sampled with ViennaRNA</li> <li>weighted one-hot encoding of additional structure/loop type with probability</li> <li>position Shannon entropy proposed <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/187560" target="_blank">here</a></li> </ul> <h1>training strategy</h1> <ul> <li>GroupKFold with kmeans</li> <li>weighted MCRMSE loss, np.log1p(1/error) for weights</li> <li>validation with SN_filter=1 data</li> </ul> <h1>stacking &amp; blending</h1> <p>We prepared NN x7, LGB x1, XGB x1 and CAT x1 stacking predictions and blended them using <code>sp.optimize.minimize</code> for the final predictions. This had a worse CV than a simple blend of models but performed better on the LB, because a simple blend assigns some models zero weight and loses diversity. So we decided to prepare several stacking models and blend them for diversity.</p> <h1>Domain Adversarial Neural Networks (DANN)</h1> <p>To accommodate the differences between the train and test datasets, we came up with a strategy to bring the feature distributions closer. This would work if train and test follow the same y (<code>reactivity</code>, <code>deg_Mg_pH10</code>, <code>deg_Mg_50C</code>…) generation rules, and if unrelated elements (e.g. length of sequences) cause the changes in the distribution.
To do this, we used Domain Adversarial Neural Networks [1], which try to minimize the loss of the original task(s) while learning a representation invariant to the train/test difference through adversarial training.<br> Most of the technical details are the same as those of the paper; here are the differences in our implementation:</p> <ul> <li>Exponential warm up of the lambda parameter</li> <li>Two domain classifiers: train/public domain classifier + train/private domain classifier</li> </ul> <p>We used this in some of the models used for stacking, and also for NN stacking. Our second submission was NN stacking + DANN. However, it turned out that this didn’t have a positive effect on any of CV/public/private.</p> <p>[1]: Ganin, Yaroslav, et al. "Domain-adversarial training of neural networks." The Journal of Machine Learning Research 17.1 (2016): 2096-2030.</p> <h1>Things that did not work</h1> <p>・Data augmentation<br> ・Distillation<br> ・Pseudo labeling<br> ・Fine-tuning multi-task model to single-task model</p>
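The blend-weight search with `sp.optimize.minimize` mentioned in the stacking &amp; blending section can be sketched as follows. The loss here is a plain RMSE stand-in for the competition's MCRMSE, and all names are illustrative assumptions:

```python
import numpy as np
import scipy.optimize as sp_opt

def blend_weights(oof_preds, y_true):
    """Find non-negative blend weights (summing to 1) minimizing OOF RMSE."""
    preds = np.stack(oof_preds, axis=0)  # (n_models, n_samples)

    def loss(raw):
        w = np.abs(raw) / np.abs(raw).sum()  # normalize to a convex blend
        blended = (w[:, None] * preds).sum(axis=0)
        return np.sqrt(np.mean((blended - y_true) ** 2))

    res = sp_opt.minimize(loss, x0=np.ones(len(oof_preds)) / len(oof_preds),
                          method="Nelder-Mead")
    return np.abs(res.x) / np.abs(res.x).sum()

# toy example: two models with opposite biases plus a noisier one
y = np.array([1.0, 2.0, 3.0, 4.0])
models = [y + 0.5, y - 0.5, y * 1.1]
w = blend_weights(models, y)
```

Optimizing the weights on out-of-fold predictions, as described above, is what lets the blend be evaluated honestly before applying it to the test set.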
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
Brief Summary of 5th Place
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all I would like to thank Kaggle and host team for organizing this interesting contest.<br> How great is it that I might be able to contribute to the research of covid-19 vaccine by participating in this contest?<br> I'm very excited to be a part of this meaningful contest.</p> <h2>models</h2> <p>I extended the following models.</p> <ul> <li>AE model based on <a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">AE pretrain + GNN + Attn + CNN</a>.</li> <li>GRU+LSTM model mainly based on <a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">OpenVaccine: Simple GRU Model</a>.</li> </ul> <p>Final model is weighted average of variation of these 2 models.</p> <p>I have already&nbsp;shared some techniques <a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">here</a>.</p> <ul> <li>augmentation</li> <li>sample_weight with all data</li> <li>some feature extraction</li> <li>CV strategy with clustering</li> </ul> <p>I mainly focused on data preprocessing.</p> <h2>additional augmentation</h2> <p>I used eternafold, vienna, nupack, contrafold and rnasoft to extract structure and loop_type.<br> These backend engines are used to extract additional bpps too.<br> Especially eternafold and contrafold worked well.</p> <h2>features</h2> <p>I extracted following features.<br> For detail information, please refer to the <a href="https://www.kaggle.com/its7171/feature-extraction" target="_blank">source code for this</a>.<br> ・bpps_sum<br> 
・bpps_max<br> ・bpps_sum-max<br> ・The value of bpps of the pair - the strength of the pair.<br> ・The type of the pair (CG or GU or AU or None)<br> ・Information on the neighbors of the pair<br> ・entropy</p> <h2>Some experiments.</h2> <p>・130 length sequence training.<br> I added a dummy 39-length sequence to the training sequences.<br> I expected this model to improve private scores, but it did not make significant improvements.<br> ・reversed sequence<br> I added reversed sequence data as augmentation data and it did not make significant improvements either. But these 2 models helped the ensemble a bit.<br> I tried these 2 models only for the GRU+LSTM model.</p> <h2>Results</h2> <table> <thead> <tr> <th>model</th> <th>private</th> <th>public</th> </tr> </thead> <tbody> <tr> <td>ensemble of AE pretrain + GNN + Attn + CNN</td> <td>0.34799</td> <td>0.23260</td> </tr> <tr> <td>ensemble of GRU+LSTM</td> <td>0.35477</td> <td>0.24222</td> </tr> <tr> <td>ensemble of all models</td> <td>0.34471</td> <td>0.23025</td> </tr> </tbody> </table> <p>I was able to get 5th place with private score of 0.3447.<br> Needless to say, I couldn't have done this without the input from the Kaggle community.<br> I'd like to thank the Kaggle community, especially&nbsp;@xhlulu,&nbsp;@mrkmakr and&nbsp;@hengck23!!!</p>
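The bpps_sum / bpps_max / bpps_sum-max features listed above can be computed directly from the base-pairing probability matrix. A minimal sketch; the 3-base matrix is made up for illustration:

```python
import numpy as np

def bpps_features(bpps):
    """Per-position aggregates of a base-pairing probability matrix.

    bpps[i, j] is the probability that bases i and j are paired.
    Returns the bpps_sum / bpps_max / bpps_sum-max style features.
    """
    bpps = np.asarray(bpps)
    bpps_sum = bpps.sum(axis=1)   # total pairing probability of each base
    bpps_max = bpps.max(axis=1)   # strongest single pairing of each base
    return {
        "bpps_sum": bpps_sum,
        "bpps_max": bpps_max,
        "bpps_sum_minus_max": bpps_sum - bpps_max,
    }

# Tiny made-up 3-base example: base 0 pairs with base 1 (p=0.8) or base 2 (p=0.1).
bpps = np.array([[0.0, 0.8, 0.1],
                 [0.8, 0.0, 0.0],
                 [0.1, 0.0, 0.0]])
feats = bpps_features(bpps)
```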
Mechanisms of Action (MoA) Prediction
lucky!!! 234th to 48th.share my solutions
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thank all of you!!! I learned a lot from public kernels. I think participation and learning about what we are unclear on seem more important than medals. Just enjoy it!<br> This is my solution.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3841226%2Ff7998f184fc791c388cebfb82a50048d%2F1606787683512.jpg?generation=1606788356625920&amp;alt=media" alt=""></p>
Mechanisms of Action (MoA) Prediction
Silver to gold? All you need is augmentation
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The idea is quite simple: <strong><em>upsample where positive samples &lt; 10</em></strong>. However, it has reliably boosted our models by 0.0003 - 0.0004: an improvement that is big enough to turn a silver medal into a gold one or bronze to silver. Simply add the following lines to most of the public notebook:</p> <pre><code>skew_target_cols = train_targets_scored.drop('sig_id', axis=1).sum()[lambda x: x &lt; 10].index.values
print(skew_target_cols)
for col in skew_target_cols:
    tmp_df = train[train[col]==1].copy()
    for _ in range(4):
        train = pd.concat((train, tmp_df))
print(train.shape)
train = train.reset_index(drop=True)
target = train[train_targets_scored.columns]
</code></pre>
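To see what this does to the class counts, here is a toy run of the same logic on synthetic data (column names and counts are made up; assumes pandas). Each rare-positive row is appended 4 more times, so a target with 2 positives ends up with 10:

```python
import pandas as pd

# Synthetic toy frame: "t_rare" has 2 positives (< 10), "t_common" has 12.
train = pd.DataFrame({
    "t_rare": [1, 1] + [0] * 10,
    "t_common": [1] * 12,
})
targets = ["t_rare", "t_common"]

# Same upsampling logic as the snippet above: 4 extra copies of each
# row that is positive for a rare target.
skew_target_cols = train[targets].sum()[lambda x: x < 10].index.values
for col in skew_target_cols:
    tmp_df = train[train[col] == 1].copy()
    for _ in range(4):
        train = pd.concat((train, tmp_df))
train = train.reset_index(drop=True)
```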
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
8th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Input</h2> <ul> <li>sequence, structure, predicted_loop_type</li> <li>sum, max of bpp (vienna(T=37, 50), nupack(T=37, 50), rnastructure(T=37, 50), rnasoft(T=37), eternafold(T=37), contrafold(T=37)) (<a href="https://github.com/DasLab/arnie" target="_blank">ARNIE</a>)</li> <li>shannon entropy (<a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/187560" target="_blank">see this thread</a>)</li> </ul> <h2>Model</h2> <ul> <li>LSTM, GRU (with / without AE pretrain)</li> <li>GNN + Transformer (with / without AE pretrain)</li> </ul> <p><strong>My models were based on these great public notebook</strong>.</p> <ul> <li><a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">OpenVaccine: Simple GRU Model</a></li> <li><a href="https://www.kaggle.com/symyksr/openvaccine-deepergcn" target="_blank">OpenVaccine - DeeperGCN</a></li> <li><a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">[covid] AE pretrain + GNN + Attn + CNN</a></li> <li><a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain?scriptVersionId=43764127" target="_blank">Open Waccine: Pytorch AE pretrain</a></li> </ul> <h2>Validation</h2> <ul> <li>train without SN_filter (sample_weight=0.5*log(signal_to_noise+1.1)) and validate with SN_filter.</li> <li>10fold split</li> <li>KFold, StratifiedKFold(binned reactivity), GroupKFold(cluster of sequence)</li> <li>For all models, I averaged the predictions made by the above 3 validations.</li> </ul> <h2>Ensemble</h2> <ul> <li>Weights were determined based on 
Public LB's scores.</li> <li>Local CV, Public LB and Private LB were all correlated.</li> </ul> <h2>NOTE</h2> <ul> <li>Because all of the data in this competition are predicted values except for the sequence, I think it would have been effective to use an ensemble of bpps predictions from various algorithms.</li> <li>(It would have been nice to regenerate structure and predicted_loop_type from the various bpps predictions, but I was short on time.)</li> <li>At high temperature: Since the secondary structure of RNA is temperature-dependent, I think it was effective to use bpps with T changed as input.</li> <li>At high pH: Alkaline hydrolysis of RNA can occur at any position in the sequence (probably) as well, so the prediction itself is difficult.</li> </ul>
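The Shannon entropy feature mentioned in the input list can be derived from a bpps matrix by treating, for each base, its pairing probabilities plus the leftover unpaired probability as a distribution. A sketch of my reading of that feature, not necessarily the exact formulation in the linked thread:

```python
import numpy as np

def position_entropy(bpps, eps=1e-12):
    """Shannon entropy of each base's pairing distribution.

    For base i the distribution is {bpps[i, j] for all j} plus the
    unpaired probability 1 - sum_j bpps[i, j]. High entropy means the
    structure is uncertain at that position.
    """
    bpps = np.asarray(bpps, dtype=float)
    unpaired = np.clip(1.0 - bpps.sum(axis=1), 0.0, 1.0)
    probs = np.concatenate([bpps, unpaired[:, None]], axis=1)
    return -np.sum(probs * np.log(probs + eps), axis=1)

# Base 0 pairs with base 1 half of the time -> entropy log(2).
ent = position_entropy(np.array([[0.0, 0.5],
                                 [0.5, 0.0]]))
```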
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
BERT Solution ~Part of the heaven and hell team (single 0.35085) ~
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats prize / gold medal / good result winners.<br> And thank you for kagglers who shared many great kernel and discussion.</p> <p>Unfortunately our team experienced heaven and hell(<a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189561)" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189561)</a>, but I think one of my model may be different from others, and single model score is not bad (single 0.35085).<br> Therefore I share my model a little.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1305588%2Fd16310913457b197fb43b96fb18b87ce%2Fsolution.png?generation=1602117903910861&amp;alt=media" alt=""></p> <p>I regard sequence data as text and use BERT Encoder with MLM(Masked Language Model)</p> <p>Other information</p> <ul> <li>After pretrain, I attached CNN+LSTM+BppAttention(This is based on <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> and <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a> great kernels and my team member <a href="https://www.kaggle.com/yiemon" target="_blank">@yiemon</a> 's great idea)</li> <li>Loss: <a href="https://www.kaggle.com/xhlulu" target="_blank">@xhlulu</a> loss and custom weight(target error and signal-to-noise) (<a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/183211" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/183211</a>) </li> <li>Data:<ul> <li>Augmentation data by <a href="https://www.kaggle.com/its7171" 
target="_blank">@its7171</a> method (used in BERT pretrain, normal training and TTA)</li> <li>not filtering on SN_filter = 1, but reducing noise by weighted loss</li></ul></li> </ul>
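The MLM pretraining idea above treats the RNA sequence as text and learns to reconstruct masked tokens. A generic BERT-style masking sketch; the vocabulary, mask rate, and the -100 ignore-label convention are standard assumptions, not the author's exact code:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {"A": 0, "G": 1, "C": 2, "U": 3, "[MASK]": 4}

def mask_sequence(token_ids, mask_prob=0.15):
    """BERT-style MLM corruption: replace ~15% of tokens with [MASK].

    Returns the corrupted input and labels, with -100 at unmasked
    positions (the usual "ignore index" convention), so the pretraining
    loss is computed only on masked tokens.
    """
    token_ids = np.asarray(token_ids)
    mask = rng.random(token_ids.shape) < mask_prob
    labels = np.where(mask, token_ids, -100)
    corrupted = np.where(mask, VOCAB["[MASK]"], token_ids)
    return corrupted, labels

ids = np.arange(1000) % 4           # a long toy RNA sequence
corrupted, labels = mask_sequence(ids)
```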
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
Solution 20
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Great thanks to Kaggle and Stanford University for this challenge. It is great to see the Kaggle community help fight COVID. I also thank Psi ( <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> ) and Dieter ( <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a> ) for joining me for the last week despite being exhausted after their GRL win. Moreover they had to write a paper about it during that week. Last, I'd like to thank whatever god for the extra day. Without it we would be around the 140th rank.</p> <p>Before describing our solution let me answer this: no, I did not reuse the distance attention that I shared early in this competition. The reason is that a node can have at most 3 edges: its preceding and following nodes in the sequence, and its paired node. A generic attention looked like an overkill to me. After reading several top teams' write-ups, and seeing they almost all reused my distance attention, either directly, or via public notebooks, I coded it and compared with our best. It is slightly worse, by 0.001-0.002 depending on variants. But it would probably have helped in an ensemble. </p> <p>This said, the provided structure is only the most probable one given a bpps matrix, and using bpps probabilities should help. Dieter has been working on it and got a first model like 2 hours before end. We blended it with our best, and got some boost. Psi and I worked on variants of the model I had when we teamed and our best model is a result of that collaboration. 
That's a simplification of course, we constantly exchanged ideas and info and everybody contributed to all pieces of the solution.</p> <p>Let me now describe our solution.</p> <p><strong>Data</strong></p> <p>We did not use extra sequences beyond the competition data. Like everyone, we created a sequence of nodes with this data for the i-th node:</p> <ul> <li>one hot encoding of sequence char, </li> <li>one hot encoding of structure char</li> <li>one hot encoding of loop prediction char. </li> <li>a binary which is one if the node is paired. </li> <li>an index which is the index of the paired node if the node is paired, i otherwise.</li> <li>pp and entropy computed with <a href="https://www.tbi.univie.ac.at/RNA/ViennaRNA/doc/html/examples_python.html" target="_blank">RNAlib</a> </li> <li>unpaired probability (1 - sum bpps along one dimension)</li> </ul> <p><strong>Model</strong></p> <p>Overall structure is a linear layer to create embeddings of size 512 from the node features, then 3 attention/convolution layers, then a head to output 5 targets.</p> <p>The Attention/Convolution layers are similar to the transformer encoder structure: an attention layer followed by a convolution block with skip connection.</p> <p>The attention layer is a paired attention layer followed by a bidirectional GRU. Sure, GRU is not attention, but here it plays the same role as an attention layer where each node attends its two neighbors in the sequence.</p> <p>The paired attention is where we could have used my NFL attention as discussed above. Instead we just concatenate the paired node embedding. To do so we permute the node sequence using the paired index, then we mask it with the paired binary. We then feed the concatenation to the GRU.</p> <p>The convolution layer was designed after the efficientnet convolution block. 
</p> <p>With the above we got public LB of 0.241 by bagging the model a large number of times.</p> <p>Last day, looking for inspiration, we looked at public kernels and decided to add convolution with larger kernel size as in <a href="https://www.kaggle.com/thebigd8ta/open-vaccine-pytorch-v" target="_blank">this notebook</a>. I don't know who is the original author of this code but I want to credit him/her and thank him/her a lot. By stacking our conv blocks the same way, with larger and larger kernel sizes, we moved our public LB to 0.23646, private 0.34910, which would be 30th rank.</p> <p>At a point we trained variants of the model by replacing one or several GRUs by an LSTM. Given we have three GRU layers this can generate 8 variants. Bagging these 8 variants was our best sub till the last day. Unfortunately we did not have time to train the same 8 variants of our best model. I expect it would have given us a 0.001 boost.</p> <p><strong>Training</strong></p> <p>We weighted samples by their signal to noise clipped between 0 and 10. Without clipping this was unstable and led to NaN losses from time to time. When bagging a large number of runs we also had NaNs. After-competition experiments show that clipping to (0.03, 10) was way more stable numerically.</p> <p>We tried pseudo labeling. It improved our CV but degraded public LB a bit, which stopped us. It is a pity as the PL models had a quite better private score. We tried both leaky PL, where you use all fold models to predict on test then use that as pseudo labels, and non-leaky, fold-based PL where you only use the model trained on the same data to pseudo-label test. Public test data wasn't noisy, hence we used sample weight = 1. Private test data is noisy, hence we used a much smaller weight like 0.03.</p> <p>Our CV setup is similar to others: we did clustering on all node features and used GroupKFold to keep each cluster in a single fold. 
The CV value was extremely well correlated with public LB, except when we used pseudo labeling. It turns out that CV was well calibrated with private LB and that we should have trusted it for our PL models.</p> <p>I used cosine annealing with warmup and restart, 6 cycles of 10 epochs. Psi used a single cycle of 60 epochs. Results were similar but added diversity. Unfortunately we could only use one scheme for last-day subs because of lack of time.</p> <p>We used Adam.</p> <p>And, oh, yes, we all used Pytorch.</p> <p><strong>Target Transform</strong></p> <p>The target distribution is very narrow around 0. I looked for a way to make it smoother. In the end we turn the prediction y into 3y + y^3. Here is the reactivity distribution:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F75976%2F540d2f1eaa24872017b1d390623b2cc3%2Frecativity.png?generation=1602163509653828&amp;alt=media" alt=""></p> <p>Here is what the distribution looks like before the transformation:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F75976%2F97d419da8871c305d7e4ac10e3640e10%2Freact2.png?generation=1602163591182036&amp;alt=media" alt=""></p> <p>This gives us a boost around 0.002. I was looking for a function that would spread values near zero. A cubic looked good. To find the coefficient values of the cubic function I used <a href="https://en.wikipedia.org/wiki/Cubic_equation" target="_blank">Cardano's formula</a>.</p> <p><strong>Gamble</strong></p> <p>We decided to submit a bagged version of our best model blended with Dieter's model. For second sub we tried a little gamble. We applied an upscaling coefficient. The gamble backfired badly when private LB was revealed as downscaling was the way to go. Fortunately, once private test scoring was fixed, our gamble bought us 10 ranks.</p> <p><strong>What did not work</strong></p> <p>A lot of things. 
I may provide the list here later, but it was quite discouraging. The main thing is that GNN or Transformer didn't work. What worked is a GNN with GRU backbone. </p> <p><strong>What's best</strong></p> <p>I used Hardswish activation, Dieter used PReLU, and Psi used ReLU. I used LayerNorm and Psi used BatchNorm. We used different schedulers. We all claimed ours was best for us. I guess it is a self-fulfilling prophecy: once you settle on things like activation function, norms, etc, then you tune for it, and as a result switching to other ways degrades your tuned model.</p>
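The target transform above maps a model output t to y = 3t + t^3, so training on transformed targets needs the inverse, which is where Cardano's formula comes in: the depressed cubic t^3 + 3t - y = 0 has discriminant y^2/4 + 1 > 0 and therefore exactly one real root. A sketch of my reconstruction, not the team's code:

```python
import numpy as np

def forward(t):
    """The transform applied to the prediction: y = 3*t + t**3."""
    return 3.0 * t + t ** 3

def inverse(y):
    """Closed-form real root of t**3 + 3*t - y = 0 via Cardano's formula.

    For the depressed cubic t**3 + p*t + q with p = 3, q = -y, the
    discriminant (q/2)**2 + (p/3)**3 = y**2/4 + 1 is always positive,
    so there is exactly one real root:
        t = cbrt(y/2 + sqrt(y**2/4 + 1)) + cbrt(y/2 - sqrt(y**2/4 + 1))
    """
    d = np.sqrt(np.asarray(y, dtype=float) ** 2 / 4.0 + 1.0)
    return np.cbrt(y / 2.0 + d) + np.cbrt(y / 2.0 - d)
```

Applying `inverse` to the raw targets spreads the mass near zero, and `forward` on the model output recovers the original target scale.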
Mechanisms of Action (MoA) Prediction
7th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to the LISH Competition Organizers, the Kaggle Team, and everyone who shared notebooks and contributed to the discussions. </p> <p>TL;DR: Our final submission was a simple weighted average of an MLP, TabNet, and ResNet model.</p> <h2>Dimensionality Reduction</h2> <p>We decided to do dimensionality reduction on the full test set. This meant we had to retrain the models when the private test set became available and so we couldn't do an inference-only submission. We used a mix of PCA, SVD, and ICA depending on the model.</p> <h2>Early Stopping for Each Target</h2> <p>All of our models used early stopping for each target. The idea was to keep track of the best epoch for each of the targets separately. For example, this plot shows the log loss on a few of the targets.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F331718%2Fd388ae8ca8d2a44d809290e0fd3b111d%2Ftarget_loss.png?generation=1606854356671598&amp;alt=media" alt="target loss"><br> If we had used a single epoch's weights for every target, we would have overfit some targets and underfit others.</p> <h2>Validation</h2> <p>We tried a lot of variants of CV, but in the end, we went with MultilabelStratifiedKFold and did not use <code>train_drug.csv</code>. Our CV-LB alignment was still shaky and there was clearly a point at which further CV improvement degraded LB. 
<a href="https://www.kaggle.com/cdeotte" target="_blank">@Chris Deotte's</a> smart <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">CV scheme</a> was promising, but it did not solve this entirely for us. Our LB scores with the smart CV scheme were a bit worse than with MultilabelStratifiedKFold, similar to what other teams noticed.</p> <h2>Final Blend</h2> <p>Our final submission was a simple weighted average of <code>0.5*mlp + 0.3*tabnet + 0.2*resnet</code>. We did not use the public LB to tune these weights, and that probably helped prevent some overfitting.</p> <table> <thead> <tr> <th>Model</th> <th>Public</th> <th>Private</th> <th>Run Time</th> </tr> </thead> <tbody> <tr> <td>Blend</td> <td>0.01817</td> <td>0.01603</td> <td>5883.6s</td> </tr> <tr> <td>MLP</td> <td>0.01825</td> <td>0.01613</td> <td>816.6s</td> </tr> <tr> <td>TabNet</td> <td>0.01828</td> <td>0.01614</td> <td>1619.9s</td> </tr> <tr> <td>ResNet</td> <td>0.01859</td> <td>0.01628</td> <td>2562.4s</td> </tr> </tbody> </table> <h3>MLP</h3> <p>Our MLP was very similar to those in public notebooks. We found that the PReLU activation worked better than ReLU or LeakyReLU.</p> <h3>TabNet</h3> <p>We used the code from <a href="https://www.kaggle.com/optimo" target="_blank">@Optimo's</a> excellent <a href="https://www.kaggle.com/optimo/tabnetmultitaskclassifier" target="_blank">TabNet Multitask Classifier</a> <a href="https://github.com/dreamquark-ai/tabnet/" target="_blank">library</a>. 
We used only the model code and not the helper functions from the library so that we could have more flexibility with the learning rate scheduler, loss function, etc.</p> <h3>ResNet</h3> <p>Our model was based off of <a href="https://www.kaggle.com/demetrypascal" target="_blank">@Demetry Pascal's</a> clever <a href="https://www.kaggle.com/demetrypascal/2heads-deep-resnets-pipeline-smoothing-transfer" target="_blank">2heads+deep resnets+pipeline+smoothing+transfer notebook</a> and ported to PyTorch.</p>
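The per-target early stopping described above boils down to tracking the arg-min epoch of each target's validation loss separately, then snapshotting weights per target instead of one global early stop. A minimal sketch with made-up loss curves:

```python
import numpy as np

def per_target_early_stopping(val_losses):
    """Track the best epoch independently for each target.

    val_losses: (n_epochs, n_targets) validation losses.
    Returns the per-target best epoch and best loss.
    """
    val_losses = np.asarray(val_losses)
    best_epochs = val_losses.argmin(axis=0)
    best_losses = val_losses.min(axis=0)
    return best_epochs, best_losses

# Two targets: the first bottoms out at epoch 1, the second at epoch 3.
losses = np.array([[0.9, 0.9],
                   [0.5, 0.8],
                   [0.7, 0.6],
                   [0.8, 0.4]])
best_epochs, best_losses = per_target_early_stopping(losses)
```

A single global stopping epoch would have to compromise between epoch 1 and epoch 3, overfitting one target and underfitting the other, which is the effect shown in the plot above.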
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
23rd Place write-up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Model:</strong><br> Transformer + CNN + RNN</p> <p>transformer block:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1904629%2F87390c6d22d2f21710e0c95331f09874%2FUntitled%20Diagram%20(2).png?generation=1602124003960935&amp;alt=media" alt=""><br> The bpps attention is inspired by <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> 's graph transformer (<a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119430" target="_blank">https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119430</a>)<br> I added a single-layer bidirectional LSTM before and after the stacked transformer blocks, and I also added positional encoding before the first LSTM layer.</p> <p><strong>Ensemble:</strong></p> <ul> <li>30+ models with different hyperparameters (number of layers, convolution kernel size, sample weighting, loss function…) in addition to some public models</li> <li>Stacking (Ridge + backward elimination)</li> </ul> <p><strong>Shakedown:</strong><br> It seems that my model does not generalize to longer sequences very well; the predictions for positions 68-91 look different from 0-67, which I guess is the main reason for my shakedown.</p>
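One simple way to build a "bpps attention" is to add the bpps matrix as a bias to the scaled dot-product scores before the softmax, so attention tends to follow base-pairing probabilities. This is an assumption about the mechanism in the spirit of the cited graph transformer, not the author's exact implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def bpps_attention(q, k, v, bpps, alpha=1.0):
    """Scaled dot-product attention with bpps added as an attention bias."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d) + alpha * bpps
    attn = softmax(scores, axis=-1)
    return attn @ v, attn

rng = np.random.default_rng(1)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
bpps = np.eye(4)[::-1]              # toy pairing: position i pairs with 3 - i
out, attn = bpps_attention(q, k, v, bpps, alpha=2.0)
```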
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
79th Place Solution: Distance Attention with Dijkstra's Algorithm
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>Basic Idea</h1> <p>We represented the RNA structure as a graph and computed shortest-distance matrices using Dijkstra's algorithm. We got an additional boost using the second-best structure &amp; corresponding loop type as features. Our best model was a distance attention + Multi head Attention + LSTM/GRU head with 7 types of distance matrices and some engineered features. We started with the distance attention notebook by <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/183865" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/183865</a> .</p> <h1>Features</h1> <h2>Sequential Features</h2> <ul> <li>Sequence, Structure, Predicted loop type in one hot encoded form</li> <li>BPPS sum, BPPS Max, BPPS NB</li> <li>Dijkstra based aggregation features</li> </ul> <pre><code>No of neighbours within distance 3, distance 7 No of each loop types within distances 3 &amp; 7 </code></pre> <ul> <li>Alternate Structure, Alternate loop type in one hot encoded form</li> <li>Log Gamma for alternate structure</li> </ul> <h2>Distance Matrices for distance attention</h2> <ul> <li>BPPS Array</li> <li>Best Structure Matrix (if bond exists b/w i &amp; j: then 1 else 0) (Given Structure)</li> <li>Dijkstra Matrix for best structure</li> <li>Distance Matrices (Power 1, 2)</li> <li>Alternate Structure Matrix </li> </ul> <h1>Sample Rejection</h1> <ul> <li>Samples with SN_filter 0 were rejected for validation</li> <li>Samples with Signal/Noise &lt; 1 were rejected for training</li>
<li>Weights: log(1.02+Signal/Noise)</li> </ul> <h1>Validation Strategy</h1> <p>Fold i -&gt; Rank(Signal/Noise) % 5 == i</p> <h1>Training</h1> <p>Training on all the targets, learning rate scheduling &amp; checkpointing on 3 targets!!</p> <h1>Generating Model Variants</h1> <ul> <li>Randomly 1-2 features were dropped in each model &amp; the architecture (mostly the head) was slightly varied!! Heads used - LSTM, GRU, Wavenet, combinations of them </li> </ul> <h1>Models</h1> <ul> <li>Distance Attention + Multihead Attention + LSTM/GRU/Wavenet head</li> <li>LSTM, GRU with all sequential features<ul> <li>GCN with all sequential features and 2 matrices</li></ul></li> </ul> <h1>Ensemble</h1> <ul> <li>Weighted Blending</li> <li>Weights optimised on OOF </li> <li>First All </li> </ul> <h1>Private scores of Individual Model Types</h1> <ul> <li>Distance Attention (0.40787)</li> <li>LSTM, GRU Models (0.41723)</li> <li>GCN (0.40956)</li> </ul> <h1>OOFs of Individual Model Types</h1> <ul> <li>Distance Attention (0.1962)</li> <li>LSTM, GRU Models (0.208)</li> <li>GCN (0.209)</li> </ul> <h1>Comment</h1> <p>We had some solutions in gold range but it wasn't reflected on CV. Even a simple mean of our distance attention and GCN passes the barrier, but if we compare CV, there is a great difference between scores of Distance Attention &amp; GNN!! </p> <h1>Selected Submission</h1> <ol> <li>Only Distance Attention</li> <li>Ensemble of all 3 types of models </li> </ol> <h1>Mentions (My Teammates)</h1> <p><a href="https://www.kaggle.com/prateekagnihotri" target="_blank">@prateekagnihotri</a> <br> <a href="https://www.kaggle.com/akshaypatel05" target="_blank">@akshaypatel05</a></p> <h1>Edit</h1> <p>With the reveal of new private scores, scores of my individual models are now consistent:</p> <ul> <li>Distance Attention (0.35298)</li> <li>LSTM, GRU Models (0.36429)</li> <li>GCN (0.36120)</li> </ul> <p>New Final Score: 0.35173 (Ensemble of all 3)</p>
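The Dijkstra distance matrices above can be computed on the secondary-structure graph whose edges are the backbone neighbours plus the dot-bracket base pairs; with unit edge weights a BFS per node gives the same result as Dijkstra's algorithm. A sketch, with an illustrative toy structure string:

```python
from collections import deque

def rna_graph_distances(structure):
    """All-pairs shortest-path distances on the secondary-structure graph.

    Nodes are sequence positions; edges connect sequence neighbours
    (i, i+1) and paired bases parsed from dot-bracket notation.
    """
    n = len(structure)
    adj = [set() for _ in range(n)]
    for i in range(n - 1):                      # backbone edges
        adj[i].add(i + 1)
        adj[i + 1].add(i)
    stack = []
    for i, ch in enumerate(structure):          # base-pair edges
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            j = stack.pop()
            adj[i].add(j)
            adj[j].add(i)
    dist = [[None] * n for _ in range(n)]
    for src in range(n):                        # BFS from every node
        dist[src][src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if dist[src][w] is None:
                    dist[src][w] = dist[src][u] + 1
                    queue.append(w)
    return dist

dist = rna_graph_distances("((...))")
```

Aggregations such as "number of neighbours within distance 3" then follow directly by thresholding each row of `dist`.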
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
2nd Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I was preparing this as a 1st place solution but unfortunately I got <strong>5th</strong> 2nd place again.<br> Who cast the curse???</p> <h1>LB progress</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F7839efcb3651f2aedaa830116f096133%2F2020-10-19%207.52.26.png?generation=1603061606818239&amp;alt=media" alt=""></p> <h1>Pipeline</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2Fb13f002dcf298c871e273638e6930c43%2F2020-10-17%2016.50.40.png?generation=1602921081920687&amp;alt=media" alt=""><br> I made predictions of SN_filter and pseudo labels, but otherwise it's almost the same methodology as another of my <a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/162765" target="_blank">2nd place solution</a> of TReNDS.</p> <h1>Stacking</h1> <p>I'm really good at stacking with xgboost. However, stacking using the OOF of a pseudo-label model is a little bit hard due to the drastic OOF. So I tried adding gaussian noise to the OOF and the test predictions, and then I succeeded in avoiding overfitting. 
<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F0f51b7b8c34d1a85170217132173549a%2F2020-10-19%2015.42.57.png?generation=1603112091717651&amp;alt=media" alt=""><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2Ff7d8997d7beac2371a450165963b9980%2F2020-10-19%2015.44.59.png?generation=1603112114562030&amp;alt=media" alt=""></p> <h1>CV vs LB</h1> <p>Also, our CV and LB are super-correlated. We can completely predict LB score using CV, so we didn't submit much.</p> <h1>LB simulation</h1> <p>Now I think this work<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F317344%2F42216ecab6efd6b771802b927a2149ae%2F2020-10-19%2021.53.01.png?generation=1603112007528427&amp;alt=media" alt=""><br> <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189196#1040074" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189196#1040074</a></p> <h1>Reference</h1> <p><a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">AE GNN</a><br> <a href="https://www.kaggle.com/its7171/how-to-generate-augmentation-data" target="_blank">Data augmentation</a><br> <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189196" target="_blank">First penguin who has suspicion of LB</a></p>
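The gaussian-noise trick for stacking described above can be sketched as below; the noise scale and array shapes are made-up illustrative values, not the author's settings:

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter_oof(oof, test_pred, scale=0.05):
    """Add gaussian noise to OOF and test predictions before stacking.

    Blurring near-perfect OOF predictions of a pseudo-label model keeps
    the second-level model (e.g. xgboost) from trusting them blindly,
    which is one way to avoid overfitting the stacker.
    """
    noisy_oof = oof + rng.normal(scale=scale, size=np.shape(oof))
    noisy_test = test_pred + rng.normal(scale=scale, size=np.shape(test_pred))
    return noisy_oof, noisy_test

oof = np.zeros(1000)                # stand-in for first-level OOF predictions
test_pred = np.ones(500)            # stand-in for first-level test predictions
noisy_oof, noisy_test = jitter_oof(oof, test_pred)
```

The same noise scale is applied to the OOF and test columns so the stacker sees consistently blurred inputs at train and inference time.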
RSNA STR Pulmonary Embolism Detection
7th place - EfficientNet, Transformer and a 2nd opinion [edited]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I used a two-stage model.</p> <ol> <li>An EfficientNet (B5, B3) used for feature extraction per image</li> <li>A transformer used per series to predict the series-related classes and the 'PE Present on Image' per image</li> </ol> <p>This is the 2nd competition in which I have used such a network, and an extensive description of this model can be found <a href="https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/181830" target="_blank">here</a> </p> <p>The targets for the EfficientNet were: </p> <ul> <li>The original targets for images where PE Present on Image = 1</li> <li>0 for every other image. Except the Intermediate target which remained the same.</li> </ul> <p>The loss was weighted BCE - the weights reflecting the competition's metric weights.<br> I used flip, rotate, random resize/crop, mean/std shift as augmentation.<br> I also used 3 trainable windows to convert the CT image to jpeg (<a href="https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/discussion/117480" target="_blank">WSO</a> )</p> <p>The transformer was a 4-layer encoder (using PyTorch's transformer encoder module), where the relative and absolute places of the images in the series were embedded and added to the feature vectors (as is done for positional embedding in NLP transformers such as BERT)<br> The loss function reflected the competition's metric.</p> <p>This model gave an <strong>LB of 0.166</strong> </p> <p>Ensembling improved the <strong>LB to 0.162</strong>, but I could only ensemble 2 models in the time frame (I didn't use the public/private LB trick that can add 25%). 
To improve this, I used a <strong>2nd opinion mechanism</strong> - instead of using a 2nd model to run inference on all the data again, I did what an MD would do: I chose only series where the results were the most uncertain (near 0.5), ran inference on them with another model and ensembled - I did this 3 times for ~ 30-40% of the data each time, gaining the equivalent of ensembling 4 models =&gt; <strong>LB 0.157</strong></p> <p>As a last step, I checked whether each prediction met the competition's <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/183473" target="_blank">Label Consistency Requirement </a> and if it didn't, made the minimal changes needed to meet the requirements</p> <ul> <li><p>The full code can be found in <a href="https://github.com/yuval6957/RSNA2020_final.git" target="_blank">git</a></p></li> <li><p>The inference code can be found <a href="https://www.kaggle.com/yuval6967/rsna2020-inference-2nd-op-final" target="_blank">in this notebook</a></p></li> <li><p>The models' weights are <a href="https://www.kaggle.com/yuval6967/rsna2020-models" target="_blank">in this</a> public dataset</p></li> <li><p>A more detailed description can be found <a href="https://github.com/yuval6957/RSNA2020_final/blob/master/Documentation.md" target="_blank">in this documentation</a> and <a href="https://github.com/yuval6957/RSNA2020_final/blob/master/RSNA2020%20presentation.pdf" target="_blank">this presentation</a></p></li> <li><p><a href="https://www.youtube.com/watch?v=hVgIawktZgs" target="_blank">This is a video</a> which presents this solution</p></li> </ul>
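The 2nd-opinion mechanism above can be sketched as follows. This is an illustration under stated assumptions, not the author's code: the fraction of series re-scored (~30-40% per the write-up) and the plain average are the only free choices, and `second_model_fn` is a hypothetical stand-in for the second model's inference.

```python
import numpy as np

def second_opinion(preds, second_model_fn, frac=0.35):
    """Re-score only the most uncertain predictions (closest to 0.5)
    with a second model and average the two opinions."""
    preds = np.asarray(preds, dtype=float)
    n_redo = max(1, int(len(preds) * frac))
    uncertainty = np.abs(preds - 0.5)            # 0 == most uncertain
    redo_idx = np.argsort(uncertainty)[:n_redo]  # indices nearest 0.5
    second = second_model_fn(redo_idx)           # second model's scores
    out = preds.copy()
    out[redo_idx] = (preds[redo_idx] + second) / 2.0
    return out
```

Applying this three times with different second models approximates a 4-model ensemble at a fraction of the inference cost, which is the point of the trick.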
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
[Updated with code] 13th place solution - Arnie Ensemble
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Update</strong>: you can find our solution code <a href="https://github.com/karimmamer/Predicting-RNA-Vaccine-Degradation-using-Deep-Learning" target="_blank">here</a>.</p> <p>First of all, congratulations to all winners and many thanks to the organizers. </p> <p>It was a very confusing day for us after going from 95 public lb to 730 in the first private lb and finally landing on 13th place to win our first gold medal 😁.</p> <p>The main idea behind our approach is generating bpp matrices from all the libraries included in Arnie, training a model from the output structures of each library and finally creating an ensemble of all the trained models.</p> <p><strong>Model Architecture</strong></p> <p>The main architecture consisted of: </p> <ol> <li>one 1D conv layer</li> <li>one edge conv layer</li> <li>two bi-directional lstm layers</li> <li>one edge conv layer</li> <li>one linear layer</li> </ol> <p>The edge conv layer processes each pair of connected nucleotides in the secondary structure (zeros added when a nucleotide isn't connected). 
I think it is similar to what <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a> described in his <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189571" target="_blank">post</a> but I am adding an instance norm and relu after each 1D conv and edge conv layers separately.</p> <p><strong>Input Features</strong></p> <ul> <li>OHE of nucleotide</li> <li>OHE of predicted loop type</li> <li>OHE of secondary structure </li> <li>bpp sum</li> </ul> <p><strong>Training configuration</strong></p> <p>Each single model is trained as follows:</p> <ul> <li>5 folds</li> <li>structure augmentation with 5 generated structures (log gamma 0 to 4) for each sequence and their corresponding predicted loop type</li> <li>self supervised learning to predict randomly dropped nucleotides</li> <li>snapshot ensemble</li> <li>TTA with all the generated structures</li> </ul> <p><strong>Final submission</strong></p> <p>My final submission is an average ensemble of the main architecture (along with other variations) trained with the outputs of all libraries in Arnie at T=37 and T = 50. </p> <p>In order to make sure we are not overfitting on the short sequences in training, we didn't use Batchnorm nor distance features with any model and trusted only our CV score.</p> <p>The post will be updated with a link to the code soon.</p> <p>At the end, we want to thank <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> for their amazing kernels upon which we built our solution. </p> <p>All the praises be to Allah.</p>
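The edge conv layer described above needs, for every nucleotide, the features of its base-paired partner from the dot-bracket secondary structure, with zeros for unpaired positions. A minimal sketch of that pairing lookup (not the authors' code; the model would concatenate this partner tensor with each node's own features inside the conv layer):

```python
import numpy as np

def pair_partner_features(structure, features):
    """For each position in a dot-bracket `structure`, return the
    feature vector of its base-paired partner; rows stay zero for
    unpaired ('.') nucleotides."""
    features = np.asarray(features, dtype=float)
    partner = np.zeros_like(features)
    stack = []
    for i, ch in enumerate(structure):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            j = stack.pop()           # i pairs with j
            partner[i] = features[j]
            partner[j] = features[i]
    return partner
```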
RSNA STR Pulmonary Embolism Detection
5th Place Solution Overview
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://raw.githubusercontent.com/darraghdog/rsnastr/main/docs/architecture.jpg" alt=""><br> <strong>Code base</strong> : <a href="https://github.com/darraghdog/rsnastr" target="_blank">https://github.com/darraghdog/rsnastr</a> <br> <strong>Kaggle submission</strong> : <a href="https://www.kaggle.com/darraghdog/rsnastr2020-prediction" target="_blank">https://www.kaggle.com/darraghdog/rsnastr2020-prediction</a></p> <p>Thanks to all the organisers for hosting this competition. And congratulations to all the participants and winners - these competitions and solutions are getting more and more advanced 😁 </p> <p>My compute was a v100 on AWS initially with 320X320 images, with prototyping on Macbook (a lot can be checked on CPU). Then in the last 3 weeks I moved to an A100 to run 512X512 (thanks DoubleYard) - 40GB GPU memory per card - one card was enough. <br> Overall the solution pipeline ended up being pretty similar to last year, but the journey to get there was a lot different. The metric used by the competition was difficult to simulate, and all parts of the pipeline needed to be finalised/integrated to get feedback on the competition benchmark. 
</p> <p><strong>Preprocessing</strong><br> Use the <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/preprocessing/dicom_to_jpeg.py#L59-L75" target="_blank">windowing</a> from Ian Pan's <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/182930" target="_blank">post</a> pretty much as is, just <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/preprocessing/dicom_to_jpeg.py#L77-L118" target="_blank">parallelised</a> it to speed it up. Some DICOMs fell out, which I saw later was due to the way pydicom was used, but I think the majority were good. <br> Light <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_image_classifier.py#L72-L74" target="_blank">augmentations</a>; maybe more would have helped, but I did not want to lose information about which side the PE lay on. </p> <p><strong>Image Level</strong><br> EfficientNet-B5 seemed to be better than anything else I tried, both in terms of speed and loss. Given the time and compute limits on submission, speed was important. <br> The datasampler took at least <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_image_classifier.py#L112-L117" target="_blank">two images from each study, in each epoch, and positive images were oversampled</a> to give a rate of 4:1 -ve:+ve, with image batchsize of 48 (using half precision - amp). <br> <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_image_classifier.py#L132-L138" target="_blank">Weighted study loss</a> was calculated at each step on the positive samples only (weights as per competition metric weights excl. <code>negative_exam_for_pe</code>), image loss was calculated on all samples. 
Final loss per step was <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_image_classifier.py#L200-L209" target="_blank">sum of both averages</a>; image loss was tuned so that image loss and study loss would be roughly equal. <br> Each epoch took about 5 mins and ran ~15 epochs, final solution used three of five folds.</p> <p><strong>Study Level</strong><br> Extracted gap layer of each image to disk. These were fed into <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_sequence_classifier.py#L113-L124" target="_blank">three independent sequence models</a>. <br> 1) 2X Bi-LSTM, more or less <a href="https://github.com/darraghdog/rsna" target="_blank">same architecture</a> as last year.<br> 2) BERT transformer model with 1 layer. <br> 3) BERT transformer model with 2 layers. <br> Each of these used the same loss <a href="https://github.com/darraghdog/rsnastr/blob/14c8516d3a81bc26c5101afff004ce1e99c3f5a9/training/pipeline/train_sequence_classifier.py#L150-L170" target="_blank">(or at least as close as I could get)</a> as the competition metric on each step, study batchsize 64, no accumulation (again with amp).<br> Finally all models and all folds were averaged. <br> I submitted a few more transformer models on the last day which I had high hopes for but most broke the consistency check - I only allowed submission if it passed - and then time ran out. (Thanks <a href="https://www.kaggle.com/yuval6967" target="_blank">@yuval6967</a> for the tip to only drop to disk if it passed). Actually I think 7 of my last 10 submissions failed. </p> <p><strong>Inference</strong><br> A few tricks to help speed up and bring RAM and GPU memory down - <a href="https://www.kaggle.com/darraghdog/rsna512-effnetb5-fold-all-exam-xfrmr-validated?scriptVersionId=45523025" target="_blank">final sub</a>.<br> Batchsize of 1. 
Feed sequences of images into the image-level model in chunks of 128 images, but only normalize (uint8-&gt;float32) when that chunk is to be passed into the image model. Then concat all output (GAP layer) chunks to make the sequence. <br> Convert all models and input data to half precision <code>model=model.half()</code> - this halved the memory, but results were just as good. <br> With the above, I managed to squeeze three folds out of the above pipeline - four folds also worked, but it failed the consistency check on the last day. Interested to see optimisations from other competitors. </p> <p>A final note, about two weeks ago, I was stuck on CV, could not get the metric working, and decided to quit the competition. Luckily I stuck with it to try a first submission which got in at LB 0.181 single fold :) For such a challenging metric and large dataset the timeline was tight, I think with another couple of weeks the community could have got to 0.12x or so on the leaderboard - but maybe that would overfit the dataset. </p>
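The chunked, lazily-normalized inference loop described above can be sketched as follows. This is an illustration, not the author's code: `feature_fn` is a hypothetical stand-in for the half-precision CNN, and scaling to [0, 1] is a placeholder for the real normalization.

```python
import numpy as np

def extract_features_chunked(images_u8, feature_fn, chunk=128):
    """Run the image-level model over a series in chunks, converting
    uint8 frames to floating point only for the chunk currently being
    processed, then concatenate the per-image feature (GAP) vectors.
    Keeping the series as uint8 until the last moment is what keeps
    RAM/GPU memory low."""
    outs = []
    for start in range(0, len(images_u8), chunk):
        batch = images_u8[start:start + chunk].astype(np.float16) / 255.0
        outs.append(feature_fn(batch))
    return np.concatenate(outs, axis=0)
```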
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
#39 Solution soon. Congrats to everyone. Thanks to Hengck23
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my team ( <a href="https://www.kaggle.com/gopidurgaprasad" target="_blank">@gopidurgaprasad</a>, <a href="https://www.kaggle.com/aayushchou" target="_blank">@aayushchou</a>, <a href="https://www.kaggle.com/jagadish13" target="_blank">@jagadish13</a>), the Kaggle team and mainly <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>; without you, our team wouldn't have been able to reach this goal. We will share our writeup in a few hours. Time to sleep now</p> <p>The magic for this competition was Heng's discussions, shared code and new learnings<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F761268%2F4dd3de1dd275e9e039f4877e44a36240%2FScreenshot%202020-10-07%20at%201.25.09%20AM.png?generation=1602014149917679&amp;alt=media" alt=""><br> coming soon…</p> <p><a href="https://github.com/seshurajup/kaggle-stanford-covid-vaccine" target="_blank">https://github.com/seshurajup/kaggle-stanford-covid-vaccine</a> ( Papers, discussions, Local CV )</p> <p>We will share the code soon.</p> <h2>Background Material</h2> <ul> <li><p><a href="https://www.youtube.com/watch?v=v1UbIUZ8k9o&amp;ab_channel=TheW.C.RayLab" target="_blank">Youtube: Understanding RNA folding energy dot-plots</a></p></li> <li><p><a href="https://www.nature.com/articles/s41467-019-13395-9.pdf" target="_blank">RNA secondary structure prediction using an ensemble of two-dimensional deep neural networks and transfer learning</a></p> <ul> <li>The majority of our human genome transcribes into noncoding RNAs with unknown structures and functions. 
Obtaining functional clues for noncoding RNAs requires accurate base-pairing or secondary-structure prediction. However, the performance of such predictions by current folding-based algorithms has been stagnated for more than a decade. Here, we propose the use of deep contextual learning for base-pair prediction including those noncanonical and non-nested (pseudoknot) base pairs stabilized by tertiary interactions. Only about 10,000 nonredundant RNAs have been made available through comparative analysis. The resulting method achieves large, statistically significant improvement in predicting all base pairs, noncanonical and non-nested base pairs in particular. The proposed method (SPOT-RNA), with a freely available server and standalone software, should be useful for improving RNA structure modeling, sequence alignment, and functional annotations.</li> <li><a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/182303" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/182303</a></li> <li><a href="https://dash.plotly.com/dash-bio/fornacontainer" target="_blank">https://dash.plotly.com/dash-bio/fornacontainer</a></li> <li>In my previous competition on DNA, you get better results by encoding k-mers rather than single nucleotides,<br> e.g.:<br> single nucleotide : GACGACG --&gt; 1,2,3,1,2,3,1<br> k-mers: [GAC][GAC][G…] --&gt; 47, 47, 56, …</li></ul></li> <li><p><a href="https://github.com/DasLab/arnie" target="_blank">Arnie Software</a>:</p> <ul> <li>Python API to compute RNA energetics and do structure prediction across multiple secondary structure packages.</li> <li><a href="https://github.com/DasLab/arnie/blob/master/notebooks/start_here.ipynb" target="_blank">Jupyter Notebook</a></li></ul></li> <li><p>"RNA secondary structure packages ranked and improved by high-throughput experiments"</p> <ul> <li>The computer-aided study and design of RNA molecules is increasingly prevalent across a range of disciplines, yet little is known about the accuracy of commonly used structure prediction packages in real-world tasks. Here, we evaluate the performance of current packages using EternaBench, a dataset comprising 23 in vitro structure mapping and 11 riboswitch activity datasets involving 18,509 synthetic sequences from the crowdsourced RNA design project Eterna. We find that CONTRAfold and RNAsoft, packages with parameters derived through statistical learning, achieve consistently higher accuracy than more widely used packages like the ViennaRNA software, which derive parameters primarily from thermodynamic experiments. Motivated by these results, we develop a multitask-learning-based model, EternaFold, which demonstrates improved performance that generalizes to diverse external datasets, including complete viral genomes probed in vivo and synthetic designs modeling mRNA vaccines.</li></ul></li> <li><p><a href="https://www.biorxiv.org/content/10.1101/2020.08.22.262931v1" target="_blank">"Theoretical basis for stabilizing messenger RNA through secondary structure design" - Hannah K. Wayment-Steele</a></p> <ul> <li>RNA hydrolysis presents problems in manufacturing, long-term storage, world-wide delivery, and in vivo stability of messenger RNA (mRNA)-based vaccines and therapeutics. A largely unexplored strategy to reduce mRNA hydrolysis is to redesign RNAs to form double-stranded regions, which are protected from in-line cleavage and enzymatic degradation, while coding for the same proteins. The amount of stabilization that this strategy can deliver and the most effective algorithmic approach to achieve stabilization remain poorly understood. Motivated by the need for stabilized COVID-19 mRNA vaccines, we present simple calculations for estimating RNA stability against hydrolysis, and a model that links the average unpaired probability of an mRNA, or AUP, to its overall rate of hydrolysis. 
To characterize the stabilization achievable through structure design, we compare optimization of AUP by conventional mRNA design methods to results from the LinearDesign algorithm, a new Monte Carlo tree search algorithm called RiboTree, and crowdsourcing through the OpenVaccine challenge on the Eterna platform. Tests were carried out on mRNAs encoding nanoluciferase, green fluorescent protein, and COVID-19 mRNA vaccine candidates encoding SARS-CoV-2 epitopes, spike receptor binding domain, and full-length spike protein. We find that Eterna and RiboTree significantly lower AUP while maintaining a large diversity of sequence and structure features that correlate with translation, biophysical size, and immunogenicity. Our results suggest that increases in in vitro mRNA half-life by at least two-fold are immediately achievable and that further stability improvements may be enabled with thorough experimental characterization of RNA hydrolysis.</li></ul></li> <li><p><a href="https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-018-2018-4" target="_blank">Capturing alternative secondary structures of RNA by decomposition of base-pairing probabilities" -Taichi Hagio</a></p> <ul> <li>Background: It is known that functional RNAs often switch their functions by forming different secondary structures.<br> Popular tools for RNA secondary structures prediction, however, predict the single ‘best’ structures, and do not produce<br> alternative structures. There are bioinformatics tools to predict suboptimal structures, but it is difficult to detect which<br> alternative secondary structures are essential.<br> Results: We proposed a new computational method to detect essential alternative secondary structures from RNA<br> sequences by decomposing the base-pairing probability matrix. 
The decomposition is calculated by a newly<br> implemented software tool, RintW, which efficiently computes the base-pairing probability distributions over the<br> Hamming distance from arbitrary reference secondary structures. The proposed approach has been demonstrated on<br> ROSE element RNA thermometer sequence and Lysine RNA ribo-switch, showing that the proposed approach<br> captures conformational changes in secondary structures.<br> Conclusions: We have shown that alternative secondary structures are captured by decomposing base-pairing<br> probabilities over Hamming distance. Source code is available from <a href="http://www.ncRNA.org/RintW" target="_blank">http://www.ncRNA.org/RintW</a>.<br> Keywords: RNA secondary structure, Dynamic programming, Base-pairing probability, Partition function</li></ul></li> </ul> <hr> <h2>Architectures</h2> <ul> <li><p><a href="https://arxiv.org/pdf/1901.10526.pdf" target="_blank">Comprehensive evaluation of deep learning architectures for prediction of DNA/RNA sequence binding specificities</a></p> <ul> <li>Motivation: Deep learning architectures have recently demonstrated their power in predicting DNA- and RNA-binding specificities. Existing methods fall into three classes: Some are based on Convolutional Neural Networks (CNNs), others use Recurrent Neural Networks (RNNs), and others rely on hybrid architectures combining CNNs and RNNs. However, based on existing studies it is still unclear which deep learning architecture is achieving the best performance. Thus an in-depth analysis and evaluation of the different methods is needed to fully evaluate their relative performance. Results: In this study, we present a systematic exploration of various deep learning architectures for predicting DNA- and RNA-binding specificities. 
For this purpose, we present deepRAM, an end-to-end deep learning tool that provides an implementation of novel and previously proposed architectures; its fully automatic model selection procedure allows us to perform a fair and unbiased comparison of deep learning architectures. We find that an architecture that uses k-mer embedding to represent the sequence, a convolutional layer and a recurrent layer, outperforms all other methods in terms of model accuracy. Our work provides guidelines that will assist the practitioner in choosing the best architecture for the task at hand, and provides some insights on the differences between the models learned by convolutional and recurrent networks. In particular, we find that although recurrent networks improve model accuracy, this comes at the expense of a loss in the interpretability of the features learned by the model. Availability and implementation: The source code for deepRAM is available at this <a href="https://github.com/MedChaabane/deepRAM" target="_blank">https://github.com/MedChaabane/deepRAM</a></li></ul></li> <li><p><a href="https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-019-3039-3" target="_blank">A classification model for lncRNA and mRNA based on k-mers and a convolutional neural network</a></p> <ul> <li>We established a classification model for lncRNA and mRNA based on k-mers and the convolutional<br> neural network. The classification accuracy of the model with 1-mers, 2-mers and 3-mers was the highest, with an<br> accuracy of 0.9872 in humans, 0.8797 in mice and 0.9963 in chickens, which is better than those of the random forest,<br> logistic regression, decision tree and support vector machine.</li></ul></li> </ul>
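The k-mer encoding idea quoted in the background notes above (GACGACG -> [GAC][GAC]...) can be sketched as follows. The id scheme here is an illustrative base-4 index over A/C/G/U, not the 47/47/56 mapping shown in the note, and a trailing partial k-mer is simply dropped for brevity.

```python
from itertools import product

def kmer_tokens(seq, k=3, stride=None):
    """Encode a nucleotide sequence as k-mer token ids rather than
    single-nucleotide ids. By default k-mers are non-overlapping
    (stride == k), matching the GACGACG -> [GAC][GAC]... example;
    pass stride=1 for overlapping k-mers."""
    stride = k if stride is None else stride
    vocab = {''.join(p): i for i, p in enumerate(product('ACGU', repeat=k))}
    return [vocab[seq[i:i + k]] for i in range(0, len(seq) - k + 1, stride)]
```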
Mechanisms of Action (MoA) Prediction
2nd Place Solution - with 1D-CNN (Private LB: 0.01601)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>1. Overview</h2> <p>First of all, many thanks to the Kaggle team and the Laboratory for Innovation Science at Harvard for providing a priceless dataset and hosting this great challenge. This is my first Kaggle competition, and I have to say I learned a lot from the valuable community. </p> <p>This approach consists of 3 single models:</p> <ul> <li>1D-CNN</li> <li>TabNet</li> <li>DNN</li> </ul> <p>and the final submission is generated by a weighted average of the single models' outputs.</p> <p>The table below lists the performance of the single models and the final blending in this scheme. The most important part of this scheme may be 1D-CNN 0.01601 (private lb). The final blending only improves by 0.00002 on this basis (private lb 0.01599).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2F81016106a1046153d07cdfbdc8c43173%2Fperformance.jpg?generation=1607494441438421&amp;alt=media" alt=""></p> <p>The flow chart below outlines the technical details, which I will introduce in the following sections. </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2Fbb5d2945d14f9b148f86f462e4fb7d97%2Fflowchart.jpg?generation=1607494473918650&amp;alt=media" alt=""></p> <h2>2. Local CV</h2> <p>One of the most important objectives of this challenge is to test the performance of the model for drugs that have never appeared in the public dataset. 
<br> For this, I used the cv stratification code posted by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> (<a href="https://www.kaggle.com/c/lish-moa/discussion/195195)" target="_blank">https://www.kaggle.com/c/lish-moa/discussion/195195)</a>.</p> <p>Generally in genomics and biomedical data, sample type, instrument error, reagent quality and other systematic deviations sometimes affect the distribution significantly. This is inevitable in the real world, so proper assessment is very important. Here's an example: when a batch effect exists, "leave batch out" can reflect the generalization ability of the model more objectively than a random kfold-split. <br> In this competition, although not very sure, this "leave drug out" strategy seems to reduce the risk of overfitting to a specific group of drugs ( public lb 4th; private lb 2nd ).</p> <h2>3. Pre-processing</h2> <h4>Sample Normalization</h4> <p>There are some assumptions (may be wrong):</p> <ul> <li>The raw biological data are often not comparable due to the inevitable existence of systematic errors (e.g., data cannot be produced on the same machine, experiments cannot be performed by the same person)</li> <li>Even with totally different treatments, the number of significantly different features won't be too large. </li> <li>It can simply be assumed that the distributions of most samples are similar.</li> </ul> <p>Based on these assumptions, samples were normalized by the distribution of gene expression and cell viability separately. </p> <p>In short, the gene data in each sample is subtracted from the mean value of 25% and 75% quantiles .<br> Similarly, the cell data is subtracted from the mean of 25% and 72% quantiles, and then divided by the 4 + 75% quantiles of the new distribution. 
The slightly different steps for genes and cells are determined by the normality of the distribution after treatment.</p> <p>Generally, sample normalization should be implemented before feature conversion.</p> <h4>Feature Transformation</h4> <p>After sample normalization, quantile transformation is applied to numerical features.</p> <h2>4. Feature Engineering</h2> <p>There is nothing special in feature engineering, even no variance filtering in the 1D-CNN model. Different feature processing methods in the other two models are only used to increase the diversity. </p> <ul> <li><p>PCA features are used for all models with different n_components (50 genes + 15 cells for 1D-CNN, 600 genes + 50 cells for TabNet and DNN).</p></li> <li><p>Statistical features (such as sum, mean) and combination features are used in TabNet.</p></li> <li><p>A variance filter is used in TabNet and DNN.</p></li> <li><p>In addition, dummy variables (cp_time, cp_dose) are removed in all models.</p></li> </ul> <h2>5. Loss</h2> <p>To deal with the imbalance of targets and reduce overfitting to specific ones, BCELoss with label smoothing and pos_weight was used in all single models.</p> <p>As mentioned by others, label smoothing performed well by reducing the overconfident probabilities.</p> <p>In addition, target weights are slightly adjusted by the reciprocal of class frequency. </p> <p>Specifically, for each target i, the weight is set to log ( Fmin + 100 ) / log ( Fi + 100 ), where Fi is the number of positive samples in target i, and Fmin denotes the minimum positive count over all targets. The constant term 100 is added to prevent a target with low frequency (e.g., with 1 positive sample) from affecting the model extremely, and the log conversion is to make it smooth.</p> <h2>6. Modelling</h2> <h4>1) 1D-CNN</h4> <p>This single model achieves the best performance in this approach ( private score : 0.01601 ). 
Using such a structure in tabular data is based on the idea that:</p> <ul> <li>CNN structure performs well in feature extraction, but it is rarely used in tabular data because the correct feature ordering is unknown.</li> <li>A simple idea is to reshape the data directly into a multi-channel image format, and the correct sorting can be learned by an FC layer through back propagation.</li> </ul> <h6>Model Architecture</h6> <p>Based on these ideas, a model which extracts features through 1D-CNN (which performed better than 2D and 3D in experiments) is implemented. The figure below shows the main structure.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2Ffdf0f6ba40f129524fecc26235fd38c5%2F1D-CNN.jpg?generation=1607494535889507&amp;alt=media" alt=""></p> <p>As shown above, the feature dimension is increased through an FC layer first. The role of this layer includes providing enough pixels for the image by increasing the dimension, and making the generated image meaningful by feature sorting.<br> Next, the data is directly reshaped into image format (size 16*1, channel 256).<br> Like the basic CNN model, features are extracted in the next several 1D-Conv layers with a shortcut-like connection. <br> Finally, the extracted features are used to predict targets through an FC layer after flattening.</p> <h6>Pre-training</h6> <p>Additionally, non-scored targets are used alone for pre-training. In this step, only the 33 non-scored targets that are most correlated to the scored targets are trained for only 1 epoch. The purpose of this is only to make the model better initialised. Irrelevant targets and more training steps may lead to unexpected consequences.</p> <h4>2) TabNet</h4> <p>The TabNet used in this approach is slightly modified from a public kernel (<a href="https://www.kaggle.com/kushal1506/moa-tabnet-inference" target="_blank">https://www.kaggle.com/kushal1506/moa-tabnet-inference</a>) . 
There is nothing special except the preprocessing method mentioned above. </p> <h4>3) DNN</h4> <p>Similarly, the DNN model is a modified version learned from a public notebook (<a href="https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids/comments" target="_blank">https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids/comments</a>), with pre-training on all targets (scored &amp; non-scored) and fine-tuning on scored ones. </p> <h2>7. Blending</h2> <p>3 single models are blended as the final submission. Compared with 1D-CNN alone, the final blending only improves by 0.00002 (private LB 0.01599), perhaps due to the lack of diversity or the insufficient number of single models.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1851802%2F992b5b32870f4571ad9cf9eb632c5443%2Fcorrelation%20between%20single%20models.jpg?generation=1607494569218486&amp;alt=media" alt=""></p> <h2>8. Source Code</h2> <p>You can find the training and inference code here: <a href="https://github.com/baosenguo/Kaggle-MoA-2nd-Place-Solution" target="_blank">https://github.com/baosenguo/Kaggle-MoA-2nd-Place-Solution</a></p>
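<p>The per-target weighting described in the Loss section, weight_i = log(Fmin + 100) / log(Fi + 100), can be sketched as follows. The positive-sample counts here are made-up toy values:</p>

```python
import numpy as np

def target_weights(pos_counts, c=100.0):
    """Per-target loss weights: log(Fmin + c) / log(Fi + c)."""
    f = np.asarray(pos_counts, dtype=float)
    return np.log(f.min() + c) / np.log(f + c)

# toy positive-sample counts for 4 targets
w = target_weights([1, 10, 100, 800])
```

<p>The rarest target gets weight 1.0 and more frequent targets get smoothly smaller weights, which can then be passed to the loss as per-target multipliers.</p>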
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
First solo gold, 7th place solution: transformers with convolution/deconvolution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Before going into my solution, I would like to thank many people. First, I would like to thank <a href="https://www.kaggle.com/herbison" target="_blank">@herbison</a> for helping me clear my name when I was banned (wrongly) in the recent major compliance action mainly against angtk.ai. Then the kaggle compliance team <a href="https://www.kaggle.com/inversion" target="_blank">@inversion</a> for looking into my case and unbanning me. This would not have been possible without them. </p> <p>Also thanks to people who questioned the lb results <a href="https://www.kaggle.com/cpmp" target="_blank">@cpmp</a>, <a href="https://www.kaggle.com/group16" target="_blank">@group16</a>, <a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a> who turned out to be right and thanks to the stanford team for looking into the issue and rescoring everything. I had given up yesterday thinking that maybe deep learning just did not work on this RNA problem</p> <p>Many thanks to the stanford team (@dosoon, <a href="https://www.kaggle.com/rhijudas" target="_blank">@rhijudas</a>, <a href="https://www.kaggle.com/wayment" target="_blank">@wayment</a>, and <a href="https://www.kaggle.com/andywatkins" target="_blank">@andywatkins</a>) for having hosted an amazing and important competition. 
Special thanks to <a href="https://www.kaggle.com/andywatkins" target="_blank">@andywatkins</a> for explaining RNA structures and helping me understand them better.</p> <p>Many of you are probably aware of the CHAMPS competition a year ago, where all the top solutions made use of transformers; they were my inspiration going into this competition. </p> <p>Last but not least, thanks to my PI for allowing me to do this competition as part of my research work.</p> <p>This competition was very short and I made many mistakes… I made some big improvements last Sat and had to go into ensembling in a hurry, so with more time I could probably have come up with a better score, but this is probably the same for everyone in the competition and I am happy with the final results either way.</p> <h1>Core ideas</h1> <p>I would say my solution is the simplest manifestation of my basic understanding of RNA and deep learning. I did not use any tricks and just focused on architectural design. There is no feature engineering, and minimal post- or pre-processing. Everything was executed simply and directly. Here I will talk about the main ideas; let me know if something is not clearly explained. </p> <ol> <li>Self-attention to learn global dependencies</li> <li>Convolutions to provide local connectivity</li> <li>RNA should be treated as molecules (covalently and locally bonded nucleotides with the chance to form non-covalent bonds) instead of sequences (this means there should be no positional encoding as was present in BERT and the original transformer)</li> </ol> <h1>Core architecture: ConvTransformerEncoder</h1> <p>Most people know that transformers do not work on this problem if you directly input single-nucleotide encodings into the transformer encoder. What works instead is, rather than using self-attention to obtain a nucleotide-to-nucleotide attention map, to aggregate single nucleotides locally using 1D convolutions before self-attention. 
You can say this is a region-to-region attention mapping (and hence no padding). </p> <p>Early in the competition, I thought that adding the BPP matrix into the attention function as a bias had to improve the results. This makes sense since I think that secondary structures stabilize RNA. I used the same idea (mathematically) as the CHAMPS top solution <a href="https://www.kaggle.com/c/champs-scalar-coupling/discussion/106575" target="_blank">https://www.kaggle.com/c/champs-scalar-coupling/discussion/106575</a>. This idea should be attributed to <a href="https://www.kaggle.com/zkolter" target="_blank">@zkolter</a>.</p> <p><img src="https://raw.githubusercontent.com/Shujun-He/images/main/modified_self_attention.png" alt=""></p> <p>Adding the BPP+distance matrix turned out to lead to significantly better results. Since I use no padding in any of the 1D convolution operations, the BPP matrix should also be processed into a region-to-region mapping: 2D convolution with the same kernel size and no padding gives a direct correspondence to the nucleotide region-to-region mapping on a 2D level, and the result can be directly added to the attention function QK^T before the softmax.</p> <p>Since convolutions without padding lead to reduced-dimensional encodings, this type of convolution+self-attention module cannot be stacked to form a deeper model using residual connections. Therefore, I thought I would just use the opposite of convolution, namely deconvolution, to retrieve full-dimensional encodings. With residual connections, I now have a module that can theoretically be infinitely stacked to form models of arbitrary depth. 
This is also done for the BPP matrix so a deeper transformation can be used to obtain a better bias that is fed into the self-attention function.<br> <img src="https://raw.githubusercontent.com/Shujun-He/images/main/arch.PNG" alt=""></p> <p>Aside from the typical hyperparameters for the transformer encoder, here I mainly introduce one more parameter k, which is the kernel size for all conv/deconv operations in the same module. I found that when stacking ConvTransformerEncoders, the results were always better when the last module before outputting predictions does single-nucleotide-level self-attention (k=1), preceded by layers where k&gt;1. This makes sense because you would want nucleotides to be looked at by the transformer individually before outputting predictions for each one of them.</p> <h1>Pretraining</h1> <p>I used all available sequences to pretrain on a randomly mutated or masked (with a NULL token) sequence retrieval loss (basically just a softmax to retrieve the correct nucleotide/structure/loop). I think this resulted in larger variance between single models, and worked better during ensembling. Also, I thought it had to be better for the models to have seen longer sequences than not.</p> <h1>Ensembling</h1> <p>Basically just many models of the same core architecture with different depths and kernel sizes. I selected the training set based on signal to noise &gt; [0.25, 0.5, 1]. Some models were trained without positional encoding, and a few without pretraining. During the last hour, I decided to make a submission using only models without positional encoding, which resulted in a shakeup for me (0.34701-&gt;0.34550). 
I also used median averaging (inspired by <a href="https://www.kaggle.com/zkolter" target="_blank">@zkolter</a>'s CHAMPS solution) using code written by <a href="https://www.kaggle.com/rvslight" target="_blank">@rvslight</a>, which came in handy:</p> <pre><code>import numpy as np

def median_avg(predictions, beta=0.5, debug=False):
    '''
    predictions: vector of shape (n_models,)
    beta: if beta is 0.5, then the middle 50% will be averaged
    '''
    sorted_predictions = np.sort(predictions)
    n_model = len(sorted_predictions)
    mid_point = n_model // 2 + 1
    n_avg = int(n_model * beta)
    to_avg = sorted_predictions[mid_point - n_avg // 2 - 1 : mid_point + n_avg // 2]
    if debug:
        print('sorted_list', sorted_predictions)
        print('after_cut', to_avg)
    return np.mean(to_avg)
</code></pre> <p>In my final ensemble, I used beta=0.7. This helps with outliers and such.</p> <h1>Some things that did not work</h1> <p>I tried to get a 3D distance matrix but, based on my discussion with <a href="https://www.kaggle.com/andywatkins" target="_blank">@andywatkins</a>, it seemed impractical and I gave up on it. <a href="https://www.kaggle.com/heng" target="_blank">@heng</a> also tried to get 3D structures, I wonder if this worked for him? </p> <p>Adding RNNs did not work (or gave minimal benefit with additional computational overhead). I thought that by adding the positional distance matrix into the self-attention function, the self-attention already simulates an RNN, so this makes sense.</p> <p>Single-label models did not work. The labels are highly correlated so this makes sense as well.</p>
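<p>The biased self-attention described here, with the processed BPP+distance matrix added to QK^T before the softmax, can be sketched as below. This is my own minimal single-head numpy illustration; the shapes, scaling and random bias are assumptions, not the author's code:</p>

```python
import numpy as np

def biased_attention(x, bias, wq, wk, wv):
    """Single-head self-attention with an additive bias on the attention logits.

    x:    (L, d) sequence encodings
    bias: (L, L) stand-in for the processed BPP + distance matrix, added to QK^T
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    logits = q @ k.T / np.sqrt(q.shape[-1]) + bias  # bias enters before softmax
    a = np.exp(logits - logits.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)
    return a @ v

rng = np.random.default_rng(0)
L, d = 6, 8
x = rng.normal(size=(L, d))
bias = rng.normal(size=(L, L))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = biased_attention(x, bias, wq, wk, wv)
```

<p>A very negative bias entry suppresses the corresponding attention weight to zero, which is how the BPP/distance information steers where each region attends.</p>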
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
51th place solution. Single Model. Private Score 0.35175, Public Score 0.24019.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>score update~!<br> no crop and pad : 0.35215<br> crop only : 0.35274<br> crop and pad : 0.35175</p> <hr> <p>I have uploaded my code.</p> <p>I built on <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s code.<br> Thanks for sharing the nice code.</p> <p>The part where my code differs most from other kagglers' code is that I used padding and cropping to prevent overfitting to the different sequence lengths of the private and public data.</p> <p><a href="https://www.kaggle.com/ttagu99/16th-place-single-model-private-score-0-40760" target="_blank">https://www.kaggle.com/ttagu99/16th-place-single-model-private-score-0-40760</a></p> <p>I want to go back to the days without Covid.<br> It's been too long since I went on a trip.<br> I hope the vaccine comes out soon~.</p>
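<p>The padding/cropping idea, i.e. adjusting sequences to a common length so the model does not overfit to the 107-base public vs 130-base private sequence lengths, can be sketched as below. This is my own minimal illustration (the token ids and padding value are made up), not the author's code:</p>

```python
import numpy as np

PAD = 0  # assumed id for the padding token

def pad_or_crop(tokens, target_len, rng):
    """Randomly crop sequences longer than target_len; right-pad shorter ones."""
    tokens = np.asarray(tokens)
    n = len(tokens)
    if n > target_len:  # random crop keeps a contiguous window
        start = rng.integers(0, n - target_len + 1)
        return tokens[start:start + target_len]
    return np.pad(tokens, (0, target_len - n), constant_values=PAD)

rng = np.random.default_rng(0)
short = pad_or_crop(np.arange(1, 108), 130, rng)  # a 107-base sequence padded to 130
long_ = pad_or_crop(np.arange(1, 131), 107, rng)  # a 130-base sequence cropped to 107
```

<p>Training on both padded and cropped views exposes the model to the length it will see at test time without ever changing the underlying bases.</p>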
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
From public kernels to silver in a few days (52nd place before and after rescore)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratz to the winners! Even though my result is not really good, I felt like sharing anyway.</p> <p>I started working on this competition 5 days before the end, but managed to get a fairly nice result in a short time; here's a brief overview of what I did.</p> <h3>Process</h3> <p><strong>Day 1:</strong> Study the top-scoring public kernels </p> <p>I submitted a first blend using a bunch of public notebooks as a benchmark to beat. At the time it was in the low bronze range (0.238~) if I remember correctly. I mostly focused on <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">the PyTorch AE one</a> since I believed it had the most potential. </p> <p><strong>Day 2:</strong> Improve the transformer model</p> <p>With simple modifications, I was able to reach 0.238~ with a single model; here's what I changed:</p> <ul> <li>Get rid of the checkpointing, which in my opinion is not a really good practice</li> <li>Use linear scheduling with warmup</li> <li>Add the augmentations and TTA from <a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">the GRU-LSTM kernel</a></li> <li>Tweak some parameters, but in the end I think the only thing I changed is the batch size, which I set to 32. </li> <li>Use the group k-fold from <a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">the GRU-LSTM kernel</a> as a validation strategy</li> </ul> <p>Other things were kept as-is; I mostly did code refactoring. 
</p> <p><strong>Day 3:</strong> Build an ensemble</p> <p>I trained a bunch of transformers with slightly different hyperparameters, to benefit from diversity and have a more robust submission.</p> <p>I also looked into post-processing, and found that my CV score benefited from some linear transformations such as <code>y = x * a + b</code>. Using <code>a=0.97</code> and <code>b=0.02</code> gave a private boost of <code>0.0014</code>, but that was before the rescore. The boost is only <code>0.0004</code> after it.</p> <p>At the end of the day, I was ~30th public by blending a bunch of transformers.</p> <p><strong>Day 4:</strong> Add diversity </p> <p>I trained an RNN architecture to hopefully push my results a bit further. The gain on the overall ensemble was very small but I still added an RNN blend to my previous best LB (with weighting 0.9 / 0.1).</p> <p><strong>Day 5:</strong> Surprise day </p> <p>I looked into pseudo-labelling; the motivation was to make my models more robust to the longer molecules. It gave me a 0.001 boost on single models, but unfortunately my ensembles did not benefit much from it. During the last two days my public went up by 0.0001 I believe.</p> <p><strong>Update:</strong> After the re-run, it turns out that my submission which only used models trained with pseudo-labelling was the best one. </p> <ul> <li><p>Private : 0.35092</p></li> <li><p>Public : 0.23679</p> <p>52nd, the same rank as the old blend got in the previous rescore, which is a funny coincidence. But with a different submission!</p></li> </ul> <p>One of my single models trained with PL scored private LB <code>0.35186</code>.</p> <h3>Results</h3> <p>I dropped 18 places overall, which in the end is not too bad considering that I did not really validate my models. I was in fact saved by the surprise day and the pseudo-labelling. 
</p> <p>Many thanks to the people who shared, without whom I wouldn't have been able to get a decent score given that I procrastinated way too much before really working on this interesting challenge.<br> Especially <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a> <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> </p> <p>Also, I was lurking in the forum for a while, so I was already familiar with the competition; I did not really do everything in 5 days. But my early experiments scored so poorly that I kinda gave up.</p> <p>If anyone is interested, an example of a trained model is available here: <a href="https://www.kaggle.com/theoviel/open-vaccine-pl" target="_blank">https://www.kaggle.com/theoviel/open-vaccine-pl</a><br> It's really dirty and I don't give any credits, which I apologize for.</p>
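<p>The "linear scheduling with warmup" from Day 2 computes a learning-rate multiplier that ramps up and then decays; a generic sketch of the usual schedule (my own illustration, not the exact code used) is:</p>

```python
def linear_schedule_with_warmup(step, warmup_steps, total_steps):
    """LR multiplier: ramp 0 -> 1 over warmup_steps, then decay linearly to 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# multiply the base learning rate by this factor at every optimizer step
factors = [linear_schedule_with_warmup(s, 10, 110) for s in range(0, 120, 10)]
```

<p>The short warmup stabilizes the early transformer updates, and the linear decay to zero tends to help the final epochs settle.</p>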
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
89-th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi, Kagglers!</p> <p>First of all, I would like to say <strong>Thanks</strong> to <a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a> and <a href="https://www.kaggle.com/lzcabc123456" target="_blank">@lzcabc123456</a>. It was a dream team. None of us had any domain knowledge, but we managed to do some domain research (mostly <a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a>) and apply our Deep/Machine Learning skills to such an essential task nowadays. Also great thanks to the Kaggle team and organizers for such an important competition.</p> <p>So let's go to our modelling and inference insights!</p> <h2>Data Preparation</h2> <p>We had several approaches to data preparation:</p> <ul> <li>Some of our models include filtered data (<code>signal_to_noise &gt; 1</code>)</li> <li>Some include all train data</li> <li>Also we tried adding augmented data, from this <a href="https://www.kaggle.com/its7171/how-to-generate-augmentation-data" target="_blank">notebook</a>, but mostly it boosted neither the CV nor the public LB score </li> </ul> <h2>Feature extraction</h2> <p>First we used only <code>sequence</code>, <code>structure</code>, <code>predicted_loop_type</code></p> <p>Then we did some feature engineering. Features that worked well:</p> <ul> <li><code>structure_bpps</code> - bpps row from the paired element. 
The paired element is taken from the structure</li> <li><code>distance2pair</code> - distance to the paired element, normalized by sequence length </li> <li><code>bpps_sum</code> - sum of bpps by columns for each row</li> <li><code>bpps_max</code> - max of bpps by columns for each row</li> </ul> <h2>Validation Scheme</h2> <p>We used two validation schemes:</p> <ol> <li>First - we computed the median reactivity and made a StratifiedKFold - <a href="https://www.kaggle.com/vladimirsydor/basic-visualization-and-stratification-strategy" target="_blank">https://www.kaggle.com/vladimirsydor/basic-visualization-and-stratification-strategy</a></li> <li>Second - we stratified by median reactivity and errors in the target</li> </ol> <p>We used both of them and they correlated well with each other and with the Public LB</p> <h2>Model</h2> <p>We mostly used a GRU-based model </p> <p>Input, Features -&gt; Embedding, Feature Prenet -&gt; Shared Prenet -&gt; CustomBPPSAttention -&gt; SeqEncoder (taken from this <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">notebook</a>) -&gt; 2-4 GRU blocks -&gt; Linear Multilayer classifier </p> <p>GRU Block - GRU -&gt; CustomBPPSAttention -&gt; Linear -&gt; BatchNorm -&gt; Mish -&gt; Dropout </p> <p>Linear Multilayer classifier - (Linear -&gt; BatchNorm -&gt; Mish -&gt; Dropout) x2 -&gt; Linear </p> <p>CustomBPPSAttention - uses the <code>bpp</code> matrix as an attention matrix over the sequence and concats the attention result to the input sequence </p> <p>Also we tried to use ResNets on bpps matrices or on stacked [bpps, structure, distance] matrices; it did not boost the score but gave some different models for the final ensemble </p> <p>And we tried some Recurrent decoders based on GRUs. 
They also did not boost the score but gave diversity to the ensemble</p> <p>One more great model used SeqEncoder and Attention at the end </p> <p>All in all:</p> <ul> <li>Replacing ReLU with Mish boosted performance </li> <li>Adding BatchNorms boosted performance </li> <li>Adding prenets and Linear layers before GRUs boosted performance </li> <li>Adding SeqEncoder boosted performance</li> </ul> <h2>Training Scheme</h2> <ul> <li>We used Ranger (RAdam + Lookahead) and it greatly outperformed Adam </li> <li>Tuning ReduceLROnPlateau also boosted the score</li> <li>Smaller batch sizes (8, 16, 32) performed better than bigger ones</li> <li>Also we used denoising pretraining in some of our models, as proposed in this <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">notebook</a></li> <li>We used Crop Augmentation - we fed a random part of the input sequence and predicted the corresponding part of the target sequence. This performed worse on CV and the Public LB but boosted the Private LB score greatly</li> <li>Also we tried adding white noise to the target, which also made the models more stable</li> </ul> <h2>Choose Final Blend</h2> <p>We ended up with some models from each teammate:</p> <ul> <li><a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a> gave 59 models to his blend part !!!!</li> <li><a href="https://www.kaggle.com/lzcabc123456" target="_blank">@lzcabc123456</a> gave his 2 best models (one was our best on the Public LB)</li> <li>I gave 10 diverse models </li> </ul> <p>And we made a BLENDOFBLENDS </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1690820%2Fa70492c1ad41ccba1780c2dc81b2dffd%2FScreenshot%20from%202020-10-07%2012-03-10.png?generation=1602061416100633&amp;alt=media" alt=""></p> <h2>Training/Inference Example</h2> <p><a href="https://www.kaggle.com/yakuben/openvaccine-9th-place-single-model" target="_blank">https://www.kaggle.com/yakuben/openvaccine-9th-place-single-model</a></p> 
<h2>Framework</h2> <p>For training and experiment monitoring, <a href="https://www.kaggle.com/yakuben" target="_blank">@yakuben</a> and I used the PyTorch and Catalyst frameworks. Great thanks to <a href="https://www.kaggle.com/scitator" target="_blank">@scitator</a> and the Catalyst team! And <a href="https://www.kaggle.com/lzcabc123456" target="_blank">@lzcabc123456</a> used TF2 and Keras</p> <h2>P.S</h2> <p>Thanks to all participants and organizers!</p> <p>And happy Kaggling!</p>
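<p>The bpps-derived features listed in the Feature extraction section can be sketched as below; this is my own toy illustration (a 3x3 probability matrix and a tiny dot-bracket structure), not the team's code:</p>

```python
import numpy as np

def bpps_features(bpps):
    """Per-position sum and max of the (L, L) base-pairing-probability matrix."""
    return bpps.sum(axis=1), bpps.max(axis=1)

def distance_to_pair(structure):
    """Distance to the paired base, read off the dot-bracket structure and
    normalized by sequence length (0 for unpaired positions)."""
    stack, pair = [], [-1] * len(structure)
    for i, c in enumerate(structure):
        if c == "(":
            stack.append(i)
        elif c == ")":
            j = stack.pop()
            pair[i], pair[j] = j, i
    L = len(structure)
    return np.array([abs(i - p) / L if p >= 0 else 0.0 for i, p in enumerate(pair)])

bpps = np.array([[0.0, 0.9, 0.0],
                 [0.9, 0.0, 0.1],
                 [0.0, 0.1, 0.0]])
bpps_sum, bpps_max = bpps_features(bpps)
dist = distance_to_pair("(.)")
```

<p>Each of these is a per-position vector, so it can be concatenated to the sequence/structure/loop-type embeddings as an extra input channel.</p>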
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
11th Lightweight Single Architecture [Conv-GCN]-> RNN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><strong>Introduction</strong><br> We had been using this lightweight architecture while sitting 2nd on the LB one week before the end of the competition. It is about 2, if not 3, times faster than the shared kernel from <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>.<br> Just to comment, <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> did release the gold-range secret at that time. </p> <p><strong>High-Level Idea</strong><br> How do we pass information from one sequence position to another, given that they are dependent? </p> <p><em>Passing information from position i to j</em><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F690886%2F83da03ea10245b6867febd170f6188f4%2FScreenshot%202020-10-08%20at%207.51.09%20AM.png?generation=1602114976893373&amp;alt=media" alt=""></p> <p>Our model achieves this in 3 steps, as shown in the image above.</p> <ol> <li>Gathering local information with 1D Conv.</li> <li>Passing remote information with the Paired/Skip connections across the sequence dimension.</li> <li>Gathering local information with 1D Conv. 
again</li> </ol> <p>Finally, pushing the information to a longer range with Bi-directional RNNs.</p> <p><strong>Notebook</strong><br> Here is the notebook for this solution; note that it is a simplified version, with some blendings and optimizations removed.<br> <a href="https://www.kaggle.com/kingychiu/mrna-base-degradation-keras-cnn-gcn-rnn" target="_blank">https://www.kaggle.com/kingychiu/mrna-base-degradation-keras-cnn-gcn-rnn</a></p> <p><strong>Model</strong><br> The Conv. step (1+2+3), where the GraphConv is simply a dot product with a learnable weight + filter from<br> <a href="https://github.com/danielegrattarola/spektral/blob/a2cd265a9440831afc441c1774dd1b7d080a59f8/spektral/layers/convolutional/graph_conv.py#L96" target="_blank">https://github.com/danielegrattarola/spektral/blob/a2cd265a9440831afc441c1774dd1b7d080a59f8/spektral/layers/convolutional/graph_conv.py#L96</a></p> <pre><code>def conv_block(x):
    conv = tf.keras.layers.Conv1D(hidden_dim * 2, 5, padding='same', activation='tanh')(x)
    gcn = GraphConv(graph_channels)([conv, As_in])
    gcn_2 = GraphConv(graph_channels)([conv, bpp_inputs])
    gcn = tf.keras.layers.Conv1D(graph_channels, 5, padding='same', activation='tanh')(gcn)
    conv = tf.keras.layers.Concatenate()([conv, gcn, gcn_2])
    conv = tf.keras.layers.Activation("relu")(conv)
    conv = tf.keras.layers.SpatialDropout1D(0.1)(conv)
    return conv
</code></pre> <p>Putting it together with RNNs:</p> <pre><code>hidden = conv_block(reshaped)
hidden = gru_layer(hidden_dim, dropout)(hidden)
hidden = conv_block(hidden)
hidden = gru_layer(hidden_dim, dropout)(hidden)
</code></pre> <p><strong>Acknowledgment</strong></p> <ol> <li>Thanks <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> for the solid validation in this notebook <a href="https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation" target="_blank">https://www.kaggle.com/its7171/gru-lstm-with-feature-engineering-and-augmentation</a></li> </ol>
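<p>The GraphConv used above propagates node features over an adjacency matrix before projecting them; a minimal dense numpy sketch (my own illustration, omitting spektral's degree normalization and bias) is:</p>

```python
import numpy as np

def graph_conv(x, a, w):
    """Minimal graph convolution: mix neighbor features via the adjacency
    matrix, then project with a learnable weight.

    x: (L, F) node features, a: (L, L) adjacency, w: (F, C) weights.
    """
    return a @ x @ w

# toy chain graph of 4 nucleotides: each position connected to its neighbors,
# standing in for the covalent-backbone adjacency (As_in); the bpp matrix
# plays the same role for the paired connections
a = np.eye(4, k=1) + np.eye(4, k=-1)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
w = rng.normal(size=(3, 2))
out = graph_conv(x, a, w)
```

<p>Each output row is a weighted sum of its graph neighbors' features, which is exactly the "passing remote information across the sequence dimension" step.</p>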
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
3rd Place Write-up [UPDATED]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I truly want to thank Eterna and Kaggle for hosting this meaningful competition. Thanks to all the kagglers for sharing great ideas and notebooks; I learnt a lot in this competition. And congratulations to all winning teams, especially my former teammate <a href="https://www.kaggle.com/nullrecurrent" target="_blank">@nullrecurrent</a> for the 1st place!! </p> <p>These two days were like a roller coaster: I dropped from 4th place on the LB to 720th place on the first evaluation, and climbed back to 3rd place today. I think my shake was among the largest in this competition… which means my solution is not that robust in different situations, compared to <a href="https://www.kaggle.com/youhanlee" target="_blank">@youhanlee</a> and <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>. There are still many things to learn. </p> <p>Before this competition, I knew little about mRNA and degradation, so most techniques I used were from NLP competitions. I will summarize my approaches below: </p> <h3>Data</h3> <ul> <li>I filtered out data based on <code> *_error_* </code> columns larger than [6,8,10] and train labels less than -0.5</li> <li>The first data augmentation I did was reversed sequences (you also need to reverse the other features and labels). This is natural, since as an augmentation in NLP I would change the order of sentences in a paragraph. I noticed the LB benefitted a lot from this data, and the correlation of private predictions decreased. Here the correlation is only calculated on the first 91 positions. 
</li> <li>Then I thought adding more augmented data to train and test would make sense, so I used <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s great <a href="https://www.kaggle.com/its7171/how-to-use-arnie-on-kaggle-notebook" target="_blank">notebook</a> to generate possible structures and predicted loop types for each sample, using them as training and test augmentation. </li> <li>So my final training data includes (1) original data (2) reversed original data (3) augmented data from Arnie (4) reversed augmented data. My fold prediction is also an average of these four. </li> </ul> <h3>Model</h3> <ul> <li>I mostly used the AE-pretrained GNN notebook by <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> (thanks so much for this great great notebook, I wouldn't have been able to get 3rd without your work). </li> <li>I added different layers of LSTM/GRU/wavenet at the end with different units/parameters for diversity.</li> <li>In general, 2 x 128 units of LSTM or GRU layers at the end worked the best for me. </li> <li>wavenet has very low correlation with the other structures but its score is generally worse; good for blending though. </li> <li>I tried to slightly increase the units of multi-head attention, change the dropout of different layers, etc. But in general the original structure is very good already. </li> </ul> <h3>Features</h3> <ul> <li>I again used the backbone of <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s code :)</li> <li>In addition to his original node features, I added bpp max, bpp second max, diff between max and 2nd max, bpp sum, pair type, etc. </li> <li>In addition to <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s structure adjacency matrix, I also added two matrices to specify the neighbors of each node's pair. This feature alone improved the LB by 20bps. 
</li> </ul> <h3>3D distance</h3> <ul> <li>I managed to get 3D structures for all samples from <a href="http://rnacomposer.cs.put.poznan.pl/" target="_blank">http://rnacomposer.cs.put.poznan.pl/</a>, as discussed by <a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a> and <a href="https://www.kaggle.com/shujun717" target="_blank">@shujun717</a>. This website can calculate predicted 3D structures given the sequence and original structure. But the problem is that even the batch mode only takes a maximum of 10 sequences. So I wrote a simple <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189574#1041899" target="_blank">script</a> using Selenium to call their server 600+ times to get all the 3D structures in .pdb files. </li> <li>I have now published the 3D data in this post, along with a starter notebook: <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189604" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189604</a></li> <li>I parsed these pdb files to get the 3D distances between all C1 atoms, to replace the original distance matrix from <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s notebook. I noticed about a 10-15bps increase in LB and a decrease in prediction correlation. </li> <li>I feel that I didn't use the full power of these .pdb files, but I didn't have enough time to dig deeper. </li> </ul> <h3>Training Strategy</h3> <ul> <li>5-fold stratified CV based on sequence edit distance. </li> <li>Based on <a href="https://www.kaggle.com/xhlulu" target="_blank">@xhlulu</a>'s loss function (version v9), I used different weights for each target column. I tried [0.2, 0.3, 0.3, 0.1, 0.1] and [0.2, 0.4, 0.2, 0.1, 0.1]. </li> </ul> <h3>Pseudo Labeling</h3> <ul> <li>As also mentioned in <a href="https://www.kaggle.com/nullrecurrent" target="_blank">@nullrecurrent</a>'s great write-up, using the first 91 positions is the key here. 
After reading her write-up, I have to say that the save and load methodology is genius :). But I found that in most situations, even when PL worsens the validation score (by as much as 30~50 bps), it helps the next epoch (trained on training samples) to converge much faster than not using the PL. </li> <li>This methodology was tricky because if I added too much PL, the CV score became unreliable. So I only added a few epochs of PL at the beginning of each fold to help the model converge. </li> </ul> <h3>Blending</h3> <ul> <li>The final model is a simple blend of 20+ models. </li> <li>My strongest single model uses all of the above, with LB 0.23125 and PB 0.34429</li> <li>I have a safe submission, which blended 4 models that only used GNN and very little PL, because I feel that RNN may still be affected by the length of the sequence, and using blended PL might cause overfitting too. This blend ended up with PB 0.34540. </li> </ul> <h3>Final Thoughts</h3> <ul> <li>For me, <strong>the key to climbing the LB was 4x augmented data and PL</strong>. Adding some RNN layers at the end of <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>'s model, with some basic bpp features, will get you into the 0.241x range. Reversing the sequence helps by about 20bps, Arnie augmentation by about 10bps, and PL by ~50bps if used correctly. </li> </ul>
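<p>The "reversed data" augmentation above can be sketched as follows. This is a minimal illustration with names of my own choosing: when reversing, the dot-bracket structure needs its brackets swapped (a base paired downstream becomes paired upstream), and at train time the target arrays must be reversed the same way (omitted here).</p>

```python
def reverse_sample(sequence: str, structure: str, loop_type: str):
    """Reverse one RNA sample for train/test-time augmentation."""
    swap = {"(": ")", ")": "("}
    rev_seq = sequence[::-1]
    # reverse the dot-bracket string AND swap bracket directions
    rev_struct = "".join(swap.get(c, c) for c in reversed(structure))
    rev_loop = loop_type[::-1]
    return rev_seq, rev_struct, rev_loop
```

<p>At inference time, predictions made on the reversed input are flipped back before averaging with the forward-direction pass.</p>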
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
1st place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First, many thanks to the Kaggle team and Stanford team for hosting this competition, and for all the re-scoring efforts. Many thanks to people who shared their kernels and ideas - I learnt a lot from you and was inspired by you. And congrats to all winners!</p> <p>My ideas are mainly on the data side (especially the way to use pseudo labeling), as well as some feature engineering. I'm still a beginner/learner in architecture - my initial versions were based on <a href="https://www.kaggle.com/xhlulu" target="_blank">@xhlulu</a>'s <a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">solid gru baseline</a>. I added distance embedding (to be elaborated later in the 'features' section), as <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> did <a href="https://www.kaggle.com/c/nfl-big-data-bowl-2020/discussion/119430" target="_blank">for NFL</a>. Then after <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> published <a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">the excellent ae+gnn kernel</a>, I merged my framework with this one, and it's pretty much as simple as adding a GRU layer before the final dense. (Personally I always enjoyed reading kernels by <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a>. I started learning NN by reading his kernel in the NFL game, and I also learnt a lot from this one!)</p> <p><strong>Data preprocessing and weights</strong></p> <ol> <li>I didn't use the SN_filter. 
Instead, I give a NaN value to individual positions with large errors (started with error&gt;10 and value/error&lt;1.5 - and diversity can be added by varying this). I edited the loss function so that the NaN targets won't contribute to loss during training. About 20K values are labeled as NaN across all 5 targets, which is much less than the data we'd lose by using the SN_filter.</li> <li>I calculated the edit distance between sequences, and did clustering based on it (trying to reverse engineer what the organizer did to the data). I found that many clusters only contain 1 sequence, but some clusters contain as many as 60 sequences. Thus, I decided to give sample weights proportional to <code>1/sqrt(count_in_cluster)</code> </li> <li>My sample weight also favors sequences that are "closer" to the private dataset.</li> <li>I assign column weights [0.3,0.3,0.3,0.05,0.05] to favor the scored columns </li> </ol> <p><strong>Data augmentation</strong></p> <ol> <li>Reverse the sequence at both train and test time. This has better effects when you also encode the start and the end of the sequence.</li> <li>Randomly perturb the targets. Since targets have measurement errors, I randomly perturb the targets proportionally to the error.</li> <li>Some of my versions used <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s <a href="https://www.kaggle.com/its7171/how-to-generate-augmentation-data" target="_blank">arnie augmented data</a>. 
I use this mainly for diversity.</li> </ol> <p><strong>Features</strong></p> <ol> <li>Some standard bpp/pairing based features discussed by many others</li> <li>The "distance" matrix to be used in the distance embedding - it can be calculated as the position difference in the sequence as <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> did in his kernel, or it can also be calculated as the position difference after adjusting for primary pairs: for example, if positions (5,20) are a pair, then 5 has distance 1 to 4,6,20, and distance 2 to 3,7,19,21 - this can be calculated iteratively.</li> <li>The distance to the closest paired position, as well as the distance to the closest unpaired position. This is a very strong feature, which may even deteriorate performance in complex models because (I guess) it prevents the model from learning this feature using Attention layers. However in simple models this helps a lot, and it provides diversity.</li> </ol> <p><strong>Pseudo Labeling (of private test set, as well as a randomly generated data set)</strong><br> Finally my favorite part. I got at least 80 bps from this, and maybe more if time / computing power permits. </p> <ol> <li>I started off trying pseudo labeling the private dataset and positions 68-107 of the train set. It didn't work, which pushed me to do some EDA. Then I realized my predictions (from different single models) have much larger variance for positions 68-107 in train and positions 91-130 in test. I would have been very worried if my predictions were more noisy for positions 68-130, but the std didn't start to increase until position 91. Which made me wonder, are the last 39 positions special/less stable/different from others, so that the hosts also leave them out when scoring? 
Anyway, I decided only to use the first 91 from private to do pseudo labeling.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3652230%2F3d862334265e6423cfb64ccef6bd6878%2Fprivate_noise.png?generation=1602133417143543&amp;alt=media" alt="private data error as function of position"></li> <li>Similar to what I did to the train data, I also NaN out the pseudo labels at positions where my predictions have large STDs. I also randomly perturb the targets proportionally to the STD.</li> <li>Then I train on the train set and the pseudo-label set in turns (for example, 5 epochs of train as a "train section", followed by 2 epochs of PL as a "PL section"). I observed that the PL set makes convergence faster initially, but continues to point in a worse direction at later stages. So I used a "save and load" method (many thanks to the RPG games I've played - from which I learnt this ugly but useful method). Whenever a PL section deteriorates the val score by more than 20 bps, I reload the weights from before this PL section, and continue from there to another train section (effectively skipping that PL section). This works, and immediately increased the public score of my single model by 30 bps! (even 10 bps higher than the blend solution used as PL labels).</li> <li>This is very dangerous, because using blending as PL labels leads to leaks among folds, thus my CV is no longer trustworthy (I thought about doing foldwise PL, but didn't do so due to time constraints). Also, this save&amp;load method is overfitting the val. So basically after I started using PL, the public LB is the only validation set I've got. Then after I used the public LB too many times, I started to be skeptical about it (thus I had an "overfitting?" suffix in my team name if anyone noticed).</li> <li>Being aware of the danger in 4, I didn't do any feature picking or model picking after I started doing PL. 
Instead, I focused a lot on model correlation, and that's why I mentioned "diversity" several times in this post. Moreover, one of my 2 subs (private score 0.34198) was based on the best public score, but there's another one based only on correlation (private 0.34453) - I simply ran clustering methods on all my single models, took a simple mean within each cluster, and assigned weights to aggregate these clusters based on my knowledge/intuition about each cluster of models. - This "safe sub" got me a 300+ place before the rescoring - otherwise I would have been 800+ then.</li> </ol> <p><strong>Pseudo Labeling (from a randomly generated data set)</strong></p> <ol> <li>I thought another potential danger of doing PL on the private dataset (without doing foldwise PL) is that all folds may overfit to the PL labels, thus having less diversity in the private set predictions compared with public ones. That's another reason why I had an "overfitting?" suffix in my name. To overcome this, I randomly generated a set of len=130 sequences, and did pseudo labeling based on them just like I did with the private set.</li> <li>The BPP matrix and the pairing structures of the randomly generated set are calculated using <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s excellent kernel <a href="https://www.kaggle.com/its7171/how-to-generate-predicted-loop-type" target="_blank">for the arnie library</a></li> <li>It turned out my private-set PL didn't overfit, but the random-data PL contributed a lot of diversity in the final blending.</li> <li><strong>Personally, I think using a PL random dataset, together with different/amazing architectures contributed by other people, has the potential for huge improvements</strong> (if we still need better predictions to fight COVID now). The reason is, with a PL random dataset we now have access to unlimited "data" - labeled with a blend of forecasts from different architectures (better than any single model). 
This is then used as PL input to improve all single models. After several turns, all single models will be much better. I got at least 80 bps with only 1 main architecture, so I think improvement may be a couple of percent if we use more diverse architectures contributed by my fellow Kagglers.</li> </ol> <p>Last, I'd say I didn't expect this… My goal (which felt like a dream only 2 weeks ago) was getting a solo gold. I'm still a beginner in DL, and I've got a lot to learn from many of you. So thank you to all Kagglers who made this community beginner/learner friendly.</p>
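<p>The "save and load" PL schedule described above can be sketched as follows. This is a simplified sketch, not the author's code: all callables are placeholders for a real training loop, lower validation score is better, and the 20 bps rollback tolerance is the threshold mentioned in the write-up.</p>

```python
import copy

def train_with_pl_rollback(model_state, train_section, pl_section, evaluate,
                           n_rounds=10, max_degradation=0.0020):
    """Alternate train sections and pseudo-label (PL) sections.

    After each PL section, if the validation score got worse by more
    than `max_degradation` (20 bps), roll back to the checkpoint saved
    before that PL section - effectively skipping it.
    """
    for _ in range(n_rounds):
        model_state = train_section(model_state)
        checkpoint = copy.deepcopy(model_state)   # "save"
        score_before = evaluate(model_state)
        model_state = pl_section(model_state)
        if evaluate(model_state) > score_before + max_degradation:
            model_state = checkpoint              # "load"
    return model_state
```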
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
6th place solution (UPDATED)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><em>2020-10-08 14:22 UTC: The first write-up was written in a hurry, so I've updated the details a bit. I'm really glad I got the gold medal at the end!</em></p> <h4>Model</h4> <p>I trained various versions of the model with different loss and data based on the following four different architectures.</p> <ul> <li>Model 1-a: CNN+GNN (private 0.35045, public 0.24371)</li> <li>Model 1-b: CNN+GNN+GRU (private 0.35192, public 0.24085)</li> <li>Model 2: Modified Version of <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> 's great <a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">kernel</a></li> <li>Model 3: Modified Version of <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a> 's great <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">kernel</a></li> </ul> <p>My "Model 1-a" is a bit similar to the solution of <a href="https://www.kaggle.com/kingychiu" target="_blank">@kingychiu</a>, after thinking about how to propagate the long-term interaction via pairing. Here is the architecture of my model:</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1532198%2Fb1daa7f96e24d96f932f802cc57d324c%2FCNN-GCN.png?generation=1602165368336895&amp;alt=media" alt=""></p> <p><code>Residual Graph Attention</code> is a simple combination of <code>torch.bmm</code> and conv1d. The SE-Residual block is the same as the block of SE-Resnet, only the convolution has been changed from 2d to 1d. 
The kernel size of the 1d convolutions is kept small (3 to 7) to prevent overfitting.</p> <pre><code>class ResidualGraphAttention(nn.Module):
    def __init__(self, ch, dropout):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, kernel_size=7, padding=3),
            nn.BatchNorm1d(ch),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Conv1d(ch, ch, kernel_size=7, padding=3),
            nn.BatchNorm1d(ch),
            nn.ReLU(),
            nn.Dropout(dropout)
        )
        self.relu = nn.ReLU()

    def forward(self, src, attn):
        """
        :param src: feature vector. [batch, features, seq]
        :param attn: attention matrix. [batch, seq, seq]
        :return:
        """
        h = self.conv(torch.bmm(src, attn))
        return self.relu(src + h)
</code></pre> <p>The last relu layer corresponds to what is described in the postprocessing part. The primary and secondary bpps vary among models (contrafold, rnasoft…) to create diversity. Pre-training like <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> 's kernel was applied to all of my models, but I'm not sure how it affected the private score (the public score was not improved at all by pre-training).</p> <p>I was initially unsure of how robust GRU and Transformer were to changes in sequence length, so I focused on modeling CNN-GNN. 
Later on, I noticed that adding a GRU layer improved the public LB a bit (Model 1-b), so I started to focus on the GRU model, but the first model was better after all.</p> <h4>Data</h4> <ul> <li>Replace original structure, predicted_loop_type and bpps files with eternafold &amp; its mfe structure<ul> <li>this gave me a big boost (~0.005 in both public and private, see my comment below)</li> <li>mea-based structure didn't help</li></ul></li> <li>Add bpps from vienna(T=20, 50, 70), contrafold, rnafold as additional channels of the attention layer</li> </ul> <h4>Feature</h4> <ul> <li>max, sum, nb_count, 2ndmax, 3rdmax of bpps</li> <li>not using distance matrix</li> </ul> <h4>Validation</h4> <ul> <li>7-fold StratifiedGroupKFold</li> <li>Grouped by sequence similarity (using agglomerative clustering with hamming distance)</li> </ul> <h4>Loss</h4> <ul> <li>training with all data</li> <li>use various sampling weights like log(snr + 1.1)/2, log(snr + 0.5)/2, …</li> </ul> <h4>Training</h4> <ul> <li>Adam with default parameters</li> <li>The first 30 epochs are trained using 5 columns, and then the weight of the 2 columns not used for scoring is reduced to 0.01</li> </ul> <h4>Ensemble</h4> <ul> <li>Ridge stacking</li> <li>Use all five columns as features for each column's prediction</li> </ul> <h4>Postprocessing</h4> <ul> <li>Clip all predictions to [-0.5, 100] (according to the data description, all ground truths should be greater than -0.5)</li> </ul> <h4>Others</h4> <ul> <li>I tried to use eternabench to look at the effect of sequence length and to predict the effect of scaling, but I did not apply scaling based on these experiments because the results didn't seem clear to me. 
I should have looked more carefully.</li> </ul> <h4>Result</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1532198%2F53ce2be1a5172288889ba72360a41330%2F22b3ff0b8f92a0f6d126e2ebed329864.png?generation=1602042636806898&amp;alt=media" alt=""></p> <p>I finally did it! </p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1532198%2Fe6909d4e5628e74654bda2bf6f9a8fdf%2Fcdb350b981b5515cbd56c48614920847.png?generation=1602167470518665&amp;alt=media" alt=""></p>
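<p>The ridge stacking plus clipping steps can be sketched roughly as below. This is my own self-contained sketch using a closed-form ridge without intercept (the actual solution presumably used a standard library implementation, and the alpha here is an arbitrary choice): for each target column, all five predicted columns of every base model are used as features, and the stacked predictions are finally clipped to [-0.5, 100].</p>

```python
import numpy as np

def ridge_stack(oof_preds, y_true, test_preds, alpha=1.0, clip=(-0.5, 100)):
    """Ridge stacking over base-model predictions.

    oof_preds / test_preds: lists of [n_samples, 5] arrays, one per model.
    Every target column sees ALL predicted columns of ALL models.
    """
    X = np.concatenate(oof_preds, axis=1)        # [n, 5 * n_models]
    X_test = np.concatenate(test_preds, axis=1)
    # closed-form ridge: W = (X'X + alpha*I)^-1 X'Y, all targets at once
    A = X.T @ X + alpha * np.eye(X.shape[1])
    W = np.linalg.solve(A, X.T @ y_true)         # [5 * n_models, n_targets]
    return np.clip(X_test @ W, *clip)
```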
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
4th place solution: code & blog published
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I finally got around cleaning up the code and writing a more elaborate write-up in the form of a blog post!</p> <p>The code can be found <a href="https://www.kaggle.com/group16/covid-19-mrna-4th-place-solution" target="_blank">here on Kaggle</a> or on <a href="https://github.com/GillesVandewiele/covid19-mrna-degradation-prediction" target="_blank">Github</a>.</p> <p>A blog post can be found on <a href="https://gillesvandewiele.medium.com/predicting-mrna-degradation-using-gnns-and-rnns-in-the-search-for-a-covid-19-vaccine-b3070d20b2e5" target="_blank">Medium</a>. As always, I would greatly appreciate any kind of feedback. If there is anything unclear or badly written, you can definitely tell me.</p>
Mechanisms of Action (MoA) Prediction
Thank you all! First time competition, silver medal! My blending nn+tabnet+resnet
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank everyone, I learned a lot from your great notebooks and discussions.</p> <p>My final solution is blending 3 models:<br> NN + Tabnet + Resnet</p> <p>NN public score 0.01832</p> <p>Tabnet: Actually I don't know the score since I run out of submission, I just put it into my blending model. But I think it is around 184x. Since this is a modified version of this notebook from kushal:<br> <a href="https://www.kaggle.com/kushal1506/moa-tabnet-inference" target="_blank">https://www.kaggle.com/kushal1506/moa-tabnet-inference</a></p> <p>Resnet: public score 0.01861</p> <p>Thank you all!</p>
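<p>A minimal sketch of such a blend (the weights and model names below are placeholders, not the ones actually used): each model contributes a matrix of per-target probabilities, and the blend is their weighted average.</p>

```python
import numpy as np

def blend(preds, weights):
    """Weighted average of per-model probability matrices.

    preds: dict model_name -> [n_samples, n_targets] array.
    weights: dict model_name -> relative weight (normalized to sum to 1).
    """
    w = np.array([weights[name] for name in preds], dtype=float)
    w /= w.sum()
    stacked = np.stack([np.asarray(preds[name]) for name in preds])
    return np.tensordot(w, stacked, axes=1)
```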
RSNA STR Pulmonary Embolism Detection
A CNN Post-Processing method for Image level.
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all the participants and the winners. And I also want to thank my teammates <a href="https://www.kaggle.com/haqishen" target="_blank">qishen</a>, <a href="https://www.kaggle.com/boliu0" target="_blank">Bo</a> for their efforts.<br> I'd like to share a trick used by our team here, which may not be as good as the top teams' solutions, but I hope it can help you.<br> This trick is a post-processing method at the image level.</p> <h2>Extract the probability prediction</h2> <p>Extract the probability predictions from a 2D-CNN (like EfficientNet-B0) trained on 2D images, and save the predictions for the next step.</p> <h2>Pooling Post-Processing</h2> <p>From the previous step, we have the 2D-CNN probabilities, which we then sort by <strong>ImagePositionPatient_z</strong>. As we all know, contiguous images from the same person should have similar labels/probabilities after sorting. So we used 1D pooling to smooth the contiguous probabilities, which improved our image-level CV by about 0.01+.<br> The code is below.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1270655%2F4ee0cb9c54a362ff8f2cc4ff7d5088dc%2Fpoolingpp.png?generation=1603765054719087&amp;alt=media" alt=""></p> <h2>CNN Post-Processing</h2> <p>Meanwhile, we also designed a simple CNN as a stage-2 model to predict the contiguous probabilities, hoping that the CNN can learn more context information from them. The training set is also the 2D-CNN probabilities, sorted by <strong>ImagePositionPatient_z</strong>. 
In training, we sample 80 probabilities to train the CNN, as shown below.<br> The CNN post-processing improves our image-level CV by about 0.01+.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1270655%2F099c16938208c0c881f7c53440b56475%2Fsampling.jpg?generation=1603765599705036&amp;alt=media" alt=""><br> The shape of the model's output is the same as the input. Finally, we interpolated the probability predictions back based on the total number of the person's original image samples.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1270655%2Feca02ccfc3151804652067a84bd9cde5%2Finter.jpg?generation=1603765870690625&amp;alt=media" alt=""></p> <p>And the CNN model:<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1270655%2F9e0cca527bed91c0d0ff03bd340ffcbc%2Fcnn.jpg?generation=1603765956961639&amp;alt=media" alt=""></p> <h2>Combining the Pooling Post-Processing and CNN Post-Processing</h2> <p>Finally, we just combined the above two post-processing results directly, which improved our image-level CV by about 0.02+.</p> <p>That's all I want to share, and I hope it helps.</p>
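<p>The 1D pooling step can be sketched as follows (the kernel size and blend weight here are illustrative guesses, not the team's values). The probabilities are assumed to be already sorted by <strong>ImagePositionPatient_z</strong> within one study.</p>

```python
import numpy as np

def pooling_postprocess(probs, kernel_size=5, weight=0.5):
    """Smooth per-slice probabilities with a 1D average pooling.

    probs: 1D array of image-level probabilities for one study, sorted
    by z-position. The pooled (smoothed) signal is blended back with
    the raw predictions.
    """
    pad = kernel_size // 2
    padded = np.pad(probs, pad, mode="edge")
    kernel = np.ones(kernel_size) / kernel_size
    pooled = np.convolve(padded, kernel, mode="valid")  # same length as probs
    return weight * probs + (1 - weight) * pooled
```

<p>Edge padding keeps the output the same length as the input, so the smoothed probabilities can be written straight back into the submission.</p>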
Mechanisms of Action (MoA) Prediction
26th Place Solution / Source Code
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to say my gratitude to the Laboratory for Innovation Science at Harvard, the Kaggle Team, and all of participants inspiring me a lot.</p> <h1>Source</h1> <ul> <li><a href="https://github.com/kazumitsusakurai/lish-moa" target="_blank">Source code</a></li> <li><a href="https://www.kaggle.com/kazumitsusakurai/submission-for-moa" target="_blank">Final submission</a></li> </ul> <h1>Score</h1> <p>The Final submission score</p> <ul> <li>Private: 0.01608</li> <li>Public: 0.01820</li> <li>CV: 0.01550 (without ctl_vehicle)</li> </ul> <h1>Summary</h1> <h4>Denoising Autoencoder</h4> <p>I used normal denoising autoencoder instead of the swapping method, and I concatenated it with the original features. It improved the public test socre from 0.01842 to 0.01834 but CV was not improved.</p> <h4>Weighted Loss</h4> <p>I used weighted loss which weights targets that only occur few times in the train data (I used 40).<br> It significantly improved CV score (about -0.0003) but public test score wasn't improved. 
In order to avoid overfitting to the train data, I blended both weighted-loss models and non-weighted-loss models.</p> <h4>Blending</h4> <p>I adopted a simple weighted average of</p> <ul> <li>2 Hidden Layer NN (7 seeds, 7 OOF, each model trained with weighted loss and non-weighted loss)</li> <li>3 Hidden Layer NN (same as the above)</li> <li>4 Hidden Layer NN (same as the above)</li> <li>TabNet (6 seeds)</li> <li>DeepInsight Model (2 seeds, different settings)</li> </ul> <h4>Other Stuff That Worked</h4> <ul> <li>Rank gauss</li> <li>Adding statistical features (sum, mean, std, kurt, skew, median, etc.)</li> <li>PCA (only applied to TabNet)</li> <li>Smoothing loss</li> </ul>
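<p>The weighted loss can be sketched like this. The extra weight value is illustrative and my own assumption; the write-up only says that rare targets (those occurring fewer than ~40 times in train) are given additional weight in a binary cross-entropy loss.</p>

```python
import numpy as np

def weighted_bce(y_true, y_pred, target_counts, rare_threshold=40,
                 rare_weight=2.0, eps=1e-15):
    """Binary cross-entropy with extra weight on rare targets.

    target_counts: positive-label counts per target in the train data.
    Targets with fewer than `rare_threshold` positives get `rare_weight`
    (an illustrative value); all others get weight 1.
    """
    w = np.where(np.asarray(target_counts) < rare_threshold, rare_weight, 1.0)
    p = np.clip(y_pred, eps, 1 - eps)
    bce = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))  # [n, n_targets]
    return float((bce * w).mean())
```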
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
83rd Place - Feature Eng & Key Takeaways
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I would like to thank the hosts for organizing a great contest that helps us contribute to the fight against Covid. Hopefully the work done here will help accelerate vaccine research!</p> <p>This post is to show some key findings and discuss the feature engineering that we did for our final model. I hope that people in the field can use this for various other open issues:</p> <ol> <li>Our Graph Neural Net ensemble gave us the winning result - the ensemble which included RNN performed worse (the RNN included ensemble was not scaled this is probably why)</li> <li>Wavenet model gave the worst individual performance and as much as I tried to FE and hyper parameter tune it did not help</li> <li>We used Vienna RNA and its PERL scripts to generate additional features. In some cases we got worse scores on public LB but now I can see that their addition did help on the private LB. <a href="https://www.kaggle.com/nasirkhalid24/61st-pos-feature-engineering-entropy-mtn-plot" target="_blank">I share my notebook here</a> where it shows the various features we extracted including entropy, mountain plot etc.</li> </ol> <p>I will leave it to my teammate <a href="https://www.kaggle.com/arpitrf" target="_blank">@arpitrf</a> to discuss his excellent work related to our winning submission - he may make a post soon</p> <p>Will update this post if I discover something else.</p>
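<p>Two of the structure-derived features mentioned above are easy to compute directly from standard RNA outputs, without the PERL scripts. This is a simplified sketch: the mountain plot is the cumulative pairing depth along the dot-bracket string, and the positional entropy here treats the leftover base-pair-probability mass as "unpaired", which is an assumption on my part.</p>

```python
import math

def mountain_plot(structure: str):
    """Mountain plot: cumulative pairing depth after each position."""
    heights, h = [], 0
    for c in structure:
        if c == "(":
            h += 1
        elif c == ")":
            h -= 1
        heights.append(h)
    return heights

def positional_entropy(bpp_row, eps=1e-12):
    """Shannon entropy of one base-pair-probability row; the leftover
    probability mass is counted as the 'unpaired' outcome."""
    probs = list(bpp_row) + [max(0.0, 1.0 - sum(bpp_row))]
    return -sum(p * math.log(p + eps) for p in probs if p > 0)
```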
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
Congrats, and my 20 min lottery tickets (154th place/bronze medal)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>The original topic idea is from <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324#latest-552728" target="_blank">https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324#latest-552728</a> and I have another discussion topic talking about the shake-up: <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189124" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189124</a><br> I'll show the whole process of my final result.</p> <p>My previous model was at 0.24120 public LB and 0.41489 private LB, with 370th place.<br> This 0.24120 model is forked from the notebook <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain</a><br> Reading through his kernel took me several days. Then I re-ran his model with some parameter changes, and got 3 results of the same model with different parameters. Averaging these 3 models gives a 0.24120 public LB solution.</p> <p>This notebook shows a lot of things about the test set <a href="https://www.kaggle.com/scaomath/private-leaderboard-shake-up/data" target="_blank">https://www.kaggle.com/scaomath/private-leaderboard-shake-up/data</a> 
<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1860617%2Ff5b09cf8980ee673be909243f05a3d61%2Fresult.png?generation=1602047821073873&amp;alt=media" alt=""><br> There are lots of figures that show all model predictions have the same, correct trend, so we can trust the single model with the best public LB.</p> <p>Then, using the idea of <a href="https://www.kaggle.com/sushize" target="_blank">@sushize</a>, I multiplied my best single model's predictions by 0.92 and 1.1, and used <a href="https://www.kaggle.com/scaomath" target="_blank">@scaomath</a>'s notebook code to visualize these two new results. It took me 20 mins in total. In the end, the 0.92 result gave a better fit, helping me win a bronze medal.</p> <p>Thank you all, and if you found this process useful, don't forget to upvote the original authors' topics <a href="https://www.kaggle.com/sushize" target="_blank">@sushize</a> <a href="https://www.kaggle.com/scaomath" target="_blank">@scaomath</a> <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a>. The links are below:<br> <a href="https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324#latest-552728" target="_blank">https://www.kaggle.com/c/LANL-Earthquake-Prediction/discussion/94324#latest-552728</a><br> <a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain</a><br> <a href="https://www.kaggle.com/scaomath/private-leaderboard-shake-up/data" target="_blank">https://www.kaggle.com/scaomath/private-leaderboard-shake-up/data</a> </p>
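<p>The scaling trick itself is one line. A hedged sketch (the column name below follows the competition's submission format; 0.92 and 1.1 are the two candidate factors tried above, chosen by visually comparing the scaled predictions against the target distribution):</p>

```python
import numpy as np

def scale_submission(preds, factor):
    """Multiply every predicted degradation column by a constant factor.

    preds: dict of column name -> array of predictions. Generate one
    candidate per factor (e.g. 0.92 and 1.1), inspect the fit, and
    submit the better one.
    """
    return {col: np.asarray(vals) * factor for col, vals in preds.items()}
```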
Mechanisms of Action (MoA) Prediction
53rd place in a nutshell
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3612572%2F7e56f0bed3d9bf6ffb7e6ec88fcb1b31%2Ffreestocks-nss2eRzQwgw-unsplash%20(1).jpg?generation=1606775950410689&amp;alt=media" alt=""><br> Photo by <a href="https://unsplash.com/@freestocks?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">freestocks</a> on <a href="https://unsplash.com/s/photos/pills?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p> <p>Hey Kagglers !</p> <p>It's been a real pleasure being part of this competition. It was a new experience, inspired us to try a lot of things (the majority of which didn't work, but we're always happy for the few that did).</p> <p>I'd like to thank my amazing teammates <a href="https://www.kaggle.com/amedprof" target="_blank">@amedprof</a>, <a href="https://www.kaggle.com/kneroma" target="_blank">@kneroma</a>, <a href="https://www.kaggle.com/ulrich07" target="_blank">@ulrich07</a> and <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> without whom nothing would have been possible.</p> <p>Our models were trained on HP Z8 with Intel® Xeon(R) Gold 6234 CPU @ 3.30GHz × 32 and ran 8 times faster than on a Kaggle kernel, what enabled us to quickly try a lot of different things.</p> <h2>1. 
Data Processing &amp; feature engineering</h2> <p>We adopted almost the same pipeline as the public kernels, no real innovation here.</p> <ul> <li><p><a href="https://www.kaggle.com/louise2001/rapids-feature-importance-is-all-you-need" target="_blank">Feature Importance analysis</a></p></li> <li><p>Feature normalization with RankGauss (switching from this to standard normalization didn’t seem to deteriorate our results).</p></li> <li><p>PCA features: public kernels fit the PCA on the test set; we got rid of this idea as it forces models to be retrained during the final submission (which is slow) and makes them less robust.</p></li> <li><p>Feature selection: we stuck to the standard variance threshold feature selector (simple is the new black).</p></li> </ul> <h2>2. Models</h2> <p>We mostly built two ensembles : </p> <h3>Theo's ensemble</h3> <p>I used an MLP and an MLP with skip connections, with a training strategy similar to <a href="https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids" target="_blank">this public kernel</a>. </p> <ul> <li><p>I got rid of the checkpointing which artificially boosts CV</p></li> <li><p>Pretraining is only done for 10 epochs</p></li> <li><p>When unfreezing, I progressively unfreeze only half of the layers</p></li> <li><p>I used a linear scheduler with warmup</p></li> <li><p>Batch size of 256 worked better for me</p></li> </ul> <p>I also trained a TabNet, using the following parameters. Unsupervised pretraining didn't help.</p> <pre><code>n_d = 32
n_a = 128
n_steps = 1
gamma = 1.2
n_shared = 1
n_independent = 1
lambda_sparse = 0
mask_type = "entmax"
batch_size = 256
virtual_batch_size = 128
epochs = 200
lr = 2e-2
weight_decay = 1e-5
scheduler_params = dict(mode="min", patience=5, min_lr=1e-5, factor=0.1)
scheduler_fn = ReduceLROnPlateau
</code></pre> <p>My three models achieve CVs of ~0.0160 using <a href="https://www.kaggle.com/chrisdeotte" target="_blank">@chrisdeotte</a>'s folds. 
I used 7 seeds and 7 folds for each model, and their average scored <strong>CV 0.01553 - Public 0.01839 - Private 0.01618</strong>.</p> <p>I also produced another blend by simply retraining the previous models with pseudo-labelling. I used the soft labels directly and it improved my LBs by a small margin (+0.00003 public, +0.00001 private).</p> <p>I consider my ensembles to be quite weak; the thing is, the public kernel was already heavily tweaked and was therefore quite hard to improve further. </p> <h3>Amed's ensemble</h3> <p>It was hard to beat the public kernels; I built several models that didn't perform better than the public baselines. Therefore, I decided to start from these public kernels (<a href="https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids" target="_blank">Public1</a>, <a href="https://www.kaggle.com/kushal1506/moa-pytorch-feature-engineering-0-01846" target="_blank">Public2</a>), and in order to improve them I tried some changes to the training strategy, feature engineering and validation strategies.<br> I used the following validation strategies :</p> <ul> <li><p>Strategy 1 : 7 versions of Chris's folds<br> Assumption 1 : frequent targets in train will also be frequent in test.</p></li> <li><p>Strategy 2 : 3 versions of my own custom folds<br> Assumption 2 : targets that are rare in train will be frequent in test and targets that are most frequent in train will be rare in test. Basically the opposite of assumption 1.</p></li> </ul> <p>My final ensemble is a blend of the two models trained on 10 folds with 2 seeds.</p> <h3>Post-Processing :</h3> <p>We trained binary classifiers for the most prevalent drugs in the train set, and used them to correct our model probabilities. It gave us a 0.0001 boost on public, but didn't really work on private. Fortunately we also used a safer approach that was enough to grab a 53rd place. 
We're not sure if it really helped in the end.</p> <h3>Final Blend :</h3> <p>We used a simple average of :</p> <ul> <li><p>The three models in <a href="https://www.kaggle.com/vikazrajpurohit/3-model-training-and-inference" target="_blank">this kernel</a></p></li> <li><p>Theo's blend</p></li> <li><p>Theo's blend with PL</p></li> <li><p>Amed's blend</p></li> </ul> <p>It achieves a public LB of 0.01822 which ranks between 150th and 200th, but we knew it would do better on private. </p> <p>We could not even reproduce our 0.01816 solution which gave us a nice ranking on public, and even if we did we most likely wouldn't have selected it as it was not really robust.</p> <h2>3. What worked, and what didn't</h2> <p><strong>What worked :</strong></p> <ul> <li><p>Weight decay</p></li> <li><p>Correctly scheduling the learning rate, the OneCycleLR one worked pretty well</p></li> <li><p>Transfer Learning for shallow models</p></li> <li><p>Blending : It always works with enough diversity</p></li> </ul> <p><strong>What didn't work :</strong></p> <ul> <li><p><a href="https://www.kaggle.com/amedprof/lgbm-custom-loss-one-lgb-for-206-labels" target="_blank">Custom-loss Lgbm in order to learn simultaneously all classes</a>.</p></li> <li><p>Metric Learning to detect drugs in the test set that were in the train set</p></li> <li><p>Target Encoding</p></li> <li><p>Using a binary classifier to predict if 2 experiments are from same drug or not</p></li> <li><p>Embedding features depending on drug targets</p></li> <li><p>Tabnet pretraining</p></li> <li><p>Second level models / stacking</p></li> <li><p>And a lot more …</p></li> </ul> <h5>Thanks a lot for reading, and happy Kaggling !</h5>
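The final blend above is a plain unweighted mean of the component predictions. A minimal numpy sketch of that step — the two component names and all array values are invented for illustration (the real blend averaged four components of shape (n_samples, 206)):

```python
import numpy as np

# Invented predictions from two of the blend components,
# each of shape (n_samples, n_targets), shrunk here to (3, 4)
preds = {
    "three_model_kernel": np.array([[0.01, 0.02, 0.00, 0.10],
                                    [0.03, 0.01, 0.02, 0.05],
                                    [0.00, 0.00, 0.01, 0.02]]),
    "theo_blend":         np.array([[0.02, 0.01, 0.01, 0.08],
                                    [0.02, 0.02, 0.01, 0.06],
                                    [0.01, 0.00, 0.02, 0.03]]),
}

# Simple average: every component gets the same weight
final = np.mean(np.stack(list(preds.values())), axis=0)
```

With more components, the same `np.stack(...).mean(axis=0)` pattern applies unchanged.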
OSIC Pulmonary Fibrosis Progression
[#145] Robust Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OSIC Pulmonary Fibrosis Progression <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I decided to model only with the values that are also available in the test set. That means no FVC other than the first one, and no percent other than the first one.</p> <p>I quickly realized that the data doesn't really tell us anything, and went with the easiest and most robust solution. I normalized all data to FVC_init = 1 and shifted the start_week to 0.</p> <p>Then, I fitted a linear regression model to shifted weeks vs normalized FVC. Turns out that was my best private LB submission.</p> <p><img src="https://i.imgur.com/qN56d6Q.png" alt="img"></p> <p><a href="https://www.kaggle.com/ilu000/osic-my-normalized-submission-with-explanation" target="_blank">link to my notebook</a></p>
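The whole approach fits in a few lines of numpy. A sketch under invented patient data (the notebook itself fitted one shared line across all normalized, shifted training measurements):

```python
import numpy as np

# Invented (week, FVC) measurements for one patient
weeks = np.array([0, 4, 9, 15, 30])
fvc = np.array([2800.0, 2750.0, 2720.0, 2680.0, 2600.0])

# Normalize to FVC_init = 1 and shift so the first visit is week 0
fvc_norm = fvc / fvc[0]
weeks_shifted = weeks - weeks[0]

# One linear trend fitted to shifted weeks vs normalized FVC
slope, intercept = np.polyfit(weeks_shifted, fvc_norm, 1)

# Predict a future visit (week 20 here), then rescale by FVC_init
pred = (slope * 20 + intercept) * fvc[0]
```

The fitted slope is negative (FVC declines) and the intercept sits near 1 by construction, so predictions for any patient only need their first FVC value.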
Lyft Motion Prediction for Autonomous Vehicles
10th place solution (vectornet)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Lyft Motion Prediction for Autonomous Vehicles <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle and Lyft for hosting the competition.<br> <strong>Rule-base</strong> is only my team name, but not a solution.😀</p> <p>In this competition, I tested two papers from waymo &amp; google research:<br> vector net: <a href="https://arxiv.org/abs/2005.04259" target="_blank">https://arxiv.org/abs/2005.04259</a> (final solution)<br> tnt: <a href="https://arxiv.org/abs/2008.08294" target="_blank">https://arxiv.org/abs/2008.08294</a> (hard to train, = =!)</p> <p>vector net:<br> batch=1024, 1 * 1080 GPU + 8 core CPU, trained for 2 days<br> &lt;=800 vectors</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5513638%2Fdc1279adf7cddb1b4ad05a32e108d175%2F149.png?generation=1606437813920943&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5513638%2Fcb42ce425d35ec735107a2f2178e0985%2F12.png?generation=1606437866782123&amp;alt=media" alt=""></p> <p>tnt:<br> batch=128, 1 * 1080 GPU + 8 core CPU, trained for 0.5 day<br> &lt;=800 vectors</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5513638%2Fde8650eb8c219883afa0c64f65c51911%2Ftnt1.png?generation=1606438391638966&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5513638%2F97dd2340242bbc2ff9a1a8492b882d2e%2Ftnt2.png?generation=1606438403511527&amp;alt=media" alt=""></p>
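For context, VectorNet (the first cited paper) represents every trajectory or lane polyline as a set of vectors — roughly [x_start, y_start, x_end, y_end, attributes, polyline id] — before feeding them to the polyline subgraph and global graph networks. A hedged sketch of that vectorization step; the coordinates and the single scalar attribute are invented, not the author's exact pipeline:

```python
import numpy as np

def polyline_to_vectors(points, attr, poly_id):
    """Turn an ordered polyline of 2-D points into VectorNet-style vectors:
    each row is [x_start, y_start, x_end, y_end, attr, polyline_id]."""
    pts = np.asarray(points, dtype=float)
    starts, ends = pts[:-1], pts[1:]
    n = len(starts)
    return np.column_stack([starts, ends,
                            np.full(n, attr), np.full(n, poly_id)])

# An agent's past trajectory (invented coordinates), attribute 0 = "agent"
vecs = polyline_to_vectors([(0, 0), (1, 0.5), (2, 1.2)], attr=0, poly_id=3)
# Two consecutive segments -> vecs has shape (2, 6)
```

In the paper, each polyline's vectors go through a shared MLP + max-pool subgraph; the "&lt;=800 vectors" above is the per-sample budget the author mentions.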
Mechanisms of Action (MoA) Prediction
Private 55th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, congrats to the winners. </p> <p>Thanks to <a href="https://www.kaggle.com/gogo827jz" target="_blank">@gogo827jz</a> and other kagglers who shared wonderful ideas throughout this competition. </p> <p>Thanks to my wonderful teammates <a href="https://www.kaggle.com/ratan123" target="_blank">@ratan123</a> and <a href="https://www.kaggle.com/jagadish13" target="_blank">@jagadish13</a>. We started the competition pretty late. Our submission was an average ensemble of 4 models. </p> <ol> <li>TabNet</li> <li>PyTorch models based on drug IDs. </li> <li>Two-head ResNet</li> </ol> <p>Each model had 49 different predictions, which were averaged out to get the final prediction</p> <p>We have made our solution public<br> <a href="https://www.kaggle.com/manojprabhaakr/inference-model-nn-2-models-1-tabnet-and-1-keras" target="_blank">https://www.kaggle.com/manojprabhaakr/inference-model-nn-2-models-1-tabnet-and-1-keras</a></p> <p>What didn't work (at least for us):</p> <ol> <li>ResNet-based feature extraction on top of existing features. </li> <li>NODE/CatBoost model-based tuning</li> <li>AutoEncoder-based feature selection. </li> <li>EfficientNet-based training (DeepInsight transform). </li> <li>Clustering of features/adding statistical features. </li> <li>Postprocessing - clipping of values (with different thresholds)</li> </ol>
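The two-level averaging described above (49 prediction runs per model, then an average across the models) can be sketched in numpy; shapes and values here are invented stand-ins:

```python
import numpy as np

# Hypothetical: each of 4 models produced 49 prediction arrays
# (e.g. folds x seeds), each of shape (n_samples, n_targets)
rng = np.random.default_rng(0)
per_model = [rng.random((49, 5, 3)) for _ in range(4)]

model_means = [p.mean(axis=0) for p in per_model]  # average the 49 runs
final = np.mean(model_means, axis=0)               # average the 4 models
```

Because every model contributes the same number of runs, the mean-of-means equals the grand mean over all 196 runs.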
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
33rd Place - RNA Folding Analysis
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>OpenVaccine: COVID-19</h1> <p>I'm excited to win solo silver. Thank you Eterna and Kaggle for hosting this fun and important competition. Five days ago, I knew nothing about RNA. Now I know the basics and I'm curious to learn more about this mysterious molecule.</p> <h1>Unique Competition</h1> <p>This was a unique competition. When we call <code>model.fit(X, y)</code>, we use features <code>X</code> to predict targets <code>y</code>. Many Kaggle competitions have noisy targets <code>y</code> due to human labeling. This competition had noisy targets <strong>and</strong> noisy features! The provided features <code>X</code> were only estimates, not ground truth, and the targets <code>y</code> were estimates too!</p> <h1>Private Test <code>id_01cd826d9</code></h1> <p>Here's private test RNA <code>id_01cd826d9</code>. The provided structure is displayed below. (Plots use <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> <a href="https://www.kaggle.com/its7171/how-to-use-arnie-on-kaggle-notebook" target="_blank">notebook</a>)<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F0f4190923b04ac5103045f5e9bf30d85%2Fimg0.jpg?generation=1602037311050479&amp;alt=media" alt=""></p> <h1>RNA Folding <code>id_01cd826d9</code> Uncertainty</h1> <p>The above provided structure is only a <strong>guess</strong>. We see in the picture that there are attractions <code>(1)</code> and <code>(2)</code>. 
The bases <code>A</code> and <code>U</code> each have a probability of 14% to bond as indicated by the <code>bpps</code> matrix. Therefore private test <code>id_01cd826d9</code> may actually have the following structure. Note, I have changed the color scheme below. Blue indicates high probability of bonding while yellow indicates low probability.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fa293c37ca79657fa55bcb2a22152b297%2Fimg2.jpg?generation=1602037322557672&amp;alt=media" alt=""></p> <h1>RNA Folding <code>id_01cd826d9</code> Uncertainty</h1> <p>We still don't know if the structure is the above. We notice that there are attractions <code>(3)</code>, <code>(4)</code>, <code>(5)</code>. Therefore the RNA may fold into the following<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F135cdd09c9f07df86b39f69197a8163c%2Fm3.png?generation=1602037807933315&amp;alt=media" alt=""></p> <h1>Dealing With Uncertainty</h1> <p>Since the structure is uncertain, I did 2 things. </p> <ul> <li>Convert the <code>bpps</code> matrices into color images and convolute them into embeddings for my models</li> <li>Predict targets for the top 3 folding of each test RNA and average according to probability</li> </ul> <p>If I had more time, I would train with all the different folding of train images and set train sample weight according to their probability.</p> <h1>BPPS Matrix</h1> <p>Below is an example BPPS matrix. To input these into a CNN, first skew the matrix. Then each number in the matrix represents one of three types of bond (1) C-G (2) A-U (3) G-U. Change that cell to red, green, blue respectively. Set the intensity of the color equal to the value in the matrix. 
Next, convolve, maintaining the <code>seq_length</code> but using <code>strides</code> to reduce the other dimension</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Ff4371b3166cf5ffa5ac6cea423814dde%2Fimg3.jpg?generation=1602040512839574&amp;alt=media" alt=""></p> <pre><code>inp2 = L.Input(shape=(seq_len, WIDTH, DIM))
x2 = L.Conv2D(24, 3, padding='same', activation='relu')(inp2)
x2 = L.Conv2D(24, 3, strides=(1,2), padding='same', activation='relu')(x2)
x2 = L.Conv2D(24, 3, padding='same', activation='relu')(x2)
x2 = L.Conv2D(24, 3, strides=(1,2), padding='same', activation='relu')(x2)
x2 = L.Conv2D(24, 3, padding='same', activation='relu')(x2)
x2 = L.Conv2D(24, 3, strides=(1,2), padding='same', activation='relu')(x2)
x2 = tf.reshape(x2, shape=(-1, x2.shape[1], x2.shape[2]*x2.shape[3]))
</code></pre> <h1>Example Foldings</h1> <p>For the <code>BPPS</code> matrix above, below are some sample foldings<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F17f5e7e66a48ee6c202639376136dbea%2Fimg4.jpg?generation=1602039065083154&amp;alt=media" alt=""></p> <h1>Starter Notebooks</h1> <p>I studied the following 4 starter notebooks. Thanks, these notebooks are great! 
</p> <h3>GRU/LSTM <a href="https://www.kaggle.com/xhlulu" target="_blank">@xhlulu</a></h3> <p><a href="https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model" target="_blank">https://www.kaggle.com/xhlulu/openvaccine-simple-gru-model</a></p> <h3>Deep GCN <a href="https://www.kaggle.com/symyksr" target="_blank">@symyksr</a></h3> <p><a href="https://www.kaggle.com/symyksr/openvaccine-deepergcn" target="_blank">https://www.kaggle.com/symyksr/openvaccine-deepergcn</a></p> <h3>AE Pretrain, GNN Attn CNN - TensorFlow <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a></h3> <p><a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn</a></p> <h3>AE Pretrain, GNN Attn CNN - PyTorch <a href="https://www.kaggle.com/takadaat" target="_blank">@takadaat</a></h3> <p><a href="https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain" target="_blank">https://www.kaggle.com/takadaat/openvaccine-pytorch-ae-pretrain</a></p> <h1>Build 109 Models!</h1> <p>I built 109 different models by modifying the above starter notebooks and then predicted the top 3 foldings for each test RNA (reported by <code>ViennaRNA</code> and <code>ViennARNIE</code>). Next I plotted the correlations of their predictions. I removed models whose predictions looked strange and ensembled the rest with models of similar type. Then I ensembled the ensembles into a final model with <code>(GRU + GCN + 2 * TF + 2 * PY)/6</code></p> <p>With LB probing, I discovered that multiplying the public LB predictions by <code>1.04</code> improved LB score. So, for my final private submission, I also multiplied predictions by <code>1.04</code>. 
This proved to be helpful.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F4adadba9f7f2f71edd8dea41cc738eea%2Fcorr_mat.png?generation=1602039482879603&amp;alt=media" alt=""></p> <h1>Thank You</h1> <p>Thank you everyone for sharing your discussions and notebooks. I could not have made my models without your help.</p>
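The "predict the top 3 foldings of each test RNA and average according to probability" step, plus the LB-probed 1.04 rescaling, can be sketched as follows (the prediction values and folding probabilities are invented):

```python
import numpy as np

# Invented predictions for one test RNA under its top-3 candidate foldings,
# shape (3 foldings, seq_len, n_targets) shrunk to (3, 2, 2) here
preds = np.array([
    [[0.30, 0.50], [0.20, 0.40]],
    [[0.28, 0.55], [0.22, 0.38]],
    [[0.35, 0.45], [0.18, 0.42]],
])
probs = np.array([0.6, 0.25, 0.15])  # folding probabilities, sum to 1

weighted = np.tensordot(probs, preds, axes=1)  # probability-weighted average
final = 1.04 * weighted  # the LB-probed global rescaling described above
```

`np.tensordot(probs, preds, axes=1)` contracts the folding axis, leaving one (seq_len, n_targets) prediction per RNA.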
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
[38th solution]My attempt at simulating private leaderboard
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>UPDATE after regrading: it works, based on my submission scores!</p> <p><a href="https://www.kaggle.com/scaomath/covid-19-openvac-private-lb-simulation-baseline" target="_blank">https://www.kaggle.com/scaomath/covid-19-openvac-private-lb-simulation-baseline</a></p> <p>Apparently it does not work: because seq_len matters, using public to simulate private is fruitless. 😂</p>
Mechanisms of Action (MoA) Prediction
Our Bronze Medal Solution (if we had picked it)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Our Best Solution</h2> <p>I would like to thank the Harvard and Kaggle teams, my teammate <a href="https://www.kaggle.com/hjq133133" target="_blank">@hjq133133</a>, and all other competitors who shared their knowledge and experience. This was the first competition I was ever committed to in Kaggle. I learned and produced a lot, and I truly enjoyed it!</p> <p>Our best solution was in the bronze zone but we did not pick it.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F337744%2F60c2d578227a25901f367e9eb6da2b2c%2FScreen%20Shot%202020-12-01%20at%208.06.58%20AM.png?generation=1606828064660638&amp;alt=media" alt=""></p> <p>This solution, shown in the figure below, was based on a weighted average of three models: <a href="https://www.kaggle.com/tolgadincer/nn-lb-0-1836-t19" target="_blank">a simple NN</a>, TabNet, and <a href="https://www.kaggle.com/tolgadincer/transfer-model-training-v2-t1" target="_blank">ResNet with transfer learning</a>. 
</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F337744%2Fbb9e168cf01bba58b61c32b516647bab%2FFlowchart.png?generation=1606936444743021&amp;alt=media" alt=""></p> <h4>Feature Engineering</h4> <p>The feature engineering, after the data went through a Quantile Transformation, was slightly different for each model but included the following methods:</p> <ul> <li>PCA components</li> <li>Variance threshold</li> <li>Basic statistical quantities</li> <li>K-means clusters</li> </ul> <h4>Weight Optimization</h4> <p>At the beginning of the competition, I wrote <a href="https://www.kaggle.com/tolgadincer/blending-multilabeled-models-with-scipy" target="_blank">code</a> to optimize the blend's weights with scipy's optimizer but couldn't get reliable results. Then, I decided to use the Lagrange multiplier strategy, which worked perfectly. This code is available <a href="https://www.kaggle.com/tolgadincer/ensemble-weight-optimization" target="_blank">here</a>. It's very easy to use and the weights need not be given by hand.</p> <h4>Some of the tricks that improved our experiments</h4> <ul> <li><strong>Bias initialization</strong>: The data was extremely imbalanced and the default bias initialization for the output layers was leading to a very cold start in my initial experiments, not good for an impatient person like me. So, I decided to initialize the output bias with the target mean as I discussed <a href="https://www.kaggle.com/c/lish-moa/discussion/185161" target="_blank">here</a>. This allowed me to see better CV values and I believe many of the other competitors benefitted from it.</li> <li><strong>Label smoothing</strong>: A smoothing value of 1e-5 worked very well in all of our experiments. 
The most critical point was to clip only the training data and not to touch the validation data, as I noted <a href="https://www.kaggle.com/c/lish-moa/discussion/189857#1043953" target="_blank">here</a>.</li> <li><strong>Learning rate schedulers</strong>: I had experience only with the ReduceLROnPlateau scheduler, but the public notebooks used different schedulers. I explored how various schedulers change the learning rate <a href="https://www.kaggle.com/tolgadincer/tf-keras-learning-rate-schedulers" target="_blank">here</a>, and this allowed me to fine-tune our models.</li> </ul>
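One standard way to implement the bias-initialization trick is to set the sigmoid output bias to the prior log-odds of each target, so the untrained network already predicts the base rates instead of starting cold; whether the author used exactly this log-odds form is an assumption on my part:

```python
import numpy as np

# Invented prevalences for a few of the 206 MoA targets
target_mean = np.array([0.005, 0.02, 0.001])

# Prior log-odds: with this as the output-layer bias, the network's
# initial sigmoid outputs equal the target base rates
bias_init = np.log(target_mean / (1.0 - target_mean))

# Sanity check: sigmoid(bias_init) recovers the prevalences
probs = 1.0 / (1.0 + np.exp(-bias_init))
```

In Keras this would be passed as `bias_initializer=tf.keras.initializers.Constant(bias_init)` on the final Dense layer.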
OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
77th -> 58th place brief write up
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I started this competition from <a href="https://www.kaggle.com/mrkmakr" target="_blank">@mrkmakr</a> 's great notebook.<br> ref: <a href="https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn" target="_blank">https://www.kaggle.com/mrkmakr/covid-ae-pretrain-gnn-attn-cnn</a></p> <p>I tuned the models, epochs, and data augmentation, then checked CV scores. Finally, I chose 8 submissions based on CV scores and simply took the average of them. Averaging across very different models didn't work for me, so I gave up on using other types of NN models.</p> <p>I had a scaled-down submission which would have scored 1st (if chosen), but I couldn't select it because I had no solid theory behind it.<br> ref: <a href="https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189182" target="_blank">https://www.kaggle.com/c/stanford-covid-vaccine/discussion/189182</a></p> <p>Anyway, I enjoyed this short-term competition and I hope our activities can contribute to the research. Thank you to all the organizers and participants.</p>
RSNA STR Pulmonary Embolism Detection
[54th solution] Baseline inference optimization with TTA (No additional training)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>This is the solution for my team, "Speed is all you need". We were about 40th before the public baseline. TLDR at bottom.</p> <p>First of all, thanks to my team, especially <a href="https://www.kaggle.com/Neurallmonk" target="_blank">@Neurallmonk</a>, who has been exceedingly helpful, especially considering it is his first competition. Thanks to Ian Pan for the dataset, on which our first month of experimenting was based, and thank you to Kun for the baseline; though I was frustrated when it was initially released, I have learned so much from the code, and a medal is only a medal, but what I learned from the code lasts forever.</p> <p>Our solution is nothing special - just hours and hours of optimizing Kun's baseline inference. We did not have the compute resources to retrain the first-stage models, and a few of our teammates were busy during the last week of the competition, myself included as I had exams throughout. Otherwise, we would have liked to replace GRU with LSTM and ensemble.</p> <p>I optimized Kun's inference by moving model loading outside of for loops, replacing zoom with albumentations' resize, and hundreds of other small changes which combined to make the inference over 3x faster. This results in a score of 0.229 public, 0.230 private LB in 2.5 hours (the original notebook was 0.233 public, 0.232 private in 9 hours). Admittedly, I have no clue why this notebook has a slightly better public score, but the main goal was to be faster. 
This 2.5 hour long notebook is <a href="https://www.kaggle.com/stanleyjzheng/fast-baseline-with-tta?scriptVersionId=45225265" target="_blank">Here</a></p> <p>Then, we added TTA. We used TTAx3 with flips, CLAHE, brightness/contrast/hue. TTAx3 was the maximum we could fit in 9 hours, and these augmentations were found with trial and error on the public LB. Our final notebook with TTA is <a href="https://www.kaggle.com/stanleyjzheng/fast-baseline-with-tta-after-gru-8e13cc?scriptVersionId=45424927" target="_blank">Here</a></p> <p>TLDR: We used the original models trained in the public baseline, and changed purely inference. After optimization, our notebook runs in 2.5 hours (0.229 public, 0.230 private), and after TTAx3, we achieve a score of 0.221 public, 0.216 private.</p>
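The TTA step amounts to averaging the model's output over a few augmented views of each image. A deliberately minimal numpy-only sketch — the stand-in `predict` is a placeholder, not the actual CNN, and the team's real augmentations were flips plus CLAHE and brightness/contrast/hue via albumentations:

```python
import numpy as np

def predict(image):
    """Stand-in for the stage-1 CNN; returns a fake PE probability."""
    return float(image.mean())  # placeholder model, not the real network

def tta_predict(image):
    # Average the model's output over the original image and simple
    # augmented views (here just flips)
    views = [image, np.fliplr(image), np.flipud(image)]
    return float(np.mean([predict(v) for v in views]))

img = np.arange(16, dtype=float).reshape(4, 4) / 16.0
p = tta_predict(img)
```

TTAx3 was the largest multiplier that still fit the 9-hour budget, which is why the number of views matters as much as their choice.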
Google Research Football with Manchester City F.C.
12th Place Solution(ML approach)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research Football with Manchester City F.C. <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I will represent my team <strong>Ken+digimagi</strong> in this post to publish some information on our solution. </p> <p>Before I go deeper, I wanted to first thank the hosts, Kaggle, Google Research, and Manchester City, and the community as a whole. I spent 132 hours on this competition. I am glad I did this competition because I not only learned a lot, but also had a very exciting time.</p> <p>Credit also goes to these two notebooks, from which a lot of our inspiration came:<br> my teammate <strong>Ken Miller</strong> <a href="https://www.kaggle.com/mlconsult" target="_blank">@mlconsult</a> <a href="https://www.kaggle.com/mlconsult/1149-ish-bot-rl-approximation" target="_blank">https://www.kaggle.com/mlconsult/1149-ish-bot-rl-approximation</a><br> <strong>Tom Kelly</strong> <a href="https://www.kaggle.com/tomkelly0" target="_blank">@tomkelly0</a> <a href="https://www.kaggle.com/tomkelly0/lightgbm-10mins-940" target="_blank">https://www.kaggle.com/tomkelly0/lightgbm-10mins-940</a></p> <p>We would also like to thank <strong>Wekick</strong> and <strong>SaltyFish</strong> for their continued lead in this competition and their influence on our solution.</p> <p>Also, after the submissions were closed, there was a fierce battle going on with <strong>kangaroo</strong>, <strong>xin &amp; willson &amp; pigzz</strong> for the last gold medal. It disturbed my sleep, but at the same time, it was the most exciting event of my year.</p> <h1>Solution</h1> <p>Our solution itself is not too innovative and is simple. 
This competition can be approached in a variety of ways: rule-based, machine learning, and reinforcement learning. When I decided to tackle this competition in earnest, there were two teams, WeKick and SaltyFish, that dominated the other teams. Our starting point was the question of how strong an agent we could make if we could mimic the strong agents of these teams. We reduced it to a multi-class classification machine learning problem of predicting the actions of these strong agents.</p> <p>Here (<a href="https://www.kaggle.com/digimagi/12th-place-solution-ml-approach-lb-1395" target="_blank">https://www.kaggle.com/digimagi/12th-place-solution-ml-approach-lb-1395</a>) is our end-to-end notebook (except for collecting episodes), which learned from WeKick:18239675 and scores about 1395 on the LB.</p> <h1>Algorithm</h1> <p>Gradient-boosted trees (LightGBM), one model per <code>ball_owned_player</code>.<br> We performed hyperparameter tuning with Optuna.<br> An average ensemble with varied hyperparameters raised the score somewhat.<br> We had been working on a K-fold model for a while, but it took a long time to train and we were able to reach a certain level of accuracy without K-fold, so we stopped using it halfway through.</p> <p>Here are the score trends for our model.</p> <table> <thead> <tr> <th>date</th> <th>score</th> <th>accuracy</th> <th>number of features</th> </tr> </thead> <tbody> <tr> <td>Nov. 16</td> <td>600</td> <td>0.65 (with a few episodes)</td> <td>147</td> </tr> <tr> <td>Nov. 22</td> <td>1070~1150</td> <td>0.60</td> <td>482</td> </tr> <tr> <td>Nov. 25</td> <td>1300~1320</td> <td>0.75~0.77</td> <td>744</td> </tr> <tr> <td>Nov. 
29</td> <td>1380~1400</td> <td>0.79~0.81</td> <td>1154</td> </tr> </tbody> </table> <h1>Data</h1> <p>The following submissionId agent episodes were used as candidates.</p> <ul> <li>SaltyFish:18010220</li> <li>SaltyFish:18211783</li> <li>WeKick:18220693</li> <li>WeKick:18127517</li> <li>WeKick:18239675</li> </ul> <p>In the beginning, we used episodes from SaltyFish:18010220, which has a large number of matches, but we were not able to achieve high accuracy. However, we were able to achieve relatively high accuracy with the WeKick:18127517 or WeKick:18239675, so we focused on these agent's episodes. We tried combining them, excluding losing games.</p> <h1>Features</h1> <p>We generated 1154 features from the observations.<br> Some of them are directly from raw observations, but we used the following paper (in Japanese) as a reference to generate features based on the football domain.<br> <a href="https://www.i-juse.co.jp/statistics/xdata/sportscompe06-sem_special_award.pdf" target="_blank">https://www.i-juse.co.jp/statistics/xdata/sportscompe06-sem_special_award.pdf</a><br> <a href="https://www.ism.ac.jp/editsec/toukei/pdf/65-2-287.pdf" target="_blank">https://www.ism.ac.jp/editsec/toukei/pdf/65-2-287.pdf</a><br> <a href="https://sites.google.com/site/takumanarizuka/home/publications?authuser=0" target="_blank">https://sites.google.com/site/takumanarizuka/home/publications?authuser=0</a></p> <p>The features we created are mainly as follows:</p> <ul> <li>Distance and direction from the active player or ball to each player (polar coordinates)</li> <li>Distance and direction of each player from the main location (center position (0,0), goal line (1,0),(-1,0)) (polar coordinates)</li> <li>Distance and direction of each player to the center of gravity as a team (polar coordinates)</li> <li>Maximum y-coordinate, minimum y-coordinate and their widths for the team excluding the goalkeeper</li> <li>Maximum x-coordinate, maximum y-coordinate and width of the team</li> <li>The 
x-coordinate of the offside line, and whether or not an offside position has occurred</li> <li>The angle between the two goal posts and the ball</li> <li>The number of opponent/allied players in front of/behind the ball</li> <li>The number of opponent/allied players within a radius of <em>n</em> meters of the active player/ball</li> </ul> <p>We also generated features for players/ball velocity and acceleration, but since the accuracy did not improve that much, we excluded them in the process to simplify the implementation.</p> <h1>Heuristics</h1> <p>Because of the bias in the frequency of occurrence of actions, we weighted the data according to the logarithm of the frequency of occurrence.</p> <p>Also, many of the actions were the same as they were one step before, but we thought that the actions that were different from one step before (e.g., shooting, passing) were more important, so we gave them more weight.</p> <h1>Implementation ingenuity</h1> <p>We created the features without pandas, because feature creation with pandas risked running over the 0.5-second-per-step limit, and there were times when we could not train in a Kaggle notebook due to lack of memory.</p> <p>Since the agent has a 100MB capacity limit, the learning rate of LightGBM cannot be very small, so we mainly used 0.1.</p> <p>Since accuracy was roughly proportional to the score, I focused on improving the accuracy without having the agent play against other bots.</p> <h1>What we could not do due to time constraints</h1> <p>Accounting for the distortion between the actual pitch and the xy coordinates. <a href="https://www.kaggle.com/c/google-football/discussion/190676" target="_blank">https://www.kaggle.com/c/google-football/discussion/190676</a><br> Proper handling of features when a player is ejected due to a red card.<br> Consideration of action weights just before a good play step like a goal being scored.<br> Ensemble with other models such as neural networks.</p> <p>I welcome any questions or comments. Thank you all.</p>
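The weighting described in the "Heuristics" section can be sketched as follows. This is a minimal NumPy sketch under my own reading of the write-up: the action ids, the exact log-frequency formula, and the factor for changed actions are all assumptions, not the authors' code.

```python
import numpy as np

# Sketch of the two weighting heuristics (my reading, not the authors' code).
actions = np.array([0, 0, 0, 12, 0, 0, 9, 9, 0])  # hypothetical per-step action labels

# 1) Weight by the logarithm of each action's frequency: frequent actions
#    (e.g. plain movement) get smaller weights.
counts = np.bincount(actions, minlength=19).astype(float)
weights = 1.0 / np.log1p(counts[actions])

# 2) Up-weight steps whose action differs from the previous step
#    (e.g. shots, passes); the factor 2.0 is purely illustrative.
changed = np.concatenate(([True], actions[1:] != actions[:-1]))
weights = weights * np.where(changed, 2.0, 1.0)
```

The resulting array would then be passed as `sample_weight` when fitting the per-`ball_owned_player` LightGBM models.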
Google Research Football with Manchester City F.C.
16th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research Football with Manchester City F.C. <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to Kaggle, Manchester City F.C., and Google Research for this fantastic competition. Working on this competition was the most fun I've had for a while.</p> <p><strong>The tl;dr version of my solution is that I used an MLP model to stochastically imitate WeKick's agents, with some rules to help it navigate in unfamiliar waters.</strong></p> <p>After I got the GCP coupon, I looked at the competition timeline and thought that there is no way I can train a competitive RL agent from scratch in less than two weeks. I had to find some way to cut the training time shorter.</p> <p>Then I found Ken Miller's <a href="https://www.kaggle.com/mlconsult/1149-ish-bot-rl-approximation" target="_blank">RL approximation notebook</a> and learned that the imitation strategy works reasonably well. So I decided to use a similar approach to (pre)train NN models to bootstrap my policy network, with an episode scraper based on Felipe Bivort Haiek's <a href="https://www.kaggle.com/felipebihaiek/google-football-episode-scraper-quick-fix" target="_blank">Google Football Episode Scraper quick fix</a>. Huge thanks to both of you! <a href="https://www.kaggle.com/mlconsult" target="_blank">@mlconsult</a> <a href="https://www.kaggle.com/felipebihaiek" target="_blank">@felipebihaiek</a> </p> <p>After a while, I found a lot of low-hanging fruits by tuning the features and assisting rules. As the model training is restricted to experience replays, no exploration was possible. 
The trained agent will not know how to act in a vast part of the state space, so setting some rules to guide it back to familiar territories can be quite helpful. I decided that tuning the imitator agents would be a better use of my time than an RL moonshot. Taking the safer bet paid off and gave me two agents with ~1,300 scores on the final day.</p> <p>Some details (after some quick browsing, there seem to be a few teams using similar approaches to mine. I haven't read through them yet, so I apologize if I did not reference already-shared content properly):</p> <ul> <li>Episodes: Only episodes in which both agents were high-ranking were used. It was most likely a mistake. It probably made my agents perform poorly when playing against low-ranking agents. A better approach might be a submission-based filter. </li> <li>Features: Active-player-centered coordinates; coordinates and velocities of the goalkeepers are kept separately from the rest; distance-sorted coordinates, distances, angles, and velocities for the rest; sprint and dribble sticky states; ball features; an offside feature (doesn't seem to work as expected).</li> <li>Stochastic actions: I reduced the temperature of the softmax function (by doubling the logits). This seems to work better than deterministic/greedy actions.</li> <li>Rules:<ul> <li>Using direction sticky states in features leads to overfitting and erratic behaviors, so I removed them. The downside is having the player high-passing the ball back to the goalie in free-kicks and resulting in own-goals. So I instructed the player to simply go right in free-kicks.</li> <li>The model controls only the normal and throw-in game modes. Training the model to play other modes seems to be a waste of time.</li> <li>No dribbling.</li> <li>Sprinting is entirely rule-based. 
I had found this to work slightly better in the final days as it doesn't waste precious time giving the sprint command in critical moments.</li> <li>Turn the player back when they are taking the ball straight out of the field.</li> <li>The defense is the trickiest part. In early experiments, the models would give absurd commands when the ball controlled by other players is too far from the active player (possibly due to the lack of exploration during training). Therefore, I kept the agents on a tight leash when defending. The models were only allowed to take over when the ball is near enough or in high-passes; the running-towards-the-ball strategy is used otherwise. This approach gives away many easy shot opportunities to the other teams. Only on the last day did I realize that the leash was too tight for the latest models. Loosening the leash gave better results in local tests. Unfortunately, I only had two submissions left at that point, so I could not properly tune this hyper-parameter.</li></ul></li> <li>Local testing: At first, I let my agent play against the easy built-in AI to see if it could consistently beat the AI by more than five goals. Later I found that the larger goal margin in easy mode doesn't translate to better performance against high-ranking agents, so I switched to the hard built-in AI.</li> </ul> <p>My code is public on GitHub (<a href="https://github.com/ceshine/google-football-2020" target="_blank">ceshine/google-football-2020</a>). Currently, it lacks proper documentation and environment reproducibility. I'll try to make it better in the coming days. </p>
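The "stochastic actions" bullet above — doubling the logits, which halves the softmax temperature — can be sketched like this; the logit values and the helper name are illustrative, not from the repository:

```python
import numpy as np

def action_probs(logits, scale=2.0):
    """Softmax over logits multiplied by `scale`, i.e. temperature 1/scale.
    scale=2.0 reproduces the "doubling the logits" trick described above."""
    z = np.asarray(logits, dtype=float) * scale
    z -= z.max()              # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)
logits = [1.0, 0.5, 0.1]                      # made-up policy-network outputs
p_sharp = action_probs(logits)                # sharper than plain softmax
action = rng.choice(len(p_sharp), p=p_sharp)  # stochastic action choice
```

With scale=1.0 this reduces to plain softmax; larger scales push the sampling toward the greedy action while keeping some randomness.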
Mechanisms of Action (MoA) Prediction
Post-Processing Based on Drug Classification (15th place)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>It turns out, it’s very easy to build a model that can detect one of the 8 most frequent drugs with more than 99% accuracy. And once you identify the drug you can predict all 206 labels.</p> <p>How exactly can this information be used? Initially, I experimented with applying various thresholds based on the classification model’s confidence. But the approach that worked best was using the classification confidence as the clip value to post-process the results produced by other models (or blends). </p> <p>For example, if the classification model predicts drug A with 99.9% confidence:</p> <ul> <li>Clip model predictions to the minimum of 0.999 for targets corresponding to 1's for drug A</li> <li>Clip model predictions to the maximum of 0.001 for targets corresponding to 0's for drug A. <br> This last bit can be further improved by using the squared confidence, i.e. 1e-6 instead of 1e-3 in my example. I cannot fully explain why this squaring works - my intuition is that this is because of how imbalanced the targets are, so the priors are already very low. 
</li> </ul> <p>This post-processing approach gave a nice boost (~0.00005) to my models, both CV and LB.</p> <p>Classification notebook (fastai): <a href="url" target="_blank">https://www.kaggle.com/dmitryvyudin/moa-drug-classification-fastai/output</a></p> <p>CV and Inference - based on a popular public notebook: <a href="url" target="_blank">https://www.kaggle.com/dmitryvyudin/pytorch-transfer-learning-with-post-processing</a></p> <p>Before post-processing: CV - 0.01564, public LB - 0.01835<br> After post-processing: CV - 0.01559, public LB - 0.01831</p>
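A minimal NumPy sketch of the clipping rule above. The names are stand-ins: `preds` holds another model's (n_samples, 206) predictions, `drug_targets` is the identified drug's 0/1 target vector, and `conf` is the drug classifier's confidence.

```python
import numpy as np

# Illustrative post-processing sketch; `preds`, `drug_targets`, and `conf`
# are made-up stand-ins for the blend output and the drug classifier output.
rng = np.random.default_rng(0)
preds = rng.random((1, 206))              # predictions to post-process
drug_targets = np.zeros(206, dtype=bool)
drug_targets[[3, 57]] = True              # targets that are 1 for the identified drug
conf = 0.999                              # drug classifier's confidence

# Clip up to `conf` for the drug's 1's, and down for its 0's;
# squaring (1 - conf) gives ~1e-6 instead of 1e-3, as described above.
preds[0, drug_targets] = np.maximum(preds[0, drug_targets], conf)
preds[0, ~drug_targets] = np.minimum(preds[0, ~drug_targets], (1 - conf) ** 2)
```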
Mechanisms of Action (MoA) Prediction
56th solution and my share
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the Kaggle officials, thanks to the competition, and thanks to my team. I only started taking the competition seriously one month before it ended, and I was sick for much of that month. Fortunately, I persisted to the end and got a result that was enough to comfort myself.<br> My solution is a combination of one TabNet, three NNs, and one ResNet model.</p> <p><strong>1. ResNet</strong><br> Full reference from: <a href="url" target="_blank">https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model?scriptVersionId=44981070</a>. This is the model I integrated on the last day. I haven’t made any changes, but it brings me an improvement of 0.00002 on the Private Score. I saw that its public score is 0.01854, and its private score is 0.01636.</p> <p><strong>2. The first NN model</strong><br> Reference from: <a href="url" target="_blank">https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features?scriptVersionId=46973557</a>. This is also the model I integrated on the last day; I did not make any changes. It improved neither my LB nor my private score, but it did not hurt them either. I saw that its public score is 0.01836, and its private score is 0.01626.</p> <p><strong>3. The second NN model</strong><br> Also referenced from <a href="url" target="_blank">https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features?scriptVersionId=46973557</a>, but I removed PCA and used Chris Deotte's method of splitting and adjusting the parameters. I saw that its public score is 0.01837, and its private score is 0.01626.</p> <p><strong>4. 
TabNet:</strong><br> Reference from: <a href="url" target="_blank">https://www.kaggle.com/optimo/selfsupervisedtabnet</a>. I removed the self-supervised part, because it did not bring me any improvement. I also removed the PCA and other operations; I did not trust them at the time. Using Chris Deotte's method of splitting, it gets public 0.01839 and private 0.01620.</p> <p><strong>5. The third NN model</strong><br> Reference from: <a href="url" target="_blank">https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids</a>. The pre-training method here benefited me a lot. I also removed the PCA part; using Chris Deotte's method of splitting, it gets 0.01834 public and 0.01625 private.</p> <p>My blend does not refer to CV, because the CV gap between models is very large, so I referred to the LB to make adjustments. Fortunately, I was lucky.</p> <p>All of my code is learned from kernels, and there are many good kernels and discussions I haven't listed. This is what makes me love Kaggle. You can always learn, and someone is willing to share and help you.</p> <p>Next, I want to share some of the experience I gained in the competition.<br> 1. After getting the baseline, the first modification is to separate training and inference, save OOF predictions during training, and create files with different folds first. Also set up a reliable cross-validation.</p> <p>2. The output of each training process should be checked carefully. Using early stopping can save a lot of training resources and time. By observing how training on the first fold progresses, you can infer the quality of the schedule used.</p> <p>3. The schedule is a part that must be tuned. First find a suitable schedule for the baseline, and then adjust the other parameters.</p> <p>4. At the beginning, when testing parameters on the baseline, you should first pay attention to the training time. 
If the time is very short, you might as well run two seeds, so that on average you can test out good parameters more stably and avoid overfitting to a seed. Of course, in many competitions it is impossible to test two seeds together.</p> <p>5. Note that for competitions that require kernel inference, you must run under the GPU, and must only run inference, not training.</p> <p>6. Note that kernels created at different times may have different installed environments, such as different versions of pytorch.</p> <p>Some of my ideas may be wrong; please correct me if so. In any case, persistence is the most important quality for participating in kaggle competitions.😊</p>
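The final submission combines the five models above; a weighted-average sketch (shapes and weights are illustrative stand-ins — the actual weights were tuned against the LB rather than CV):

```python
import numpy as np

# Illustrative weighted blend of per-model MoA predictions.
rng = np.random.default_rng(0)
model_preds = [rng.random((100, 206)) for _ in range(5)]  # TabNet, 3 NNs, ResNet
weights = np.array([0.2, 0.2, 0.2, 0.2, 0.2])             # made-up; should sum to 1

blend = np.tensordot(weights, np.stack(model_preds), axes=1)
```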
Google Research Football with Manchester City F.C.
My Human Learning bot (rank 31st)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research Football with Manchester City F.C. <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank Kaggle, Google, and Manchester City for creating this competition. It has been a fun learning experience.</p> <p>My approach is rule-based and strictly human learning. :) I started with the excellent notebook by Yegor Biryukov's <a href="https://www.kaggle.com/yegorbiryukov/gfootball-with-memory-patterns" target="_blank">notebook</a> with the deterministic script from <a href="https://www.kaggle.com/raffaelemorganti/rule-based-way" target="_blank">Raffaele Morganti</a>. My sincere gratitude to both of them for making my participation in this competition much easier. Thanks also should go to all the top teams. They lead the way and inspire all of us to do better.</p> <p>I made these improvements from Yegor's notebook based on what I glean from the replay.</p> <ul> <li>Defensive issues:</li> </ul> <ol> <li>Limit blind Action.Shot up field to goalie only. The reason for this is I found that trying to boot the ball up field without looking at whether opponent players might be blocking is dangerous so I limit it to goalie who mostly has enough room in front of him to clear.</li> <li>For other players, I try to evade the chasing opponents before trying to pass while avoiding going into my own penalty box.</li> <li>Sliding is essential but doing it wildly can get your player sent off so I chose to slide only when the player is near both the ball and the opponent player for two steps. This is all trial-and-error.</li> <li>High ball overhead beyond our last defender leads to many goals so I handle this situation as a special case. 
The last defender does not chase the ball but chases the highest opponent player instead. This works reasonably well but is not foolproof in the two-on-two case where the ball is going to the second-highest attacker instead of the highest one. I tried to fix it, only to make it worse, so I gave up.</li> </ol> <ul> <li>Offensive issues:</li> </ul> <ol> <li>I noticed that quick passing and shooting without taking possession of the ball is the way top players score, so I mimic it by calculating whether the active player is going to be the first to the ball. If so, I issue the pass or shot command before the ball arrives. This is not without downside in that if the player turns out not to be first to the ball, he can stray too far from the opponent to prevent him from going for goal. Another sticky point is the fact that the passing command is delayed. If the player turns the wrong direction after taking possession of the ball, i.e. facing toward our own goal, it can result in an own goal. So, I made quick passing and shooting sticky by repeating it. This is risky, so I try to delay making this quick pass/shoot decision as late as possible before the ball reaches the player.</li> <li>When trying to dribble past defenders, I adjust the constant so we don't look at defenders that are a little bit behind us. This improves the score quite a bit.</li> </ol> <p>Overall, I'm happy with the result. I think I achieved rank 31st only because I had many submissions with different tuning parameters. What I thought was a good change turned out not to be that good in the end. The randomness of the game combined with the slowly converging LB score makes tuning very difficult. I wish I had implemented a local LB, but it might not have helped much because of the lack of variety in agents.</p> <p>Here is my <a href="https://www.kaggle.com/huiunited/hui-gfootball-with-memory-patterns-from-yegor" target="_blank">notebook</a>.</p> <p>Thank you and hope to see you guys again in the next competition. 
I will definitely learn and use ML by then. :)</p>
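The two-step sliding condition from defensive point 3 can be sketched as a small stateful gate; the predicate names and the exact structure are illustrative, not taken from the notebook:

```python
class SlideGate:
    """Only slide when the active player has been near both the ball and an
    opponent for two consecutive steps (names here are made up for illustration)."""

    def __init__(self, required_steps: int = 2):
        self.required_steps = required_steps
        self.streak = 0

    def should_slide(self, near_ball: bool, near_opponent: bool) -> bool:
        if near_ball and near_opponent:
            self.streak += 1       # condition held for one more step
        else:
            self.streak = 0        # condition broken: restart the count
        return self.streak >= self.required_steps
```

A gate like this avoids wild slides (and red cards) triggered by a single noisy frame.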
Riiid Answer Correctness Prediction
2nd Place Solution (LSTM-Encoded SAKT-like TransformerEncoder)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you to all the teams who competed with me, all the people who participated in this competition, and the organizer that hosted such a great competition with the well-designed API!<br> Congrats <a href="https://www.kaggle.com/keetar" target="_blank">@keetar</a>, who defeated me and became the winner of this competition.</p> <p>I'm happy because it's the first time I've won a solo prize!</p> <p>It's my 4th kaggle competition and it was fun to compete with my past teammates ( <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>, <a href="https://www.kaggle.com/pocketsuteado" target="_blank">@pocketsuteado</a>) and people I competed with in past competitions (e.g. <a href="https://www.kaggle.com/aerdem4" target="_blank">@aerdem4</a>, <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>). <br> Here, I will explain the summary of my model and my features. </p> <p>I uploaded 2 kaggle notebooks for the explanation. As the ensemble is not so important in my solution, I will only explain my single model. 
</p> <ol> <li><a href="https://www.kaggle.com/mamasinkgs/public-private-2nd-place-solution" target="_blank">6 similar models weighted average</a> : 0.817 public/0.818 private</li> <li><a href="https://www.kaggle.com/mamasinkgs/public-private-2nd-place-solution-single-fold" target="_blank">single model</a> : 0.814 public/0.816 private</li> </ol> <h1>Models</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F762938%2F39ec58b701e6d37701cc794763d3a473%2F2nd_place.png?generation=1610213712807228&amp;alt=media" alt=""></p> <h2>Overview</h2> <p>My model is similar to SAKT, with sequence length 400, dimension 512, and 4 nheads. <br> I don't use lecture information for the input of the transformer model, which I guess is why I lost in this competition. For the query and key/value of the SAKT-like model, I used LSTM-encoded features, whose input is as follows.</p> <ul> <li><p><strong>"Query" features</strong><br> content_id<br> part<br> tags<br> normalized timedelta<br> normalized log timestamp <br> correct answer<br> task_container_id delta<br> content_type_id delta<br> normalized absolute position </p></li> <li><p><strong>"Memory" features</strong><br> explanation<br> correctness<br> normalized elapsed time<br> user_answer</p></li> </ul> <h2>Detailed Explanation of Training/Inference Process</h2> <p>I tried a very precise indexing/masking technique to avoid data leakage in the training process, which I guess is partly why I took 2nd place. 
OK, suppose a very simple model of sequence length = 5, and a task_container_id history of a specific user (without lectures) like this:<br> <code>[0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 8, 7, 7, 6]</code><br> As there are 15 measurements for this user, I made 15/5 = 3 training samples and loss masks.</p> <p>(1) input task_container_id: <code>[pad, pad, pad, pad, pad, 0, 0, 0, 1, 1]</code> <br> (1) loss_mask: <code>[False, False, False, False, False, True, True, True, True, True]</code><br> (2) input task_container_id: <code>[0, 0, 0, 1, 1, 2, 2, 3, 3, 4]</code> <br> (2) loss_mask: <code>[False, False, False, False, False, True, True, True, True, True]</code><br> (3) input task_container_id: <code>[2, 2, 3, 3, 4, 5, 8, 7, 7, 6]</code> <br> (3) loss_mask: <code>[False, False, False, False, False, True, True, True, True, True]</code></p> <p>In this competition, the handling of the task_container_id is very important, as <strong>it is not allowed to use "memory" features of the same task container id</strong> to avoid leakage.<br> So, after applying the LSTM to the features, such fancy indexing is required to avoid the leakage for the 3 training samples, where -1 means this position can't attend any position.</p> <p>(1) indices: <code>[-1, -1, -1, -1, -1, 4, 4, 4, 7, 7]</code><br> (2) indices: <code>[-1, -1, -1, 2, 2, 4, 4, 6, 6, 8]</code><br> (3) indices: <code>[-1, -1, 1, 1, 3, 4, 5, 6, 6, 8]</code></p> <p>To get these indices very fast in the training/inference phase, I wrote the following Cython function (#1, vectorized implementation):</p> <pre><code>%%cython
import numpy as np
cimport numpy as np

cpdef np.ndarray[int] cget_memory_indices(np.ndarray task):
    cdef Py_ssize_t n = task.shape[1]
    cdef np.ndarray[int, ndim = 2] res = np.zeros_like(task, dtype = np.int32)
    cdef np.ndarray[int] tmp_counter = np.full(task.shape[0], -1, dtype = np.int32)
    cdef np.ndarray[int] u_counter = np.full(task.shape[0], task.shape[1] - 1, dtype = np.int32)
    for i in range(n):
        res[:, i] = u_counter
        tmp_counter += 1
        if i != n - 1:
            mask = (task[:, i] != task[:, i + 1])
            u_counter[mask] = tmp_counter[mask]
    return res
</code></pre> <p>After applying such fancy indexing to the output LSTM features, I concatenated them with the "Query" features and applied an MLP, which gives the query for the SAKT model. Then, I obtained the key/value for the SAKT model by applying an MLP to the query concatenated with the "Memory" features. </p> <p>To train the SAKT-like model, precise memory masking is also required to avoid leakage. I used the <a href="https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html" target="_blank">3D attention mask</a> of torch.nn.MultiheadAttention. It should be noted that <a href="https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html" target="_blank">torch.repeat_interleave</a> must be leveraged to make the 3D mask (batchsize * nhead, sequence length, sequence length). <br> The memory masks for these 3 samples are like this. Here, 1 means True and 0 means False. Please remember the <a href="https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html" target="_blank">documentation</a> says </p> <pre><code>attn_mask ensure that position i is allowed to attend the unmasked positions. If a BoolTensor is provided, positions with True is not allowed to attend while False values will be unchanged. 
</code></pre> <p>(1) memory_mask:</p> <pre><code>array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
       [1, 1, 1, 0, 0, 0, 0, 0, 1, 1],
       [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]])
</code></pre> <p>(2) memory_mask:</p> <pre><code>array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
       [0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
       [1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
       [1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
       [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]])
</code></pre> <p>(3) memory_mask:</p> <pre><code>array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
       [0, 0, 1, 1, 1, 1, 1, 1, 1, 1],
       [0, 0, 1, 1, 1, 1, 1, 1, 1, 1],
       [0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
       [1, 0, 0, 0, 0, 0, 1, 1, 1, 1],
       [1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
       [1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
       [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]])
</code></pre> <p>To make this memory mask very fast in the training phase, I wrote the following Cython function (#2, not vectorized):</p> <pre><code>%%cython
import numpy as np
cimport numpy as np

cpdef np.ndarray[int] cget_memory_mask(np.ndarray task, int n_length):
    cdef Py_ssize_t n = task.shape[0]
    cdef np.ndarray[int, ndim = 2] res = np.full((n, n), 1, dtype = np.int32)
    cdef int tmp_counter = 0
    cdef int u_counter = 0
    for i in range(n):
        tmp_counter += 1
        if i == n - 1 or task[i] != task[i + 1]:
            res[i - tmp_counter + 1 : i + 1, :u_counter] = 0
            if u_counter == 0:
                res[i - tmp_counter + 1 : i + 1, n - 1] = 0
            if u_counter &gt; n_length:
                res[i - tmp_counter + 1: i + 1, :(u_counter - n_length)] = 1
            u_counter += tmp_counter
            tmp_counter = 0
    return res
</code></pre> <p>In the inference phase, memory mask is not required. 
For more details, please look at my <a href="https://www.kaggle.com/mamasinkgs/public-private-2nd-place-solution-single-fold" target="_blank">single model notebook</a>.</p> <h1>My Features</h1> <p>Transformer is great, but it suffers from the problem that the sequence length cannot be infinite. I mean, it cannot consider the information of very old samples. To tackle this problem and to leverage the lecture information, I made simple features, concatenated them with the output features of the SAKT-like model, and applied an MLP and sigmoid.<br> I uploaded the names of the 90 features as attachments. For the implementation of these features, I didn't use any pd.merge or df.join, and most of the implementation is done using numpy. To make user-content features, I used scipy.sparse.lil_matrix to reduce memory usage. I think my features are not as good as other competitors' (about 0.795 when using GBDT), but they still improved the score by 0.001 ~ 0.002.<br> I made some tricky features (e.g. obtained by SVD), but they did not improve the score of the NN model (they improved the GBDT model, though), probably because the information of the NN features includes that of such tricky features.</p> <h1>CV Strategy</h1> <p>My CV strategy is completely different from tito( <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> )'s one. First, I probed the number of new users in the test set and found there are about 7000 new users. As we know there are 2.5M rows in the test set and we know the average length of the history of all users, we can estimate the number of rows of the new users and the number of rows of the existing users. After the calculation, I found</p> <pre><code>the number of rows of new users (i.e. user split): the number of rows of existing users (i.e. timeseries split) = 2 : 1. </code></pre> <p>So, for validation, I decided to use 1M rows for the timeseries split and 2M rows for the user split.<br> When making the validation set of the timeseries split, I was careful that leakage couldn't happen. 
I mean, my training dataset and validation dataset never share the same task_container_id for a given user.</p> <h1>What worked</h1> <ol> <li>Increasing the sequence length from 100 to 400 worked.</li> <li>Using normalized timedelta is better than digitized timedelta (mentioned in the SAINT+ paper).</li> <li>Concatenating embeddings is better than adding embeddings, as mentioned in <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/201798" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/201798</a>. </li> <li>Dropout = 0.2 is very important.</li> <li>StepLR with Adam is good. I trained my model for about 35 epochs with lr = 2e-3, then trained it for 1 epoch with lr = 2e-4.</li> </ol> <h1>What didn't work</h1> <ol> <li>Random masking of sequences didn't improve the score.</li> <li>bundle_id and normalized task_container_id are not needed for the input of the LSTM.</li> <li>As I didn't make diverse models, weighted averaging is enough and blending using GBDT didn't work well.</li> <li>Full data training (without any validation data) didn't improve the score.</li> <li>In my implementation, SAINT didn't work well. In my opinion, it's logically very hard to implement SAINT without data leakage in training due to task_container_id.</li> </ol> <h1>Comments</h1> <p>I was very sad to see the private score bug problem of kaggle. Of course this competition is very great, but I think it could have been one of the most successful competitions on kaggle without this problem.<br> Anyway, I really enjoyed my 4th kaggle competition. I'll continue kaggle and will surely become the winner in the next competition!</p> <h1>Thanks all, see you again!</h1> <p>P.S. 
The word <code>memory_mask</code> may be confusing because it is different from pytorch <a href="https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html#torch.nn.Transformer" target="_blank">Transformer</a>'s <code>memory_mask</code>.<br> The word <code>memory_mask</code> is more like <code>mask</code> in pytorch's <a href="https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html" target="_blank">TransformerEncoder</a>.<br> The reason my naming is confusing is that I used an encoder-decoder model at first, then gave it up and switched to an encoder-only model, but I didn't change the function names. :(</p>
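As a readability aid, the container-boundary indexing computed by cget_memory_indices can be rendered as a single-sequence, pure-NumPy sketch; it reproduces example (2) above, with the dummy index n-1 playing the role of -1:

```python
import numpy as np

def get_memory_indices(task):
    """Single-sequence version of cget_memory_indices: position i may attend
    key/value positions up to the last index of the *previous* task container;
    index n-1 is the dummy "cannot attend any position" slot (the -1 above)."""
    n = len(task)
    res = np.zeros(n, dtype=np.int32)
    tmp_counter = -1
    u_counter = n - 1                  # dummy slot until the first container closes
    for i in range(n):
        res[i] = u_counter
        tmp_counter += 1
        if i != n - 1 and task[i] != task[i + 1]:
            u_counter = tmp_counter    # last index of the just-finished container
    return res

get_memory_indices([0, 0, 0, 1, 1, 2, 2, 3, 3, 4])
# -> [9, 9, 9, 2, 2, 4, 4, 6, 6, 8]  (9 == n-1 stands for the -1 above)
```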
Riiid Answer Correctness Prediction
20th solution, transformer encoder only model (SERT) with code and notebook
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers for hosting a comp with very little shakeup. This is always nice to see. I wanna thank <a href="https://www.kaggle.com/wangsg" target="_blank">@wangsg</a> and <a href="https://www.kaggle.com/leadbest" target="_blank">@leadbest</a> for the starter kernels. Without those I would not have known what to do in this difficult comp.</p> <h1>Architecture</h1> <p>I use an encoder-only transformer, SERT (SIngle-directional Encoder Representation from Transformers); to make predictions, there is just one linear layer after the last encoder layer. This is probably a mistake.</p> <h1>Embeddings</h1> <ol> <li>Question_id</li> <li>Prior question correctness</li> <li>Timestamp difference between bundles</li> <li>Prior question elapsed time</li> <li>Prior question explanation</li> <li>Tag cluster thanks to <a href="https://www.kaggle.com/spacelx" target="_blank">@spacelx</a> </li> <li>Tag vector</li> <li>Fixed pos encoding, same as in Attention is All You Need</li> </ol> <h1>Key modifications</h1> <ol> <li>encoder only, this is kind of stupid and a mistake. I think this cost me a few places</li> <li>fixed pos encoding, which allows retraining the model with longer sequences</li> <li>layer norm and dropout after embedding layers</li> <li>Loss weight favoring later positions, np.arange(0,1,1/seq_length)*loss</li> </ol> <h1>Mistakes I couldn't resolve</h1> <ol> <li>Intra-bundle leakage, I made a nice task mask implementation, but it only made my score worse, probably because of the lack of a decoder. 
I ended up using just an autoregressive mask</li> <li>The first few positions during inference and training may have a wrong timestamp difference since I simply do t[1:]-t[:-1]</li> </ol> <h1>Code and submission kernel</h1> <p><a href="https://github.com/Shujun-He/Riiid-Answer-Correctness-Prediction-20th-solution" target="_blank">https://github.com/Shujun-He/Riiid-Answer-Correctness-Prediction-20th-solution</a></p> <p><a href="https://www.kaggle.com/shujun717/fork-of-tag-encoding-with-loss-weight?scriptVersionId=51272583" target="_blank">https://www.kaggle.com/shujun717/fork-of-tag-encoding-with-loss-weight?scriptVersionId=51272583</a></p> <h2>Feel free to ask questions. This is a very brief write-up</h2>
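A minimal sketch of the loss weighting favoring later positions described above — the `np.arange(0,1,1/seq_length)` expression is from the write-up, but the mean reduction is an assumption:

```python
import numpy as np

def position_weighted_loss(per_position_loss, seq_length):
    # Later sequence positions have more history behind them, so their
    # predictions are weighted more heavily; position 0 gets weight 0.
    weights = np.arange(0, 1, 1 / seq_length)  # shape: (seq_length,)
    return (weights * per_position_loss).mean()
```

With a constant per-position loss of 1.0 over 10 positions this reduces to the mean of the weights, 0.45.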
Riiid Answer Correctness Prediction
[70th] My transformer solution (with code)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congratulations to all winners!</p> <p>Hope you all enjoyed and learned something in this great competition (ok, submission errors were painful …)</p> <p>I want to share my solution briefly here, although it is not the one that got my private LB score (I had difficulty debugging my final version).</p> <p>My solution is a transformer model, and here is the summary:</p> <ul> <li><p>Training / Validation is done with TensorFlow TPU<br> * This is quite fast, 1 epoch takes 5 - 6 minutes (1M sequences of length 128)<br> * Check <a href="https://www.kaggle.com/yihdarshieh/tpu-track-knowledge-states-of-1m-students" target="_blank">TPU - Track knowledge states of 1M+ students in the wild</a>.<br> * Use Google Colab TPU for training </p></li> <li><p>Use elapsed time / lag time : LB: 0.781 -&gt; 0.795 ~ 0.797</p></li> <li><p>I also included the loss for predicting the user's actual response, not only the response correctness. This lets training run longer without overfitting. LB: 0.797 -&gt; 0.800</p></li> <li><p>I included the prior counts of each part and each correct answer (of the questions), and their correctness ratios. On CV, 0.807 / 0.808 -&gt; 0.809 / 0.810.</p></li> <li><p>Finally, I included the correct-answer count and correct-answer ratio of each question seen by each user. On CV 0.809 / 0.810 -&gt; 0.813.</p></li> </ul> <p>Here is the link to my notebook:</p> <p><a href="https://www.kaggle.com/yihdarshieh/r3id-transformer/" target="_blank">R3ID - Transformer</a></p> <p>ps. Since I tried to fix the inference bug, the code is somewhat messy …</p>
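The auxiliary loss on the user's actual response described above can be sketched as a combined objective; the head shapes, the categorical form over the 4 answer choices, and the auxiliary weight are illustrative assumptions, not the author's code:

```python
import numpy as np

def combined_loss(p_correct, y_correct, p_answer, y_answer, aux_weight=0.5):
    """Main binary loss on answer correctness, plus an auxiliary categorical
    loss on which of the 4 answer choices the user actually picked."""
    # Binary cross-entropy on correctness
    bce = -(y_correct * np.log(p_correct)
            + (1 - y_correct) * np.log(1 - p_correct)).mean()
    # Categorical cross-entropy on the chosen answer index
    ce = -np.log(p_answer[np.arange(len(y_answer)), y_answer]).mean()
    return bce + aux_weight * ce
```

Training on both targets gives the model a richer signal per interaction, which is one plausible reason it can train longer before overfitting.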
Riiid Answer Correctness Prediction
28th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>I want to thank the hosts for organizing a meaningful competition that reflected the critical constraints of real-world difficulties (time series API, limited RAM, and inference time). Also, I thank my teammate NARI, who worked hard with me until the end of the competition.</p> <h1>Overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1360119%2F5cfa0a830e6e45584a9088a794720ef7%2FOverview.jpg?generation=1610195752423547&amp;alt=media" alt=""></p> <p>Our final solution is an ensemble of Catboost and Transformer. Although the Transformer could not reach a high CV in time like the top teams' models, it contributed a lot to boosting the stacking model's score. Our solution's validation strategy and feature engineering pipeline were heavily influenced by tito's notebooks (<a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">validation strategy</a>, <a href="https://www.kaggle.com/its7171/lgbm-with-loop-feature-engineering" target="_blank">feature engineering</a>). We definitely would not have reached this score if he had not shared them early in the competition.</p> <h1>Features</h1> <p>We extracted 160 features for training Catboost. </p> <p>The main features are as follows. 
The detailed contribution to the score of each feature has not been confirmed, but the feature importance by CatBoost is available <a href="https://github.com/haradai1262/kaggle_riiid-test-answer-prediction/blob/main/notebook/check_feature_importance.ipynb" target="_blank">here</a></p> <ul> <li><p>Content-related features</p> <ul> <li><p>Aggregation features</p> <ul> <li>mean, std for answered_correctly</li> <li>mean for elapsed time</li> <li>We also used aggregation in records only for the second and subsequent answers to the same question by the user.</li></ul></li> <li><p>Word2vec features<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1360119%2Fa381717acbf8460946ef6455aef948c4%2Fword2vec%20figure.jpg?generation=1610196796564277&amp;alt=media" alt=""></p> <ul> <li>We extract Word2vec features using each user's answer history as sentences, each question_id and its associated part, tag_id, and lecture_id as words.</li> <li>We also extracted cases in which users' answer histories for only correct answers, and only incorrect answers were treated as separate sentences.</li> <li>The similarity and clustering by the word2vec feature was also used for other feature extraction.</li></ul></li> <li><p>Graph features<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1360119%2Fb4fd67c9bad9534451cf405bc27d04ef%2FGraph%20figure.jpg?generation=1610196761780811&amp;alt=media" alt=""></p> <ul> <li>Based on the users' answer histories, we created a directed graph, whose nodes are question_ids and edges' weights are defined by the number of users' transitions, and used the following node feature as question features.<ul> <li>node metrics (eigenvector_centrality, betweenness_centrality, trophic_levels <a href="https://networkx.org/documentation/stable/reference/algorithms/centrality.html" target="_blank">NetworkX document</a>)</li> <li>node embedding (DeepWalk <a 
href="http://www.perozzi.net/publications/14_kdd_deepwalk.pdf" target="_blank">paper</a>, struc2vec <a href="https://arxiv.org/pdf/1704.03165.pdf" target="_blank">paper</a>, <a href="https://github.com/shenweichen/GraphEmbedding" target="_blank">implementation</a>)</li> <li>SVD for adjacency matrix</li></ul></li></ul></li> <li><p>User history features</p> <ul> <li>Answer count, Correct answer rate<ul> <li>We use the user's correct answer rate and the correct answer rate in the last N times.</li> <li>We also used the correct answer rate in the first ten times, one day and one week.</li></ul></li> <li>Time-related<ul> <li>difftime (timestamp - previous_timestamp) worked especially well</li> <li>We used some features related to difftime, including difftime with the 5 most recent timestamps and their statistics.</li></ul></li> <li>Question-related<ul> <li>Whether the user has answered the target question in the past or not, and the number of times the user has answered the question in the past, worked well.</li> <li>We also used the correct answer rate for questions in the same cluster (clustered by k-means using similarities based on the Word2vec features described above, with the number of clusters set to 100)</li> <li>We also used some features related to the similarity between the target question and the recent questions or their parts (the similarities were based on the Word2vec features described above).</li></ul></li> <li>Part-related<ul> <li>The correct answer rate in the part of the question users are answering worked well.</li> <li>We also used the count and correct answer rate of each part.</li></ul></li> <li>Tag-related<ul> <li>The correct answer rate for each tag of the user is kept. 
The statistics (max, min, mean) of the user's correct answer rate for each tag in the question being answered are used.</li></ul></li> <li>Lecture-related<ul> <li>The number of answers since the user's most recent lecture worked best for lecture-related features.</li></ul></li></ul></li> </ul> <h1>Model</h1> <h3>Catboost</h3> <ul> <li>Data split: tito CV</li> <li>Input<ul> <li>160 features</li></ul></li> <li>CV: 0.803~0.804, PublicLB: 0.801~0.802</li> </ul> <h3>Transformer (SAINT-like model)</h3> <ul> <li>Data split: tito CV</li> <li>Input (sequence length 120)<ul> <li>Encoder<ul> <li>question_id, part, tag, difftime</li></ul></li> <li>Decoder<ul> <li>answered_correctly, elapsed_time</li></ul></li></ul></li> <li>CV: 0.797~0.798</li> </ul> <h1>Stacking</h1> <ul> <li>Model: Catboost</li> <li>Data split: k-fold CV (k=9) of 2.5M records</li> <li>Input<ul> <li>Predictions of Catboost (6 models, random seeds) and Transformer (2 models, a slight variation of hyperparameters and structure)</li> <li>Top 75 highest-importance features based on CatBoost feature importance</li></ul></li> <li>3 Seed average</li> <li>CV: 0.810, PublicLB: 0.807, PrivateLB: 0.809</li> </ul> <hr> <p>Thank you for your attention! Our code is available at</p> <ul> <li><a href="https://github.com/haradai1262/kaggle_riiid-test-answer-prediction" target="_blank">https://github.com/haradai1262/kaggle_riiid-test-answer-prediction</a></li> <li><a href="https://www.kaggle.com/haradataman/riiid-28th-solution-inference-only" target="_blank">Inference (kaggle notebook)</a></li> </ul>
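The Word2vec feature idea above — each user's answer history as a "sentence", with correct-only and incorrect-only histories treated as separate sentences — can be sketched as follows; the token naming and record layout are illustrative assumptions, and the resulting token lists would then be fed to a trainer such as gensim's Word2Vec:

```python
from collections import defaultdict

def build_sentences(records):
    """records: iterable of (user_id, content_id, answered_correctly).
    Returns one token 'sentence' per (user, correctness) stream, so each
    user's correct and incorrect histories become separate sentences."""
    correct, incorrect = defaultdict(list), defaultdict(list)
    for user_id, content_id, answered_correctly in records:
        bucket = correct if answered_correctly else incorrect
        bucket[user_id].append(f"q{content_id}")
    return list(correct.values()) + list(incorrect.values())
```

Question embeddings learned this way capture which questions co-occur in similar answer streams, which is what makes the downstream similarity and clustering features possible.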
Riiid Answer Correctness Prediction
11th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to my teammate <a href="https://www.kaggle.com/jusco11" target="_blank">Akihiko</a> for competing with me on this; there was great learning from the community for both of us. And thanks to our hosts for a wonderful challenge. Congrats to all who competed. </p> <p>Our solution was heavily inspired by Bestfitting's <a href="https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection/discussion/56262" target="_blank">TalkingData solution</a>, which we noticed a lot of the people in this competition took part in :) </p> <p>We used around 35 features including the raw ones, in a 2-layer LSTM model, each layer single-direction, trained on sequence length 256, inferred on 512. <br> Batch size for training: 2048. Hidden layer size: 512. <br> Example below. </p> <p>First layer,</p> <ul> <li>Used the features below in <code>embcatq</code>, no continuous features. </li> <li>No label in the first layer, like in the SAINT paper.</li> <li>Added the difference of some of the embeddings to the final embedding. This gives the model info on how similar each historical question was to the question in the sample. </li> </ul> <p>Second layer, </p> <ul> <li>Outputs of the first layer plus the continuous features. </li> <li>Added an embedding for the interaction of question and chosen answer. </li> <li>Continuous features generated using <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a>'s great notebook - this is why I would like to be able to give multiple upvotes.</li> <li>One important feature was the answer ratio: what percentage of students picked the same answer as the chosen answer. 
</li> </ul> <p>How to handle the histories in memory was a problem, but there was plenty of space for them on the GPU, so we loaded those first to numpy, then to a torch tensor on GPU (history features took around 6GB); then loaded the other objects to RAM. <br> Attention did not work for us - it looked promising on validation though - we should have persisted with it. We just took the final hidden cell from the LSTM as output. <br> The below got ~0.811 public; by making some changes to the architecture and bagging four models, this lifted to 0.813 public, 0.816 private. </p> <pre><code>class LearnNet(nn.Module):
    def __init__(self, modcols, contcols, padvals, extracols, dropout=0.2, hidden=args.hidden):
        super(LearnNet, self).__init__()
        self.dropout = nn.Dropout(dropout)
        self.modcols = modcols + extracols
        self.contcols = contcols
        self.emb_content_id = nn.Embedding(13526, 32)
        self.emb_content_id_prior = nn.Embedding(13526 * 3, 32)
        self.emb_bundle_id = nn.Embedding(13526, 32)
        self.emb_part = nn.Embedding(9, 4)
        self.emb_tag = nn.Embedding(190, 8)
        self.emb_lpart = nn.Embedding(9, 4)
        self.emb_prior = nn.Embedding(3, 2)
        self.emb_ltag = nn.Embedding(190, 16)
        self.emb_lag_time = nn.Embedding(301, 16)
        self.emb_elapsed_time = nn.Embedding(301, 16)
        self.emb_cont_user_answer = nn.Embedding(13526 * 4, 5)
        self.tag_idx = torch.tensor(['tag' in i for i in self.modcols])
        self.cont_wts = nn.Parameter(torch.ones(len(self.contcols)))
        self.cont_wts.requires_grad = True
        self.cont_idx = [self.modcols.index(c) for c in self.contcols]
        self.embedding_dropout = SpatialDropout(dropout)
        self.diffsize = self.emb_content_id.embedding_dim + self.emb_part.embedding_dim + \
            self.emb_bundle_id.embedding_dim + self.emb_tag.embedding_dim * 7
        IN_UNITSQ = self.diffsize * 2 + \
            self.emb_lpart.embedding_dim + self.emb_ltag.embedding_dim + \
            self.emb_prior.embedding_dim + self.emb_content_id_prior.embedding_dim + \
            len(self.cont_idxcts)
        IN_UNITSQA = (self.emb_lag_time.embedding_dim + self.emb_elapsed_time.embedding_dim +
                      self.emb_cont_user_answer.embedding_dim) + len(self.contcols)
        LSTM_UNITS = hidden
        self.seqnet1 = nn.LSTM(IN_UNITSQ, LSTM_UNITS, bidirectional=False, batch_first=True)
        self.seqnet2 = nn.LSTM(IN_UNITSQA + LSTM_UNITS, LSTM_UNITS, bidirectional=False, batch_first=True)
        self.linear1 = nn.Linear(LSTM_UNITS * 2 + len(self.contcols), LSTM_UNITS // 2)
        self.bn0 = nn.BatchNorm1d(num_features=len(self.contcols))
        self.bn1 = nn.BatchNorm1d(num_features=LSTM_UNITS * 2 + len(self.contcols))
        self.bn2 = nn.BatchNorm1d(num_features=LSTM_UNITS // 2)
        self.linear_out = nn.Linear(LSTM_UNITS // 2, 1)

    def forward(self, x, m=None):
        ## Continuous
        contmat = x[:, :, self.cont_idx]
        contmat = self.bn0(contmat.permute(0, 2, 1)).permute(0, 2, 1)
        contmat = contmat * self.cont_wts
        content_id_prior = x[:, :, self.modcols.index('content_id')] * 3 + \
            x[:, :, self.modcols.index('prior_question_had_explanation')]
        embcatq = torch.cat([
            self.emb_content_id(x[:, :, self.modcols.index('content_id')].long()),
            self.emb_part(x[:, :, self.modcols.index('part')].long()),
            self.emb_bundle_id(x[:, :, self.modcols.index('bundle_id')].long()),
            self.emb_tag(x[:, :, self.tag_idx].long()).view(x.shape[0], x.shape[1], -1),
            self.emb_prior(x[:, :, self.modcols.index('prior_question_had_explanation')].long()),
            self.emb_lpart(x[:, :, self.modcols.index('lecture_part')].long()),
            self.emb_ltag(x[:, :, self.modcols.index('lecture_tag')].long()),
            self.emb_content_id_prior(content_id_prior.long()),
        ], 2)
        embcatqdiff = embcatq[:, :, :self.diffsize] - embcatq[:, -1, :self.diffsize].unsqueeze(1)
        # Categorical embeddings
        embcatqa = torch.cat([
            self.emb_cont_user_answer(x[:, :, self.modcols.index('content_user_answer')].long()),
            self.emb_lag_time(x[:, :, self.modcols.index('lag_time_cat')].long()),
            self.emb_elapsed_time(x[:, :, self.modcols.index('elapsed_time_cat')].long())
        ], 2)
        # embcatqadiff = embcatqa - embcatqa[:, -1].unsqueeze(1)
        embcatq = self.embedding_dropout(embcatq)
        embcatqa = self.embedding_dropout(embcatqa)
        embcatqdiff = self.embedding_dropout(embcatqdiff)
        # Weighted sum of tags - hopefully good weights are learnt
        xinpq = torch.cat([embcatq, embcatqdiff], 2)
        hiddenq, _ = self.seqnet1(xinpq)
        xinpqa = torch.cat([embcatqa, contmat, hiddenq], 2)
        hiddenqa, _ = self.seqnet2(xinpqa)
        # Take last hidden unit
        hidden = torch.cat([hiddenqa[:, -1, :], hiddenq[:, -1, :], contmat[:, -1]], 1)
        hidden = self.dropout(self.bn1(hidden))
        hidden = F.relu(self.linear1(hidden))
        hidden = self.dropout(self.bn2(hidden))
        out = self.linear_out(hidden).flatten()
        return out
</code></pre>
Riiid Answer Correctness Prediction
12th place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F843848%2F94885ef1fcce200915129d79e234b4ea%2Friiid-solution.png?generation=1610081122634658&amp;alt=media" alt="riiid-stack"></p> <p>First of all, I want to thank both the hosts (Kaggle and Riiid) for hosting this competition.<br> It was a really challenging competition in many ways, and really tested my limits. <br> The competition was also very clean (albeit the privateLB bug unrelated to this competition). A very clean and logical train/test split/correlation is what makes this competition shine.</p> <hr> <h1>Overall solution:</h1> <p>For our single models, the NN scored around 0.806-0.810 (privateLB), the LGBM 0.804.<br> We stacked these single models with NN and LGBM to achieve a score of 0.816.<br> I suspect our single models were mediocre in terms of score, but the stacking got us to gold.</p> <p>Regarding our 5-place drop on the privateLB, we kind of expected it. We knew that one of our models (SAINT) had a bad private score because of the private score leak, but we couldn't quite fix it. Even after the competition ended, we still have no clue as to why the model suffers in the privateLB.</p> <hr> <h1>Validation:</h1> <p>We used tito-split throughout most of the competition period.<br> We mostly experienced good Valid-LB correlation, which I believe is the same for others. 
The single models used a 95M/5M split, and the stack used the 5M for training/validation.<br> We also tried user-split as our final submission for diversity, but this had no impact on our final score.</p> <hr> <h1>LGBM details:</h1> <h4>Resource management</h4> <p>(When generating features for submission)<br> I used h5py for memory-hungry features like (user x content features, user x tag features).<br> The h5py file uses user_id as the key, and reads in the whole user-feature block when hitting a new user while predicting.<br> This creates a good balance between memory and runtime. Since there are not many unique users in the test-set (compared to train), the time spent on file IO is not too much, and huge amounts of memory are saved.<br> All the other features were pickled and loaded to memory in the beginning.</p> <h4>Features:</h4> <p>Question features:<br> These were made from the full training set (100M). <br> Obviously this is leaky, but since every single question has &gt;1000 counts, the leak is tolerable.<br> Example features: mean question ac, mean user rating of who did not answer correctly.</p> <p>User features:<br> How good the user is, especially related to parts, tags, contents.<br> Example features: user x contents mean question ac</p> <p>Timestamp features: <br> This was kind of a surprise to me. Not only was the timestamp diff of t and t-1 a good feature, the diffs of t-1 and t-2 up to t-9 and t-10 also improved my model.<br> Example features: user timestamp diff from last lecture</p> <p>Rating features:<br> Elo features from this <a href="https://www.kaggle.com/stevemju/riiid-simple-elo-rating" target="_blank">notebook</a> (thank you very much). Trueskill did not improve my model.<br> With only mean ac, the model cannot determine if the user is challenging hard questions or easy ones, so rating questions and users makes a lot of sense.</p> <p>SVD features:<br> LGBM is bad at expressing category columns (compared to NN). 
So I took the question embedding layer of the NN, and used its 20-dimensional SVD as features.</p> <h4>Feature selection:</h4> <p>I used about 70-80 features for my model.<br> Since we were stacking a lot of models, I didn't want to use too many resources with my LGBM (runtime, memory), so I picked features which had a lot of impact, and made my model contribute to the stacking-model. There were a lot of features which didn't improve my model much, and all of them were thrown away. </p> <h4>Hyper-param:</h4> <p>I didn't change this too much, but increasing num_leaves 127-&gt;1023 improved my score by 0.003, which was a surprise. This happened after I added lots of timestamp features, so there might be a very complicated interaction underlying the timestamps.</p> <h4>Machine:</h4> <p>I used GCP, a 64-core CPU with 416GB memory. Even with this monster machine, I ran out of memory a lot when generating the full features (which I avoided by processing in chunks).<br> I believe this instance cost me around $1000 over the competition (this is all paid by my company, and we are hiring btw). A lot of this cost happened because I was lazy (never used a preemptive instance), and I believe you could still be competitive with a $100 budget, even on this HUGE data, so don't be discouraged by the cost if you are a Kaggle beginner (I would recommend a competition with smaller data though :) )</p> <hr> <h1>Stacking details:</h1> <p>Stacking is always difficult. It often leads to overfitting.<br> In this competition, there was a very good Valid-LB correlation, so I guessed (correctly) that stacking could work.<br> I wanted to make sure this layer works, so I tried to do everything conservatively.</p> <h4>Validation:</h4> <p>3M/2M train/valid setup. For the final subs, I made multiple models with a time-series-split (negligible gain).</p> <h4>Input Models:</h4> <p>Multiple models from Sakami (SAINT based and AKT based), which had different window sizes. 
We couldn't fit in all the models, so we chose 3 as our final sub.<br> Owruby had another SAINT model which both added diversity and improved the stack a lot. Lyaka had a SSAKT model which also helped slightly.<br> I also added my LGBM model, which surprisingly improved the stack, even with the features added.</p> <h4>Input Features:</h4> <p>LGBM stack model: Hand selected 15 features from my single LGBM model. There was literally no gain from the other 65.<br> NN stack model: Selected features + the last layers from singleNNs. This was done by Lyaka.</p> <h4>Stacking models:</h4> <p>LGBM stack model: num_leaves was reduced 1023-&gt;127. <br> NN stack model: Lyaka did both MLP and a Transformer. Both had similar scores, and we used MLP as the final sub.</p> <h4>Final output:</h4> <p>Mean of LGBM and NN stack model</p> <hr> <h1>Final thoughts:</h1> <p>Although we couldn't quite achieve the goal we wanted (beating mamas), I think we tried our best and did our best.<br> Everyone contributed to the final stack, everyone did a ton of work, and we had great teamwork to make the stacking happen. I am really happy with the team we had, like I always had throughout my kaggle history. Thank you all.</p>
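The Elo rating features mentioned above follow the standard chess-style update; the K-factor and the 400-point logistic scale below are the conventional defaults, not necessarily the linked notebook's exact values:

```python
def elo_update(user_rating, question_rating, answered_correctly, k=20.0):
    # Expected probability that the user "beats" (answers) this question,
    # from the standard Elo logistic curve with a 400-point scale
    expected = 1.0 / (1.0 + 10 ** ((question_rating - user_rating) / 400.0))
    # User gains rating for beating expectation; the question loses it
    user_rating += k * (answered_correctly - expected)
    question_rating += k * (expected - answered_correctly)
    return user_rating, question_rating, expected
```

With equal ratings the expected score is 0.5, and a correct answer moves both ratings by k/2; the `expected` value itself is a natural model feature, since it encodes question difficulty relative to the user.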
Riiid Answer Correctness Prediction
Public/Private 13th solution (team takoi + kurupical)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to Hoon Pyo (Tim) Jeon and the Kaggle team for such an interesting competition.<br> And congratulations to all the winning teams!</p> <p>The following is the team takoi + kurupical solution.</p> <p><br><br> <br></p> <h1>Team takoi + kurupical Overview</h1> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1146523%2F3f2943b27feb7d884472c201cd032d5b%2F1.jpg?generation=1610075778291231&amp;alt=media" alt=""></p> <h1>validation</h1> <p><a href="https://www.kaggle.com/tito" target="_blank">@tito</a>'s validation strategy. <a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">https://www.kaggle.com/its7171/cv-strategy</a></p> <p><br><br> <br></p> <h1><strong>kurupical side</strong></h1> <p>I would like to thank three Kagglers.<br> <a href="https://www.kaggle.com/takoi" target="_blank">@takoi</a> for inviting me to form a team. If it weren't for you, I couldn't have reached this rank!<br> <a href="https://www.kaggle.com/limerobot" target="_blank">@limerobot</a> for sharing the DSB 3rd-place solution. I'm a beginner with transformers for time-series data, so I learned a lot from your solution!<br> <a href="https://www.kaggle.com/wangsg" target="_blank">@wangsg</a> for sharing the notebook <a href="https://www.kaggle.com/wangsg/a-self-attentive-model-for-knowledge-tracing" 
target="_blank">https://www.kaggle.com/wangsg/a-self-attentive-model-for-knowledge-tracing</a><br> I used this notebook as a baseline and finally got 0.809 CV for a single transformer.</p> <h2>model</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1146523%2F411c8ee779411bb443284b572980ef21%2F2.jpg?generation=1610075796425059&amp;alt=media" alt=""></p> <h2>hyper parameters</h2> <ul> <li>20 epochs</li> <li>AdamW(lr=1e-3, weight_decay=0.1)</li> <li>linear_with_warmup(lr=1e-3, warmup_epoch=2)</li> </ul> <h2>worked for me</h2> <ul> <li>baseline (SAKT, <a href="https://www.kaggle.com/wangsg/a-self-attentive-model-for-knowledge-tracing" target="_blank">https://www.kaggle.com/wangsg/a-self-attentive-model-for-knowledge-tracing</a>)</li> <li>use all data (this notebook used only the last 100 history entries per user)</li> <li>embedding concat (not add) and Linear layer after cat embedding (@limerobot DSB2019 3rd solution) (+0.03) </li> <li>Add min(timestamp_delta//1000, 300) (+0.02)</li> <li>Add "index that user answered same content_id at last" (+0.005)</li> <li>Transformer Encoder n_layers 2 -&gt; 4 (+0.002)</li> <li>weight_decay 0.01 -&gt; 0.1 (+0.002)</li> <li>LIT structure in EncoderLayer (+0.002)</li> </ul> <h2>didn't work for me</h2> <p>I did over 300 experiments, and only about 20 of them were successful.</p> <ul> <li>SAINT structure (Transformer Encoder/Decoder)</li> <li>Positional Encoding</li> <li>Consider timeseries<ul> <li>timedelta.cumsum() / timedelta.sum()</li> <li>np.log10(timedelta.cumsum()).astype(int) as category feature and embedding<br> etc…</li></ul></li> <li>optimizer AdaBelief, LookAhead(Adam), RAdam</li> <li>more n_layers (4 =&gt; 6), more embedding_dimension (256 =&gt; 512)</li> <li>output only the end of the sequence</li> <li>large binning for elapsed_time/timedelta (500, 1000, etc…)</li> <li>treat elapsed_time and timedelta as continuous </li> </ul> <p><br><br> <br></p> <h1><strong>takoi 
side</strong></h1> <p>I made 1 LightGBM and 8 NN models. The model that combined Transformer and LSTM had the best CV. Here is architecture and brief description.<br> ​</p> <h2>Transformer + LSTM</h2> <p>​<br> <br><br> ​<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1146523%2Fcab7462101930cf01ccbdd867e3b8a68%2F2%20(1).jpeg?generation=1610097396888437&amp;alt=media" alt=""></p> <h3>features</h3> <p>I used 17 features. 15 features were computed per user_id. 2 features were computed per content_id.</p> <h4>main features</h4> <ul> <li>sum of answered correctly</li> <li>average of answered correctly</li> <li>sum of answered correctly for tag_user_id</li> <li>average of answered correctly for tag_user_id</li> <li>lag time</li> <li>lag time of same content_id</li> <li>previous answered correctly for the same content_id</li> <li>distance between the same content_id</li> <li>average of answered correctly for each content_id</li> <li>average of lag time for each content_id<br> ​<br> <br><br> ​</li> </ul> <h2>LightGBM</h2> <p>I used 97 features. The following are the main features.</p> <ul> <li>sum of answered correctly</li> <li>average of answered correctly</li> <li>sum of answered correctly for tag_user_id</li> <li>average of answered correctly for tag_user_id</li> <li>lag time</li> <li>lag time of same part</li> <li>lag time of same content_id</li> <li>previous answered correctly for the same content_id</li> <li>distance between the same content_id</li> <li>Word2Vec features of content_id</li> <li>decayed features (average of answered correctly)</li> <li>number of consecutive times with the same user answer</li> <li>flag if the user answer being answered in succession matches the correct answer</li> <li>average of answered correctly for each content_id</li> <li>average of lag time for each content_id</li> </ul>
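The "embedding concat (not add) and Linear layer after cat embedding" trick from the worked-for-me list can be sketched shape-wise in NumPy; the table sizes and projection matrix here are illustrative only:

```python
import numpy as np

def embed_concat_project(id_lists, tables, proj):
    # Look up each categorical feature in its own embedding table,
    # concatenate along the feature axis (instead of summing),
    # then mix the concatenated vector with a linear projection.
    looked_up = [table[ids] for table, ids in zip(tables, id_lists)]
    concat = np.concatenate(looked_up, axis=-1)
    return concat @ proj
```

Concatenation keeps each feature's embedding in its own slice of the vector, letting the following linear layer learn how to combine features rather than forcing them into a shared additive space.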
Riiid Answer Correctness Prediction
403 place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h1>403 place solution</h1> <p>My main strategy was training my model using BigQuery ML, because I wanted to learn it.<br> I think it was a good approach for me. One other good thing about BigQuery ML is that I could use all the data for training.</p> <h2>Model</h2> <h3>XGBoost</h3> <p>It looks like most people used LGBM, but I used XGBoost.<br> I'm not familiar with GBDT algorithms; I hear that LGBM is better than XGBoost.<br> Unfortunately, BigQuery ML only has XGBoost, so I used XGBoost.</p> <p>BigQuery ML has other ML models, for example Logistic Regression, ARIMA, and Neural Network.<br> I tried them, but XGBoost was best.</p> <h3>Parameter</h3> <p>This is my BigQuery code to train my XGB model.<br> The parameters can be seen here.</p> <pre><code>CREATE MODEL `&lt;My model PATH&gt;`
OPTIONS(MODEL_TYPE = 'BOOSTED_TREE_CLASSIFIER',
        BOOSTER_TYPE = 'GBTREE',
        NUM_PARALLEL_TREE = 1,
        MAX_ITERATIONS = 300,
        TREE_METHOD = 'HIST',
        EARLY_STOP = False,
        MIN_REL_PROGRESS = 0.0001,
        LEARN_RATE = 0.3,
        MAX_TREE_DEPTH = 11,
        COLSAMPLE_BYTREE = 1.0,
        COLSAMPLE_BYLEVEL = 0.4,
        SUBSAMPLE = 0.9,
        MIN_TREE_CHILD_WEIGHT = 2,
        L1_REG = 0,
        L2_REG = 1.0,
        INPUT_LABEL_COLS = ['answered_correctly'],
        DATA_SPLIT_METHOD = 'CUSTOM',
        DATA_SPLIT_COL = 'is_test') AS
SELECT `&lt;My Feature Tables&gt;`
</code></pre> <h2>CV strategy</h2> <h3>leave-one-out</h3> <p>I used <a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">this strategy</a> with leave-one-out. 
It looked good to me.</p> <h2>Ensemble</h2> <p>I used the best XGBoost model and two SAKT models.<br> The SAKT models were made by forking <a href="https://www.kaggle.com/tarique7/v4-fork-of-riiid-sakt-model-full" target="_blank">this notebook</a>.<br> I made three small updates to this notebook:</p> <ol> <li>Randomly cut the train data to a length of 200. (The original code had used the last 200 entries.)</li> <li>Save the model from the best-scoring epoch.</li> <li>Parameter tuning.</li> </ol> <p>The scores are here.</p> <table> <thead> <tr> <th>model</th> <th>CV</th> <th>LB</th> </tr> </thead> <tbody> <tr> <td>XGB</td> <td>0.7784</td> <td>0.781</td> </tr> <tr> <td>SAKT1</td> <td>0.772</td> <td>0.774</td> </tr> <tr> <td>SAKT2</td> <td>0.772</td> <td>0.773</td> </tr> </tbody> </table> <p>I weighted-averaged these models 3:1:1 and got 0.787 on the public leaderboard.</p> <p>I tried an iteration ensemble and a CV-fold ensemble for XGBoost; however, both got the same score as my single model.</p> <p>My final submission notebook is <a href="https://www.kaggle.com/takamichitoda/riiid-infer-v8-xgb-sakt?scriptVersionId=51291197" target="_blank">here</a>.</p> <h2>Feature</h2> <p>My features are:</p> <ul> <li>aggregation by contents<ul> <li>answered_correctly: count/sum/avg/std</li> <li>timestamp: max/min/avg/std</li> <li>user_id: count/unique_count</li> <li>task_container_id: max/min/avg/std</li> <li>user_answer_0~3: sum/avg/std</li> <li>question_elapsed_time: max/min/avg/std</li> <li>question_had_explanation: sum/avg/std</li></ul></li> <li>each question tag's cumulative sum<ul> <li>number of tries</li> <li>number of correct answers</li> <li>correct answer rate</li></ul></li> <li>each lecture tag's cumulative sum<ul> <li>number of tries</li></ul></li> <li>time-window features with a window size of 200 <ul> <li>answered_correctly: sum/avg/std</li> <li>question_elapsed_time: avg/max</li> <li>question_had_explanation: sum/avg/std</li> <li>timestamp lag: avg/max</li> <li>number of content_id=0</li> <li>number of 
content_id=1</li></ul></li> <li>timestamp lag</li> </ul> <h2>Reference</h2> <h3>Notebook</h3> <ul> <li>Create SQL: <a href="https://www.kaggle.com/takamichitoda/riiid-create-sql" target="_blank">https://www.kaggle.com/takamichitoda/riiid-create-sql</a></li> <li>Create Tags Onehot Encode Table: <a href="https://www.kaggle.com/takamichitoda/riiid-create-feature-table-csv" target="_blank">https://www.kaggle.com/takamichitoda/riiid-create-feature-table-csv</a></li> <li>Create CV index: <a href="https://www.kaggle.com/takamichitoda/riiid-make-cv-index" target="_blank">https://www.kaggle.com/takamichitoda/riiid-make-cv-index</a></li> </ul> <p>EDIT:</p> <ul> <li><a href="https://www.ai-shift.co.jp/techblog/1505" target="_blank">my blog</a>(Japanese)</li> <li><a href="https://github.com/trtd56/Riiid/tree/master/work/sql" target="_blank">SQL to generate Window Features.</a></li> </ul>
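<p>The 3:1:1 weighted average used in the ensemble above can be sketched in a few lines of Python (a minimal illustration; the prediction arrays below are made up, not the author's actual outputs):</p>

```python
import numpy as np

def weighted_average(preds, weights):
    """Blend model predictions with the given weights, normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, preds))

# Hypothetical prediction arrays for the 3:1:1 XGB/SAKT1/SAKT2 blend
xgb_pred = np.array([0.8, 0.2, 0.6])
sakt1_pred = np.array([0.7, 0.3, 0.5])
sakt2_pred = np.array([0.9, 0.1, 0.7])
blend = weighted_average([xgb_pred, sakt1_pred, sakt2_pred], [3, 1, 1])
```

Because the weights are normalized inside the helper, 3:1:1 and 0.6:0.2:0.2 give the same blend.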
Riiid Answer Correctness Prediction
[1st place solution] Last Query Transformer RNN
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi Kagglers, here is the paper link.<br> Paper Link : <a href="https://arxiv.org/abs/2102.05038" target="_blank">https://arxiv.org/abs/2102.05038</a><br> Please refer to the paper for details.</p> <h1>Summary</h1> <p>I wanted to use a Transformer, but I could not feed a long history into the model because the QK matrix multiplication in a Transformer has O(L^2) time complexity when the input length is L. </p> <h1> </h1> <p>My approach is to use only the last input as the query, because I only predict the last question's answer correctness for each history input. This means I only compare the last question (query) against the other questions (keys), not the other questions against each other. This reduces the QK matrix multiplication in the Transformer to O(L) time complexity (because len(Q)=1, len(K)=L), which allows me to input a much longer history. </p> <h1> </h1> <p>In the final submission, I ensembled 5 models with history inputs of length 1728.<br> I didn't do much feature engineering; since I could use an extremely long history, I wanted the model to learn from it by itself. The 5 input features I used are question id, question part, answer correctness, current question elapsed time, and timestamp difference.</p> <h1>Acknowledgement</h1> <p>Thank you <a href="https://www.kaggle.com/limerobot" target="_blank">@limerobot</a> for sharing an amazing transformer model for tabular data problems in the 3rd place solution of the 2019 Data Science Bowl. It was a good motivation to work on the transformer encoder. 
<a href="https://www.kaggle.com/c/data-science-bowl-2019/discussion/127891" target="_blank">https://www.kaggle.com/c/data-science-bowl-2019/discussion/127891</a> </p> <h1> </h1> <p>Thanks to the competition sponsors and organizers for hosting this fun competition.</p> <h1> </h1> <p>Thank you for reading.</p>
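<p>The core trick, using only the last position as the query, can be sketched as follows (a stripped-down, single-head NumPy sketch with no learned projections; an illustration of the idea, not the author's actual model code):</p>

```python
import numpy as np

def last_query_attention(x):
    """Scaled dot-product attention where only the LAST time step is the query.

    x: (seq_len, d_model). Because len(Q) = 1 while len(K) = L, the QK product
    costs O(L) instead of O(L^2), so much longer histories fit in memory.
    """
    d_model = x.shape[-1]
    q = x[-1:, :]                        # (1, d_model): the last input only
    scores = q @ x.T / np.sqrt(d_model)  # (1, seq_len) -- O(L) work
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()    # softmax over the history
    return weights @ x                   # (1, d_model) context vector

out = last_query_attention(np.random.randn(1728, 64))
```

The output is a single context vector for the last question, which is exactly what is needed when only the last answer's correctness is predicted.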
Riiid Answer Correctness Prediction
17th place solution - 4 features model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>My solution is based on a stack of 3 models:</p> <p>Edit: <a href="https://github.com/NikolaBacic/riiid" target="_blank">Code</a></p> <p>Transformer(Encoder) - Validation: 0.8100<br> LSTM - Validation: 0.8062<br> GRU - Validation: 0.8060</p> <p>LightGBM for the stack: Validation: 0.8119 LB: 0.814x</p> <p>I validated on new users (2.5M rows). <a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">This one</a> is great, but it was computationally expensive for me. </p> <p>All three models used 4 features (embeddings):</p> <ul> <li>question id embedding</li> <li>response of the previous question (1-correct 0-incorrect)</li> <li><strong>ln(lag+1) * minute_embedding : taking ln(x+1) of the lag initially improved my score by ~0.01. I'd be very interested to hear whether it'd improve your models too</strong></li> <li>ln(prior_elapsed_time+1) * minute_embedding</li> </ul> <p>The input is the sum of these 4 embeddings.</p> <p><strong>Parameters</strong></p> <p>Shared parameters (all three models):<br> max_quest = 300 (window size)<br> slide = 150<br> Adam optimizer<br> cosine lr scheduler<br> BCE loss<br> <strong>xavier_uniform_ weight initialization (0.004 improvement over PyTorch's default one)</strong></p> <p>I used Optuna for hyperparameter search on 20% of the data. 
It was my first time using it, and it's a great tool!</p> <p>Transformer hyperparameters:<br> nhead = 8<br> head_dim = 60<br> dim_feedforward = 2048<br> num_encoder_layers = 8<br> epochs = 6<br> batch_size = 64<br> lr = 0.00019809259513409007<br> warmup_steps = 150*5</p> <p>LSTM hyperparameters:<br> input_size_lstm = 384<br> hidden_size_lstm = 768<br> num_layers_lstm = 4<br> epochs = 4<br> batch_size = 64<br> lr = 0.0007019926812886481<br> warmup_steps = 100</p> <p>GRU hyperparameters:<br> input_size_gru = 320<br> hidden_size_gru = 512<br> num_layers_gru = 3<br> epochs = 4<br> batch_size = 64<br> lr = 0.0008419253431185227<br> warmup_steps = 80</p> <p>What didn't work:<br> -lectures<br> -questions metadata<br> -position encoding: both learnable and hard-coded<br> -predicting user_answer instead of correctness<br> -gradient clipping<br> -label smoothing<br> -dropout<br> -…</p> <p>gg</p>
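<p>The "input is the sum of 4 embeddings" construction above can be sketched like this (a NumPy sketch with randomly initialized stand-ins for the learned embedding tables; all sizes and names are illustrative, not the author's code):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_questions = 384, 13523

# Randomly initialized stand-ins for the learned embedding tables
question_emb = rng.normal(size=(n_questions, d_model))
response_emb = rng.normal(size=(2, d_model))   # previous answer: 0/1
lag_minute_emb = rng.normal(size=d_model)
elapsed_minute_emb = rng.normal(size=d_model)

def build_input(q_ids, prev_resp, lag_minutes, elapsed_minutes):
    """Model input = sum of the 4 embeddings; the two time features are
    log-scaled with ln(x+1) and then multiplied into a minute embedding."""
    return (question_emb[q_ids]
            + response_emb[prev_resp]
            + np.log1p(lag_minutes)[:, None] * lag_minute_emb
            + np.log1p(elapsed_minutes)[:, None] * elapsed_minute_emb)

x = build_input(np.array([0, 42]), np.array([1, 0]),
                np.array([3.0, 120.0]), np.array([0.5, 1.2]))
```

The ln(x+1) compresses the heavy right tail of lag times, so a lag of 120 minutes is not 40 times "larger" to the model than a lag of 3 minutes.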
Riiid Answer Correctness Prediction
22nd Place Solution: Just Encoder Modules
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks for all the discussion and support from fellow Kagglers. I've learned a lot in the competition and tried a lot of things. Here is a simple list of the things that I believe worked well. I'll try to cover the whole model but may make updates to clarify and add explanations later. </p> <p><strong><a href="https://www.kaggle.com/abdurrafae/22nd-solution-just-encoder-blocks" target="_blank">Link to notebook </a></strong></p> <p><strong>Architecture</strong><br> My architecture was just 3 Encoder modules stacked together, using 512 as d_model and 4 Encoder layers in each block.</p> <p>My Questions, Interactions and Response sequences were all padded on the left by a unique vector made using historical stats from the user. This ensured that all 3 sequences were aligned on the sequence number.</p> <ol> <li><p>Questions Only Block (Q-Block):<br> This one was just self attention over the questions.</p></li> <li><p>Interactions Only Block (I-Block):<br> This one was just self attention over the interactions.</p></li> <li><p>Questions, Responses, Interaction Block (QRI Block):<br> I used the output from the Q-Block as queries, the I-Block as keys and Responses/(QRI Block) as values. The residual connection after the attention module was made using the value vector (instead of the default query vector). 
Self Attention Mask was used in first 3 layers and for the last layer I used a custom mask that only attended to prior interactions and didn't have the residual connection.</p></li> </ol> <p>Concatenated the output of Q-Block and QRI-Block, feed it into 3 Linear layers.</p> <p><strong>Data Usage</strong><br> I split the longer sequences into windows of 256 length and 128 overlap. (0-256, 128-384, 256-512).<br> I saved them in tf.records and during inference I took a uniform random 128 window from each 256 length sequences. This ensures any event would be placed at 0-127 location in my training sequence with equal probability (excluding the last 128 events in the last window of a user). For users having smaller sequence length I padded with 0 to make the length equal 128 and used those 128 each time.</p> <p><strong>Compute Resources</strong><br> I initially only used Kaggle GPUs to train the model. Started setting up a TPU pipeline in the last 3 weeks of the competition and in the end was able to use the TPUs as well. I think I've exhausted my complete TPU quota for last 3 weeks.</p> <p>I'd like to thank <a href="https://www.kaggle.com/yihdarshieh" target="_blank">@yihdarshieh</a> (<strong>TPU Guru</strong>) for this <a href="https://www.kaggle.com/yihdarshieh/tpu-track-knowledge-states-of-1m-students" target="_blank">TPU notebook</a>. It helped a lot in setting up the TPU pipeline.</p> <p><strong>Sequential Encodings</strong><br> I used 2 encodings that captured the sequence of events. I subtracted the first value in each sequence from the rest to ensure that all encodings started with a 0.</p> <p><strong>Temporal Encoding</strong><br> For this I converted the timestamp into minutes and used the same scheme as that of positional encoding with power (60 x 24 x 365 = 1 year). I believe this enabled the model to know how far apart in time were 2 questions/interactions. 
I believe this to be a better implementation of the lag time variable used in SAINT+ as it captures the difference in time between all of the events simultaneously rather than just between 2 adjacent events.</p> <p><strong>Positional Encoding</strong><br> For this I used the task_container_id as the position and a power of 10,000.</p> <p><strong>Proxy for Knowledge</strong><br> Instead of just using 0/1 from responses I added an new feature which is a heuristic for knowledge in case the response was incorrect. If a user selects option 2 when 1 is the correct one. I would calculate (total number of times 2 was selected for that questions)/(total number of times the question has been answered incorrectly). This ratio was maintained and updated during inference as well. I checked the correlation of mean of proxy_knowledge for incorrect answers and overall user accuracy, it was around 0.4.</p> <p><strong>Question difficulty</strong><br> Simple feature that is calculated as (total number of times the question has been answered correctly)/(total number of times the question has been answered). This ratio was maintained and updated during inference as well.</p> <p><strong>Custom Masks</strong><br> I used self attention mask that didn't attend on events of the same bundle.<br> For the last layer of QRI block I removed the mask entries along the diagonal to ensure it only attends to prior values.</p> <p><strong>Starting Vector</strong><br> For the starting vector I used counts of the time each tag was seen/answered correctly in questions and lectures. Just used a dense layer to encode this into the first vector of the sequence. </p> <p><strong>Other Details</strong></p> <ul> <li>Used Noam LR provided on the Transformers page on TF documentation. </li> <li>Batch size 1024 (Trained on TPUs - 1 Epoch took 4-6 mins)</li> <li>For Validation I just separated around 3.4% of users initially and used their sequences.</li> <li>Model converged around 20-30 Epochs. 
(4 Hours training time at max)</li> <li>Sequence length of 128 was used.</li> </ul> <p><strong>Question Embeddings</strong><br> I added up embeddings for content id, part id, (a weighted average of) tags ids and type_of (from lectures), then concatenated it with both Sequential embeddings and then into a dense layer with d_model dimensions.</p> <p><strong>Response Embeddings</strong><br> I concatenated answered correctly, proxy knowledge, question difficulty, time elapsed and question had explanation and feed into a dense layer with d_model dimensions.</p> <p><strong>Interaction Embeddings</strong><br> I just took the first d_model//2 units from both Questions/Response embeddings and concatenated them for this.</p>
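<p>The Temporal Encoding described above (the usual sinusoidal positional-encoding scheme, but applied to the timestamp in minutes with the base period set to one year) can be sketched like this (an illustrative NumPy sketch, not the author's code; d_model is assumed to be even):</p>

```python
import numpy as np

def temporal_encoding(timestamps_min, d_model=512, period=60 * 24 * 365):
    """Sinusoidal encoding of absolute time in minutes, with the base period
    set to one year (60*24*365 minutes) instead of the usual 10,000. The first
    value is subtracted so that every sequence starts at time 0."""
    t = np.asarray(timestamps_min, dtype=float)
    t = t - t[0]
    i = np.arange(d_model // 2)
    freqs = 1.0 / period ** (2 * i / d_model)
    angles = np.outer(t, freqs)              # (seq_len, d_model // 2)
    enc = np.zeros((len(t), d_model))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

enc = temporal_encoding([0.0, 5.0, 60.0, 1440.0])
```

Because the encoding is a function of absolute time, the dot product between two rows reflects how far apart in time any two events are, not just adjacent ones.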
Riiid Answer Correctness Prediction
[0.801 Private, 60th place] Single LGB using 1% of data
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks to the organizers of this very interesting competition and congrats to the winners. My LB score (0.800 public / 0.801 private) was achieved using a single LGB model trained on 1m rows. It wasn't my intention to only use 1m rows, but I had some bugs that I fixed too late to retrain on more data in time. Nevertheless, I'm sharing my solution in case it ends up being useful to anyone.</p> <h1>Features</h1> <p>The final model used 220 features that fit into the following categories in rough descending order of importance:<br> note: AC = answered_correctly</p> <ul> <li>Question-Question matrix (QQM) features: out of all users who answered A on question X, how many of them get question Y right? I built a 13523 x 13523 x 4 matrix of all the probabilities, then calculated user stats on their last 3, 10, 30, 50, 100, 200 questions. I also created a separate set of features weighting by Mahalanobis distance between the past and current question embeddings (see section on PEBG embeddings). </li> <li>question's overall accuracy/difficulty across all users, number of responses</li> <li>user's accuracy, number of responses across their entire history &amp; current session (where a new session is created if the user is idle for more than 5 min)</li> <li>question metadata, i.e. tags, part, index in bundle</li> <li>elapsed time and lag time based features. Ratios were good here, e.g. ratio of prior elapsed time to lag time, user's average elapsed time to average lag time. 
I also made a simple model to predict elapsed time of the current question</li> <li>PEBG embedding-based features based on this paper: <a href="https://arxiv.org/pdf/2012.05031.pdf" target="_blank">https://arxiv.org/pdf/2012.05031.pdf</a>. I took the current question embedding, the user's average embedding overall, and the average embedding over the user's correct and incorrect responses. Mahalanobis distance between the current question's embedding to the user's correct and incorrect embeddings and the ratio between correct and incorrect distance were all important. </li> <li>features from Ednet dataset (see below)</li> <li>stats on user's past attempts on the same question, e.g. most recent AC, average AC, number of past attempts, time since last attempt, whether user saw explanation on last attempt</li> <li>stats on user's past attempts on questions with the same tags</li> <li>user's accuracy normalized by question difficulty across different windows, e.g. average of user's residuals for their last 3, 10, 30, 50, 100, 200 questions; residual = AC - question accuracy</li> <li>user's accuracy trend over their last 4, 12, 30, 50, 100, 200 questions</li> <li>user's performance on diagnostic questions: the 30 question sequence starting with 7900, 7876, 175, … is usually asked to each user at the beginning. I made 30 features for the user's AC on each question</li> <li>lecture features: time since user watched a lecture with the same tag, time spent watching lecture compared to lecture duration (from Ednet data)</li> </ul> <p>I applied additive smoothing to target encoding features to guard against small samples (this noticeably helped the QQM features). </p> <h1>Ednet dataset</h1> <p>As you probably know, the competition data was taken from Riiid's Ednet database, from which a large public dataset has been open sourced at <a href="https://github.com/riiid/ednet" target="_blank">https://github.com/riiid/ednet</a>. 
At first glance, the overlap between the competition data and the open source data is not obvious - for example, the open source data has 784,309 users whereas the competition data only has 393,656 and the contents don't look the same either. But the Ednet data comes with question and lecture metadata files, and by comparing them to the metadata files we have in the competition, it's quite easy to map the questions, tags, and users between the datasets. The competition data seems to be a bit more recent, and has mostly filtered out users who only answered a few questions, but 350k+ out of ~393k users appear in both datasets. That's important because most of the private test data is from users appearing in the training set, so you can use the Ednet data to calculate some features on those users that you wouldn't otherwise have. Examples are:</p> <ul> <li>Ednet data has real timestamps, not shifted to start from zero. I calculated the hour, day, and week in KST</li> <li>how often the user switches answers</li> <li>Ground truth video lengths for lectures, which can be used in lecture features</li> <li>whether user is primarily on mobile or web</li> </ul> <p>Also, I extracted ~6m rows from the Ednet data that was filtered out of the competition data. These were mostly users with few interactions so I didn't add them to the training set, but I did use them to pre-populate the QQM which helped a bit. </p> <h1>Pipeline &amp; Remarks</h1> <p>I used a loop-based accumulation framework which updated user states every iteration. To strike a good balance between speed and memory usage, I used dictionaries to look up users and numpy arrays to store the accumulated features. This worked well in general and I could use the same code for inference, which took ~3 hours in the end with 220 features and used about 9GB RAM.</p> <p>In the end, I wasn't able to train on the entire dataset, or even 10m rows due to bugs that weren't fixed until the morning of the deadline. 
One particularly frustrating one was from the very first piece of code I ran in the competition: the code to read in the train.csv file. I had copied a line to specify the data types from a public notebook assuming it would be the same one from the Competition Starter, but the notebook creator had changed prior_question_elapsed_time to float16 instead of float32. This resulted in all values in the column &gt;=65536 being converted to np.inf. Lesson learned: never blindly copy from a public notebook! </p> <p>Despite this, I really enjoyed the competition as by taking the feature engineering approach, it felt like I was uncovering new things about the data all the time. </p>
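<p>The additive smoothing applied to the target-encoding features above can be sketched as follows (the pseudo-count alpha and the arrays are illustrative, not the author's actual values):</p>

```python
import numpy as np

def smoothed_rate(success_sum, n, prior, alpha=20.0):
    """Additive smoothing for a target-encoded rate: with few observations the
    estimate is pulled toward the global prior; with many observations it
    approaches the raw rate. alpha acts as a pseudo-count."""
    return (success_sum + alpha * prior) / (n + alpha)

prior = 0.65  # illustrative global answered-correctly rate
# One user/question pair seen once (raw rate 1.0), another seen 1000 times
rates = smoothed_rate(np.array([1.0, 650.0]), np.array([1.0, 1000.0]), prior)
```

This is why it helped the QQM features in particular: many question-answer-question cells have only a handful of observations, and the raw rate there is mostly noise.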
Mechanisms of Action (MoA) Prediction
4th Place Solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, we (<a href="https://www.kaggle.com/kannahashimoto" target="_blank">@Kanna Hashimoto</a>, <a href="https://www.kaggle.com/eemonn" target="_blank">@e-mon</a>, <a href="https://www.kaggle.com/kento1993" target="_blank">@Konbuiyon</a>, <a href="https://www.kaggle.com/toseihatori" target="_blank">@hatry</a>, <a href="https://www.kaggle.com/hikomimo" target="_blank">@hikomimo</a>) would like to thank Kaggle and the hosts for hosting the competition.<br> I'm still excited about this result!</p> <h2>Solution Overview</h2> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3948967%2Fe3bcf85de2251538780d7a3631157f4a%2F2020-12-01%2023.41.17.png?generation=1606833703847939&amp;alt=media" alt=""></p> <p>Our approach is very simple. 
Trust CV &amp; Ensemble.</p> <h2>All of the source code is here</h2> <ul> <li><p><a href="https://github.com/e-mon/lish-moa" target="_blank">github</a></p></li> <li><p><a href="https://www.kaggle.com/kento1993/nn-svm-tabnet-xgb-with-pca-cnn-stacking-without-pp" target="_blank">kaggle notebook</a></p></li> </ul> <h2>Validation Scheme</h2> <p><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s CV method (Drug and MultiLabel Stratification) w/o cp_type = 'ctl_vehicle'.<br> This method made CV correlate almost perfectly with LB.</p> <p>(Thx <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> !!)</p> <p><a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">Drug and MultiLabel Stratification Code</a></p> <h2>Feature Engineering</h2> <ul> <li><p>statistical features</p> <ul> <li>sum, mean, std, kurt, skew, median</li></ul></li> <li><p>PCA</p> <ul> <li>applied only to CELL features</li></ul></li> <li><p>combination features</p> <ul> <li>difference between two features</li> <li>Because of combinatorial explosion (872C2 = 379,756), we used only features with variance above a threshold.</li></ul> <pre><code>for c in itertools.combinations(features_g + features_c, 2):
    col_name = f"{c[0]}_{c[1]}_diff"
    d = train_features_df[c[0]] - train_features_df[c[1]]
    diff_val = np.var(d)
    if diff_val &gt; 15:
        train_features_df[col_name] = d
        var_list.append(diff_val)
</code></pre></li> <li><p>quantile transformer</p> <ul> <li>except for PCA features</li></ul></li> </ul> <h2>Modeling</h2> <h4>1st stage models</h4> <ul> <li>stage1-NN (7folds 5seeds)<ul> <li>stacking of 2 models <ul> <li>stage0-NN1 (7folds 5seeds)<ul> <li>labels : scored + nonscored targets (except for all 0 columns)</li></ul></li> <li>stage0-NN2 (7folds 5seeds)<ul> <li>labels : scored targets</li></ul></li></ul></li></ul></li> <li>stage1-tabnet (7folds 5seeds)<ul> <li>labels : scored targets</li></ul></li> <li>stage1-svm (4folds 1seeds)<ul> <li>labels : scored 
targets</li> <li>create 206 models</li></ul></li> <li>stage1-xgb (5folds 2seeds)<ul> <li>labels : scored targets</li> <li>create 206 models</li></ul></li> </ul> <table> <thead> <tr> <th></th> <th>model</th> <th>cv score</th> <th>blend weight</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>stage0-NN2</td> <td>0.016779</td> <td>-</td> </tr> <tr> <td>1</td> <td>stage1-Tabnet</td> <td>0.016846</td> <td>0.228</td> </tr> <tr> <td>2</td> <td>stage1-SVM</td> <td>0.017554</td> <td>0.160</td> </tr> <tr> <td>3</td> <td>stage1-NN</td> <td>0.016736</td> <td>0.598</td> </tr> <tr> <td>4</td> <td>stage1-xgb</td> <td>0.017366</td> <td>0.0123</td> </tr> </tbody> </table> <ul> <li>correlation matrix for 1st stage models<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3948967%2F8922e5bf92fa0da863a640a579cc52d3%2Fimage%20(12).png?generation=1606833799313456&amp;alt=media" alt=""></li> </ul> <h4>2nd stage models</h4> <ul> <li><p>2D-CNN stacking (5folds 5seeds)</p> <ul> <li>model-wise CNN for the 4 stage1 models </li> <li><a href="https://speakerdeck.com/phalanx/imet-7th-place-solution-and-my-approach-to-image-data-competition?slide=24" target="_blank">iMet 7th place solution</a></li></ul></li> <li><p>weight optimization</p> <ul> <li>Scipy minimize function ('Nelder-Mead')</li> <li><a href="https://www.kaggle.com/c/trends-assessment-prediction/discussion/162765" target="_blank">TReNDS 2nd place solution</a></li></ul></li> </ul> <table> <thead> <tr> <th></th> <th>model</th> <th>cv score</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>2D-CNN stacking</td> <td>0.01670</td> </tr> <tr> <td>1</td> <td>weight optimization</td> <td><strong>0.016518</strong></td> </tr> </tbody> </table> <h4>postprocessing</h4> <ul> <li>Drug ID Prediction<ul> <li>Multi-class prediction with LightGBM for the 8 most frequently appearing drug ids in the train dataset.</li> <li>replacing the label of a record whose prediction exceeds the threshold with the hard target of its 
drug_id</li> <li>target labels<ul> <li>9 classes: ['87d714366', '9f80f3f77', '8b87a7a83', '5628cb3ee', 'd08af5d4b', '292ab2c28', 'd50f18348', 'd1b47f29d', other_drug_ids]</li></ul></li> <li>metric<ul> <li>accuracy: 0.984965 with 5folds 15seed averaging</li></ul></li></ul></li> </ul> <h4>final submission</h4> <p>Our final two submissions are averages of two predictions (2D-CNN stacking + weight optimization averaging with 1st stage models) w/ and w/o postprocessing.</p> <ul> <li>avg weighted_blend + CNN (CV): 0.01647 (PublicLB: 0.01816/PrivateLB: <strong>0.01600</strong>)</li> <li>w/ postprocessing (CV): 0.01644 (PublicLB: 0.01811/PrivateLB: <strong>0.01618</strong>)<ul> <li><a href="https://www.kaggle.com/eemonn/nn-svm-tabnet-xgb-with-pca-cnn-stacking?scriptVersionId=48144329" target="_blank">notebook w/ postprocessing</a></li></ul></li> </ul> <p>Our training &amp; inference processes were done almost entirely on Kaggle kernels.</p>
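<p>The blend-weight optimization with Scipy's Nelder-Mead mentioned above can be sketched as follows (a minimal illustration with made-up out-of-fold predictions; the normalization trick inside the objective is one common way to keep weights valid and is an assumption, not necessarily the team's exact setup):</p>

```python
import numpy as np
from scipy.optimize import minimize

def log_loss(y_true, y_pred, eps=1e-15):
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def blend_loss(weights, preds, y_true):
    # Force the weights to be positive and sum to 1 inside the objective
    w = np.abs(weights) / (np.abs(weights).sum() + 1e-12)
    blend = sum(wi * p for wi, p in zip(w, preds))
    return log_loss(y_true, blend)

# Hypothetical out-of-fold predictions from two models
y = np.array([1, 0, 1, 1, 0], dtype=float)
preds = [np.array([0.9, 0.2, 0.7, 0.6, 0.3]),
         np.array([0.8, 0.1, 0.9, 0.4, 0.2])]
x0 = np.ones(len(preds)) / len(preds)
res = minimize(blend_loss, x0=x0, args=(preds, y), method='Nelder-Mead')
best_w = np.abs(res.x) / np.abs(res.x).sum()
```

Nelder-Mead is derivative-free, which is convenient here since the objective is evaluated through a blend rather than an analytic gradient.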
RSNA STR Pulmonary Embolism Detection
23rd Place - Fast GPU Experimentation Pipeline!
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA STR Pulmonary Embolism Detection <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thank you Radiological Society of North America (RSNA®), Society of Thoracic Radiology (STR), and Kaggle for hosting this fun competition. Thank you Nvidia for providing compute resources.</p> <p>This has been one of my favorite competitions. I enjoyed building an elaborate multi-stage pipeline of stacked models! Working with 3D images was fun and provided an additional challenge compared with 2D images. I particularly enjoyed tackling the challenge of building a fast experimentation pipeline when the training data is <code>1_000_000_000_000 bytes</code> of data! One trillion bytes! This is the largest dataset I have ever worked with.</p> <h1>RSNA STR Pulmonary Embolism Detection</h1> <p>In the figure below, each row is an exam (i.e. study, i.e. single patient). The row of images are CT scan "slices" from the 3D image of a patient's chest. In this competition, we need to predict 9 targets for each patient (each row) (like is pe on left side? on right side? etc) and we need to classify every image (is pe present?). If below were all the data, then we would need to predict <code>3 rows * 9 targets = 27 exam targets</code> and <code>15 images * 1 target = 15 image targets</code>. In total we would need to predict 42 targets.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F7d5ce77d471fbf8ef7e746731eab734c%2Fstudy.png?generation=1603731145650332&amp;alt=media" alt=""></p> <h1>Understanding the Metric</h1> <p>On the description page, the metric seems very confusing. 
However it is just a weighted average of 10 log losses (the 9 types of exam predictions and the 1 type of image prediction). (I explain the metric <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/193598" target="_blank">here</a>). After computing the 10 weights, we find that 50% of our LB score is from the 9 exam predictions and 50% of our LB score is from the image predictions. Furthermore, the log loss for the image predictions is itself a weighted log loss where an image that is part of an exam without pulmonary embolism has weight zero (a very important observation!)</p> <ul> <li>Improving image level predictions is just as important as improving exam predictions.</li> <li>There is no penalty for false positives, so we can train our image prediction models with only the 30% of the data from positive exams!</li> </ul> <h1>Stage 1 - Model One - Image Level Predictions</h1> <h1>(CNN EfficientNet B4)</h1> <pre><code>inp = tf.keras.Input(shape=(320, 320, 1))  # INPUT IS UINT8
x = tf.keras.layers.Concatenate()([inp/255., inp/255., inp/255.])
base_model = efn.EfficientNetB4(weights='imagenet', include_top=False)
x = base_model(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs=inp, outputs=x)
opt = tf.keras.optimizers.Adam(lr=0.000005)
model.compile(loss='binary_crossentropy', optimizer = opt)
model.fit(X, y,
          sample_weight = X.groupby('StudyInstanceUID')
                           .pe_present_on_image.transform('mean') * 5.6222)
</code></pre> <p>I built two models. Model one predicts image level predictions (i.e. <code>pe_present_on_image</code>) and model two predicts patient level predictions (i.e. exams, i.e. studies, like <code>leftsided_pe</code> etc.)</p> <ul> <li>EfficientNet B4 pretrained on <code>imagenet</code></li> <li><strong>Only Mediastinal window (i.e. level=40, width=400)</strong> i.e. 
1 channel <code>uint8</code></li> <li>Random crops of 320x320 from 512x512</li> <li>Rotation (+-8 deg) Scale (+-0.16) augmentation</li> <li>Coarse Dropout (16 holes sized 50x50)</li> <li>Mixup (swap slices of similar Z position with other exams)</li> <li>Adam optimizer with constant <code>LR = 5e-6</code></li> <li>Training sample weight equal to pe proportion in exam</li> <li><strong>Only train on 30% of train data with pe present in exam</strong></li> <li>40 minute epochs using 4x V100 GPU</li> <li>Train 15 epochs with batch size 128</li> </ul> <p>Below illustrates my augmentations. For display purposes, we illustrate Mixup with a large yellow, green, or blue square so you can see it better. (During runtime, it was an actual second image).</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F5515206e0fd5e5a7379f6272e0e56dc3%2Fmixup.png?generation=1603733206613194&amp;alt=media" alt=""> </p> <h1>Stage 1 - Model Two - Patient Level Predictions</h1> <h1>(CNN EfficientNet B4)</h1> <p>Each patient has an average of 200 images. Among those 200, if PE is present, it is usually on the middle slices. Therefore I only train my patient level model with slices <code>0.35 &lt; z &lt; 0.65</code>. Then to predict the 9 targets for each patient, I only infer <code>0.35 &lt; z &lt; 0.65</code> and then take the 9 average predictions.</p> <ul> <li>Most details same as model one</li> <li><strong>Only Mediastinal window, (ie. level=40, width=400)</strong> i.e. 
1 channel <code>uint8</code></li> <li>Output layer of 9 sigmoid units</li> <li><strong>Only train on 30% of train data with Z Position between <code>0.35 &lt; z &lt; 0.65</code></strong></li> <li>Loss <code>weighted_log_loss</code></li> </ul> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F5ed4f1ae6291e5edc4f422b479214f80%2Fslices.png?generation=1603734372678374&amp;alt=media" alt=""></p> <h1>Experimentation Pipeline</h1> <p>How do we discover the details above? All the settings above were discovered by performing dozens of experiments on <strong>smaller images and smaller backbones</strong>. For example, use 128x128 (with 80x80 crops) EfficientNetB0 and/or 256x256 (with 160x160 crops) EfficientNetB2. Using these smaller models, we can test out ideas on a single GPU in minutes! Also note that we only use 1 channel images of <code>uint8</code>. This is 33% less data than converting images to 3 channels of 3 different CT window schemes.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F60349090637a446427fc8f7c8775a70f%2Fexp2.png?generation=1603744581352698&amp;alt=media" alt=""></p> <p>Remember we are only training with 30% original data. Then using crops makes it 12% of data. Then using 256x256 reduces this to 3% of data. And using 128x128 reduces this to 0.75% of data! Even Kaggle notebooks P100 GPU can train quickly on 80x80 crops from 128x128 and 30% train data.</p> <p>Once you find a configuration that works well, then run 512x512 with EfficientNetB4 overnight. 
Using only 2D predictions for image and 2D predictions for patient, <strong>the above two models obtain LB 0.215 and CV 0.235.</strong> </p> <p>We will now improve our CV and LB by building stage 2 models that use stage 1 predictions as input</p> <h1>Stage 2 - Model One - Image Level Predictions</h1> <h1>(Random Forest)</h1> <pre><code>NEIGHBORS = [1,2,3,4,5,6,7,8,9,10,15,20,25,30,35,40,45,50,60,70,80,90,100,150,200,250]
FEATURES = ['oof']
for k in NEIGHBORS:
    tmp = train.sort_values('PosZ').groupby('StudyInstanceUID')[['oof']]
    train['b%i'%k] = tmp.shift(k)
    train['a%i'%k] = tmp.shift(-k)
    FEATURES += ['a%i'%k, 'b%i'%k]
train.fillna(-1, inplace=True)

model = RandomForestClassifier(max_depth=9, n_estimators=100, n_jobs=20,
                               min_samples_leaf=50)
model.fit(train.loc[idxT, FEATURES], train.loc[idxT, 'pe_present_on_image'],
          sample_weight=5.6222 * train.loc[idxT, 'weight'])
</code></pre> <p>All images are slices from 3D images. So adjacent images (within the same exam) contain helpful information. Each plot below displays all 200 or so image level predictions from 1 study. The x axis is z position and the y axis is the prediction value (0 to 1). The blue line is the ground truth, the orange line is the prediction from the model described above. The black line is the random forest Stage 2 model.</p> <p>For each image level prediction, a random forest model takes as input the prediction and neighbor predictions [1,2,3,4,5,6,7,8,9,10,15,20,25,30,35,40,45,50,60,70,80,90,100,150,200,250] on either side. Then the random forest model predicts a new image level prediction, displayed in black below.
Notice that when the original prediction is close to 1 the random forest pushes it up toward 1, and when the original prediction is close to 0 the random forest pushes it down toward 0.</p> <p><strong>This stage 2 image level model increased LB to 0.204 from 0.215 and CV to 0.224 from 0.235 (gain = 0.011)</strong></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fcccdbe82e765c3e7d762031b358dd290%2Fi-smooth.png?generation=1603736540573190&amp;alt=media" alt=""></p> <h1>Stage 2 - Model Two - Patient Level Predictions</h1> <h1>(GRU + 1D-CNN)</h1> <pre><code>import tensorflow as tf
L = tf.keras.layers  # layer alias used below

inp = L.Input(shape=(64, 1792))
x = L.Bidirectional(L.GRU(48, return_sequences=True,
                          kernel_initializer='orthogonal'))(inp)
x = L.Bidirectional(L.GRU(48, return_sequences=False,
                          kernel_initializer='orthogonal'))(x)
x = L.Dense(9, activation='sigmoid')(x)
model = tf.keras.Model(inputs=inp, outputs=x)
opt = tf.keras.optimizers.Adam(lr=0.00005)
model.compile(loss=weighted_log_loss, optimizer=opt)  # competition's weighted log loss
</code></pre> <p>Similarly we can use adjacent slice information to improve our patient level predictions. Most patients have between 160 and 310 images per study.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fa4a52476743ad1e1ada257c586c607fb%2Fhist2.png?generation=1603741964817694&amp;alt=media" alt=""></p> <p>Below are plots of the top view (Z position is vertical axis) of the 3D image (not the ordinary slice view of the 3D image). We notice that most of the crucial information is between 25% and 75% in top view. Therefore we extracted 64 images equally spaced between 25% and 75% Z position. Then we took those 64 images and extracted the GAP embeddings from both our stage 1 model one and stage 1 model two.
We trained a stage 2 GRU model and a stage 2 1D-CNN model to produce exam level predictions from these 64 GAP embeddings.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2Fe46e41fab360d847c3fadb5d6dff9002%2Ftop.png?generation=1603741873425399&amp;alt=media" alt=""></p> <p><strong>This stage 2 exam level model increased LB to 0.183 from 0.204 and CV to 0.203 from 0.224 (gain = 0.021)</strong></p> <h1>Other Misc Ideas</h1> <p>When using global average pooling 2D in your CNN, location information is lost. Therefore I tried giving my models location information in various ways to help predict targets related to location (i.e. <code>leftsided_pe</code> etc) and related to size (i.e. <code>rv_lv_ratio_gte_1</code> etc). Unfortunately, none of my ideas increased CV or LB. My favorite is below.</p> <h2>Locating PE with Class Activation Maps (CAM)</h2> <p>I extracted class activation maps from my stage 1 models and fed the location information into stage 2 exam models. (CAMs explained <a href="https://www.kaggle.com/cdeotte/unsupervised-masks-cv-0-60" target="_blank">here</a>). In the below figure, the ground truth is in the title. The green circles are the CAM of my EfficientNetB4 model. Note that CT scans are flipped so the left side of the image is "right" and the right side of the image is "left". You can see that the CAM does a good job of locating the PE.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1723677%2F291dc983514a322d5ce4e56b22a9a681%2Fcam.png?generation=1603755601607192&amp;alt=media" alt=""></p> <h1>Thank you</h1>
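As a small addendum to the stage 2 pipeline above: selecting 64 slices equally spaced between the 25% and 75% z positions of a study can be sketched with numpy (the helper name is mine):

```python
import numpy as np

def middle_slice_indices(num_slices, count=64, lo=0.25, hi=0.75):
    # fractional z positions, equally spaced in [lo, hi]
    positions = np.linspace(lo, hi, count)
    # map to integer slice indices of this study
    return (positions * (num_slices - 1)).round().astype(int)

idx = middle_slice_indices(200)   # a typical ~200-slice study
print(idx[0], idx[-1], idx.size)  # 50 149 64
```

The GAP embeddings extracted at these 64 indices are what form the (64, 1792) input sequence of the stage 2 GRU shown earlier.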
Mechanisms of Action (MoA) Prediction
MoA solution - Tried to keep it simple
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Thanks a lot to Kaggle and the organizers for organizing such a beautiful competition, and congratulations to all participants.</p> <p>So many sophisticated solutions have been shared, containing a lot of learning opportunities. I will try to give a short description.</p> <h3>Models</h3> <p>2 NN models, 1 TabNet. All of them seed averaged.</p> <h3>Feature Engineering</h3> <p>This part is always a nightmare to me. For TabNet, there was no feature engineering. For the NNs, the following steps were applied (one after another, up to step 4):</p> <ol> <li>Remove the features whose max(correlation coefficients with all targets) is below a threshold</li> <li>Distribute the correlated features (derived based on a threshold) across the different seeds</li> <li>Variance Threshold</li> <li>Quantile Transformer</li> <li>PCA (g: 0.4-0.6, c: 0.95-0.97) on raw data (before applying 1)</li> </ol> <p>All done on training data only </p> <p>Finally, it's an ensemble of these three models </p> <p>Thanks to the creators of all public notebooks and discussion posts, which helped in many ways; especially I want to mention a few of them here</p> <p>Notebooks<br> <a href="https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids" target="_blank">https://www.kaggle.com/thehemen/pytorch-transfer-learning-with-k-folds-by-drug-ids</a><br> <a href="https://www.kaggle.com/hiramcho/moa-tabnet-with-pca-rank-gauss" target="_blank">https://www.kaggle.com/hiramcho/moa-tabnet-with-pca-rank-gauss</a><br> <a href="https://www.kaggle.com/markpeng/deepinsight-transforming-non-image-data-to-images"
target="_blank">https://www.kaggle.com/markpeng/deepinsight-transforming-non-image-data-to-images</a> (Though I have not tried it, it is so innovative)</p> <p>Discussions<br> <a href="https://www.kaggle.com/c/lish-moa/discussion/195195" target="_blank">https://www.kaggle.com/c/lish-moa/discussion/195195</a></p>
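The feature-engineering steps above (variance threshold, then PCA keeping a target fraction of explained variance) can be illustrated with plain numpy on synthetic data. This is a minimal sketch under my own assumptions — the cutoff values and array names are illustrative, and the actual solution would use sklearn's `VarianceThreshold`, `QuantileTransformer`, and `PCA`:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X[:, 5] *= 0.01                      # a near-constant feature to drop

# variance threshold: keep features whose variance exceeds a cutoff
keep = X.var(axis=0) > 0.3
X_vt = X[:, keep]

# PCA via SVD, keeping enough components for ~95% explained variance
Xc = X_vt - X_vt.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = np.cumsum(S**2) / np.sum(S**2)
n_comp = int(np.searchsorted(explained, 0.95) + 1)
X_pca = Xc @ Vt[:n_comp].T
print(X_vt.shape, X_pca.shape)
```

As in the write-up, the statistics (variance, mean, principal axes) should be fit on training data only and then applied to the test rows.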
Mechanisms of Action (MoA) Prediction
Private 58th solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Mechanisms of Action (MoA) Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all prize winners and thank you to all participants!<br> I'll show my solution briefly.<br> I'm sorry for my spaghetti code …!</p> <p><a href="https://www.kaggle.com/kaerunantoka/moa-blending24/notebook?select=submission.csv" target="_blank">My solution</a> is now public.</p> <h3>Summary</h3> <p>・My final model is an average of 5 models (MLP1, MLP2, MLP3, ResNet, TabNet).<br> ・I decided to do this comp solo, so I tried to generate diversity.<br> ・So there are many minor changes.<br> ・They are mainly based on public notebooks, such as <a href="https://www.kaggle.com/kushal1506/moa-pretrained-non-scored-targets-as-meta-features" target="_blank">kushal1506's</a>, <a href="https://www.kaggle.com/rahulsd91/moa-multi-input-resnet-model" target="_blank">rahulsd91's</a>.<br> ・I tuned the ensemble weights with the public LB.</p> <h3>MLP1</h3> <p>・train + infer<br> ・preprocess … QuantileTransformer<br> ・FE … SVD, <a href="https://www.kaggle.com/kaerunantoka/pytorch-moa-dae-v5" target="_blank">DenoisingAutoEncoder</a>, VarianceThreshold<br> ・CV … MultilabelStratifiedKFold(n_splits=n_folds).split(X=train, y=target, groups=np.array(train.drug_id.values))<br> ・single seed, val logloss 0.014776957827222726</p> <h3>MLP2</h3> <p>・train + infer<br> ・preprocess … <a href="https://www.kaggle.com/c/lish-moa/discussion/195788" target="_blank">RankGauss</a><br> ・FE … SVD, DenoisingAutoEncoder, VarianceThreshold, training with non-scored-targets<br> ・CV … Chris's CV strategy<br> ・5 seeds, val logloss 0.015637875846293534</p> <h3>MLP3</h3> <p>・train + infer<br> ・preprocess … 
QuantileTransformer<br> ・FE … PCA, cluster, stats, VarianceThreshold, Pretrained with non-scored-targets<br> ・CV … MultilabelStratifiedKFold(n_splits=7,random_state=seed)<br> ・2 seeds, val logloss 0.014639779385801738</p> <h3>ResNet</h3> <p>・train + infer<br> ・preprocess … QuantileTransformer<br> ・FE … PCA, mean, DenoisingAutoEncoder<br> ・CV … MultilabelStratifiedKFold(n_splits=n_folds)<br> ・3 seeds, val logloss 0.016809864</p> <h3>TabNet</h3> <p>・only infer<br> ・preprocess … RankGauss<br> ・FE … PCA, SVD, stats, cluster<br> ・6 seeds, Overall AUC: 0.7873209725163044, Average CV: 0.016399392395897112</p>
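The final submission averages the predicted probabilities of the five models (and multiple seeds) above. A minimal numpy sketch with made-up labels shows why simple probability averaging helps under the competition's log loss; `mean_log_loss` here is my re-implementation of the metric, not the author's code:

```python
import numpy as np

def mean_log_loss(y_true, y_pred, eps=1e-15):
    """Mean binary cross-entropy over all (sample, target) cells, as in MoA."""
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=(50, 6)).astype(float)          # fake multilabel targets
# five "seeds": noisy versions of the truth, clipped to valid probabilities
preds = [np.clip(y + rng.normal(0, 0.3, y.shape), 0.001, 0.999) for _ in range(5)]

blend = np.mean(preds, axis=0)   # simple seed/model average
# by convexity of log loss, the blend's loss is at most the average of the
# individual losses (Jensen's inequality)
print(round(mean_log_loss(y, preds[0]), 4), round(mean_log_loss(y, blend), 4))
```

In practice the blend weights were tuned (here against the public LB) rather than kept uniform.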
Riiid Answer Correctness Prediction
6th Place Solution: Very Custom GRU
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all congrats to <a href="https://www.kaggle.com/keetar" target="_blank">@keetar</a> and <a href="https://www.kaggle.com/mamasinkgs" target="_blank">@mamasinkgs</a> and other top teams. I will shortly explain my solution. It is late here, so I may be missing some points.</p> <p><strong>Some General Details</strong></p> <ul> <li>Used questions only. Lectures improve validation score but increase train-val gap and don’t improve LB.</li> <li>Didn't use dataframes. Used numpy arrays partitioned by user ids.</li> <li>Sequence length 256</li> <li>15 epochs. 1/3 of the data each time with reducing LR.</li> <li>8192 batch size</li> <li>Ensemble of the same model with 7 different seeds trained on whole data</li> <li>0.8136 single model validation score, 0.813 LB. Ensemble: 0.815.</li> <li>8 hours training on 4-GPU machine</li> <li>Used Github and committed any improvement with a message like: Add one more GRU layer (Val: 0.8136, LB: 0.813)</li> </ul> <p><strong>Inputs</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F471945%2F4e161bd44dd25c8329c5ced0fa6c9ab3%2Fimg1.png?generation=1610064408784528&amp;alt=media" alt=""></p> <p><strong>Engineered Features</strong></p> <p>Assume current question’s correct answer is X. Logarithm of:</p> <ul> <li>Number of questions since last X.</li> <li>Length of current X streak. (can be zero)</li> <li>Length of current streak on any non-X answer. 
(can be zero)</li> </ul> <p>This helps with users who always pick A as the answer, etc.</p> <p><strong>Embeddings</strong><br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F471945%2F8256c5fe356ce1bedeedaf9bc91e9884%2Fimg2.png?generation=1610064559098164&amp;alt=media" alt=""></p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F471945%2F73d8b182375f185231d35f8b141b8cbf%2Fimg3.png?generation=1610064576810743&amp;alt=media" alt=""></p> <p>Content Cosine Similarity</p> <ul> <li>A bit similar to attention with 16 heads</li> <li>Linear transformation and l2 norm applied on content vectors</li> <li>For 16 different transformations, the cosine similarity between the current content and history contents is calculated.</li> <li>Transformation is symmetric for content and history contents.</li> </ul> <p>U-GRU:</p> <ul> <li>GRU with 2 directions but not BiGRU</li> <li>First does reverse pass, concatenates the output and then does forward pass</li> </ul> <p>MLP:</p> <ul> <li>2 layers of [Linear, BatchNorm, Relu]</li> </ul> <p>Edit: Part embedding is trainable. There is actually a sigmoid x tanh layer before the GRUs.</p>
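The three engineered answer-pattern features can be computed from the user's past answer choices. Below is a sketch of my interpretation of the description (helper and variable names are mine; the model takes logarithms of these counts afterwards):

```python
def answer_streak_feats(history, x):
    """history: user's past chosen answers, most recent last.
    x: the current question's correct answer option.
    Returns the three raw counts described in the write-up."""
    # number of questions since the user last picked x (full history length if never)
    since_x = next((i + 1 for i, a in enumerate(reversed(history)) if a == x),
                   len(history))
    # length of the current streak of x answers (0 if the last answer wasn't x)
    x_streak = 0
    for a in reversed(history):
        if a != x:
            break
        x_streak += 1
    # length of the current streak of one repeated non-x answer
    non_x_streak = 0
    if history and history[-1] != x:
        last = history[-1]
        for a in reversed(history):
            if a != last:
                break
            non_x_streak += 1
    return since_x, x_streak, non_x_streak

# a user who always picks 'A' shows a long non-x streak when the answer is 'B'
print(answer_streak_feats(list("AAAAA"), "B"))  # (5, 0, 5)
print(answer_streak_feats(list("ABBB"), "B"))   # (1, 3, 0)
```

These counts make "always picks A"-style behavior visible to the model, as noted above.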
Riiid Answer Correctness Prediction
18th place solution
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Congrats to all medal teams and new Grandmasters, Masters, and Experts. Thanks to the organizers and Kaggle for such a good competition; it shows that a Kaggle competition is not just a game but can also be a useful machine learning project.</p> <h1>Team</h1> <ul> <li>At first we were three separate teams: tomoyo and me, ethan and qyxs, and wrb0312. We focused on feature engineering and optimization before wrb0312 joined us; I think we made many good features, but neural networks dominated this competition. wrb0312 did a great job even though he used a transformer for the first time. After wrb0312 joined us there were only ten days left, so we focused on ensembling our models for inference, and also improved the transformer very much. Our team members are from China and Japan; it was very interesting to see us communicate in a mix of Chinese, Japanese, and English. Great job everyone!</li> </ul> <h1>Optimization</h1> <ul> <li><p>For GBM features, rather than using many dictionaries to save feature data, we developed a numba-based framework to speed up the feature engineering process and online calculation. Firstly, the data are sorted by ['user_id', 'timestamp', 'content_id'] and split into different arrays. Then we created features in the different arrays via self-designed rolling or cumulative functions. Actually, it provides a very flexible way to create and test features. On 10M rows of data, the feature engineering process needs only 5 minutes to finish.</p></li> <li><p>Some examples are listed below.
</p></li> </ul> <pre><code>from tqdm import tqdm from numba import jit,njit from joblib import Parallel, delayed from tqdm import tqdm import gc from multiprocessing import Process, Manager,Pool from functools import partial from numba import prange import numpy as np import pandas as pd from numba import types from numba.typed import Dict import functools, time from numba.typed import List def timeit(f): def wrap(*args, **kwargs): time1 = time.time() ret = f(*args, **kwargs) time2 = time.time() print('{:s} function took {:.3f} s'.format(f.__name__, np.round(time2-time1, 2))) return ret return wrap def rolling_feat_group(train, col_used): a = train[col_used].values ind = np.lexsort((a[:,2],a[:,1],a[:,0])) a = a[ind] g = np.split(a, np.unique(a[:, 0], return_index=True)[1][1:]) return g, ind, col_used @jit(nopython = True, fastmath = True) def rolling_cal(arr, step, window = 5, shift_ = 1): m = 2 arr_ = np.concatenate((np.full((window, ), np.nan), arr)) ret = np.zeros((arr.shape[0], m)) beg = window for i in step: tmp = arr_[beg-window:beg] ret[beg - window:(beg - window + i), 0] = np.nanmean(tmp) ret[beg - window:(beg - window + i), 1] = np.nansum(tmp) beg += i return ret @jit(nopython = True, fastmath = True) def rolling_time_cal(arr, window = 5, shift_ = 1): m = 1 arr_ = np.concatenate((np.full((window, ), np.nan), arr)) ret = np.zeros((arr.shape[0], m)) for i in range(0,arr.shape[0], 1): tmp = arr_[i:i+window+1] ret[i, 0] = np.nanmean(tmp) return ret def rolling_cal_wrap(tmp_g, shift_period): m = 2 tmp_res = [] step = np.unique(tmp_g[:, 1], return_counts=True)[1] for window_size in shift_period: tmp = rolling_cal(tmp_g[:, 2], step, window_size) tmp_res.append(tmp) tmp_res = np.concatenate(tmp_res, axis = 1) return tmp_res def rolling_time_cal_wrap(tmp_g, shift_period): m = 2 tmp_res = [] for window_size in shift_period: tmp = rolling_time_cal(tmp_g[:, 2], window_size) tmp_res.append(tmp) tmp_res = np.concatenate(tmp_res, axis = 1) return tmp_res def 
rolling_feat_cal(tmp_g, name_dict, global_period): answer_idx = name_dict.index('answered_correctly') prior_idx = name_dict.index('prior_question_elapsed_time') item_mean_idx = name_dict.index('item_mean') task_set_idx = name_dict.index('task_set_distance') tmp_res1 = rolling_cal_wrap(tmp_g[:,[0,1, answer_idx]], global_period) tmp_res2 = rolling_time_cal_wrap(tmp_g[:,[0,1, prior_idx]], global_period) tmp_res3 = rolling_time_cal_wrap(tmp_g[:,[0,1, item_mean_idx]], global_period) tmp_res4 = rolling_time_cal_wrap(tmp_g[:,[0,1, task_set_idx]], global_period) tmp_res = np.concatenate([tmp_res1, tmp_res2, tmp_res3, tmp_res4], axis = 1) return tmp_res </code></pre> <ul> <li>If anyone is interested in how to create features via the numba framework, Tomoyo published his full GBM pipeline on GitHub (<a href="https://github.com/ZiwenYeee/Riiid-numba-framework" target="_blank">https://github.com/ZiwenYeee/Riiid-numba-framework</a>)</li> </ul> <h1>Catboost (LB 0.807)</h1> <h3>summary</h3> <ul> <li>We created 183 features for the final catboost model, including some original features, global statistics (item based), cumulative and rolling statistics (user based), tfidf-svd (based on each question's user list), word2vec (based on each user's question list and wrong/correct tag lists), time deltas from many perspectives, and last-same-part group features.</li> </ul> <h3>gbm benchmark</h3> <ul> <li>We compared lightgbm, xgboost, and catboost; catboost is the best in training and inference speed and memory consumption. When training on the full data, lightgbm needs over 100 hours with my AMD Ryzen ThreadRipper 3970X, and xgboost always runs out of memory even using dask with 4 RTX 3090s.</li> </ul> <h3>strong features and interesting findings by qyxs</h3> <ul> <li>1.
the history difficulty statistics features of users who had correct/wrong answers, boost almost 0.003</li> </ul> <pre><code>tmp_df = for_question_df.groupby('content_id')['answered_correctly'].agg([['corr_ratio', 'mean']]).reset_index()
tmp_fe = for_question_df[for_question_df['answered_correctly']==0].merge(tmp_df, on='content_id').groupby('user_id')['corr_ratio'].agg(['min', 'max', 'mean', 'std']).reset_index()
for_train = for_train.merge(tmp_fe, on='user_id', how='left')
</code></pre> <ul> <li>2. focus on the records connecting the user's current part with the last same part; the generated features include answer correct ratio, time diff, frequency, etc., boost almost 0.002</li> </ul> <pre><code>for_question_df['rank_part'] = for_question_df.groupby(['user_id', 'part'])['timestamp'].rank(method='first')
for_question_df['rank_user'] = for_question_df.groupby(['user_id'])['timestamp'].rank(method='first')
for_question_df['rank_diff'] = for_question_df['rank_user'] - for_question_df['rank_part']
for_question_df['part_times'] = for_question_df.groupby(['user_id', 'part'])['rank_diff'].rank(method='dense')
for_question_df['rank_diff'] = for_question_df.groupby(['user_id', 'part'])['rank_diff'].rank(method='dense', ascending=False)
last_part = for_question_df[for_question_df['rank_diff']==1]
part_times = for_question_df.groupby(['user_id', 'part'])['part_times'].agg([['part_times', 'max']]).reset_index()
last_part_df = last_part.groupby(['user_id', 'part'])['answered_correctly'].agg([['last_continue_part_ratio', 'mean'], ['last_continue_part_cnt', 'count']]).reset_index()
last_part_time = last_part.groupby(['user_id', 'part'])['timestamp'].agg([['last_continue_part_time_start', 'min'], ['last_continue_part_time_end', 'max']]).reset_index()
last_part_df = last_part_df.merge(last_part_time, on=['user_id', 'part'], how='left')
last_part_df = last_part_df.merge(part_times, on=['user_id', 'part'], how='left')
last_part_df['part_time_diff'] = last_part_df['last_continue_part_time_end'] - last_part_df['last_continue_part_time_start']
last_part_df['part_time_freq'] = last_part_df['last_continue_part_cnt']/last_part_df['part_time_diff']
for_train = for_train.merge(last_part_df, on=['user_id', 'part'], how='left')
for_train['last_continue_part_time_start'] = for_train['timestamp'] - for_train['last_continue_part_time_start']
for_train['last_continue_part_time_end'] = for_train['timestamp'] - for_train['last_continue_part_time_end']
</code></pre> <ul> <li>3. the answer correctly ratio of each question under different user abilities (split into 11 bins), boost almost 0.001</li> </ul> <pre><code>for_question_df['user_ability'] = for_question_df.groupby('user_id')['answered_correctly'].transform('mean').round(1)
tmp_df = for_question_df.pivot_table(index='content_id', columns='user_ability', values='answered_correctly', aggfunc='mean').reset_index()
tmp_df.columns = ['content_id'] + [f'c_mean_{i}_ratio' for i in range(11)]
for_train = for_train.merge(tmp_df, on='content_id', how='left')
</code></pre> <ul> <li>Some interesting points:<ul> <li>1. Users would watch lectures after wrong answers, so we could generate some features from this. LB did not improve, maybe because of the lecture info in the next group.</li> <li>2. content_ids such as 0-195, 7851-7984, etc. are all in the same part and contiguous with each other, so we could build a new bundle to generate features</li></ul></li> </ul> <h3>strong features and interesting findings by ethan</h3> <ul> <li>1.
user's behavior in the last 1, 5, …, 60 minutes, 0.001 boost</li> </ul> <pre><code>for w in [1, 5, 10, 15, 30, 45, 60]:
    print(w)
    tmp = q_logs[q_logs['timestamp']&gt;=(q_logs['end_time']-w*60*1000)].copy()
    group_df = tmp.groupby(['user_id'])['content_id'].agg([['user_content_nunique_in_last{}mins'.format(w), 'nunique']]).reset_index()
    train = train.merge(group_df, on=['user_id'], how='left')
    group_df = tmp.groupby(['user_id'])['part'].agg([['user_part_nunique_in_last{}mins'.format(w), 'nunique']]).reset_index()
    train = train.merge(group_df, on=['user_id'], how='left')
    group_df = tmp.groupby(['user_id'])['answered_correctly'].agg([['user_correct_raito_in_last{}mins'.format(w), 'mean']]).reset_index()
    train = train.merge(group_df, on=['user_id'], how='left')
</code></pre> <ul> <li>2. "users' ability" statistics in each question, separately by "answered_correctly" (0/1), 0.002 boost</li> </ul> <pre><code>cc = q_logs.groupby(['user_id'])['answered_correctly'].agg([['corr_ratio', 'mean']]).reset_index()
gg = q_logs[['user_id', 'content_id', 'answered_correctly']].merge(cc, on=['user_id'], how='left')
group_df1 = gg[gg['answered_correctly']==1].groupby(['content_id'])['corr_ratio'].agg([['question_correct_user_ablility_min', 'min'], ['question_correct_user_ablility_max', 'max'], ['question_correct_user_ablility_mean', 'mean'], ['question_correct_user_ablility_skew', 'skew'], ['question_correct_user_ablility_med', 'median'], ['question_correct_user_ablility_std', 'std']]).reset_index()
group_df2 = gg[gg['answered_correctly']==0].groupby(['content_id'])['corr_ratio'].agg([['question_wrong_user_ablility_min','min'], ['question_wrong_user_ablility_max','max'], ['question_wrong_user_ablility_mean','mean'], ['question_wrong_user_ablility_skew','skew'], ['question_wrong_user_ablility_med','median'], ['question_wrong_user_ablility_std','std']]).reset_index()
</code></pre> <ul> <li>3.
"lagtime" statistics in each question, separately by "answered_correctly" (0/1), representing the distribution of users' preparation time for answering the question correctly; about 0.001 boost</li> </ul> <pre><code>user_task_timestamp = q_logs[['user_id', 'task_container_id', 'timestamp']].drop_duplicates()
user_task_timestamp['lag_time'] = user_task_timestamp['timestamp'] - user_task_timestamp.groupby(['user_id'])['timestamp'].shift(1)
tmp = q_logs[['user_id', 'task_container_id', 'content_id', 'answered_correctly']].merge(user_task_timestamp.drop(['timestamp'], axis=1), on=['user_id', 'task_container_id'], how='left')
group_df = tmp[tmp['answered_correctly']==1].groupby(['content_id'])['lag_time'].agg([['c_lag_time_mean', 'mean'], ['c_lag_time_std', 'std'], ['c_lag_time_max', 'max'], ['c_lag_time_min', 'min'], ['c_lag_time_median', 'median']]).reset_index()
train = train.merge(group_df, on=['content_id'], how='left')
group_df = tmp[tmp['answered_correctly']==0].groupby(['content_id'])['lag_time'].agg([['w_lag_time_mean', 'mean'], ['w_lag_time_std', 'std'], ['w_lag_time_max', 'max'], ['w_lag_time_min', 'min'], ['w_lag_time_median', 'median']]).reset_index()
train = train.merge(group_df, on=['content_id'], how='left')
</code></pre> <h3>feature list</h3> <pre><code>['content_id', 'prior_question_elapsed_time', 'prior_question_had_explanation', 'correct_answer', 'user_count', 'user_sum', 'user_mean', 'item_count', 'item_sum', 'item_mean', 'answer_ratio_0', 'answer_ratio_1', 'answer_ratio_2', 'bundle_id', 'part', 'le_tag', 'question_correct_user_ablility_mean', 'question_correct_user_ablility_median', 'question_wrong_user_ablility_mean', 'question_wrong_user_ablility_median', 'word2vec_0', 'word2vec_1', 'word2vec_2', 'word2vec_3', 'word2vec_4', 'svd_0', 'svd_1', 'svd_2', 'svd_3', 'svd_4', 'tags_w2v_correct_mean_0', 'tags_w2v_wrong_mean_0', 'tags_w2v_correct_mean_1', 'tags_w2v_wrong_mean_1', 'tags_w2v_correct_mean_2', 'tags_w2v_wrong_mean_2', 'tags_w2v_correct_mean_3',
'tags_w2v_wrong_mean_3', 'tags_w2v_correct_mean_4', 'tags_w2v_wrong_mean_4', 'real_time_wrong_mean', 'real_time_wrong_median', 'real_time_correct_mean', 'real_time_correct_median', 'task_set_distance_wrong_mean', 'task_set_distance_wrong_median', 'task_set_distance_correct_mean', 'task_set_distance_correct_median', 'mean_0_ratio', 'mean_1_ratio', 'mean_3_ratio', 'mean_4_ratio', 'mean_5_ratio', 'mean_6_ratio', 'mean_7_ratio', 'mean_8_ratio', 'mean_9_ratio', 'mean_10_ratio', 'user_d1', 'user_d2', 'task_set_distance', 'user_diff_mean', 'user_diff_std', 'user_diff_min', 'user_diff_max', 'task_set_item_mean', 'task_set_item_min', 'task_set_item_max', 'task_set_distance2', 'task_distance_shift', 'task_set_distance_diff', 'task_distance_diff_shift', 'container_mean_1', 'container_mean_5', 'container_std_5', 'container_mean_10', 'container_std_10', 'container_mean_20', 'container_std_20', 'container_mean_30', 'container_std_30', 'container_mean_40', 'container_std_40', 'prior_question_elapsed_time_mean_1', 'prior_question_elapsed_time_mean_5', 'prior_question_elapsed_time_mean_10', 'prior_question_elapsed_time_mean_20', 'prior_question_elapsed_time_mean_30', 'prior_question_elapsed_time_mean_40', 'item_mean_mean_30', 'item_mean_mean_40', 'task_set_distance_mean_1', 'task_set_distance_mean_5', 'task_set_distance_mean_10', 'task_set_distance_mean_20', 'task_set_distance_mean_30', 'begin_time_diff', 'end_time_diff', 'part_time_diff_mean', 'part_session_mean', 'part_session_sum', 'part_session_count', 'full_group0_item_mean_mean', 'full_group0_item_mean_median', 'full_group0_task_set_distance_median', 'full_group0_timestamp_mean', 'full_group0_timestamp_median', 'full_group1_item_mean_mean', 'full_group1_item_mean_median', 'full_group1_task_set_distance_median', 'full_group1_timestamp_median', 'part_sum', 'part_count', 'part_mean', 'part_sum_global_ratio', 'part_sum_1', 'part_sum_5', 'part_mean_5', 'part_sum_10', 'part_mean_10', 'cum_answer0_mean_item_mean', 
'cum_answer0_median_item_mean', 'cum_answer0_median_task_set_distance', 'cum_answer1_mean_item_mean', 'cum_answer1_median_item_mean', 'cum_answer1_mean_task_set_distance', 'cum_answer1_median_task_set_distance', 'cum_answer0_time_diff', 'cum_answer1_time_diff', 'global_task_set_shift1', 'global_task_set_shift2', 'global_task_set_shift4', 'global_task_set_shift5', 'cum_answer0_mean_wrong_time_diff', 'cum_answer0_median_wrong_time_diff', 'cum_answer1_mean_right_time_diff', 'content_correct_mean', 'content_correct_sum', 'content_correct_count', 'hard_answer0_time', 'hard_answer1_time', 'full_bundle_item_mean_mean', 'full_bundle_item_mean_median', 'full_bundle_task_set_distance_mean', 'full_bundle_task_set_distance_median', 'full_bundle_timestamp_mean', 'full_bundle_timestamp_median', 'bundle_sum', 'bundle_mean', 'bundle_count', 'user_trend_mean', 'user_trend_median', 'user_trend_roll_user_ans_sum', 'user_trend_roll_user_ans_mean', 'user_trend_roll_user_ans_count', 'user_trend_roll_item_ans_mean', 'user_trend_roll_item_ans_count', 'div_ratio1', 'div_ratio2', 'div_ratio3', 'new_Feat0', 'new_Feat1', 'new_Feat2', 'new_Feat3', 'part_time_wrong_div', 'part_time_right_div', 'diff_lag_median_div', 'diff_item_median_div', 'diff_time_median_div', 'diff_item_mean_div', 'diff_task_set_mean_div', 'diff_timestamp_mean_div', 'last_20_frequent_answer', 'last_20_frequent_answer_count', 'last_20_frequent_answer_mean', 'last_20_frequent_answer_sum', 'last_user_same_answer_tf', 'last_item_same_answer_tf', 'last_right_time_diff', 'last_wrong_time_diff', 'last_5_part_time_div', 'last_10_part_time_div', 'last_20_part_time_div'] </code></pre> <h1>Transformer(LB 0.808)</h1> <ul> <li><p>Used questions only. 
Lectures didn't improve our validation score.</p></li> <li><p>In training, window size 800 (the bigger, the better); in inference, window size 300 (the bigger, the better, but time consuming).</p></li> <li><p>Optimizer: Adam with lr = 8e-4, beta1 = 0.9, beta2 = 0.999, with warmup steps to 4000.</p> <ul> <li>Without warmup steps, didn't converge.</li></ul></li> <li><p>Number of layers = 4, dimension of the model = 256, dimension of the FFN = 2048.</p></li> <li><p>Batch size=80</p> <ul> <li>With smaller size, didn't converge.</li></ul></li> <li><p>Dropout = 0</p></li> <li><p>Position encoding: Axial Positional Embedding</p> <ul> <li><a href="https://arxiv.org/abs/1912.12180" target="_blank">https://arxiv.org/abs/1912.12180</a></li> <li><a href="https://github.com/lucidrains/axial-positional-embedding" target="_blank">https://github.com/lucidrains/axial-positional-embedding</a></li> <li>This encoding improved score significantly.</li></ul></li> <li><p>Augmentation: Mixup-Transformer</p> <ul> <li><a href="https://arxiv.org/abs/2010.02394" target="_blank">https://arxiv.org/abs/2010.02394</a></li></ul></li> <li><p>Inputs</p> <ul> <li>question_id</li> <li>part</li> <li>prior_question_elapsed_time / 1000</li> <li>lagtime<ul> <li>log1p((timestamp_t - timestamp_(t-1)) /1000 / 60)</li> <li>This feature improved the score significantly</li></ul></li> <li>answered_correctly</li> <li>GBT feats (importance top N)</li></ul></li> <li><p>model image<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F715257%2F6743aa3ef1403f4084dd4fa3d8372702%2FScreen%20Shot%202021-01-08%20at%2011.34.11.png?generation=1610073332748004&amp;alt=media" alt=""></p></li> </ul> <h1>Ensemble(LB 0.812)</h1> <ul> <li>Finally we used one catboost and two transformers in the ensemble due to the inference time limitation. Unfortunately our inference notebook had a bug, although we improved our model by about 0.0006 on the last day.</li> </ul>
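The lagtime input above — log1p of the minutes since the user's previous interaction, with raw timestamps in milliseconds — can be sketched as follows (how the first interaction of a user is filled is my assumption; the write-up doesn't say):

```python
import numpy as np

def lagtime_feature(timestamps_ms):
    """log1p((timestamp_t - timestamp_(t-1)) / 1000 / 60) per the write-up.
    The first interaction gets a lag of 0 (my choice for the missing value)."""
    ts = np.asarray(timestamps_ms, dtype=np.float64)
    lag_min = np.diff(ts, prepend=ts[:1]) / 1000 / 60   # minutes since previous row
    return np.log1p(lag_min)

# lags of 0 min, 1 min, 2 min, and one day for a single user
ts = [0, 60_000, 180_000, 180_000 + 24 * 60 * 60 * 1000]
print(np.round(lagtime_feature(ts), 3))
```

The log1p squashes the heavy-tailed gap distribution (seconds to days) into a compact range the embedding layers can handle.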
Riiid Answer Correctness Prediction
149th solution write up (LGBM + Transformer ensemble)
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Also available at <a href="https://github.com/yoonseok312/riiid-answer-correctness-prediction" target="_blank">https://github.com/yoonseok312/riiid-answer-correctness-prediction</a> with code.</p> <p>This is a solution write-up for our model, which is an ensemble of a single Light Gradient Boosted Machine model and a single Encoder-Decoder based Transformer model. It was our first time competing in a Kaggle competition and none of us had previous AI/ML/Stats experience, so we learned a lot throughout this competition. Should you have any questions, feel free to contact me at yoonseok@berkeley.edu.</p> <h1>LightGBM</h1> <h2>Features</h2> <p>ts_delta: the gap between the timestamp of the current content and the previous content of the same user.<br> task_container_id<br> prior_question_elapsed_time: in seconds, rounded<br> prior_question_had_explanation<br> part<br> num_tag: number of tags in that question<br> u_chance: the average correctness of the user until the current time<br> u_attempts: number of contents the user has done<br> u_attempt_c: number of times the user interacted with the specific content in the past (only counting from &gt;1 interactions due to memory)<br> c_chance: the average correctness of the question until the current time<br> c_attempts: number of encounters of that question (all users)<br> u_part_chance: the average correctness of the user doing the same part as the question<br> u_part_attempts: number of questions of the same part the user has done<br> u_skill_chance: the average correctness of the user doing the same skill as the question (part &lt; 5:
listening, part &gt;= 5: reading)<br> u_skill_attempts: number of questions of the same skill the user has done<br> t_chance: the average correctness of the user on questions with a specific tag until the current time<br> t_attempts: user's number of encounters of that tag<br> total_explained: number of times an explanation was provided to the user until the current time (all contents)<br> 10_recent_correctness: user correctness of the most recent questions (up to 10)<br> 10_recent_mean_gap: mean ts_delta of the most recent questions (up to 10)<br> bundle_elapsed: the mean elapsed time of the bundle, up until the abs time.<br> mean_elapsed: the mean elapsed time of the user until now.<br> prev_t1: tag of the last question<br> prev_cor: correctness of the last question<br> trueskill_possibility: possibility of the user 'beating' the question (getting the question correct) based on trueskill<br> mu: mu value (mean of trueskill ratings) of user<br> sigma: sigma value (standard deviation of trueskill ratings) of user<br> Columns with NaN values were filled with -1.</p> <h2>Cross Validation and train strategy</h2> <ol> <li>Define an absolute time for the whole database (abs_time = user_id//50 + timestamp//1000)</li> <li>Sort by abs_time. All features mentioned above were engineered such that we will not take data from the future (higher abs_time) into account.</li> <li>Drop the first 25% of the data. This data contains some noise, i.e. when nobody had studied many of the questions yet.</li> <li>Take the last 25% of the data as the validation set.</li> <li>Train the model with the remaining 50% of the data.</li> </ol> <h2>Single Model AUC</h2> <p>We used less than 30 features, but considering that most of the single LGBM models above 0.79 AUC used 40+ features, we did a decent job of focusing on important features.<br> LB score: AUC 0.789<br> Number of epochs: 6650 (around 15 hours of training in total)</p> <h1>Transformer</h1> <h2>Encoder</h2> <p>Added the layers below with positional encoding.</p> <p>1) Exercise Related<br> min_delta: minute difference between this question and the previous one. Cap at 1443 (1 day)<br> day_delta: day difference between this question and the previous one. Cap at 30<br> month_delta: month difference between this question and the previous one. Cap at 9<br> tid: task container id<br> is_with: if the question is presented with another question. They usually have the same task container<br> c_part: part, one-hot encoded, denoting skill (listening, reading, part1,2,…)<br> tag1…6: tags of question (t1 to t6 are the tags of one question, t1 being the most important tag.)
The above embeddings or Dense layers are concatenated.</p> <p>2) Content id (cid)<br> Dense layer</p> <h2>Decoder</h2> <p>Added the layers below with positional encoding.</p> <p>1) Response Related<br> prev_answered_correct: correctness of the previous answer.<br> prior_elapsed: prior elapsed time<br> prior_explained: whether the prior question had an explanation. The above embeddings or Dense layers are concatenated.</p> <p>2) Answered Correctly<br> Concatenate Lecture related Embeddings/Dense<br> num_lect: number of lectures the user has seen<br> lec_type: one-hot encoding of the most recent lecture type (llecty1, 2…)<br> lec_h_past: time since the most recent lecture. The above embeddings or Dense layers are concatenated.</p> <h2>Parameters</h2> <p>WINDOW_SIZE: 100<br> EMBED_DIM: 256<br> NUM_HEADS: 16</p> <h2>Cross Validation and train strategy</h2> <p>Use the first 80% of the data as the train set and the last 20% as the validation set.</p> <h2>Single Model AUC</h2> <p>AUC 0.786<br> The SAINT model has plenty of room for improvement, but as 1 epoch took more than 10 hours to train, we decided to focus on improving LGBM.</p> <h1>Inference</h1> <h2>1. Ensembling two models</h2> <p>Ensembled a single LGBM model and a Transformer model in a 0.55 (LGBM) / 0.45 ratio.<br> AUC: 0.793</p> <h2>2. Ensembling three models</h2> <p>Ensembled two LGBM models and a Transformer model. The 2nd LGBM was the same as the first LGBM but without the features related to Trueskill. When 2 out of the three models predicted that the user is likely to answer correctly, we used the max value among the 3 predictions. When 2 out of the three models predicted that the user is likely to answer wrongly, we used the min value among the 3 predictions. For the remaining cases, we mixed the three models in a 0.4 (Transformer) / 0.45 (First LGBM) / 0.15 (Second LGBM) ratio.<br> AUC: 0.793 (slightly higher than the 1st Inference)</p> <h1>Training Environment</h1> <p>Our biggest mistake was thinking that all the feature engineering, training, and inferencing processes must be done in the Kaggle environment.
We were only using the Kaggle environment until 2 weeks before the competition ended, and from then on we started to use Google Colab with a GPU and 25GB of RAM. Still, there were several times when Colab took the GPU from us and didn't give it back for several hours, as we were constantly using their GPU.</p> <p>Thanks to my teammates <a href="https://www.kaggle.com/shhrkre" target="_blank">@shhrkre</a>, <a href="https://www.kaggle.com/ysgong" target="_blank">@ysgong</a>, <a href="https://www.kaggle.com/sanmaruum" target="_blank">@sanmaruum</a>, <a href="https://www.kaggle.com/kuraji" target="_blank">@kuraji</a>.</p>
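The two-out-of-three gating rule in the second ensemble can be sketched as below. The write-up does not state how "likely to answer correctly" was decided, so the hi/lo cutoffs here are assumptions for illustration:

```python
import numpy as np

def gated_ensemble(p_tf, p_lgb1, p_lgb2,
                   weights=(0.4, 0.45, 0.15), hi=0.7, lo=0.3):
    """If 2+ models call the answer likely correct, take the max prediction;
    if 2+ call it likely wrong, take the min; otherwise blend with weights."""
    preds = np.stack([p_tf, p_lgb1, p_lgb2])          # shape (3, n)
    n_hi = (preds > hi).sum(axis=0)                   # models voting "correct"
    n_lo = (preds < lo).sum(axis=0)                   # models voting "wrong"
    blended = np.average(preds, axis=0, weights=weights)
    return np.where(n_hi >= 2, preds.max(axis=0),
           np.where(n_lo >= 2, preds.min(axis=0), blended))

out = gated_ensemble(np.array([0.8, 0.2, 0.5]),
                     np.array([0.9, 0.1, 0.5]),
                     np.array([0.4, 0.6, 0.5]))
# -> [0.9, 0.1, 0.5]
```

Pushing confident agreements toward the extremes is a way to sharpen predictions for AUC while keeping the weighted blend for ambiguous cases.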
Riiid Answer Correctness Prediction
58th solution: SAINT+ based predicts user_answer and then answered_correctly
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi folks!</p> <p>My solution has something different from what I've seen until now. I have two outputs in a serial fashion:</p> <ul> <li>First the user answer prediction (A, B, C, D).</li> <li>Then, inside the NN graph I use the correct answer id to gather the correctness probability.</li> </ul> <p>This way I have two different backwards signals that helped me to grow a little bit in LB.</p> <p>Other things to notice:</p> <ul> <li>Log-normalization applied to time features (they have a pareto-like distribution). At the end of the competition I realized I didn't <em>need</em> to clip these features with such normalization.</li> <li>All tags usage.</li> <li>Continuous task_container_id (adds user expertise information).</li> <li>I add a mask to make sure sequence elements in the same container don't attend to each other.</li> <li>Pad tokens and lectures are masked in the loss functions.</li> </ul> <p>The rule I used to choose whether a feature should go on encoder or decoder is: static information on encoder, variable information on decoder.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1820636%2F71579dff863e98b9cdec6a2d0dfd48ee%2FUntitled%20Diagram%20(1).png?generation=1610226098358123&amp;alt=media" alt="My solution"></p>
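The serial head described above, predicting the user's answer first and then reading off the probability mass on the correct choice, reduces at inference to a softmax plus a gather. A minimal numpy sketch, not the author's exact graph:

```python
import numpy as np

def correctness_prob(answer_logits, correct_ids):
    """answer_logits: (n, 4) scores over choices A-D; correct_ids: (n,) indices.
    Returns P(user picks the correct choice) for each question."""
    shifted = answer_logits - answer_logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)            # softmax over the 4 choices
    return probs[np.arange(len(probs)), correct_ids]     # gather the correct choice

logits = np.array([[2.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0, 0.0]])
p = correctness_prob(logits, np.array([0, 3]))  # -> [~0.711, 0.25]
```

During training, both signals can then be supervised: a four-way cross-entropy on the answer head and a binary loss on the gathered correctness probability, which is the "two different backwards signals" idea.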
Riiid Answer Correctness Prediction
#8 solution: Ensemble of 15 same NN models
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,<br> Learned a lot from others' solutions! In this post, I would like to share some insights from my solution.</p> <p>In summary, my best single NN model could achieve a 813/815 public/private score; by ensembling 5 folds and 3 snapshots in each fold, the final 15 NN models achieve a 814/816 public/private score.</p> <p>With 15 NN models, the online inference costs <strong>less than 4 hours</strong>, thus this pipeline can ensemble at least 30 models.</p> <h2>Dataset split</h2> <p>I split the train and valid sets in several steps:</p> <ol> <li>Calculate all unique users in the dataset.</li> <li>Put 5% of users entirely in the valid set, and 45% of users entirely in the train set.</li> <li>For the remaining 50% of users, randomly split each user's data into train and valid parts by time.</li> </ol> <p>Thus we have users that appear only in the train or valid set, and also many users who appear in both.
This split can achieve less than 0.001 score difference compared with the LB.</p> <p>By changing the random seed, we can get different folds.</p> <h2>Feature engineering</h2> <p>Since there is a lot of detailed FE in others' posts, I will just share some important points.</p> <h3>Basic features</h3> <ol> <li><p>Evaluate user ability</p> <ul> <li>We can evaluate user ability by his history of actions, including the correctness, time elapsed, lag time, and so on.</li> <li>These features can be calculated not only on the content level, but also on the same part, same tags, same content and so on.</li> <li>These features can also be extended based on time; for example, we can make a feature which is the user correctness in the last 60 seconds.</li></ul></li> <li><p>Evaluate content difficulty</p> <ul> <li>We can evaluate content difficulty by calculating its global accuracy, std, average time elapsed and so on.</li></ul></li> <li><p>Evaluate User x Content features</p> <p>Even if a content is difficult, the user may still be skilled enough to solve it correctly. Thus we also need to describe how the user could perform on this content.</p> <ul> <li><p>user_acc_diff: For a content, if the content has low global acc, but the user answered it correctly, the user might be above the average of all users. We can use the $logloss(content_global_average_acc, user_answer)$ to evaluate the difference.</p></li> <li><p>user_elapsed_diff: like the user_acc_diff, we can also evaluate the user's time elapsed difference on the user's history contents.</p></li></ul></li> </ol> <p>We can dig many features out of the above three fields; for example, the user's average lag time on history content / history same-part content / history same-tag content could also be useful features.</p> <h3>Other features/tricks</h3> <ol> <li>Time related features: last_content_timestamp_diff, last_lag_time and their statistical information in history.</li> <li>Abnormal Users: If a user answered every content in less than 4 seconds, and all his chosen answers are the same (such as C), then if the correct answer of the next content is C, we can believe he will correctly answer the next content.</li> <li>Learned lectures for a specific content: If there is a content-lecture-content pattern in the user history, and the two contents are the same content, it might be that the user learned a specific lecture for this content, which means that the second time, the probability he answers it correctly is high.</li> <li>Wrong answer ratio: There might be a pattern like "select C as default for hard contents". Thus calculating the ratio of the user's choices on his incorrect contents can tell us whether the user could luckily guess the current content even if he doesn't know the correct answer.</li> </ol> <p>Also, features such as current content id/current timestamp/current part are also used. Finally, I get 120-dim features, which can get public 0.806 with a single lgb model in a single fold.</p> <p>P.S. The categorical feature (set on content id) of lightgbm can boost my score about 0.003.</p> <p>There are also many useful hints which could improve the speed and save memory:</p> <ol> <li>Do not use pandas to calculate features; transfer it to numpy or just python.</li> <li>Use a class to store user information, and save each user's information in an individual file.</li> <li>In the inference phase, we only need to read the user information for those who appear in the test set; the total number of users in the test set is less than 10,000. This is the key to reducing memory.
If memory is still not enough, an LRU cache could be used to remove unused users.</li> <li>In my experiment, using numpy arrays to store information costs more disk space compared with python variables.</li> </ol> <h2>NN</h2> <p>Since it was very late when I noticed that the key to the top is the NN model, I didn't have much time to analyze the NN models, especially to design specific features or structures. To save time, I use the lightgbm features as the NN input (a time axis is added and the seq len is 128).</p> <h4>Robust Standard Normalization</h4> <p>There are many outliers in the features from lightgbm, thus simply applying standard normalization can hardly get the desired results. I utilized robust standard normalization to normalize all the features:</p> <pre><code>def robust_normalization(column):
    cur_mean = np.nanmedian(column)
    cur_qmin, cur_qmax = np.nanpercentile(column, [2.5, 97.5])
    cur_std = np.nanstd(column[(column&gt;=cur_qmin) &amp; (column&lt;=cur_qmax)])
    column = np.clip(column, a_min=cur_qmin, a_max=cur_qmax)
    column = (column-cur_mean)/cur_std
    return column
</code></pre> <h4>Models</h4> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1458993%2F4ae543902ffae52e87dfaf0484477c7d%2FWeChat%20Image_20210111165953.png?generation=1610355676642083&amp;alt=media" alt=""></p> <p>My NN model is very simple, just a transformer encoder with an fc classifier. The encoder has 4 transformer layers and the embed dim is 128, with no other modifications. The size of the model file is only 7M; that's why I can ensemble many models in the inference phase.</p> <p>The model can achieve 813/815 as a single model/single fold, and 814/816 with a 5-fold ensemble. There was much time left in inference, so I use a 3-snapshot ensemble on each fold, which can also boost the score. </p>
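The user_acc_diff idea above, the log loss of the user's response against the content's global accuracy, might look like this sketch (the mean aggregation over the user's history is an assumption; the write-up only names the per-response log loss):

```python
import numpy as np

def user_acc_diff(content_global_acc, user_correct):
    """Log loss of each response vs. the content's global correct rate.
    Answering a hard question (low global acc) correctly yields a large value,
    hinting that the user is above the average of all users."""
    p = np.clip(np.asarray(content_global_acc, dtype=float), 1e-6, 1 - 1e-6)
    y = np.asarray(user_correct, dtype=float)
    per_row = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return per_row.mean()  # assumed aggregation over the user's history

easy_right = user_acc_diff([0.9], [1])  # small: the crowd also gets this right
hard_right = user_acc_diff([0.2], [1])  # large: the user beat the crowd
```

Because it compares each outcome against the crowd expectation rather than raw accuracy, this feature separates strong users from users who merely saw easy questions.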
Riiid Answer Correctness Prediction
52nd place brief write up [priv 0.802 / GBDTs]
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, I would like to thank Riiid Labs, partners and Kaggle for hosting this interesting competition. And congratulations to all the participants and especially the winners!</p> <p>This was such an interesting and well-designed competition, and I enjoyed it for three months. I share a summary of my solution, hoping that this will be of some help to other participants.</p> <h2>Final submission summary</h2> <ul> <li>Weighted averaging of <code>LightGBMx2 + CatBoostx1</code><ul> <li>public: 51st (0.801)</li> <li>private: 52nd (0.802)</li></ul></li> <li>We use about 130 features</li> <li>We were trying to make a model using a NN (transformer) in the last few weeks, but we couldn't complete it.</li> </ul> <h2>Key Ideas</h2> <h3>Features of the badness of a user_answer</h3> <ul> <li>+0.006, by far the brightest of the features we created😇.</li> <li>The aim of creating the feature<ul> <li>A person who chooses a bad answer that most people don't choose is probably bad</li> <li>People who can do well will not choose bad choices even if they are wrong</li> <li>This idea is probably correct, and the std aggregation below was very effective.</li></ul></li> <li>How to make it<ul> <li>Use all of train to calculate how many of each answer are selected for each question</li> <li>e.g.
for content_id=XXXX<ul> <li>Answer 1: chosen by 9% of users</li> <li>Answer 2: chosen by 5% of users</li> <li>Answer 3: chosen by 1% of users</li> <li>Answer 4: chosen by 85% of users</li></ul></li> <li>Calculate the percentile for each choice by accumulating the percentage of choices for each question</li> <li>Example: In the example above<ul> <li>Answer 1: 15% (=1+5+9)</li> <li>Answer 2: 6% (=1+5)</li> <li>Answer 3: 1% (=1)</li> <li>Answer 4: 100% (=1+5+9+85)</li></ul></li> <li>Aggregation (std, avg, min, etc.) of the percentiles of the last N answers for each user.</li></ul></li> </ul> <h3>A little devised Word2Vec (by teammate)</h3> <ul> <li>This looks a bit leaky, but it worked for about +0.004, so it was very effective too.</li> <li>The aim of creating the feature<ul> <li>Vectorizing users based on their past history of correct and incorrect answers, and calculating their approximation to the next question's correct or incorrect answer.</li></ul></li> <li>How to make it<ul> <li>For each user, line up <code>Question_(correct answer|incorrect answer)</code> (last N questions)</li> <li>Vectorize <code>Question_(correct answer|incorrect answer)</code> by word2vec</li> <li>Vectorize the user by averaging the vectors of the <code>Question_(correct answer|incorrect answer)</code> of the user's last N questions</li> <li>Calculate cosine similarity between the user's vector and the next question's <code>Question_(correct answer)</code> <code>Question_(incorrect answer)</code>.</li></ul></li> </ul> <h3>Trueskill (by teammate)</h3> <ul> <li>importance was always at the top, but +0.001<ul> <li>may have been conflicting with other features because we created this feature in the last stage</li></ul></li> <li>How to make it<ul> <li>Using <a href="https://trueskill.org/" target="_blank">trueskill</a> to score the strength of each question and each user.</li> <li>From there, calculate the probability that the user will win (answer correctly) the question.</li></ul></li> </ul> <h3>Weighted counting of correct
answers (by teammate)</h3> <ul> <li>Weight is the reciprocal of the percentage of correct answers, and correct answers are counted with weight.</li> </ul> <h2>Other Features</h2> <p>The following is an excerpt from what worked for us.</p> <ul> <li>TargetEncodings<ul> <li>percentage of questions answered correctly</li> <li>percentage of users who answered the question correctly</li> <li>and many more.</li></ul></li> <li>User logs<ul> <li>percentage of correct answers in the last 400 questions</li> <li>percentage of correct answers in the same part of the last 400 questions</li> <li>and many more.</li></ul></li> <li>Lag features of timestamp<ul> <li>The time elapsed since the last time the user solved the same question.</li> <li>Time elapsed since the previous question.</li></ul></li> <li>Features processed from timestamp<ul> <li>Things that use elapsed time</li> <li>elapsed time since the previous question / (all users') average time taken for that task_container</li> <li>elapsed time since the previous question / (all users') average time for that task_container (only correct answers are counted)</li> <li>I thought the timestamp was the <code>timestamp of answering the question</code> since these were working.</li> <li>the lag time similar to SAINT's paper.</li> <li>time elapsed since previous question - time spent on previous question</li></ul></li> <li>Simple Word2Vec<ul> <li>Line up questions for each user -&gt; vectorize questions by word2vec</li> <li>We've made a lot of these, by part, only correct questions, only wrong questions, various windows, etc.</li> <li>The more patterns we added, the higher our score became.</li></ul></li> </ul> <h2>What did not work</h2> <p>There are a lot, but I'm going to write down a few where I wondered why they didn't work.</p> <ul> <li>Tags<ul> <li>I was able to categorize almost all tags from the distribution of the parts where they appear.</li> <li>I found the grammatical tags, intonation tags, etc.</li> <li>I tried various ways to
put it into the model, but it didn't work at all.</li> <li>I tried target encoding and counting the number of correct answers of the user's past tags.</li></ul></li> <li>Lecture<ul> <li>I tried simple counting, but it did not work at all.</li> <li>I gave up on using it as soon as I saw that no one seemed to be making use of it in discussions.</li></ul></li> </ul>
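The percentile construction for the badness-of-answer feature can be sketched as follows; it reproduces the worked example from the write-up (answers chosen by 9%, 5%, 1% and 85% of users):

```python
import numpy as np

def answer_percentiles(choice_shares):
    """choice_shares: fraction of users picking each option of one question.
    Each option's percentile is the cumulative share of options chosen at
    most as often as it, so rare ('bad') answers get small percentiles."""
    shares = np.asarray(choice_shares, dtype=float)
    order = np.argsort(shares)             # rarest option first
    pct = np.empty_like(shares)
    pct[order] = np.cumsum(shares[order])  # accumulate up the popularity order
    return pct

pct = answer_percentiles([0.09, 0.05, 0.01, 0.85])
# -> [0.15, 0.06, 0.01, 1.0]
```

The user feature is then an aggregation (std, avg, min, etc.) of the percentiles of the user's last N chosen answers; a low std with low values flags users who consistently pick answers almost nobody picks.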
Riiid Answer Correctness Prediction
9th place solution : 6 Transformers + 2 LightGBMs
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>First of all, thanks to the hosts for a great competition! This was one of the toughest competitions I have ever entered, but well worth the effort.</p> <p>The prediction of our team (tito <a href="https://www.kaggle.com/its7171" target="_blank">@its7171</a> and nyanp <a href="https://www.kaggle.com/nyanpn" target="_blank">@nyanpn</a>) consists of the following models.</p> <ul> <li>tito's transformer: LB 0.813</li> <li>nyanp's SAINT+ transformer: LB 0.808</li> <li>nyanp's LightGBM: LB 0.806</li> </ul> <p>Simple blending of these models scored LB 0.814 / private 0.816.</p> <h2>Pipeline</h2> <p>We converted the entire train.csv data into hdf5, and loaded only the user_ids that appeared during inference into a np array (97% RAM savings compared to holding the entire training data in RAM). We estimate that the overhead due to I/O in hdf is ~45 minutes.
This overhead allowed us to combine tito's large transformer with nyanp's feature engineering pipeline.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1532198%2F21ed1bbf93a37f6219fb7dc369761f0e%2Friiid%20pipeline.png?generation=1610287506256559&amp;alt=media" alt=""></p> <h2>Transformer (tito, LB 0.814)</h2> <p>This is a transformer model with an encoder only, based on <a href="https://www.kaggle.com/claverru" target="_blank">@claverru</a>'s <a href="https://www.kaggle.com/claverru/demystifying-transformers-let-s-make-it-public" target="_blank">nice kernel</a></p> <h3>summary</h3> <ul> <li>only trained and predicted answered_correctly for the last question of the sequence.</li> <li>all features are concatenated (only position encoding is added).</li> <li>used lectures, in timestamp order, as-is</li> <li>Window size 300-600</li> <li>batch size 1000</li> <li>drop_out 0</li> <li>n_encoder_layers 3-5</li> <li>augmentation to replace content_ids with dummy ids at a certain rate</li> <li>kept only one question in the last task to avoid leaks</li> </ul> <h3>features</h3> <p>Whether each feature was embedded or dense was decided by CV.</p> <ul> <li>(embedded) content id</li> <li>(embedded) part id</li> <li>(embedded) same task question size</li> <li>(dense) answered_correctly</li> <li>(dense) had_explanation</li> <li>(dense) elapsed time</li> <li>(dense) lag time</li> <li>(dense) diff of timestamp from the last question</li> </ul> <h3>combining models</h3> <p>To avoid the overhead of calling model.predict() multiple times for the ensemble, I made a combined model that links four models.</p> <pre><code>inputs = tf.keras.Input(shape=(input_shape, n_features))
out1 = model1(inputs[:,-window_size1:,:])
out2 = model2(inputs[:,-window_size2:,:])
out3 = model3(inputs[:,-window_size3:,:])
out4 = model4(inputs[:,-window_size4:,:])
combo_model = tf.keras.Model(inputs, [out1,out2,out3,out4])
</code></pre> <h2>SAINT+ (nyanp, LB 0.808)</h2> <ul>
<li>d_model = 256</li> <li>window_size = 200</li> <li>n_layers = 3</li> <li>attention dropout = 0.03</li> <li>question, part, lag are embedded into the encoder</li> <li>response, elapsed time, has_explanation are embedded into the decoder</li> </ul> <p>To prevent leakage, in addition to the upper triangular attention mask, I masked the loss on questions other than the beginning of each task_container_id. Questions with the same task_container_id were shuffled in each batch during training, and the loss weights were adjusted to reduce the effect of the masks. This mask improved LB by 0.0003.</p> <p>(Note: I believe that indirect leaks still exist, but I've spent 80% of my time on the LightGBM implementation and data pipeline, so I couldn't improve it any further)</p> <p>Other than that, there is nothing special about this NN. It scored .806 single, .808 by averaging 2 models.</p> <h2>LightGBM (nyanp, LB 0.806)</h2> <p>The LightGBM models are trained on 264 features. To speed up inference, I fixed the number of trees to 3000 and ensembled two models with different seeds (this is better than a single large LGBM in terms of both speed and accuracy). By compiling this model with <a href="https://github.com/dmlc/treelite" target="_blank">treelite</a>, inference time became 3x faster (~10ms/batch, ~10min in total).</p> <h3>features</h3> <p>I mapped the prior_question_* rows to their respective rows with the following code and utilized them in some features.</p> <pre><code>df['elapsed_time'] = df.groupby('user_id')['prior_question_elapsed_time'].shift(-1)
df['elapsed_time'] = df.groupby(['user_id', 'timestamp'])['elapsed_time'].transform('last')
df['has_explanation'] = df.groupby('user_id')['prior_question_had_explanation'].shift(-1)
df['has_explanation'] = df.groupby(['user_id', 'timestamp'])['has_explanation'].transform('last')
</code></pre> <p>Here is the list of my features. There is no magic here; no single feature boosted CV more than 0.0002.
I repeated feature engineering based on well-known techniques and a little bit of domain knowledge.</p> <h4>question features</h4> <ul> <li>count encoding</li> <li>target encoding</li> <li>number of tags</li> <li>one-hot encoding of tag (top-10 frequent tags)</li> <li>SVD, LDA, item2vec using user_id x content_id matrix</li> <li>LDA, item2vec using user_id x content_id matrix (filtered by answered_correctly == 0)<ul> <li>A typical word2vec model is trained on a next-word prediction task. By constructing a word2vec model over incorrectly answered questions, <br> the latent vectors extracted from the model can be used to capture which incorrect questions are likely to co-occur with each other.</li></ul></li> <li>10%, 20%, 50%, 80% elapsed time of all users' responses, correct responses, wrong responses</li> <li>SAINT embedding vector + PCA</li> </ul> <h4>user features</h4> <ul> <li>avg/median/max/std elapsed_time</li> <li>avg/median/max/std elapsed_time by part</li> <li>avg has_explanation flag</li> <li>avg has_explanation flag by part</li> <li>nunique of question, part, lecture</li> <li>cumcount / timestamp</li> <li>avg answered_correctly within recent 10/30/100/300 questions, recent 10min/7 days</li> <li>avg answered_correctly by part, question, bundle, order of response, question difficulty, question difficulty x part<ul> <li>question difficulty: discretize the avg answered_correctly of all users for each question into 10 levels</li></ul></li> <li>cumcount by part, question, bundle, question difficulty, question difficulty x part</li> <li>lag from 1/2/3/4 steps before</li> <li>correctness, lag, elapsed_time, has_explanation of the same question last time</li> <li>tag-level aggregation features<ul> <li>calculate tag-level features for each user x tag, then aggregate them by min/avg/max</li> <li>cumcount of wrong answers, cumcount of correct answers, avg target, lag</li></ul></li> <li>(estimated elapsed time) - (avg/median/min/max elapsed time within same part)<ul> <li>estimated
elapsed time = (timestamp - prev timestamp) / (# of questions within the bundle)</li></ul></li> <li>(estimated elapsed time) - (10%, 20%, 50%, 80% elapsed time of all users correct/wrong response)<ul> <li>Because TOEIC part1-4 questions are usually answered after listening to the conversation, the correct answers tend to be concentrated immediately after the conversation ends</li></ul></li> <li>task_container_id - previous task_container_id<ul> <li>I'm not sure why this worked. There might be a difference in the correct rate if people answered from different devices than usual (multi-user?).</li></ul></li> <li>part of last lecture</li> <li>lag from last lecture</li> <li>whether the question contains the same tag as the last lecture</li> <li>whether the part is the same as the previous question</li> <li>median lag - median elapsed_time</li> <li>part of the first problem the user solved</li> <li>lag - median lag</li> <li>inner-product of user-{correct|incorrect}-question-vector and question-vector<ul> <li>user-correct-question-vector: average of LDA vectors for each question that the user answered correctly.</li></ul></li> <li>rank of lag compared to the same user's past lag (filtered by answered_correctly == 0, 1 respectively)</li> </ul> <h3>feature calculation</h3> <p>Instead of updating the user dictionary, I calculate user features from scratch for each bacth.</p> <p>The numpy array of historical data was loaded from the hdf storage and then split into questions and lectures, which were then wrapped in a pandas-like API and passed to their respective feature functions.</p> <pre><code>@feature('lag1.user_lecture') def lag1_user_lecture(df: RiiidData, pool: DataPool) -&gt; np.ndarray: """ time elapsed from last lectures for each user :param df: data in current batch :param pool: cached data storage """ lag1 = {} for u, t in set(zip(df.questions['user_id'], df.questions['timestamp'])): past_lect = pool.users[u].lectures # history of lecture if len(past_lect) &gt; 0: 
lag1[u] = t - past_lect['timestamp'][-1] return np.array(list(map(lambda x: lag1.get(x, np.nan), df.questions['user_id'])), dtype=np.float32) </code></pre> <p>For training, I use the same functions as inference time. This way, there were almost no restrictions on feature creation, and I did not have to worry about bugs of train-test difference.</p> <p>The feature functions were frequently benchmarked by a dedicated benchmark script, and functions with high overhead were optimized by various ways (numba, bottleneck and various algorithm improvement) .</p> <p>The question features were precomputed and made into a global numpy array of shape (13523, *) and merged into the feature data using fancy index.</p> <pre><code>@feature(tuple(f'question_item2vec_{i}' for i in range(20))) def question_item2vec(df: RiiidData, _: DataPool): """ question embedding vector using item2vec """ qid = df.questions['content_id'] ret = QUESTION_ITEM2VEC[qid, :] # mere fancy index, faster than pd.merge return ret </code></pre> <h2>Other ideas</h2> <ul> <li>TTA on SAINT+ by np.roll (it did improve SAINT+ by 0.001+, but we couldn't include it because of submission timeout)</li> <li>linear blending based on public LB labels (timeout, too)</li> </ul> <h2>Feedback on Time-Series API Competition</h2> <p>Although the competition was well-designed, our team still found that the time-series API allowed us to obtain private score information with 1-bit probing even after the known vulnerability was fixed.</p> <pre><code>env = riiideducation.make_env() iter_test = env.iter_test() ground_truth = [] prediction = [] threshold = 0.810 for idx, (test_df, sample_prediction_df) in enumerate(iter_test): ground_truth.extend(list(test_df['answered_correctly'].values)) predicted = model.predict(...) 
    prediction.extend(list(predicted))

    if len(prediction) &gt;= 2500000:
        # roc_auc_score expects (y_true, y_score), in that order
        private_auc = roc_auc_score(ground_truth[500000:], prediction[500000:])
        if private_auc &lt; threshold:
            raise RuntimeError()  # the private AUC of this submission is guaranteed to exceed .810 if the submission succeeds
</code></pre> <p>By using this probing, we can select the final submission with the highest private score, or determine the best<br> ensemble weights for the private score by using the hill-climbing method.</p> <p>We contacted the Kaggle Team and asked them whether this is legal, and they said it was a "low signal matter". We still think this is a gray area and decided not to use this probing.</p> <p>We think it wasn't critical in this competition, but if there is a future competition with the same format but without an AUC metric, the hack with the "magic coefficient" could improve the ranking significantly. If a competition with the same format is held on Kaggle in the future, we suggest that the Kaggle team fix this problem (e.g. put dummy values in the private labels during the competition).</p>
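<p>Circling back to the feature list at the top of this write-up, the bundle-based estimate "elapsed time = (timestamp - prev timestamp) / (# of questions within the bundle)" can be sketched as follows. The column names match the competition schema, but the toy data and helper code are illustrative, not the author's actual pipeline:</p>

```python
import numpy as np
import pandas as pd

# Toy interaction log for a single user: three bundles (task containers),
# the second one holding two questions answered at the same timestamp.
df = pd.DataFrame({
    "user_id":           [1, 1, 1, 1],
    "task_container_id": [0, 1, 1, 2],
    "timestamp":         [0, 30_000, 30_000, 90_000],  # milliseconds
})

# Collapse to one row per (user, bundle) so the diff is between bundles.
bundles = (df.groupby(["user_id", "task_container_id"], sort=False)
             .agg(timestamp=("timestamp", "first"),
                  n_questions=("timestamp", "size"))
             .reset_index())

# estimated elapsed time = (timestamp - prev timestamp) / (# questions in the bundle)
bundles["est_elapsed"] = (
    bundles.groupby("user_id")["timestamp"].diff() / bundles["n_questions"]
)
print(bundles[["task_container_id", "est_elapsed"]])
```

<p>The first bundle has no predecessor, so its estimate is NaN; the two-question bundle splits its 30-second gap evenly across its questions (15,000 ms each).</p>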
Riiid Answer Correctness Prediction
158th Solution LGBM + FTRL, Private LB: 0.793
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><h2>Huge thanks to:</h2> <p>First and foremost I would like to thank the coordinators of RIIID for conducting this awesome competition, and all the Kaggle staff who kept constant vigilance for bugs and issues. Despite data leaks and other such issues, the Kaggle team handled them promptly and efficiently. The competition API was one of the most frustrating and yet necessary aspects of a competition such as this one. </p> <p>The list of people I would like to thank is long; a few notable mentions (but by no means all) are:</p> <ol> <li><a href="https://www.kaggle.com/shoheiazuma/riiid-lgbm-starter:" target="_blank">https://www.kaggle.com/shoheiazuma/riiid-lgbm-starter:</a> The dataset was huge and intimidating; <a href="https://www.kaggle.com/shoheiazuma" target="_blank">@shoheiazuma</a>'s notebook gave me the idea to train on just the last 24 records. I improvised on this idea a lot. Using the final 30M records for training felt like I was missing out on some users. Training on the final 250 records of each user gave me results comparable to training on the entire dataset. </li> <li><a href="https://www.kaggle.com/vopani" target="_blank">@vopani</a>'s classic notebooks: <a href="https://www.kaggle.com/rohanrao/tutorial-on-reading-large-datasets" target="_blank">this</a> and <a href="https://www.kaggle.com/rohanrao/riiid-ftrl-ftw" target="_blank">this</a>: These tutorials introduced me to my now-favorite feather format. They also introduced me to DataTable FTRL and Dask.
</li> <li><a href="https://www.kaggle.com/adityaecdrid" target="_blank">@adityaecdrid</a>'s discussions on repeated user questions and his idea to use bit arrays. He contributed a ton of ideas and intrigued me to come up with cleverer features. </li> <li><a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196009:" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196009:</a> Ditch pandas! All my user state updates were done in a loop. Easier on the pipeline and easier for my brain too :)</li> <li><a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196942:" target="_blank">https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/196942:</a> A Kaggle GM, <a href="https://www.kaggle.com/silogram" target="_blank">@silogram</a>, using LGBM to hit high ranks inspired me not to give up so soon when my score got stuck at local optima.</li> <li><a href="https://www.kaggle.com/anuragtr" target="_blank">@anuragtr</a>'s discussion <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/198245" target="_blank">here</a> on how to prevent RAM spikes when starting LGB training.</li> </ol> <h2>The Model Idea and difficulties faced:</h2> <p>So what did <em>I</em> do? My model idea is quite simple: <strong>8 bagged LGBM models fed into an FTRL model</strong>. The Kaggle CPU had a memory restriction of 16GB. Even concatenating all the features together threw that horrible OOM error. I created features piece by piece and concatenated them into three different chunks to create the training data. </p> <p>Now we have 100M rows containing 38 features. Training an LGB on data this large would still throw an OOM. So I decided to do bagging (which would still throw an OOM error if you didn't slowly convert the data into float numpy arrays yourself using loops).
Now, bagging is not an option you'd go for if you wish to make more accurate predictions, but a discussion mentioned that training on a larger dataset gave a score boost of ~0.01, and so I did. </p> <p>Training on the first chunk (25M) gave a CV of 0.791. After 7 such chunks, I got a CV of 0.792. Frustrated by seeing my LB rank drop, and to improve the predictive power a bit, I decided to add another model to the mix: an FTRL. </p> <p>I was also motivated to use FTRL for two reasons: </p> <ol> <li>I only had two days left before the competition ended &amp; FTRL was super fast!</li> <li>An FTRL model is best suited for applications where the predictions need to be dynamic and evolving. </li> </ol> <p>FTRL by itself didn't perform very well on my CVs. So I decided to feed my LGB model predictions to it to help it make its predictions. If the FTRL sees the LGB making false predictions during inference, it can improve or alter its own predictions accordingly. Doing this improved my LB from 0.792 to 0.793. I have no prior experience with FTRL models; perhaps with better features FTRL might have worked even better.</p> <h2>Some of my personal takeaways:</h2> <ul> <li><em>Don't shy away from newer, unfamiliar concepts!</em> With such a huge time frame given, I could have learned and implemented a transformer model. Instead I chose to stick with what I knew best: brute-forcing my way with hand-coded features. Still, this approach taught me how to make the most out of the limited resources given.</li> <li><em>Numba is awesome!</em> Tons of code speed-up, all with a simple decorator! Numba, although fast, may throw an OOM if used blindly. My notebook (soon to be published) demonstrates how I leveraged this awesome library.</li> <li><em>LB ranks don't matter!</em> To be honest, it was painful to see my LB rank drop from 60 all the way down to 158. But in the end, I managed to accept it and even laugh about it!
See <a href="https://www.kaggle.com/c/riiid-test-answer-prediction/discussion/208356" target="_blank">this</a> hilarious meme thread.</li> </ul> <hr> <p>I enjoyed participating in this competition. It gave me so much more than I imagined it would. To all the winners and participants of this competition: <em>My heartiest congratulations and a Happy new year!</em></p> <hr>
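<p>To make the "LGB predictions fed into an FTRL" idea above concrete, here is a minimal, hypothetical sketch: a tiny FTRL-proximal logistic learner (the classic McMahan et al. formulation) that takes the bagged-LGB probability as one of its input features and keeps updating online as true labels stream in. This is an illustrative stand-in, not the author's actual DataTable FTRL setup; the synthetic data generator and feature layout are assumptions.</p>

```python
import numpy as np

class FTRLProximal:
    """Minimal dense FTRL-proximal logistic regression (McMahan et al., 2013)."""

    def __init__(self, n_features, alpha=0.1, beta=1.0, l1=0.0, l2=1.0):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = np.zeros(n_features)  # per-coordinate accumulated gradients (adjusted)
        self.n = np.zeros(n_features)  # per-coordinate sum of squared gradients

    def _weights(self):
        # Closed-form weights with L1 sparsity threshold.
        w = np.zeros_like(self.z)
        active = np.abs(self.z) > self.l1
        w[active] = -(self.z[active] - np.sign(self.z[active]) * self.l1) / (
            (self.beta + np.sqrt(self.n[active])) / self.alpha + self.l2
        )
        return w

    def predict(self, x):
        return 1.0 / (1.0 + np.exp(-np.dot(self._weights(), x)))

    def update(self, x, y):
        p = self.predict(x)
        g = (p - y) * x  # gradient of log-loss w.r.t. weights
        sigma = (np.sqrt(self.n + g * g) - np.sqrt(self.n)) / self.alpha
        self.z += g - sigma * self._weights()
        self.n += g * g

# Stacking loop: the (hypothetical) bagged-LGB probability is simply one more
# input feature; the FTRL keeps learning online as true labels arrive.
rng = np.random.default_rng(0)
model = FTRLProximal(n_features=3)
for _ in range(2000):
    skill = rng.normal()
    lgb_pred = 1 / (1 + np.exp(-skill + rng.normal(scale=0.5)))  # stand-in for LGB output
    y = float(rng.random() < 1 / (1 + np.exp(-skill)))           # true label
    x = np.array([1.0, lgb_pred, rng.normal()])  # bias, LGB prediction, noise feature
    model.update(x, y)
```

<p>Because FTRL updates per interaction, it can keep adapting during the inference loop, which is exactly why it pairs well with a frozen LGB model whose predictions it can learn to trust or correct.</p>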
Riiid Answer Correctness Prediction
16th Place - Single Model
Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Riiid Answer Correctness Prediction <|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge <|assistant|><p>Hi all,</p> <p>Here is an insight into our 2 solutions, which both scored public 0.812/private 0.815 and reached 16th (gold) place.</p> <p>This competition was about both ML and engineering optimization, to make everything work in 9h with 13GB RAM/16GB GPU. We spent almost 30% of our time on optimization, to keep in memory the last 512 interactions per user, the per-content attempt counts, and everything else required for our features.</p> <p>We would like to thank Kaggle and the RIIID organizers for this great competition! Congratulations to the top teams and all competitors for their motivation throughout the challenge.<br> I would like to thank my teammates <a href="https://www.kaggle.com/rafiko1" target="_blank">@rafiko1</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> and <a href="https://www.kaggle.com/matthiasanderer" target="_blank">@matthiasanderer</a>. You've been amazing; I've learnt a lot from you.
I really enjoyed this competition.</p> <h2>Solution 1: Single transformer model</h2> <p>The SAINT+ model is described here <a href="https://arxiv.org/pdf/2010.12042.pdf" target="_blank">https://arxiv.org/pdf/2010.12042.pdf</a><br> The code for our SAINT+ adaptation is available here <a href="https://github.com/rafiko1/Riiid-sharing" target="_blank">https://github.com/rafiko1/Riiid-sharing</a>.</p> <p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fa47293277b9e989ab5c81c269e6187a8%2Fdoc_saint.png?generation=1610121546239700&amp;alt=media" alt=""></p> <p>Our single-model SAINT++ achieved CV: 0.812, Public LB: 0.812, Private LB: 0.815.</p> <p>We trained with 95% of users first, then fine-tuned with all data using the smart window technique (see the diagram below for SAKT). The model is simple in terms of features. It only contains the four features of SAINT+ (pictured above), with one additional feature - the number of attempts of a user for a specific content (hence SAINT++):</p> <ul> <li>Content id</li> <li>Lag time</li> <li>Prior question elapsed time</li> <li>Previous responses</li> <li>Number of attempts </li> </ul> <p>The greatest improvement in features compared to SAINT+ came from grouping lag time into seconds rather than minutes as done in the paper. <br> Then, we went bigger and bigger on the architecture and burned some GPU power 🔥. We increased the parameters of the model, most importantly the sequence length and the number of layers. The final parameters of the model are as follows: </p> <ol> <li>Input sequence length: 512</li> <li>Encoding layers: 4</li> <li>Decoding layers: 4</li> <li>Embedding size: 288</li> <li>Dense layer: 768</li> <li>Heads: 8</li> <li>Dropout: 0.20</li> </ol> <p>We used the Noam learning rate scheduler, with an initial warmup and a decrease down to 2e-5. </p> <p>The final improvement came from our <strong><em>recursive trick</em></strong> during inference.
Here, we rounded predictions that came from the same bundle to <strong><em>0 or 1</em></strong>, as their true responses are not yet known at that point in time. The rounded predictions are then fed back to the model to predict the next response within the same bundle. This trick boosts CV/LB by +0.0025, but requires a batch size of 1, so we couldn’t ensemble multiple transformer models.</p> <h3>Solution 2: Ensemble of transformer, modified SAKT and LGB</h3> <ul> <li><p>The LightGBM model scored CV=0.793, public LB=0.792 with 44 features. <br> Our main features: <br> Question correctness per content and per user, tags 1 and 2, part, elapsed time, had explanation, number of attempts, multiple lags, running average (answer), multiple rolling means/medians (answers, lags, elapsed time) + weighted mean, mean after/before 30 interactions, multiple momentums (lag, answers), per-part correctness, per-session (8-hour split) running average. Only 3 categories: part, tags1, tags2. Train/valid split from <a href="https://www.kaggle.com/its7171/cv-strategy" target="_blank">Tito</a>. </p></li> <li><p>A modified PyTorch SAKT scored CV=0.786, LB=0.789 with additional features, using a training procedure with a smart window. </p></li> <li><p>When a user’s sequence length is larger than the model input, i.e. N&gt;W, using random crops gives +0.002 CV versus tiled crops, and using smart random crops gives +0.003 CV versus tiled crops.
Basic random crops have a low probability of selecting the early or late questions from a user’s sequence, whereas smart random crops are equally likely to select any question from a user.<br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fa1239fda69769669432879b2b1a4ae39%2Fdoc_window.png?generation=1610121604322940&amp;alt=media" alt=""></p></li> <li><p>The TensorFlow transformer model alone scored CV=0.811, LB=0.811</p> <p>Same as Solution 1 but with sequence length = 256</p></li> </ul> <h3>What did not work:</h3> <ul> <li>TabNet</li> <li>Features with lectures for LGB. It worked on CV but not on LB (might be an issue in inference).</li> <li>Post-processing using the absolute position of a question, a.k.a. the question sequence number. Plotting mean(answered_correctly) vs question number looked like the image below. We can see that the first 30 questions have a different distribution compared with the rest. It also looks like there are subsequent batches of 30 questions (this becomes visible if we zoom in on the plot below). PP using that information worked in CV, improving it by around 0.0009, but didn’t work on LB. <br> <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fe4b8abca67c11258a438304b8fc67db5%2Fdoc_pp.png?generation=1610121577542081&amp;alt=media" alt=""></li> </ul> <h3>What worked partially:</h3> <p>But was not applicable for us within the 9h runtime limit:</p> <ul> <li>An ensemble of more than 3 models</li> <li>A level-2 model (XGB) could boost by +0.001</li> </ul> <h3>Lessons learnt:</h3> <ul> <li>Start your inference kernel as soon as possible when you need to deal with an API.</li> <li>Try to simulate the API locally to understand how data will be handled.
<a href="https://www.kaggle.com/its7171/time-series-api-iter-test-emulator" target="_blank">Tito</a>’s simulator was perfect for that purpose.</li> <li>Push your inference (with the simulator) to the limits to debug it; this will help you avoid the frustrating “submission scoring error”. </li> <li>Team up at some point; your teammates always have good ideas.</li> </ul> <p>One additional word to Kaggle (<a href="https://www.kaggle.com/sohier" target="_blank">@sohier</a>): I loved your API and the way it hides private data; it’s more realistic, as in real-world usage/production, and it avoided chaotic blending. Congratulations on that. However, even though I guess you want to prevent probing, you should find a way to provide better error feedback. If that is not possible (the more error codes, the more probing), then you need to provide a simulator and guidelines allowing competitors to troubleshoot locally.</p>
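<p>A closing technical note on Solution 1 above: the Noam learning-rate scheduler it mentions is, in its standard form, a linear warmup followed by an inverse-square-root decay. The sketch below uses the write-up's embedding size (288) and its 2e-5 floor, but the warmup length and scale are illustrative assumptions, not the team's actual settings.</p>

```python
def noam_lr(step: int, d_model: int = 288, warmup: int = 4000,
            scale: float = 1.0, min_lr: float = 2e-5) -> float:
    """Standard Noam schedule: linear warmup for `warmup` steps, then
    inverse-sqrt decay, floored here at min_lr (the "down to 2e-5" above)."""
    step = max(step, 1)
    lr = scale * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
    return max(lr, min_lr)
```

<p>The peak learning rate occurs exactly at the warmup step, after which the rate decays as 1/sqrt(step) until it hits the floor.</p>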