validation set for SSP and fine-tuning details

#1
by adrienchaton - opened

Hello,
Thanks for sharing this benchmark. These are interesting comparisons, and I would like to expand on them by also benchmarking codon LMs, which sit somewhere between genomic and proteomic LMs. I have two questions, please:

  • For the SSP task, I see no validation set. Comparing with your paper, it looks like you uploaded the train (6,224) and validation (1,556) sets as a single training data file (7,780). Would it be possible to share the splits you used for the paper, please?

  • Regarding the fine-tuning details, did you use MSE loss for the regression tasks? You also don't mention any learning rate decay (or any regularization). Did you keep the learning rate fixed at 0.003 and monitor the best R² or cross-entropy every epoch to select the checkpoint to evaluate?

Thanks!

InstaDeep Ltd org

Sorry for the delayed response—I wasn’t aware of the discussion feature on Hugging Face datasets until recently.

1. Secondary Structure Prediction (SSP) Splits

You’re absolutely right that the Hugging Face repository currently contains a single combined training file for the SSP task.

What happened is the following:

  • Earlier experiments used static train/validation splits (train = 6,224; validation = 1,556).

  • For the final version of the paper, we switched to a 5-fold cross-validation setup.

  • For each fold:

    • The original train and validation sets were recombined.
    • New 80/20 train/validation splits were resampled.
  • As a result, there are no fixed static split files corresponding exactly to the final reported results.

Recommendation:
To reproduce the SSP protocol as closely as possible:

  • Use the combined training file provided on Hugging Face.
  • Create your own 80/20 train/validation split (or a 5-fold CV scheme).

Unfortunately, we don’t have separate static split files to share for the final experiments.
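To make the recommended protocol concrete, here is a minimal sketch of resampling an 80/20 train/validation split for each of 5 folds from the combined pool. The exact shuffling mechanics (independent reshuffle per fold, seed handling) are assumptions for illustration, not the authors' exact procedure; `five_fold_splits` is a hypothetical helper name.

```python
import random

def five_fold_splits(n_examples, seed=0):
    """Resample an 80/20 train/validation split for each of 5 folds,
    drawing from the combined pool each time. The per-fold reshuffle
    is an assumption; the paper only states that splits were resampled."""
    rng = random.Random(seed)
    indices = list(range(n_examples))
    folds = []
    for _ in range(5):
        rng.shuffle(indices)
        cut = int(0.8 * n_examples)  # 80% train, 20% validation
        folds.append((sorted(indices[:cut]), sorted(indices[cut:])))
    return folds

# The combined SSP file has 7,780 examples, so each fold yields
# 6,224 train / 1,556 validation examples.
folds = five_fold_splits(7780)
```

Note that with 7,780 examples an 80/20 split reproduces exactly the earlier static sizes (6,224 / 1,556), so fold sizes stay comparable to the original setup.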

2. Fine-Tuning Details

Loss function

  • Classification tasks: cross-entropy loss (explicitly stated in the paper).
  • Regression tasks: MSE.
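For reference, the two losses in plain form (a self-contained sketch, not the training code; any deep learning framework's built-in `MSELoss` / `CrossEntropyLoss` computes the same quantities):

```python
import math

def mse_loss(preds, targets):
    """Mean squared error, used for the regression heads."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def cross_entropy_loss(probs, label):
    """Negative log-likelihood of the true class, used for the
    classification heads. `probs` is assumed already softmax-normalized."""
    return -math.log(probs[label])
```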

Checkpoint selection

  • Regression: highest validation R².
  • Classification: lowest validation cross-entropy.

Regularization

  • The primary form of regularization comes from IA³ fine-tuning.
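The regularizing effect comes from IA³ training only a small set of learned scaling vectors while the base weights stay frozen. A minimal sketch of the core operation (the element-wise rescaling of frozen activations); this illustrates the mechanism only, not the exact placement of the vectors in the model:

```python
def ia3_rescale(scale, activations):
    """IA³ inserts learned vectors that rescale frozen activations
    element-wise: h' = scale * h. Only `scale` is trained, and it is
    initialized to all-ones so fine-tuning starts from the
    pre-trained model's behaviour."""
    return [s * h for s, h in zip(scale, activations)]
```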

Learning rate & scheduling

  • Optimizer: Adam.
  • Learning rate: fixed at 0.003.
  • No learning rate decay or scheduler was used during fine-tuning (pre-training used warmup + square-root decay).
  • Models were evaluated at fixed intervals during training (not strictly once per epoch).
  • The best checkpoint according to the validation metric was used for final evaluation.
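The selection procedure above can be sketched as a simple loop: train with a fixed learning rate, evaluate at fixed step intervals, and keep the checkpoint with the best validation metric (R² maximized, cross-entropy minimized). The `train_step()` / `state_dict()` hooks on `model` are hypothetical placeholders, not an actual API from the paper's codebase:

```python
def finetune(model, train_steps, eval_every, evaluate, higher_is_better):
    """Train with a fixed learning rate (no scheduler), evaluating at
    fixed step intervals and retaining the best checkpoint by the
    validation metric. `evaluate(model)` returns that metric."""
    best_metric, best_state = None, None
    for step in range(1, train_steps + 1):
        model.train_step()  # e.g. one Adam update, lr fixed at 0.003
        if step % eval_every == 0:
            metric = evaluate(model)
            improved = (best_metric is None or
                        (metric > best_metric if higher_is_better
                         else metric < best_metric))
            if improved:
                best_metric, best_state = metric, model.state_dict()
    return best_metric, best_state
```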

Hope this helps, and happy to discuss further details (I am more reachable at s.t.boshar@gmail.com).

Thanks for your reply! I am no longer working on this benchmark, but I appreciate the details you shared; they sound reasonable and may well be relevant for future readers.

adrienchaton changed discussion status to closed
