# Model Card

This is the model card for RerankerModel, a passage re-ranker fine-tuned from Llama-3.1-8B.
## Model Details
This re-ranker is fine-tuned from Llama-3.1-8B, using the core implementation and the augmented MS MARCO passage ranking dataset from the Tevatron repository. The primary goal of the fine-tuning is to enable mechanistic interpretability studies of dense re-ranking models.

The model is fine-tuned with LoRA at rank 8 and trained for 0.4 epochs.
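For reference, a comparable LoRA setup can be expressed with the `peft` library. Only the rank (`r=8`) comes from this card; the alpha, dropout, and target modules below are illustrative assumptions, not the values actually used:

```python
from peft import LoraConfig, TaskType

# Sketch of a LoRA configuration for sequence-classification-style
# re-ranking. r=8 matches the card; all other values are assumptions.
lora_config = LoraConfig(
    r=8,                                 # LoRA rank used for this model
    lora_alpha=16,                       # assumed scaling factor
    lora_dropout=0.05,                   # assumed dropout
    target_modules=["q_proj", "v_proj"], # assumed attention projections
    task_type=TaskType.SEQ_CLS,
)
```

See the Tevatron repository for the exact training configuration.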
## Usage
The code for fine-tuning the model and for running inference is available in the RankLLaMA example of the Tevatron repository: https://github.com/texttron/tevatron/tree/main/examples/rankllama
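Conceptually, a pointwise re-ranker like this one scores each (query, passage) pair independently and sorts passages by score. The sketch below shows that interface with a toy lexical scorer standing in for the model; real inference should use the Tevatron code linked above:

```python
# Sketch only: `score` stands in for the fine-tuned model's relevance
# head. A real setup would load the checkpoint with the RankLLaMA
# example code in the Tevatron repository linked above.

def rerank(query, passages, score):
    """Return passages sorted by descending relevance score."""
    scored = [(score(query, p), p) for p in passages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored]

def toy_score(query, passage):
    """Toy word-overlap scorer, used only to demonstrate the interface."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

passages = [
    "The capital of France is Paris.",
    "LoRA is a parameter-efficient fine-tuning method.",
    "MS MARCO is a passage ranking dataset.",
]
ranked = rerank("what is LoRA fine-tuning", passages, toy_score)
print(ranked[0])  # the LoRA passage ranks first under the toy scorer
```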
## Training Data
The model is trained on the augmented MS MARCO passage ranking dataset provided in the Tevatron repository.
## Evaluation
Results on the TREC DL-19 passage ranking task:

| Metric  | Score  |
|---------|--------|
| MAP     | 0.5070 |
| MRR     | 0.9543 |
| NDCG@10 | 0.7655 |
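The NDCG@10 figure above follows the standard definition: the discounted cumulative gain of the produced ranking, normalized by the DCG of the ideal ranking. A minimal sketch of that computation:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked relevances."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k=10):
    """NDCG@k: DCG of the ranking divided by DCG of the ideal ranking."""
    ideal = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal if ideal > 0 else 0.0

# A perfect ordering scores 1.0; misordered results score lower.
print(ndcg_at_k([3, 2, 1]))  # 1.0
```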