---
library_name: transformers
tags: []
---
# Model Card for RankMistral
RankMistral is fine-tuned from [Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the [rank_llm dataset](https://huggingface.co/datasets/castorini/rank_llm_data).
## Results
Results are reported in [QPP-RA: Aggregating Large Language Model Rankings](https://dl.acm.org/doi/pdf/10.1145/3731120.3744575).
The model can be run with the [RankLLM library](https://github.com/castorini/rank_llm).
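
Below is a minimal sketch of loading the model with the Transformers API and issuing a listwise reranking request. The repository id is a placeholder (use this model's actual Hub id), and the prompt is an assumption based on the RankGPT-style listwise templates used by similar RankLLM rerankers; consult the RankLLM repository for the exact template.

```python
# Minimal sketch: load RankMistral and run a listwise reranking prompt.
# MODEL_ID is a placeholder; replace it with this repository's Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "castorini/rank_mistral"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumed RankGPT-style listwise prompt: numbered passages are reordered
# by relevance to the query, with output like "[1] > [2]".
prompt = (
    "I will provide you with 2 passages, each indicated by a numerical "
    "identifier []. Rank the passages based on their relevance to the "
    "search query: what causes tides?\n"
    "[1] Tides are caused by the gravitational pull of the Moon and Sun.\n"
    "[2] The stock market closed higher today.\n"
    "Rank the 2 passages above. The output format should be [] > [], "
    "e.g., [2] > [1]."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(
    tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:],
        skip_special_tokens=True,
    )
)
```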

## Citation
If you use this model, please cite:
```bibtex
@inproceedings{10.1145/3731120.3744575,
author = {Betello, Filippo and Russo, Matteo and D\"{u}tting, Paul and Leonardi, Stefano and Silvestri, Fabrizio},
title = {QPP-RA: Aggregating Large Language Model Rankings},
year = {2025},
isbn = {9798400718618},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3731120.3744575},
doi = {10.1145/3731120.3744575},
booktitle = {Proceedings of the 2025 International ACM SIGIR Conference on Innovative Concepts and Theories in Information Retrieval (ICTIR)},
pages = {103--114},
numpages = {12},
keywords = {llm, query performance prediction, rank aggregation},
location = {Padua, Italy},
series = {ICTIR '25}
}
```