Instructions for using Arastun/Qwen3-Reranker-0.6B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Arastun/Qwen3-Reranker-0.6B with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Arastun/Qwen3-Reranker-0.6B")
model = AutoModelForCausalLM.from_pretrained("Arastun/Qwen3-Reranker-0.6B")
```

- Notebooks
- Google Colab
- Kaggle
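As a reranker, the model scores query–document pairs rather than generating free-form text. A minimal sketch of the pair-formatting step is below, assuming the instruct-style prompt template used by the Qwen3-Reranker series; the exact template strings and the default instruction are assumptions, so verify them against the model card before relying on them.

```python
# Sketch of the prompt-formatting step for reranking.
# The template below is an assumption based on the Qwen3-Reranker
# series' published usage; check the model card for the exact format.

def format_pair(query: str, document: str,
                instruction: str = ("Given a web search query, retrieve "
                                    "relevant passages that answer the query")) -> str:
    """Combine an instruction, a query, and one candidate document into a prompt."""
    return f"<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {document}"

pairs = [format_pair("what is the capital of France", doc)
         for doc in ["Paris is the capital of France.",
                     "Bananas are rich in potassium."]]

# Each string in `pairs` would then be tokenized and passed to the model,
# which is prompted to answer "yes" or "no"; the relevance score is derived
# from the probability assigned to the "yes" token.
print(pairs[0])
```

Formatting all candidates against a single query like this lets you batch them through the tokenizer and rank documents by the resulting scores.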