Tags: Text Classification · Transformers · Safetensors · English · Chinese · qwen2 · feature-extraction · reward model · custom_code · text-embeddings-inference
Instructions to use Qwen/Qwen2.5-Math-RM-72B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Qwen/Qwen2.5-Math-RM-72B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Qwen/Qwen2.5-Math-RM-72B", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Math-RM-72B", trust_remote_code=True)
model = AutoModel.from_pretrained("Qwen/Qwen2.5-Math-RM-72B", trust_remote_code=True)
```

- Notebooks
- Google Colab
- Kaggle
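As a sketch of how the snippets above might be used to score a response: the conversation is assembled in chat format, encoded with `apply_chat_template`, and passed through the model. The `build_conversation` helper and the system prompt below are illustrative assumptions, and the exact shape of the output depends on the repository's custom `Qwen2ForRewardModel` code (assumed here to be one scalar reward per input).

```python
def build_conversation(question: str, answer: str) -> list[dict]:
    """Assemble a chat-format conversation for the reward model to score.
    The system prompt is an assumption for illustration."""
    return [
        {"role": "system", "content": "Please reason step by step."},
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]


if __name__ == "__main__":
    # Note: a 72B model needs multiple high-memory GPUs; device_map="auto"
    # shards it across the available devices.
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained(
        "Qwen/Qwen2.5-Math-RM-72B", trust_remote_code=True
    )
    model = AutoModel.from_pretrained(
        "Qwen/Qwen2.5-Math-RM-72B",
        device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    chat = build_conversation("What is 2 + 2?", "2 + 2 = 4.")
    input_ids = tokenizer.apply_chat_template(
        chat, tokenize=True, return_tensors="pt"
    ).to(model.device)
    outputs = model(input_ids=input_ids)
    # Assumed: the first element of the output holds the reward score(s).
    print(outputs[0])
```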
vllm does not support Qwen2ForRewardModel
#4
by vinf - opened
I get the error `Model architectures ['Qwen2ForRewardModel'] are not supported for now`. How can I solve this? Any help would be appreciated.
Sorry, but vllm does not support this at the moment. We hope it will be supported in the future.
Zhenru changed discussion status to closed
Is there any other way to deploy this model besides using transformers?