Instructions for using IAAR-Shanghai/xVerify-7B-I with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use IAAR-Shanghai/xVerify-7B-I with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="IAAR-Shanghai/xVerify-7B-I")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("IAAR-Shanghai/xVerify-7B-I")
model = AutoModelForCausalLM.from_pretrained("IAAR-Shanghai/xVerify-7B-I")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use IAAR-Shanghai/xVerify-7B-I with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "IAAR-Shanghai/xVerify-7B-I"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "IAAR-Shanghai/xVerify-7B-I",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:
```shell
docker model run hf.co/IAAR-Shanghai/xVerify-7B-I
```
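Since the vLLM server exposes an OpenAI-compatible API, you can also call it from Python instead of curl. The sketch below uses only the standard library and mirrors the curl example above; the endpoint URL and port assume the default `vllm serve` settings.

```python
# Minimal sketch: call the vLLM server's OpenAI-compatible
# /v1/chat/completions endpoint using only the standard library.
import json
import urllib.request


def build_request(question: str) -> dict:
    """Build the same JSON payload the curl example sends."""
    return {
        "model": "IAAR-Shanghai/xVerify-7B-I",
        "messages": [{"role": "user", "content": question}],
    }


def chat(payload: dict,
         url: str = "http://localhost:8000/v1/chat/completions") -> str:
    """POST the payload and return the assistant message text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires `vllm serve "IAAR-Shanghai/xVerify-7B-I"` running locally.
    print(chat(build_request("What is the capital of France?")))
```

The same pattern works for the SGLang server below; only the port (30000 instead of 8000) changes.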
- SGLang
How to use IAAR-Shanghai/xVerify-7B-I with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "IAAR-Shanghai/xVerify-7B-I" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "IAAR-Shanghai/xVerify-7B-I",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "IAAR-Shanghai/xVerify-7B-I" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "IAAR-Shanghai/xVerify-7B-I",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use IAAR-Shanghai/xVerify-7B-I with Docker Model Runner:
```shell
docker model run hf.co/IAAR-Shanghai/xVerify-7B-I
```
xVerify-7B-I
xVerify is an evaluation tool fine-tuned from a pre-trained large language model, designed specifically for objective questions with a single correct answer. It is presented in the paper xVerify: Efficient Answer Verifier for Reasoning Model Evaluations.
It accurately extracts the final answer from lengthy reasoning processes and efficiently identifies equivalence across different forms of expressions.
Key Features

Broad Applicability

Suitable for various objective-question evaluation scenarios, including math problems, multiple-choice questions, classification tasks, and short-answer questions.

Handles Long Reasoning Chains

Effectively processes answers with extensive reasoning steps to extract the final answer, regardless of complexity.

Multilingual Support

Primarily handles Chinese and English responses while remaining compatible with other languages.

Powerful Equivalence Judgment

- Recognizes basic transformations such as letter-case changes and Greek-letter conversions
- Identifies equivalent mathematical expressions across formats (LaTeX, fractions, scientific notation)
- Determines semantic equivalence in natural-language answers
- Matches multiple-choice responses by content rather than just the option identifier
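For intuition, here is a small illustration (not part of xVerify's API) of answer pairs in these categories. Exact string comparison rejects every one of them, which is precisely the gap the equivalence judgment is meant to close; the pairs themselves are invented examples.

```python
# Illustrative (predicted, gold) answer pairs that naive exact matching
# misjudges, one per equivalence category listed above.
pairs = [
    ("c", "C"),                   # letter-case change
    ("\\frac{1}{2}", "0.5"),      # LaTeX fraction vs. decimal
    ("3e8 m/s", "3x10^8 m/s"),    # different scientific-notation formats
    ("B. Business", "Business"),  # option identifier vs. option content
]

naive = [predicted == gold for predicted, gold in pairs]
print(naive)  # exact matching flags every pair as wrong: [False, False, False, False]
```

An equivalence-aware verifier such as xVerify is designed to judge all four pairs as matching the gold answer.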
Sample Usage
This snippet demonstrates single-sample evaluation using the Evaluator logic provided in the official repository.
```python
from src.xVerify.model import Model
from src.xVerify.eval import Evaluator

# Initialization
model_name = 'xVerify-7B-I'
model_path = 'IAAR-Shanghai/xVerify-7B-I'
inference_mode = 'local'
model = Model(
    model_name=model_name,
    model_path_or_url=model_path,
    inference_mode=inference_mode,
)
evaluator = Evaluator(model=model)

# Input evaluation information
question = (
    "New steel giant includes Lackawanna site A major change is coming to the "
    "global steel industry and a galvanized mill in Lackawanna that formerly "
    "belonged to Bethlehem Steel Corp.\n"
    "Classify the topic of the above sentence as World, Sports, Business, or Sci/Tech."
)
llm_output = "The answer is Business."
correct_answer = "Business"

# Evaluation
result = evaluator.single_evaluate(
    question=question,
    llm_output=llm_output,
    correct_answer=correct_answer,
)
print(result)
```
Citation

```bibtex
@article{xVerify,
  title={xVerify: Efficient Answer Verifier for Reasoning Model Evaluations},
  author={Ding Chen and Qingchen Yu and Pengyuan Wang and Wentao Zhang and Bo Tang and Feiyu Xiong and Xinchi Li and Minchuan Yang and Zhiyu Li},
  journal={arXiv preprint arXiv:2504.10481},
  year={2025}
}
```