Instructions for using Thrillcrazyer/TACReward7B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Thrillcrazyer/TACReward7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Thrillcrazyer/TACReward7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Thrillcrazyer/TACReward7B")
model = AutoModelForCausalLM.from_pretrained("Thrillcrazyer/TACReward7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Thrillcrazyer/TACReward7B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Thrillcrazyer/TACReward7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Thrillcrazyer/TACReward7B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker
```shell
docker model run hf.co/Thrillcrazyer/TACReward7B
```
- SGLang
How to use Thrillcrazyer/TACReward7B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Thrillcrazyer/TACReward7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Thrillcrazyer/TACReward7B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "Thrillcrazyer/TACReward7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Thrillcrazyer/TACReward7B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use Thrillcrazyer/TACReward7B with Docker Model Runner:
```shell
docker model run hf.co/Thrillcrazyer/TACReward7B
```
Reasoning-Aware Proxy Reward Model using Process Mining
BAELAB, Pusan National University, Busan, Korea
Yongjae Lee*, Taekyhun Park*, Hyerim Bae†
🌟 GitHub | 📥 1.5B Download | 📥 7B Download | 📄 arXiv Paper
Abstract
Recent advances in sparse-reward policy gradient methods have enabled effective reinforcement learning (RL) fine-tuning for post-training language models. However, for reasoning tasks such as mathematical problem solving, binarized outcome rewards provide limited feedback on intermediate reasoning steps. While some studies have attempted to address this issue by estimating overall reasoning quality, it remains unclear whether such rewards are reliable proxies for the quality of stepwise reasoning. In this study, we treat reasoning as a structured process and propose TACReward, a reward model that can be seamlessly integrated into sparse-reward frameworks without additional human annotation costs or architectural modifications. TACReward aggregates stepwise structural deviations between teacher and policy reasoning using process mining techniques, producing a scalar reward in the range $[0, 1]$. Experiments on multiple mathematical reasoning benchmarks demonstrate that integrating TACReward into sparse-reward frameworks encourages the policy model to improve the structural quality of its reasoning, which in turn leads to consistent performance improvements over existing sparse-reward frameworks.
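The core idea of the abstract (aggregate stepwise structural deviation between a teacher trace and a policy trace into a scalar reward in $[0, 1]$) can be illustrated with a toy sketch. Note this is not the paper's published algorithm: a plain Levenshtein edit distance over step labels stands in for the process-mining alignment, and the step lists and normalization are illustrative assumptions.

```python
# Toy sketch: map structural deviation between a teacher reasoning trace
# and a policy reasoning trace to a reward in [0, 1]. Levenshtein edit
# distance over step labels stands in for a process-mining alignment cost.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance over step labels."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,          # delete a step
                dp[i][j - 1] + 1,          # insert a step
                dp[i - 1][j - 1] + cost,   # substitute a step
            )
    return dp[m][n]

def structural_reward(teacher_steps, policy_steps):
    """Normalize the aggregated deviation into a scalar reward in [0, 1]."""
    worst = max(len(teacher_steps), len(policy_steps))
    if worst == 0:
        return 1.0
    return 1.0 - edit_distance(teacher_steps, policy_steps) / worst

teacher = ["parse", "set_equation", "solve", "verify"]
policy = ["parse", "solve", "guess", "verify"]
print(structural_reward(teacher, teacher))  # identical traces -> 1.0
print(structural_reward(teacher, policy))   # two deviations -> 0.5
```

Because the result is a bounded scalar, it could be blended with a binary outcome reward in a sparse-reward RL loop; the actual TACReward model computes this signal with a learned 7B network rather than a hand-coded distance.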
Illustration of TACReward