---
license: mit
pipeline_tag: text-generation
library_name: transformers
tags:
- qwen
- llm
- reinforcement-learning
- reasoning
---
# FR3E (First Return, Entropy-Eliciting Explore)
FR3E (First Return, Entropy-Eliciting Explore) is a structured exploration framework designed to enhance the reasoning abilities of Large Language Models (LLMs); this model was trained with it. The framework was presented in the paper [First Return, Entropy-Eliciting Explore](https://huggingface.co/papers/2507.07017).
## Model Description
FR3E addresses the challenges of unstable exploration in Reinforcement Learning from Verifiable Rewards (RLVR) by identifying high-uncertainty decision points within reasoning trajectories. It then performs targeted rollouts to construct semantically grounded intermediate feedback, providing precise guidance without the need for dense supervision.
Empirical results on mathematical reasoning benchmarks such as AIME24 show that FR3E yields more stable training, produces longer and more coherent responses, and significantly increases the proportion of fully correct trajectories, demonstrating that robust, structured exploration is an effective route to stronger LLM reasoning.
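The exploration machinery itself is not shipped with this card, but the core signal FR3E relies on, token-level entropy marking high-uncertainty decision points in a trajectory, can be sketched in a few lines. The function names and the threshold below are illustrative assumptions for exposition, not code from the paper:

```python
import torch

def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy (in nats) of the next-token distribution at each
    position. `logits` has shape (seq_len, vocab_size)."""
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    return -(probs * log_probs).sum(dim=-1)

def high_uncertainty_positions(logits: torch.Tensor, threshold: float = 1.0) -> list:
    """Indices of positions whose entropy exceeds `threshold`; these are
    candidate decision points for targeted rollouts. The threshold value
    is an arbitrary illustration."""
    entropy = token_entropy(logits)
    return (entropy > threshold).nonzero(as_tuple=True)[0].tolist()

# Demo on synthetic logits over a toy vocabulary of 8 tokens:
# position 0 is near-deterministic, position 1 is uniform.
logits = torch.full((2, 8), -10.0)
logits[0, 0] = 10.0            # sharply peaked -> near-zero entropy
logits[1] = torch.zeros(8)     # uniform over 8 tokens -> entropy = ln(8) ~ 2.08

print(high_uncertainty_positions(logits))  # [1]
```

In the full method, positions flagged this way would seed additional rollouts whose outcomes provide intermediate feedback; the snippet only shows how the uncertainty signal is computed.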
## Paper
For more detailed information, please refer to the research paper:
[**First Return, Entropy-Eliciting Explore**](https://huggingface.co/papers/2507.07017)
## Project Page
You can find more information about the project on its Hugging Face organization page:
[**FR3E-Bytedance**](https://huggingface.co/FR3E-Bytedance)
## Usage
You can use the FR3E model with the Hugging Face `transformers` library. This model is based on the Qwen2 architecture and can be loaded as a causal language model for text generation tasks, especially in a conversational format.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Adjust the model ID if this repository uses a different name
# (this card's repository is FR3E-Math-7B).
model_id = "FR3E-Bytedance/FR3E-Qwen2-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Apply the chat template and tokenize
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response
outputs = model.generate(
    input_ids, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.9
)

# Decode only the newly generated tokens, skipping the input prompt
generated_text = tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True)
print(generated_text)

# Example with a step-by-step reasoning prompt
reasoning_messages = [
    {"role": "user", "content": "Solve the following problem step-by-step: A rectangular garden has a length of 15 meters and a width of 10 meters. If you want to put a fence around it, and the fencing costs $5 per meter, how much will it cost to fence the entire garden?"},
]

reasoning_input_ids = tokenizer.apply_chat_template(
    reasoning_messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

reasoning_outputs = model.generate(
    reasoning_input_ids, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.9
)
reasoning_generated_text = tokenizer.decode(
    reasoning_outputs[0][reasoning_input_ids.shape[1]:], skip_special_tokens=True
)
print(reasoning_generated_text)
```