Step Back to Leap Forward: Self-Backtracking for Boosting Reasoning of Language Models
Paper: arXiv:2502.04404
Self-Backtracking is a method for improving language model reasoning, introduced in the paper Step Back to Leap Forward: Self-Backtracking for Boosting Reasoning of Language Models.
The integration of slow-thinking mechanisms into large language models (LLMs) offers a promising path toward achieving Level 2 AGI Reasoners.
Use the code below to get started with the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("yangxw/Llama-3.2-1B-countdown-backtrack")
model = AutoModelForCausalLM.from_pretrained("yangxw/Llama-3.2-1B-countdown-backtrack")

prompt = "What is 2 + 2?"
inputs = tokenizer(prompt, return_tensors="pt")

# max_new_tokens caps the response length (the library default max_length is only 20 tokens)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
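The checkpoint name suggests the model was fine-tuned on the Countdown task with backtracking supervision. Below is a minimal sketch of what a backtracking-style decoding loop might look like, assuming a hypothetical [BACKTRACK] marker in the model's output; the actual special token and search procedure used in the paper may differ.

# A minimal sketch of backtracking-style decoding, reusing the model and
# tokenizer loaded above. BACKTRACK is a hypothetical marker string; the
# paper's real token and inference procedure may differ.
BACKTRACK = "[BACKTRACK]"

def generate_with_backtracking(model, tokenizer, prompt,
                               max_backtracks=5, step_tokens=64):
    """Retry from the text preceding a backtrack signal, up to a budget."""
    text = prompt
    for _ in range(max_backtracks + 1):
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model.generate(
            **inputs,
            max_new_tokens=step_tokens,
            do_sample=True,  # sampling lets each retry explore a new path
            top_p=0.95,
            pad_token_id=tokenizer.eos_token_id,
        )
        text = tokenizer.decode(outputs[0], skip_special_tokens=True)
        if BACKTRACK in text:
            # Step back: discard everything from the marker onward and
            # resume generation from the surviving prefix.
            text = text.split(BACKTRACK, 1)[0]
        else:
            break  # the model committed to this reasoning path
    return text

print(generate_with_backtracking(model, tokenizer, "What is 2 + 2?"))

This retry loop is only an illustration of the step-back idea; the paper's inference procedure performs a more structured search over partial reasoning states.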