LlamaForecaster-8B

Paper | Blog | Dataset

LlamaForecaster-8B is a specialized language model for open-ended forecasting of future events. It is post-trained from Llama-3.1-8B-Instruct with reinforcement learning on the OpenForesight dataset.

Performance on OpenForesight Test Set

[Figure: accuracy and Brier score on the OpenForesight test set as a function of training samples]

Training Llama-3.1-8B-Instruct on an increasing number of samples from OpenForesight leads to continued improvement, letting it surpass Qwen3-235B and DeepSeek v3 and almost match R1! We call the final checkpoint (trained on the whole of OpenForesight) LlamaForecaster.

Model Description

LlamaForecaster-8B is trained to make calibrated predictions on open-ended questions about future events. The model provides calibrated confidence estimates when asked for them explicitly, so prompts should request probabilities directly.

Training

This model was trained on the OpenForesight dataset, which contains over 52,000 forecasting questions generated from global news events. Training used GRPO to optimize a joint reward that combines accuracy and Brier score. Please check the paper for more details.

Base Model: Llama-3.1-8B-Instruct
Training Dataset: OpenForesight
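
The paper's exact reward implementation is not reproduced here; as a rough illustration, a joint reward for a resolved question might combine an accuracy term with a Brier penalty. All names and weights below (joint_reward, w_acc, w_brier) are illustrative assumptions, not the paper's code:

def joint_reward(pred_prob, predicted_answer, true_answer, w_acc=1.0, w_brier=1.0):
    """Toy reward: credit for the correct answer, penalty for a miscalibrated probability."""
    correct = 1.0 if predicted_answer == true_answer else 0.0
    brier = (pred_prob - correct) ** 2           # squared error of the stated probability
    return w_acc * correct - w_brier * brier     # higher is better

# Example: a correct answer stated with 80% confidence.
print(joint_reward(0.8, "Party A", "Party A"))  # 1.0 - 0.04 = 0.96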

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "nikhilchandak/LlamaForecaster-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# Prompt template:
#   "What is the likelihood that [future event] will occur by [date]?"
# Example:
prompt = "Who will become the next Prime Minister of India based on the general election to be held in 2029? Provide specific predictions with probabilities."

# Llama-3.1-8B-Instruct is a chat model, so format the prompt with the chat template.
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Bound the generated continuation with max_new_tokens (max_length would count the prompt too).
outputs = model.generate(inputs, max_new_tokens=8192)
prediction = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(prediction)
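
The card does not specify a fixed output format, so downstream parsing is left to the user. As one hedged option, a stated percentage can be extracted from the generated text with a simple regex (the pattern is an assumption, not a documented format; prediction comes from the snippet above):

import re

# Grab the first "NN%" figure the model states, if any.
match = re.search(r"(\d{1,3})\s*%", prediction)
if match:
    probability = int(match.group(1)) / 100.0
    print(f"Stated probability: {probability:.2f}")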

Performance

LlamaForecaster-8B achieves performance competitive with much larger models such as DeepSeek-v3 and Qwen3-235B-A22B on forecasting benchmarks. Key improvements include:

  • Improved Accuracy: better prediction of future events
  • Better Calibration: more reliable confidence estimates (see the Brier score sketch after this list)
  • Enhanced Consistency: fewer logical violations in predictions
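
For reference, the Brier score is the mean squared error between stated probabilities and binary outcomes (lower is better). A minimal Python sketch:

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A perfectly calibrated, confident forecaster scores 0; always guessing 0.5 scores 0.25.
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # (0.01 + 0.09 + 0.04) / 3 ≈ 0.047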

Citation

If you use this model, please cite the corresponding paper:

@article{chandak2025scaling,
  title={Scaling Open-Ended Reasoning to Predict the Future},
  author={Chandak, Nikhil and Goel, Shashwat and Prabhu, Ameya and Hardt, Moritz and Geiping, Jonas},
  journal={arXiv preprint arXiv:2512.25070},
  year={2025}
}

License

This model is released under the MIT License.

Contact

For questions or issues, please visit our website or open an issue on the model repository.
