Update README.md
added note about knowledge cutoff of the model
README.md CHANGED

@@ -39,6 +39,9 @@ OpenForecaster-8B is trained to make calibrated predictions on open-ended questi
 - Reason about uncertainty and future scenarios
 - Leverage retrieved information (when provided in context) to improve predictions
 
+**Note:** OpenForecaster-8B's knowledge cutoff is, at best, **April 2025** (the base model's cutoff is roughly June 2024), so it has no knowledge of events after that date. If you ask it about 2026 or later without providing recent developments as context, it can only answer from its parametric knowledge, which may be outdated or unhelpful. Where possible, use it with retrieval (RAG) over recent developments.
+
+
 ## Training
 
 This model was trained on the **OpenForesight** dataset, which contains over 52,000 forecasting questions generated from global news events. Training used GRPO to optimize a joint reward combining accuracy and the Brier score. Please check the paper for more details.
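For reference, the Brier score mentioned in the training section is the squared error between a forecast probability and the realized binary outcome (lower is better). A minimal sketch — the function names are illustrative, not taken from the model's actual reward code, whose exact formulation is in the paper:

```python
def brier_score(prob: float, outcome: int) -> float:
    """Squared error between a predicted probability (0..1) and a binary outcome (0 or 1)."""
    return (prob - outcome) ** 2


def mean_brier(probs: list[float], outcomes: list[int]) -> float:
    """Average Brier score over a batch of forecasts."""
    return sum(brier_score(p, o) for p, o in zip(probs, outcomes)) / len(probs)


# A confident, correct forecast scores near 0; a confident, wrong one near 1.
print(brier_score(0.9, 1))   # small (good forecast)
print(brier_score(0.9, 0))   # large (overconfident miss)
print(mean_brier([0.9, 0.2, 0.6], [1, 0, 1]))
```

Because the score penalizes overconfident misses quadratically, a reward built on it encourages calibrated probabilities rather than all-or-nothing answers.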