---
task_categories:
- time-series-forecasting
---
# TFRBench: A Reasoning Benchmark for Evaluating Forecasting Systems
[Paper](https://huggingface.co/papers/2604.05364) | [Project Page](https://tfrbench.github.io/)
TFRBench is the first benchmark designed to evaluate the reasoning capabilities of forecasting systems. While traditional time-series forecasting evaluations focus solely on numerical accuracy, TFRBench provides a protocol for evaluating the reasoning generated by models: their analysis of cross-channel dependencies, trends, and external events. The benchmark spans ten datasets across five diverse domains.
## How to Download the Data
You can download the dataset using the `huggingface_hub` library:
```python
from huggingface_hub import snapshot_download
# Download the entire repository
snapshot_download(repo_id="AtikAhamed/TFRBench", repo_type="dataset", local_dir="./my_local_data")
```
# TFRBench Submission Guidelines
Thank you for your interest in TFRBench! To participate in the leaderboard, please follow the directory structure and schema below to format your model predictions.
## Public Inputs (What you receive)
You will be provided with public input JSON files. Each file is a list of objects, each containing the historical data for one sample and the future timestamps for which you must produce predictions.
### Public Input Schema example:
```json
[
  {
    "id": "NYC_Taxi_0",
    "dataset": "NYC_Taxi",
    "historical_window": {
      "index": ["2009-01-09 00:00:00", ...],
      "columns": ["Trip_Count"],
      "data": [[19000], ...]
    },
    "future_window_timestamps": ["2009-01-13 00:00:00", ...]
  }
]
```
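The `historical_window` object matches pandas' `"split"` orientation (`index`/`columns`/`data`), so it can be turned into a DataFrame directly. A minimal sketch, using the values from the example above plus one invented second row for illustration:

```python
import pandas as pd

# One sample in the public-input schema; the second data row is an
# invented illustrative value, not real benchmark data.
sample = {
    "id": "NYC_Taxi_0",
    "dataset": "NYC_Taxi",
    "historical_window": {
        "index": ["2009-01-09 00:00:00", "2009-01-10 00:00:00"],
        "columns": ["Trip_Count"],
        "data": [[19000], [21000]],
    },
    "future_window_timestamps": ["2009-01-13 00:00:00"],
}

# "index"/"columns"/"data" is pandas' "split" orientation.
hw = sample["historical_window"]
history = pd.DataFrame(
    hw["data"],
    index=pd.to_datetime(hw["index"]),
    columns=hw["columns"],
)
# Timestamps your model must forecast:
horizon = pd.to_datetime(sample["future_window_timestamps"])
```

In practice you would `json.load` each public input file and loop over its samples the same way.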
## Submission Directory Structure (What you submit)
Your submission should be a directory containing one JSON file per dataset. A file for every dataset is required.
```text
my_submission/
├── metadata.json
├── NYC_Taxi.json
├── amazon.json
└── ...
```
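A directory with this layout can be assembled programmatically. A minimal sketch, assuming your predictions are already in memory; the helper name and example values are placeholders, not official tooling:

```python
import json
from pathlib import Path

def write_submission(out_dir, metadata, predictions_by_dataset):
    """Write metadata.json plus one <dataset>.json file per dataset.

    predictions_by_dataset maps a dataset name (e.g. "NYC_Taxi") to its
    list of prediction objects (see the File Schema section).
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "metadata.json").write_text(json.dumps(metadata, indent=2))
    for dataset, preds in predictions_by_dataset.items():
        (out / f"{dataset}.json").write_text(json.dumps(preds, indent=2))

# Placeholder example values:
write_submission(
    "my_submission",
    {"model_name": "My Awesome Model", "link": "https://github.com/myuser/myproject"},
    {"NYC_Taxi": [{"id": "NYC_Taxi_0", "Reasoning": "...", "Prediction": [[19000.0]]}]},
)
```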
## How to Submit
Please use this form to submit your predictions: https://forms.gle/gNqKrmw7hawY5VK99
## Metadata Schema
To display your model name and provide a link to your paper or project on the leaderboard, include a `metadata.json` file at the root of your submission directory.
```json
{
  "model_name": "My Awesome Model",
  "link": "https://github.com/myuser/myproject",
  "description": "Optional description"
}
```
## File Schema
Each JSON file must be a list of objects. Each object represents a prediction for a single sample.
```json
[
  {
    "id": "solar_daily_0",
    "Reasoning": "The trend will continue upwards due to clear summer skies. Weekend dips are expected.",
    "Prediction": [
      [2.5],
      [2.6],
      [2.4],
      ...
    ]
  },
  {
    "id": "solar_daily_1",
    "Reasoning": "Consistent stable pattern...",
    "Prediction": [
      [1.1],
      [1.1],
      [1.1],
      ...
    ]
  }
]
```
### Required Fields:
- `id` (String): The unique identifier for the sample (must match the ID provided in public inputs).
- `Reasoning` (String): The text explanation generated by your model.
- `Prediction` (List of Lists): A 2D numerical array representing the forecast window, with one inner list per time step. For single-channel datasets, each inner list holds a single value, e.g. `[2.5]`.
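Before submitting, a quick self-check against these required fields may save a round trip. A minimal sketch; the function name and checks are my own, not part of any official validator:

```python
def check_predictions(samples):
    """Validate a list of prediction objects against the required fields."""
    for i, obj in enumerate(samples):
        assert isinstance(obj.get("id"), str), f"sample {i}: 'id' must be a string"
        assert isinstance(obj.get("Reasoning"), str), f"sample {i}: 'Reasoning' must be a string"
        pred = obj.get("Prediction")
        assert isinstance(pred, list) and pred and all(
            isinstance(row, list) and all(isinstance(v, (int, float)) for v in row)
            for row in pred
        ), f"sample {i}: 'Prediction' must be a non-empty 2D numeric array"
    return True

# Example with the single-channel shape from the schema above:
check_predictions(
    [{"id": "solar_daily_0", "Reasoning": "...", "Prediction": [[2.5], [2.6]]}]
)
```

Run this over each `<dataset>.json` list before packaging your submission directory.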