TFRBench / README.md
metadata
task_categories:
  - time-series-forecasting

TFRBench: A Reasoning Benchmark for Evaluating Forecasting Systems

Paper | Project Page

TFRBench is the first benchmark designed to evaluate the reasoning capabilities of forecasting systems. While traditional time-series forecasting evaluations focus solely on numerical accuracy, TFRBench provides a protocol for evaluating the reasoning generated by models—specifically their analysis of cross-channel dependencies, trends, and external events. The benchmark spans ten datasets across five diverse domains.

How to Download the Data

You can download the dataset using the huggingface_hub library:

from huggingface_hub import snapshot_download
# Download the entire repository
snapshot_download(repo_id="AtikAhamed/TFRBench", repo_type="dataset", local_dir="./my_local_data")

TFRBench Submission Guidelines

Thank you for your interest in TFRBench! To participate in the leaderboard, please follow the directory structure and schema below to format your model predictions.

Public Inputs (What you receive)

You will be provided with public input JSON files. Each file is a list of objects containing the historical data and the future timestamps for which predictions are required.

Public Input Schema example:

[
  {
    "id": "NYC_Taxi_0",
    "dataset": "NYC_Taxi",
    "historical_window": {
      "index": ["2009-01-09 00:00:00", ...],
      "columns": ["Trip_Count"],
      "data": [[19000], ...]
    },
    "future_window_timestamps": ["2009-01-13 00:00:00", ...]
  }
]
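For illustration, here is a minimal sketch of consuming a public input object and producing a forecast of the required length. The naive last-value baseline and the inline sample are assumptions for demonstration only, not part of the protocol:

```python
import json

def naive_forecast(sample: dict) -> list:
    """Repeat the last observed row once per future timestamp (naive last-value baseline)."""
    last_row = sample["historical_window"]["data"][-1]
    horizon = len(sample["future_window_timestamps"])
    return [list(last_row) for _ in range(horizon)]

# An inline sample shaped like the public input schema above.
sample = {
    "id": "NYC_Taxi_0",
    "dataset": "NYC_Taxi",
    "historical_window": {
        "index": ["2009-01-09 00:00:00", "2009-01-10 00:00:00"],
        "columns": ["Trip_Count"],
        "data": [[19000], [21000]],
    },
    "future_window_timestamps": ["2009-01-13 00:00:00", "2009-01-14 00:00:00"],
}

prediction = naive_forecast(sample)  # one row per future timestamp: [[21000], [21000]]
```

Note that the forecast has exactly one inner list per entry in `future_window_timestamps`, which is the shape the submission schema below expects.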

Submission Directory Structure (What you submit)

Your submission should be a directory containing one JSON file per dataset. Predictions for all datasets are required.

my_submission/
├── metadata.json          
├── NYC_Taxi.json
├── amazon.json
└── ...

How to Submit

Please use this form to submit your predictions: https://forms.gle/gNqKrmw7hawY5VK99

Metadata Schema

To display your model name and provide a link to your paper or project on the leaderboard, include a metadata.json file at the root of your submission directory.

{
  "model_name": "My Awesome Model",
  "link": "https://github.com/myuser/myproject",
  "description": "Optional description"
}
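Putting the two pieces together, a submission directory can be assembled with a few lines of Python. This is a sketch under our assumptions (the output path, model details, and predictions dict are placeholders):

```python
import json
from pathlib import Path

def write_submission(out_dir: str, metadata: dict, predictions_by_dataset: dict) -> None:
    """Write metadata.json plus one <dataset>.json prediction file per dataset."""
    root = Path(out_dir)
    root.mkdir(parents=True, exist_ok=True)
    (root / "metadata.json").write_text(json.dumps(metadata, indent=2))
    for dataset, records in predictions_by_dataset.items():
        (root / f"{dataset}.json").write_text(json.dumps(records, indent=2))

# Placeholder metadata and a single placeholder prediction record.
write_submission(
    "my_submission",
    {"model_name": "My Awesome Model", "link": "https://github.com/myuser/myproject"},
    {"NYC_Taxi": [{"id": "NYC_Taxi_0", "Reasoning": "...", "Prediction": [[21000]]}]},
)
```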

File Schema

Each JSON file must be a list of objects. Each object represents a prediction for a single sample.

[
  {
    "id": "solar_daily_0",
    "Reasoning": "The trend will continue upwards due to clear summer skies. Weekend dips are expected.",
    "Prediction": [
      [2.5],
      [2.6],
      [2.4],
      ...
    ]
  },
  {
    "id": "solar_daily_1",
    "Reasoning": "Consistent stable pattern...",
    "Prediction": [
      [1.1],
      [1.1],
      [1.1],
      ...
    ]
  }
]

Required Fields:

  • id (String): The unique identifier for the sample (must match the ID provided in public inputs).
  • Reasoning (String): The text explanation generated by your model.
  • Prediction (List of Lists): A 2D numerical array representing the forecast window. For single-channel datasets, use [[value]] per time step.
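Before uploading, it may help to sanity-check each file against these required fields. A minimal validator sketch (the specific checks are our reading of the schema above, not an official tool):

```python
REQUIRED_FIELDS = {"id", "Reasoning", "Prediction"}

def validate_records(records: list) -> list:
    """Return a list of error strings; an empty list means the records pass these checks."""
    errors = []
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            errors.append(f"record {i}: missing fields {sorted(missing)}")
            continue
        if not isinstance(rec["id"], str):
            errors.append(f"record {i}: 'id' must be a string")
        if not isinstance(rec["Reasoning"], str):
            errors.append(f"record {i}: 'Reasoning' must be a string")
        pred = rec["Prediction"]
        # Prediction must be a non-empty 2D array (list of lists), e.g. [[2.5], [2.6]].
        if not (isinstance(pred, list) and pred and all(isinstance(row, list) for row in pred)):
            errors.append(f"record {i}: 'Prediction' must be a non-empty list of lists")
    return errors

good = [{"id": "solar_daily_0", "Reasoning": "Stable pattern.", "Prediction": [[1.1], [1.1]]}]
bad = [{"id": "solar_daily_1", "Prediction": [[1.1]]}]  # missing "Reasoning"
```

Running `validate_records` on each dataset file before submitting catches shape and field errors early.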