Update dataset card: Add paper link, task category, abstract, and sample usage
This PR updates the dataset card for TSAIA to:
- Add `task_categories: time-series-forecasting` to the metadata for improved discoverability.
- Include a direct link to the paper: [When LLM Meets Time Series: Can LLMs Perform Multi-Step Time Series Reasoning and Inference](https://huggingface.co/papers/2509.01822) at the top of the card.
- Provide a detailed "About" section using the paper's abstract, giving users immediate context on the dataset's purpose and scope.
- Replace the generic "Usage" section with a comprehensive "Sample Usage" section, featuring code snippets directly from the GitHub README. This guides users through environment setup, data preparation, and running experiments with the dataset.
These enhancements will make the dataset more accessible, understandable, and user-friendly for researchers and practitioners on the Hugging Face Hub.
---
license: cc-by-4.0
configs:
- config_name: analysis_questions
  data_files: analysis_questions.csv
  default: true
- config_name: multiple_choice
  data_files: multiple_choice.csv
task_categories:
- time-series-forecasting
---

# TSAIA: Time Series Analysis Instructional Assessment

[Paper](https://huggingface.co/papers/2509.01822) | [Code](https://github.com/USC-Melady/TSAIA)

## About

The rapid advancement of Large Language Models (LLMs) has sparked growing interest in their application to time series analysis tasks. However, their ability to perform complex reasoning over temporal data in real-world application domains remains underexplored. To move toward this goal, a first step is to establish a rigorous benchmark dataset for evaluation. In this work, we introduce the TSAIA Benchmark, a first attempt to evaluate LLMs as time-series AI assistants. To ensure both scientific rigor and practical relevance, we surveyed over 20 academic publications and identified 33 real-world task formulations. The benchmark encompasses a broad spectrum of challenges, ranging from constraint-aware forecasting to anomaly detection with threshold calibration: tasks that require compositional reasoning and multi-step time series analysis. The question generator is designed to be dynamic and extensible, supporting continuous expansion as new datasets or task types are introduced. Given the heterogeneous nature of the tasks, we adopt task-specific success criteria and tailored inference-quality metrics to ensure meaningful evaluation for each task. We apply this benchmark to assess eight state-of-the-art LLMs under a unified evaluation protocol. Our analysis reveals limitations in current models' ability to assemble complex time series analysis workflows, underscoring the need for specialized methodologies for domain-specific adaptation.

The dataset comprises two subsets:

- **analysis_questions**: 904 samples
- **multiple_choice**: 150 samples

### Fields in `analysis_questions`:

- `question_id`: Unique identifier for each question
- `question_type`: Type of question (e.g., `easy_stock-future price`)
- `prompt`: Natural language description of the task
- `data_str`: Embedded time series data (typically stock prices)
- `executor_variables`: Definitions of variables available for model execution
- `ground_truth_data`: Reference answer or target output
- `context`: Contextual information for the task
- `constraint`: Constraints on output format or variable naming

### Fields in `multiple_choice`:

- `question_id`: Unique identifier for each question
- `question_type`: Type of question
- `prompt`: Natural language description of the task
- `options`: A list of multiple-choice options
- `answer`: The correct option(s)
- `data_info`: Description of the data
- `answer_info`: Description of the answer
- `executor_variables`: Definitions of variables available for model execution
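
Beyond cloning, each subset can be pulled straight from the Hub with the `datasets` library, e.g. `load_dataset("Melady/TSAIA", "analysis_questions")`. Since the subsets are plain CSV, the row shape can also be illustrated with a self-contained sketch; the column names below come from the field lists above, but the values are invented for illustration:

```python
import csv
import io

# Toy excerpt mirroring a few documented `multiple_choice` columns;
# these rows are illustrative, not taken from the real dataset.
toy_csv = io.StringIO(
    "question_id,question_type,answer\n"
    "mc_0,anomaly_detection,B\n"
    "mc_1,trend_classification,A\n"
)

# Each CSV row becomes a dict keyed by column name, which is also how
# a sample behaves after loading the subset with `datasets`.
rows = list(csv.DictReader(toy_csv))
print(rows[0]["question_id"], rows[0]["answer"])
```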

## ⚡ Sample Usage

This section provides a quick start for using the TSAIA dataset with its associated code. For more comprehensive instructions and troubleshooting, please refer to the [GitHub repository](https://github.com/USC-Melady/TSAIA).

### 1. Clone the Repository

```bash
git clone https://github.com/USC-Melady/TSAIA.git
cd TSAIA
```

### 2. Set up the Conda Environment

```bash
conda env create -f agent_environment.yml
conda activate agentenv
```

### 3. Configure API Keys

Create a secure API key file `my_api_keys.py` by running:

```bash
printf '%s\n' \
  '# my_api_keys.py -- DO NOT COMMIT TO GIT' \
  '# Fill in your API tokens' \
  '' \
  'OPENAI_API_KEY = "your_openai_key_here"' \
  'DEEPSEEK_API_KEY = "your_deepseek_key_here"' \
  'GEMINI_API_KEY = "your_gemini_key_here"' \
  'QWEN_API_KEY = "your_qwen_key_here"' \
  'CODESTRAL_API_KEY = "your_codestral_key_here"' \
  'MISTRAL_API_KEY = "your_mistral_key_here"' \
  'CLAUDE_API_KEY = "your_claude_key_here"' \
  'LLAMA_API_KEY = "your_llama_key_here"' \
  > my_api_keys.py && chmod 600 my_api_keys.py
```

> ⚠️ **Important**: Replace the placeholders (`your_..._key_here`) in `my_api_keys.py` with your actual API keys, and make sure this file is never committed to git.
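
The same file can also be written and sanity-checked from Python. This is a sketch: the experiment scripts are assumed to read the keys with a plain `import my_api_keys` from the repo root, so here the module is loaded explicitly by path just to verify that it parses:

```python
import importlib.util
import tempfile
from pathlib import Path

# Write a placeholder my_api_keys.py (step 3 above creates the real one)
tmp = Path(tempfile.mkdtemp())
keyfile = tmp / "my_api_keys.py"
keyfile.write_text('OPENAI_API_KEY = "your_openai_key_here"\n')

# Load it as a module to confirm the constants are importable; the
# experiment scripts presumably just do `import my_api_keys`.
spec = importlib.util.spec_from_file_location("my_api_keys", str(keyfile))
keys = importlib.util.module_from_spec(spec)
spec.loader.exec_module(keys)

print(keys.OPENAI_API_KEY)
```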
### 4. Download Raw Dataset

This command downloads the raw dataset from our Hugging Face repository ([Melady/TSAIA](https://huggingface.co/datasets/Melady/TSAIA)) into the directory `./data/raw_data`:

```bash
git clone https://huggingface.co/datasets/Melady/TSAIA ./data/raw_data
```

### 5. Generate Pickle Files

Running `get_pkl.py` converts the raw dataset downloaded in the previous step into two `.pkl` files and stores them under the `./data` directory:

```bash
python get_pkl.py
```
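
The generated files are standard Python pickles. A self-contained sketch of the read pattern (the toy record below is made up; check `get_pkl.py` for the exact structure it writes):

```python
import pickle
from pathlib import Path

# Toy stand-in for data/analysis_questions.pkl; the real layout is
# defined by get_pkl.py, so treat this record as illustrative only.
toy = [{"question_id": "q0", "prompt": "Forecast the next closing price."}]
path = Path("toy_questions.pkl")
path.write_bytes(pickle.dumps(toy))

# Reading mirrors how the experiment scripts would load the questions
with path.open("rb") as f:
    questions = pickle.load(f)

print(len(questions), questions[0]["question_id"])
```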
### 🔬 Running Experiments

Activate your conda environment (`agentenv`) before running experiments. The following two commands use the GPT-4o model; ensure that you have set a valid OpenAI API key for `OPENAI_API_KEY` in your `my_api_keys.py` file.

* **Analysis Questions:**

  ```bash
  python3 -u static_query_codeact.py --question_path data/analysis_questions.pkl --model_id gpt-4o
  ```

* **Multiple Choice Questions:**

  ```bash
  python3 -u static_query_CodeAct_mc.py --question_path data/multiple_choice.pkl --model_id gpt-4o
  ```

## 📄 License

This dataset is licensed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.

If you utilize the TSAIA dataset in your research or projects, please cite it accordingly.

Contributions are welcome! Feel free to submit pull requests or open issues to suggest improvements or add new task samples.