---
license: mit
dataset_info:
  features:
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: question_type
    dtype: string
  - name: tid
    dtype: int64
  - name: difficulty
    dtype: string
  - name: format_hint
    dtype: string
  - name: relevant_concepts
    sequence: string
  - name: question_hint
    dtype: string
  - name: category
    dtype: string
  - name: subcategory
    dtype: string
  - name: id
    dtype: int64
  - name: ts1
    sequence: float64
  - name: ts2
    sequence: float64
  splits:
  - name: test
    num_bytes: 1623762
    num_examples: 763
  download_size: 1278082
  dataset_size: 1623762
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
task_categories:
- question-answering
language:
- en
tags:
- Time-series
- LLMs
- GPT
- Gemini
- Phi
- Reasoning
- o3-mini
pretty_name: timeseriesexam1
size_categories:
- n<1K
---

## Changelog

### [v1.1] - 2025-03-12

**Enhancements:**
- Adjusted generation hyperparameters and templates to eliminate scenarios that might lead to ambiguous or incorrect responses.
- Improved data formatting for consistency.
- Updated time-series sample length to 1024 to capture more diverse and complex features.

**Updated Model Evaluations:**

The table below shows updated evaluations by model and tokenization method:

| Model   | Tokenization Method | Accuracy |
|---------|---------------------|----------|
| gpt-4o  | image               | 75.2%    |
| gpt-4o  | plain_text          | 51.7%    |
| 4o-mini | plain_text          | 46.6%    |
| o3-mini | plain_text          | 59.0%    |

**Additional Information:**
- **Note:** The previous version ([v1.0](https://huggingface.co/datasets/AutonLab/TimeSeriesExam1/resolve/9f23771ca10d66607ee7abca0c5dcbae57349ac2/qa_dataset.json)) is still available for reference.
- **Research code for exam generation via templates is available on [GitHub](https://github.com/moment-timeseries-foundation-model/TimeSeriesExam/tree/exam_generation).**

# Dataset Card for TimeSeriesExam-1

This dataset provides Question-Answer (QA) pairs for the paper [TimeSeriesExam: A Time Series Understanding Exam](https://arxiv.org/pdf/2410.14752). Example inference code can be found [here](https://github.com/moment-timeseries-foundation-model/TimeSeriesExam).

## 📖 Introduction

Large Language Models (LLMs) have recently demonstrated a remarkable ability to model time series data. These capabilities can be partly explained if LLMs understand basic time series concepts. However, our knowledge of what these models understand about time series data remains limited. To address this gap, we introduce TimeSeriesExam, a configurable and scalable multiple-choice question exam designed to assess LLMs across five core time series understanding categories: pattern recognition, noise understanding, similarity analysis, anomaly detection, and causality analysis.
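As a sketch of how a record with this schema (`question`, `options`, `answer`, and related fields) might be rendered into a plain-text multiple-choice prompt for an LLM, consider the snippet below. The record shown is a hypothetical example that merely follows the feature schema above; it is not an actual entry from TimeSeriesExam-1, and the formatting function is an illustration, not the repository's inference code.

```python
# Hypothetical record following the dataset schema above
# (illustrative only -- not an actual TimeSeriesExam-1 entry).
record = {
    "question": "Which pattern best describes the given series?",
    "options": ["Linear trend", "Seasonality", "White noise", "Random walk"],
    "answer": "Seasonality",
    "category": "pattern recognition",
    "ts1": [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0],
}

def format_prompt(rec):
    """Render a record as a plain-text multiple-choice prompt."""
    letters = "ABCD"
    lines = [rec["question"]]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(rec["options"])]
    lines.append("Answer with a single letter.")
    return "\n".join(lines)

print(format_prompt(record))
```

The actual prompt templates used in the paper's experiments live in the linked inference repository.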
Figure 1: Accuracy of the latest LLMs on the `TimeSeriesExam`. Closed-source LLMs outperform open-source ones on simple understanding tasks, but most models struggle with complex reasoning tasks.
Figure 2: The pipeline enables diversity by combining different components to create numerous synthetic time series with varying properties.
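The compositional idea behind the pipeline can be sketched as summing independently parameterized components (e.g. trend, seasonality, noise). This is a minimal illustrative sketch only; the function name and parameters are hypothetical, and the paper's actual template-based generation code is in the GitHub repository linked above. The default length of 1024 matches the sample length noted in the changelog.

```python
import math
import random

def make_series(length=1024, trend_slope=0.01, season_period=64,
                season_amp=1.0, noise_std=0.1, seed=0):
    """Compose a synthetic series from trend + seasonality + noise components.

    Varying the component parameters yields time series with different
    properties, which is the source of diversity described above.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    series = []
    for t in range(length):
        trend = trend_slope * t
        season = season_amp * math.sin(2 * math.pi * t / season_period)
        noise = rng.gauss(0.0, noise_std)
        series.append(trend + season + noise)
    return series

ts = make_series()
```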
