# TRABL: Travel-Domain Aspect-Based Sentiment Analysis Dataset

This repository contains the **TRABL dataset**, released in support of our paper accepted to **The ACM Web Conference 2026 (WWW 2026)**:

> **TRABL: A Unified Framework for Travel Domain Aspect-Based Sentiment Analysis Applications of Large Language Models**

The dataset is designed to support research on **Aspect-Based Sentiment Analysis (ABSA)** in the travel domain, with a particular focus on the *joint extraction* of structured sentiment tuples and *supporting evidence snippets*.

---

## 1. Overview

TRABL extends the standard ABSA *quad prediction* task by additionally requiring the extraction of **textual evidence snippets** that justify each extracted sentiment tuple. The dataset enables training and evaluation of models for both fine-grained opinion mining and downstream travel applications such as recommendation, review summarization, and property type detection.

The data consists of **English user reviews** from three travel-related domains:

* **Hotels**
* **Attractions**
* **Destinations**

Each review is annotated by **two annotators**, following carefully designed guidelines and quality-control procedures.

---
## 2. Task Definition

Given a review text (T), the task is to extract a set of tuples of the form:

```
(aspect_term, aspect_category, opinion_span, sentiment, snippet)
```

Where:

* **Aspect Term**: Explicit mention of the target entity (e.g., *"room"*)
* **Aspect Category**: Semantic category from a **closed set of 112 travel-domain categories**
* **Opinion Span**: Evaluative expression (e.g., *"spacious"*)
* **Sentiment**: One of `{positive, negative, neutral}`
* **Snippet**: A *contiguous span* of text from the review that provides sufficient evidence for the other fields

Some fields may be `null` if not explicitly mentioned (e.g., implicit aspects).

This formulation generalizes multiple ABSA subtasks, including:

* Category extraction `(c)`
* Category–sentiment extraction `(c, p)`
* Category–sentiment with evidence `(c, p, s)`
* Standard quad prediction `(a, o, c, p)`
* Quad prediction with snippet extraction `(a, o, c, p, s)`
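As an illustration (not an official TRABL utility), each of the subtasks above can be viewed as a projection of the full tuple onto a subset of fields. A minimal Python sketch, using the JSONL field names from this README and a hypothetical example label:

```python
# Illustrative sketch (not an official TRABL utility): each ABSA subtask
# is a projection of the full (a, o, c, p, s) tuple onto a field subset.

def project(labels, fields):
    """Project labeled tuples onto a subset of fields, deduplicated."""
    return {tuple(lab[f] for f in fields) for lab in labels}

# Hypothetical label; "room" as a category is illustrative only and may
# not match the actual 112-category inventory.
labels = [
    {"aspect_term": "room", "aspect_category": "room",
     "opinion_span": "spacious", "sentiment": "positive",
     "snippet": "the spacious room"},
]

# Category extraction (c)
c_only = project(labels, ["aspect_category"])
# Category-sentiment extraction (c, p)
c_p = project(labels, ["aspect_category", "sentiment"])
```

Deduplicating after projection matters: distinct full tuples can collapse to the same `(c, p)` pair.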
---
## 3. Dataset Splits

The dataset is split into **train**, **validation**, and **test** sets as follows:

| Split      | #Rows |
| ---------- | ----- |
| Train      | 5,768 |
| Validation | 898   |
| Test       | 900   |

> Note: A *row* corresponds to a single annotated review instance (including annotator-specific annotations).

---
## 4. Data Format

The data is released in **JSONL** format (one JSON object per line).

Each example contains the following top-level fields:

* **review_id**: Unique identifier for the review
* **annotator**: Identifier of the annotator (`1` or `2`)
* **text**: Full review text
  * Hotel reviews include title, positive, and negative sections
  * Attraction and destination reviews contain free text only
* **labels**: List of labeled tuples

Each entry in `labels` has the structure:
```json
{
  "aspect_term": "value for money",
  "aspect_category": "value for money",
  "opinion_span": "good",
  "sentiment": "positive",
  "snippet": "The good value for money"
}
```
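A minimal loading sketch for this format (the filename `train.jsonl` is an assumption; the actual filenames in the release may differ):

```python
# Minimal sketch of loading the TRABL JSONL format described above.
# Filenames such as "train.jsonl" are an assumption, not part of the spec.
import json
from collections import Counter

def parse_jsonl(lines):
    """Yield one annotated review (dict) per non-empty JSONL line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

def load_trabl(path):
    """Load all reviews from a JSONL file on disk."""
    with open(path, encoding="utf-8") as f:
        return list(parse_jsonl(f))

def sentiment_counts(reviews, annotator=1):
    """Count sentiment labels over one annotator's rows."""
    counts = Counter()
    for review in reviews:
        if review.get("annotator") == annotator:
            for lab in review.get("labels", []):
                counts[lab["sentiment"]] += 1
    return counts
```

Keeping the parsing separate from file I/O (`parse_jsonl` vs `load_trabl`) makes the loader easy to test on in-memory strings.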
---

## 5. Example

**Input Review (excerpt):**

```
The good value for money, the helpful & polite staff, the level of cleanliness.
```

**Annotations (excerpt):**

```json
[
  {
    "aspect_term": "value for money",
    "aspect_category": "value for money",
    "opinion_span": "good",
    "sentiment": "positive",
    "snippet": "The good value for money"
  },
  {
    "aspect_term": "staff",
    "aspect_category": "property staff and service support",
    "opinion_span": "helpful & polite",
    "sentiment": "positive",
    "snippet": "the helpful & polite staff"
  }
]
```
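Since snippets are defined as *contiguous* spans of the review (Section 2), a simple consistency check is that every snippet appears verbatim in the review text. A hedged sanity-check sketch (not an official validator):

```python
# Sanity-check sketch (not an official validator): every snippet should
# be a verbatim substring of the review text, per the task definition.

def invalid_snippets(review):
    """Return the labels whose snippet is NOT found in the review text."""
    text = review["text"]
    return [lab for lab in review.get("labels", [])
            if lab.get("snippet") and lab["snippet"] not in text]
```

Running it on the example above should flag nothing, since both snippets occur verbatim in the review.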
---

## 6. Annotation Process and Quality

* Each review was annotated by **two annotators**
* Annotation was supported by LLM-generated initial labels, followed by human correction
* Disagreements were normalized using post-processing rules
* **Overall annotator agreement** (F1): ~67%, consistent with prior ABSA benchmarks
* Agreement exceeds **80%** when evaluated on partial field subsets (e.g., category or sentiment only)
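For intuition only, an exact-match F1 between two annotators' tuple sets can be computed as below. This is a sketch under the assumption of exact tuple matching; the paper's official agreement metric may differ (e.g., in how partial matches are scored).

```python
# Illustrative sketch: exact-match F1 between two annotators' tuple sets.
# Assumes hashable tuples and exact matching; the paper's official
# agreement computation may differ.

def f1_agreement(set_a, set_b):
    """Exact-match F1, treating annotator A as prediction, B as reference."""
    if not set_a and not set_b:
        return 1.0  # trivially perfect agreement on empty annotations
    overlap = len(set_a & set_b)
    precision = overlap / len(set_a) if set_a else 0.0
    recall = overlap / len(set_b) if set_b else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Evaluating on partial field subsets (e.g., category only) amounts to projecting both sets before calling this function, which is why those agreement numbers are higher.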
---
## 7. Statistics

* **Domains**: Hotels, Attractions, Destinations
* **Total number of reviews**: 3,783
* **Average review length**: ~42 words
* **Average number of extracted tuples per review**:
  * Annotator 1: ~6.96
  * Annotator 2: ~6.77
* **Sentiment distribution**: Skewed positive, reflecting real-world travel review behavior

---

## 8. Intended Use

This dataset is intended for **research purposes only**, including but not limited to:

* Aspect-Based Sentiment Analysis
* Opinion mining and evidence extraction
* Travel-domain NLP
* Evaluation of LLMs and lightweight fine-tuned models
* Structured information extraction with explainability

---
## 9. Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{madmon2026trabl,
  title     = {TRABL: A Unified Framework for Travel Domain Aspect-Based Sentiment Analysis Applications of Large Language Models},
  author    = {Madmon, Omer and Golan, Shiran and Fainman, Eran and Kleinfeld, Ofri and Beladev, Moran},
  booktitle = {Proceedings of The ACM Web Conference},
  year      = {2026}
}
```

---
## 10. License and Privacy

* All reviews were processed under strict internal privacy and PII guidelines
* The dataset is released for **non-commercial research use** under the **CC BY-SA 4.0** license
* No personally identifiable information is included

---

For questions or issues related to the dataset, please refer to the paper or open an issue on the dataset repository.