---
language:
- en
- zh
size_categories:
- 1K<n<10K
---

📰 **Tech Reviews Dataset**

This dataset contains tech product reviews collected from online tech forums. The data is stored in JSON Lines (.jsonl) format, where each line represents a single article.

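Since each line is a standalone JSON object, the file can be streamed record by record. A minimal sketch in Python (the file name `reviews.jsonl` is an assumption; substitute the actual data file):

```python
import json

# Minimal sketch: stream records from the JSON Lines file.
# "reviews.jsonl" is an assumed file name; point this at the actual data file.
with open("reviews.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # one article per line
        print(record["id"], record["title"])
```
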
**License Information**

We do not own any of the public texts from which this data has been extracted. We license the packaging of this data under the Creative Commons CC0 license ("no rights reserved").

📂 **Dataset Structure**

Each record includes the following fields:

* id (string) – unique identifier of the article (UUID)
* origin (string) – who crawled the data, e.g. BUT, TAIPEITECH, LINGEA
* url (string) – original article URL
* title (string) – title of the article
* html (string) – full HTML content of the article at the time of scraping, UTF-8 encoded
* text (string) – plain-text version of the article without HTML tags, UTF-8 encoded
* dateCrawled (string, ISO 8601) – timestamp of when the article was collected
* datePublished (string, ISO 8601) – date when the source page was published
* dateUpdated (string, ISO 8601) – date when the source page was updated

Example record:

```json
{
  "id": "36f194ce-fff9-55bd-aa2c-30ec6204bb5a",
  "origin": "TAIPEITECH",
  "url": "https://example.com/article",
  "title": "Here goes the article title...",
  "html": "<html> ... </html>",
  "text": "Here goes the article review...",
  "dateCrawled": "2025-10-05T03:25:48",
  "datePublished": "2024-01-14T19:07:58+0800",
  "dateUpdated": "2024-01-14T19:07:58+0800"
}
```

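The same file can also be loaded with the Hugging Face `datasets` library; a minimal sketch, again assuming the data file is named `reviews.jsonl`:

```python
from datasets import load_dataset

# Minimal sketch: load the JSON Lines file as a Hugging Face dataset.
# "reviews.jsonl" is an assumed file name; adjust the path as needed.
ds = load_dataset("json", data_files="reviews.jsonl", split="train")
print(ds.column_names)      # id, origin, url, title, html, text, ...
print(ds[0]["text"][:200])  # first 200 characters of the first article
```
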
🔍 **Inter-rater Reliability**

The dataset consists of six distinct packages of review comments collected from four online forums. Each package contains approximately 500 comments, each of which was independently annotated by four annotators. Annotators were instructed to classify each comment according to a predefined annotation guideline, with the option to leave a comment undecided if the category was unclear.

To assess annotation reliability, we computed Cross-Replication Reliability (xRR). We additionally computed standard inter-annotator agreement metrics, including Cohen’s Kappa, Fleiss’ Kappa, and Krippendorff’s Alpha, to evaluate individual-level consistency.

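As an illustration of the agreement metrics named above, here is a minimal sketch using toy labels (the label values, and the `scikit-learn` and `krippendorff` packages, are assumptions; any equivalent implementation works):

```python
import numpy as np
import krippendorff                     # pip install krippendorff
from sklearn.metrics import cohen_kappa_score

# Toy annotations from two raters over five comments (illustrative values only).
rater_a = ["pos", "neg", "pos", "undecided", "pos"]
rater_b = ["pos", "neg", "neg", "pos", "pos"]

# Cohen's kappa: chance-corrected agreement between two annotators.
print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))

# Krippendorff's alpha (nominal): rows are annotators, columns are items,
# np.nan marks missing annotations.
codes = {"pos": 0, "neg": 1, "undecided": 2}
data = np.array([[codes[x] for x in rater_a],
                 [codes[x] for x in rater_b]], dtype=float)
print("Krippendorff's alpha:",
      krippendorff.alpha(reliability_data=data, level_of_measurement="nominal"))
```
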
This dataset is a work in progress, and future updates will include additional packages, expanded annotations, and refinements based on ongoing quality control.

The dataset includes comments in both English and Traditional Chinese, reflecting a bilingual annotation setting.

Inter-rater reliability (IRR) was assessed using several metrics. Krippendorff's nominal α ranged from -0.12 to 0.37 (mean 0.13), indicating slight overall agreement. Cross-replication reliability (xRR; Wong et al., 2021), which here measures chance-corrected agreement between the majority labels of randomly split annotator groups, ranged from 0.05 to 0.32 (mean 0.19). The high majority-class prevalence (67–84%) across packages underscores the importance of chance correction.

In plainer terms, we checked whether the annotators who labeled the data actually agreed with each other, using several different scoring methods. Krippendorff's alpha ranged from -0.12 to 0.37 across the packages, averaging 0.13, which corresponds to only slight overall agreement among the annotators.

Cross-replication reliability (xRR) asks a related question: if we randomly split the annotators into two separate teams and let each team vote on the most common answer, how often do the two teams reach the same conclusion once random guessing is taken into account? These scores ranged from 0.05 to 0.32, averaging 0.19, again indicating only slight agreement between what the two independent groups would decide.

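A minimal sketch of that split-half reading (this follows the description above rather than the full xRR estimator of Wong et al., 2021; the toy label matrix and helper names are assumptions):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def split_half_agreement(labels, seed=0):
    """Randomly split annotators into two teams, take each team's majority
    label per item, and score chance-corrected agreement between the teams."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(labels.shape[0])
    half = labels.shape[0] // 2
    team_a, team_b = labels[order[:half]], labels[order[half:]]

    def majority(team):
        # Most frequent label code per column (item); ties go to the lowest code.
        return np.array([np.bincount(col).argmax() for col in team.T])

    return cohen_kappa_score(majority(team_a), majority(team_b))

# Toy matrix: 4 annotators x 6 comments, with integer category codes.
toy = np.array([[0, 1, 0, 0, 2, 1],
                [0, 1, 0, 1, 2, 1],
                [0, 0, 0, 0, 2, 1],
                [1, 1, 0, 0, 2, 0]])
print(split_half_agreement(toy))
```
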
One important trend we observed is that a single majority label dominates each package, covering 67% to 84% of the cases. Because almost every review receives the same label, annotators can appear to be in near-total agreement, when in fact much of that agreement arises by chance: the "popular" or "obvious" answer is simply selected most of the time. This is why the chance-corrected metrics above are more informative than raw agreement.

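A rough back-of-the-envelope illustration (collapsing the label space into "majority class vs. everything else", which is a simplifying assumption):

```python
# With an 84% majority-class prevalence, two annotators who simply follow the
# base rate already agree on about 0.84**2 + 0.16**2 ≈ 73% of items by chance,
# which is why chance-corrected scores can stay low despite high raw agreement.
p = 0.84
chance_agreement = p ** 2 + (1 - p) ** 2
print(f"expected chance agreement: {chance_agreement:.2%}")  # ~73.12%
```
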
🔗 For more details on the xRR methodology, see [Wong et al. (2021)](https://aclanthology.org/2021.acl-long.548/).

**Reference**

Ka Wong, Praveen Paritosh, and Lora Aroyo. 2021. Cross-replication Reliability - An Empirical Approach to Interpreting Inter-rater Reliability. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7053–7065, Online. Association for Computational Linguistics.