|
|
--- |
|
|
dataset_info: |
|
|
features: |
|
|
- name: mysql_dump_file |
|
|
dtype: binary |
|
|
language: |
|
|
- yue |
|
|
- zh |
|
|
language_details: "zh-Hant-HK; yue-Hant-HK" |
|
|
language_creators: |
|
|
- found |
|
|
annotations_creators: |
|
|
- no-annotation |
|
|
tags: |
|
|
- SQL |
|
|
- Hong Kong |
|
|
- diglossia |
|
|
- Cantonese |
|
|
- Traditional Chinese |
|
|
license: cc-by-4.0 |
|
|
pretty_name: HK Web Text Corpus (MySQL Dump, raw version) |
|
|
--- |
|
|
|
|
|
# HK Web Text Corpus (MySQL Dump, raw version) |
|
|
|
|
|
## Dataset Description |
|
|
- **Language:** Hong Kong Cantonese, Traditional Chinese |
|
|
- **Size:** ~49.2 GB (MySQL), 11.1 GB (7z archive) |
|
|
- **Format:** MySQL dump, UTF-8 encoding |
|
|
- **Source:** public web sources (news sites, online forums, encyclopedia articles, and restaurant reviews)
|
|
|
|
|
## Overview |
|
|
⚠ This dataset provides the **MySQL dump file** containing a large-scale raw text corpus collected from various Hong Kong public web sources, focused primarily on **Hong Kong Cantonese** and **Traditional Chinese** language usage.
|
|
|
|
|
It was used to generate the Hong Kong Content Corpus, which in turn was used in the experiments reported in **https://doi.org/10.1145/3744341** to study the **effect of diglossia on Hong Kong language modeling**.
|
|
|
|
|
This MySQL database is intended **for archival and reproducibility purposes**, and may include noise, duplication, HTML markup, crawler residues, and records that were subsequently cleaned/filtered in the derived corpus release. |
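
Because the full dump is roughly 49 GB, it can be convenient to scan it line by line rather than restore it into a running MySQL server. The sketch below assumes `mysqldump`'s default one-statement-per-line layout, which may not hold for every table in this dump; the actual table and column names are not documented on this card, so treat this only as a starting point.

```python
def iter_insert_statements(path, encoding="utf-8"):
    """Yield INSERT statements from a MySQL dump file, one per line.

    Streams the file instead of loading it into memory. Assumes each
    INSERT statement occupies a single line (mysqldump's default);
    statements spanning multiple lines would need a real SQL parser.
    """
    with open(path, "r", encoding=encoding, errors="replace") as f:
        for line in f:
            if line.lstrip().upper().startswith("INSERT INTO"):
                yield line.rstrip("\n")
```

For example, `for stmt in iter_insert_statements("dump.sql"): ...` (the filename is a placeholder) walks the dump with constant memory use, leaving any deduplication or HTML stripping to downstream code.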
|
|
|
|
|
This dataset is also available on Zenodo as a split archive: https://doi.org/10.5281/zenodo.16875235
|
|
|
|
|
👉 If you are looking for the cleaned, ready-to-use corpus version, please refer to: |
|
|
https://huggingface.co/datasets/SolarisCipher/hk_content_corpus |
|
|
|
|
|
SHA256 checksum of files:

```
b3b7a600ec2e2b5c6ce9ebc1e545712e696c6f6f94b78d0473486609eb7fb854  [SQL file after decompression]
```
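
The checksum can be verified without loading the ~49 GB file into memory by hashing it in fixed-size chunks; the filename in the usage comment is a placeholder for the actual decompressed SQL file.

```python
import hashlib

# Published checksum of the decompressed SQL dump (from this card).
EXPECTED = "b3b7a600ec2e2b5c6ce9ebc1e545712e696c6f6f94b78d0473486609eb7fb854"

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (replace the placeholder filename with the extracted SQL file):
# assert sha256_of_file("dump.sql") == EXPECTED, "checksum mismatch"
```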
|
|
|
|
|
--- |
|
|
|
|
|
## Intended Uses |
|
|
|
|
|
- Pretraining or fine-tuning AI language models |
|
|
- Linguistic and sociolinguistic analysis |
|
|
- Text mining research |
|
|
|
|
|
NOTE: The Hong Kong National Security Law (HKNSL) came into effect on 2020-06-30, which can bias user content created after that date. That portion of the data should be used with caution.
|
|
|
|
|
## Citation |
|
|
If you use this MySQL database, please cite the following paper:
|
|
|
|
|
```bibtex |
|
|
@article{Yung2025HKDiglossia, |
|
|
author = {Yung, Yiu Cheong and Lin, Ying-Jia and Kao, Hung-Yu}, |
|
|
title = {Exploring the Effectiveness of Pre-training Language Models with Incorporation of Diglossia for Hong Kong Content}, |
|
|
journal = {ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)}, |
|
|
volume = {24}, |
|
|
number = {7}, |
|
|
pages = {71:1--71:16}, |
|
|
year = {2025}, |
|
|
publisher = {Association for Computing Machinery}, |
|
|
doi = {10.1145/3744341} |
|
|
} |
|
|
``` |
|
|
|
|
|
and optionally also cite the database DOI: |
|
|
|
|
|
```bibtex |
|
|
@dataset{yung_2025_16875235, |
|
|
author = {Yung, Yiu Cheong}, |
|
|
title = {HK Web Text Corpus (MySQL Dump, raw version)}, |
|
|
month = aug, |
|
|
year = 2025, |
|
|
publisher = {Zenodo}, |
|
|
doi = {10.5281/zenodo.16875235}, |
|
|
url = {https://doi.org/10.5281/zenodo.16875235}, |
|
|
} |
|
|
``` |
|
|
|