---
dataset_info:
  features:
  - name: mysql_dump_file
    dtype: binary
language:
- yue
- zh
language_details: "zh-Hant-HK; yue-Hant-HK"
language_creators:
- found
annotations_creators:
- no-annotation
tags:
- SQL
- Hong Kong
- diglossia
- Cantonese
- Traditional Chinese
license: cc-by-4.0
pretty_name: HK Web Text Corpus (MySQL Dump, raw version)
---

# HK Web Text Corpus (MySQL Dump, raw version)

## Dataset Description
- **Language:** Hong Kong Cantonese, Traditional Chinese
- **Size:** ~49.2 GB (decompressed MySQL dump), ~11.1 GB (7z archive)
- **Format:** MySQL dump, UTF-8 encoding
- **Source:** public web sources (news sites, online forums, an encyclopedia, and restaurant reviews)

## Overview
⚠ This dataset provides the **MySQL dump file**, which contains a large-scale raw text corpus collected from various Hong Kong public web sources, primarily focused on **Hong Kong Cantonese** and **Traditional Chinese** language usage.  

It was used to generate the Hong Kong Content Corpus, which in turn was used in the experiments reported in **https://doi.org/10.1145/3744341** to study the **effect of diglossia on Hong Kong language modeling**.  

This MySQL database is intended **for archival and reproducibility purposes**, and may include noise, duplication, HTML markup, crawler residues, and records that were subsequently cleaned/filtered in the derived corpus release.  

This dataset is also available on Zenodo as a split archive: https://doi.org/10.5281/zenodo.16875235  

👉 If you are looking for the cleaned, ready-to-use corpus version, please refer to:  
https://huggingface.co/datasets/SolarisCipher/hk_content_corpus  

SHA256 checksum:  
`b3b7a600ec2e2b5c6ce9ebc1e545712e696c6f6f94b78d0473486609eb7fb854` (SQL file after decompression)  
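
After decompressing the archive, you can confirm the SQL file is intact by comparing its SHA256 digest against the value above. A minimal sketch; the file name in the usage comment is illustrative, and the chunk size is an arbitrary choice:

```python
import hashlib

EXPECTED = "b3b7a600ec2e2b5c6ce9ebc1e545712e696c6f6f94b78d0473486609eb7fb854"

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so the ~49 GB dump never sits fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (file name is hypothetical):
# assert sha256_of_file("hk_web_text_corpus.sql") == EXPECTED
```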

---

## Intended Uses

- Pretraining or fine-tuning AI language models
- Linguistic and sociolinguistic analysis
- Text mining research

NOTE: The Hong Kong National Security Law (HKNSL) took effect on 2020-06-30, which may bias user-generated content created after that date. That portion of the data should be used with caution.
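
For text-mining use, the dump can be scanned line by line without restoring it into a MySQL server. The sketch below pulls `INSERT INTO` statements out of a standard `mysqldump`-format file; the table and file names in the usage comment are hypothetical, and real dumps may pack many rows into one statement or split statements across lines, so treat this as a starting point only:

```python
def iter_insert_statements(path, table=None, encoding="utf-8"):
    """Yield INSERT statements from a mysqldump file, optionally for one table."""
    prefix = "INSERT INTO"
    with open(path, encoding=encoding, errors="replace") as f:
        for line in f:
            line = line.strip()
            if not line.startswith(prefix):
                continue  # skip comments, DDL, SET statements, etc.
            if table is None or line.startswith(f"{prefix} `{table}`"):
                yield line

# Usage (file and table names are hypothetical):
# for stmt in iter_insert_statements("hk_web_text_corpus.sql", table="articles"):
#     process(stmt)
```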

## Citation
If you use this MySQL database, please cite the following paper:

```bibtex
@article{Yung2025HKDiglossia,
  author    = {Yung, Yiu Cheong and Lin, Ying-Jia and Kao, Hung-Yu},
  title     = {Exploring the Effectiveness of Pre-training Language Models with Incorporation of Diglossia for Hong Kong Content},
  journal   = {ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)},
  volume    = {24},
  number    = {7},
  pages     = {71:1--71:16},
  year      = {2025},
  publisher = {Association for Computing Machinery},
  doi       = {10.1145/3744341}
}
```

and optionally also cite the database DOI:

```bibtex
@dataset{yung_2025_16875235,
  author       = {Yung, Yiu Cheong},
  title        = {HK Web Text Corpus (MySQL Dump, raw version)},
  month        = aug,
  year         = 2025,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.16875235},
  url          = {https://doi.org/10.5281/zenodo.16875235},
}
```