---
task_categories:
- token-classification
language:
- vi
---
|
|
## Dataset Card for ViLexNorm
|
|
|
|
|
### 1. Dataset Summary

**ViLexNorm** is a Vietnamese lexical normalization corpus of **10,467** comment pairs, each comprising an *original* noisy social‑media comment and its *normalized* counterpart. In this unified version, all pairs are merged into one CSV with a `type` column indicating `train` / `dev` / `test`, and two additional columns, `input` and `output`, containing the tokenized forms.
|
|
|
|
|
### 2. Supported Tasks and Metrics

* **Primary Task**: Sequence‑to‑sequence lexical normalization
* **Metrics**:
  * **Error Reduction Rate (ERR)** (van der Goot, 2019)
  * **Token‑level accuracy**
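ERR scores a system relative to a leave‑as‑is baseline that copies every input token unchanged: ERR = (acc_sys − acc_base) / (1 − acc_base). A minimal sketch of both metrics (a hypothetical helper, not the paper's official scorer; the toy tokens are invented for illustration):

```python
def err_and_accuracy(inputs, golds, preds):
    """Token-level accuracy and Error Reduction Rate (ERR).

    inputs, golds, preds: aligned lists of token lists.
    ERR = (acc_sys - acc_base) / (1 - acc_base), where the baseline
    leaves every input token unchanged.
    """
    total = correct_sys = correct_base = 0
    for inp, gold, pred in zip(inputs, golds, preds):
        for i_tok, g_tok, p_tok in zip(inp, gold, pred):
            total += 1
            correct_sys += (p_tok == g_tok)   # system prediction matches gold
            correct_base += (i_tok == g_tok)  # leave-as-is baseline matches gold
    acc_sys = correct_sys / total
    acc_base = correct_base / total
    err = (acc_sys - acc_base) / (1 - acc_base) if acc_base < 1 else 0.0
    return acc_sys, err

# Toy sentence (invented): the system fixes 2 of 3 noisy tokens.
acc, err = err_and_accuracy(
    inputs=[["ko", "bik", "dau"]],
    golds=[["không", "biết", "đâu"]],
    preds=[["không", "bik", "đâu"]],
)
```

With a baseline accuracy of 0 on this toy example, ERR equals the system's token accuracy (2/3); on real data the baseline is much stronger, so ERR rewards only genuine corrections.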
|
|
|
|
|
### 3. Languages

* Vietnamese
|
|
|
|
|
|
|
|
### 4. Dataset Structure

| Column       | Type   | Description                                  |
| ------------ | ------ | -------------------------------------------- |
| `original`   | string | The raw, unnormalized comment.               |
| `normalized` | string | The corrected, normalized comment.           |
| `input`      | list   | Tokenized original text (list of strings).   |
| `output`     | list   | Tokenized normalized text (list of strings). |
| `type`       | string | Split: `train` / `dev` / `test`.             |
| `dataset`    | string | Always `ViLexNorm` for provenance.           |
|
|
|
|
|
|
|
|
### 5. Data Fields

* **original** (`str`): Noisy input sentence.
* **normalized** (`str`): Human‑annotated normalized sentence.
* **input** (`List[str]`): Token list of `original`.
* **output** (`List[str]`): Token list of `normalized`.
* **type** (`str`): Which split the example belongs to (`train` / `dev` / `test`).
* **dataset** (`str`): Always `ViLexNorm`.
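To illustrate how these fields fit together, here is a hand‑made record and a small sanity check (the record values and the `check_record` helper are invented for illustration; they are not drawn from the dataset):

```python
# A hand-made record mirroring the schema above (all values invented).
example = {
    "original": "ko bik dau",
    "normalized": "không biết đâu",
    "input": ["ko", "bik", "dau"],
    "output": ["không", "biết", "đâu"],
    "type": "train",
    "dataset": "ViLexNorm",
}

def check_record(ex):
    """Sanity-check one record against the fields listed above."""
    assert isinstance(ex["original"], str) and isinstance(ex["normalized"], str)
    assert all(isinstance(t, str) for t in ex["input"] + ex["output"])
    assert ex["type"] in {"train", "dev", "test"}
    assert ex["dataset"] == "ViLexNorm"
    return True
```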
|
|
|
|
|
### 6. Usage

```python
from datasets import load_dataset

# All examples live in a single split; the `type` column marks the
# original train/dev/test partition.
ds = load_dataset("visolex/ViLexNorm", split="train")

train = ds.filter(lambda ex: ex["type"] == "train")
val = ds.filter(lambda ex: ex["type"] == "dev")
test = ds.filter(lambda ex: ex["type"] == "test")

print(train[0])
```
|
|
|
|
|
|
|
|
### 7. Source & Links

* **Paper**: Nguyen et al. (2024), “ViLexNorm: A Lexical Normalization Corpus for Vietnamese Social Media Text”
  [https://aclanthology.org/2024.eacl-long.85](https://aclanthology.org/2024.eacl-long.85)
* **Hugging Face** (this unified version):
  [https://huggingface.co/datasets/visolex/ViLexNorm](https://huggingface.co/datasets/visolex/ViLexNorm)
* **Original GitHub**:
  [https://github.com/ngxtnhi/ViLexNorm](https://github.com/ngxtnhi/ViLexNorm)
|
|
|
|
|
|
|
|
### 8. Contact Information

* **Ms. Thanh‑Nhi Nguyen**: [21521232@gm.uit.edu.vn](mailto:21521232@gm.uit.edu.vn)
* **Mr. Thanh‑Phong Le**: [21520395@gm.uit.edu.vn](mailto:21520395@gm.uit.edu.vn)
|
|
|
|
|
### 9. Licensing and Citation

#### License

Released under **CC BY‑NC‑SA 4.0** (Creative Commons Attribution‑NonCommercial‑ShareAlike 4.0 International).
|
|
|
|
|
#### How to Cite

```bibtex
@inproceedings{nguyen-etal-2024-vilexnorm,
  title     = {ViLexNorm: A Lexical Normalization Corpus for Vietnamese Social Media Text},
  author    = {Nguyen, Thanh-Nhi and Le, Thanh-Phong and Nguyen, Kiet},
  booktitle = {Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)},
  month     = mar,
  year      = {2024},
  address   = {St. Julian's, Malta},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2024.eacl-long.85},
  pages     = {1421--1437}
}
```