---
task_categories:
- token-classification
language:
- vi
---
## Dataset Card for ViLexNorm

### 1. Dataset Summary

**ViLexNorm** is a Vietnamese lexical normalization corpus of **10,467** comment pairs, each comprising an *original* noisy social‑media comment and its *normalized* counterpart. In this unified version, all pairs are merged into a single CSV with a `type` column indicating the split (`train` / `dev` / `test`) and two additional columns, `input` and `output`, containing the tokenized forms.

### 2. Supported Tasks and Metrics

* **Primary Task**: Sequence‑to‑sequence lexical normalization
* **Metrics** (sketched below):

  * **Error Reduction Rate (ERR)** (van der Goot, 2019): word‑level accuracy relative to a leave‑as‑is baseline
  * **Token‑level accuracy**
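
For clarity, here is a minimal sketch of both metrics, assuming the `input`/`output` token lists are aligned one‑to‑one; the helper names `token_accuracy` and `err` are illustrative, not part of the dataset's tooling:

```python
from typing import List

def token_accuracy(preds: List[List[str]], golds: List[List[str]]) -> float:
    """Fraction of gold tokens predicted exactly (assumes aligned token lists)."""
    correct = total = 0
    for pred, gold in zip(preds, golds):
        correct += sum(p == g for p, g in zip(pred, gold))
        total += len(gold)
    return correct / total

def err(inputs: List[List[str]], preds: List[List[str]], golds: List[List[str]]) -> float:
    """Error Reduction Rate: the system's accuracy gain over the leave-as-is
    baseline, scaled by the room that baseline leaves for improvement."""
    acc_sys = token_accuracy(preds, golds)
    acc_base = token_accuracy(inputs, golds)  # baseline: copy every input token
    return (acc_sys - acc_base) / (1.0 - acc_base)
```

Under this formulation, an ERR of 1.0 means every token that needed normalization was fixed, 0.0 means no improvement over copying the input, and negative values mean the system introduced more errors than it corrected.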

### 3. Languages

* Vietnamese


### 4. Dataset Structure

| Column       | Type   | Description                                  |
| ------------ | ------ | -------------------------------------------- |
| `original`   | string | The raw, unnormalized comment.               |
| `normalized` | string | The corrected, normalized comment.           |
| `input`      | list   | Tokenized original text (list of strings).   |
| `output`     | list   | Tokenized normalized text (list of strings). |
| `type`       | string | Split: `train` / `dev` / `test`.              |
| `dataset`    | string | Always `ViLexNorm` for provenance.           |


### 5. Data Fields

* **original** (`str`): Noisy input sentence.
* **normalized** (`str`): Human‑annotated normalized sentence.
* **input** (`List[str]`): Token list of `original`.
* **output** (`List[str]`): Token list of `normalized`.
* **type** (`str`): Which split the example belongs to.
* **dataset** (`str`): Always `ViLexNorm`.
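
To make the schema concrete, a single record might look like the following (the values are hypothetical abbreviation expansions for illustration, not an actual corpus entry):

```python
example = {
    "original":   "ko bt lm s",           # hypothetical noisy comment
    "normalized": "không biết làm sao",   # hypothetical normalized form
    "input":      ["ko", "bt", "lm", "s"],
    "output":     ["không", "biết", "làm", "sao"],
    "type":       "train",
    "dataset":    "ViLexNorm",
}
```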

### 6. Usage

```python
from datasets import load_dataset

# The unified version ships as a single split; the `type` column
# marks the original train/dev/test partition.
ds = load_dataset("visolex/ViLexNorm", split="train")

train = ds.filter(lambda ex: ex["type"] == "train")
dev   = ds.filter(lambda ex: ex["type"] == "dev")
test  = ds.filter(lambda ex: ex["type"] == "test")

print(train[0])
```
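
For seq2seq training, one straightforward preparation step is to join the token lists back into plain strings. This sketch assumes the schema above; the `to_seq2seq` helper is illustrative:

```python
def to_seq2seq(example):
    # Join aligned token lists into plain source/target strings.
    return {
        "source": " ".join(example["input"]),
        "target": " ".join(example["output"]),
    }

pairs = train.map(to_seq2seq, remove_columns=train.column_names)
print(pairs[0]["source"], "->", pairs[0]["target"])
```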


### 7. Source & Links

* **Paper**: Nguyen et al. (2024), “ViLexNorm: A Lexical Normalization Corpus for Vietnamese Social Media Text”
  [https://aclanthology.org/2024.eacl-long.85](https://aclanthology.org/2024.eacl-long.85)
* **Hugging Face** (this unified version):
  [https://huggingface.co/datasets/visolex/ViLexNorm](https://huggingface.co/datasets/visolex/ViLexNorm)
* **Original GitHub**:
  [https://github.com/ngxtnhi/ViLexNorm](https://github.com/ngxtnhi/ViLexNorm)


### 8. Contact Information

* **Ms. Thanh‑Nhi Nguyen**: [21521232@gm.uit.edu.vn](mailto:21521232@gm.uit.edu.vn)
* **Mr. Thanh‑Phong Le**: [21520395@gm.uit.edu.vn](mailto:21520395@gm.uit.edu.vn)

### 9. Licensing and Citation

#### License

Released under **CC BY‑NC‑SA 4.0** (Creative Commons Attribution‑NonCommercial‑ShareAlike 4.0 International).

#### How to Cite

```bibtex
@inproceedings{nguyen-etal-2024-vilexnorm,
  title     = {ViLexNorm: A Lexical Normalization Corpus for Vietnamese Social Media Text},
  author    = {Nguyen, Thanh-Nhi and Le, Thanh-Phong and Nguyen, Kiet},
  booktitle = {Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)},
  month     = mar,
  year      = {2024},
  address   = {St. Julian's, Malta},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2024.eacl-long.85},
  pages     = {1421--1437}
}
```