--- |
|
|
license: apache-2.0 |
|
|
language: |
|
|
- th |
|
|
- en |
|
|
tags: |
|
|
- meme |
|
|
pretty_name: token_awareness |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
--- |
|
|
# Token Awareness Dataset |
|
|
Inspired by the "strawberry" meme 🍓: most LLMs cannot count the characters in a word. Since this kind of data is very easy to generate, we created a dataset for this benchmark just for fun.
|
|
|
|
|
This dataset supports only two languages: Thai (th) and English (en).
|
|
|
|
|
## Dataset Creation |
|
|
We sample words for each language from these sources: |
|
|
- Thai: [pythainlp's Thai words](https://github.com/PyThaiNLP/pythainlp/blob/dev/pythainlp/corpus/words_th.txt) |
|
|
- English: [dwyl's English words](https://github.com/dwyl/english-words/blob/master/words_alpha.txt) |
|
|
|
|
|
We then sample 500 words per language, weighted by a word score computed with this simple heuristic:
|
|
$$ |
|
|
S(w) = \frac{\sqrt{|w|}}{\frac{1}{|w|} \sum_{i=1}^{|w|} \text{freq}(u_i) + \sum_{j=1}^{|w|-1} \text{freq}(b_j)}
|
|
$$ |
|
|
|
|
|
where: |
|
|
- $|w|$ is the length of the word in characters.
|
|
- $u_i$ is the $i$-th unigram (single character) of the word.
|
|
- $b_j$ is the $j$-th bigram (pair of consecutive characters) of the word.
|
|
- $\text{freq}(u_i)$ is the frequency of unigram $u_i$ in the overall corpus. |
|
|
- $\text{freq}(b_j)$ is the frequency of bigram $b_j$ in the overall corpus. |
|
|
|
|
|
We use a random seed of 42.
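As a rough sketch (the card does not include the actual generation code), the scoring and seeded weighted sampling might look like the following in Python. The corpus, helper names, and the use of `random.Random(42).choices` are illustrative assumptions, not the dataset's real implementation:

```python
import math
import random
from collections import Counter

def build_freqs(words):
    """Count unigram and bigram frequencies over the whole word list."""
    uni, bi = Counter(), Counter()
    for w in words:
        uni.update(w)
        bi.update(w[i:i + 2] for i in range(len(w) - 1))
    return uni, bi

def score(w, uni, bi):
    """S(w) = sqrt(|w|) / (mean unigram freq + summed bigram freq)."""
    n = len(w)
    uni_term = sum(uni[c] for c in w) / n
    bi_term = sum(bi[w[i:i + 2]] for i in range(n - 1))
    return math.sqrt(n) / (uni_term + bi_term)

# Toy corpus standing in for the real Thai/English word lists.
words = ["strawberry", "apple", "banana", "cherry"]
uni, bi = build_freqs(words)
weights = [score(w, uni, bi) for w in words]

# Seeded weighted sampling; note `choices` draws with replacement,
# whereas the actual dataset likely samples 500 distinct words.
rng = random.Random(42)
sample = rng.choices(words, weights=weights, k=2)
```

Rarer character combinations lower the denominator, so unusual words get higher scores and are sampled more often.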
|
|
|
|
|
## Author |
|
|
Chompakorn Chaksangchaichot |