# Token Awareness Dataset

Built around the "strawberry" meme 🍓: most LLMs can't count the characters in a word. Since this kind of data is very easy to generate, we created this dataset as a benchmark just for fun.

This dataset supports only two languages: Thai (th) and English (en).

## Dataset Creation

We sample words for each language from these sources:

- Thai: [pythainlp's Thai words](https://github.com/PyThaiNLP/pythainlp/blob/dev/pythainlp/corpus/words_th.txt)
- English: [dwyl's English words](https://github.com/dwyl/english-words/blob/master/words_alpha.txt)

We then sample 500 words per language, weighted by each word's score, which is computed using this simple heuristic:

$$
S(w) = \frac{\sqrt{|w|}}{\frac{1}{|w|} \sum_{i=1}^{|w|} \text{freq}(u_i) + \sum_{j=1}^{|w|-1} \text{freq}(b_j)}
$$

where:
- $|w|$ is the length of the word.
- $u_i$ is the $i$-th unigram (single character) of the word.
- $b_j$ is the $j$-th bigram (pair of consecutive characters) of the word.
- $\text{freq}(u_i)$ is the frequency of unigram $u_i$ in the overall corpus.
- $\text{freq}(b_j)$ is the frequency of bigram $b_j$ in the overall corpus.
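
The heuristic above can be sketched in Python. This is a hypothetical illustration, not the actual generation code: it assumes the sums run over every unigram and bigram of the word, and that frequencies are raw occurrence counts over the whole word list.

```python
from collections import Counter
from math import sqrt

def build_freqs(words):
    """Count unigram and bigram frequencies over the whole word list."""
    uni, bi = Counter(), Counter()
    for w in words:
        uni.update(w)                                     # single characters
        bi.update(w[j:j + 2] for j in range(len(w) - 1))  # consecutive pairs
    return uni, bi

def word_score(word, uni, bi):
    """S(w): sqrt(|w|) over (mean unigram freq + summed bigram freq)."""
    n = len(word)
    mean_uni = sum(uni[c] for c in word) / n
    sum_bi = sum(bi[word[j:j + 2]] for j in range(n - 1))
    return sqrt(n) / (mean_uni + sum_bi)

# Tiny toy corpus just to exercise the functions
corpus = ["apple", "apply", "zyzzyva"]
uni, bi = build_freqs(corpus)
scores = {w: word_score(w, uni, bi) for w in corpus}
```

Words built from rare characters and character pairs receive higher scores, which biases the sample toward unusual words rather than ones made of very common letters.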

We use a random seed of 42 for the sampling.
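
The seeded, score-weighted sampling step might look like the following sketch (hypothetical: the word list and scores here are placeholders, and it assumes sampling without replacement):

```python
import numpy as np

# Placeholder corpus and scores; in practice these come from the
# word lists and the S(w) heuristic described above.
words = [f"word{i}" for i in range(1000)]
scores = np.arange(1, 1001, dtype=float)

rng = np.random.default_rng(42)        # the fixed seed mentioned above
probs = scores / scores.sum()          # normalize scores into probabilities
sample = rng.choice(words, size=500, replace=False, p=probs)
```

Fixing the seed makes the 500-word sample reproducible across runs of the generation script.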

## Author

Chompakorn Chaksangchaichot