---
language:
- es
- af
- ar
- arz
- as
- bn
- fr
- sw
- eu
- ca
- zh
- en
- hi
- ur
- id
- pt
- vi
- gu
- kn
- ml
- mr
- ta
- te
- yo
tags:
- kenlm
- perplexity
- n-gram
- kneser-ney
- bigscience
license: "mit"
datasets:
- wikipedia
- oscar
---

# KenLM models
This repo contains several KenLM models trained on different tokenized datasets and languages.
KenLM models are probabilistic n-gram language models. One use case of these models is fast perplexity estimation for [filtering or sampling large datasets](https://huggingface.co/bertin-project/bertin-roberta-base-spanish). For example, one could use a KenLM model trained on French Wikipedia to run inference on a large dataset and filter out samples that are very unlikely to appear on Wikipedia (high perplexity), or very simple, non-informative sentences that may appear repeatedly (low perplexity).
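The filtering idea above can be sketched in a few lines. This is an illustrative sketch, not the repository's own pipeline: the model path and the perplexity thresholds are placeholder assumptions, and KenLM's `score()` returns a total log10 probability, which converts to perplexity as shown.

```python
def log10_to_perplexity(total_log10: float, n_tokens: int) -> float:
    """Convert a total log10 probability (what kenlm's score() returns)
    into perplexity: 10 ** (-log10_prob / token_count)."""
    return 10.0 ** (-total_log10 / n_tokens)

def keep(ppl: float, low: float = 30.0, high: float = 1000.0) -> bool:
    """Filter rule from the paragraph above: drop very low perplexity
    (repetitive, non-informative text) and very high perplexity
    (text unlike the training domain). Thresholds are illustrative,
    not tuned values from this repo."""
    return low <= ppl <= high

# Hypothetical usage with a trained model (path is a placeholder):
# import kenlm
# model = kenlm.Model("fr_wikipedia.arpa.bin")
# ppl = model.perplexity("une phrase déjà tokenisée")
# if keep(ppl):
#     ...  # keep the sample
```

In practice the thresholds would be chosen per language and per dataset by inspecting the perplexity distribution of a sample.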
# Dependencies
* KenLM: `pip install https://github.com/kpu/kenlm/archive/master.zip`
* SentencePiece: `pip install sentencepiece`
# Example:
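The snippet below is a sketch of the intended pipeline (SentencePiece tokenization followed by KenLM scoring), not the repository's original example: the file names are placeholders, and the `normalize` helper is an assumed simplification of the repo's cc_net-style preprocessing.

```python
import re

def normalize(text: str) -> str:
    """Basic normalization before tokenization: lowercase and collapse
    whitespace. This is an assumption for illustration; the actual
    preprocessing used to train these models may differ."""
    return re.sub(r"\s+", " ", text.strip().lower())

# Hypothetical end-to-end usage (model file names are placeholders):
# import kenlm
# import sentencepiece as spm
# sp = spm.SentencePieceProcessor(model_file="es.sp.model")
# lm = kenlm.Model("es.arpa.bin")
# tokens = " ".join(sp.encode(normalize("Hola Mundo"), out_type=str))
# print(lm.perplexity(tokens))
```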