This repository provides a **3-gram Language Model (LM)** for the **Tigre** language. It can be used for:

- Rescoring hypotheses in Automatic Speech Recognition (ASR).
- Improving text generation and fluency in Machine Translation (MT).
- Performing basic text filtering and quality control.

The model is provided in the standard text-based ARPA (`.arpa`) format; for efficient use in production environments, KenLM can compile it into its optimized binary format.

## Model Statistics

This language model was trained using KenLM on the **Tigre Monolingual Text Dataset (Tigre-Data 1.0)**.

  | Statistic | Value |
  _Note: The total raw training tokens used for this model can be found in the Tigre Monolingual Text Dataset card (approximately 14.7 million tokens)._
  ## Training Data Source
  This model was trained exclusively on the **BeitTigreAI/tigre-data-monolingual-text** dataset.
  More detailed information about the training data, including its domain, bias, preprocessing steps, and source statistics, can be found in the dataset's documentation:
  [Tigre Monolingual Text Dataset README](https://huggingface.co/datasets/BeitTigreAI/tigre-data-monolingual-text/blob/main/README.md)
  ## Files and Structure

The repository contains the following files:

    tigre-data-kenLM/
    ├── README.md
    ├── hf_readme.ipynb
    └── tigre-data-kenLM.arpa

  ## How to Use the Model
  You can load and query the model using the Python bindings for **KenLM** (`kenlm`).
  ### Installation

To use the model in Python, install the KenLM bindings:

```bash
pip install kenlm
```

  The following Python code demonstrates how to load the model and query it for log probability and perplexity:

```python
import kenlm
from huggingface_hub import hf_hub_download

# 1. Download the ARPA model file from the Hugging Face Hub
arpa_path = hf_hub_download(
    repo_id="BeitTigreAI/tigre-data-kenLM",
    filename="tigre-data-kenLM.arpa",
    repo_type="model"
)

# 2. Load the KenLM model
lm = kenlm.Model(arpa_path)

# Example sentence to score (replace with your own Tigre text)
test_sentence = "ዕርቃን ሓይልን ንግሥን"

# A. Log10 probability of the entire sentence
log_prob = lm.score(test_sentence)
print(f"Sentence: '{test_sentence}'")
print(f"Log10 Probability: {log_prob:.4f}")

# B. Perplexity of the entire sentence
perplexity = lm.perplexity(test_sentence)
print(f"Perplexity: {perplexity:.2f}")
```
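For interpreting the two numbers above: KenLM's `perplexity` is derived directly from the sentence's base-10 log probability. A minimal pure-Python sketch of that relationship, assuming KenLM's convention of counting the implicit end-of-sentence token `</s>` in the word count:

```python
def perplexity_from_log10(log10_prob: float, sentence: str) -> float:
    # KenLM divides the total log10 probability by the word count
    # plus one, accounting for the implicit </s> token appended
    # to every sentence during scoring.
    n_tokens = len(sentence.split()) + 1
    return 10 ** (-log10_prob / n_tokens)

# A 2-word sentence with total log10 probability -6.0
# is scored over 3 tokens: 10 ** (6.0 / 3) = 100.0
print(perplexity_from_log10(-6.0, "ዕርቃን ሓይልን"))  # → 100.0
```

This mirrors what `lm.perplexity(test_sentence)` returns; the helper name `perplexity_from_log10` is illustrative, not part of the `kenlm` API.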

## License
  The Tigre 3-gram Language Model is licensed under CC-BY-SA-4.0.
  ## Citation
  If you use this resource in your work, please cite the repository by referencing its Hugging Face entry:

**Recommended citation format:**

- Repository Name: Tigre 3-gram Language Model (KenLM)
- Organization: BeitTigreAI
- URL: https://huggingface.co/BeitTigreAI/tigre-data-kenLM
 