Favourez committed on
Commit 2d73c8b · verified · 1 parent: d4c8fd7

Upload 5 files

.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+Français[[:space:]]-[[:space:]]English[[:space:]]-[[:space:]]Búlu.xlsx filter=lfs diff=lfs merge=lfs -text
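The escaped pattern added above is how Git LFS records a filename containing spaces in `.gitattributes`: each space is written as the character class `[[:space:]]`. As an illustration only (not part of this commit), the original filename can be recovered from such a pattern with a plain string replacement:

```python
# Sketch: undo the [[:space:]] escaping that `git lfs track` writes
# into .gitattributes for filenames containing spaces.
pattern = "Français[[:space:]]-[[:space:]]English[[:space:]]-[[:space:]]Búlu.xlsx"
filename = pattern.replace("[[:space:]]", " ")
print(filename)  # Français - English - Búlu.xlsx
```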
Englis_Bulu_Tokenizer.ipynb ADDED
@@ -0,0 +1,1042 @@
+ {
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**Building a tokenizer for the English-to-Bulu language pair**\n",
+ "\n",
+ "This tokenizer operates on an extracted Bible JSON file.\n",
+ "\n"
+ ],
+ "metadata": {
+ "id": "OrXpcMwdOTF_"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 489
+ },
+ "id": "Li6PtThQEUpE",
+ "outputId": "b8d4fa43-e2e9-40b4-9d24-45349a16a095"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Requirement already satisfied: transformers in /usr/local/lib/python3.11/dist-packages (4.50.3)\n",
+ "Requirement already satisfied: tokenizers in /usr/local/lib/python3.11/dist-packages (0.21.1)\n",
+ "Requirement already satisfied: pandas in /usr/local/lib/python3.11/dist-packages (2.2.2)\n",
+ "Requirement already satisfied: openpyxl in /usr/local/lib/python3.11/dist-packages (3.1.5)\n",
+ "Requirement already satisfied: filelock in /usr/local/lib/python3.11/dist-packages (from transformers) (3.18.0)\n",
+ "Requirement already satisfied: huggingface-hub<1.0,>=0.26.0 in /usr/local/lib/python3.11/dist-packages (from transformers) (0.30.1)\n",
+ "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.11/dist-packages (from transformers) (2.0.2)\n",
+ "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.11/dist-packages (from transformers) (24.2)\n",
+ "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.11/dist-packages (from transformers) (6.0.2)\n",
+ "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.11/dist-packages (from transformers) (2024.11.6)\n",
+ "Requirement already satisfied: requests in /usr/local/lib/python3.11/dist-packages (from transformers) (2.32.3)\n",
+ "Requirement already satisfied: safetensors>=0.4.3 in /usr/local/lib/python3.11/dist-packages (from transformers) (0.5.3)\n",
+ "Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.11/dist-packages (from transformers) (4.67.1)\n",
+ "Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.11/dist-packages (from pandas) (2.8.2)\n",
+ "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.11/dist-packages (from pandas) (2025.2)\n",
+ "Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.11/dist-packages (from pandas) (2025.2)\n",
+ "Requirement already satisfied: et-xmlfile in /usr/local/lib/python3.11/dist-packages (from openpyxl) (2.0.0)\n",
+ "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub<1.0,>=0.26.0->transformers) (2025.3.2)\n",
+ "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub<1.0,>=0.26.0->transformers) (4.13.0)\n",
+ "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.11/dist-packages (from python-dateutil>=2.8.2->pandas) (1.17.0)\n",
+ "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.11/dist-packages (from requests->transformers) (3.4.1)\n",
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.11/dist-packages (from requests->transformers) (3.10)\n",
+ "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.11/dist-packages (from requests->transformers) (2.3.0)\n",
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.11/dist-packages (from requests->transformers) (2025.1.31)\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Saving english_bulu_dataset.json to english_bulu_dataset (1).json\n"
+ ]
+ }
+ ],
+ "source": [
+ "!pip install transformers tokenizers pandas openpyxl\n",
+ "\n",
+ "from google.colab import files\n",
+ "\n",
+ "uploaded = files.upload()\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Upload the Excel file and inspect its columns\n",
+ "import pandas as pd\n",
+ "\n",
+ "uploaded = files.upload()\n",
+ "\n",
+ "# Get column names\n",
+ "df = pd.read_excel(\"/content/Bible_EN_BULU.xlsx\")\n",
+ "print(\"Columns in the file:\")\n",
+ "print(df.columns)"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 142
+ },
+ "id": "NVc0dmFSImih",
+ "outputId": "0e17ae14-6588-4191-8c9b-9ceb45f60132"
+ },
+ "execution_count": 13,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Saving Bible_EN_BULU.xlsx to Bible_EN_BULU (2).xlsx\n",
+ "Columns in the file:\n",
+ "Index(['FRENCH', 'ENGLISH', 'FRENCH_ALPHABET_TRANSCRIPTION',\n",
+ " 'AGCL_TRANSCRIPTION'],\n",
+ " dtype='object')\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Load the JSON file\n",
+ "and display the first rows\n",
+ "to confirm it was successfully loaded"
+ ],
+ "metadata": {
+ "id": "CX2-ZH31JM3F"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import pandas as pd\n",
+ "import json\n",
+ "\n",
+ "try:\n",
+ "    with open('english_bulu_dataset.json', 'r', encoding='utf-8') as f:\n",
+ "        data = json.load(f)\n",
+ "    df = pd.DataFrame(data)\n",
+ "    display(df.head())\n",
+ "except FileNotFoundError:\n",
+ "    print(\"Error: 'english_bulu_dataset.json' not found.\")\n",
+ "    df = None\n",
+ "except json.JSONDecodeError:\n",
+ "    print(\"Error: Invalid JSON format in 'english_bulu_dataset.json'.\")\n",
+ "    df = None\n",
+ "except Exception as e:\n",
+ "    print(f\"An unexpected error occurred: {e}\")\n",
+ "    df = None"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 206
+ },
+ "id": "dtEzfBDWIpzP",
+ "outputId": "da1f666b-8825-4030-a924-bb98c38dbcbb"
+ },
+ "execution_count": 14,
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ " source \\\n",
+ "0 That which was from the beginning, which we ha... \n",
+ "1 (For the life was manifested, and we have seen... \n",
+ "2 That which we have seen and heard declare we u... \n",
+ "3 And these things write we unto you, that your ... \n",
+ "4 This then is the message which we have heard o... \n",
+ "\n",
+ " target \n",
+ "0 E jam e nga too a so atataʼa, e jam bi nga yen... \n",
+ "1 Amu ényiñ é nga yenéban, a bi nga yene je. Bi ... \n",
+ "2 E jam bi nga yen a wôk, bia kate mia de, nde b... \n",
+ "3 A bia tili mia jame te, nalé ate avaʼa dangan ... \n",
+ "4 Nyôna é ne foé bi nga wôʼô e be nye, a bia kal... "
+ ],
+ "text/html": [
+ "\n",
+ " <div id=\"df-02a19af7-ca18-4166-9beb-43e64dc2736e\" class=\"colab-df-container\">\n",
+ " <div>\n",
+ "<style scoped>\n",
+ " .dataframe tbody tr th:only-of-type {\n",
+ " vertical-align: middle;\n",
+ " }\n",
+ "\n",
+ " .dataframe tbody tr th {\n",
+ " vertical-align: top;\n",
+ " }\n",
+ "\n",
+ " .dataframe thead th {\n",
+ " text-align: right;\n",
+ " }\n",
+ "</style>\n",
+ "<table border=\"1\" class=\"dataframe\">\n",
+ " <thead>\n",
+ " <tr style=\"text-align: right;\">\n",
+ " <th></th>\n",
+ " <th>source</th>\n",
+ " <th>target</th>\n",
+ " </tr>\n",
+ " </thead>\n",
+ " <tbody>\n",
+ " <tr>\n",
+ " <th>0</th>\n",
+ " <td>That which was from the beginning, which we ha...</td>\n",
+ " <td>E jam e nga too a so atataʼa, e jam bi nga yen...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>1</th>\n",
+ " <td>(For the life was manifested, and we have seen...</td>\n",
+ " <td>Amu ényiñ é nga yenéban, a bi nga yene je. Bi ...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>2</th>\n",
+ " <td>That which we have seen and heard declare we u...</td>\n",
+ " <td>E jam bi nga yen a wôk, bia kate mia de, nde b...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>3</th>\n",
+ " <td>And these things write we unto you, that your ...</td>\n",
+ " <td>A bia tili mia jame te, nalé ate avaʼa dangan ...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>4</th>\n",
+ " <td>This then is the message which we have heard o...</td>\n",
+ " <td>Nyôna é ne foé bi nga wôʼô e be nye, a bia kal...</td>\n",
+ " </tr>\n",
+ " </tbody>\n",
+ "</table>\n",
+ "</div>\n",
+ " <div class=\"colab-df-buttons\">\n",
+ "\n",
+ " <div class=\"colab-df-container\">\n",
+ " <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-02a19af7-ca18-4166-9beb-43e64dc2736e')\"\n",
+ " title=\"Convert this dataframe to an interactive table.\"\n",
+ " style=\"display:none;\">\n",
+ "\n",
+ " <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\" viewBox=\"0 -960 960 960\">\n",
+ " <path d=\"M120-120v-720h720v720H120Zm60-500h600v-160H180v160Zm220 220h160v-160H400v160Zm0 220h160v-160H400v160ZM180-400h160v-160H180v160Zm440 0h160v-160H620v160ZM180-180h160v-160H180v160Zm440 0h160v-160H620v160Z\"/>\n",
+ " </svg>\n",
+ " </button>\n",
+ "\n",
+ " <style>\n",
+ " .colab-df-container {\n",
+ " display:flex;\n",
+ " gap: 12px;\n",
+ " }\n",
+ "\n",
+ " .colab-df-convert {\n",
+ " background-color: #E8F0FE;\n",
+ " border: none;\n",
+ " border-radius: 50%;\n",
+ " cursor: pointer;\n",
+ " display: none;\n",
+ " fill: #1967D2;\n",
+ " height: 32px;\n",
+ " padding: 0 0 0 0;\n",
+ " width: 32px;\n",
+ " }\n",
+ "\n",
+ " .colab-df-convert:hover {\n",
+ " background-color: #E2EBFA;\n",
+ " box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
+ " fill: #174EA6;\n",
+ " }\n",
+ "\n",
+ " .colab-df-buttons div {\n",
+ " margin-bottom: 4px;\n",
+ " }\n",
+ "\n",
+ " [theme=dark] .colab-df-convert {\n",
+ " background-color: #3B4455;\n",
+ " fill: #D2E3FC;\n",
+ " }\n",
+ "\n",
+ " [theme=dark] .colab-df-convert:hover {\n",
+ " background-color: #434B5C;\n",
+ " box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
+ " filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
+ " fill: #FFFFFF;\n",
+ " }\n",
+ " </style>\n",
+ "\n",
+ " <script>\n",
+ " const buttonEl =\n",
+ " document.querySelector('#df-02a19af7-ca18-4166-9beb-43e64dc2736e button.colab-df-convert');\n",
+ " buttonEl.style.display =\n",
+ " google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
+ "\n",
+ " async function convertToInteractive(key) {\n",
+ " const element = document.querySelector('#df-02a19af7-ca18-4166-9beb-43e64dc2736e');\n",
+ " const dataTable =\n",
+ " await google.colab.kernel.invokeFunction('convertToInteractive',\n",
+ " [key], {});\n",
+ " if (!dataTable) return;\n",
+ "\n",
+ " const docLinkHtml = 'Like what you see? Visit the ' +\n",
+ " '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
+ " + ' to learn more about interactive tables.';\n",
+ " element.innerHTML = '';\n",
+ " dataTable['output_type'] = 'display_data';\n",
+ " await google.colab.output.renderOutput(dataTable, element);\n",
+ " const docLink = document.createElement('div');\n",
+ " docLink.innerHTML = docLinkHtml;\n",
+ " element.appendChild(docLink);\n",
+ " }\n",
+ " </script>\n",
+ " </div>\n",
+ "\n",
+ "\n",
+ "<div id=\"df-a9cd7a11-103a-4062-90e2-d74fe151ab11\">\n",
+ " <button class=\"colab-df-quickchart\" onclick=\"quickchart('df-a9cd7a11-103a-4062-90e2-d74fe151ab11')\"\n",
+ " title=\"Suggest charts\"\n",
+ " style=\"display:none;\">\n",
+ "\n",
+ "<svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\" viewBox=\"0 0 24 24\"\n",
+ " width=\"24px\">\n",
+ " <g>\n",
+ " <path d=\"M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z\"/>\n",
+ " </g>\n",
+ "</svg>\n",
+ " </button>\n",
+ "\n",
+ "<style>\n",
+ " .colab-df-quickchart {\n",
+ " --bg-color: #E8F0FE;\n",
+ " --fill-color: #1967D2;\n",
+ " --hover-bg-color: #E2EBFA;\n",
+ " --hover-fill-color: #174EA6;\n",
+ " --disabled-fill-color: #AAA;\n",
+ " --disabled-bg-color: #DDD;\n",
+ " }\n",
+ "\n",
+ " [theme=dark] .colab-df-quickchart {\n",
+ " --bg-color: #3B4455;\n",
+ " --fill-color: #D2E3FC;\n",
+ " --hover-bg-color: #434B5C;\n",
+ " --hover-fill-color: #FFFFFF;\n",
+ " --disabled-bg-color: #3B4455;\n",
+ " --disabled-fill-color: #666;\n",
+ " }\n",
+ "\n",
+ " .colab-df-quickchart {\n",
+ " background-color: var(--bg-color);\n",
+ " border: none;\n",
+ " border-radius: 50%;\n",
+ " cursor: pointer;\n",
+ " display: none;\n",
+ " fill: var(--fill-color);\n",
+ " height: 32px;\n",
+ " padding: 0;\n",
+ " width: 32px;\n",
+ " }\n",
+ "\n",
+ " .colab-df-quickchart:hover {\n",
+ " background-color: var(--hover-bg-color);\n",
+ " box-shadow: 0 1px 2px rgba(60, 64, 67, 0.3), 0 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
+ " fill: var(--button-hover-fill-color);\n",
+ " }\n",
+ "\n",
+ " .colab-df-quickchart-complete:disabled,\n",
+ " .colab-df-quickchart-complete:disabled:hover {\n",
+ " background-color: var(--disabled-bg-color);\n",
+ " fill: var(--disabled-fill-color);\n",
+ " box-shadow: none;\n",
+ " }\n",
+ "\n",
+ " .colab-df-spinner {\n",
+ " border: 2px solid var(--fill-color);\n",
+ " border-color: transparent;\n",
+ " border-bottom-color: var(--fill-color);\n",
+ " animation:\n",
+ " spin 1s steps(1) infinite;\n",
+ " }\n",
+ "\n",
+ " @keyframes spin {\n",
+ " 0% {\n",
+ " border-color: transparent;\n",
+ " border-bottom-color: var(--fill-color);\n",
+ " border-left-color: var(--fill-color);\n",
+ " }\n",
+ " 20% {\n",
+ " border-color: transparent;\n",
+ " border-left-color: var(--fill-color);\n",
+ " border-top-color: var(--fill-color);\n",
+ " }\n",
+ " 30% {\n",
+ " border-color: transparent;\n",
+ " border-left-color: var(--fill-color);\n",
+ " border-top-color: var(--fill-color);\n",
+ " border-right-color: var(--fill-color);\n",
+ " }\n",
+ " 40% {\n",
+ " border-color: transparent;\n",
+ " border-right-color: var(--fill-color);\n",
+ " border-top-color: var(--fill-color);\n",
+ " }\n",
+ " 60% {\n",
+ " border-color: transparent;\n",
+ " border-right-color: var(--fill-color);\n",
+ " }\n",
+ " 80% {\n",
+ " border-color: transparent;\n",
+ " border-right-color: var(--fill-color);\n",
+ " border-bottom-color: var(--fill-color);\n",
+ " }\n",
+ " 90% {\n",
+ " border-color: transparent;\n",
+ " border-bottom-color: var(--fill-color);\n",
+ " }\n",
+ " }\n",
+ "</style>\n",
+ "\n",
+ " <script>\n",
+ " async function quickchart(key) {\n",
+ " const quickchartButtonEl =\n",
+ " document.querySelector('#' + key + ' button');\n",
+ " quickchartButtonEl.disabled = true; // To prevent multiple clicks.\n",
+ " quickchartButtonEl.classList.add('colab-df-spinner');\n",
+ " try {\n",
+ " const charts = await google.colab.kernel.invokeFunction(\n",
+ " 'suggestCharts', [key], {});\n",
+ " } catch (error) {\n",
+ " console.error('Error during call to suggestCharts:', error);\n",
+ " }\n",
+ " quickchartButtonEl.classList.remove('colab-df-spinner');\n",
818
+ " quickchartButtonEl.classList.add('colab-df-quickchart-complete');\n",
819
+ " }\n",
820
+ " (() => {\n",
821
+ " let quickchartButtonEl =\n",
822
+ " document.querySelector('#df-a9cd7a11-103a-4062-90e2-d74fe151ab11 button');\n",
823
+ " quickchartButtonEl.style.display =\n",
824
+ " google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
825
+ " })();\n",
826
+ " </script>\n",
827
+ "</div>\n",
828
+ "\n",
829
+ " </div>\n",
830
+ " </div>\n"
831
+ ],
832
+ "application/vnd.google.colaboratory.intrinsic+json": {
833
+ "type": "dataframe",
834
+ "summary": "{\n \"name\": \" df = None\",\n \"rows\": 5,\n \"fields\": [\n {\n \"column\": \"source\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 5,\n \"samples\": [\n \"(For the life was manifested, and we have seen it, and bear witness, and shew unto you that eternal life, which was with the Father, and was manifested unto us;)\",\n \"This then is the message which we have heard of him, and declare unto you, that God is light, and in him is no darkness at all.\",\n \"That which we have seen and heard declare we unto you, that ye also may have fellowship with us: and truly our fellowship is with the Father, and with his Son Jesus Christ.\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"target\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 5,\n \"samples\": [\n \"Amu \\u00e9nyi\\u00f1 \\u00e9 nga yen\\u00e9ban, a bi nga yene je. Bi ne beka\\u00f1ete, a bia ka\\u00f1ete mia \\u00e9nyi\\u00f1e ya melu mese \\u00e9 nga so e be Esaa a yen\\u00e9 e be bia. \",\n \"Ny\\u00f4na \\u00e9 ne fo\\u00e9 bi nga w\\u00f4\\u02bc\\u00f4 e be nye, a bia kalane mia je, e ne na: Zambe a ne \\u00e9fufup, a teke ata\\u00f1e dibi e kui nye beb\\u00e9. \",\n \"E jam bi nga yen a w\\u00f4k, bia kate mia de, nde bia be mia bia ye fulane minlem. A mfula\\u02bcane minleme wongan a ne ny\\u00f4 bia be Esaa a Mone w\\u00e9 Y\\u00e9sus Krist. \"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n }\n ]\n}"
835
+ }
836
+ },
837
+ "metadata": {}
838
+ }
839
+ ]
840
+ },
841
+ {
842
+ "cell_type": "code",
843
+ "source": [
844
+ "try:\n",
845
+ " bulu_sentences = df['target'].tolist()\n",
846
+ " print(len(bulu_sentences))\n",
847
+ "except KeyError:\n",
848
+ " print(\"Error: 'target' column not found in the DataFrame.\")\n",
849
+ "except Exception as e:\n",
850
+ " print(f\"An unexpected error occurred: {e}\")"
851
+ ],
852
+ "metadata": {
853
+ "colab": {
854
+ "base_uri": "https://localhost:8080/"
855
+ },
856
+ "id": "n8ODmLxfJXw6",
857
+ "outputId": "24dfd05f-637e-4ee7-f814-33ea9ebeaa25"
858
+ },
859
+ "execution_count": 15,
860
+ "outputs": [
861
+ {
862
+ "output_type": "stream",
863
+ "name": "stdout",
864
+ "text": [
865
+ "31297\n"
866
+ ]
867
+ }
868
+ ]
869
+ },
870
+ {
871
+ "cell_type": "code",
872
+ "source": [
873
+ "import json\n",
874
+ "from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers, processors\n",
875
+ "from transformers import BertTokenizerFast\n",
876
+ "from google.colab import files\n",
877
+ "\n",
878
+ "\n",
879
+ "# Load Bulu sentences from JSON\n",
880
+ "def load_bulu_sentences(filepath):\n",
881
+ " with open(filepath, 'r', encoding='utf-8') as f:\n",
882
+ " data = json.load(f)\n",
883
+ "    return [entry['target'] for entry in data if 'target' in entry and entry['target'].strip() != \"\"]\n",
884
+ "\n",
907
+ "\n",
908
+ "# Define Bulu language tokens\n",
909
+ "bulu_consonants = [\n",
910
+ " 'p', 'b', 't', 'd', 'k', 'g', 'kp', 'gb',\n",
911
+ " 'm', 'n', 'ɲ', 'ŋ', 'f', 'v', 's', 'z', 'ʃ', 'ʒ', 'h',\n",
912
+ " 'ʧ', 'ʤ', 'l', 'j', 'w'\n",
913
+ "]\n",
914
+ "\n",
915
+ "bulu_vowels = [\n",
916
+ " 'i', 'e', 'ɛ', 'a', 'ɔ', 'o', 'u',\n",
917
+ " 'ĩ', 'ẽ', 'ɛ̃', 'ã', 'ɔ̃', 'õ', 'ũ'\n",
918
+ "]\n",
919
+ "\n",
920
+ "bulu_tones = ['́', '̀', '̂', '̃', '̄']\n",
921
+ "special_chars = [\"...\", \"-\", \"—\", \"–\", \"_\", \"(\", \")\", \"[\", \"]\", \"<\", \">\", \" \"]\n",
922
+ "\n",
923
+ "special_tokens = [\"[UNK]\", \"[PAD]\", \"[CLS]\", \"[SEP]\", \"[MASK]\"] + \\\n",
924
+ " bulu_consonants + bulu_vowels + bulu_tones + special_chars\n",
925
+ "\n",
926
+ "# Train the tokenizer\n",
927
+ "def train_bulu_tokenizer(filepath):\n",
928
+ " sentences = load_bulu_sentences(filepath)\n",
929
+ "\n",
930
+ " tokenizer = Tokenizer(models.WordPiece(unk_token=\"[UNK]\"))\n",
931
+ " tokenizer.normalizer = normalizers.Sequence([normalizers.NFD(), normalizers.Lowercase()])\n",
932
+ " tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer()\n",
933
+ "\n",
934
+ " trainer = trainers.WordPieceTrainer(vocab_size=25000, special_tokens=special_tokens)\n",
935
+ " tokenizer.train_from_iterator(sentences, trainer=trainer)\n",
936
+ "\n",
937
+ " cls_id = tokenizer.token_to_id(\"[CLS]\")\n",
938
+ " sep_id = tokenizer.token_to_id(\"[SEP]\")\n",
939
+ "\n",
940
+ " tokenizer.post_processor = processors.TemplateProcessing(\n",
941
+ " single=\"[CLS]:0 $A:0 [SEP]:0\",\n",
942
+ " pair=\"[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1\",\n",
943
+ " special_tokens=[\n",
944
+ " (\"[CLS]\", cls_id),\n",
945
+ " (\"[SEP]\", sep_id),\n",
946
+ " ],\n",
947
+ " )\n",
948
+ "\n",
949
+ " return BertTokenizerFast(tokenizer_object=tokenizer)\n",
950
+ "\n",
951
+ "# Train\n",
952
+ "bulu_tokenizer = train_bulu_tokenizer(\"english_bulu_dataset.json\")\n",
953
+ "\n",
954
+ "# Test tokenization\n",
955
+ "test_sentences = [\n",
956
+ " \"A bia tili mia jame te, nalé ate avaʼa dangan da ye bo ngumba.\",\n",
957
+ " \"E nté ôse yʼényiñe jé ô mbe mimbu 930; wôna a nga wu.\",\n",
958
+ " \"E môt a kômbô jô na, a too e be nye a yiane ñhe wulu zen a nga wulu. \",\n",
959
+ " \"Nge bia jô na bi ne mfulaʼane minlem a nye, ve bi wuluʼu dibi été, bia laa minsos, a bi nji wulu éfufup été. \",\n",
960
+ " \"Yéréd a nga biaé Hénok a too mimbu \",\n",
961
+ " \"A e jame di nde e ne ndeme na bia yeme nye: nge bia baʼale metiñe mé. \",\n",
962
+ " \"Sét a nga biaé Enos a too mimbu \",\n",
963
+ "]\n",
964
+ "\n",
965
+ "for sentence in test_sentences:\n",
966
+ " print(f\"Sentence: {sentence}\")\n",
967
+ " print(\"Tokens:\", bulu_tokenizer.tokenize(sentence), \"\\n\")\n",
968
+ "\n",
969
+ "# Vocabulary size\n",
970
+ "print(\"Vocab Size:\", len(bulu_tokenizer.get_vocab()))\n",
971
+ "\n",
972
+ "# Tokenization efficiency\n",
973
+ "def efficiency(tokenizer, sents):\n",
974
+ " total = sum(len(tokenizer(s)['input_ids']) for s in sents)\n",
975
+ " avg = total / len(sents)\n",
976
+ " print(f\"Avg tokens per sentence: {avg:.2f}\")\n",
977
+ "\n",
978
+ "efficiency(bulu_tokenizer, test_sentences)\n",
979
+ "\n",
980
+ "# OOV Rate\n",
981
+ "def oov_rate(tokenizer, sents):\n",
982
+ " oov, total = 0, 0\n",
983
+ " for s in sents:\n",
984
+ " ids = tokenizer(s)['input_ids']\n",
985
+ " total += len(ids)\n",
986
+ " oov += ids.count(tokenizer.unk_token_id)\n",
987
+ " print(f\"OOV Rate: {(oov / total) * 100:.2f}%\")\n",
988
+ "\n",
989
+ "oov_rate(bulu_tokenizer, test_sentences)\n",
990
+ "\n",
991
+ "# Decode test\n",
992
+ "s = \"Na mbé God a yidî ebɔlo nnam ayi nnam a mbólo.\"\n",
993
+ "encoded = bulu_tokenizer(s)['input_ids']\n",
994
+ "decoded = bulu_tokenizer.decode(encoded)\n",
995
+ "print(f\"Original: {s}\")\n",
996
+ "print(f\"Decoded: {decoded}\")\n"
997
+ ],
998
+ "metadata": {
999
+ "colab": {
1000
+ "base_uri": "https://localhost:8080/"
1001
+ },
1002
+ "id": "bdqBCfxtJmhT",
1003
+ "outputId": "06b89b05-d242-4470-861a-244254e1c895"
1004
+ },
1005
+ "execution_count": 21,
1006
+ "outputs": [
1007
+ {
1008
+ "output_type": "stream",
1009
+ "name": "stdout",
1010
+ "text": [
1011
+ "Sentence: A bia tili mia jame te, nalé ate avaʼa dangan da ye bo ngumba.\n",
1012
+ "Tokens: ['a', ' ', 'b', 'i', 'a', ' ', 't', 'i', 'l', 'i', ' ', 'm', 'i', 'a', ' ', 'j', 'a', 'm', 'e', ' ', 't', 'e', '[UNK]', ' ', 'n', 'a', 'l', '[UNK]', ' ', 'a', 't', 'e', ' ', 'a', 'v', 'a', '[UNK]', 'a', ' ', 'd', 'a', 'n', 'g', 'a', 'n', ' ', 'd', 'a', ' ', '[UNK]', 'e', ' ', 'b', 'o', ' ', 'n', 'g', 'u', 'm', 'b', 'a', '[UNK]'] \n",
1013
+ "\n",
1014
+ "Sentence: E nté ôse yʼényiñe jé ô mbe mimbu 930; wôna a nga wu.\n",
1015
+ "Tokens: ['e', ' ', 'n', 't', '[UNK]', ' ', '[UNK]', 's', 'e', ' ', '[UNK]', 'n', '[UNK]', 'i', '[UNK]', 'e', ' ', 'j', '[UNK]', ' ', '[UNK]', ' ', 'm', 'b', 'e', ' ', 'm', 'i', 'm', 'b', 'u', ' ', '[UNK]', '[UNK]', ' ', 'w', '[UNK]', 'n', 'a', ' ', 'a', ' ', 'n', 'g', 'a', ' ', 'w', 'u', '[UNK]'] \n",
1016
+ "\n",
1017
+ "Sentence: E môt a kômbô jô na, a too e be nye a yiane ñhe wulu zen a nga wulu. \n",
1018
+ "Tokens: ['e', ' ', 'm', '[UNK]', 't', ' ', 'a', ' ', 'k', '[UNK]', 'm', 'b', '[UNK]', ' ', 'j', '[UNK]', ' ', 'n', 'a', '[UNK]', ' ', 'a', ' ', 't', 'o', 'o', ' ', 'e', ' ', 'b', 'e', ' ', 'n', '[UNK]', 'e', ' ', 'a', ' ', '[UNK]', 'i', 'a', 'n', 'e', ' ', '[UNK]', 'h', 'e', ' ', 'w', 'u', 'l', 'u', ' ', 'z', 'e', 'n', ' ', 'a', ' ', 'n', 'g', 'a', ' ', 'w', 'u', 'l', 'u', '[UNK]', ' '] \n",
1019
+ "\n",
1020
+ "Sentence: Nge bia jô na bi ne mfulaʼane minlem a nye, ve bi wuluʼu dibi été, bia laa minsos, a bi nji wulu éfufup été. \n",
1021
+ "Tokens: ['n', 'g', 'e', ' ', 'b', 'i', 'a', ' ', 'j', '[UNK]', ' ', 'n', 'a', ' ', 'b', 'i', ' ', 'n', 'e', ' ', 'm', 'f', 'u', 'l', 'a', '[UNK]', 'a', 'n', 'e', ' ', 'm', 'i', 'n', 'l', 'e', 'm', ' ', 'a', ' ', 'n', '[UNK]', 'e', '[UNK]', ' ', 'v', 'e', ' ', 'b', 'i', ' ', 'w', 'u', 'l', 'u', '[UNK]', 'u', ' ', 'd', 'i', 'b', 'i', ' ', '[UNK]', 't', '[UNK]', '[UNK]', ' ', 'b', 'i', 'a', ' ', 'l', 'a', 'a', ' ', 'm', 'i', 'n', 's', 'o', 's', '[UNK]', ' ', 'a', ' ', 'b', 'i', ' ', 'n', 'j', 'i', ' ', 'w', 'u', 'l', 'u', ' ', '[UNK]', 'f', 'u', 'f', 'u', 'p', ' ', '[UNK]', 't', '[UNK]', '[UNK]', ' '] \n",
1022
+ "\n",
1023
+ "Sentence: Yéréd a nga biaé Hénok a too mimbu \n",
1024
+ "Tokens: ['[UNK]', 'd', ' ', 'a', ' ', 'n', 'g', 'a', ' ', 'b', 'i', 'a', '[UNK]', ' ', '[UNK]', 'n', 'o', 'k', ' ', 'a', ' ', 't', 'o', 'o', ' ', 'm', 'i', 'm', 'b', 'u', ' '] \n",
1025
+ "\n",
1026
+ "Sentence: A e jame di nde e ne ndeme na bia yeme nye: nge bia baʼale metiñe mé. \n",
1027
+ "Tokens: ['a', ' ', 'e', ' ', 'j', 'a', 'm', 'e', ' ', 'd', 'i', ' ', 'n', 'd', 'e', ' ', 'e', ' ', 'n', 'e', ' ', 'n', 'd', 'e', 'm', 'e', ' ', 'n', 'a', ' ', 'b', 'i', 'a', ' ', '[UNK]', 'e', 'm', 'e', ' ', 'n', '[UNK]', 'e', '[UNK]', ' ', 'n', 'g', 'e', ' ', 'b', 'i', 'a', ' ', 'b', 'a', '[UNK]', 'a', 'l', 'e', ' ', 'm', 'e', 't', 'i', '[UNK]', 'e', ' ', 'm', '[UNK]', '[UNK]', ' '] \n",
1028
+ "\n",
1029
+ "Sentence: Sét a nga biaé Enos a too mimbu \n",
1030
+ "Tokens: ['[UNK]', 't', ' ', 'a', ' ', 'n', 'g', 'a', ' ', 'b', 'i', 'a', '[UNK]', ' ', 'e', 'n', 'o', 's', ' ', 'a', ' ', 't', 'o', 'o', ' ', 'm', 'i', 'm', 'b', 'u', ' '] \n",
1031
+ "\n",
1032
+ "Vocab Size: 60\n",
1033
+ "Avg tokens per sentence: 62.14\n",
1034
+ "OOV Rate: 11.49%\n",
1035
+ "Original: Na mbé God a yidî ebɔlo nnam ayi nnam a mbólo.\n",
1036
+ "Decoded: [CLS] n a m b [UNK] g o d a [UNK] i d [UNK] e b ɔ l o n n a m a [UNK] i n n a m a m b [UNK] l o [UNK] [SEP]\n"
1037
+ ]
1038
+ }
1039
+ ]
1040
+ }
1041
+ ]
1042
+ }
Français - English - Búlu.xlsx ADDED
@@ -0,0 +1,3 @@
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2d06fbe0a4e469195486f31c5161a43f9d01b744eb1c247c515dba55ebe7a662
3
+ size 114578
bulu-tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
english_bulu_dataset.json ADDED
The diff for this file is too large to render. See raw diff
 
fr_en_bulu.json ADDED
The diff for this file is too large to render. See raw diff