hassan869835 committed
Commit 6401a19 · verified · 1 Parent(s): c7b8181

Upload 62 files

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+paper.pdf filter=lfs diff=lfs merge=lfs -text
+test.wav filter=lfs diff=lfs merge=lfs -text
0 ADDED
Binary file (300 Bytes)

1 ADDED
Binary file (600 Bytes)

LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Abdullah

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
PKG-INFO ADDED
@@ -0,0 +1,357 @@
Metadata-Version: 2.4
Name: quran-muaalem
Version: 0.0.3
Summary: Quran Phonetic Script with additional Quranic utils
Author-email: Abdullah <abdullahamlyossef@gmail.com>
License-Expression: MIT
Project-URL: Homepage, https://github.com/obadx/quran-muaalem
Project-URL: Issues, https://github.com/obadx/quran-muaalem/issues
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Operating System :: OS Independent
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: diff-match-patch>=20241021
Requires-Dist: numpy>=2.2.6
Requires-Dist: quran-transcript>=0.1.0
Requires-Dist: rich>=14.1.0
Requires-Dist: torch>=2.7.0
Requires-Dist: transformers>=4.55.0
Provides-Extra: test
Requires-Dist: librosa>=0.11.0; extra == "test"
Requires-Dist: numba>=0.61.2; extra == "test"
Requires-Dist: pytest; extra == "test"
Provides-Extra: ui
Requires-Dist: gradio>=5.43.1; extra == "ui"
Requires-Dist: librosa>=0.11.0; extra == "ui"
Requires-Dist: numba>=0.61.2; extra == "ui"
Requires-Dist: moviepy>=2.2.1; extra == "ui"
Dynamic: license-file

# Quran Muaalem

<div align="center">
<strong>By the help and grace of God, who has no partner, we present the smart Quran Muaalem (teacher), capable of detecting errors in recitation, Tajweed, and the attributes (sifat) of letters</strong>

[![PyPI][pypi-badge]][pypi-url]
[![Python Versions][python-badge]][python-url]
[![Hugging Face Model][hf-model-badge]][hf-model-url]
[![Hugging Face Dataset][hf-dataset-badge]][hf-dataset-url]
[![Google Colab][colab-badge]][colab-url]
[![arXiv][arxiv-badge]][arxiv-url]
[![MIT License][mit-badge]][mit-url]
[![Discord][discord-badge]][discord-url]

</div>

[pypi-badge]: https://img.shields.io/pypi/v/quran-muaalem.svg
[pypi-url]: https://pypi.org/project/quran-muaalem/
[mit-badge]: https://img.shields.io/github/license/obadx/quran-muaalem.svg
[mit-url]: https://github.com/obadx/quran-muaalem/blob/main/LICENSE
[python-badge]: https://img.shields.io/pypi/pyversions/quran-muaalem.svg
[python-url]: https://pypi.org/project/quran-muaalem/
[colab-badge]: https://img.shields.io/badge/Google%20Colab-Open%20in%20Colab-F9AB00?logo=google-colab&logoColor=white
[colab-url]: https://colab.research.google.com/drive/1If0G9NtdXiSRu6PVGtIMvLwxizF2jspn?usp=sharing
[hf-model-badge]: https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue
[hf-model-url]: https://huggingface.co/obadx/muaalem-model-v3_0
[hf-dataset-badge]: https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Dataset-orange
[hf-dataset-url]: https://huggingface.co/datasets/obadx/muaalem-annotated-v3
[arxiv-badge]: https://img.shields.io/badge/arXiv-Paper-b31b1b.svg
[arxiv-url]: https://arxiv.org/abs/2509.00094
[discord-badge]: https://img.shields.io/badge/Discord-Join%20Community-7289da?logo=discord&logoColor=white
[discord-url]: https://discord.gg/hJWW6fCH

<div align="center" style="background-color: #f0f8ff; border-left: 5px solid #4CAF50; padding: 15px; margin: 20px 0; border-radius: 5px;">
<h3 style="color: #2c3e50; margin-top: 0;">📖 Link to try the Quran Muaalem</h3>
<p style="margin: 10px 0;">Please click to try it:</p>
<a href="https://662a040e1863a5445c.gradio.live" style="display: inline-block; background-color: #4CAF50; color: white; padding: 10px 20px; text-decoration: none; border-radius: 5px; font-weight: bold; margin: 10px 0;">Link</a>
<p style="background-color: #ffeb3b; padding: 8px; border-radius: 3px; display: inline-block; margin: 10px 0;">
⚠️ <strong>Note:</strong> this link expires on <span style="color: #d32f2f; font-weight: bold;">27 August 2025</span>
</p>
</div>

[![Quran Muaalem demo video](https://img.youtube.com/vi/CsFoznO08-Q/0.jpg)](https://www.youtube.com/watch?v=CsFoznO08-Q)


## Features

* Trained on the phonetic script of the Holy Quran ([quran-transcript](https://github.com/obadx/quran-transcript)), making it able to detect errors in letters, Tajweed, and letter attributes (sifat)
* Reasonably sized model (660M parameters)
* Needs only 1.5 GB of GPU memory
* Novel architecture: multi-level CTC

## Architecture
A novel architecture, multi-level CTC, where each level is trained on a specific aspect of the recitation.

![multi-level CTC](./assets/figures/mutli-level-ctc.png)

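Each CTC level emits its own frame-wise label sequence over the shared encoder output, which is then collapsed by the standard CTC rule: merge consecutive repeats, then drop blanks. A minimal, framework-free sketch of that decoding step (the level names and IDs here are illustrative, not the model's actual vocabulary):

```python
def ctc_greedy_decode(frame_ids: list[int], blank_id: int = 0) -> list[int]:
    """Collapse a frame-level CTC output: merge consecutive repeats, then drop blanks."""
    decoded = []
    prev = None
    for idx in frame_ids:
        if idx != prev and idx != blank_id:
            decoded.append(idx)
        prev = idx
    return decoded

# One hypothetical frame-wise argmax per CTC level; multi-level CTC decodes each
# level independently over the same encoder frames.
levels = {
    "phonemes": [0, 7, 7, 0, 3, 3, 3, 0, 9],
    "ghonna":   [0, 1, 1, 0, 2, 2, 0, 0, 1],
}
decoded = {name: ctc_greedy_decode(frames) for name, frames in levels.items()}
print(decoded)  # {'phonemes': [7, 3, 9], 'ghonna': [1, 2, 1]}
```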
## Development Steps in Brief

* Collecting Quranic recitations from proficient reciters: [prepare-quran-dataset](https://github.com/obadx/prepare-quran-dataset)
* Segmenting the recitations by pause (waqf) rather than by verse, using [the segmenter](https://github.com/obadx/recitations-segmenter)
* Obtaining the Quranic text of the audio segments using [the Tarteel model](https://huggingface.co/tarteel-ai/whisper-base-ar-quran)
* Correcting the texts extracted by Tarteel using [the Tasmee' algorithm](https://github.com/obadx/quran-transcript)
* Converting the Imlaey (standard) script to the Uthmani script: [quran-transcript](https://github.com/obadx/quran-transcript)
* Converting the Uthmani script to the Quranic phonetic script, which describes all Tajweed rules except Ishmam: [quran-transcript](https://github.com/obadx/quran-transcript)
* Training the model on the [Wav2Vec2BERT](https://huggingface.co/docs/transformers/model_doc/wav2vec2-bert) architecture


## Using the Model


### Using the model through the Gradio UI

Install [uv](https://docs.astral.sh/uv/):

```bash
pip install uv
```
or
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Then install `ffmpeg`:

```bash
sudo apt-get update
sudo apt-get install -y ffmpeg
```

or through `anaconda`:
```bash
conda install ffmpeg
```

Run the `gradio` UI with a single command:
```bash
uvx --no-cache --from https://github.com/obadx/quran-muaalem.git[ui] quran-muaalem-ui
```
or
```bash
uvx quran-muaalem[ui] quran-muaalem-ui
```

### Through the Python API


#### Installation

First, install the required dependencies:

```bash
# Install system dependencies
sudo apt-get install -y ffmpeg libsndfile1 portaudio19-dev

# Install Python packages
pip install quran-muaalem librosa "numba>=0.61.2"
```
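The model expects 16 kHz mono audio; `librosa.load(..., sr=16000, mono=True)` in the example below resamples automatically. As a quick, standard-library-only sanity check that a WAV file already matches (independent of quran-muaalem; `check_wav` is an illustrative helper, not part of the package):

```python
import wave

def check_wav(path: str, expected_rate: int = 16000) -> dict:
    """Read WAV header fields and report whether the file matches the model's input spec."""
    with wave.open(path, "rb") as wf:
        info = {
            "sample_rate": wf.getframerate(),
            "channels": wf.getnchannels(),
            "duration_s": wf.getnframes() / wf.getframerate(),
        }
    info["needs_resampling"] = info["sample_rate"] != expected_rate or info["channels"] != 1
    return info

# Example: write a tiny 16 kHz mono file (10 ms of silence), then verify it.
with wave.open("probe.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)          # 16-bit samples
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 160)

print(check_wav("probe.wav"))
```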

## Basic Usage Example

```python
"""
Basic example of using the Quran Muaalem package for phonetic analysis of Quranic recitation.
"""

from dataclasses import asdict
import json
import logging

from quran_transcript import Aya, quran_phonetizer, MoshafAttributes
import torch
from librosa.core import load

# Import the main Muaalem class and the terminal explainer
# (adjust imports based on your actual package structure)
from quran_muaalem import Muaalem, explain_for_terminal

# Set up logging to see informative messages
logging.basicConfig(level=logging.INFO)

def analyze_recitation(audio_path):
    """
    Analyze a Quranic recitation audio file using the Muaalem model.

    Args:
        audio_path (str): Path to the audio file to analyze
    """
    # Configuration
    sampling_rate = 16000  # Must be 16000 Hz
    device = "cuda" if torch.cuda.is_available() else "cpu"  # Use GPU if available

    # Step 1: Prepare the Quranic reference text
    # Get the Uthmani script for a specific span (Surah 8, Aya 75 in this example)
    uthmani_ref = Aya(8, 75).get_by_imlaey_words(17, 9).uthmani

    # Step 2: Configure the recitation style (Moshaf attributes)
    moshaf = MoshafAttributes(
        rewaya="hafs",  # Recitation style (Hafs is the most common)
        madd_monfasel_len=2,  # Length of separated elongation
        madd_mottasel_len=4,  # Length of connected elongation
        madd_mottasel_waqf=4,  # Length of connected elongation when stopping
        madd_aared_len=2,  # Length of elongation when stopping (madd aared)
    )
    # see: https://github.com/obadx/prepare-quran-dataset?tab=readme-ov-file#moshaf-attributes-docs

    # Step 3: Convert text to phonetic representation
    # see docs for the phonetizer: https://github.com/obadx/quran-transcript
    phonetizer_out = quran_phonetizer(uthmani_ref, moshaf, remove_spaces=True)

    # Step 4: Initialize the Muaalem model
    muaalem = Muaalem(device=device)

    # Step 5: Load and prepare the audio
    wave, _ = load(audio_path, sr=sampling_rate, mono=True)

    # Step 6: Process the audio with the model
    # The model analyzes the phonetic properties of the recitation
    outs = muaalem(
        [wave],  # Audio data
        [phonetizer_out],  # Phonetic reference
        sampling_rate=sampling_rate,
    )

    # Step 7: Display the results
    for out in outs:
        print("Predicted Phonemes:", out.phonemes.text)

        # Display detailed phonetic features for each phoneme
        for sifa in out.sifat:
            print(json.dumps(asdict(sifa), indent=2, ensure_ascii=False))
            print("*" * 30)
        print("-" * 40)

    # Explain results, comparing predictions against the reference
    explain_for_terminal(
        outs[0].phonemes.text,
        phonetizer_out.phonemes,
        outs[0].sifat,
        phonetizer_out.sifat,
    )


if __name__ == "__main__":
    # Replace with the path to your audio file
    audio_path = "./assets/test.wav"

    try:
        analyze_recitation(audio_path)
    except Exception as e:
        logging.error(f"Error processing audio: {e}")
```

Output:

```bash
ءِننننَللَااهَبِكُللِشَيءِنعَلِۦۦمُ۾۾۾بَرَااااءَتُممممِنَللَااهِوَرَسُۥۥلِه
┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┓
┃ Phonemes ┃ Tafashie       ┃ Qalqla       ┃ Ghonna       ┃ Hams Or Jahr ┃ Safeer    ┃ Tikraar     ┃ Tafkheem Or Taqeeq ┃ Istitala      ┃ Shidda Or Rakhawa ┃ Itbaq    ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━┩
│ ءِ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ shadeed           │ monfateh │
│ ننننَ     │ not_motafashie │ not_moqalqal │ maghnoon     │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ between           │ monfateh │
│ للَ       │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ mofakham           │ not_mostateel │ between           │ monfateh │
│ اا       │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ mofakham           │ not_mostateel │ rikhw             │ monfateh │
│ هَ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ hams         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
│ بِ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ shadeed           │ monfateh │
│ كُ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ hams         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ shadeed           │ monfateh │
│ للِ       │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ between           │ monfateh │
│ شَ        │ motafashie     │ not_moqalqal │ not_maghnoon │ hams         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
│ ي        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
│ ءِ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ shadeed           │ monfateh │
│ ن        │ not_motafashie │ not_moqalqal │ maghnoon     │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ between           │ monfateh │
│ عَ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ between           │ monfateh │
│ لِ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ between           │ monfateh │
│ ۦۦ       │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
│ مُ        │ not_motafashie │ not_moqalqal │ maghnoon     │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ between           │ monfateh │
│ ۾۾۾      │ not_motafashie │ not_moqalqal │ maghnoon     │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
│ بَ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ shadeed           │ monfateh │
│ رَ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ mokarar     │ mofakham           │ not_mostateel │ between           │ monfateh │
│ اااا     │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ mofakham           │ not_mostateel │ rikhw             │ monfateh │
│ ءَ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ shadeed           │ monfateh │
│ تُ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ hams         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ shadeed           │ monfateh │
│ ممممِ     │ not_motafashie │ not_moqalqal │ maghnoon     │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ between           │ monfateh │
│ نَ        │ not_motafashie │ not_moqalqal │ maghnoon     │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ between           │ monfateh │
│ للَ       │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ mofakham           │ not_mostateel │ between           │ monfateh │
│ اا       │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ mofakham           │ not_mostateel │ rikhw             │ monfateh │
│ هِ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ hams         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
│ وَ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
│ رَ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ mokarar     │ mofakham           │ not_mostateel │ between           │ monfateh │
│ سُ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ hams         │ safeer    │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
│ ۥۥ       │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
│ لِ        │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ between           │ monfateh │
│ ه        │ not_motafashie │ not_moqalqal │ not_maghnoon │ hams         │ no_safeer │ not_mokarar │ moraqaq            │ not_mostateel │ rikhw             │ monfateh │
└──────────┴────────────────┴──────────────┴──────────────┴──────────────┴───────────┴─────────────┴────────────────────┴───────────────┴───────────────────┴──────────┘
```

### API Docs

```python
class Muaalem:
    def __init__(
        self,
        model_name_or_path: str = "obadx/muaalem-model-v3_2",
        device: str = "cpu",
        dtype=torch.bfloat16,
    ):
        """
        Initialize the Muaalem model.

        Args:
            model_name_or_path: the Hugging Face model name or path
            device: the device to run the model on
            dtype: the torch dtype. Default is `torch.bfloat16`, which the model was trained with
        """

    @torch.no_grad()
    def __call__(
        self,
        waves: list[list[float] | torch.FloatTensor | NDArray],
        ref_quran_phonetic_script_list: list[QuranPhoneticScriptOutput],
        sampling_rate: int,
    ) -> list[MuaalemOutput]:
        """Inference function for the Quran Muaalem project.

        Args:
            waves: batch of input waves, each of shape seq_len, in any of the formats above
            ref_quran_phonetic_script_list (list[QuranPhoneticScriptOutput]): list of the
                phonetized outputs of `quran_transcript.quran_phonetizer` with `remove_spaces=True`
            sampling_rate (int): has to be 16000

        Returns:
            list[MuaalemOutput]:
                A list of output objects, each containing phoneme predictions and their
                phonetic features (sifat) for a processed input.

                Each MuaalemOutput contains:
                    phonemes (Unit):
                        A dataclass representing the predicted phoneme sequence with:
                            text (str): Concatenated string of all phonemes.
                            probs (Union[torch.FloatTensor, list[float]]):
                                Confidence probabilities for each predicted phoneme.
                            ids (Union[torch.LongTensor, list[int]]):
                                Token IDs corresponding to each phoneme.

                    sifat (list[Sifa]):
                        A list of phonetic feature dataclasses (one per phoneme) with the
                        following optional properties (each is a SingleUnit or None):
                            - phonemes_group (str): the phonemes associated with the `sifa`
                            - hams_or_jahr (SingleUnit): either `hams` or `jahr`
                            - shidda_or_rakhawa (SingleUnit): either `shadeed`, `between`, or `rikhw`
                            - tafkheem_or_taqeeq (SingleUnit): either `mofakham`, `moraqaq`, or `low_mofakham`
                            - itbaq (SingleUnit): either `monfateh` or `motbaq`
                            - safeer (SingleUnit): either `safeer` or `no_safeer`
                            - qalqla (SingleUnit): either `moqalqal` or `not_moqalqal`
                            - tikraar (SingleUnit): either `mokarar` or `not_mokarar`
                            - tafashie (SingleUnit): either `motafashie` or `not_motafashie`
                            - istitala (SingleUnit): either `mostateel` or `not_mostateel`
                            - ghonna (SingleUnit): either `maghnoon` or `not_maghnoon`

                Each SingleUnit in Sifa properties contains:
                    text (str): The feature's categorical label (e.g., "hams", "shidda").
                    prob (float): Confidence probability for this feature.
                    idx (int): Identifier for the feature class.
        """
```
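Error detection ultimately comes down to aligning the predicted phoneme string against the reference produced by `quran_phonetizer`; the package ships `explain_for_terminal` for a rich report of this. As an illustration of the underlying idea only (not the package's actual algorithm, which relies on diff-match-patch), here is a minimal alignment using the standard library's `difflib`:

```python
import difflib

def diff_phonemes(predicted: str, reference: str) -> list[str]:
    """List insert/delete/replace differences between predicted and reference phoneme strings."""
    issues = []
    matcher = difflib.SequenceMatcher(a=reference, b=predicted)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            continue  # matching spans are not errors
        issues.append(f"{op}: reference {reference[i1:i2]!r} -> predicted {predicted[j1:j2]!r}")
    return issues

# Toy example with Latin placeholders standing in for phoneme characters:
# the reciter dropped one 'l' (e.g. a missing shadda-like doubling).
print(diff_phonemes(predicted="bismilah", reference="bismillah"))
```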
README.md CHANGED
@@ -1,13 +1,324 @@
- ---
- title: Youssef
- emoji: 🚀
- colorFrom: green
- colorTo: pink
- sdk: gradio
- sdk_version: 6.4.0
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
244
+ │ للَ │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr │ no_safeer │ not_mokarar │ mofakham │ not_mostateel │ between │ monfateh │
245
+ │ اا │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr │ no_safeer │ not_mokarar │ mofakham │ not_mostateel │ rikhw │ monfateh │
246
+ │ هِ │ not_motafashie │ not_moqalqal │ not_maghnoon │ hams │ no_safeer │ not_mokarar │ moraqaq │ not_mostateel │ rikhw │ monfateh │
247
+ │ وَ │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr │ no_safeer │ not_mokarar │ moraqaq │ not_mostateel │ rikhw │ monfateh │
248
+ │ رَ │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr │ no_safeer │ mokarar │ mofakham │ not_mostateel │ between │ monfateh │
249
+ │ سُ │ not_motafashie │ not_moqalqal │ not_maghnoon │ hams │ safeer │ not_mokarar │ moraqaq │ not_mostateel │ rikhw │ monfateh │
250
+ │ ۥۥ │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr │ no_safeer │ not_mokarar │ moraqaq │ not_mostateel │ rikhw │ monfateh │
251
+ │ لِ │ not_motafashie │ not_moqalqal │ not_maghnoon │ jahr │ no_safeer │ not_mokarar │ moraqaq │ not_mostateel │ between │ monfateh │
252
+ │ ه │ not_motafashie │ not_moqalqal │ not_maghnoon │ hams │ no_safeer │ not_mokarar │ moraqaq │ not_mostateel │ rikhw │ monfateh │
253
+ └──────────┴────────────────┴──────────────┴──────────────┴──────────────┴───────────┴─────────────┴────────────────────┴───────────────┴───────────────────┴──────────┘
254
+ ```
255
+
+ ### API Docs
+
+ ```python
+ class Muaalem:
+     def __init__(
+         self,
+         model_name_or_path: str = "obadx/muaalem-model-v3_2",
+         device: str = "cpu",
+         dtype=torch.bfloat16,
+     ):
+         """
+         Initialize the Muaalem model.
+
+         Args:
+             model_name_or_path: the Hugging Face model name or path
+             device: the device to run the model on
+             dtype: the torch dtype. Defaults to `torch.bfloat16`, the dtype the model was trained with
+         """
+
+     @torch.no_grad()
+     def __call__(
+         self,
+         waves: list[list[float] | torch.FloatTensor | NDArray],
+         ref_quran_phonetic_script_list: list[QuranPhoneticScriptOutput],
+         sampling_rate: int,
+     ) -> list[MuaalemOutput]:
+         """Inference function for the Quran Muaalem project.
+
+         Args:
+             waves: batch of input waves, each of shape seq_len, in any of the formats described above
+             ref_quran_phonetic_script_list (list[QuranPhoneticScriptOutput]): list of the
+                 phonetized output of `quran_transcript.quran_phonetizer` with `remove_space=True`
+             sampling_rate (int): has to be 16000
+
+         Returns:
+             list[MuaalemOutput]:
+                 A list of output objects, each containing phoneme predictions and their
+                 phonetic features (sifat) for a processed input.
+
+                 Each MuaalemOutput contains:
+                     phonemes (Unit):
+                         A dataclass representing the predicted phoneme sequence with:
+                             text (str): Concatenated string of all phonemes.
+                             probs (Union[torch.FloatTensor, list[float]]):
+                                 Confidence probabilities for each predicted phoneme.
+                             ids (Union[torch.LongTensor, list[int]]):
+                                 Token IDs corresponding to each phoneme.
+
+                     sifat (list[Sifa]):
+                         A list of phonetic feature dataclasses (one per phoneme) with the
+                         following optional properties (each is a SingleUnit or None):
+                             - phonemes_group (str): the phonemes associated with the `sifa`
+                             - hams_or_jahr (SingleUnit): either `hams` or `jahr`
+                             - shidda_or_rakhawa (SingleUnit): either `shadeed`, `between`, or `rikhw`
+                             - tafkheem_or_taqeeq (SingleUnit): either `mofakham`, `moraqaq`, or `low_mofakham`
+                             - itbaq (SingleUnit): either `monfateh` or `motbaq`
+                             - safeer (SingleUnit): either `safeer` or `no_safeer`
+                             - qalqla (SingleUnit): either `moqalqal` or `not_moqalqal`
+                             - tikraar (SingleUnit): either `mokarar` or `not_mokarar`
+                             - tafashie (SingleUnit): either `motafashie` or `not_motafashie`
+                             - istitala (SingleUnit): either `mostateel` or `not_mostateel`
+                             - ghonna (SingleUnit): either `maghnoon` or `not_maghnoon`
+
+                         Each SingleUnit in Sifa properties contains:
+                             text (str): The feature's categorical label (e.g., "hams", "shidda").
+                             prob (float): Confidence probability for this feature.
+                             idx (int): Identifier for the feature class.
+         """
+ ```
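
The `Sifa`/`SingleUnit` shapes documented above can be mimicked with plain dataclasses to show how one row of the output table earlier is assembled. This is an illustrative sketch of the documented fields only; the stand-in classes and the `to_row` helper are not part of the package:

```python
from dataclasses import dataclass


@dataclass
class SingleUnit:
    text: str    # categorical label, e.g. "jahr"
    prob: float  # confidence probability
    idx: int     # class id


@dataclass
class Sifa:
    phonemes_group: str       # the phonemes this sifa describes
    hams_or_jahr: SingleUnit  # only two of the documented sifat
    qalqla: SingleUnit        # are modeled in this sketch


def to_row(sifa: Sifa) -> list[str]:
    """One table row: the phoneme group followed by its feature labels."""
    return [sifa.phonemes_group, sifa.hams_or_jahr.text, sifa.qalqla.text]


row = to_row(Sifa("بِ", SingleUnit("jahr", 0.99, 1), SingleUnit("not_moqalqal", 0.97, 2)))
print(row)  # ['بِ', 'jahr', 'not_moqalqal']
```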
SOURCES.txt ADDED
@@ -0,0 +1,26 @@
+ LICENSE
+ README.md
+ pyproject.toml
+ src/quran_muaalem/__init__.py
+ src/quran_muaalem/decode.py
+ src/quran_muaalem/explain.py
+ src/quran_muaalem/explain_gradio.py
+ src/quran_muaalem/gradio_app.py
+ src/quran_muaalem/inference.py
+ src/quran_muaalem/muaalem_typing.py
+ src/quran_muaalem.egg-info/PKG-INFO
+ src/quran_muaalem.egg-info/SOURCES.txt
+ src/quran_muaalem.egg-info/dependency_links.txt
+ src/quran_muaalem.egg-info/entry_points.txt
+ src/quran_muaalem.egg-info/requires.txt
+ src/quran_muaalem.egg-info/top_level.txt
+ src/quran_muaalem/modeling/__init__.py
+ src/quran_muaalem/modeling/configuration_multi_level_ctc.py
+ src/quran_muaalem/modeling/modeling_multi_level_ctc.py
+ src/quran_muaalem/modeling/multi_level_tokenizer.py
+ src/quran_muaalem/modeling/vocab.py
+ tests/test_align_phonemes.py
+ tests/test_best_match.py
+ tests/test_exaplain_modules.py
+ tests/test_modules.py
+ tests/test_muaalem_infrence.py
__init__.cpython-312.pyc ADDED
Binary file (428 Bytes). View file
 
__init__.py ADDED
File without changes
architecture.md ADDED
@@ -0,0 +1,63 @@
+ # Architecture
+
+ The project uses a **multi-level CTC** architecture on top of Wav2Vec2BERT, where each level predicts a different sequence:
+
+ - the **phoneme sequence** (the base level)
+ - the **letter attributes (sifat)** (secondary levels: one head per attribute)
+
+ The core components live in `src/quran_muaalem/modeling/`.
+
+ ## Overall flow
+
+ ```
+ audio (16 kHz)
+ → feature extractor (AutoFeatureExtractor)
+ → Wav2Vec2BERT encoder
+ → multi-level CTC heads
+ → CTC decoding + alignment
+ → phonemes + sifat
+ ```
+
+ ## Multi-level CTC heads
+
+ `Wav2Vec2BertForMultilevelCTC` in `modeling_multi_level_ctc.py` creates one head per level:
+
+ - `phonemes`
+ - `hams_or_jahr`
+ - `shidda_or_rakhawa`
+ - `tafkheem_or_taqeeq`
+ - `itbaq`
+ - `safeer`
+ - `qalqla`
+ - `tikraar`
+ - `tafashie`
+ - `istitala`
+ - `ghonna`
+
+ Each head has an independent CTC loss. The loss weights are configured via:
+
+ - `level_to_vocab_size`
+ - `level_to_loss_weight`
+
+ (see `configuration_multi_level_ctc.py`).
+
+ ## Per-level tokenization
+
+ `MultiLevelTokenizer` builds a tokenizer for each level using `Wav2Vec2CTCTokenizer` and the model's vocabularies. The phoneme tokens are Arabic phonetic symbols, while the sifat tokens are Arabic tags wrapped in brackets, derived from `SifaOutput`.
+
+ ## Decoding and alignment
+
+ During inference:
+
+ 1. The model produces logits for each level.
+ 2. Each level is decoded with greedy CTC (top-1, merging repeated tokens and removing blanks).
+ 3. The phoneme sequence is split into phoneme groups.
+ 4. The sifat sequences are aligned with the groups, and with the reference when one is available.
+
+ The alignment logic lives in `src/quran_muaalem/decode.py`.
+
+ ## Notes for researchers
+
+ - **Base level:** phonemes are the most stable signal.
+ - **Sifat** depend on alignment quality; it is best to measure accuracy both with and without alignment.
+ - **Loss weights** directly affect sifat accuracy and should be tuned empirically.
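
The greedy CTC decoding step described above (merge repeats, then drop blanks) can be sketched in a few lines of plain Python. This is a minimal illustration of the rule, not the project's actual implementation in `decode.py`:

```python
def ctc_greedy_decode(frame_ids, blank_id=0):
    """Greedy (top-1) CTC decoding: merge consecutive repeats, then drop blanks."""
    decoded = []
    prev = None
    for token in frame_ids:
        # emit a token only when it differs from the previous frame
        # and is not the CTC blank symbol
        if token != prev and token != blank_id:
            decoded.append(token)
        prev = token
    return decoded


# frame-level argmax ids (0 = blank): "3 3" separated by a blank stays two tokens
print(ctc_greedy_decode([0, 3, 3, 0, 3, 5, 5, 0]))  # [3, 3, 5]
```

Note that a genuinely repeated phoneme must be separated by at least one blank frame in the model output, otherwise the repeats collapse into one token; this is exactly why CTC uses a blank symbol.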
byteorder ADDED
@@ -0,0 +1 @@
+ little
configuration_multi_level_ctc.cpython-312.pyc ADDED
Binary file (17.5 kB). View file
 
configuration_multi_level_ctc.py ADDED
@@ -0,0 +1,320 @@
+ from transformers import PretrainedConfig
+
+
+ class Wav2Vec2BertForMultilevelCTCConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`Wav2Vec2BertModel`]. It is used to
+     instantiate a Wav2Vec2Bert model according to the specified arguments, defining the model architecture.
+     Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2Bert
+     [facebook/wav2vec2-bert-rel-pos-large](https://huggingface.co/facebook/wav2vec2-bert-rel-pos-large)
+     architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         level_to_vocab_size (`dict[str, int]`, *optional*):
+             Every level has its own vocabulary, e.g. `{'phonemes': 44, 'hams_or_jahr': 3, ...}`.
+             Each value is the vocabulary size of that level, i.e. the number of different tokens that can be
+             represented by the `input_ids` of that level's CTC head.
+         level_to_loss_weight (`dict[str, float]`, *optional*, defaults to `{"phonemes": 0.4}`):
+             Every level has its own loss weight such that the weights of all levels sum to 1.
+             If you supply weights for only some levels, each remaining level gets a weight of
+             `(1 - sum_of_given_weights) / number_of_remaining_levels`.
+         hidden_size (`int`, *optional*, defaults to 1024):
+             Dimensionality of the encoder layers and the pooler layer.
+         num_hidden_layers (`int`, *optional*, defaults to 24):
+             Number of hidden layers in the Transformer encoder.
+         num_attention_heads (`int`, *optional*, defaults to 16):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         intermediate_size (`int`, *optional*, defaults to 4096):
+             Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
+         feature_projection_input_dim (`int`, *optional*, defaults to 160):
+             Input dimension of this model, i.e. the dimension after processing input audios with [`SeamlessM4TFeatureExtractor`] or [`Wav2Vec2BertProcessor`].
+         hidden_act (`str` or `function`, *optional*, defaults to `"swish"`):
+             The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
+             `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported.
+         hidden_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
+         activation_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for activations inside the fully connected layer.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the attention probabilities.
+         feat_proj_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout probability for the feature projection.
+         final_dropout (`float`, *optional*, defaults to 0.1):
+             The dropout probability for the final projection layer of [`Wav2Vec2BertForCTC`].
+         layerdrop (`float`, *optional*, defaults to 0.1):
+             The LayerDrop probability. See the [LayerDrop paper](https://huggingface.co/papers/1909.11556) for more
+             details.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         layer_norm_eps (`float`, *optional*, defaults to 1e-05):
+             The epsilon used by the layer normalization layers.
+         apply_spec_augment (`bool`, *optional*, defaults to `True`):
+             Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see
+             [SpecAugment: A Simple Data Augmentation Method for Automatic Speech
+             Recognition](https://huggingface.co/papers/1904.08779).
+         mask_time_prob (`float`, *optional*, defaults to 0.05):
+             Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
+             procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If
+             reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
+             masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
+             actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
+         mask_time_length (`int`, *optional*, defaults to 10):
+             Length of vector span along the time axis.
+         mask_time_min_masks (`int`, *optional*, defaults to 2):
+             The minimum number of masks of length `mask_feature_length` generated along the time axis, each time step,
+             irrespectively of `mask_feature_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length <
+             mask_time_min_masks`.
+         mask_feature_prob (`float`, *optional*, defaults to 0.0):
+             Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
+             masking procedure generates `mask_feature_prob*len(feature_axis)/mask_time_length` independent masks over
+             the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
+             span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
+             may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is
+             True`.
+         mask_feature_length (`int`, *optional*, defaults to 10):
+             Length of vector span along the feature axis.
+         mask_feature_min_masks (`int`, *optional*, defaults to 0):
+             The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
+             step, irrespectively of `mask_feature_prob`. Only relevant if
+             `mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`.
+         ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
+             Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
+             instance of [`Wav2Vec2BertForCTC`].
+         ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
+             Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
+             occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
+             of [`Wav2Vec2BertForCTC`].
+         use_weighted_layer_sum (`bool`, *optional*, defaults to `False`):
+             Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
+             instance of [`Wav2Vec2BertForSequenceClassification`].
+         classifier_proj_size (`int`, *optional*, defaults to 768):
+             Dimensionality of the projection before token mean-pooling for classification.
+         tdnn_dim (`tuple[int]` or `list[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`):
+             A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
+             module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers.
+         tdnn_kernel (`tuple[int]` or `list[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`):
+             A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the
+             *XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
+         tdnn_dilation (`tuple[int]` or `list[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`):
+             A tuple of integers defining the dilation factor of each 1D convolutional layer in *TDNN* module of the
+             *XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
+         xvector_output_dim (`int`, *optional*, defaults to 512):
+             Dimensionality of the *XVector* embedding vectors.
+         pad_token_id (`int`, *optional*, defaults to 0): The id of the _padding_ token.
+         bos_token_id (`int`, *optional*, defaults to 1): The id of the _beginning-of-stream_ token.
+         eos_token_id (`int`, *optional*, defaults to 2): The id of the _end-of-stream_ token.
+         add_adapter (`bool`, *optional*, defaults to `False`):
+             Whether a convolutional attention network should be stacked on top of the Wav2Vec2Bert Encoder. Can be very
+             useful for warm-starting Wav2Vec2Bert for SpeechEncoderDecoder models.
+         adapter_kernel_size (`int`, *optional*, defaults to 3):
+             Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
+         adapter_stride (`int`, *optional*, defaults to 2):
+             Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
+         num_adapter_layers (`int`, *optional*, defaults to 1):
+             Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
+             True`.
+         adapter_act (`str` or `function`, *optional*, defaults to `"relu"`):
+             The non-linear activation function (function or string) in the adapter layers. If string, `"gelu"`,
+             `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported.
+         use_intermediate_ffn_before_adapter (`bool`, *optional*, defaults to `False`):
+             Whether an intermediate feed-forward block should be stacked on top of the Wav2Vec2Bert Encoder and before the adapter network.
+             Only relevant if `add_adapter is True`.
+         output_hidden_size (`int`, *optional*):
+             Dimensionality of the encoder output layer. If not defined, this defaults to *hidden-size*. Only relevant
+             if `add_adapter is True`.
+         position_embeddings_type (`str`, *optional*, defaults to `"relative_key"`):
+             Can be set to:
+             - `rotary`, for rotary position embeddings.
+             - `relative`, for relative position embeddings.
+             - `relative_key`, for relative position embeddings as defined by Shaw in [Self-Attention
+               with Relative Position Representations (Shaw et al.)](https://huggingface.co/papers/1803.02155).
+             If left as `None`, no relative position embeddings are applied.
+         rotary_embedding_base (`int`, *optional*, defaults to 10000):
+             If `"rotary"` position embeddings are used, defines the size of the embedding base.
+         max_source_positions (`int`, *optional*, defaults to 5000):
+             If `"relative"` position embeddings are used, defines the maximum source input positions.
+         left_max_position_embeddings (`int`, *optional*, defaults to 64):
+             If `"relative_key"` (aka Shaw) position embeddings are used, defines the left clipping value for relative positions.
+         right_max_position_embeddings (`int`, *optional*, defaults to 8):
+             If `"relative_key"` (aka Shaw) position embeddings are used, defines the right clipping value for relative positions.
+         conv_depthwise_kernel_size (`int`, *optional*, defaults to 31):
+             Kernel size of convolutional depthwise 1D layer in Conformer blocks.
+         conformer_conv_dropout (`float`, *optional*, defaults to 0.1):
+             The dropout probability for all convolutional layers in Conformer blocks.
+
+     Example:
+
+     ```python
+     >>> from transformers import Wav2Vec2BertConfig, Wav2Vec2BertModel
+
+     >>> # Initializing a Wav2Vec2Bert facebook/wav2vec2-bert-rel-pos-large style configuration
+     >>> configuration = Wav2Vec2BertConfig()
+
+     >>> # Initializing a model (with random weights) from the facebook/wav2vec2-bert-rel-pos-large style configuration
+     >>> model = Wav2Vec2BertModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "multi_level_ctc"
+
+     def __init__(
+         self,
+         level_to_vocab_size: dict[str, int] = {},
+         level_to_loss_weight: dict[str, float] = {"phonemes": 0.4},
+         hidden_size=1024,
+         num_hidden_layers=24,
+         num_attention_heads=16,
+         intermediate_size=4096,
+         feature_projection_input_dim=160,
+         hidden_act="swish",
+         hidden_dropout=0.0,
+         activation_dropout=0.0,
+         attention_dropout=0.0,
+         feat_proj_dropout=0.0,
+         final_dropout=0.1,
+         layerdrop=0.1,
+         initializer_range=0.02,
+         layer_norm_eps=1e-5,
+         apply_spec_augment=True,
+         mask_time_prob=0.05,
+         mask_time_length=10,
+         mask_time_min_masks=2,
+         mask_feature_prob=0.0,
+         mask_feature_length=10,
+         mask_feature_min_masks=0,
+         ctc_loss_reduction="sum",
+         ctc_zero_infinity=False,
+         use_weighted_layer_sum=False,
+         classifier_proj_size=768,
+         tdnn_dim=(512, 512, 512, 512, 1500),
+         tdnn_kernel=(5, 3, 3, 1, 1),
+         tdnn_dilation=(1, 2, 3, 1, 1),
+         xvector_output_dim=512,
+         pad_token_id=0,
+         bos_token_id=1,
+         eos_token_id=2,
+         add_adapter=False,
+         adapter_kernel_size=3,
+         adapter_stride=2,
+         num_adapter_layers=1,
+         adapter_act="relu",
+         use_intermediate_ffn_before_adapter=False,
+         output_hidden_size=None,
+         position_embeddings_type="relative_key",
+         rotary_embedding_base=10000,
+         max_source_positions=5000,
+         left_max_position_embeddings=64,
+         right_max_position_embeddings=8,
+         conv_depthwise_kernel_size=31,
+         conformer_conv_dropout=0.1,
+         **kwargs,
+     ):
+         super().__init__(
+             **kwargs,
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+         )
+         self.hidden_size = hidden_size
+         self.num_hidden_layers = num_hidden_layers
+         self.intermediate_size = intermediate_size
+         self.hidden_act = hidden_act
+         self.num_attention_heads = num_attention_heads
+         self.feature_projection_input_dim = feature_projection_input_dim
+         self.hidden_dropout = hidden_dropout
+         self.attention_dropout = attention_dropout
+         self.activation_dropout = activation_dropout
+         self.feat_proj_dropout = feat_proj_dropout
+         self.final_dropout = final_dropout
+         self.layerdrop = layerdrop
+         self.layer_norm_eps = layer_norm_eps
+         self.initializer_range = initializer_range
+         self.level_to_vocab_size = level_to_vocab_size
+         self.use_weighted_layer_sum = use_weighted_layer_sum
+         self.max_source_positions = max_source_positions
+
+         loss_weights_sum = sum(level_to_loss_weight.values())
+         if loss_weights_sum > 1:
+             raise ValueError(
+                 f"The sum of loss weight per level has to be less than one! got: `{level_to_loss_weight}`"
+             )
+         unmentioned_loss_levels_count = len(
+             [l for l in self.level_to_vocab_size if l not in level_to_loss_weight]
+         )
+         for level in self.level_to_vocab_size:
+             if level not in level_to_loss_weight:
+                 level_to_loss_weight[level] = (
+                     1 - loss_weights_sum
+                 ) / unmentioned_loss_levels_count
+         self.level_to_loss_weight = level_to_loss_weight
+
+         if position_embeddings_type is not None and position_embeddings_type not in [
+             "rotary",
+             "relative",
+             "relative_key",
+         ]:
+             raise ValueError(
+                 """
+                 `position_embeddings_type` is not valid. It must be one of the following values:
+                 `["rotary", "relative", "relative_key"]` or left as `None`.
+                 """
+             )
+         self.position_embeddings_type = position_embeddings_type
+         self.rotary_embedding_base = rotary_embedding_base
+         self.left_max_position_embeddings = left_max_position_embeddings
+         self.right_max_position_embeddings = right_max_position_embeddings
+
+         # Conformer-block related
+         self.conv_depthwise_kernel_size = conv_depthwise_kernel_size
+         self.conformer_conv_dropout = conformer_conv_dropout
+
+         # fine-tuning config parameters for SpecAugment: https://huggingface.co/papers/1904.08779
+         self.apply_spec_augment = apply_spec_augment
+         self.mask_time_prob = mask_time_prob
+         self.mask_time_length = mask_time_length
+         self.mask_time_min_masks = mask_time_min_masks
+         self.mask_feature_prob = mask_feature_prob
+         self.mask_feature_length = mask_feature_length
+         self.mask_feature_min_masks = mask_feature_min_masks
+
+         # ctc loss
+         self.ctc_loss_reduction = ctc_loss_reduction
+         self.ctc_zero_infinity = ctc_zero_infinity
+
+         # adapter
+         self.add_adapter = add_adapter
+         self.adapter_kernel_size = adapter_kernel_size
+         self.adapter_stride = adapter_stride
+         self.num_adapter_layers = num_adapter_layers
+         self.adapter_act = adapter_act
+         self.output_hidden_size = (
+             output_hidden_size if output_hidden_size is not None else hidden_size
+         )
+         if use_intermediate_ffn_before_adapter and not add_adapter:
+             raise ValueError(
+                 "`use_intermediate_ffn_before_adapter` is `True` but `add_adapter` is `False`."
+             )
+         self.use_intermediate_ffn_before_adapter = use_intermediate_ffn_before_adapter
+
+         # SequenceClassification-specific parameter. Feel free to ignore for other classes.
+         self.classifier_proj_size = classifier_proj_size
+
+         # XVector-specific parameters. Feel free to ignore for other classes.
+         self.tdnn_dim = list(tdnn_dim)
+         self.tdnn_kernel = list(tdnn_kernel)
+         self.tdnn_dilation = list(tdnn_dilation)
+         self.xvector_output_dim = xvector_output_dim
+
+     @property
+     def inputs_to_logits_ratio(self):
+         ratio = self.feature_projection_input_dim * 2
+         if self.add_adapter:
+             ratio = ratio * (self.adapter_stride**self.num_adapter_layers)
+         return ratio
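
The weight-filling rule in `__init__` can be reproduced in isolation. The helper name below is hypothetical, but the arithmetic mirrors the constructor: levels without an explicit weight share the remaining loss mass equally.

```python
def fill_loss_weights(level_to_vocab_size, level_to_loss_weight):
    """Give every level without an explicit weight an equal share of the remainder."""
    weights = dict(level_to_loss_weight)
    given = sum(weights.values())
    if given > 1:
        raise ValueError("per-level loss weights must sum to at most 1")
    missing = [level for level in level_to_vocab_size if level not in weights]
    for level in missing:
        weights[level] = (1 - given) / len(missing)
    return weights


vocab_sizes = {"phonemes": 44, "hams_or_jahr": 3, "ghonna": 3}
print(fill_loss_weights(vocab_sizes, {"phonemes": 0.4}))
# {'phonemes': 0.4, 'hams_or_jahr': 0.3, 'ghonna': 0.3}
```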
conftest.py ADDED
@@ -0,0 +1,22 @@
+ import pytest
+
+
+ def pytest_addoption(parser):
+     parser.addoption(
+         "--skip-slow",
+         action="store_true",
+         default=False,
+         help="Skip tests marked as slow",
+     )
+
+
+ def pytest_configure(config):
+     config.addinivalue_line("markers", "slow: mark test as slow to run")
+
+
+ def pytest_collection_modifyitems(config, items):
+     if config.getoption("--skip-slow"):
+         skip_slow = pytest.mark.skip(reason="Skipped due to --skip-slow flag")
+         for item in items:
+             if "slow" in item.keywords:
+                 item.add_marker(skip_slow)
contributing.md ADDED
@@ -0,0 +1,7 @@
+ # Contributing
+
+ - Open issues or pull requests at https://github.com/obadx/quran-muaalem.
+ - Try to keep changes focused within either `src/quran_muaalem/` or `quran-transcript/` as much as possible.
+ - Add or update tests in the appropriate suite (`tests/` or `quran-transcript/tests/`).
+
+ If you need a style guide or formatting settings, let me know and I will add them.
data.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:908eb046e8ce28359c047aa9701517ceab69ce2f14cd5aa2bb551b518534042c
+ size 14476
decode.cpython-312.pyc ADDED
Binary file (14.1 kB). View file
 
decode.py ADDED
@@ -0,0 +1,580 @@
1
+ import logging
2
+ from typing import Sequence, Any
3
+ from dataclasses import dataclass
4
+ import torch
5
+ import numpy as np
6
+ from numpy.typing import NDArray
7
+
8
+ from .modeling.vocab import PAD_TOKEN_IDX
9
+ from .muaalem_typing import Unit
10
+
11
+
12
+ # def align_predicted_sequence(
13
+ # ref: Sequence[Any], predicted: Sequence[Any]
14
+ # ) -> Sequence[Any]:
15
+ # """Aligns the preficeted sequence to the ref sequnce
16
+ #
17
+ # Example (1): `predicted` length > `ref` length
18
+ # ref: abcde
19
+ # predicted: abcdef
20
+ # Returns: abcde
21
+ #
22
+ # Example (2): `predicted` length <`ref` length
23
+ # ref: abcde
24
+ # predicted: abcd
25
+ # Returns: abcde
26
+ #
27
+ # Returns:
28
+ # Sequnce[Any]: new precicted sequence that best matches the ref sequence
29
+ # """
30
+ # n = len(ref)
31
+ # m = len(predicted)
32
+ # if n == m:
33
+ # return predicted
34
+ # if n == 0:
35
+ # return []
36
+ # if m == 0:
37
+ # return ref
38
+ #
39
+ # dp = [[0] * (m + 1) for _ in range(n + 1)]
40
+ #
41
+ # for i in range(1, n + 1):
42
+ # dp[i][0] = 0
43
+ # for j in range(1, m + 1):
44
+ # dp[0][j] = 0
45
+ #
46
+ # for i in range(1, n + 1):
47
+ # for j in range(1, m + 1):
48
+ # insertion = dp[i - 1][j]
49
+ # deletion = dp[i][j - 1]
50
+ # match_cost = dp[i - 1][j - 1] + (1 if ref[i - 1] != predicted[j - 1] else 0)
51
+ # dp[i][j] = min(insertion, deletion, match_cost)
52
+ #
53
+ # i, j = n, m
54
+ # output_chars = []
55
+ # while i > 0 or j > 0:
56
+ # if (
57
+ # i > 0
58
+ # and j > 0
59
+ # and ref[i - 1] == predicted[j - 1]
60
+ # and dp[i][j] == dp[i - 1][j - 1]
61
+ # ):
62
+ # output_chars.append(predicted[j - 1])
63
+ # i -= 1
64
+ # j -= 1
65
+ # elif i > 0 and dp[i][j] == dp[i - 1][j]:
66
+ # output_chars.append(ref[i - 1])
67
+ # i -= 1
68
+ # elif j > 0 and dp[i][j] == dp[i][j - 1]:
69
+ # j -= 1
70
+ # else:
71
+ # output_chars.append(predicted[j - 1])
72
+ # i -= 1
73
+ # j -= 1
74
+ #
75
+ # # return "".join(output_chars[::-1])
76
+ # return output_chars[::-1]
77
+
78
+
79
+ # def align_predicted_sequence(ref, predicted):
80
+ # n = len(ref)
81
+ # m = len(predicted)
82
+ # if m == n:
83
+ # return predicted
84
+ #
85
+ # INF = 10**9
86
+ # dp = [[0] * (m + 1) for _ in range(n + 1)]
87
+ # choice = [[0] * (m + 1) for _ in range(n + 1)]
88
+ #
89
+ # for j in range(m + 1):
90
+ # dp[0][j] = 0
91
+ #
92
+ # for i in range(1, n + 1):
93
+ # dp[i][0] = INF
94
+ #
95
+ # for i in range(1, n + 1):
96
+ # for j in range(1, m + 1):
97
+ # # above
98
+ # option1 = dp[i][j - 1]
99
+ # # adjacent
100
+ # if j >= i:
101
+ # cost = 0 if predicted[j - 1] == ref[i - 1] else 1
102
+ # option2 = dp[i - 1][j - 1] + cost
103
+ # else:
104
+ # option2 = INF
105
+ #
106
+ # if option2 <= option1:
107
+ # dp[i][j] = option2
108
+ # choice[i][j] = 1
109
+ # else:
110
+ # dp[i][j] = option1
111
+ # choice[i][j] = 0
112
+ #
113
+ # print(np.array(dp))
114
+ # print(np.array(choice))
115
+ #
116
+ # res_chars = []
117
+ # i, j = n, m
118
+ # while i > 0 and j > 0:
119
+ # if choice[i][j] == 1:
120
+ # res_chars.append(predicted[j - 1])
121
+ # i -= 1
122
+ # j -= 1
123
+ # else:
124
+ # j -= 1
125
+ #
126
+ # return res_chars[::-1]
127
+
128
+
129
+ def align_chunked_phonemes_sequence(
130
+ ref: list[list[str]],
131
+ predicted: list[list[str]],
132
+ ) -> list[bool]:
133
+ """Aligns at the phonemes level to get a mask that describes what is missing
134
+
135
+ Returns the mask over the `ref` input that best matches the `predicted`
136
+ Note: this is not an element-wise comparison; it returns the mask of the best-matching sequence (even with errors)
137
+
138
+ Example (1): `predicted` length > `ref` length
139
+ ref: abcde
140
+ predicted: abcdef
141
+ Returns: [T, T, T, T, T]
142
+
143
+ Example (2): `predicted` length <`ref` length
144
+ ref: abcde
145
+ predicted: abcd
146
+ Returns: [T, T, T, T, F]
147
+
148
+ Example (3): `predicted` has a substitution and length < `ref` length
149
+ ref: afcde
150
+ predicted: abcd
151
+ Returns: [T, T, T, T, F]
152
+
153
+
154
+ len(mask) == len(ref)
155
+
156
+ """
157
+
158
+ n = len(predicted)
159
+ m = len(ref)
160
+
161
+ if len(predicted) == len(ref):
162
+ return [True] * len(predicted)
163
+
164
+ if m == 0:
165
+ raise ValueError("`ref` length must not be zero length")
166
+
167
+ dp = [[0] * (m + 1) for _ in range(n + 1)]
168
+ choice = [[0] * (m + 1) for _ in range(n + 1)]
169
+
170
+ for j in range(m + 1):
171
+ dp[0][j] = 0
172
+
173
+ for i in range(1, n + 1):
174
+ dp[i][0] = i
175
+
176
+ for i in range(1, n + 1):
177
+ for j in range(1, m + 1):
178
+ option1 = dp[i][j - 1]
179
+ option2 = dp[i - 1][j] + 1
180
+ cost = 0 if predicted[i - 1][0] == ref[j - 1][0] else 1
181
+ option3 = dp[i - 1][j - 1] + cost
182
+
183
+ if option3 <= option1 and option3 <= option2:
184
+ dp[i][j] = option3
185
+ choice[i][j] = 3
186
+ elif option1 <= option2:
187
+ dp[i][j] = option1
188
+ choice[i][j] = 1
189
+ else:
190
+ dp[i][j] = option2
191
+ choice[i][j] = 2
192
+
193
+ i = n
194
+ j = m
195
+ mask = []
196
+ # res_chars = []
197
+ while i > 0 or j > 0:
198
+ if i > 0 and j > 0:
199
+ if choice[i][j] == 3:
200
+ # res_chars.append(ref[j - 1])
201
+ mask.append(True)
202
+ i -= 1
203
+ j -= 1
204
+ elif choice[i][j] == 2:
205
+ # res_chars.append(missing_placeholder)
206
+ i -= 1
207
+ else:
208
+ j -= 1
209
+ mask.append(False)
210
+ elif i > 0:
211
+ # res_chars.append(missing_placeholder)
212
+ i -= 1
213
+ else:
214
+ j -= 1
215
+ mask.append(False)
216
+
217
+ return mask[::-1]
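The behavior described in the docstring is a standard edit-distance alignment with backtracking. As a self-contained illustration (a simplified sketch with a hypothetical `ref_coverage_mask` helper, not the function above — it ignores the chunk structure and compares items directly):

```python
def ref_coverage_mask(ref, predicted):
    """Mark which `ref` positions are matched/substituted by a
    prediction (True) and which are missing (False), via
    edit-distance backtracking. len(mask) == len(ref)."""
    n, m = len(ref), len(predicted)
    # dp[i][j] = cost of aligning ref[:i] with predicted[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i  # every unmatched ref item costs 1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != predicted[j - 1])
            drop = dp[i][j - 1]      # discard predicted[j-1]
            miss = dp[i - 1][j] + 1  # ref[i-1] left unmatched
            dp[i][j] = min(sub, drop, miss)
    mask, i, j = [], n, m
    while i > 0:
        if j > 0 and dp[i][j] == dp[i - 1][j - 1] + (ref[i - 1] != predicted[j - 1]):
            mask.append(True)   # matched or substituted
            i -= 1
            j -= 1
        elif dp[i][j] == dp[i - 1][j] + 1:
            mask.append(False)  # ref position has no counterpart
            i -= 1
        else:
            j -= 1              # extra prediction, dropped
    return mask[::-1]
```

This reproduces the docstring examples: `ref_coverage_mask("abcde", "abcd")` gives `[True, True, True, True, False]`, and a substitution (`"afcde"` vs `"abcd"`) still counts as covered.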
218
+
219
+
220
+ def align_predicted_sequence(
221
+ ref: Sequence[Any] | torch.LongTensor,
222
+ predicted: Sequence[Any] | torch.LongTensor,
223
+ missing_placeholder=-100,
224
+ ) -> tuple[Sequence[Any] | torch.LongTensor, list[bool]]:
225
+ """Aligns the predicted sequence to the ref sequence
226
+
227
+ Example (1): `predicted` length > `ref` length
228
+ ref: abcde
229
+ predicted: abcdef
230
+ Returns: abcde
231
+
232
+ Example (2): `predicted` length <`ref` length
233
+ ref: abcde
234
+ predicted: abcd
235
+ Returns: abcd(missing_placeholder)
236
+
237
+ Returns:
238
+ tuple[Sequence[Any], list[bool]]: a new predicted sequence that best matches the ref sequence, plus a boolean keep-mask over `predicted`
239
+ """
240
+
241
+ n = len(ref)
242
+ m = len(predicted)
243
+
244
+ if len(ref) == len(predicted):
245
+ return predicted, [True] * len(ref)
246
+
247
+ if m == 0:
248
+ return [missing_placeholder] * n, []
249
+
250
+ dp = [[0] * (m + 1) for _ in range(n + 1)]
251
+ choice = [[0] * (m + 1) for _ in range(n + 1)]
252
+
253
+ for j in range(m + 1):
254
+ dp[0][j] = 0
255
+
256
+ for i in range(1, n + 1):
257
+ dp[i][0] = i
258
+
259
+ for i in range(1, n + 1):
260
+ for j in range(1, m + 1):
261
+ option1 = dp[i][j - 1]
262
+ option2 = dp[i - 1][j] + 1
263
+ cost = 0 if ref[i - 1] == predicted[j - 1] else 1
264
+ option3 = dp[i - 1][j - 1] + cost
265
+
266
+ if option3 <= option1 and option3 <= option2:
267
+ dp[i][j] = option3
268
+ choice[i][j] = 3
269
+ elif option1 <= option2:
270
+ dp[i][j] = option1
271
+ choice[i][j] = 1
272
+ else:
273
+ dp[i][j] = option2
274
+ choice[i][j] = 2
275
+
276
+ i = n
277
+ j = m
278
+ mask = []
279
+ res_chars = []
280
+ while i > 0 or j > 0:
281
+ if i > 0 and j > 0:
282
+ if choice[i][j] == 3:
283
+ res_chars.append(predicted[j - 1])
284
+ mask.append(True)
285
+ i -= 1
286
+ j -= 1
287
+ elif choice[i][j] == 2:
288
+ res_chars.append(missing_placeholder)
289
+ i -= 1
290
+ else:
291
+ j -= 1
292
+ mask.append(False)
293
+ elif i > 0:
294
+ res_chars.append(missing_placeholder)
295
+ i -= 1
296
+ else:
297
+ j -= 1
298
+ mask.append(False)
299
+
300
+ return res_chars[::-1], mask[::-1]
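For the sequence-returning variant, a minimal stand-alone sketch of the same idea (illustrative only; the real function also returns the keep-mask and accepts tensors; `align_predicted` and `missing` are names chosen here for the example):

```python
def align_predicted(ref, predicted, missing=None):
    """Align `predicted` onto `ref` by edit-distance backtracking:
    extra predictions are dropped, unmatched ref positions are
    filled with `missing`, so the output has len(ref) items."""
    n, m = len(ref), len(predicted)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i  # every unmatched ref item costs 1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != predicted[j - 1])
            drop = dp[i][j - 1]      # discard predicted[j-1]
            miss = dp[i - 1][j] + 1  # ref[i-1] left unmatched
            dp[i][j] = min(sub, drop, miss)
    out, i, j = [], n, m
    while i > 0:
        if j > 0 and dp[i][j] == dp[i - 1][j - 1] + (ref[i - 1] != predicted[j - 1]):
            out.append(predicted[j - 1])  # matched or substituted
            i -= 1
            j -= 1
        elif dp[i][j] == dp[i - 1][j] + 1:
            out.append(missing)           # ref position unmatched
            i -= 1
        else:
            j -= 1                        # extra prediction, dropped
    return out[::-1]
```

This matches the docstring examples: a longer prediction is truncated to the reference, a shorter one is padded with the placeholder.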
301
+
302
+
303
+ @dataclass
304
+ class CTCDecodeOut:
305
+ """
306
+ Both are 1D Tensors
307
+ """
308
+
309
+ ids: torch.LongTensor
310
+ p: torch.FloatTensor
311
+
312
+
313
+ def ctc_decode(
314
+ batch_ids: torch.LongTensor,
315
+ batch_probs: torch.FloatTensor,
316
+ blank_id=PAD_TOKEN_IDX,
317
+ collapse_consecutive=True,
318
+ ) -> list[CTCDecodeOut]:
319
+ """
320
+ batch_ids (torch.LongTensor): batch of integer ids of shape (batch, sequence_len)
321
+ batch_probs (torch.FloatTensor): batch of float32 probabilities of shape (batch, sequence_len)
322
+
323
+ Return:
324
+ list[CTCDecodeOut]: per-sequence decoded ids and their probabilities
325
+
326
+
327
+ """
328
+ outs = []
329
+ assert batch_ids.shape == batch_probs.shape
330
+ for seq_idx, seq in enumerate(batch_ids):
331
+ if collapse_consecutive:
332
+ tokens = []
333
+ probs = []
334
+ start = 0
335
+ end = 0
336
+ if len(seq) == 1 and seq[0] != blank_id:
337
+ tokens.append(seq[0])
338
+ probs.append(batch_probs[seq_idx][0])
339
+
340
+ for idx in range(len(seq) - 1):
341
+ curr = seq[idx]
342
+ next = seq[idx + 1]
343
+ # Last Item
344
+ if idx == len(seq) - 2 and curr != blank_id:
345
+ if curr == next:
346
+ end = idx + 2
347
+ tokens.append(curr)
348
+ probs.append(
349
+ batch_probs[seq_idx][start:end].sum() / (end - start)
350
+ )
351
+ elif curr != next:
352
+ end = idx + 1
353
+ tokens.append(curr)
354
+ probs.append(
355
+ batch_probs[seq_idx][start:end].sum() / (end - start)
356
+ )
357
+ tokens.append(next)
358
+ probs.append(batch_probs[seq_idx][idx + 1])
359
+ # Normal Case
360
+ elif curr != next and curr != blank_id:
361
+ end = idx + 1
362
+ tokens.append(curr)
363
+ probs.append(batch_probs[seq_idx][start:end].sum() / (end - start))
364
+ start = end
365
+ elif curr == blank_id:
366
+ start = idx + 1
367
+
368
+ outs.append(
369
+ CTCDecodeOut(
370
+ ids=torch.LongTensor(tokens),
371
+ p=torch.FloatTensor(probs),
372
+ )
373
+ )
374
+ else:
375
+ mask = seq != blank_id
376
+ tokens = seq[mask]
377
+ probs = batch_probs[seq_idx][mask]
378
+ outs.append(CTCDecodeOut(ids=tokens, p=probs))
379
+ return outs
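The collapse logic above can be sketched in plain Python (a simplified re-implementation for illustration, not the function above): runs of the same id are merged, blank ids dropped, and frame probabilities averaged over each merged run.

```python
def ctc_collapse(ids, probs, blank=0):
    """Greedy CTC collapse: merge consecutive repeats, drop blanks,
    and average frame probabilities over each merged run."""
    tokens, out_probs = [], []
    i = 0
    while i < len(ids):
        j = i
        while j < len(ids) and ids[j] == ids[i]:
            j += 1  # extend the run of identical ids
        if ids[i] != blank:
            tokens.append(ids[i])
            out_probs.append(sum(probs[i:j]) / (j - i))
        i = j
    return tokens, out_probs
```

For example, `ctc_collapse([1, 1, 0, 2, 2, 2], [0.9, 0.7, 0.5, 0.6, 0.6, 0.6])` keeps tokens `[1, 2]` and averages 0.9 and 0.7 into roughly 0.8 for the first run.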
380
+
381
+
382
+ # def multilevel_greedy_decode(
383
+ # level_to_probs: dict[str, torch.FloatTensor],
384
+ # level_to_id_to_vocab: dict[str, dict[int, str]],
385
+ # level_to_ref_ids: dict[str, torch.LongTensor],
386
+ # missing_placeholder=-100,
387
+ # pad_idx=PAD_TOKEN_IDX,
388
+ # ) -> dict[str, list[Unit]]:
389
+ # level_to_units = {}
390
+ # for level in level_to_probs:
391
+ # batch_probs, batch_ids = level_to_probs[level].topk(1, dim=-1)
392
+ # decode_outs = ctc_decode(
393
+ # batch_ids.squeeze(-1), batch_probs.squeeze(-1), collapse_consecutive=True
394
+ # )
395
+ # level_to_units[level] = []
396
+ # for seq_idx, decode_out in enumerate(decode_outs):
397
+ # # Trying to align Ids of the sifat levels
398
+ # if level != "phonemes":
399
+ # aligned_ids, mask = align_predicted_sequence(
400
+ # level_to_ref_ids[level][seq_idx],
401
+ # decode_out.ids,
402
+ # missing_placeholder=missing_placeholder,
403
+ # )
404
+ # else:
405
+ # aligned_ids = decode_out.ids
406
+ #
407
+ # probs = decode_out.p
408
+ # if len(aligned_ids) != len(decode_out.ids):
409
+ # aligned_ids = torch.LongTensor(aligned_ids)
410
+ # mask = torch.BoolTensor(mask)
411
+ #
412
+ # new_probs = torch.zeros(len(aligned_ids), dtype=torch.float32)
413
+ # new_probs[aligned_ids != missing_placeholder] = probs[mask]
414
+ #
415
+ # aligned_ids[aligned_ids == missing_placeholder] = pad_idx
416
+ # probs = new_probs
417
+ #
418
+ # probs = decode_out.p.clone()
419
+ # text = ""
420
+ # for idx in aligned_ids:
421
+ # text += level_to_id_to_vocab[level][int(idx)]
422
+ # level_to_units[level].append(
423
+ # Unit(
424
+ # text=text,
425
+ # probs=probs,
426
+ # ids=aligned_ids,
427
+ # ),
428
+ # )
429
+ #
430
+ # return level_to_units
431
+
432
+
433
+ def phonemes_level_greedy_decode(
434
+ probs: torch.FloatTensor,
435
+ phonemes_level_vocab: dict[int, str],
436
+ ) -> list[Unit]:
437
+ """Decodes only the phonemes level
438
+
439
+ Args:
440
+ probs (torch.FloatTensor) of shape batch, seq_len, num_classes
441
+ phonemes_level_vocab (dict[int, str]): mapping phoneme ids to their
442
+ actual string representation
443
+ """
444
+ batch_probs, batch_ids = probs.topk(1, dim=-1)
445
+ decode_outs = ctc_decode(
446
+ batch_ids.squeeze(-1), batch_probs.squeeze(-1), collapse_consecutive=True
447
+ )
448
+ units = []
449
+ for seq_idx, decode_out in enumerate(decode_outs):
450
+ text = ""
451
+ for idx in decode_out.ids:
452
+ text += phonemes_level_vocab[int(idx)]
453
+ units.append(
454
+ Unit(
455
+ text=text,
456
+ probs=decode_out.p,
457
+ ids=decode_out.ids,
458
+ ),
459
+ )
460
+ return units
461
+
462
+
463
+ def multilevel_greedy_decode(
464
+ level_to_probs: dict[str, torch.FloatTensor],
465
+ level_to_id_to_vocab: dict[str, dict[int, str]],
466
+ level_to_ref_ids: dict[str, torch.LongTensor],
467
+ chunked_phonemes_batch: list[list[str]],
468
+ ref_chuncked_phonemes_batch: list[list[str]],
469
+ phonemes_units: list[Unit],
470
+ missing_placeholder=-100,
471
+ pad_idx=PAD_TOKEN_IDX,
472
+ ) -> dict[str, list[Unit]]:
473
+ level_to_units = {}
474
+ for level in level_to_probs:
475
+ if level == "phonemes":
476
+ continue
477
+ batch_probs, batch_ids = level_to_probs[level].topk(1, dim=-1)
478
+ decode_outs = ctc_decode(
479
+ batch_ids.squeeze(-1), batch_probs.squeeze(-1), collapse_consecutive=True
480
+ )
481
+ level_to_units[level] = []
482
+ for seq_idx, decode_out in enumerate(decode_outs):
483
+ # Trying to align Ids of the sifat levels
484
+ phonemes_mask = align_chunked_phonemes_sequence(
485
+ ref=ref_chuncked_phonemes_batch[seq_idx],
486
+ predicted=chunked_phonemes_batch[seq_idx],
487
+ )
488
+ phonemes_mask = torch.BoolTensor(phonemes_mask)
489
+
490
+ # NOTE:
491
+ # We want to align every level with the predicted phonemes, but
492
+ # in some cases the length of a sifa level is greater or smaller than
493
+ # the length of the predicted phonemes.
494
+ # We solve this in two steps:
495
+ # 1. Align the mismatched sifa level to the reference sifa level
496
+ # 2. Align that result back to the length of the predicted phonemes
497
+ if len(decode_out.ids) != len(chunked_phonemes_batch[seq_idx]) and (
498
+ len(chunked_phonemes_batch[seq_idx])
499
+ <= len(ref_chuncked_phonemes_batch[seq_idx])
500
+ ):
501
+ logging.info(f"Sequence `{seq_idx}` has a length mismatch at level: {level}")
502
+ # 1. Align sifa level to the reference sifa level
503
+ ref_aligned_ids, mask = align_predicted_sequence(
504
+ level_to_ref_ids[level][seq_idx],
505
+ decode_out.ids,
506
+ missing_placeholder=missing_placeholder,
507
+ )
508
+
509
+ probs = decode_out.p
510
+ ref_aligned_ids = torch.LongTensor(ref_aligned_ids)
511
+ mask = torch.BoolTensor(mask)
512
+
513
+ new_probs = torch.zeros(len(ref_aligned_ids), dtype=torch.float32)
514
+ new_probs[ref_aligned_ids != missing_placeholder] = probs[mask]
515
+
516
+ ref_aligned_ids[ref_aligned_ids == missing_placeholder] = pad_idx
517
+
518
+ # 2. Align the ref-aligned prediction back to the predicted sequence
519
+ aligned_ids = ref_aligned_ids[phonemes_mask]
520
+ new_probs = new_probs[phonemes_mask]
521
+
522
+ probs = new_probs
523
+ else:
524
+ aligned_ids = decode_out.ids
525
+ probs = decode_out.p
526
+
527
+ text = ""
528
+ for idx in aligned_ids:
529
+ text += level_to_id_to_vocab[level][int(idx)]
530
+ level_to_units[level].append(
531
+ Unit(
532
+ text=text,
533
+ probs=probs,
534
+ ids=aligned_ids,
535
+ ),
536
+ )
537
+ level_to_units["phonemes"] = phonemes_units
538
+
539
+ return level_to_units
540
+
541
+
542
+ def align_sequence(
543
+ seq: Sequence[int] | torch.LongTensor, target_len: int, min_repeat: int = 3
544
+ ) -> list[int]:
545
+ """Aligns a sequence by removing items from the longest repeated run
546
+
547
+ Returns:
548
+ list[int]: the indices to be deleted if longest_repeat > len(seq) - target_len
549
+
550
+ Example:
551
+ seq = [1, 0, 1, 0, 0, 0, 0, 1], target_len = 7
552
+ ^ ^ ^
553
+ Longest Repeat ^ ^ ^
554
+ Output: [3]
555
+ """
556
+
557
+ if len(seq) <= target_len:
558
+ return []
559
+
560
+ longest_start = 0
561
+ longest_repeat = 0
562
+ curr_repeat = 1
563
+ curr_start = 0
564
+ for idx in range(len(seq) - 1):
565
+ curr = seq[idx]
566
+ next = seq[idx + 1]
567
+ if curr == next:
568
+ curr_repeat += 1
569
+ if (curr != next) or (idx == len(seq) - 2):
570
+ if curr_repeat > longest_repeat and curr_repeat >= min_repeat:
571
+ longest_repeat = curr_repeat
572
+ longest_start = curr_start
573
+ curr_start = idx + 1
574
+ curr_repeat = 1
575
+
576
+ # only remove items when the longest repeat can absorb the excess
577
+ if longest_repeat > len(seq) - target_len:
578
+ return list(range(longest_start, longest_start + len(seq) - target_len))
579
+ else:
580
+ return []
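The core of the function above is finding the longest run of identical consecutive items. A small stand-alone sketch (with a hypothetical `longest_run` helper name):

```python
def longest_run(seq, min_repeat=3):
    """Return (start, length) of the longest run of identical
    consecutive items, ignoring runs shorter than min_repeat."""
    best_start, best_len = 0, 0
    start = 0
    for idx in range(1, len(seq) + 1):
        # close the current run at a value change or at the end
        if idx == len(seq) or seq[idx] != seq[start]:
            run = idx - start
            if run > best_len and run >= min_repeat:
                best_start, best_len = start, run
            start = idx
    return best_start, best_len
```

With `seq = [1, 0, 1, 0, 0, 0, 0, 1]` and `target_len = 7`, the run is `(start=3, length=4)`, so one index, `[3]`, would be removed — matching the docstring example above.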
dependency_links.txt ADDED
@@ -0,0 +1 @@
1
+
entry_points.txt ADDED
@@ -0,0 +1,2 @@
1
+ [console_scripts]
2
+ quran-muaalem-ui = quran_muaalem.gradio_app:main
explain.cpython-312.pyc ADDED
Binary file (9.66 kB). View file
 
explain.py ADDED
@@ -0,0 +1,250 @@
1
+ from dataclasses import dataclass, asdict
2
+ from typing import Literal
3
+ import json
4
+
5
+ from quran_transcript import SifaOutput
6
+ import quran_transcript.alphabet as alph
7
+ import diff_match_patch as dmp
8
+ from rich import print
9
+ from rich.text import Text
10
+ from rich.table import Table
11
+ from rich.console import Console
12
+
13
+ from .muaalem_typing import Sifa
14
+ from .modeling.vocab import SIFAT_ATTR_TO_ARABIC_WITHOUT_BRACKETS
15
+
16
+
17
+ @dataclass
18
+ class PhonemeGroup:
19
+ ref: str = ""
20
+ out: str = ""
21
+ ref_idx: int | None = None
22
+ out_idx: int | None = None
23
+ tag: Literal["exact", "partial", "insert", "delete"] | None = None
24
+
25
+ def get_tag(self):
26
+ if self.ref == "" and self.out == "":
27
+ raise ValueError("The Entire group is empty")
28
+ if self.ref == self.out:
29
+ self.tag = "exact"
30
+ elif self.ref != "" and self.out == "":
31
+ self.tag = "delete"
32
+ elif self.out != "" and self.ref == "":
33
+ self.tag = "insert"
34
+ else:
35
+ self.tag = "partial"
36
+ return self.tag
37
+
38
+
39
+ def merge_same_phoneme_group(ph_groups: list[PhonemeGroup]) -> list[PhonemeGroup]:
40
+ outs = [ph_groups[0]]
41
+ prev_idx = 0
42
+ for curr_idx in range(1, len(ph_groups)):
43
+ # out is part of ref
44
+ if (
45
+ ph_groups[prev_idx].out_idx is not None
46
+ and ph_groups[curr_idx].ref_idx is not None
47
+ and ph_groups[prev_idx].out in ph_groups[curr_idx].ref
48
+ ):
49
+ del outs[-1]
50
+ outs.append(
51
+ PhonemeGroup(
52
+ ref=ph_groups[curr_idx].ref,
53
+ ref_idx=ph_groups[curr_idx].ref_idx,
54
+ out=ph_groups[prev_idx].out,
55
+ out_idx=ph_groups[prev_idx].out_idx,
56
+ )
57
+ )
58
+ # ref is part of out
59
+ elif (
60
+ ph_groups[prev_idx].ref_idx is not None
61
+ and ph_groups[curr_idx].out_idx is not None
62
+ and ph_groups[prev_idx].ref in ph_groups[curr_idx].out
63
+ ):
64
+ del outs[-1]
65
+ outs.append(
66
+ PhonemeGroup(
67
+ ref=ph_groups[prev_idx].ref,
68
+ ref_idx=ph_groups[prev_idx].ref_idx,
69
+ out=ph_groups[curr_idx].out,
70
+ out_idx=ph_groups[curr_idx].out_idx,
71
+ )
72
+ )
73
+ else:
74
+ outs.append(ph_groups[curr_idx])
75
+ prev_idx = curr_idx
76
+ return outs
77
+
78
+
79
+ def segment_groups(
80
+ ref_groups: list[str],
81
+ groups: list[str],
82
+ diffs,
83
+ ) -> list[PhonemeGroup]:
84
+ """Join similar phoneme groups and differentiate between groups"""
85
+ ref_counter = 0
86
+ ref_ptr = 0
87
+ ref_group_idx = 0
88
+ out_counter = 0
89
+ out_ptr = 0
90
+ out_group_idx = 0
91
+
92
+ out_pairs = []
93
+ for op, data in diffs:
94
+ if op == 0:
95
+ ref_counter += len(data)
96
+ out_counter += len(data)
97
+ elif op == 1:
98
+ out_counter += len(data)
99
+ elif op == -1:
100
+ ref_counter += len(data)
101
+
102
+ ref_has_match = True
103
+ out_has_match = True
104
+ while ref_has_match or out_has_match:
105
+ pair = PhonemeGroup()
106
+ if ref_group_idx < len(ref_groups):
107
+ if (ref_counter - ref_ptr) >= len(ref_groups[ref_group_idx]):
108
+ pair.ref = ref_groups[ref_group_idx]
109
+ pair.ref_idx = ref_group_idx
110
+ ref_ptr += len(ref_groups[ref_group_idx])
111
+ ref_group_idx += 1
112
+ else:
113
+ ref_has_match = False
114
+ else:
115
+ ref_has_match = False
116
+
117
+ if out_group_idx < len(groups):
118
+ if (out_counter - out_ptr) >= len(groups[out_group_idx]):
119
+ pair.out = groups[out_group_idx]
120
+ pair.out_idx = out_group_idx
121
+ out_ptr += len(groups[out_group_idx])
122
+ out_group_idx += 1
123
+ else:
124
+ out_has_match = False
125
+ else:
126
+ out_has_match = False
127
+
128
+ if pair.ref or pair.out:
129
+ out_pairs.append(pair)
130
+ return merge_same_phoneme_group(out_pairs)
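`segment_groups` walks `(op, text)` tuples as produced by `diff_match_patch.diff_main` (0 = equal, 1 = insert, -1 = delete), advancing a reference counter and an output counter. That bookkeeping can be illustrated with a small stand-alone helper (hypothetical name, for illustration only):

```python
def diff_cursor_positions(diffs):
    """Given (op, text) diff tuples (0 = equal, 1 = insert,
    -1 = delete), return how many reference and output
    characters the whole diff consumes."""
    ref_chars, out_chars = 0, 0
    for op, text in diffs:
        if op in (0, -1):   # equal or delete consumes reference text
            ref_chars += len(text)
        if op in (0, 1):    # equal or insert consumes output text
            out_chars += len(text)
    return ref_chars, out_chars
```

For example, `[(0, "ab"), (-1, "c"), (1, "xy")]` covers 3 reference characters and 4 output characters, which is exactly how the segmentation decides when a phoneme group is fully consumed.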
131
+
132
+
133
+ def expalin_sifat(
134
+ sifat: list[Sifa],
135
+ exp_sifat: list[SifaOutput],
136
+ diffs,
137
+ ):
138
+ table = []
139
+ chunks = [s.phonemes_group for s in sifat]
140
+ exp_chunks = [s.phonemes for s in exp_sifat]
141
+
142
+ groups = segment_groups(ref_groups=exp_chunks, groups=chunks, diffs=diffs)
143
+ keys = set(asdict(sifat[0]).keys()) - {"phonemes_group"}
144
+ madd_group = alph.phonetics.alif + alph.phonetics.yaa_madd + alph.phonetics.waw_madd
145
+
146
+ for group in groups:
147
+ raw = {}
148
+ tag = group.get_tag()
149
+ if (tag == "exact") or (tag == "partial" and group.ref[0] in madd_group):
150
+ raw["tag"] = "exact"
151
+ raw["phonemes"] = sifat[group.out_idx].phonemes_group
152
+ raw["exp_phonemes"] = exp_sifat[group.ref_idx].phonemes
153
+ for key in keys:
154
+ if getattr(sifat[group.out_idx], key) is not None:
155
+ raw[f"{key}"] = getattr(sifat[group.out_idx], key).text
156
+ else:
157
+ raw[f"{key}"] = "None"
158
+
159
+ raw[f"exp_{key}"] = getattr(exp_sifat[group.ref_idx], key)
160
+ elif tag in {"partial", "insert"}:
161
+ raw["tag"] = "insert"
162
+ raw["phonemes"] = sifat[group.out_idx].phonemes_group
163
+ raw["exp_phonemes"] = ""
164
+ for key in keys:
165
+ if getattr(sifat[group.out_idx], key) is not None:
166
+ raw[f"{key}"] = getattr(sifat[group.out_idx], key).text
167
+ else:
168
+ raw[f"{key}"] = "None"
169
+
170
+ raw[f"exp_{key}"] = ""
171
+ if raw:
172
+ table.append(raw)
173
+
174
+ # print(json.dumps(table, indent=2, ensure_ascii=False))
175
+ return table
176
+
177
+
178
+ def print_sifat_table(
179
+ table: list[dict],
180
+ lang: Literal["arabic", "english"] = "arabic",
181
+ ):
182
+ """Print the sifat comparison table with rich highlighting"""
183
+ if not table:
184
+ return
185
+
186
+ # Create a rich Table
187
+ rich_table = Table()
188
+
189
+ # Get base columns (non-exp keys without 'tag')
190
+ base_keys = [k for k in table[0].keys() if not k.startswith("exp_") and k != "tag"]
191
+
192
+ # Add columns
193
+ # rich_table.add_column("Tag", style="cyan")
194
+ for key in base_keys:
195
+ rich_table.add_column(key.replace("_", " ").title())
196
+
197
+ # Add rows
198
+ for row in table:
199
+ tag = row["tag"]
200
+ values = []
201
+ for key in base_keys:
202
+ exp_key = f"exp_{key}"
203
+ value = str(row[key])
204
+ if key != "phonemes" and lang == "arabic":
205
+ value = SIFAT_ATTR_TO_ARABIC_WITHOUT_BRACKETS[value]
206
+
207
+ # Apply styling based on tag and comparison
208
+ if tag == "exact" and row.get(exp_key) != row[key]:
209
+ values.append(f"[red]{value}[/red]")
210
+ elif tag == "insert":
211
+ values.append(f"[yellow]{value}[/yellow]")
212
+ else:
213
+ values.append(value)
214
+
215
+ rich_table.add_row(*values)
216
+
217
+ # Print the table
218
+ console = Console()
219
+ console.print(rich_table)
220
+
221
+
222
+ def explain_for_terminal(
223
+ phonemes: str,
224
+ exp_phonemes: str,
225
+ sifat: list[Sifa],
226
+ exp_sifat: list[SifaOutput],
227
+ lang: Literal["arabic", "english"] = "english",
228
+ ):
229
+ # Create diff-match-patch object
230
+ dmp_obj = dmp.diff_match_patch()
231
+
232
+ # Calculate differences
233
+ diffs = dmp_obj.diff_main(exp_phonemes, phonemes)
234
+
235
+ # Create a Rich Text object for colored output
236
+ result = Text()
237
+
238
+ # Process each difference
239
+ for op, data in diffs:
240
+ if op == dmp_obj.DIFF_EQUAL:
241
+ result.append(data, style="white")
242
+ elif op == dmp_obj.DIFF_INSERT:
243
+ result.append(data, style="green")
244
+ elif op == dmp_obj.DIFF_DELETE:
245
+ result.append(data, style="red strike")
246
+
247
+ # Print the result
248
+ print(result)
249
+ sifat_table = expalin_sifat(sifat, exp_sifat, diffs)
250
+ print_sifat_table(sifat_table, lang=lang) # Add this line to print the table
explain_gradio.cpython-312.pyc ADDED
Binary file (3.91 kB). View file
 
explain_gradio.py ADDED
@@ -0,0 +1,112 @@
1
+ from typing import Literal
2
+ import diff_match_patch as dmp
3
+
4
+ from .explain import expalin_sifat
5
+ from .modeling.vocab import SIFAT_ATTR_TO_ARABIC_WITHOUT_BRACKETS
6
+
7
+
8
+ def explain_for_gradio(
9
+ phonemes: str,
10
+ exp_phonemes: str,
11
+ sifat: list,
12
+ exp_sifat: list,
13
+ lang: Literal["arabic", "english"] = "english",
14
+ ) -> str:
15
+ # Create diff-match-patch object
16
+ dmp_obj = dmp.diff_match_patch()
17
+
18
+ # Calculate differences using Google's diff-match-patch (same as terminal)
19
+ diffs = dmp_obj.diff_main(exp_phonemes, phonemes)
20
+
21
+ # Create HTML for phoneme differences
22
+ phoneme_html = explain_phonemes_html(dmp_obj, diffs)
23
+
24
+ # Create HTML for sifat table using your existing function
25
+ sifat_table = expalin_sifat(sifat, exp_sifat, diffs)
26
+ sifat_html = explain_sifat_html(sifat_table, lang)
27
+
28
+ # Combine both sections
29
+ html_output = f"""
30
+ <div style="font-family: monospace; width: 100%;">
31
+ <h3>مقارنة الحروف</h3>
32
+ {phoneme_html}
33
+ <h3>مقارنة صفات الحروف</h3>
34
+ {sifat_html}
35
+ <div class="color-legend">
36
+ </div>
37
+ """
38
+
39
+ return html_output
40
+
41
+
42
+ def explain_phonemes_html(dmp_obj, diffs):
43
+ html_output = '<div style="background-color: #000; padding: 10px; border-radius: 5px; margin-bottom: 20px; font-size: 30px;">'
44
+
45
+ # Process each difference (same logic as terminal version)
46
+ for op, data in diffs:
47
+ if op == dmp_obj.DIFF_EQUAL:
48
+ html_output += f'<span style="color: #ffffff;">{data}</span>'
49
+ elif op == dmp_obj.DIFF_INSERT:
50
+ html_output += f'<span style="color: #00ff00;">{data}</span>'
51
+ elif op == dmp_obj.DIFF_DELETE:
52
+ html_output += f'<span style="color: #ff0000; text-decoration: line-through;">{data}</span>'
53
+
54
+ html_output += "</div>"
55
+ return html_output
56
+
57
+
58
+ def explain_sifat_html(table, lang):
59
+ if not table:
60
+ return "<p>No sifat data available</p>"
61
+
62
+ # Create HTML table with full width
63
+ html_output = """
64
+ <table style="width: 100%; border-collapse: collapse; background-color: #000; color: #fff; margin-bottom: 20px;">
65
+ <thead>
66
+ <tr>
67
+ """
68
+
69
+ # Get base columns (non-exp keys without 'tag')
70
+ base_keys = [k for k in table[0].keys() if not k.startswith("exp_") and k != "tag"]
71
+
72
+ # Add columns
73
+ for key in base_keys:
74
+ html_output += f'<th style="border: 1px solid #444; padding: 8px; text-align: left;">{key.replace("_", " ").title()}</th>'
75
+
76
+ html_output += """
77
+ </tr>
78
+ </thead>
79
+ <tbody>
80
+ """
81
+
82
+ # Add rows
83
+ for row in table:
84
+ tag = row["tag"]
85
+ html_output += "<tr>"
86
+
87
+ for key in base_keys:
88
+ exp_key = f"exp_{key}"
89
+ value = str(row[key])
90
+
91
+ # Apply Arabic translation if needed
92
+ if key != "phonemes" and lang == "arabic":
93
+ value = SIFAT_ATTR_TO_ARABIC_WITHOUT_BRACKETS.get(value, value)
94
+
95
+ # Apply styling based on tag and comparison
96
+ if tag == "exact" and row.get(exp_key) != row[key]:
97
+ html_output += f'<td style="border: 1px solid #444; padding: 8px; color: #ff0000;">{value}</td>'
98
+ elif tag == "insert":
99
+ html_output += f'<td style="border: 1px solid #444; padding: 8px; color: #ffff00;">{value}</td>'
100
+ else:
101
+ html_output += (
102
+ f'<td style="border: 1px solid #444; padding: 8px;">{value}</td>'
103
+ )
104
+
105
+ html_output += "</tr>"
106
+
107
+ html_output += """
108
+ </tbody>
109
+ </table>
110
+ """
111
+
112
+ return html_output
faq.md ADDED
@@ -0,0 +1,17 @@
1
+ # FAQ
2
+
3
+ ## Why do I get a sampling rate error?
4
+
5
+ `Muaalem.__call__` enforces `sampling_rate == 16000` in `src/quran_muaalem/inference.py`. Make sure your audio is resampled to 16 kHz.
6
+
7
+ ## The UI fails to load audio files
8
+
9
+ Install system audio dependencies (see `README.md`):
10
+
11
+ ```bash
12
+ sudo apt-get install -y ffmpeg libsndfile1 portaudio19-dev
13
+ ```
14
+
15
+ ## How do I change the model checkpoint?
16
+
17
+ Pass `model_name_or_path` when constructing `Muaalem` or update `model_id` in `src/quran_muaalem/gradio_app.py`.
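The 16 kHz requirement from the first answer amounts to a guard like the following (a sketch of the described behavior; the actual check lives in `Muaalem.__call__`, and `ensure_16k` is a name chosen here for illustration):

```python
def ensure_16k(sampling_rate: int) -> None:
    """Raise early if the audio is not sampled at 16 kHz."""
    if sampling_rate != 16000:
        raise ValueError(
            f"sampling_rate has to be 16000, got {sampling_rate}; "
            "resample first, e.g. librosa.load(path, sr=16000)"
        )
```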
getting-started.md ADDED
@@ -0,0 +1,78 @@
1
+ # Getting Started
2
+
3
+ This project is a Python package with an optional Gradio UI. The core package lives in `src/quran_muaalem/` and depends on `quran-transcript` for phonetic reference generation.
4
+
5
+ ## Requirements
6
+
7
+ From `README.md` and `pyproject.toml`:
8
+
9
+ - Python 3.10+
10
+ - System audio tools for common workflows:
11
+ - `ffmpeg` for audio decoding
12
+ - `libsndfile1` and `portaudio19-dev` if you work with audio I/O (see `README.md` install snippet)
13
+ - Optional GPU (CUDA) for faster inference; the code uses `torch.cuda.is_available()` in `src/quran_muaalem/gradio_app.py`.
14
+
15
+ ## Install
16
+
17
+ Core package:
18
+
19
+ ```bash
20
+ pip install quran-muaalem
21
+ ```
22
+
23
+ UI extras (adds Gradio + audio tooling):
24
+
25
+ ```bash
26
+ pip install "quran-muaalem[ui]"
27
+ ```
28
+
29
+ If you use `uv`, the README documents an all‑in‑one command for the UI:
30
+
31
+ ```bash
32
+ uvx --no-cache --from https://github.com/obadx/quran-muaalem.git[ui] quran-muaalem-ui
33
+ ```
34
+
35
+ ## Quick Start (Python API)
36
+
37
+ The main inference class is `Muaalem` in `src/quran_muaalem/inference.py`. It expects:
38
+
39
+ - audio at **16 kHz** (`sampling_rate=16000` is enforced)
40
+ - a reference phonetic script from `quran_transcript.quran_phonetizer`
41
+
42
+ Minimal flow based on `README.md`:
43
+
44
+ ```python
45
+ from librosa.core import load
46
+ import torch
47
+ from quran_transcript import Aya, quran_phonetizer, MoshafAttributes
48
+ from quran_muaalem import Muaalem
49
+
50
+ sampling_rate = 16000
51
+ device = "cuda" if torch.cuda.is_available() else "cpu"
52
+
53
+ uthmani_ref = Aya(8, 75).get_by_imlaey_words(17, 9).uthmani
54
+ moshaf = MoshafAttributes(rewaya="hafs", madd_monfasel_len=2, madd_mottasel_len=4, madd_mottasel_waqf=4, madd_aared_len=2)
55
+ ref = quran_phonetizer(uthmani_ref, moshaf, remove_spaces=True)
56
+
57
+ muaalem = Muaalem(device=device)
58
+ wave, _ = load("./assets/test.wav", sr=sampling_rate, mono=True)
59
+ outs = muaalem([wave], [ref], sampling_rate=sampling_rate)
60
+ ```
61
+
62
+ ## Model download and cache
63
+
64
+ The model is pulled from Hugging Face on first use. Cache locations are controlled by environment variables such as:
65
+
66
+ - `HF_HOME`
67
+ - `HUGGINGFACE_HUB_CACHE`
68
+ - `TRANSFORMERS_CACHE`
69
+
70
+ (see `Dockerfile` for example defaults).
71
+
72
+ ## Troubleshooting (common cases)
73
+
74
+ - **`ValueError: sampling_rate has to be 16000`** → resample your audio to 16 kHz.
75
+ - **Missing `ffmpeg`** → install it via your system package manager.
76
+ - **Slow inference on CPU** → use GPU or shorten audio segments.
77
+
78
+ For a full walkthrough, see the Quran Muaalem API page.
gradio-ui.md ADDED
@@ -0,0 +1,49 @@
1
+ # Gradio UI
2
+
3
+ The UI entry point lives in `src/quran_muaalem/gradio_app.py` and is registered as a console script named `quran-muaalem-ui` in `pyproject.toml`.
4
+
5
+ ## What does the UI do?
6
+
7
+ The key functions in `src/quran_muaalem/gradio_app.py`:
8
+
9
+ - `process_audio(...)`
10
+   - Loads the audio via `librosa.load`.
11
+   - Builds a phonetic reference via `quran_phonetizer`.
12
+   - Runs `Muaalem` and returns HTML.
13
+ - `update_uthmani_ref(...)`
14
+   - Fetches the Uthmani script text via `quran_transcript.Aya`.
15
+ - `create_gradio_input_for_field(...)`
16
+   - Builds the input widgets from `MoshafAttributes.model_fields`.
17
+
18
+ ## Usage flow in the UI
19
+
20
+ 1. Pick the **surah** and **ayah**.
21
+ 2. Set the **word index** and **number of words**.
22
+ 3. Upload or record audio.
23
+ 4. Press the analysis button to show the comparison.
24
+
25
+ If the word range cuts through an Uthmani word, a warning tied to `PartOfUthmaniWord` is shown.
26
+
27
+ ## Launching the UI
28
+
29
+ ```python
30
+ app.launch(server_name="0.0.0.0", share=True)
31
+ ```
32
+
33
+ - To disable public sharing: set `share=False` in `main()`.
34
+ - To change the port or host: edit the `app.launch(...)` call.
35
+
36
+ ## Running the UI locally
37
+
38
+ ```bash
39
+ pip install "quran-muaalem[ui]"
40
+ quran-muaalem-ui
41
+ ```
42
+
43
+ The `quran-muaalem-ui` script points to `quran_muaalem.gradio_app:main`.
44
+
45
+ ## Known limitations
46
+
47
+ - The UI runs in a **non-streaming** fashion (it processes the whole audio at once).
48
+ - Performance depends on the audio length and GPU availability.
49
+ - For batch processing, prefer the Python API.
gradio_app.py ADDED
@@ -0,0 +1,401 @@
1
+ import logging
2
+ from dataclasses import asdict
3
+ import json
4
+ from typing import Literal, Optional, Any, get_origin, get_args
5
+
6
+ from quran_transcript import Aya, quran_phonetizer, MoshafAttributes
7
+ from quran_transcript.utils import PartOfUthmaniWord
8
+ from quran_transcript.phonetics.moshaf_attributes import (
9
+ get_arabic_attributes,
10
+ get_arabic_name,
11
+ )
12
+ from librosa.core import load
13
+ from pydantic.fields import FieldInfo, PydanticUndefined
14
+ import torch
15
+ import gradio as gr
16
+
17
+ from quran_muaalem.inference import Muaalem
18
+ from quran_muaalem.muaalem_typing import MuaalemOutput
19
+ from quran_muaalem.explain import explain_for_terminal
20
+ from quran_muaalem.explain_gradio import explain_for_gradio
21
+
22
+ # Initialize components
23
+ REQUIRED_MOSHAF_FIELDS = [
24
+ "rewaya",
25
+ "takbeer",
26
+ "madd_monfasel_len",
27
+ "madd_mottasel_len",
28
+ "madd_mottasel_waqf",
29
+ "madd_aared_len",
30
+ "madd_alleen_len",
31
+ "ghonna_lam_and_raa",
32
+ "meem_aal_imran",
33
+ "madd_yaa_alayn_alharfy",
34
+ "saken_before_hamz",
35
+ "sakt_iwaja",
36
+ "sakt_marqdena",
37
+ "sakt_man_raq",
38
+ "sakt_bal_ran",
39
+ "sakt_maleeyah",
40
+ "between_anfal_and_tawba",
41
+ "noon_and_yaseen",
42
+ "yaa_ataan",
43
+ "start_with_ism",
44
+ "yabsut",
45
+ "bastah",
46
+ "almusaytirun",
47
+ "bimusaytir",
48
+ "tasheel_or_madd",
49
+ "yalhath_dhalik",
50
+ "irkab_maana",
51
+ "noon_tamnna",
52
+ "harakat_daaf",
53
+ "alif_salasila",
54
+ "idgham_nakhluqkum",
55
+ "raa_firq",
56
+ "raa_alqitr",
57
+ "raa_misr",
58
+ "raa_nudhur",
59
+ "raa_yasr",
60
+ "meem_mokhfah",
61
+ ]
62
+ model_id = "obadx/muaalem-model-v3_2"
63
+ logging.basicConfig(level=logging.INFO)
64
+ device = "cuda" if torch.cuda.is_available() else "cpu"
65
+ muaalem = Muaalem(model_name_or_path=model_id, device=device)
66
+ sampling_rate = 16000
67
+
68
+ # Load Sura information
69
+ sura_idx_to_name = {}
70
+ sura_to_aya_count = {}
71
+ start_aya = Aya()
72
+ for sura_idx in range(1, 115):
73
+ start_aya.set(sura_idx, 1)
74
+ sura_idx_to_name[sura_idx] = start_aya.get().sura_name
75
+ sura_to_aya_count[sura_idx] = start_aya.get().num_ayat_in_sura
76
+
77
+ # Default moshaf settings
78
+ default_moshaf = MoshafAttributes(
79
+ rewaya="hafs",
80
+ madd_monfasel_len=4,
81
+ madd_mottasel_len=4,
82
+ madd_mottasel_waqf=4,
83
+ madd_aared_len=4,
84
+ )
85
+
86
+ # Current moshaf settings (will be updated from settings page)
87
+ current_moshaf = default_moshaf
88
+
89
+
90
+ def get_field_name(field_name: str, field_info: FieldInfo) -> str:
91
+ """Return the Arabic name of the field if applicable else the field_name"""
92
+ label = field_name
93
+ arabic_name = get_arabic_name(field_info)
94
+ if arabic_name:
95
+ label = f"{arabic_name} ({field_name})"
96
+ return label
97
+
98
+
99
+ def create_gradio_input_for_field(
100
+ field_name: str,
101
+ field_info: FieldInfo,
102
+ default_value: Any = None,
103
+ key_prefix="model_",
104
+ help: str | None = None,
105
+ ) -> Any:
106
+ """Create a gradio input field given a pydantic field info"""
107
+ # Extract Arabic name from field description if available
108
+ label = get_field_name(field_name, field_info)
109
+
110
+ if default_value is None:
111
+ if field_info.default != PydanticUndefined:
112
+ default_value = field_info.default
113
+
114
+ if help is None:
115
+ help = field_info.description
116
+
117
+ # Handle Literal types
118
+ if get_origin(field_info.annotation) is Literal:
119
+ choices = list(get_args(field_info.annotation))
120
+ arabic_attributes = get_arabic_attributes(field_info)
121
+
122
+ # Create choice list with Arabic labels if available
123
+ choice_list = []
124
+ for choice in choices:
125
+ if arabic_attributes and choice in arabic_attributes:
126
+ choice_list.append((arabic_attributes[choice], choice))
127
+ else:
128
+ choice_list.append((str(choice), choice))
129
+
130
+ return gr.Dropdown(
131
+ choices=choice_list,
132
+ value=default_value,
133
+ label=label,
134
+ info=help,
135
+ interactive=True,
136
+ )
137
+
138
+ # Handle different field types
139
+ if field_info.annotation in [str, Optional[str]]:
140
+ return gr.Textbox(value=default_value or "", label=label, info=help)
141
+ elif field_info.annotation in [int, Optional[int]]:
142
+ return gr.Number(value=default_value or 0, label=label, info=help, precision=0)
143
+ elif field_info.annotation in [float, Optional[float]]:
144
+ return gr.Number(
145
+ value=default_value or 0.0, label=label, info=help, precision=1
146
+ )
147
+ elif field_info.annotation in [bool, Optional[bool]]:
148
+ return gr.Checkbox(value=default_value or False, label=label, info=help)
149
+
150
+ raise ValueError(f"Unsupported field type for {label}: {field_info.annotation}")
151
+
152
+
153
+ def update_aya_dropdown(sura_idx):
154
+ if not sura_idx:
155
+ sura_idx = 1
156
+ return gr.update(
157
+ choices=list(range(1, sura_to_aya_count[int(sura_idx)] + 1)), value=1
158
+ )
159
+
160
+
161
+ def update_uthmani_ref(sura_idx, aya_idx, start_idx, num_words):
162
+ if not all([sura_idx, aya_idx, start_idx is not None, num_words is not None]):
163
+ return ""
164
+ try:
165
+ uthmani_ref = (
166
+ Aya(int(sura_idx), int(aya_idx))
167
+ .get_by_imlaey_words(int(start_idx), int(num_words))
168
+ .uthmani
169
+ )
170
+ return uthmani_ref
171
+ except PartOfUthmaniWord as e:
172
+ return f"⚠️ Warning: You've selected part of a Uthmani word. Please adjust the number of words to include complete words only.\n\nError details: {str(e)}"
173
+ except Exception as e:
174
+ return f"Error: {str(e)}"
175
+
176
+
177
+ def process_audio(audio, sura_idx, aya_idx, start_idx, num_words):
178
+ global current_moshaf
179
+
180
+ if audio is None:
181
+ return "Please upload an audio file first"
182
+
183
+ try:
184
+ # Get Uthmani reference text
185
+ uthmani_ref = (
186
+ Aya(int(sura_idx), int(aya_idx))
187
+ .get_by_imlaey_words(int(start_idx), int(num_words))
188
+ .uthmani
189
+ )
190
+ phonetizer_out = quran_phonetizer(
191
+ uthmani_ref, current_moshaf, remove_spaces=True
192
+ )
193
+
194
+ # Process audio
195
+ wave, _ = load(audio, sr=sampling_rate, mono=True)
196
+ outs = muaalem(
197
+ [wave],
198
+ [phonetizer_out],
199
+ sampling_rate=sampling_rate,
200
+ )
201
+
202
+ # # Prepare output
203
+ # output_text = f"Phonemes: {outs[0].phonemes}\n\n"
204
+ # for sifa in outs[0].sifat:
205
+ # output_text += json.dumps(asdict(sifa), indent=2, ensure_ascii=False) + "\n"
206
+ # output_text += "*" * 30 + "\n"
207
+ # output_text += "-" * 40 + "\n\n"
208
+
209
+ # Add explanation
210
+ explanation_html = explain_for_gradio(
211
+ outs[0].phonemes.text,
212
+ phonetizer_out.phonemes,
213
+ outs[0].sifat,
214
+ phonetizer_out.sifat,
215
+ lang="arabic",
216
+ )
217
+
218
+ return explanation_html
219
+
220
+ except PartOfUthmaniWord as e:
221
+ return f"⚠️ Error: The selected word range includes partial Uthmani words. Please adjust the number of words to include complete words only.\n\nError details: {str(e)}"
222
+ # except Exception as e:
223
+ # return f"Error processing audio: {str(e)}"
224
+
225
+
226
+ def update_moshaf_settings(*args):
227
+ """Update the global moshaf settings with values from the settings page"""
228
+ global current_moshaf, field_names
229
+
230
+ try:
231
+ # Create a dictionary from the field names and values
232
+ settings_dict = dict(zip(field_names, args))
233
+
234
+ # Create a new MoshafAttributes object with the updated values
235
+ current_moshaf = MoshafAttributes(**settings_dict)
236
+ return "✅ تم حفظ الإعدادات بنجاح - Settings saved successfully!"
237
+ except Exception as e:
238
+ return f"❌ خطأ في حفظ الإعدادات - Error saving settings: {str(e)}"
239
+
240
+
241
+ def reset_settings():
242
+ """Reset all settings to default values"""
243
+ global current_moshaf
244
+
245
+ try:
246
+ current_moshaf = default_moshaf
247
+ # Return default values for all fields
248
+ default_values = [
249
+ getattr(default_moshaf, field_name) for field_name in field_names
250
+ ]
251
+ return default_values + [
252
+ "✅ تم إعادة التعيين إلى الإعدادات الافتراضية - Reset to default settings successfully!"
253
+ ]
254
+ except Exception as e:
255
+ return [getattr(current_moshaf, field_name) for field_name in field_names] + [
256
+ f"❌ Error resetting settings: {str(e)}"
257
+ ]
258
+
259
+
260
+ # Create the Gradio app
261
+ with gr.Blocks(title="المعلم القرآني") as app:
262
+ # Store current moshaf settings in session state
263
+ current_moshaf_state = gr.State(default_moshaf)
264
+
265
+ # Initialize field names list
266
+ field_names = []
267
+
268
+ with gr.Tab("التحليل الرئيسي - Main Analysis"):
269
+ gr.Markdown("# كشف أخطاء التلاوة والتجويد وصفات الحروف")
270
+ gr.Markdown("اختر المقطع القرآني المراد تعلمه")
271
+
272
+ with gr.Row():
273
+ with gr.Column(scale=1):
274
+ gr.Markdown("### التلاوة المقارنة")
275
+
276
+ # Create sura dropdown with both index and name
277
+ sura_choices = [
278
+ (f"{idx} - {sura_idx_to_name[idx]}", idx) for idx in range(1, 115)
279
+ ]
280
+ sura_dropdown = gr.Dropdown(
281
+ choices=sura_choices,
282
+ label="السورة",
283
+ value=1,
284
+ elem_id="sura_dropdown",
285
+ )
286
+
287
+ aya_dropdown = gr.Dropdown(
288
+ choices=list(range(1, sura_to_aya_count[1] + 1)),
289
+ label="رقم الآية",
290
+ value=1,
291
+ elem_id="aya_dropdown",
292
+ )
293
+ start_idx = gr.Number(
294
+ value=0,
295
+ label="رقم الكلمة بدايةً من صفر (Word Index)",
296
+ minimum=0,
297
+ step=1,
298
+ elem_id="start_idx",
299
+ )
300
+ num_words = gr.Number(
301
+ value=4,
302
+ label="عدد الكلمات",
303
+ minimum=1,
304
+ step=1,
305
+ elem_id="num_words",
306
+ )
307
+ uthmani_text = gr.Textbox(
308
+ label="الرسم العثماني",
309
+ interactive=False,
310
+ elem_id="uthmani_text",
311
+ )
312
+
313
+ with gr.Column(scale=2):
314
+ gr.Markdown("### فحص التلاوة القرآنية")
315
+ audio_input = gr.Audio(
316
+ sources=["upload", "microphone"],
317
+ label="Upload or Record Audio",
318
+ type="filepath",
319
+ elem_id="audio_input",
320
+ )
321
+ analyze_btn = gr.Button(
322
+ "افحص التلاوة", variant="primary", elem_id="analyze_btn"
323
+ )
324
+ output_html = gr.HTML(
325
+ label="نتيجة الفحص",
326
+ elem_id="output_html",
327
+ )
328
+
329
+ # Initial update of uthmani text
330
+ app.load(
331
+ update_uthmani_ref,
332
+ inputs=[sura_dropdown, aya_dropdown, start_idx, num_words],
333
+ outputs=uthmani_text,
334
+ )
335
+
336
+ # Update aya dropdown when sura changes and reset aya_idx to 1
337
+ sura_dropdown.change(
338
+ update_aya_dropdown, inputs=sura_dropdown, outputs=aya_dropdown
339
+ ).then(
340
+ update_uthmani_ref,
341
+ inputs=[sura_dropdown, aya_dropdown, start_idx, num_words],
342
+ outputs=uthmani_text,
343
+ )
344
+
345
+ # Update uthmani text when any parameter changes
346
+ for component in [aya_dropdown, start_idx, num_words]:
347
+ component.change(
348
+ update_uthmani_ref,
349
+ inputs=[sura_dropdown, aya_dropdown, start_idx, num_words],
350
+ outputs=uthmani_text,
351
+ )
352
+
353
+ # Process audio when button is clicked
354
+ analyze_btn.click(
355
+ process_audio,
356
+ inputs=[audio_input, sura_dropdown, aya_dropdown, start_idx, num_words],
357
+ outputs=output_html,
358
+ )
359
+
360
+ with gr.Tab("إعدادات المصحف - Moshaf Settings"):
361
+ gr.Markdown("# إعدادات خصائص المصحف")
362
+ gr.Markdown("قم بتعديل خصائص المصحف حسب التلاوة المطلوبة")
363
+
364
+ # Create settings inputs directly in the tab
365
+ settings_components = []
366
+ fields = MoshafAttributes.model_fields
367
+
368
+ # Create inputs for all required fields
369
+ for field_name in REQUIRED_MOSHAF_FIELDS:
370
+ field_info = fields[field_name]
371
+ input_component = create_gradio_input_for_field(
372
+ field_name, field_info, getattr(default_moshaf, field_name, None)
373
+ )
374
+ settings_components.append(input_component)
375
+ field_names.append(field_name)
376
+
377
+ # Save button and status message
378
+ with gr.Row():
379
+ save_btn = gr.Button("حفظ الإعدادات - Save Settings", variant="primary")
380
+ reset_btn = gr.Button("إعادة التعيين - Reset to Default")
381
+
382
+ status_message = gr.Markdown()
383
+
384
+ # Save settings event
385
+ save_btn.click(
386
+ update_moshaf_settings, inputs=settings_components, outputs=status_message
387
+ )
388
+
389
+ # Reset to default event
390
+ reset_btn.click(
391
+ reset_settings, inputs=[], outputs=settings_components + [status_message]
392
+ )
393
+
394
+
395
+ def main(app=app):
396
+ app.launch(server_name="0.0.0.0", share=True)
397
+
398
+
399
+ if __name__ == "__main__":
400
+ main()
401
+ # app.launch(server_name="0.0.0.0", share=True)
index.md ADDED
@@ -0,0 +1,44 @@
+ # Quran Muaalem overview
+
+ Quran Muaalem is the inference layer that **compares a recitation against a phonetic reference** and produces multi-level output: **phonemes** plus **tajweed articulation features (sifat)** for every phoneme group.
+
+ The main entry points:
+
+ - The `Muaalem` class in `src/quran_muaalem/inference.py`.
+ - The Gradio UI in `src/quran_muaalem/gradio_app.py`.
+
+ ## Why does this matter for researchers?
+
+ - The model does not merely transcribe; it produces a **sifat layer** (articulation features of the letters) that can be measured and analyzed.
+ - This enables comparative studies at a finer grain than traditional WER/PER.
+
+ ## Core inference path
+
+ Inside `Muaalem.__call__`:
+
+ 1. Tokenize the phonetic reference with `MultiLevelTokenizer`.
+ 2. Extract audio features with `AutoFeatureExtractor`.
+ 3. Run the `Wav2Vec2BertForMultilevelCTC` model.
+ 4. Decode with `phonemes_level_greedy_decode` and `multilevel_greedy_decode`.
+ 5. Collect the sifat of each phoneme group into `Sifa` and return a `MuaalemOutput`.
+
+ > **Note:** the phonetic reference is built with `quran_transcript.quran_phonetizer`.
+
+ ## القيود العملية
28
+
29
+ - معدل العينة المطلوب: **16 kHz**.
30
+ - جودة النتائج تعتمد على جودة المرجع (الرسم الصوتي) وجودة الصوت.
31
+ - قيم الاحتمالات (`probs`) ليست مُعايرة افتراضيًا.
32
+
33
+ ## أين تجد التفاصيل؟
34
+
35
+ - **واجهة بايثون**: شرح المدخلات والمخرجات والأمثلة.
36
+ - **المخرجات**: مخطط تفصيلي للـ `MuaalemOutput`.
37
+ - **المعمارية**: تفاصيل CTC متعدد المستويات.
38
+
39
+ ## ملفات أساسية
40
+
41
+ - `src/quran_muaalem/inference.py` — فئة النموذج ومسار الاستدلال.
42
+ - `src/quran_muaalem/decode.py` — فك الشيفرة والمحاذاة.
43
+ - `src/quran_muaalem/muaalem_typing.py` — تعريف المخرجات.
44
+ - `src/quran_muaalem/gradio_app.py` — واجهة المستخدم وإعدادات المصحف.
inference.cpython-312.pyc ADDED
Binary file (8.92 kB). View file
 
inference.py ADDED
@@ -0,0 +1,188 @@
1
+ import logging
2
+
3
+ from quran_transcript import chunck_phonemes, QuranPhoneticScriptOutput
4
+ from transformers import AutoFeatureExtractor
5
+ import torch
6
+ from numpy.typing import NDArray
7
+
8
+ from .modeling.multi_level_tokenizer import MultiLevelTokenizer
9
+ from .modeling.modeling_multi_level_ctc import Wav2Vec2BertForMultilevelCTC
10
+ from .decode import (
11
+ multilevel_greedy_decode,
12
+ phonemes_level_greedy_decode,
13
+ )
14
+ from .muaalem_typing import Unit, SingleUnit, Sifa, MuaalemOutput
15
+
16
+
17
+ def format_sifat(
18
+ level_to_units: dict[str, list[Unit]],
19
+ chunked_phonemes_batch: list[list[str]],
20
+ multi_level_tokenizer: MultiLevelTokenizer,
21
+ ) -> list[list[Sifa]]:
22
+ sifat_batch = []
23
+ for seq_idx in range(len(chunked_phonemes_batch)):
24
+ sifat = []
25
+ for idx, ph_group in enumerate(chunked_phonemes_batch[seq_idx]):
26
+ sifa_dict = {}
27
+ for level in level_to_units:
28
+ if level == "phonemes":
29
+ continue
30
+ sifa_idx = idx
31
+ if sifa_idx < len(level_to_units[level][seq_idx].ids):
32
+ label = int(level_to_units[level][seq_idx].ids[sifa_idx])
33
+ text = multi_level_tokenizer.sifat_to_en_vocab[level][label]
34
+ p = level_to_units[level][seq_idx].probs[sifa_idx]
35
+ sifa_dict[level] = SingleUnit(
36
+ text=text, prob=float(p), idx=int(label)
37
+ )
38
+ else:
39
+ logging.info(
40
+ f"Sequence `{seq_idx}` has a shorter level `{level}`; filling the missing sifa with `None`"
41
+ )
42
+ sifa_dict[level] = None
43
+ sifat.append(
44
+ Sifa(
45
+ phonemes_group=chunked_phonemes_batch[seq_idx][idx],
46
+ **sifa_dict,
47
+ )
48
+ )
49
+ sifat_batch.append(sifat)
50
+ return sifat_batch
51
+
52
+
53
+ class Muaalem:
54
+ def __init__(
55
+ self,
56
+ model_name_or_path: str = "obadx/muaalem-model-v3_2",
57
+ device: str = "cpu",
58
+ dtype=torch.bfloat16,
59
+ ):
60
+ """
61
+ Initializing Muallem Model
62
+
63
+ Args:
64
+ model_name_or_path: the huggingface model name or path
65
+ device: the device to run model on
66
+ dtype: the torch dtype. Default is `torch.bfloat16` as the model was trained on
67
+ """
68
+ self.device = device
69
+ self.dtype = dtype
70
+
71
+ self.model = Wav2Vec2BertForMultilevelCTC.from_pretrained(model_name_or_path)
72
+ self.multi_level_tokenizer = MultiLevelTokenizer(model_name_or_path)
73
+ self.processor = AutoFeatureExtractor.from_pretrained(model_name_or_path)
74
+
75
+ self.model.to(device, dtype=dtype)
76
+
77
+ @torch.no_grad()
78
+ def __call__(
79
+ self,
80
+ waves: list[list[float] | torch.FloatTensor | NDArray],
81
+ ref_quran_phonetic_script_list: list[QuranPhoneticScriptOutput],
82
+ sampling_rate: int,
83
+ ) -> list[MuaalemOutput]:
84
+ """Infrence Funcion for the Quran Muaalem Project
85
+
86
+ waves: input waves batch , seq_len with different formats described above
87
+ ref_quran_phonetic_script_list (list[QuranPhoneticScriptOutput]): list of the
88
+ phonetized ouput of `quran_transcript.quran_phonetizer` with `remove_space=True`
89
+
90
+ sampleing_rate (int): has to be 16000
91
+
92
+ Returns:
93
+ list[MuaalemOutput]:
94
+ A list of output objects, each containing phoneme predictions and their
95
+ phonetic features (sifat) for a processed input.
96
+
97
+ Each MuaalemOutput contains:
98
+ phonemes (Unit):
99
+ A dataclass representing the predicted phoneme sequence with:
100
+ text (str): Concatenated string of all phonemes.
101
+ probs (Union[torch.FloatTensor, list[float]]):
102
+ Confidence probabilities for each predicted phoneme.
103
+ ids (Union[torch.LongTensor, list[int]]):
104
+ Token IDs corresponding to each phoneme.
105
+
106
+ sifat (list[Sifa]):
107
+ A list of phonetic feature dataclasses (one per phoneme) with the
108
+ following optional properties (each is a SingleUnit or None):
109
+ - phonemes_group (str): the phonemes associated with the `sifa`
110
+ - hams_or_jahr (SingleUnit): either `hams` or `jahr`
111
+ - shidda_or_rakhawa (SingleUnit): either `shadeed`, `between`, or `rikhw`
112
+ - tafkheem_or_taqeeq (SingleUnit): either `mofakham`, `moraqaq`, or `low_mofakham`
113
+ - itbaq (SingleUnit): either `monfateh`, or `motbaq`
114
+ - safeer (SingleUnit): either `safeer`, or `no_safeer`
115
+ - qalqla (SingleUnit): eithr `moqalqal`, or `not_moqalqal`
116
+ - tikraar (SingleUnit): either `mokarar` or `not_mokarar`
117
+ - tafashie (SingleUnit): either `motafashie`, or `not_motafashie`
118
+ - istitala (SingleUnit): either `mostateel`, or `not_mostateel`
119
+ - ghonna (SingleUnit): either `maghnoon`, or `not_maghnoon`
120
+
121
+ Each SingleUnit in Sifa properties contains:
122
+ text (str): The feature's categorical label (e.g., "hams", "shidda").
123
+ prob (float): Confidence probability for this feature.
124
+ idx (int): Identifier for the feature class.
125
+ """
126
+
127
+ if sampling_rate != 16000:
128
+ raise ValueError(f"`sampling_rate` has to be 16000 got: `{sampling_rate}`")
129
+
130
+ # TODO: check input waves
131
+
132
+ # Tokanizing Ref
133
+ level_to_ref_ids = self.multi_level_tokenizer.tokenize(
134
+ [r.phonemes for r in ref_quran_phonetic_script_list],
135
+ [r.sifat for r in ref_quran_phonetic_script_list],
136
+ to_dict=True,
137
+ return_tensors="pt",
138
+ padding="longest",
139
+ )["input_ids"]
140
+
141
+ features = self.processor(
142
+ waves, sampling_rate=sampling_rate, return_tensors="pt"
143
+ )
144
+ features = {k: v.to(self.device, dtype=self.dtype) for k, v in features.items()}
145
+ outs = self.model(**features, return_dict=False)[0]
146
+
147
+ probs = {}
148
+ for level in outs:
149
+ probs[level] = (
150
+ torch.nn.functional.softmax(outs[level], dim=-1).cpu().to(torch.float32)
151
+ )
152
+
153
+ # Decoding only Phonemes Level
154
+ phonemes_units = phonemes_level_greedy_decode(
155
+ probs["phonemes"], self.multi_level_tokenizer.id_to_vocab["phonemes"]
156
+ )
157
+
158
+ chunked_phonemes_batch: list[list[str]] = []
159
+ for phonemes_unit in phonemes_units:
160
+ chunked_phonemes_batch.append(chunck_phonemes(phonemes_unit.text))
161
+
162
+ level_to_units = multilevel_greedy_decode(
163
+ level_to_probs=probs,
164
+ level_to_id_to_vocab=self.multi_level_tokenizer.id_to_vocab,
165
+ level_to_ref_ids=level_to_ref_ids,
166
+ chunked_phonemes_batch=chunked_phonemes_batch,
167
+ ref_chuncked_phonemes_batch=[
168
+ [s.phonemes for s in r.sifat] for r in ref_quran_phonetic_script_list
169
+ ],
170
+ phonemes_units=phonemes_units,
171
+ )
172
+
173
+ sifat_batch: list[list[Sifa]] = format_sifat(
174
+ level_to_units,
175
+ chunked_phonemes_batch,
176
+ self.multi_level_tokenizer,
177
+ )
178
+
179
+ outs = []
180
+ # looping over the batch
181
+ for idx in range(len(level_to_units["phonemes"])):
182
+ outs.append(
183
+ MuaalemOutput(
184
+ phonemes=level_to_units["phonemes"][idx],
185
+ sifat=sifat_batch[idx],
186
+ )
187
+ )
188
+ return outs
license.md ADDED
@@ -0,0 +1,5 @@
+ # License
+
+ The project is licensed under the MIT License.
+
+ See `LICENSE` at the repository root for the full text.
modeling_multi_level_ctc.cpython-312.pyc ADDED
Binary file (6.77 kB). View file
 
modeling_multi_level_ctc.py ADDED
@@ -0,0 +1,148 @@
1
+ from typing import Optional, Union
2
+
3
+ from transformers.models.wav2vec2_bert.modeling_wav2vec2_bert import (
4
+ Wav2Vec2BertPreTrainedModel,
5
+ Wav2Vec2BertModel,
6
+ _HIDDEN_STATES_START_POSITION,
7
+ )
8
+ from transformers.utils import auto_docstring
9
+ from transformers.modeling_outputs import CausalLMOutput
10
+ import torch
11
+ from torch import nn
12
+
13
+ from .configuration_multi_level_ctc import Wav2Vec2BertForMultilevelCTCConfig
14
+
15
+
16
+ class Wav2Vec2BertForMultilevelCTC(Wav2Vec2BertPreTrainedModel):
17
+ config_class = Wav2Vec2BertForMultilevelCTCConfig
18
+
19
+ def __init__(self, config):
20
+ super().__init__(config)
21
+
22
+ self.wav2vec2_bert = Wav2Vec2BertModel(config)
23
+ self.dropout = nn.Dropout(config.final_dropout)
24
+
25
+ if config.level_to_vocab_size == {}:
26
+ raise ValueError(
27
+ f"You are trying to instantiate {self.__class__} with a configuration that "
28
+ "does not define the vocabulary size of the language model head. Please "
29
+ "instantiate the model as follows: `Wav2Vec2BertForCTC.from_pretrained(..., level_to_vocab_size=level_to_vocab_size)`. "
30
+ "or define `level_to_vocab_size` of your model's configuration."
31
+ )
32
+ output_hidden_size = (
33
+ config.output_hidden_size
34
+ if hasattr(config, "add_adapter") and config.add_adapter
35
+ else config.hidden_size
36
+ )
37
+ self.level_to_lm_head = nn.ModuleDict(
38
+ {
39
+ level: nn.Linear(output_hidden_size, vocab_size)
40
+ for level, vocab_size in config.level_to_vocab_size.items()
41
+ }
42
+ )
43
+
44
+ # Initialize weights and apply final processing
45
+ self.post_init()
46
+
47
+ @auto_docstring
48
+ def forward(
49
+ self,
50
+ input_features: Optional[torch.Tensor],
51
+ attention_mask: Optional[torch.Tensor] = None,
52
+ output_attentions: Optional[bool] = None,
53
+ output_hidden_states: Optional[bool] = None,
54
+ return_dict: Optional[bool] = None,
55
+ labels: Optional[dict[str, torch.Tensor]] = None,
56
+ ) -> Union[tuple, CausalLMOutput]:
57
+ r"""
58
+ labels (dict[`str`, `torch.LongTensor`] level_name to its labels of shape `(batch_size, target_length)`, *optional*):
59
+ Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to
60
+ the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`.
61
+ All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ...,
62
+ config.vocab_size - 1]`.
63
+ """
64
+ if labels is not None:
65
+ if not isinstance(labels, dict):
66
+ raise ValueError(
67
+ f"Label has to be a dict for level to its tartget labels got `{type(labels)}`"
68
+ )
69
+ for level in labels:
70
+ if labels[level].max() >= self.config.level_to_vocab_size[level]:
71
+ raise ValueError(
72
+ f"Label values must be <= vocab_size: {self.config.level_to_vocab_size[level]} for level: `{level}`"
73
+ )
74
+
75
+ return_dict = (
76
+ return_dict if return_dict is not None else self.config.use_return_dict
77
+ )
78
+
79
+ outputs = self.wav2vec2_bert(
80
+ input_features,
81
+ attention_mask=attention_mask,
82
+ output_attentions=output_attentions,
83
+ output_hidden_states=output_hidden_states,
84
+ return_dict=return_dict,
85
+ )
86
+
87
+ hidden_states = outputs[0]
88
+ hidden_states = self.dropout(hidden_states)
89
+
90
+ level_to_logits = {}
91
+ for level in self.level_to_lm_head:
92
+ level_to_logits[level] = self.level_to_lm_head[level](hidden_states)
93
+
94
+ loss = None
95
+ if labels is not None:
96
+ # retrieve loss input_lengths from attention_mask
97
+ attention_mask = (
98
+ attention_mask
99
+ if attention_mask is not None
100
+ else torch.ones(
101
+ input_features.shape[:2],
102
+ device=input_features.device,
103
+ dtype=torch.long,
104
+ )
105
+ )
106
+ input_lengths = self._get_feat_extract_output_lengths(
107
+ attention_mask.sum([-1])
108
+ ).to(torch.long)
109
+
110
+ loss = 0.0
111
+ for level in labels:
112
+ # assuming that padded tokens are filled with -100
113
+ # when not being attended to
114
+ labels_mask = labels[level] >= 0
115
+ target_lengths = labels_mask.sum(-1)
116
+ flattened_targets = labels[level].masked_select(labels_mask)
117
+
118
+ # ctc_loss doesn't support fp16
119
+ log_probs = nn.functional.log_softmax(
120
+ level_to_logits[level], dim=-1, dtype=torch.float32
121
+ ).transpose(0, 1)
122
+
123
+ with torch.backends.cudnn.flags(enabled=False):
124
+ loss += self.config.level_to_loss_weight[
125
+ level
126
+ ] * nn.functional.ctc_loss(
127
+ log_probs,
128
+ flattened_targets,
129
+ input_lengths,
130
+ target_lengths,
131
+ blank=self.config.pad_token_id,
132
+ reduction=self.config.ctc_loss_reduction,
133
+ zero_infinity=self.config.ctc_zero_infinity,
134
+ )
135
+
136
+ if not return_dict:
137
+ output = (level_to_logits,) + outputs[_HIDDEN_STATES_START_POSITION:]
138
+ return ((loss,) + output) if loss is not None else output
139
+
140
+ return CausalLMOutput(
141
+ loss=loss,
142
+ logits=level_to_logits,
143
+ hidden_states=outputs.hidden_states,
144
+ attentions=outputs.attentions,
145
+ )
146
+
147
+
148
+ __all__ = ["Wav2Vec2BertForMultilevelCTC"]
muaalem_typing.cpython-312.pyc ADDED
Binary file (3.55 kB). View file
 
muaalem_typing.py ADDED
@@ -0,0 +1,78 @@
1
+ from dataclasses import dataclass
2
+ import torch
3
+
4
+
5
+ @dataclass
6
+ class Unit:
7
+ """
8
+ probs: 1D tensors
9
+ """
10
+
11
+ text: str
12
+ probs: torch.FloatTensor | list[float]
13
+ ids: torch.LongTensor | list[int]
14
+
15
+
16
+ @dataclass
17
+ class SingleUnit:
18
+ """
19
+ A single categorical prediction with:
+ text (str): The predicted label (e.g., "hams").
+ prob (float): Confidence probability for this prediction.
+ idx (int): Class id of the predicted label.
25
+
26
+ """
27
+
28
+ text: str
29
+ prob: float
30
+ idx: int
31
+
32
+
33
+ @dataclass
34
+ class Sifa:
35
+ """
36
+ following optional properties (each is a SingleUnit or None):
37
+ - phonemes_group (str): the phonemes associated with the `sifa`
38
+ - hams_or_jahr (SingleUnit): either `hams` or `jahr`
39
+ - shidda_or_rakhawa (SingleUnit): either `shadeed`, `between`, or `rikhw`
40
+ - tafkheem_or_taqeeq (SingleUnit): either `mofakham`, `moraqaq`, or `low_mofakham`
41
+ - itbaq (SingleUnit): either `monfateh`, or `motbaq`
42
+ - safeer (SingleUnit): either `safeer`, or `no_safeer`
43
+ - qalqla (SingleUnit): eithr `moqalqal`, or `not_moqalqal`
44
+ - tikraar (SingleUnit): either `mokarar` or `not_mokarar`
45
+ - tafashie (SingleUnit): either `motafashie`, or `not_motafashie`
46
+ - istitala (SingleUnit): either `mostateel`, or `not_mostateel`
47
+ - ghonna (SingleUnit): either `maghnoon`, or `not_maghnoon`
48
+
49
+ Each SingleUnit in Sifa properties contains:
50
+ text (str): The feature's categorical label (e.g., "hams", "shidda").
51
+ prob (float): Confidence probability for this feature.
52
+ idx (int): Identifier for the feature class.
53
+
54
+ """
55
+
56
+ phonemes_group: str
57
+ hams_or_jahr: SingleUnit | None
58
+ shidda_or_rakhawa: SingleUnit | None
59
+ tafkheem_or_taqeeq: SingleUnit | None
60
+ itbaq: SingleUnit | None
61
+ safeer: SingleUnit | None
62
+ qalqla: SingleUnit | None
63
+ tikraar: SingleUnit | None
64
+ tafashie: SingleUnit | None
65
+ istitala: SingleUnit | None
66
+ ghonna: SingleUnit | None
67
+
68
+
69
+ @dataclass
70
+ class MuaalemOutput:
71
+ """
72
+ phonemes (Unit): The predicted phoneme sequence (text, per-phoneme probs and ids).
+ sifat (list[Sifa]): One Sifa per phoneme group with its predicted articulation features.
75
+ """
76
+
77
+ phonemes: Unit
78
+ sifat: list[Sifa]
multi_level_tokenizer.cpython-312.pyc ADDED
Binary file (6.24 kB). View file
 
multi_level_tokenizer.py ADDED
@@ -0,0 +1,121 @@
+ from typing import get_origin, Literal, Any
+
+ from quran_transcript import SifaOutput, quran_phonetizer
+ from transformers import Wav2Vec2CTCTokenizer
+
+ from .vocab import PAD_TOKEN, PAD_TOKEN_IDX, SIFAT_ATTR_TO_ARABIC, SIFAT_ATTR_TO_ENGLISH
+
+
+ def add_zero_between(L, x=PAD_TOKEN_IDX):
+     out = []
+     for i, item in enumerate(L):
+         out.append(item)
+         if i < len(L) - 1:  # don't add a separator after the last element
+             out.append(x)
+     return out
+
+
+ class MultiLevelTokenizer:
+     def __init__(self, model_name_or_path: str):
+         self.levels = ["phonemes"]
+         for fieldname, fieldinfo in SifaOutput.model_fields.items():
+             if get_origin(fieldinfo.annotation) == Literal:
+                 self.levels.append(fieldname)
+
+         self.level_to_tokenizer = {}
+         for level in self.levels:
+             self.level_to_tokenizer[level] = Wav2Vec2CTCTokenizer.from_pretrained(
+                 model_name_or_path, pad_token=PAD_TOKEN, target_lang=level
+             )
+
+         self.level_to_id_vocab = self.get_level_to_id_to_voab()
+         self.sifat_level_to_id_to_en_vocab = self.get_sifat_levels_to_en_name()
+
+     def get_tokenizer(self):
+         return self.level_to_tokenizer["phonemes"]
+
+     @property
+     def vocab(self):
+         return self.get_tokenizer().vocab
+
+     @property
+     def id_to_vocab(self):
+         return self.level_to_id_vocab
+
+     @property
+     def sifat_to_en_vocab(self):
+         return self.sifat_level_to_id_to_en_vocab
+
+     def tokenize(
+         self,
+         phonetic_script: list[str] | str,
+         sifat: list[list[SifaOutput | dict]] | list[SifaOutput | dict],
+         to_dict=False,
+         **kwargs,
+     ) -> dict:
+         if isinstance(phonetic_script, str):
+             phonetic_script = [phonetic_script]
+         if not isinstance(sifat[0], list):
+             sifat = [sifat]
+
+         if isinstance(sifat[0][0], dict):
+             sifat = [[SifaOutput(**s) for s in inner_list] for inner_list in sifat]
+
+         level_to_text_list = {}
+         for level in self.levels:
+             if level == "phonemes":
+                 text_list = phonetic_script
+             else:
+                 text_list = [
+                     "".join(
+                         [SIFAT_ATTR_TO_ARABIC[getattr(s, level)] for s in inner_list]
+                     )
+                     for inner_list in sifat
+                 ]
+             level_to_text_list[level] = text_list
+
+         level_to_tokenized = {}
+         for level in self.levels:
+             level_to_tokenized[level] = self.level_to_tokenizer[level](
+                 level_to_text_list[level], **kwargs
+             )
+
+         if to_dict:
+             out_dict = {"input_ids": {}, "attention_mask": {}}
+             for level in level_to_tokenized:
+                 for k in out_dict:
+                     out_dict[k][level] = level_to_tokenized[level][k]
+             return out_dict
+         return level_to_tokenized
+
+     def decode(
+         self, level_to_input_ids: dict[str, Any], place_zeros_in_between=False
+     ) -> dict[str, list[str] | str]:
+         level_to_decoded_outs = {}
+         for level in level_to_input_ids:
+             input_ids = level_to_input_ids[level]
+             if place_zeros_in_between:
+                 input_ids = [add_zero_between(ids) for ids in input_ids]
+             level_to_decoded_outs[level] = self.level_to_tokenizer[level].batch_decode(
+                 input_ids,
+             )
+         return level_to_decoded_outs
+
+     def get_level_to_id_to_voab(self):
+         vocab = self.get_tokenizer().vocab
+         level_to_ids_to_vocab = {}
+         for level in vocab:
+             level_to_ids_to_vocab[level] = {v: k for k, v in vocab[level].items()}
+         return level_to_ids_to_vocab
+
+     def get_sifat_levels_to_en_name(self):
+         level_to_id_to_vocab = self.get_level_to_id_to_voab()
+         level_to_id_to_en_vocab = {}
+         for level in level_to_id_to_vocab:
+             if level == "phonemes":
+                 continue
+             level_to_id_to_en_vocab[level] = {
+                 k: SIFAT_ATTR_TO_ENGLISH[v] if k != PAD_TOKEN_IDX else PAD_TOKEN
+                 for k, v in level_to_id_to_vocab[level].items()
+             }
+         return level_to_id_to_en_vocab
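
`add_zero_between` interleaves a pad/blank id between consecutive token ids, so that CTC decoding does not collapse repeated labels. The same logic as a standalone sketch (`interleave_pad` is a renamed stand-in with the pad id passed explicitly):

```python
def interleave_pad(ids, pad_idx):
    # [5, 5, 7] with pad 0 -> [5, 0, 5, 0, 7]: a pad id between every
    # pair of neighbours, never after the last element.
    out = []
    for i, item in enumerate(ids):
        out.append(item)
        if i < len(ids) - 1:
            out.append(pad_idx)
    return out


print(interleave_pad([5, 5, 7], 0))  # -> [5, 0, 5, 0, 7]
```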
mutli-level-ctc.png ADDED
output.md ADDED
@@ -0,0 +1,92 @@
+ # Outputs and Explanation
+
+ Inference returns a list of `MuaalemOutput` (`src/quran_muaalem/muaalem_typing.py`). Each element contains:
+
+ - `phonemes`: a `Unit` object with the phoneme text, probabilities, and ids.
+ - `sifat`: a list of `Sifa` (one entry per phoneme group) with optional attributes.
+
+ ## Output schema (conceptual)
+
+ ```text
+ MuaalemOutput
+     phonemes: Unit
+     sifat: list[Sifa]
+
+ Unit
+     text: str
+     probs: Tensor | list[float]
+     ids: Tensor | list[int]
+
+ Sifa
+     phonemes_group: str
+     hams_or_jahr: SingleUnit | None
+     shidda_or_rakhawa: SingleUnit | None
+     tafkheem_or_taqeeq: SingleUnit | None
+     itbaq: SingleUnit | None
+     safeer: SingleUnit | None
+     qalqla: SingleUnit | None
+     tikraar: SingleUnit | None
+     tafashie: SingleUnit | None
+     istitala: SingleUnit | None
+     ghonna: SingleUnit | None
+
+ SingleUnit
+     text: str
+     prob: float
+     idx: int
+ ```
+
+ ## Simplified example
+
+ ```json
+ {
+     "phonemes": {
+         "text": "بِسْمِٱللَّهِ...",
+         "probs": [0.98, 0.93, 0.87],
+         "ids": [12, 7, 31]
+     },
+     "sifat": [
+         {
+             "phonemes_group": "بِ",
+             "hams_or_jahr": {"text": "jahr", "prob": 0.99, "idx": 1},
+             "shidda_or_rakhawa": {"text": "shadeed", "prob": 0.95, "idx": 2},
+             "tafkheem_or_taqeeq": {"text": "moraqaq", "prob": 0.94, "idx": 1},
+             "itbaq": {"text": "monfateh", "prob": 0.92, "idx": 1},
+             "safeer": {"text": "no_safeer", "prob": 0.99, "idx": 0},
+             "qalqla": {"text": "not_moqalqal", "prob": 0.99, "idx": 0},
+             "tikraar": {"text": "not_mokarar", "prob": 0.99, "idx": 0},
+             "tafashie": {"text": "not_motafashie", "prob": 0.99, "idx": 0},
+             "istitala": {"text": "not_mostateel", "prob": 0.99, "idx": 0},
+             "ghonna": {"text": "not_maghnoon", "prob": 0.99, "idx": 0}
+         }
+     ]
+ }
+ ```
+
+ > Notes:
+ > - `probs` come from the CTC-specific softmax and are not necessarily calibrated.
+ > - Some `Sifa` fields may be `None` if an alignment mismatch occurs.
+
+ ## Comparing the prediction with the reference
+
+ There are two modules for presenting results:
+
+ - `src/quran_muaalem/explain.py` renders a table in the terminal using `rich`.
+     - `explain_for_terminal(...)` builds a diff between the predicted and reference phonemes, then prints a table.
+ - `src/quran_muaalem/explain_gradio.py` generates HTML for a Gradio interface.
+     - `explain_for_gradio(...)` shows a coloured phoneme diff and an attributes table.
+
+ Both use `diff-match-patch` to split insertions, deletions, and partial differences between the phonemes.
+
+ ## `Sifa` fields
+
+ - `hams_or_jahr`
+ - `shidda_or_rakhawa`
+ - `tafkheem_or_taqeeq`
+ - `itbaq`
+ - `safeer`
+ - `qalqla`
+ - `tikraar`
+ - `tafashie`
+ - `istitala`
+ - `ghonna`
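
The phoneme-diff step described in output.md can be sketched with the standard library's `difflib` in place of `diff-match-patch` (which the project actually uses); `phoneme_diff` is a hypothetical helper, not part of the repo:

```python
import difflib


def phoneme_diff(reference: str, predicted: str):
    # Classify each region of the two phoneme strings as equal,
    # replace, delete, or insert, like the coloured diff views.
    sm = difflib.SequenceMatcher(a=reference, b=predicted, autojunk=False)
    return [
        (tag, reference[i1:i2], predicted[j1:j2])
        for tag, i1, i2, j1, j2 in sm.get_opcodes()
    ]


for tag, ref, pred in phoneme_diff("bismi", "bisme"):
    print(tag, repr(ref), repr(pred))
# equal 'bism' 'bism'
# replace 'i' 'e'
```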
package-lock.json ADDED
@@ -0,0 +1,2471 @@
1
+ {
2
+ "name": "quran-muaalem-docs",
3
+ "lockfileVersion": 3,
4
+ "requires": true,
5
+ "packages": {
6
+ "": {
7
+ "name": "quran-muaalem-docs",
8
+ "devDependencies": {
9
+ "vitepress": "^1.6.4"
10
+ }
11
+ },
12
+ "node_modules/@algolia/abtesting": {
13
+ "version": "1.12.2",
14
+ "resolved": "https://registry.npmjs.org/@algolia/abtesting/-/abtesting-1.12.2.tgz",
15
+ "integrity": "sha512-oWknd6wpfNrmRcH0vzed3UPX0i17o4kYLM5OMITyMVM2xLgaRbIafoxL0e8mcrNNb0iORCJA0evnNDKRYth5WQ==",
16
+ "dev": true,
17
+ "license": "MIT",
18
+ "dependencies": {
19
+ "@algolia/client-common": "5.46.2",
20
+ "@algolia/requester-browser-xhr": "5.46.2",
21
+ "@algolia/requester-fetch": "5.46.2",
22
+ "@algolia/requester-node-http": "5.46.2"
23
+ },
24
+ "engines": {
25
+ "node": ">= 14.0.0"
26
+ }
27
+ },
28
+ "node_modules/@algolia/autocomplete-core": {
29
+ "version": "1.17.7",
30
+ "resolved": "https://registry.npmjs.org/@algolia/autocomplete-core/-/autocomplete-core-1.17.7.tgz",
31
+ "integrity": "sha512-BjiPOW6ks90UKl7TwMv7oNQMnzU+t/wk9mgIDi6b1tXpUek7MW0lbNOUHpvam9pe3lVCf4xPFT+lK7s+e+fs7Q==",
32
+ "dev": true,
33
+ "license": "MIT",
34
+ "dependencies": {
35
+ "@algolia/autocomplete-plugin-algolia-insights": "1.17.7",
36
+ "@algolia/autocomplete-shared": "1.17.7"
37
+ }
38
+ },
39
+ "node_modules/@algolia/autocomplete-plugin-algolia-insights": {
40
+ "version": "1.17.7",
41
+ "resolved": "https://registry.npmjs.org/@algolia/autocomplete-plugin-algolia-insights/-/autocomplete-plugin-algolia-insights-1.17.7.tgz",
42
+ "integrity": "sha512-Jca5Ude6yUOuyzjnz57og7Et3aXjbwCSDf/8onLHSQgw1qW3ALl9mrMWaXb5FmPVkV3EtkD2F/+NkT6VHyPu9A==",
43
+ "dev": true,
44
+ "license": "MIT",
45
+ "dependencies": {
46
+ "@algolia/autocomplete-shared": "1.17.7"
47
+ },
48
+ "peerDependencies": {
49
+ "search-insights": ">= 1 < 3"
50
+ }
51
+ },
52
+ "node_modules/@algolia/autocomplete-preset-algolia": {
53
+ "version": "1.17.7",
54
+ "resolved": "https://registry.npmjs.org/@algolia/autocomplete-preset-algolia/-/autocomplete-preset-algolia-1.17.7.tgz",
55
+ "integrity": "sha512-ggOQ950+nwbWROq2MOCIL71RE0DdQZsceqrg32UqnhDz8FlO9rL8ONHNsI2R1MH0tkgVIDKI/D0sMiUchsFdWA==",
56
+ "dev": true,
57
+ "license": "MIT",
58
+ "dependencies": {
59
+ "@algolia/autocomplete-shared": "1.17.7"
60
+ },
61
+ "peerDependencies": {
62
+ "@algolia/client-search": ">= 4.9.1 < 6",
63
+ "algoliasearch": ">= 4.9.1 < 6"
64
+ }
65
+ },
66
+ "node_modules/@algolia/autocomplete-shared": {
67
+ "version": "1.17.7",
68
+ "resolved": "https://registry.npmjs.org/@algolia/autocomplete-shared/-/autocomplete-shared-1.17.7.tgz",
69
+ "integrity": "sha512-o/1Vurr42U/qskRSuhBH+VKxMvkkUVTLU6WZQr+L5lGZZLYWyhdzWjW0iGXY7EkwRTjBqvN2EsR81yCTGV/kmg==",
70
+ "dev": true,
71
+ "license": "MIT",
72
+ "peerDependencies": {
73
+ "@algolia/client-search": ">= 4.9.1 < 6",
74
+ "algoliasearch": ">= 4.9.1 < 6"
75
+ }
76
+ },
77
+ "node_modules/@algolia/client-abtesting": {
78
+ "version": "5.46.2",
79
+ "resolved": "https://registry.npmjs.org/@algolia/client-abtesting/-/client-abtesting-5.46.2.tgz",
80
+ "integrity": "sha512-oRSUHbylGIuxrlzdPA8FPJuwrLLRavOhAmFGgdAvMcX47XsyM+IOGa9tc7/K5SPvBqn4nhppOCEz7BrzOPWc4A==",
81
+ "dev": true,
82
+ "license": "MIT",
83
+ "dependencies": {
84
+ "@algolia/client-common": "5.46.2",
85
+ "@algolia/requester-browser-xhr": "5.46.2",
86
+ "@algolia/requester-fetch": "5.46.2",
87
+ "@algolia/requester-node-http": "5.46.2"
88
+ },
89
+ "engines": {
90
+ "node": ">= 14.0.0"
91
+ }
92
+ },
93
+ "node_modules/@algolia/client-analytics": {
94
+ "version": "5.46.2",
95
+ "resolved": "https://registry.npmjs.org/@algolia/client-analytics/-/client-analytics-5.46.2.tgz",
96
+ "integrity": "sha512-EPBN2Oruw0maWOF4OgGPfioTvd+gmiNwx0HmD9IgmlS+l75DatcBkKOPNJN+0z3wBQWUO5oq602ATxIfmTQ8bA==",
97
+ "dev": true,
98
+ "license": "MIT",
99
+ "dependencies": {
100
+ "@algolia/client-common": "5.46.2",
101
+ "@algolia/requester-browser-xhr": "5.46.2",
102
+ "@algolia/requester-fetch": "5.46.2",
103
+ "@algolia/requester-node-http": "5.46.2"
104
+ },
105
+ "engines": {
106
+ "node": ">= 14.0.0"
107
+ }
108
+ },
109
+ "node_modules/@algolia/client-common": {
110
+ "version": "5.46.2",
111
+ "resolved": "https://registry.npmjs.org/@algolia/client-common/-/client-common-5.46.2.tgz",
112
+ "integrity": "sha512-Hj8gswSJNKZ0oyd0wWissqyasm+wTz1oIsv5ZmLarzOZAp3vFEda8bpDQ8PUhO+DfkbiLyVnAxsPe4cGzWtqkg==",
113
+ "dev": true,
114
+ "license": "MIT",
115
+ "engines": {
116
+ "node": ">= 14.0.0"
117
+ }
118
+ },
119
+ "node_modules/@algolia/client-insights": {
120
+ "version": "5.46.2",
121
+ "resolved": "https://registry.npmjs.org/@algolia/client-insights/-/client-insights-5.46.2.tgz",
122
+ "integrity": "sha512-6dBZko2jt8FmQcHCbmNLB0kCV079Mx/DJcySTL3wirgDBUH7xhY1pOuUTLMiGkqM5D8moVZTvTdRKZUJRkrwBA==",
123
+ "dev": true,
124
+ "license": "MIT",
125
+ "dependencies": {
126
+ "@algolia/client-common": "5.46.2",
127
+ "@algolia/requester-browser-xhr": "5.46.2",
128
+ "@algolia/requester-fetch": "5.46.2",
129
+ "@algolia/requester-node-http": "5.46.2"
130
+ },
131
+ "engines": {
132
+ "node": ">= 14.0.0"
133
+ }
134
+ },
135
+ "node_modules/@algolia/client-personalization": {
136
+ "version": "5.46.2",
137
+ "resolved": "https://registry.npmjs.org/@algolia/client-personalization/-/client-personalization-5.46.2.tgz",
138
+ "integrity": "sha512-1waE2Uqh/PHNeDXGn/PM/WrmYOBiUGSVxAWqiJIj73jqPqvfzZgzdakHscIVaDl6Cp+j5dwjsZ5LCgaUr6DtmA==",
139
+ "dev": true,
140
+ "license": "MIT",
141
+ "dependencies": {
142
+ "@algolia/client-common": "5.46.2",
143
+ "@algolia/requester-browser-xhr": "5.46.2",
144
+ "@algolia/requester-fetch": "5.46.2",
145
+ "@algolia/requester-node-http": "5.46.2"
146
+ },
147
+ "engines": {
148
+ "node": ">= 14.0.0"
149
+ }
150
+ },
151
+ "node_modules/@algolia/client-query-suggestions": {
152
+ "version": "5.46.2",
153
+ "resolved": "https://registry.npmjs.org/@algolia/client-query-suggestions/-/client-query-suggestions-5.46.2.tgz",
154
+ "integrity": "sha512-EgOzTZkyDcNL6DV0V/24+oBJ+hKo0wNgyrOX/mePBM9bc9huHxIY2352sXmoZ648JXXY2x//V1kropF/Spx83w==",
155
+ "dev": true,
156
+ "license": "MIT",
157
+ "dependencies": {
158
+ "@algolia/client-common": "5.46.2",
159
+ "@algolia/requester-browser-xhr": "5.46.2",
160
+ "@algolia/requester-fetch": "5.46.2",
161
+ "@algolia/requester-node-http": "5.46.2"
162
+ },
163
+ "engines": {
164
+ "node": ">= 14.0.0"
165
+ }
166
+ },
167
+ "node_modules/@algolia/client-search": {
168
+ "version": "5.46.2",
169
+ "resolved": "https://registry.npmjs.org/@algolia/client-search/-/client-search-5.46.2.tgz",
170
+ "integrity": "sha512-ZsOJqu4HOG5BlvIFnMU0YKjQ9ZI6r3C31dg2jk5kMWPSdhJpYL9xa5hEe7aieE+707dXeMI4ej3diy6mXdZpgA==",
171
+ "dev": true,
172
+ "license": "MIT",
173
+ "peer": true,
174
+ "dependencies": {
175
+ "@algolia/client-common": "5.46.2",
176
+ "@algolia/requester-browser-xhr": "5.46.2",
177
+ "@algolia/requester-fetch": "5.46.2",
178
+ "@algolia/requester-node-http": "5.46.2"
179
+ },
180
+ "engines": {
181
+ "node": ">= 14.0.0"
182
+ }
183
+ },
184
+ "node_modules/@algolia/ingestion": {
185
+ "version": "1.46.2",
186
+ "resolved": "https://registry.npmjs.org/@algolia/ingestion/-/ingestion-1.46.2.tgz",
187
+ "integrity": "sha512-1Uw2OslTWiOFDtt83y0bGiErJYy5MizadV0nHnOoHFWMoDqWW0kQoMFI65pXqRSkVvit5zjXSLik2xMiyQJDWQ==",
188
+ "dev": true,
189
+ "license": "MIT",
190
+ "dependencies": {
191
+ "@algolia/client-common": "5.46.2",
192
+ "@algolia/requester-browser-xhr": "5.46.2",
193
+ "@algolia/requester-fetch": "5.46.2",
194
+ "@algolia/requester-node-http": "5.46.2"
195
+ },
196
+ "engines": {
197
+ "node": ">= 14.0.0"
198
+ }
199
+ },
200
+ "node_modules/@algolia/monitoring": {
201
+ "version": "1.46.2",
202
+ "resolved": "https://registry.npmjs.org/@algolia/monitoring/-/monitoring-1.46.2.tgz",
203
+ "integrity": "sha512-xk9f+DPtNcddWN6E7n1hyNNsATBCHIqAvVGG2EAGHJc4AFYL18uM/kMTiOKXE/LKDPyy1JhIerrh9oYb7RBrgw==",
204
+ "dev": true,
205
+ "license": "MIT",
206
+ "dependencies": {
207
+ "@algolia/client-common": "5.46.2",
208
+ "@algolia/requester-browser-xhr": "5.46.2",
209
+ "@algolia/requester-fetch": "5.46.2",
210
+ "@algolia/requester-node-http": "5.46.2"
211
+ },
212
+ "engines": {
213
+ "node": ">= 14.0.0"
214
+ }
215
+ },
216
+ "node_modules/@algolia/recommend": {
217
+ "version": "5.46.2",
218
+ "resolved": "https://registry.npmjs.org/@algolia/recommend/-/recommend-5.46.2.tgz",
219
+ "integrity": "sha512-NApbTPj9LxGzNw4dYnZmj2BoXiAc8NmbbH6qBNzQgXklGklt/xldTvu+FACN6ltFsTzoNU6j2mWNlHQTKGC5+Q==",
220
+ "dev": true,
221
+ "license": "MIT",
222
+ "dependencies": {
223
+ "@algolia/client-common": "5.46.2",
224
+ "@algolia/requester-browser-xhr": "5.46.2",
225
+ "@algolia/requester-fetch": "5.46.2",
226
+ "@algolia/requester-node-http": "5.46.2"
227
+ },
228
+ "engines": {
229
+ "node": ">= 14.0.0"
230
+ }
231
+ },
232
+ "node_modules/@algolia/requester-browser-xhr": {
233
+ "version": "5.46.2",
234
+ "resolved": "https://registry.npmjs.org/@algolia/requester-browser-xhr/-/requester-browser-xhr-5.46.2.tgz",
235
+ "integrity": "sha512-ekotpCwpSp033DIIrsTpYlGUCF6momkgupRV/FA3m62SreTSZUKjgK6VTNyG7TtYfq9YFm/pnh65bATP/ZWJEg==",
236
+ "dev": true,
237
+ "license": "MIT",
238
+ "dependencies": {
239
+ "@algolia/client-common": "5.46.2"
240
+ },
241
+ "engines": {
242
+ "node": ">= 14.0.0"
243
+ }
244
+ },
245
+ "node_modules/@algolia/requester-fetch": {
246
+ "version": "5.46.2",
247
+ "resolved": "https://registry.npmjs.org/@algolia/requester-fetch/-/requester-fetch-5.46.2.tgz",
248
+ "integrity": "sha512-gKE+ZFi/6y7saTr34wS0SqYFDcjHW4Wminv8PDZEi0/mE99+hSrbKgJWxo2ztb5eqGirQTgIh1AMVacGGWM1iw==",
249
+ "dev": true,
250
+ "license": "MIT",
251
+ "dependencies": {
252
+ "@algolia/client-common": "5.46.2"
253
+ },
254
+ "engines": {
255
+ "node": ">= 14.0.0"
256
+ }
257
+ },
258
+ "node_modules/@algolia/requester-node-http": {
259
+ "version": "5.46.2",
260
+ "resolved": "https://registry.npmjs.org/@algolia/requester-node-http/-/requester-node-http-5.46.2.tgz",
261
+ "integrity": "sha512-ciPihkletp7ttweJ8Zt+GukSVLp2ANJHU+9ttiSxsJZThXc4Y2yJ8HGVWesW5jN1zrsZsezN71KrMx/iZsOYpg==",
262
+ "dev": true,
263
+ "license": "MIT",
264
+ "dependencies": {
265
+ "@algolia/client-common": "5.46.2"
266
+ },
267
+ "engines": {
268
+ "node": ">= 14.0.0"
269
+ }
270
+ },
271
+ "node_modules/@babel/helper-string-parser": {
272
+ "version": "7.27.1",
273
+ "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz",
274
+ "integrity": "sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==",
275
+ "dev": true,
276
+ "license": "MIT",
277
+ "engines": {
278
+ "node": ">=6.9.0"
279
+ }
280
+ },
281
+ "node_modules/@babel/helper-validator-identifier": {
282
+ "version": "7.28.5",
283
+ "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.28.5.tgz",
284
+ "integrity": "sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q==",
285
+ "dev": true,
286
+ "license": "MIT",
287
+ "engines": {
288
+ "node": ">=6.9.0"
289
+ }
290
+ },
291
+ "node_modules/@babel/parser": {
292
+ "version": "7.28.5",
293
+ "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.28.5.tgz",
294
+ "integrity": "sha512-KKBU1VGYR7ORr3At5HAtUQ+TV3SzRCXmA/8OdDZiLDBIZxVyzXuztPjfLd3BV1PRAQGCMWWSHYhL0F8d5uHBDQ==",
295
+ "dev": true,
296
+ "license": "MIT",
297
+ "dependencies": {
298
+ "@babel/types": "^7.28.5"
299
+ },
300
+ "bin": {
301
+ "parser": "bin/babel-parser.js"
302
+ },
303
+ "engines": {
304
+ "node": ">=6.0.0"
305
+ }
306
+ },
307
+ "node_modules/@babel/types": {
308
+ "version": "7.28.5",
309
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.28.5.tgz",
310
+ "integrity": "sha512-qQ5m48eI/MFLQ5PxQj4PFaprjyCTLI37ElWMmNs0K8Lk3dVeOdNpB3ks8jc7yM5CDmVC73eMVk/trk3fgmrUpA==",
311
+ "dev": true,
312
+ "license": "MIT",
313
+ "dependencies": {
314
+ "@babel/helper-string-parser": "^7.27.1",
315
+ "@babel/helper-validator-identifier": "^7.28.5"
316
+ },
317
+ "engines": {
318
+ "node": ">=6.9.0"
319
+ }
320
+ },
321
+ "node_modules/@docsearch/css": {
322
+ "version": "3.8.2",
323
+ "resolved": "https://registry.npmjs.org/@docsearch/css/-/css-3.8.2.tgz",
324
+ "integrity": "sha512-y05ayQFyUmCXze79+56v/4HpycYF3uFqB78pLPrSV5ZKAlDuIAAJNhaRi8tTdRNXh05yxX/TyNnzD6LwSM89vQ==",
325
+ "dev": true,
326
+ "license": "MIT"
327
+ },
328
+ "node_modules/@docsearch/js": {
329
+ "version": "3.8.2",
330
+ "resolved": "https://registry.npmjs.org/@docsearch/js/-/js-3.8.2.tgz",
331
+ "integrity": "sha512-Q5wY66qHn0SwA7Taa0aDbHiJvaFJLOJyHmooQ7y8hlwwQLQ/5WwCcoX0g7ii04Qi2DJlHsd0XXzJ8Ypw9+9YmQ==",
332
+ "dev": true,
333
+ "license": "MIT",
334
+ "dependencies": {
335
+ "@docsearch/react": "3.8.2",
336
+ "preact": "^10.0.0"
337
+ }
338
+ },
339
+ "node_modules/@docsearch/react": {
340
+ "version": "3.8.2",
341
+ "resolved": "https://registry.npmjs.org/@docsearch/react/-/react-3.8.2.tgz",
342
+ "integrity": "sha512-xCRrJQlTt8N9GU0DG4ptwHRkfnSnD/YpdeaXe02iKfqs97TkZJv60yE+1eq/tjPcVnTW8dP5qLP7itifFVV5eg==",
343
+ "dev": true,
344
+ "license": "MIT",
345
+ "dependencies": {
346
+ "@algolia/autocomplete-core": "1.17.7",
347
+ "@algolia/autocomplete-preset-algolia": "1.17.7",
348
+ "@docsearch/css": "3.8.2",
349
+ "algoliasearch": "^5.14.2"
350
+ },
351
+ "peerDependencies": {
352
+ "@types/react": ">= 16.8.0 < 19.0.0",
353
+ "react": ">= 16.8.0 < 19.0.0",
354
+ "react-dom": ">= 16.8.0 < 19.0.0",
355
+ "search-insights": ">= 1 < 3"
356
+ },
357
+ "peerDependenciesMeta": {
358
+ "@types/react": {
359
+ "optional": true
360
+ },
361
+ "react": {
362
+ "optional": true
363
+ },
364
+ "react-dom": {
365
+ "optional": true
366
+ },
367
+ "search-insights": {
368
+ "optional": true
369
+ }
370
+ }
371
+ },
372
+ "node_modules/@esbuild/aix-ppc64": {
373
+ "version": "0.21.5",
374
+ "resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.21.5.tgz",
375
+ "integrity": "sha512-1SDgH6ZSPTlggy1yI6+Dbkiz8xzpHJEVAlF/AM1tHPLsf5STom9rwtjE4hKAF20FfXXNTFqEYXyJNWh1GiZedQ==",
376
+ "cpu": [
377
+ "ppc64"
378
+ ],
379
+ "dev": true,
380
+ "license": "MIT",
381
+ "optional": true,
382
+ "os": [
383
+ "aix"
384
+ ],
385
+ "engines": {
386
+ "node": ">=12"
387
+ }
388
+ },
389
+ "node_modules/@esbuild/android-arm": {
390
+ "version": "0.21.5",
391
+ "resolved": "https://registry.npmjs.org/@esbuild/android-arm/-/android-arm-0.21.5.tgz",
392
+ "integrity": "sha512-vCPvzSjpPHEi1siZdlvAlsPxXl7WbOVUBBAowWug4rJHb68Ox8KualB+1ocNvT5fjv6wpkX6o/iEpbDrf68zcg==",
393
+ "cpu": [
394
+ "arm"
395
+ ],
396
+ "dev": true,
397
+ "license": "MIT",
398
+ "optional": true,
399
+ "os": [
400
+ "android"
401
+ ],
402
+ "engines": {
403
+ "node": ">=12"
404
+ }
405
+ },
406
+ "node_modules/@esbuild/android-arm64": {
407
+ "version": "0.21.5",
408
+ "resolved": "https://registry.npmjs.org/@esbuild/android-arm64/-/android-arm64-0.21.5.tgz",
409
+ "integrity": "sha512-c0uX9VAUBQ7dTDCjq+wdyGLowMdtR/GoC2U5IYk/7D1H1JYC0qseD7+11iMP2mRLN9RcCMRcjC4YMclCzGwS/A==",
410
+ "cpu": [
411
+ "arm64"
412
+ ],
413
+ "dev": true,
414
+ "license": "MIT",
415
+ "optional": true,
416
+ "os": [
417
+ "android"
418
+ ],
419
+ "engines": {
420
+ "node": ">=12"
421
+ }
422
+ },
423
+ "node_modules/@esbuild/android-x64": {
424
+ "version": "0.21.5",
425
+ "resolved": "https://registry.npmjs.org/@esbuild/android-x64/-/android-x64-0.21.5.tgz",
426
+ "integrity": "sha512-D7aPRUUNHRBwHxzxRvp856rjUHRFW1SdQATKXH2hqA0kAZb1hKmi02OpYRacl0TxIGz/ZmXWlbZgjwWYaCakTA==",
427
+ "cpu": [
428
+ "x64"
429
+ ],
430
+ "dev": true,
431
+ "license": "MIT",
432
+ "optional": true,
433
+ "os": [
434
+ "android"
435
+ ],
436
+ "engines": {
437
+ "node": ">=12"
438
+ }
439
+ },
440
+ "node_modules/@esbuild/darwin-arm64": {
441
+ "version": "0.21.5",
442
+ "resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.21.5.tgz",
443
+ "integrity": "sha512-DwqXqZyuk5AiWWf3UfLiRDJ5EDd49zg6O9wclZ7kUMv2WRFr4HKjXp/5t8JZ11QbQfUS6/cRCKGwYhtNAY88kQ==",
444
+ "cpu": [
445
+ "arm64"
446
+ ],
447
+ "dev": true,
448
+ "license": "MIT",
449
+ "optional": true,
450
+ "os": [
451
+ "darwin"
452
+ ],
453
+ "engines": {
454
+ "node": ">=12"
455
+ }
456
+ },
457
+ "node_modules/@esbuild/darwin-x64": {
458
+ "version": "0.21.5",
459
+ "resolved": "https://registry.npmjs.org/@esbuild/darwin-x64/-/darwin-x64-0.21.5.tgz",
460
+ "integrity": "sha512-se/JjF8NlmKVG4kNIuyWMV/22ZaerB+qaSi5MdrXtd6R08kvs2qCN4C09miupktDitvh8jRFflwGFBQcxZRjbw==",
461
+ "cpu": [
462
+ "x64"
463
+ ],
464
+ "dev": true,
465
+ "license": "MIT",
466
+ "optional": true,
467
+ "os": [
468
+ "darwin"
469
+ ],
470
+ "engines": {
471
+ "node": ">=12"
472
+ }
473
+ },
474
+ "node_modules/@esbuild/freebsd-arm64": {
475
+ "version": "0.21.5",
476
+ "resolved": "https://registry.npmjs.org/@esbuild/freebsd-arm64/-/freebsd-arm64-0.21.5.tgz",
477
+ "integrity": "sha512-5JcRxxRDUJLX8JXp/wcBCy3pENnCgBR9bN6JsY4OmhfUtIHe3ZW0mawA7+RDAcMLrMIZaf03NlQiX9DGyB8h4g==",
478
+ "cpu": [
479
+ "arm64"
480
+ ],
481
+ "dev": true,
482
+ "license": "MIT",
483
+ "optional": true,
484
+ "os": [
485
+ "freebsd"
486
+ ],
487
+ "engines": {
488
+ "node": ">=12"
489
+ }
490
+ },
491
+ "node_modules/@esbuild/freebsd-x64": {
492
+ "version": "0.21.5",
493
+ "resolved": "https://registry.npmjs.org/@esbuild/freebsd-x64/-/freebsd-x64-0.21.5.tgz",
494
+ "integrity": "sha512-J95kNBj1zkbMXtHVH29bBriQygMXqoVQOQYA+ISs0/2l3T9/kj42ow2mpqerRBxDJnmkUDCaQT/dfNXWX/ZZCQ==",
495
+ "cpu": [
496
+ "x64"
497
+ ],
498
+ "dev": true,
499
+ "license": "MIT",
500
+ "optional": true,
501
+ "os": [
502
+ "freebsd"
503
+ ],
504
+ "engines": {
505
+ "node": ">=12"
506
+ }
507
+ },
508
+ "node_modules/@esbuild/linux-arm": {
509
+ "version": "0.21.5",
510
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-arm/-/linux-arm-0.21.5.tgz",
511
+ "integrity": "sha512-bPb5AHZtbeNGjCKVZ9UGqGwo8EUu4cLq68E95A53KlxAPRmUyYv2D6F0uUI65XisGOL1hBP5mTronbgo+0bFcA==",
512
+ "cpu": [
513
+ "arm"
514
+ ],
515
+ "dev": true,
516
+ "license": "MIT",
517
+ "optional": true,
518
+ "os": [
519
+ "linux"
520
+ ],
521
+ "engines": {
522
+ "node": ">=12"
523
+ }
524
+ },
525
+ "node_modules/@esbuild/linux-arm64": {
526
+ "version": "0.21.5",
527
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-arm64/-/linux-arm64-0.21.5.tgz",
528
+ "integrity": "sha512-ibKvmyYzKsBeX8d8I7MH/TMfWDXBF3db4qM6sy+7re0YXya+K1cem3on9XgdT2EQGMu4hQyZhan7TeQ8XkGp4Q==",
529
+ "cpu": [
530
+ "arm64"
531
+ ],
532
+ "dev": true,
533
+ "license": "MIT",
534
+ "optional": true,
535
+ "os": [
536
+ "linux"
537
+ ],
538
+ "engines": {
539
+ "node": ">=12"
540
+ }
541
+ },
542
+ "node_modules/@esbuild/linux-ia32": {
543
+ "version": "0.21.5",
544
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-ia32/-/linux-ia32-0.21.5.tgz",
545
+ "integrity": "sha512-YvjXDqLRqPDl2dvRODYmmhz4rPeVKYvppfGYKSNGdyZkA01046pLWyRKKI3ax8fbJoK5QbxblURkwK/MWY18Tg==",
546
+ "cpu": [
547
+ "ia32"
548
+ ],
549
+ "dev": true,
550
+ "license": "MIT",
551
+ "optional": true,
552
+ "os": [
553
+ "linux"
554
+ ],
555
+ "engines": {
556
+ "node": ">=12"
557
+ }
558
+ },
559
+ "node_modules/@esbuild/linux-loong64": {
560
+ "version": "0.21.5",
561
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-loong64/-/linux-loong64-0.21.5.tgz",
562
+ "integrity": "sha512-uHf1BmMG8qEvzdrzAqg2SIG/02+4/DHB6a9Kbya0XDvwDEKCoC8ZRWI5JJvNdUjtciBGFQ5PuBlpEOXQj+JQSg==",
563
+ "cpu": [
564
+ "loong64"
565
+ ],
566
+ "dev": true,
567
+ "license": "MIT",
568
+ "optional": true,
569
+ "os": [
570
+ "linux"
571
+ ],
572
+ "engines": {
573
+ "node": ">=12"
574
+ }
575
+ },
576
+ "node_modules/@esbuild/linux-mips64el": {
577
+ "version": "0.21.5",
578
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-mips64el/-/linux-mips64el-0.21.5.tgz",
579
+ "integrity": "sha512-IajOmO+KJK23bj52dFSNCMsz1QP1DqM6cwLUv3W1QwyxkyIWecfafnI555fvSGqEKwjMXVLokcV5ygHW5b3Jbg==",
580
+ "cpu": [
581
+ "mips64el"
582
+ ],
583
+ "dev": true,
584
+ "license": "MIT",
585
+ "optional": true,
586
+ "os": [
587
+ "linux"
588
+ ],
589
+ "engines": {
590
+ "node": ">=12"
591
+ }
592
+ },
593
+ "node_modules/@esbuild/linux-ppc64": {
594
+ "version": "0.21.5",
595
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-ppc64/-/linux-ppc64-0.21.5.tgz",
596
+ "integrity": "sha512-1hHV/Z4OEfMwpLO8rp7CvlhBDnjsC3CttJXIhBi+5Aj5r+MBvy4egg7wCbe//hSsT+RvDAG7s81tAvpL2XAE4w==",
+ "cpu": [
+ "ppc64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-riscv64": {
+ "version": "0.21.5",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-riscv64/-/linux-riscv64-0.21.5.tgz",
+ "integrity": "sha512-2HdXDMd9GMgTGrPWnJzP2ALSokE/0O5HhTUvWIbD3YdjME8JwvSCnNGBnTThKGEB91OZhzrJ4qIIxk/SBmyDDA==",
+ "cpu": [
+ "riscv64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-s390x": {
+ "version": "0.21.5",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-s390x/-/linux-s390x-0.21.5.tgz",
+ "integrity": "sha512-zus5sxzqBJD3eXxwvjN1yQkRepANgxE9lgOW2qLnmr8ikMTphkjgXu1HR01K4FJg8h1kEEDAqDcZQtbrRnB41A==",
+ "cpu": [
+ "s390x"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/linux-x64": {
+ "version": "0.21.5",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.21.5.tgz",
+ "integrity": "sha512-1rYdTpyv03iycF1+BhzrzQJCdOuAOtaqHTWJZCWvijKD2N5Xu0TtVC8/+1faWqcP9iBCWOmjmhoH94dH82BxPQ==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/netbsd-x64": {
+ "version": "0.21.5",
+ "resolved": "https://registry.npmjs.org/@esbuild/netbsd-x64/-/netbsd-x64-0.21.5.tgz",
+ "integrity": "sha512-Woi2MXzXjMULccIwMnLciyZH4nCIMpWQAs049KEeMvOcNADVxo0UBIQPfSmxB3CWKedngg7sWZdLvLczpe0tLg==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "netbsd"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/openbsd-x64": {
+ "version": "0.21.5",
+ "resolved": "https://registry.npmjs.org/@esbuild/openbsd-x64/-/openbsd-x64-0.21.5.tgz",
+ "integrity": "sha512-HLNNw99xsvx12lFBUwoT8EVCsSvRNDVxNpjZ7bPn947b8gJPzeHWyNVhFsaerc0n3TsbOINvRP2byTZ5LKezow==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openbsd"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/sunos-x64": {
+ "version": "0.21.5",
+ "resolved": "https://registry.npmjs.org/@esbuild/sunos-x64/-/sunos-x64-0.21.5.tgz",
+ "integrity": "sha512-6+gjmFpfy0BHU5Tpptkuh8+uw3mnrvgs+dSPQXQOv3ekbordwnzTVEb4qnIvQcYXq6gzkyTnoZ9dZG+D4garKg==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "sunos"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/win32-arm64": {
+ "version": "0.21.5",
+ "resolved": "https://registry.npmjs.org/@esbuild/win32-arm64/-/win32-arm64-0.21.5.tgz",
+ "integrity": "sha512-Z0gOTd75VvXqyq7nsl93zwahcTROgqvuAcYDUr+vOv8uHhNSKROyU961kgtCD1e95IqPKSQKH7tBTslnS3tA8A==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/win32-ia32": {
+ "version": "0.21.5",
+ "resolved": "https://registry.npmjs.org/@esbuild/win32-ia32/-/win32-ia32-0.21.5.tgz",
+ "integrity": "sha512-SWXFF1CL2RVNMaVs+BBClwtfZSvDgtL//G/smwAc5oVK/UPu2Gu9tIaRgFmYFFKrmg3SyAjSrElf0TiJ1v8fYA==",
+ "cpu": [
+ "ia32"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@esbuild/win32-x64": {
+ "version": "0.21.5",
+ "resolved": "https://registry.npmjs.org/@esbuild/win32-x64/-/win32-x64-0.21.5.tgz",
+ "integrity": "sha512-tQd/1efJuzPC6rCFwEvLtci/xNFcTZknmXs98FYDfGE4wP9ClFV98nyKrzJKVPMhdDnjzLhdUyMX4PsQAPjwIw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ],
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/@iconify-json/simple-icons": {
+ "version": "1.2.63",
+ "resolved": "https://registry.npmjs.org/@iconify-json/simple-icons/-/simple-icons-1.2.63.tgz",
+ "integrity": "sha512-xZl2UWCwE58VlqZ+pDPmaUhE2tq8MVSTJRr4/9nzzHlDdjJ0Ud1VxNXPrwTSgESKY29iCQw3S0r2nJTSNNngHw==",
+ "dev": true,
+ "license": "CC0-1.0",
+ "dependencies": {
+ "@iconify/types": "*"
+ }
+ },
+ "node_modules/@iconify/types": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/@iconify/types/-/types-2.0.0.tgz",
+ "integrity": "sha512-+wluvCrRhXrhyOmRDJ3q8mux9JkKy5SJ/v8ol2tu4FVjyYvtEzkc/3pK15ET6RKg4b4w4BmTk1+gsCUhf21Ykg==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@jridgewell/sourcemap-codec": {
+ "version": "1.5.5",
+ "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
+ "integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@rollup/rollup-android-arm-eabi": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.54.0.tgz",
+ "integrity": "sha512-OywsdRHrFvCdvsewAInDKCNyR3laPA2mc9bRYJ6LBp5IyvF3fvXbbNR0bSzHlZVFtn6E0xw2oZlyjg4rKCVcng==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ]
+ },
+ "node_modules/@rollup/rollup-android-arm64": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.54.0.tgz",
+ "integrity": "sha512-Skx39Uv+u7H224Af+bDgNinitlmHyQX1K/atIA32JP3JQw6hVODX5tkbi2zof/E69M1qH2UoN3Xdxgs90mmNYw==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "android"
+ ]
+ },
+ "node_modules/@rollup/rollup-darwin-arm64": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.54.0.tgz",
+ "integrity": "sha512-k43D4qta/+6Fq+nCDhhv9yP2HdeKeP56QrUUTW7E6PhZP1US6NDqpJj4MY0jBHlJivVJD5P8NxrjuobZBJTCRw==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ]
+ },
+ "node_modules/@rollup/rollup-darwin-x64": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.54.0.tgz",
+ "integrity": "sha512-cOo7biqwkpawslEfox5Vs8/qj83M/aZCSSNIWpVzfU2CYHa2G3P1UN5WF01RdTHSgCkri7XOlTdtk17BezlV3A==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "darwin"
+ ]
+ },
+ "node_modules/@rollup/rollup-freebsd-arm64": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.54.0.tgz",
+ "integrity": "sha512-miSvuFkmvFbgJ1BevMa4CPCFt5MPGw094knM64W9I0giUIMMmRYcGW/JWZDriaw/k1kOBtsWh1z6nIFV1vPNtA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "freebsd"
+ ]
+ },
+ "node_modules/@rollup/rollup-freebsd-x64": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.54.0.tgz",
+ "integrity": "sha512-KGXIs55+b/ZfZsq9aR026tmr/+7tq6VG6MsnrvF4H8VhwflTIuYh+LFUlIsRdQSgrgmtM3fVATzEAj4hBQlaqQ==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "freebsd"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-arm-gnueabihf": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.54.0.tgz",
+ "integrity": "sha512-EHMUcDwhtdRGlXZsGSIuXSYwD5kOT9NVnx9sqzYiwAc91wfYOE1g1djOEDseZJKKqtHAHGwnGPQu3kytmfaXLQ==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-arm-musleabihf": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.54.0.tgz",
+ "integrity": "sha512-+pBrqEjaakN2ySv5RVrj/qLytYhPKEUwk+e3SFU5jTLHIcAtqh2rLrd/OkbNuHJpsBgxsD8ccJt5ga/SeG0JmA==",
+ "cpu": [
+ "arm"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-arm64-gnu": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.54.0.tgz",
+ "integrity": "sha512-NSqc7rE9wuUaRBsBp5ckQ5CVz5aIRKCwsoa6WMF7G01sX3/qHUw/z4pv+D+ahL1EIKy6Enpcnz1RY8pf7bjwng==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-arm64-musl": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.54.0.tgz",
+ "integrity": "sha512-gr5vDbg3Bakga5kbdpqx81m2n9IX8M6gIMlQQIXiLTNeQW6CucvuInJ91EuCJ/JYvc+rcLLsDFcfAD1K7fMofg==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-loong64-gnu": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.54.0.tgz",
+ "integrity": "sha512-gsrtB1NA3ZYj2vq0Rzkylo9ylCtW/PhpLEivlgWe0bpgtX5+9j9EZa0wtZiCjgu6zmSeZWyI/e2YRX1URozpIw==",
+ "cpu": [
+ "loong64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-ppc64-gnu": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.54.0.tgz",
+ "integrity": "sha512-y3qNOfTBStmFNq+t4s7Tmc9hW2ENtPg8FeUD/VShI7rKxNW7O4fFeaYbMsd3tpFlIg1Q8IapFgy7Q9i2BqeBvA==",
+ "cpu": [
+ "ppc64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-riscv64-gnu": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.54.0.tgz",
+ "integrity": "sha512-89sepv7h2lIVPsFma8iwmccN7Yjjtgz0Rj/Ou6fEqg3HDhpCa+Et+YSufy27i6b0Wav69Qv4WBNl3Rs6pwhebQ==",
+ "cpu": [
+ "riscv64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-riscv64-musl": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.54.0.tgz",
+ "integrity": "sha512-ZcU77ieh0M2Q8Ur7D5X7KvK+UxbXeDHwiOt/CPSBTI1fBmeDMivW0dPkdqkT4rOgDjrDDBUed9x4EgraIKoR2A==",
+ "cpu": [
+ "riscv64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-s390x-gnu": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.54.0.tgz",
+ "integrity": "sha512-2AdWy5RdDF5+4YfG/YesGDDtbyJlC9LHmL6rZw6FurBJ5n4vFGupsOBGfwMRjBYH7qRQowT8D/U4LoSvVwOhSQ==",
+ "cpu": [
+ "s390x"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-x64-gnu": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.54.0.tgz",
+ "integrity": "sha512-WGt5J8Ij/rvyqpFexxk3ffKqqbLf9AqrTBbWDk7ApGUzaIs6V+s2s84kAxklFwmMF/vBNGrVdYgbblCOFFezMQ==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-linux-x64-musl": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.54.0.tgz",
+ "integrity": "sha512-JzQmb38ATzHjxlPHuTH6tE7ojnMKM2kYNzt44LO/jJi8BpceEC8QuXYA908n8r3CNuG/B3BV8VR3Hi1rYtmPiw==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "linux"
+ ]
+ },
+ "node_modules/@rollup/rollup-openharmony-arm64": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.54.0.tgz",
+ "integrity": "sha512-huT3fd0iC7jigGh7n3q/+lfPcXxBi+om/Rs3yiFxjvSxbSB6aohDFXbWvlspaqjeOh+hx7DDHS+5Es5qRkWkZg==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openharmony"
+ ]
+ },
+ "node_modules/@rollup/rollup-win32-arm64-msvc": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.54.0.tgz",
+ "integrity": "sha512-c2V0W1bsKIKfbLMBu/WGBz6Yci8nJ/ZJdheE0EwB73N3MvHYKiKGs3mVilX4Gs70eGeDaMqEob25Tw2Gb9Nqyw==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ]
+ },
+ "node_modules/@rollup/rollup-win32-ia32-msvc": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.54.0.tgz",
+ "integrity": "sha512-woEHgqQqDCkAzrDhvDipnSirm5vxUXtSKDYTVpZG3nUdW/VVB5VdCYA2iReSj/u3yCZzXID4kuKG7OynPnB3WQ==",
+ "cpu": [
+ "ia32"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ]
+ },
+ "node_modules/@rollup/rollup-win32-x64-gnu": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-gnu/-/rollup-win32-x64-gnu-4.54.0.tgz",
+ "integrity": "sha512-dzAc53LOuFvHwbCEOS0rPbXp6SIhAf2txMP5p6mGyOXXw5mWY8NGGbPMPrs4P1WItkfApDathBj/NzMLUZ9rtQ==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ]
+ },
+ "node_modules/@rollup/rollup-win32-x64-msvc": {
+ "version": "4.54.0",
+ "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.54.0.tgz",
+ "integrity": "sha512-hYT5d3YNdSh3mbCU1gwQyPgQd3T2ne0A3KG8KSBdav5TiBg6eInVmV+TeR5uHufiIgSFg0XsOWGW5/RhNcSvPg==",
+ "cpu": [
+ "x64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "win32"
+ ]
+ },
+ "node_modules/@shikijs/core": {
+ "version": "2.5.0",
+ "resolved": "https://registry.npmjs.org/@shikijs/core/-/core-2.5.0.tgz",
+ "integrity": "sha512-uu/8RExTKtavlpH7XqnVYBrfBkUc20ngXiX9NSrBhOVZYv/7XQRKUyhtkeflY5QsxC0GbJThCerruZfsUaSldg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/engine-javascript": "2.5.0",
+ "@shikijs/engine-oniguruma": "2.5.0",
+ "@shikijs/types": "2.5.0",
+ "@shikijs/vscode-textmate": "^10.0.2",
+ "@types/hast": "^3.0.4",
+ "hast-util-to-html": "^9.0.4"
+ }
+ },
+ "node_modules/@shikijs/engine-javascript": {
+ "version": "2.5.0",
+ "resolved": "https://registry.npmjs.org/@shikijs/engine-javascript/-/engine-javascript-2.5.0.tgz",
+ "integrity": "sha512-VjnOpnQf8WuCEZtNUdjjwGUbtAVKuZkVQ/5cHy/tojVVRIRtlWMYVjyWhxOmIq05AlSOv72z7hRNRGVBgQOl0w==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/types": "2.5.0",
+ "@shikijs/vscode-textmate": "^10.0.2",
+ "oniguruma-to-es": "^3.1.0"
+ }
+ },
+ "node_modules/@shikijs/engine-oniguruma": {
+ "version": "2.5.0",
+ "resolved": "https://registry.npmjs.org/@shikijs/engine-oniguruma/-/engine-oniguruma-2.5.0.tgz",
+ "integrity": "sha512-pGd1wRATzbo/uatrCIILlAdFVKdxImWJGQ5rFiB5VZi2ve5xj3Ax9jny8QvkaV93btQEwR/rSz5ERFpC5mKNIw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/types": "2.5.0",
+ "@shikijs/vscode-textmate": "^10.0.2"
+ }
+ },
+ "node_modules/@shikijs/langs": {
+ "version": "2.5.0",
+ "resolved": "https://registry.npmjs.org/@shikijs/langs/-/langs-2.5.0.tgz",
+ "integrity": "sha512-Qfrrt5OsNH5R+5tJ/3uYBBZv3SuGmnRPejV9IlIbFH3HTGLDlkqgHymAlzklVmKBjAaVmkPkyikAV/sQ1wSL+w==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/types": "2.5.0"
+ }
+ },
+ "node_modules/@shikijs/themes": {
+ "version": "2.5.0",
+ "resolved": "https://registry.npmjs.org/@shikijs/themes/-/themes-2.5.0.tgz",
+ "integrity": "sha512-wGrk+R8tJnO0VMzmUExHR+QdSaPUl/NKs+a4cQQRWyoc3YFbUzuLEi/KWK1hj+8BfHRKm2jNhhJck1dfstJpiw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/types": "2.5.0"
+ }
+ },
+ "node_modules/@shikijs/transformers": {
+ "version": "2.5.0",
+ "resolved": "https://registry.npmjs.org/@shikijs/transformers/-/transformers-2.5.0.tgz",
+ "integrity": "sha512-SI494W5X60CaUwgi8u4q4m4s3YAFSxln3tzNjOSYqq54wlVgz0/NbbXEb3mdLbqMBztcmS7bVTaEd2w0qMmfeg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/core": "2.5.0",
+ "@shikijs/types": "2.5.0"
+ }
+ },
+ "node_modules/@shikijs/types": {
+ "version": "2.5.0",
+ "resolved": "https://registry.npmjs.org/@shikijs/types/-/types-2.5.0.tgz",
+ "integrity": "sha512-ygl5yhxki9ZLNuNpPitBWvcy9fsSKKaRuO4BAlMyagszQidxcpLAr0qiW/q43DtSIDxO6hEbtYLiFZNXO/hdGw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/vscode-textmate": "^10.0.2",
+ "@types/hast": "^3.0.4"
+ }
+ },
+ "node_modules/@shikijs/vscode-textmate": {
+ "version": "10.0.2",
+ "resolved": "https://registry.npmjs.org/@shikijs/vscode-textmate/-/vscode-textmate-10.0.2.tgz",
+ "integrity": "sha512-83yeghZ2xxin3Nj8z1NMd/NCuca+gsYXswywDy5bHvwlWL8tpTQmzGeUuHd9FC3E/SBEMvzJRwWEOz5gGes9Qg==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@types/estree": {
+ "version": "1.0.8",
+ "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz",
+ "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@types/hast": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/@types/hast/-/hast-3.0.4.tgz",
+ "integrity": "sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@types/unist": "*"
+ }
+ },
+ "node_modules/@types/linkify-it": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/@types/linkify-it/-/linkify-it-5.0.0.tgz",
+ "integrity": "sha512-sVDA58zAw4eWAffKOaQH5/5j3XeayukzDk+ewSsnv3p4yJEZHCCzMDiZM8e0OUrRvmpGZ85jf4yDHkHsgBNr9Q==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@types/markdown-it": {
+ "version": "14.1.2",
+ "resolved": "https://registry.npmjs.org/@types/markdown-it/-/markdown-it-14.1.2.tgz",
+ "integrity": "sha512-promo4eFwuiW+TfGxhi+0x3czqTYJkG8qB17ZUJiVF10Xm7NLVRSLUsfRTU/6h1e24VvRnXCx+hG7li58lkzog==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@types/linkify-it": "^5",
+ "@types/mdurl": "^2"
+ }
+ },
+ "node_modules/@types/mdast": {
+ "version": "4.0.4",
+ "resolved": "https://registry.npmjs.org/@types/mdast/-/mdast-4.0.4.tgz",
+ "integrity": "sha512-kGaNbPh1k7AFzgpud/gMdvIm5xuECykRR+JnWKQno9TAXVa6WIVCGTPvYGekIDL4uwCZQSYbUxNBSb1aUo79oA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@types/unist": "*"
+ }
+ },
+ "node_modules/@types/mdurl": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/@types/mdurl/-/mdurl-2.0.0.tgz",
+ "integrity": "sha512-RGdgjQUZba5p6QEFAVx2OGb8rQDL/cPRG7GiedRzMcJ1tYnUANBncjbSB1NRGwbvjcPeikRABz2nshyPk1bhWg==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@types/unist": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/@types/unist/-/unist-3.0.3.tgz",
+ "integrity": "sha512-ko/gIFJRv177XgZsZcBwnqJN5x/Gien8qNOn0D5bQU/zAzVf9Zt3BlcUiLqhV9y4ARk0GbT3tnUiPNgnTXzc/Q==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@types/web-bluetooth": {
+ "version": "0.0.21",
+ "resolved": "https://registry.npmjs.org/@types/web-bluetooth/-/web-bluetooth-0.0.21.tgz",
+ "integrity": "sha512-oIQLCGWtcFZy2JW77j9k8nHzAOpqMHLQejDA48XXMWH6tjCQHz5RCFz1bzsmROyL6PUm+LLnUiI4BCn221inxA==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@ungap/structured-clone": {
+ "version": "1.3.0",
+ "resolved": "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.3.0.tgz",
+ "integrity": "sha512-WmoN8qaIAo7WTYWbAZuG8PYEhn5fkz7dZrqTBZ7dtt//lL2Gwms1IcnQ5yHqjDfX8Ft5j4YzDM23f87zBfDe9g==",
+ "dev": true,
+ "license": "ISC"
+ },
+ "node_modules/@vitejs/plugin-vue": {
+ "version": "5.2.4",
+ "resolved": "https://registry.npmjs.org/@vitejs/plugin-vue/-/plugin-vue-5.2.4.tgz",
+ "integrity": "sha512-7Yx/SXSOcQq5HiiV3orevHUFn+pmMB4cgbEkDYgnkUWb0WfeQ/wa2yFv6D5ICiCQOVpjA7vYDXrC7AGO8yjDHA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": "^18.0.0 || >=20.0.0"
+ },
+ "peerDependencies": {
+ "vite": "^5.0.0 || ^6.0.0",
+ "vue": "^3.2.25"
+ }
+ },
+ "node_modules/@vue/compiler-core": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/@vue/compiler-core/-/compiler-core-3.5.26.tgz",
+ "integrity": "sha512-vXyI5GMfuoBCnv5ucIT7jhHKl55Y477yxP6fc4eUswjP8FG3FFVFd41eNDArR+Uk3QKn2Z85NavjaxLxOC19/w==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/parser": "^7.28.5",
+ "@vue/shared": "3.5.26",
+ "entities": "^7.0.0",
+ "estree-walker": "^2.0.2",
+ "source-map-js": "^1.2.1"
+ }
+ },
+ "node_modules/@vue/compiler-dom": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/@vue/compiler-dom/-/compiler-dom-3.5.26.tgz",
+ "integrity": "sha512-y1Tcd3eXs834QjswshSilCBnKGeQjQXB6PqFn/1nxcQw4pmG42G8lwz+FZPAZAby6gZeHSt/8LMPfZ4Rb+Bd/A==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@vue/compiler-core": "3.5.26",
+ "@vue/shared": "3.5.26"
+ }
+ },
+ "node_modules/@vue/compiler-sfc": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/@vue/compiler-sfc/-/compiler-sfc-3.5.26.tgz",
+ "integrity": "sha512-egp69qDTSEZcf4bGOSsprUr4xI73wfrY5oRs6GSgXFTiHrWj4Y3X5Ydtip9QMqiCMCPVwLglB9GBxXtTadJ3mA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@babel/parser": "^7.28.5",
+ "@vue/compiler-core": "3.5.26",
+ "@vue/compiler-dom": "3.5.26",
+ "@vue/compiler-ssr": "3.5.26",
+ "@vue/shared": "3.5.26",
+ "estree-walker": "^2.0.2",
+ "magic-string": "^0.30.21",
+ "postcss": "^8.5.6",
+ "source-map-js": "^1.2.1"
+ }
+ },
+ "node_modules/@vue/compiler-ssr": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/@vue/compiler-ssr/-/compiler-ssr-3.5.26.tgz",
+ "integrity": "sha512-lZT9/Y0nSIRUPVvapFJEVDbEXruZh2IYHMk2zTtEgJSlP5gVOqeWXH54xDKAaFS4rTnDeDBQUYDtxKyoW9FwDw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@vue/compiler-dom": "3.5.26",
+ "@vue/shared": "3.5.26"
+ }
+ },
+ "node_modules/@vue/devtools-api": {
+ "version": "7.7.9",
+ "resolved": "https://registry.npmjs.org/@vue/devtools-api/-/devtools-api-7.7.9.tgz",
+ "integrity": "sha512-kIE8wvwlcZ6TJTbNeU2HQNtaxLx3a84aotTITUuL/4bzfPxzajGBOoqjMhwZJ8L9qFYDU/lAYMEEm11dnZOD6g==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@vue/devtools-kit": "^7.7.9"
+ }
+ },
+ "node_modules/@vue/devtools-kit": {
+ "version": "7.7.9",
+ "resolved": "https://registry.npmjs.org/@vue/devtools-kit/-/devtools-kit-7.7.9.tgz",
+ "integrity": "sha512-PyQ6odHSgiDVd4hnTP+aDk2X4gl2HmLDfiyEnn3/oV+ckFDuswRs4IbBT7vacMuGdwY/XemxBoh302ctbsptuA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@vue/devtools-shared": "^7.7.9",
+ "birpc": "^2.3.0",
+ "hookable": "^5.5.3",
+ "mitt": "^3.0.1",
+ "perfect-debounce": "^1.0.0",
+ "speakingurl": "^14.0.1",
+ "superjson": "^2.2.2"
+ }
+ },
+ "node_modules/@vue/devtools-shared": {
+ "version": "7.7.9",
+ "resolved": "https://registry.npmjs.org/@vue/devtools-shared/-/devtools-shared-7.7.9.tgz",
+ "integrity": "sha512-iWAb0v2WYf0QWmxCGy0seZNDPdO3Sp5+u78ORnyeonS6MT4PC7VPrryX2BpMJrwlDeaZ6BD4vP4XKjK0SZqaeA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "rfdc": "^1.4.1"
+ }
+ },
+ "node_modules/@vue/reactivity": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/@vue/reactivity/-/reactivity-3.5.26.tgz",
+ "integrity": "sha512-9EnYB1/DIiUYYnzlnUBgwU32NNvLp/nhxLXeWRhHUEeWNTn1ECxX8aGO7RTXeX6PPcxe3LLuNBFoJbV4QZ+CFQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@vue/shared": "3.5.26"
+ }
+ },
+ "node_modules/@vue/runtime-core": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/@vue/runtime-core/-/runtime-core-3.5.26.tgz",
+ "integrity": "sha512-xJWM9KH1kd201w5DvMDOwDHYhrdPTrAatn56oB/LRG4plEQeZRQLw0Bpwih9KYoqmzaxF0OKSn6swzYi84e1/Q==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@vue/reactivity": "3.5.26",
+ "@vue/shared": "3.5.26"
+ }
+ },
+ "node_modules/@vue/runtime-dom": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/@vue/runtime-dom/-/runtime-dom-3.5.26.tgz",
+ "integrity": "sha512-XLLd/+4sPC2ZkN/6+V4O4gjJu6kSDbHAChvsyWgm1oGbdSO3efvGYnm25yCjtFm/K7rrSDvSfPDgN1pHgS4VNQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@vue/reactivity": "3.5.26",
+ "@vue/runtime-core": "3.5.26",
+ "@vue/shared": "3.5.26",
+ "csstype": "^3.2.3"
+ }
+ },
+ "node_modules/@vue/server-renderer": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/@vue/server-renderer/-/server-renderer-3.5.26.tgz",
+ "integrity": "sha512-TYKLXmrwWKSodyVuO1WAubucd+1XlLg4set0YoV+Hu8Lo79mp/YMwWV5mC5FgtsDxX3qo1ONrxFaTP1OQgy1uA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@vue/compiler-ssr": "3.5.26",
+ "@vue/shared": "3.5.26"
+ },
+ "peerDependencies": {
+ "vue": "3.5.26"
+ }
+ },
+ "node_modules/@vue/shared": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/@vue/shared/-/shared-3.5.26.tgz",
+ "integrity": "sha512-7Z6/y3uFI5PRoKeorTOSXKcDj0MSasfNNltcslbFrPpcw6aXRUALq4IfJlaTRspiWIUOEZbrpM+iQGmCOiWe4A==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@vueuse/core": {
+ "version": "12.8.2",
+ "resolved": "https://registry.npmjs.org/@vueuse/core/-/core-12.8.2.tgz",
+ "integrity": "sha512-HbvCmZdzAu3VGi/pWYm5Ut+Kd9mn1ZHnn4L5G8kOQTPs/IwIAmJoBrmYk2ckLArgMXZj0AW3n5CAejLUO+PhdQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@types/web-bluetooth": "^0.0.21",
+ "@vueuse/metadata": "12.8.2",
+ "@vueuse/shared": "12.8.2",
+ "vue": "^3.5.13"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/antfu"
+ }
+ },
+ "node_modules/@vueuse/integrations": {
+ "version": "12.8.2",
+ "resolved": "https://registry.npmjs.org/@vueuse/integrations/-/integrations-12.8.2.tgz",
+ "integrity": "sha512-fbGYivgK5uBTRt7p5F3zy6VrETlV9RtZjBqd1/HxGdjdckBgBM4ugP8LHpjolqTj14TXTxSK1ZfgPbHYyGuH7g==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@vueuse/core": "12.8.2",
+ "@vueuse/shared": "12.8.2",
+ "vue": "^3.5.13"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/antfu"
+ },
+ "peerDependencies": {
+ "async-validator": "^4",
+ "axios": "^1",
+ "change-case": "^5",
+ "drauu": "^0.4",
+ "focus-trap": "^7",
+ "fuse.js": "^7",
+ "idb-keyval": "^6",
+ "jwt-decode": "^4",
+ "nprogress": "^0.2",
+ "qrcode": "^1.5",
+ "sortablejs": "^1",
+ "universal-cookie": "^7"
+ },
+ "peerDependenciesMeta": {
+ "async-validator": {
+ "optional": true
+ },
+ "axios": {
+ "optional": true
+ },
+ "change-case": {
+ "optional": true
+ },
+ "drauu": {
+ "optional": true
+ },
+ "focus-trap": {
+ "optional": true
+ },
+ "fuse.js": {
+ "optional": true
+ },
+ "idb-keyval": {
+ "optional": true
+ },
+ "jwt-decode": {
+ "optional": true
+ },
+ "nprogress": {
+ "optional": true
+ },
+ "qrcode": {
+ "optional": true
+ },
+ "sortablejs": {
+ "optional": true
+ },
+ "universal-cookie": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@vueuse/metadata": {
+ "version": "12.8.2",
+ "resolved": "https://registry.npmjs.org/@vueuse/metadata/-/metadata-12.8.2.tgz",
+ "integrity": "sha512-rAyLGEuoBJ/Il5AmFHiziCPdQzRt88VxR+Y/A/QhJ1EWtWqPBBAxTAFaSkviwEuOEZNtW8pvkPgoCZQ+HxqW1A==",
+ "dev": true,
+ "license": "MIT",
+ "funding": {
+ "url": "https://github.com/sponsors/antfu"
+ }
+ },
+ "node_modules/@vueuse/shared": {
+ "version": "12.8.2",
+ "resolved": "https://registry.npmjs.org/@vueuse/shared/-/shared-12.8.2.tgz",
+ "integrity": "sha512-dznP38YzxZoNloI0qpEfpkms8knDtaoQ6Y/sfS0L7Yki4zh40LFHEhur0odJC6xTHG5dxWVPiUWBXn+wCG2s5w==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "vue": "^3.5.13"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/antfu"
+ }
+ },
+ "node_modules/algoliasearch": {
+ "version": "5.46.2",
1522
+ "resolved": "https://registry.npmjs.org/algoliasearch/-/algoliasearch-5.46.2.tgz",
1523
+ "integrity": "sha512-qqAXW9QvKf2tTyhpDA4qXv1IfBwD2eduSW6tUEBFIfCeE9gn9HQ9I5+MaKoenRuHrzk5sQoNh1/iof8mY7uD6Q==",
1524
+ "dev": true,
1525
+ "license": "MIT",
1526
+ "peer": true,
1527
+ "dependencies": {
1528
+ "@algolia/abtesting": "1.12.2",
1529
+ "@algolia/client-abtesting": "5.46.2",
1530
+ "@algolia/client-analytics": "5.46.2",
1531
+ "@algolia/client-common": "5.46.2",
1532
+ "@algolia/client-insights": "5.46.2",
1533
+ "@algolia/client-personalization": "5.46.2",
1534
+ "@algolia/client-query-suggestions": "5.46.2",
1535
+ "@algolia/client-search": "5.46.2",
1536
+ "@algolia/ingestion": "1.46.2",
1537
+ "@algolia/monitoring": "1.46.2",
1538
+ "@algolia/recommend": "5.46.2",
1539
+ "@algolia/requester-browser-xhr": "5.46.2",
1540
+ "@algolia/requester-fetch": "5.46.2",
1541
+ "@algolia/requester-node-http": "5.46.2"
1542
+ },
1543
+ "engines": {
1544
+ "node": ">= 14.0.0"
1545
+ }
1546
+ },
1547
+ "node_modules/birpc": {
1548
+ "version": "2.9.0",
1549
+ "resolved": "https://registry.npmjs.org/birpc/-/birpc-2.9.0.tgz",
1550
+ "integrity": "sha512-KrayHS5pBi69Xi9JmvoqrIgYGDkD6mcSe/i6YKi3w5kekCLzrX4+nawcXqrj2tIp50Kw/mT/s3p+GVK0A0sKxw==",
1551
+ "dev": true,
1552
+ "license": "MIT",
1553
+ "funding": {
1554
+ "url": "https://github.com/sponsors/antfu"
1555
+ }
1556
+ },
1557
+ "node_modules/ccount": {
1558
+ "version": "2.0.1",
1559
+ "resolved": "https://registry.npmjs.org/ccount/-/ccount-2.0.1.tgz",
1560
+ "integrity": "sha512-eyrF0jiFpY+3drT6383f1qhkbGsLSifNAjA61IUjZjmLCWjItY6LB9ft9YhoDgwfmclB2zhu51Lc7+95b8NRAg==",
1561
+ "dev": true,
1562
+ "license": "MIT",
1563
+ "funding": {
1564
+ "type": "github",
1565
+ "url": "https://github.com/sponsors/wooorm"
1566
+ }
1567
+ },
1568
+ "node_modules/character-entities-html4": {
1569
+ "version": "2.1.0",
1570
+ "resolved": "https://registry.npmjs.org/character-entities-html4/-/character-entities-html4-2.1.0.tgz",
1571
+ "integrity": "sha512-1v7fgQRj6hnSwFpq1Eu0ynr/CDEw0rXo2B61qXrLNdHZmPKgb7fqS1a2JwF0rISo9q77jDI8VMEHoApn8qDoZA==",
1572
+ "dev": true,
1573
+ "license": "MIT",
1574
+ "funding": {
1575
+ "type": "github",
1576
+ "url": "https://github.com/sponsors/wooorm"
1577
+ }
1578
+ },
1579
+ "node_modules/character-entities-legacy": {
1580
+ "version": "3.0.0",
1581
+ "resolved": "https://registry.npmjs.org/character-entities-legacy/-/character-entities-legacy-3.0.0.tgz",
1582
+ "integrity": "sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ==",
1583
+ "dev": true,
1584
+ "license": "MIT",
1585
+ "funding": {
1586
+ "type": "github",
1587
+ "url": "https://github.com/sponsors/wooorm"
1588
+ }
1589
+ },
1590
+ "node_modules/comma-separated-tokens": {
1591
+ "version": "2.0.3",
1592
+ "resolved": "https://registry.npmjs.org/comma-separated-tokens/-/comma-separated-tokens-2.0.3.tgz",
1593
+ "integrity": "sha512-Fu4hJdvzeylCfQPp9SGWidpzrMs7tTrlu6Vb8XGaRGck8QSNZJJp538Wrb60Lax4fPwR64ViY468OIUTbRlGZg==",
1594
+ "dev": true,
1595
+ "license": "MIT",
1596
+ "funding": {
1597
+ "type": "github",
1598
+ "url": "https://github.com/sponsors/wooorm"
1599
+ }
1600
+ },
1601
+ "node_modules/copy-anything": {
1602
+ "version": "4.0.5",
1603
+ "resolved": "https://registry.npmjs.org/copy-anything/-/copy-anything-4.0.5.tgz",
1604
+ "integrity": "sha512-7Vv6asjS4gMOuILabD3l739tsaxFQmC+a7pLZm02zyvs8p977bL3zEgq3yDk5rn9B0PbYgIv++jmHcuUab4RhA==",
1605
+ "dev": true,
1606
+ "license": "MIT",
1607
+ "dependencies": {
1608
+ "is-what": "^5.2.0"
1609
+ },
1610
+ "engines": {
1611
+ "node": ">=18"
1612
+ },
1613
+ "funding": {
1614
+ "url": "https://github.com/sponsors/mesqueeb"
1615
+ }
1616
+ },
1617
+ "node_modules/csstype": {
1618
+ "version": "3.2.3",
1619
+ "resolved": "https://registry.npmjs.org/csstype/-/csstype-3.2.3.tgz",
1620
+ "integrity": "sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==",
1621
+ "dev": true,
1622
+ "license": "MIT"
1623
+ },
1624
+ "node_modules/dequal": {
1625
+ "version": "2.0.3",
1626
+ "resolved": "https://registry.npmjs.org/dequal/-/dequal-2.0.3.tgz",
1627
+ "integrity": "sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA==",
1628
+ "dev": true,
1629
+ "license": "MIT",
1630
+ "engines": {
1631
+ "node": ">=6"
1632
+ }
1633
+ },
1634
+ "node_modules/devlop": {
1635
+ "version": "1.1.0",
1636
+ "resolved": "https://registry.npmjs.org/devlop/-/devlop-1.1.0.tgz",
1637
+ "integrity": "sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA==",
1638
+ "dev": true,
1639
+ "license": "MIT",
1640
+ "dependencies": {
1641
+ "dequal": "^2.0.0"
1642
+ },
1643
+ "funding": {
1644
+ "type": "github",
1645
+ "url": "https://github.com/sponsors/wooorm"
1646
+ }
1647
+ },
1648
+ "node_modules/emoji-regex-xs": {
1649
+ "version": "1.0.0",
1650
+ "resolved": "https://registry.npmjs.org/emoji-regex-xs/-/emoji-regex-xs-1.0.0.tgz",
1651
+ "integrity": "sha512-LRlerrMYoIDrT6jgpeZ2YYl/L8EulRTt5hQcYjy5AInh7HWXKimpqx68aknBFpGL2+/IcogTcaydJEgaTmOpDg==",
1652
+ "dev": true,
1653
+ "license": "MIT"
1654
+ },
1655
+ "node_modules/entities": {
1656
+ "version": "7.0.0",
1657
+ "resolved": "https://registry.npmjs.org/entities/-/entities-7.0.0.tgz",
1658
+ "integrity": "sha512-FDWG5cmEYf2Z00IkYRhbFrwIwvdFKH07uV8dvNy0omp/Qb1xcyCWp2UDtcwJF4QZZvk0sLudP6/hAu42TaqVhQ==",
1659
+ "dev": true,
1660
+ "license": "BSD-2-Clause",
1661
+ "engines": {
1662
+ "node": ">=0.12"
1663
+ },
1664
+ "funding": {
1665
+ "url": "https://github.com/fb55/entities?sponsor=1"
1666
+ }
1667
+ },
1668
+ "node_modules/esbuild": {
1669
+ "version": "0.21.5",
1670
+ "resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.21.5.tgz",
1671
+ "integrity": "sha512-mg3OPMV4hXywwpoDxu3Qda5xCKQi+vCTZq8S9J/EpkhB2HzKXq4SNFZE3+NK93JYxc8VMSep+lOUSC/RVKaBqw==",
1672
+ "dev": true,
1673
+ "hasInstallScript": true,
1674
+ "license": "MIT",
1675
+ "bin": {
1676
+ "esbuild": "bin/esbuild"
1677
+ },
1678
+ "engines": {
1679
+ "node": ">=12"
1680
+ },
1681
+ "optionalDependencies": {
1682
+ "@esbuild/aix-ppc64": "0.21.5",
1683
+ "@esbuild/android-arm": "0.21.5",
1684
+ "@esbuild/android-arm64": "0.21.5",
1685
+ "@esbuild/android-x64": "0.21.5",
1686
+ "@esbuild/darwin-arm64": "0.21.5",
1687
+ "@esbuild/darwin-x64": "0.21.5",
1688
+ "@esbuild/freebsd-arm64": "0.21.5",
1689
+ "@esbuild/freebsd-x64": "0.21.5",
1690
+ "@esbuild/linux-arm": "0.21.5",
1691
+ "@esbuild/linux-arm64": "0.21.5",
1692
+ "@esbuild/linux-ia32": "0.21.5",
1693
+ "@esbuild/linux-loong64": "0.21.5",
1694
+ "@esbuild/linux-mips64el": "0.21.5",
1695
+ "@esbuild/linux-ppc64": "0.21.5",
1696
+ "@esbuild/linux-riscv64": "0.21.5",
1697
+ "@esbuild/linux-s390x": "0.21.5",
1698
+ "@esbuild/linux-x64": "0.21.5",
1699
+ "@esbuild/netbsd-x64": "0.21.5",
1700
+ "@esbuild/openbsd-x64": "0.21.5",
1701
+ "@esbuild/sunos-x64": "0.21.5",
1702
+ "@esbuild/win32-arm64": "0.21.5",
1703
+ "@esbuild/win32-ia32": "0.21.5",
1704
+ "@esbuild/win32-x64": "0.21.5"
1705
+ }
1706
+ },
1707
+ "node_modules/estree-walker": {
1708
+ "version": "2.0.2",
1709
+ "resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-2.0.2.tgz",
1710
+ "integrity": "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==",
1711
+ "dev": true,
1712
+ "license": "MIT"
1713
+ },
1714
+ "node_modules/focus-trap": {
1715
+ "version": "7.7.0",
1716
+ "resolved": "https://registry.npmjs.org/focus-trap/-/focus-trap-7.7.0.tgz",
1717
+ "integrity": "sha512-DJJDHpEgoSbP8ZE1MNeU2IzCpfFyFdNZZRilqmfH2XiQsPK6PtD8AfJqWzEBudUQB2yHwZc5iq54rjTaGQ+ljw==",
1718
+ "dev": true,
1719
+ "license": "MIT",
1720
+ "peer": true,
1721
+ "dependencies": {
1722
+ "tabbable": "^6.3.0"
1723
+ }
1724
+ },
1725
+ "node_modules/fsevents": {
1726
+ "version": "2.3.3",
1727
+ "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz",
1728
+ "integrity": "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==",
1729
+ "dev": true,
1730
+ "hasInstallScript": true,
1731
+ "license": "MIT",
1732
+ "optional": true,
1733
+ "os": [
1734
+ "darwin"
1735
+ ],
1736
+ "engines": {
1737
+ "node": "^8.16.0 || ^10.6.0 || >=11.0.0"
1738
+ }
1739
+ },
1740
+ "node_modules/hast-util-to-html": {
1741
+ "version": "9.0.5",
1742
+ "resolved": "https://registry.npmjs.org/hast-util-to-html/-/hast-util-to-html-9.0.5.tgz",
1743
+ "integrity": "sha512-OguPdidb+fbHQSU4Q4ZiLKnzWo8Wwsf5bZfbvu7//a9oTYoqD/fWpe96NuHkoS9h0ccGOTe0C4NGXdtS0iObOw==",
1744
+ "dev": true,
1745
+ "license": "MIT",
1746
+ "dependencies": {
1747
+ "@types/hast": "^3.0.0",
1748
+ "@types/unist": "^3.0.0",
1749
+ "ccount": "^2.0.0",
1750
+ "comma-separated-tokens": "^2.0.0",
1751
+ "hast-util-whitespace": "^3.0.0",
1752
+ "html-void-elements": "^3.0.0",
1753
+ "mdast-util-to-hast": "^13.0.0",
1754
+ "property-information": "^7.0.0",
1755
+ "space-separated-tokens": "^2.0.0",
1756
+ "stringify-entities": "^4.0.0",
1757
+ "zwitch": "^2.0.4"
1758
+ },
1759
+ "funding": {
1760
+ "type": "opencollective",
1761
+ "url": "https://opencollective.com/unified"
1762
+ }
1763
+ },
1764
+ "node_modules/hast-util-whitespace": {
1765
+ "version": "3.0.0",
1766
+ "resolved": "https://registry.npmjs.org/hast-util-whitespace/-/hast-util-whitespace-3.0.0.tgz",
1767
+ "integrity": "sha512-88JUN06ipLwsnv+dVn+OIYOvAuvBMy/Qoi6O7mQHxdPXpjy+Cd6xRkWwux7DKO+4sYILtLBRIKgsdpS2gQc7qw==",
1768
+ "dev": true,
1769
+ "license": "MIT",
1770
+ "dependencies": {
1771
+ "@types/hast": "^3.0.0"
1772
+ },
1773
+ "funding": {
1774
+ "type": "opencollective",
1775
+ "url": "https://opencollective.com/unified"
1776
+ }
1777
+ },
1778
+ "node_modules/hookable": {
1779
+ "version": "5.5.3",
1780
+ "resolved": "https://registry.npmjs.org/hookable/-/hookable-5.5.3.tgz",
1781
+ "integrity": "sha512-Yc+BQe8SvoXH1643Qez1zqLRmbA5rCL+sSmk6TVos0LWVfNIB7PGncdlId77WzLGSIB5KaWgTaNTs2lNVEI6VQ==",
1782
+ "dev": true,
1783
+ "license": "MIT"
1784
+ },
1785
+ "node_modules/html-void-elements": {
1786
+ "version": "3.0.0",
1787
+ "resolved": "https://registry.npmjs.org/html-void-elements/-/html-void-elements-3.0.0.tgz",
1788
+ "integrity": "sha512-bEqo66MRXsUGxWHV5IP0PUiAWwoEjba4VCzg0LjFJBpchPaTfyfCKTG6bc5F8ucKec3q5y6qOdGyYTSBEvhCrg==",
1789
+ "dev": true,
1790
+ "license": "MIT",
1791
+ "funding": {
1792
+ "type": "github",
1793
+ "url": "https://github.com/sponsors/wooorm"
1794
+ }
1795
+ },
1796
+ "node_modules/is-what": {
1797
+ "version": "5.5.0",
1798
+ "resolved": "https://registry.npmjs.org/is-what/-/is-what-5.5.0.tgz",
1799
+ "integrity": "sha512-oG7cgbmg5kLYae2N5IVd3jm2s+vldjxJzK1pcu9LfpGuQ93MQSzo0okvRna+7y5ifrD+20FE8FvjusyGaz14fw==",
1800
+ "dev": true,
1801
+ "license": "MIT",
1802
+ "engines": {
1803
+ "node": ">=18"
1804
+ },
1805
+ "funding": {
1806
+ "url": "https://github.com/sponsors/mesqueeb"
1807
+ }
1808
+ },
1809
+ "node_modules/magic-string": {
1810
+ "version": "0.30.21",
1811
+ "resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.30.21.tgz",
1812
+ "integrity": "sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ==",
1813
+ "dev": true,
1814
+ "license": "MIT",
1815
+ "dependencies": {
1816
+ "@jridgewell/sourcemap-codec": "^1.5.5"
1817
+ }
1818
+ },
1819
+ "node_modules/mark.js": {
1820
+ "version": "8.11.1",
1821
+ "resolved": "https://registry.npmjs.org/mark.js/-/mark.js-8.11.1.tgz",
1822
+ "integrity": "sha512-1I+1qpDt4idfgLQG+BNWmrqku+7/2bi5nLf4YwF8y8zXvmfiTBY3PV3ZibfrjBueCByROpuBjLLFCajqkgYoLQ==",
1823
+ "dev": true,
1824
+ "license": "MIT"
1825
+ },
1826
+ "node_modules/mdast-util-to-hast": {
1827
+ "version": "13.2.1",
1828
+ "resolved": "https://registry.npmjs.org/mdast-util-to-hast/-/mdast-util-to-hast-13.2.1.tgz",
1829
+ "integrity": "sha512-cctsq2wp5vTsLIcaymblUriiTcZd0CwWtCbLvrOzYCDZoWyMNV8sZ7krj09FSnsiJi3WVsHLM4k6Dq/yaPyCXA==",
1830
+ "dev": true,
1831
+ "license": "MIT",
1832
+ "dependencies": {
1833
+ "@types/hast": "^3.0.0",
1834
+ "@types/mdast": "^4.0.0",
1835
+ "@ungap/structured-clone": "^1.0.0",
1836
+ "devlop": "^1.0.0",
1837
+ "micromark-util-sanitize-uri": "^2.0.0",
1838
+ "trim-lines": "^3.0.0",
1839
+ "unist-util-position": "^5.0.0",
1840
+ "unist-util-visit": "^5.0.0",
1841
+ "vfile": "^6.0.0"
1842
+ },
1843
+ "funding": {
1844
+ "type": "opencollective",
1845
+ "url": "https://opencollective.com/unified"
1846
+ }
1847
+ },
1848
+ "node_modules/micromark-util-character": {
1849
+ "version": "2.1.1",
1850
+ "resolved": "https://registry.npmjs.org/micromark-util-character/-/micromark-util-character-2.1.1.tgz",
1851
+ "integrity": "sha512-wv8tdUTJ3thSFFFJKtpYKOYiGP2+v96Hvk4Tu8KpCAsTMs6yi+nVmGh1syvSCsaxz45J6Jbw+9DD6g97+NV67Q==",
1852
+ "dev": true,
1853
+ "funding": [
1854
+ {
1855
+ "type": "GitHub Sponsors",
1856
+ "url": "https://github.com/sponsors/unifiedjs"
1857
+ },
1858
+ {
1859
+ "type": "OpenCollective",
1860
+ "url": "https://opencollective.com/unified"
1861
+ }
1862
+ ],
1863
+ "license": "MIT",
1864
+ "dependencies": {
1865
+ "micromark-util-symbol": "^2.0.0",
1866
+ "micromark-util-types": "^2.0.0"
1867
+ }
1868
+ },
1869
+ "node_modules/micromark-util-encode": {
1870
+ "version": "2.0.1",
1871
+ "resolved": "https://registry.npmjs.org/micromark-util-encode/-/micromark-util-encode-2.0.1.tgz",
1872
+ "integrity": "sha512-c3cVx2y4KqUnwopcO9b/SCdo2O67LwJJ/UyqGfbigahfegL9myoEFoDYZgkT7f36T0bLrM9hZTAaAyH+PCAXjw==",
1873
+ "dev": true,
1874
+ "funding": [
1875
+ {
1876
+ "type": "GitHub Sponsors",
1877
+ "url": "https://github.com/sponsors/unifiedjs"
1878
+ },
1879
+ {
1880
+ "type": "OpenCollective",
1881
+ "url": "https://opencollective.com/unified"
1882
+ }
1883
+ ],
1884
+ "license": "MIT"
1885
+ },
1886
+ "node_modules/micromark-util-sanitize-uri": {
1887
+ "version": "2.0.1",
1888
+ "resolved": "https://registry.npmjs.org/micromark-util-sanitize-uri/-/micromark-util-sanitize-uri-2.0.1.tgz",
1889
+ "integrity": "sha512-9N9IomZ/YuGGZZmQec1MbgxtlgougxTodVwDzzEouPKo3qFWvymFHWcnDi2vzV1ff6kas9ucW+o3yzJK9YB1AQ==",
1890
+ "dev": true,
1891
+ "funding": [
1892
+ {
1893
+ "type": "GitHub Sponsors",
1894
+ "url": "https://github.com/sponsors/unifiedjs"
1895
+ },
1896
+ {
1897
+ "type": "OpenCollective",
1898
+ "url": "https://opencollective.com/unified"
1899
+ }
1900
+ ],
1901
+ "license": "MIT",
1902
+ "dependencies": {
1903
+ "micromark-util-character": "^2.0.0",
1904
+ "micromark-util-encode": "^2.0.0",
1905
+ "micromark-util-symbol": "^2.0.0"
1906
+ }
1907
+ },
1908
+ "node_modules/micromark-util-symbol": {
1909
+ "version": "2.0.1",
1910
+ "resolved": "https://registry.npmjs.org/micromark-util-symbol/-/micromark-util-symbol-2.0.1.tgz",
1911
+ "integrity": "sha512-vs5t8Apaud9N28kgCrRUdEed4UJ+wWNvicHLPxCa9ENlYuAY31M0ETy5y1vA33YoNPDFTghEbnh6efaE8h4x0Q==",
1912
+ "dev": true,
1913
+ "funding": [
1914
+ {
1915
+ "type": "GitHub Sponsors",
1916
+ "url": "https://github.com/sponsors/unifiedjs"
1917
+ },
1918
+ {
1919
+ "type": "OpenCollective",
1920
+ "url": "https://opencollective.com/unified"
1921
+ }
1922
+ ],
1923
+ "license": "MIT"
1924
+ },
1925
+ "node_modules/micromark-util-types": {
1926
+ "version": "2.0.2",
1927
+ "resolved": "https://registry.npmjs.org/micromark-util-types/-/micromark-util-types-2.0.2.tgz",
1928
+ "integrity": "sha512-Yw0ECSpJoViF1qTU4DC6NwtC4aWGt1EkzaQB8KPPyCRR8z9TWeV0HbEFGTO+ZY1wB22zmxnJqhPyTpOVCpeHTA==",
1929
+ "dev": true,
1930
+ "funding": [
1931
+ {
1932
+ "type": "GitHub Sponsors",
1933
+ "url": "https://github.com/sponsors/unifiedjs"
1934
+ },
1935
+ {
1936
+ "type": "OpenCollective",
1937
+ "url": "https://opencollective.com/unified"
1938
+ }
1939
+ ],
1940
+ "license": "MIT"
1941
+ },
1942
+ "node_modules/minisearch": {
1943
+ "version": "7.2.0",
1944
+ "resolved": "https://registry.npmjs.org/minisearch/-/minisearch-7.2.0.tgz",
1945
+ "integrity": "sha512-dqT2XBYUOZOiC5t2HRnwADjhNS2cecp9u+TJRiJ1Qp/f5qjkeT5APcGPjHw+bz89Ms8Jp+cG4AlE+QZ/QnDglg==",
1946
+ "dev": true,
1947
+ "license": "MIT"
1948
+ },
1949
+ "node_modules/mitt": {
1950
+ "version": "3.0.1",
1951
+ "resolved": "https://registry.npmjs.org/mitt/-/mitt-3.0.1.tgz",
1952
+ "integrity": "sha512-vKivATfr97l2/QBCYAkXYDbrIWPM2IIKEl7YPhjCvKlG3kE2gm+uBo6nEXK3M5/Ffh/FLpKExzOQ3JJoJGFKBw==",
1953
+ "dev": true,
1954
+ "license": "MIT"
1955
+ },
1956
+ "node_modules/nanoid": {
1957
+ "version": "3.3.11",
1958
+ "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.11.tgz",
1959
+ "integrity": "sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==",
1960
+ "dev": true,
1961
+ "funding": [
1962
+ {
1963
+ "type": "github",
1964
+ "url": "https://github.com/sponsors/ai"
1965
+ }
1966
+ ],
1967
+ "license": "MIT",
1968
+ "bin": {
1969
+ "nanoid": "bin/nanoid.cjs"
1970
+ },
1971
+ "engines": {
1972
+ "node": "^10 || ^12 || ^13.7 || ^14 || >=15.0.1"
1973
+ }
1974
+ },
1975
+ "node_modules/oniguruma-to-es": {
1976
+ "version": "3.1.1",
1977
+ "resolved": "https://registry.npmjs.org/oniguruma-to-es/-/oniguruma-to-es-3.1.1.tgz",
1978
+ "integrity": "sha512-bUH8SDvPkH3ho3dvwJwfonjlQ4R80vjyvrU8YpxuROddv55vAEJrTuCuCVUhhsHbtlD9tGGbaNApGQckXhS8iQ==",
1979
+ "dev": true,
1980
+ "license": "MIT",
1981
+ "dependencies": {
1982
+ "emoji-regex-xs": "^1.0.0",
1983
+ "regex": "^6.0.1",
1984
+ "regex-recursion": "^6.0.2"
1985
+ }
1986
+ },
1987
+ "node_modules/perfect-debounce": {
1988
+ "version": "1.0.0",
1989
+ "resolved": "https://registry.npmjs.org/perfect-debounce/-/perfect-debounce-1.0.0.tgz",
1990
+ "integrity": "sha512-xCy9V055GLEqoFaHoC1SoLIaLmWctgCUaBaWxDZ7/Zx4CTyX7cJQLJOok/orfjZAh9kEYpjJa4d0KcJmCbctZA==",
1991
+ "dev": true,
1992
+ "license": "MIT"
1993
+ },
1994
+ "node_modules/picocolors": {
1995
+ "version": "1.1.1",
1996
+ "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz",
1997
+ "integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==",
1998
+ "dev": true,
1999
+ "license": "ISC"
2000
+ },
2001
+ "node_modules/postcss": {
2002
+ "version": "8.5.6",
2003
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.6.tgz",
2004
+ "integrity": "sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==",
2005
+ "dev": true,
2006
+ "funding": [
2007
+ {
2008
+ "type": "opencollective",
2009
+ "url": "https://opencollective.com/postcss/"
2010
+ },
2011
+ {
2012
+ "type": "tidelift",
2013
+ "url": "https://tidelift.com/funding/github/npm/postcss"
2014
+ },
2015
+ {
2016
+ "type": "github",
2017
+ "url": "https://github.com/sponsors/ai"
2018
+ }
2019
+ ],
2020
+ "license": "MIT",
2021
+ "dependencies": {
2022
+ "nanoid": "^3.3.11",
2023
+ "picocolors": "^1.1.1",
2024
+ "source-map-js": "^1.2.1"
2025
+ },
2026
+ "engines": {
2027
+ "node": "^10 || ^12 || >=14"
2028
+ }
2029
+ },
2030
+ "node_modules/preact": {
2031
+ "version": "10.28.1",
2032
+ "resolved": "https://registry.npmjs.org/preact/-/preact-10.28.1.tgz",
2033
+ "integrity": "sha512-u1/ixq/lVQI0CakKNvLDEcW5zfCjUQfZdK9qqWuIJtsezuyG6pk9TWj75GMuI/EzRSZB/VAE43sNWWZfiy8psw==",
2034
+ "dev": true,
2035
+ "license": "MIT",
2036
+ "funding": {
2037
+ "type": "opencollective",
2038
+ "url": "https://opencollective.com/preact"
2039
+ }
2040
+ },
2041
+ "node_modules/property-information": {
2042
+ "version": "7.1.0",
2043
+ "resolved": "https://registry.npmjs.org/property-information/-/property-information-7.1.0.tgz",
2044
+ "integrity": "sha512-TwEZ+X+yCJmYfL7TPUOcvBZ4QfoT5YenQiJuX//0th53DE6w0xxLEtfK3iyryQFddXuvkIk51EEgrJQ0WJkOmQ==",
2045
+ "dev": true,
2046
+ "license": "MIT",
2047
+ "funding": {
2048
+ "type": "github",
2049
+ "url": "https://github.com/sponsors/wooorm"
2050
+ }
2051
+ },
2052
+ "node_modules/regex": {
2053
+ "version": "6.1.0",
2054
+ "resolved": "https://registry.npmjs.org/regex/-/regex-6.1.0.tgz",
2055
+ "integrity": "sha512-6VwtthbV4o/7+OaAF9I5L5V3llLEsoPyq9P1JVXkedTP33c7MfCG0/5NOPcSJn0TzXcG9YUrR0gQSWioew3LDg==",
2056
+ "dev": true,
2057
+ "license": "MIT",
2058
+ "dependencies": {
2059
+ "regex-utilities": "^2.3.0"
2060
+ }
2061
+ },
2062
+ "node_modules/regex-recursion": {
2063
+ "version": "6.0.2",
2064
+ "resolved": "https://registry.npmjs.org/regex-recursion/-/regex-recursion-6.0.2.tgz",
2065
+ "integrity": "sha512-0YCaSCq2VRIebiaUviZNs0cBz1kg5kVS2UKUfNIx8YVs1cN3AV7NTctO5FOKBA+UT2BPJIWZauYHPqJODG50cg==",
2066
+ "dev": true,
2067
+ "license": "MIT",
2068
+ "dependencies": {
2069
+ "regex-utilities": "^2.3.0"
2070
+ }
2071
+ },
2072
+ "node_modules/regex-utilities": {
2073
+ "version": "2.3.0",
2074
+ "resolved": "https://registry.npmjs.org/regex-utilities/-/regex-utilities-2.3.0.tgz",
2075
+ "integrity": "sha512-8VhliFJAWRaUiVvREIiW2NXXTmHs4vMNnSzuJVhscgmGav3g9VDxLrQndI3dZZVVdp0ZO/5v0xmX516/7M9cng==",
2076
+ "dev": true,
2077
+ "license": "MIT"
2078
+ },
2079
+ "node_modules/rfdc": {
2080
+ "version": "1.4.1",
2081
+ "resolved": "https://registry.npmjs.org/rfdc/-/rfdc-1.4.1.tgz",
2082
+ "integrity": "sha512-q1b3N5QkRUWUl7iyylaaj3kOpIT0N2i9MqIEQXP73GVsN9cw3fdx8X63cEmWhJGi2PPCF23Ijp7ktmd39rawIA==",
2083
+ "dev": true,
2084
+ "license": "MIT"
2085
+ },
2086
+ "node_modules/rollup": {
2087
+ "version": "4.54.0",
2088
+ "resolved": "https://registry.npmjs.org/rollup/-/rollup-4.54.0.tgz",
2089
+ "integrity": "sha512-3nk8Y3a9Ea8szgKhinMlGMhGMw89mqule3KWczxhIzqudyHdCIOHw8WJlj/r329fACjKLEh13ZSk7oE22kyeIw==",
2090
+ "dev": true,
2091
+ "license": "MIT",
2092
+ "dependencies": {
2093
+ "@types/estree": "1.0.8"
2094
+ },
2095
+ "bin": {
2096
+ "rollup": "dist/bin/rollup"
2097
+ },
2098
+ "engines": {
2099
+ "node": ">=18.0.0",
2100
+ "npm": ">=8.0.0"
2101
+ },
2102
+ "optionalDependencies": {
2103
+ "@rollup/rollup-android-arm-eabi": "4.54.0",
2104
+ "@rollup/rollup-android-arm64": "4.54.0",
2105
+ "@rollup/rollup-darwin-arm64": "4.54.0",
2106
+ "@rollup/rollup-darwin-x64": "4.54.0",
2107
+ "@rollup/rollup-freebsd-arm64": "4.54.0",
2108
+ "@rollup/rollup-freebsd-x64": "4.54.0",
2109
+ "@rollup/rollup-linux-arm-gnueabihf": "4.54.0",
2110
+ "@rollup/rollup-linux-arm-musleabihf": "4.54.0",
2111
+ "@rollup/rollup-linux-arm64-gnu": "4.54.0",
2112
+ "@rollup/rollup-linux-arm64-musl": "4.54.0",
2113
+ "@rollup/rollup-linux-loong64-gnu": "4.54.0",
2114
+ "@rollup/rollup-linux-ppc64-gnu": "4.54.0",
2115
+ "@rollup/rollup-linux-riscv64-gnu": "4.54.0",
2116
+ "@rollup/rollup-linux-riscv64-musl": "4.54.0",
2117
+ "@rollup/rollup-linux-s390x-gnu": "4.54.0",
2118
+ "@rollup/rollup-linux-x64-gnu": "4.54.0",
2119
+ "@rollup/rollup-linux-x64-musl": "4.54.0",
2120
+ "@rollup/rollup-openharmony-arm64": "4.54.0",
2121
+ "@rollup/rollup-win32-arm64-msvc": "4.54.0",
2122
+ "@rollup/rollup-win32-ia32-msvc": "4.54.0",
2123
+ "@rollup/rollup-win32-x64-gnu": "4.54.0",
2124
+ "@rollup/rollup-win32-x64-msvc": "4.54.0",
2125
+ "fsevents": "~2.3.2"
2126
+ }
2127
+ },
2128
+ "node_modules/search-insights": {
2129
+ "version": "2.17.3",
2130
+ "resolved": "https://registry.npmjs.org/search-insights/-/search-insights-2.17.3.tgz",
2131
+ "integrity": "sha512-RQPdCYTa8A68uM2jwxoY842xDhvx3E5LFL1LxvxCNMev4o5mLuokczhzjAgGwUZBAmOKZknArSxLKmXtIi2AxQ==",
2132
+ "dev": true,
2133
+ "license": "MIT",
2134
+ "peer": true
2135
+ },
2136
+ "node_modules/shiki": {
2137
+ "version": "2.5.0",
2138
+ "resolved": "https://registry.npmjs.org/shiki/-/shiki-2.5.0.tgz",
2139
+ "integrity": "sha512-mI//trrsaiCIPsja5CNfsyNOqgAZUb6VpJA+340toL42UpzQlXpwRV9nch69X6gaUxrr9kaOOa6e3y3uAkGFxQ==",
2140
+ "dev": true,
2141
+ "license": "MIT",
2142
+ "dependencies": {
2143
+ "@shikijs/core": "2.5.0",
2144
+ "@shikijs/engine-javascript": "2.5.0",
2145
+ "@shikijs/engine-oniguruma": "2.5.0",
2146
+ "@shikijs/langs": "2.5.0",
2147
+ "@shikijs/themes": "2.5.0",
2148
+ "@shikijs/types": "2.5.0",
2149
+ "@shikijs/vscode-textmate": "^10.0.2",
2150
+ "@types/hast": "^3.0.4"
2151
+ }
2152
+ },
2153
+ "node_modules/source-map-js": {
2154
+ "version": "1.2.1",
2155
+ "resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz",
2156
+ "integrity": "sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==",
2157
+ "dev": true,
2158
+ "license": "BSD-3-Clause",
2159
+ "engines": {
2160
+ "node": ">=0.10.0"
2161
+ }
2162
+ },
2163
+ "node_modules/space-separated-tokens": {
2164
+ "version": "2.0.2",
2165
+ "resolved": "https://registry.npmjs.org/space-separated-tokens/-/space-separated-tokens-2.0.2.tgz",
2166
+ "integrity": "sha512-PEGlAwrG8yXGXRjW32fGbg66JAlOAwbObuqVoJpv/mRgoWDQfgH1wDPvtzWyUSNAXBGSk8h755YDbbcEy3SH2Q==",
2167
+ "dev": true,
2168
+ "license": "MIT",
2169
+ "funding": {
2170
+ "type": "github",
2171
+ "url": "https://github.com/sponsors/wooorm"
2172
+ }
2173
+ },
2174
+ "node_modules/speakingurl": {
2175
+ "version": "14.0.1",
2176
+ "resolved": "https://registry.npmjs.org/speakingurl/-/speakingurl-14.0.1.tgz",
2177
+ "integrity": "sha512-1POYv7uv2gXoyGFpBCmpDVSNV74IfsWlDW216UPjbWufNf+bSU6GdbDsxdcxtfwb4xlI3yxzOTKClUosxARYrQ==",
2178
+ "dev": true,
2179
+ "license": "BSD-3-Clause",
2180
+ "engines": {
2181
+ "node": ">=0.10.0"
2182
+ }
2183
+ },
2184
+ "node_modules/stringify-entities": {
2185
+ "version": "4.0.4",
2186
+ "resolved": "https://registry.npmjs.org/stringify-entities/-/stringify-entities-4.0.4.tgz",
2187
+ "integrity": "sha512-IwfBptatlO+QCJUo19AqvrPNqlVMpW9YEL2LIVY+Rpv2qsjCGxaDLNRgeGsQWJhfItebuJhsGSLjaBbNSQ+ieg==",
2188
+ "dev": true,
2189
+ "license": "MIT",
2190
+ "dependencies": {
2191
+ "character-entities-html4": "^2.0.0",
2192
+ "character-entities-legacy": "^3.0.0"
2193
+ },
2194
+ "funding": {
2195
+ "type": "github",
2196
+ "url": "https://github.com/sponsors/wooorm"
2197
+ }
2198
+ },
2199
+ "node_modules/superjson": {
2200
+ "version": "2.2.6",
2201
+ "resolved": "https://registry.npmjs.org/superjson/-/superjson-2.2.6.tgz",
2202
+ "integrity": "sha512-H+ue8Zo4vJmV2nRjpx86P35lzwDT3nItnIsocgumgr0hHMQ+ZGq5vrERg9kJBo5AWGmxZDhzDo+WVIJqkB0cGA==",
2203
+ "dev": true,
2204
+ "license": "MIT",
2205
+ "dependencies": {
2206
+ "copy-anything": "^4"
2207
+ },
2208
+ "engines": {
2209
+ "node": ">=16"
2210
+ }
2211
+ },
2212
+ "node_modules/tabbable": {
2213
+ "version": "6.3.0",
2214
+ "resolved": "https://registry.npmjs.org/tabbable/-/tabbable-6.3.0.tgz",
2215
+ "integrity": "sha512-EIHvdY5bPLuWForiR/AN2Bxngzpuwn1is4asboytXtpTgsArc+WmSJKVLlhdh71u7jFcryDqB2A8lQvj78MkyQ==",
2216
+ "dev": true,
2217
+ "license": "MIT"
2218
+ },
2219
+ "node_modules/trim-lines": {
2220
+ "version": "3.0.1",
2221
+ "resolved": "https://registry.npmjs.org/trim-lines/-/trim-lines-3.0.1.tgz",
2222
+ "integrity": "sha512-kRj8B+YHZCc9kQYdWfJB2/oUl9rA99qbowYYBtr4ui4mZyAQ2JpvVBd/6U2YloATfqBhBTSMhTpgBHtU0Mf3Rg==",
2223
+ "dev": true,
2224
+ "license": "MIT",
2225
+ "funding": {
2226
+ "type": "github",
2227
+ "url": "https://github.com/sponsors/wooorm"
2228
+ }
2229
+ },
2230
+ "node_modules/unist-util-is": {
2231
+ "version": "6.0.1",
2232
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-6.0.1.tgz",
2233
+ "integrity": "sha512-LsiILbtBETkDz8I9p1dQ0uyRUWuaQzd/cuEeS1hoRSyW5E5XGmTzlwY1OrNzzakGowI9Dr/I8HVaw4hTtnxy8g==",
2234
+ "dev": true,
2235
+ "license": "MIT",
2236
+ "dependencies": {
2237
+ "@types/unist": "^3.0.0"
2238
+ },
2239
+ "funding": {
2240
+ "type": "opencollective",
2241
+ "url": "https://opencollective.com/unified"
2242
+ }
2243
+ },
2244
+ "node_modules/unist-util-position": {
2245
+ "version": "5.0.0",
2246
+ "resolved": "https://registry.npmjs.org/unist-util-position/-/unist-util-position-5.0.0.tgz",
2247
+ "integrity": "sha512-fucsC7HjXvkB5R3kTCO7kUjRdrS0BJt3M/FPxmHMBOm8JQi2BsHAHFsy27E0EolP8rp0NzXsJ+jNPyDWvOJZPA==",
2248
+ "dev": true,
2249
+ "license": "MIT",
2250
+ "dependencies": {
2251
+ "@types/unist": "^3.0.0"
2252
+ },
2253
+ "funding": {
2254
+ "type": "opencollective",
2255
+ "url": "https://opencollective.com/unified"
2256
+ }
2257
+ },
2258
+ "node_modules/unist-util-stringify-position": {
2259
+ "version": "4.0.0",
2260
+ "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-4.0.0.tgz",
2261
+ "integrity": "sha512-0ASV06AAoKCDkS2+xw5RXJywruurpbC4JZSm7nr7MOt1ojAzvyyaO+UxZf18j8FCF6kmzCZKcAgN/yu2gm2XgQ==",
2262
+ "dev": true,
2263
+ "license": "MIT",
2264
+ "dependencies": {
2265
+ "@types/unist": "^3.0.0"
2266
+ },
2267
+ "funding": {
2268
+ "type": "opencollective",
2269
+ "url": "https://opencollective.com/unified"
2270
+ }
2271
+ },
2272
+ "node_modules/unist-util-visit": {
2273
+ "version": "5.0.0",
2274
+ "resolved": "https://registry.npmjs.org/unist-util-visit/-/unist-util-visit-5.0.0.tgz",
2275
+ "integrity": "sha512-MR04uvD+07cwl/yhVuVWAtw+3GOR/knlL55Nd/wAdblk27GCVt3lqpTivy/tkJcZoNPzTwS1Y+KMojlLDhoTzg==",
2276
+ "dev": true,
2277
+ "license": "MIT",
2278
+ "dependencies": {
2279
+ "@types/unist": "^3.0.0",
2280
+ "unist-util-is": "^6.0.0",
2281
+ "unist-util-visit-parents": "^6.0.0"
2282
+ },
2283
+ "funding": {
2284
+ "type": "opencollective",
2285
+ "url": "https://opencollective.com/unified"
2286
+ }
2287
+ },
2288
+ "node_modules/unist-util-visit-parents": {
2289
+ "version": "6.0.2",
2290
+ "resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-6.0.2.tgz",
2291
+ "integrity": "sha512-goh1s1TBrqSqukSc8wrjwWhL0hiJxgA8m4kFxGlQ+8FYQ3C/m11FcTs4YYem7V664AhHVvgoQLk890Ssdsr2IQ==",
2292
+ "dev": true,
2293
+ "license": "MIT",
2294
+ "dependencies": {
2295
+ "@types/unist": "^3.0.0",
2296
+ "unist-util-is": "^6.0.0"
2297
+ },
2298
+ "funding": {
2299
+ "type": "opencollective",
2300
+ "url": "https://opencollective.com/unified"
2301
+ }
2302
+ },
2303
+ "node_modules/vfile": {
2304
+ "version": "6.0.3",
2305
+ "resolved": "https://registry.npmjs.org/vfile/-/vfile-6.0.3.tgz",
2306
+ "integrity": "sha512-KzIbH/9tXat2u30jf+smMwFCsno4wHVdNmzFyL+T/L3UGqqk6JKfVqOFOZEpZSHADH1k40ab6NUIXZq422ov3Q==",
2307
+ "dev": true,
2308
+ "license": "MIT",
2309
+ "dependencies": {
2310
+ "@types/unist": "^3.0.0",
+ "vfile-message": "^4.0.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/unified"
+ }
+ },
+ "node_modules/vfile-message": {
+ "version": "4.0.3",
+ "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-4.0.3.tgz",
+ "integrity": "sha512-QTHzsGd1EhbZs4AsQ20JX1rC3cOlt/IWJruk893DfLRr57lcnOeMaWG4K0JrRta4mIJZKth2Au3mM3u03/JWKw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@types/unist": "^3.0.0",
+ "unist-util-stringify-position": "^4.0.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/unified"
+ }
+ },
+ "node_modules/vite": {
+ "version": "5.4.21",
+ "resolved": "https://registry.npmjs.org/vite/-/vite-5.4.21.tgz",
+ "integrity": "sha512-o5a9xKjbtuhY6Bi5S3+HvbRERmouabWbyUcpXXUA1u+GNUKoROi9byOJ8M0nHbHYHkYICiMlqxkg1KkYmm25Sw==",
+ "dev": true,
+ "license": "MIT",
+ "peer": true,
+ "dependencies": {
+ "esbuild": "^0.21.3",
+ "postcss": "^8.4.43",
+ "rollup": "^4.20.0"
+ },
+ "bin": {
+ "vite": "bin/vite.js"
+ },
+ "engines": {
+ "node": "^18.0.0 || >=20.0.0"
+ },
+ "funding": {
+ "url": "https://github.com/vitejs/vite?sponsor=1"
+ },
+ "optionalDependencies": {
+ "fsevents": "~2.3.3"
+ },
+ "peerDependencies": {
+ "@types/node": "^18.0.0 || >=20.0.0",
+ "less": "*",
+ "lightningcss": "^1.21.0",
+ "sass": "*",
+ "sass-embedded": "*",
+ "stylus": "*",
+ "sugarss": "*",
+ "terser": "^5.4.0"
+ },
+ "peerDependenciesMeta": {
+ "@types/node": {
+ "optional": true
+ },
+ "less": {
+ "optional": true
+ },
+ "lightningcss": {
+ "optional": true
+ },
+ "sass": {
+ "optional": true
+ },
+ "sass-embedded": {
+ "optional": true
+ },
+ "stylus": {
+ "optional": true
+ },
+ "sugarss": {
+ "optional": true
+ },
+ "terser": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/vitepress": {
+ "version": "1.6.4",
+ "resolved": "https://registry.npmjs.org/vitepress/-/vitepress-1.6.4.tgz",
+ "integrity": "sha512-+2ym1/+0VVrbhNyRoFFesVvBvHAVMZMK0rw60E3X/5349M1GuVdKeazuksqopEdvkKwKGs21Q729jX81/bkBJg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@docsearch/css": "3.8.2",
+ "@docsearch/js": "3.8.2",
+ "@iconify-json/simple-icons": "^1.2.21",
+ "@shikijs/core": "^2.1.0",
+ "@shikijs/transformers": "^2.1.0",
+ "@shikijs/types": "^2.1.0",
+ "@types/markdown-it": "^14.1.2",
+ "@vitejs/plugin-vue": "^5.2.1",
+ "@vue/devtools-api": "^7.7.0",
+ "@vue/shared": "^3.5.13",
+ "@vueuse/core": "^12.4.0",
+ "@vueuse/integrations": "^12.4.0",
+ "focus-trap": "^7.6.4",
+ "mark.js": "8.11.1",
+ "minisearch": "^7.1.1",
+ "shiki": "^2.1.0",
+ "vite": "^5.4.14",
+ "vue": "^3.5.13"
+ },
+ "bin": {
+ "vitepress": "bin/vitepress.js"
+ },
+ "peerDependencies": {
+ "markdown-it-mathjax3": "^4",
+ "postcss": "^8"
+ },
+ "peerDependenciesMeta": {
+ "markdown-it-mathjax3": {
+ "optional": true
+ },
+ "postcss": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/vue": {
+ "version": "3.5.26",
+ "resolved": "https://registry.npmjs.org/vue/-/vue-3.5.26.tgz",
+ "integrity": "sha512-SJ/NTccVyAoNUJmkM9KUqPcYlY+u8OVL1X5EW9RIs3ch5H2uERxyyIUI4MRxVCSOiEcupX9xNGde1tL9ZKpimA==",
+ "dev": true,
+ "license": "MIT",
+ "peer": true,
+ "dependencies": {
+ "@vue/compiler-dom": "3.5.26",
+ "@vue/compiler-sfc": "3.5.26",
+ "@vue/runtime-dom": "3.5.26",
+ "@vue/server-renderer": "3.5.26",
+ "@vue/shared": "3.5.26"
+ },
+ "peerDependencies": {
+ "typescript": "*"
+ },
+ "peerDependenciesMeta": {
+ "typescript": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/zwitch": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/zwitch/-/zwitch-2.0.4.tgz",
+ "integrity": "sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A==",
+ "dev": true,
+ "license": "MIT",
+ "funding": {
+ "type": "github",
+ "url": "https://github.com/sponsors/wooorm"
+ }
+ }
+ }
+ }
package.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "name": "quran-muaalem-docs",
+ "private": true,
+ "type": "module",
+ "scripts": {
+ "docs:dev": "vitepress dev .",
+ "docs:build": "vitepress build .",
+ "docs:preview": "vitepress preview ."
+ },
+ "devDependencies": {
+ "vitepress": "^1.6.4"
+ }
+ }
paper.md ADDED
@@ -0,0 +1,9 @@
+ # The Paper
+
+ The paper mentioned in `README.md` is available on arXiv:
+
+ - https://arxiv.org/abs/2509.00094
+
+ A PDF copy from this repository is available here:
+
+ - /paper.pdf
paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63f5a74293896b30e2089e725b085aad40560cafc696f3701411e2ea64403b8d
+ size 498941
pyproject.toml ADDED
@@ -0,0 +1,60 @@
+ # setuptools user guide: https://setuptools.pypa.io/en/latest/userguide/quickstart.html
+ # PyPI packaging tutorial: https://packaging.python.org/en/latest/tutorials/packaging-projects/
+ # Video tutorial: https://www.youtube.com/watch?v=v6tALyc4C10
+ [build-system]
+ requires = ["setuptools"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ license = "MIT"
+ name = "quran-muaalem"
+ version = "0.0.3"
+ authors = [
+     { name="Abdullah", email="abdullahamlyossef@gmail.com" },
+ ]
+ description = "Quran phonetic script with additional Quranic utils"
+ readme = "README.md"
+ dependencies = [
+     "diff-match-patch>=20241021",
+     "numpy>=2.2.6",
+     "quran-transcript>=0.1.0",
+     "rich>=14.1.0",
+     "torch>=2.7.0",
+     "transformers>=4.55.0",
+ ]
+ requires-python = ">=3.10"
+ classifiers = [
+     "Programming Language :: Python :: 3.10",
+     "Programming Language :: Python :: 3.11",
+     "Programming Language :: Python :: 3.12",
+     "Programming Language :: Python :: 3.13",
+     "Operating System :: OS Independent",
+ ]
+
+
+ # Optional dependencies
+ [project.optional-dependencies]
+
+ test = [
+     "librosa>=0.11.0",
+     "numba>=0.61.2",
+     "pytest",
+ ]
+ ui = [
+     "gradio>=5.43.1",
+     "librosa>=0.11.0",
+     "numba>=0.61.2",
+     "moviepy>=2.2.1",
+ ]
+
+
+ [project.scripts]
+ quran-muaalem-ui = "quran_muaalem.gradio_app:main"
+
+ [project.urls]
+ Homepage = "https://github.com/obadx/quran-muaalem"
+ Issues = "https://github.com/obadx/quran-muaalem/issues"
+
+ # for adding data: https://setuptools.pypa.io/en/latest/userguide/datafiles.html#package-data
+ # [tool.setuptools.package-data]
+ # quran_transcript = ["quran-script/*"]
python-api.md ADDED
@@ -0,0 +1,102 @@
+ # Python API
+
+ The core inference class is `Muaalem` in `src/quran_muaalem/inference.py`. It runs a multi-level CTC model and returns phoneme outputs and letter attributes (sifat).
+
+ ## Class signature
+
+ ```python
+ class Muaalem:
+     def __init__(
+         self,
+         model_name_or_path: str = "obadx/muaalem-model-v3_2",
+         device: str = "cpu",
+         dtype=torch.bfloat16,
+     ):
+         ...
+
+     @torch.no_grad()
+     def __call__(
+         self,
+         waves: list[list[float] | torch.FloatTensor | NDArray],
+         ref_quran_phonetic_script_list: list[QuranPhoneticScriptOutput],
+         sampling_rate: int,
+     ) -> list[MuaalemOutput]:
+         ...
+ ```
+
+ ## Inputs
+
+ ### 1) Audio (`waves`)
+ - A list of waveforms (a batch).
+ - Each waveform may be:
+   - `list[float]`
+   - `torch.FloatTensor`
+   - `numpy.ndarray`
+ - **Required sampling rate:** `16000 Hz`. The implementation raises a `ValueError` otherwise.
+
+ ### 2) The phonetic reference (`ref_quran_phonetic_script_list`)
+ - A list of `QuranPhoneticScriptOutput` objects.
+ - Generate them with `quran_transcript.quran_phonetizer(..., remove_spaces=True)` to guarantee matching alignment.
+
+ Example of generating the reference:
+
+ ```python
+ from quran_transcript import Aya, quran_phonetizer, MoshafAttributes
+
+ uthmani_ref = Aya(8, 75).get_by_imlaey_words(17, 9).uthmani
+ moshaf = MoshafAttributes(
+     rewaya="hafs",
+     madd_monfasel_len=4,
+     madd_mottasel_len=4,
+     madd_mottasel_waqf=4,
+     madd_aared_len=4,
+ )
+ ref = quran_phonetizer(uthmani_ref, moshaf, remove_spaces=True)
+ ```
+
+ ## Outputs
+
+ The return value is a `list[MuaalemOutput]` (one item per waveform). See `src/quran_muaalem/muaalem_typing.py`:
+
+ - `Unit`: a decoded sequence with `text`, `probs`, and `ids`.
+ - `Sifa`: attributes for each phoneme group (each value a `SingleUnit` or `None`).
+ - `MuaalemOutput`: a container holding `phonemes` and `sifat`.
+
+ For details and a worked example, see the **Outputs** page.
+
+ ## Quick example
+
+ ```python
+ from librosa.core import load
+ import torch
+ from quran_transcript import Aya, quran_phonetizer, MoshafAttributes
+ from quran_muaalem import Muaalem
+
+ sampling_rate = 16000
+ wave, _ = load("./assets/test.wav", sr=sampling_rate, mono=True)
+
+ uthmani_ref = Aya(8, 75).get_by_imlaey_words(17, 9).uthmani
+ moshaf = MoshafAttributes(
+     rewaya="hafs",
+     madd_monfasel_len=4,
+     madd_mottasel_len=4,
+     madd_mottasel_waqf=4,
+     madd_aared_len=4,
+ )
+ ref = quran_phonetizer(uthmani_ref, moshaf, remove_spaces=True)
+
+ model = Muaalem(device="cuda" if torch.cuda.is_available() else "cpu")
+ outs = model([wave], [ref], sampling_rate=sampling_rate)
+ print(outs[0].phonemes.text)
+ ```
+
+ ## Notes on errors and edge cases
+
+ - If `sampling_rate` is not 16000, a `ValueError` is raised.
+ - When alignment lengths differ, padding symbols may be inserted, and some `Sifa` values may be `None`.
+ - The model always runs in inference mode (`torch.no_grad()`).
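A minimal sketch of the first check. Only the 16000 Hz requirement comes from the docs above; the helper name and the error message are illustrative, not the library's actual wording:

```python
EXPECTED_SR = 16000  # the only sampling rate the model accepts

def check_sampling_rate(sampling_rate: int) -> None:
    # Mirrors the documented behavior: anything other than 16000 Hz is rejected.
    if sampling_rate != EXPECTED_SR:
        raise ValueError(
            f"Expected a sampling rate of {EXPECTED_SR} Hz, got {sampling_rate} Hz. "
            "Resample the audio first, e.g. librosa.load(path, sr=16000)."
        )

check_sampling_rate(16000)  # passes silently
```

Resampling at load time (as in the quick example above) is the simplest way to satisfy this constraint.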
+
+ ## Performance
+
+ - The default `dtype` is `torch.bfloat16`. Switch to `torch.float16` if your GPU does not support BF16.
+ - Prefer reusing the same `Muaalem` object to avoid the cost of reloading the model.
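One way to follow the reuse advice is to memoize construction. The sketch below uses a counting stand-in instead of the real `Muaalem` so it stays self-contained; in real code you would return `quran_muaalem.Muaalem(device=device)` from the cached factory:

```python
from functools import lru_cache

class FakeMuaalem:
    """Stand-in for quran_muaalem.Muaalem; counts how often it is constructed."""
    constructions = 0

    def __init__(self, device: str = "cpu"):
        type(self).constructions += 1
        self.device = device

@lru_cache(maxsize=None)
def get_model(device: str = "cpu") -> FakeMuaalem:
    # Loading the real model is expensive; caching ensures one load per device.
    return FakeMuaalem(device=device)

a = get_model("cpu")
b = get_model("cpu")  # cache hit: same object, no reload
print(a is b, FakeMuaalem.constructions)  # True 1
```

`lru_cache` keys on the arguments, so asking for a different `device` builds a second instance while repeated calls with the same `device` reuse the first.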
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ gradio
+ opencv-python
+ numpy
+ mediapipe
requires.txt ADDED
@@ -0,0 +1,17 @@
+ diff-match-patch>=20241021
+ numpy>=2.2.6
+ quran-transcript>=0.1.0
+ rich>=14.1.0
+ torch>=2.7.0
+ transformers>=4.55.0
+
+ [test]
+ librosa>=0.11.0
+ numba>=0.61.2
+ pytest
+
+ [ui]
+ gradio>=5.43.1
+ librosa>=0.11.0
+ numba>=0.61.2
+ moviepy>=2.2.1
test.mp3 ADDED
Binary file (26.8 kB). View file
 
test.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3a827103e67142ad69be1722f513e6400d38624aa96c2706da2025674726c24
+ size 16909