MEscriva commited on
Commit
8a8ca05
·
verified ·
1 Parent(s): 3fab953

Add model card: Research baseline for Gilbert project with IP clarification

Files changed (1)
  1. README.md +191 -347
README.md CHANGED
@@ -1,456 +1,300 @@
1
  ---
2
  license: mit
3
- language: fr
4
- library_name: transformers
5
- pipeline_tag: automatic-speech-recognition
6
- thumbnail: null
7
  tags:
8
  - automatic-speech-recognition
9
- - hf-asr-leaderboard
10
- datasets:
11
- - mozilla-foundation/common_voice_13_0
12
- - facebook/multilingual_librispeech
13
- - facebook/voxpopuli
14
- - google/fleurs
15
- - gigant/african_accented_french
16
- metrics:
17
- - wer
18
- model-index:
19
- - name: whisper-large-v3-french
20
- results:
21
- - task:
22
- name: Automatic Speech Recognition
23
- type: automatic-speech-recognition
24
- dataset:
25
- name: Common Voice 13.0
26
- type: mozilla-foundation/common_voice_13_0
27
- config: fr
28
- split: test
29
- args:
30
- language: fr
31
- metrics:
32
- - name: WER
33
- type: wer
34
- value: 7.28
35
- - task:
36
- name: Automatic Speech Recognition
37
- type: automatic-speech-recognition
38
- dataset:
39
- name: Multilingual LibriSpeech (MLS)
40
- type: facebook/multilingual_librispeech
41
- config: french
42
- split: test
43
- args:
44
- language: fr
45
- metrics:
46
- - name: WER
47
- type: wer
48
- value: 3.98
49
- - task:
50
- name: Automatic Speech Recognition
51
- type: automatic-speech-recognition
52
- dataset:
53
- name: VoxPopuli
54
- type: facebook/voxpopuli
55
- config: fr
56
- split: test
57
- args:
58
- language: fr
59
- metrics:
60
- - name: WER
61
- type: wer
62
- value: 8.91
63
- - task:
64
- name: Automatic Speech Recognition
65
- type: automatic-speech-recognition
66
- dataset:
67
- name: Fleurs
68
- type: google/fleurs
69
- config: fr_fr
70
- split: test
71
- args:
72
- language: fr
73
- metrics:
74
- - name: WER
75
- type: wer
76
- value: 4.84
77
- - task:
78
- name: Automatic Speech Recognition
79
- type: automatic-speech-recognition
80
- dataset:
81
- name: African Accented French
82
- type: gigant/african_accented_french
83
- config: fr
84
- split: test
85
- args:
86
- language: fr
87
- metrics:
88
- - name: WER
89
- type: wer
90
- value: 4.20
91
  ---
92
 
93
- # Whisper-Large-V3-French
94
-
95
- Whisper-Large-V3-French is fine-tuned on `openai/whisper-large-v3` to further enhance its performance on the French language. This model has been trained to predict casing, punctuation, and numbers. While this might slightly sacrifice performance, we believe it allows for broader usage.
96
 
97
- This model has been converted into various formats, facilitating its usage across different libraries, including transformers, openai-whisper, faster-whisper, whisper.cpp, candle, mlx, etc.
98
 
99
- ## Table of Contents
100
 
101
- - [Performance](#performance)
102
- - [Usage](#usage)
103
- - [Hugging Face Pipeline](#hugging-face-pipeline)
104
- - [Hugging Face Low-level APIs](#hugging-face-low-level-apis)
105
- - [Speculative Decoding](#speculative-decoding)
106
- - [OpenAI Whisper](#openai-whisper)
107
- - [Faster Whisper](#faster-whisper)
108
- - [Whisper.cpp](#whispercpp)
109
- - [Candle](#candle)
110
- - [MLX](#mlx)
111
- - [Training details](#training-details)
112
- - [Acknowledgements](#acknowledgements)
113
 
114
- ## Performance
115
 
116
- We evaluated our model on both short and long-form transcriptions, and also tested it on both in-distribution and out-of-distribution datasets to conduct a comprehensive analysis assessing its accuracy, generalizability, and robustness.
117
 
118
- Please note that the reported WER is the result after converting numbers to text, removing punctuation (except for apostrophes and hyphens), and converting all characters to lowercase.
119
 
120
- All evaluation results on the public datasets can be found [here](https://drive.google.com/drive/folders/1rFIh6yXRVa9RZ0ieZoKiThFZgQ4STPPI?usp=drive_link).
121
 
122
- ### Short-Form Transcription
123
 
124
- ![eval-short-form](https://huggingface.co/bofenghuang/whisper-large-v3-french/resolve/main/assets/whisper_fr_eval_short_form.png)
125
 
126
- Due to the lack of readily available out-of-domain (OOD) and long-form test sets in French, we evaluated using internal test sets from [Zaion Lab](https://zaion.ai/). These sets comprise human-annotated audio-transcription pairs from call center conversations, which are notable for their significant background noise and domain-specific terminology.
127
 
128
- ### Long-Form Transcription
129
 
130
- ![eval-long-form](https://huggingface.co/bofenghuang/whisper-large-v3-french/resolve/main/assets/whisper_fr_eval_long_form.png)
131
 
132
- The long-form transcription was run using the 🤗 Hugging Face pipeline for quicker evaluation. Audio files were segmented into 30-second chunks and processed in parallel.
133
 
134
- ## Usage
135
 
136
- ### Hugging Face Pipeline
137
 
138
- The model can easily be used with the 🤗 Hugging Face [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class for audio transcription.
139
 
140
- For long-form transcription (> 30 seconds), enable chunking by passing the `chunk_length_s` argument. This approach splits the audio into smaller chunks, processes them in parallel, and joins them at the strides by finding the longest common sequence. While this chunked approach may cost slightly in accuracy compared to OpenAI's sequential algorithm, it provides up to 9x faster inference.
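The join step described above can be illustrated with a small self-contained sketch (a simplified illustration of the idea only; the actual `transformers` implementation merges token ids rather than words and also handles timestamps):

```python
def merge_chunks(left, right, max_overlap=10):
    """Join two overlapping chunk transcripts (lists of words) at the
    longest common sequence shared by the end of `left` and the start
    of `right` (the stride region)."""
    best = 0
    for k in range(1, min(max_overlap, len(left), len(right)) + 1):
        if left[-k:] == right[:k]:
            best = k  # keep the longest matching run
    return left + right[best:]

a = "le chat est assis sur le".split()
b = "assis sur le tapis rouge".split()
print(" ".join(merge_chunks(a, b)))  # le chat est assis sur le tapis rouge
```

When no overlap is found, the chunks are simply concatenated, which is why a sufficient stride matters in practice.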
141
 
142
- ```python
143
- import torch
144
- from datasets import load_dataset
145
- from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
146
 
147
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
148
- torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
 
 
149
 
150
- # Load model
151
- model_name_or_path = "bofenghuang/whisper-large-v3-french"
152
- processor = AutoProcessor.from_pretrained(model_name_or_path)
153
- model = AutoModelForSpeechSeq2Seq.from_pretrained(
154
- model_name_or_path,
155
- torch_dtype=torch_dtype,
156
- low_cpu_mem_usage=True,
157
- )
158
- model.to(device)
159
 
160
- # Init pipeline
161
- pipe = pipeline(
162
- "automatic-speech-recognition",
163
- model=model,
164
- feature_extractor=processor.feature_extractor,
165
- tokenizer=processor.tokenizer,
166
- torch_dtype=torch_dtype,
167
- device=device,
168
- # chunk_length_s=30, # for long-form transcription
169
- max_new_tokens=128,
170
- )
171
 
172
- # Example audio
173
- dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
174
- sample = dataset[0]["audio"]
175
 
176
- # Run pipeline
177
- result = pipe(sample)
178
- print(result["text"])
179
- ```
180
 
181
- ### Hugging Face Low-level APIs
182
 
183
- You can also use the 🤗 Hugging Face low-level APIs for transcription, offering greater control over the process, as demonstrated below:
184
 
185
- ```python
186
- import torch
187
- from datasets import load_dataset
188
- from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
189
 
190
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
191
- torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
192
 
193
- # Load model
194
- model_name_or_path = "bofenghuang/whisper-large-v3-french"
195
- processor = AutoProcessor.from_pretrained(model_name_or_path)
196
- model = AutoModelForSpeechSeq2Seq.from_pretrained(
197
- model_name_or_path,
198
- torch_dtype=torch_dtype,
199
- low_cpu_mem_usage=True,
200
- )
201
- model.to(device)
202
-
203
- # Example audio
204
- dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
205
- sample = dataset[0]["audio"]
206
-
207
- # Extract features
208
- input_features = processor(
209
- sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
210
- ).input_features
211
 
 
212
 
213
- # Generate tokens
214
- predicted_ids = model.generate(
215
- input_features.to(dtype=torch_dtype).to(device), max_new_tokens=128
216
- )
217
 
218
- # Detokenize to text
219
- transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
220
- print(transcription)
221
  ```
222
 
223
- ### Speculative Decoding
224
-
225
- [Speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding) can be achieved using a draft model, essentially a distilled version of Whisper. This approach guarantees identical outputs to using the main Whisper model alone, offers a 2x faster inference speed, and incurs only a slight increase in memory overhead.
226
-
227
- Since the distilled Whisper shares its encoder with the original model, only its decoder needs to be loaded, and the encoder outputs are shared between the main and draft models during inference.
228
-
229
- Using speculative decoding with the Hugging Face pipeline is simple - just specify the `assistant_model` within the generation configurations.
230
 
231
  ```python
 
232
  import torch
233
- from datasets import load_dataset
234
- from transformers import (
235
- AutoModelForCausalLM,
236
- AutoModelForSpeechSeq2Seq,
237
- AutoProcessor,
238
- pipeline,
239
- )
240
 
241
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
242
- torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
 
243
 
244
- # Load model
245
- model_name_or_path = "bofenghuang/whisper-large-v3-french"
246
- processor = AutoProcessor.from_pretrained(model_name_or_path)
247
  model = AutoModelForSpeechSeq2Seq.from_pretrained(
248
- model_name_or_path,
249
  torch_dtype=torch_dtype,
250
- low_cpu_mem_usage=True,
251
  )
252
  model.to(device)
253
 
254
- # Load draft model
255
- assistant_model_name_or_path = "bofenghuang/whisper-large-v3-french-distil-dec2"
256
- assistant_model = AutoModelForCausalLM.from_pretrained(
257
- assistant_model_name_or_path,
258
- torch_dtype=torch_dtype,
259
- low_cpu_mem_usage=True,
260
- )
261
- assistant_model.to(device)
262
-
263
- # Init pipeline
264
- pipe = pipeline(
265
- "automatic-speech-recognition",
266
- model=model,
267
- feature_extractor=processor.feature_extractor,
268
- tokenizer=processor.tokenizer,
269
- torch_dtype=torch_dtype,
270
- device=device,
271
- generate_kwargs={"assistant_model": assistant_model},
272
- max_new_tokens=128,
273
- )
274
-
275
- # Example audio
276
- dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
277
- sample = dataset[0]["audio"]
278
-
279
- # Run pipeline
280
- result = pipe(sample)
281
- print(result["text"])
282
- ```
283
-
284
- ### OpenAI Whisper
285
-
286
- You can also employ the sequential long-form decoding algorithm with a sliding window and temperature fallback, as outlined by OpenAI in their original [paper](https://arxiv.org/abs/2212.04356).
287
-
288
- First, install the [openai-whisper](https://github.com/openai/whisper) package:
289
-
290
- ```bash
291
- pip install -U openai-whisper
292
- ```
293
-
294
- Then, download the converted model:
295
-
296
- ```bash
297
- python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='bofenghuang/whisper-large-v3-french', filename='original_model.pt', local_dir='./models/whisper-large-v3-french')"
298
  ```
299
 
300
- Now, you can transcribe audio files by following the usage instructions provided in the repository:
301
 
302
  ```python
303
  import whisper
304
- from datasets import load_dataset
305
 
306
- # Load model
307
- model = whisper.load_model("./models/whisper-large-v3-french/original_model.pt")
308
 
309
- # Example audio
310
- dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
311
- sample = dataset[0]["audio"]["array"].astype("float32")
 
 
 
312
 
313
- # Transcribe
314
- result = model.transcribe(sample, language="fr")
315
  print(result["text"])
316
  ```
317
 
318
- ### Faster Whisper
319
 
320
- Faster Whisper is a reimplementation of OpenAI's Whisper models and the sequential long-form decoding algorithm in the [CTranslate2](https://github.com/OpenNMT/CTranslate2) format.
321
 
322
- Compared to openai-whisper, it offers up to 4x faster inference speed, while consuming less memory. Additionally, the model can be quantized into int8, further enhancing its efficiency on both CPU and GPU.
323
 
324
- First, install the [faster-whisper](https://github.com/SYSTRAN/faster-whisper) package:
325
 
326
- ```bash
327
- pip install faster-whisper
328
- ```
329
 
330
- Then, download the model converted to the CTranslate2 format:
331
 
332
- ```bash
333
- python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='bofenghuang/whisper-large-v3-french', local_dir='./models/whisper-large-v3-french', allow_patterns='ctranslate2/*')"
334
- ```
335
 
336
- Now, you can transcribe audio files by following the usage instructions provided in the repository:
337
 
338
- ```python
339
- from datasets import load_dataset
340
- from faster_whisper import WhisperModel
341
 
342
- # Load model
343
- model = WhisperModel("./models/whisper-large-v3-french/ctranslate2", device="cuda", compute_type="float16") # Run on GPU with FP16
344
 
345
- # Example audio
346
- dataset = load_dataset("bofenghuang/asr-dummy", "fr", split="test")
347
- sample = dataset[0]["audio"]["array"].astype("float32")
348
 
349
- segments, info = model.transcribe(sample, beam_size=5, language="fr")
350
 
351
- for segment in segments:
352
- print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
353
- ```
 
 
354
 
355
- ### Whisper.cpp
356
 
357
- Whisper.cpp is a reimplementation of OpenAI's Whisper models, crafted in plain C/C++ without any dependencies. It offers compatibility with various backends and platforms.
358
 
359
- Additionally, the model can be quantized to either 4-bit or 5-bit integers, further enhancing its efficiency.
360
 
361
- First, clone and build the [whisper.cpp](https://github.com/ggerganov/whisper.cpp) repository:
362
 
363
- ```bash
364
- git clone https://github.com/ggerganov/whisper.cpp.git
365
- cd whisper.cpp
366
 
367
- # build the main example
368
- make
369
- ```
 
370
 
371
- Next, download the converted ggml weights from the Hugging Face Hub:
 
 
372
 
373
- ```bash
374
- # Download model quantized with Q5_0 method
375
- python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='bofenghuang/whisper-large-v3-french', filename='ggml-model-q5_0.bin', local_dir='./models/whisper-large-v3-french')"
376
- ```
377
 
378
- Now, you can transcribe an audio file using the following command:
 
 
379
 
380
- ```bash
381
- ./main -m ./models/whisper-large-v3-french/ggml-model-q5_0.bin -l fr -f /path/to/audio/file --print-colors
382
- ```
383
 
384
- ### Candle
385
 
386
- [Candle-whisper](https://github.com/huggingface/candle/tree/main/candle-examples/examples/whisper) is a reimplementation of OpenAI's Whisper models in the candle format - a lightweight ML framework built in Rust.
387
 
388
- First, clone the [candle](https://github.com/huggingface/candle) repository:
389
 
390
- ```bash
391
- git clone https://github.com/huggingface/candle.git
392
- cd candle/candle-examples/examples/whisper
393
- ```
394
 
395
- Transcribe an audio file using the following command:
396
 
397
- ```bash
398
- cargo run --example whisper --release -- --model large-v3 --model-id bofenghuang/whisper-large-v3-french --language fr --input /path/to/audio/file
399
- ```
400
 
401
- To use CUDA, add `--features cuda` to the example command line:
402
 
403
- ```bash
404
- cargo run --example whisper --release --features cuda -- --model large-v3 --model-id bofenghuang/whisper-large-v3-french --language fr --input /path/to/audio/file
405
- ```
406
 
407
- ### MLX
 
 
408
 
409
- [MLX-Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper) is a reimplementation of OpenAI's Whisper models in the [MLX](https://github.com/ml-explore/mlx) format - an ML framework for Apple silicon. It supports features like lazy computation, unified memory management, etc.
410
 
411
- First, clone the [MLX Examples](https://github.com/ml-explore/mlx-examples) repository:
412
 
413
- ```bash
414
- git clone https://github.com/ml-explore/mlx-examples.git
415
- cd mlx-examples/whisper
416
- ```
417
 
418
- Next, install the dependencies:
419
 
420
- ```bash
421
- pip install -r requirements.txt
422
  ```
423
 
424
- Download the pytorch checkpoint in the original OpenAI format and convert it into MLX format (We haven't included the converted version here since the repository is already heavy and the conversion is very fast):
425
 
426
- ```bash
427
- # Download
428
- python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='bofenghuang/whisper-large-v3-french', filename='original_model.pt', local_dir='./models/whisper-large-v3-french')"
429
- # Convert into .npz
430
- python convert.py --torch-name-or-path ./models/whisper-large-v3-french/original_model.pt --mlx-path ./mlx_models/whisper-large-v3-french
431
- ```
432
 
433
- Now, you can transcribe audio with:
 
 
434
 
435
- ```python
436
- import whisper
437
 
438
- result = whisper.transcribe("/path/to/audio/file", path_or_hf_repo="mlx_models/whisper-large-v3-french", language="fr")
439
- print(result["text"])
440
- ```
441
 
442
- ## Training details
443
 
444
- We've collected a composite dataset consisting of over 2,500 hours of French speech recognition data, which includes datasets such as [Common Voice 13.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0), [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech), [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli), [Fleurs](https://huggingface.co/datasets/google/fleurs), [Multilingual TEDx](https://www.openslr.org/100/), [MediaSpeech](https://www.openslr.org/108/), [African Accented French](https://huggingface.co/datasets/gigant/african_accented_french), etc.
 
 
445
 
446
- Given that some datasets, like MLS, only offer text without case or punctuation, we employed a customized version of 🤗 [Speechbox](https://github.com/huggingface/speechbox) to restore case and punctuation from a limited set of symbols using the [bofenghuang/whisper-large-v2-cv11-french](https://huggingface.co/bofenghuang/whisper-large-v2-cv11-french) model.
447
 
448
- However, even within these datasets, we observed quality issues: mismatches between audio and transcription in language or content, poorly segmented utterances, missing words in scripted speech, and so on. We built a pipeline to filter out many of these problematic utterances and improve the dataset's quality. As a result, we excluded more than 10% of the data, and retraining the model on the filtered set noticeably reduced hallucinations.
449
 
450
- For training, we employed the [script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py) available in the 🤗 Transformers repository. The model training took place on the [Jean-Zay supercomputer](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html) at GENCI, and we extend our gratitude to the IDRIS team for their responsive support throughout the project.
451
 
452
- ## Acknowledgements
453
 
454
- - OpenAI for creating and open-sourcing the [Whisper model](https://arxiv.org/abs/2212.04356)
455
- - 🤗 Hugging Face for integrating the Whisper model and providing the training codebase within the [Transformers](https://github.com/huggingface/transformers) repository
456
- - [Genci](https://genci.fr/) for their generous contribution of GPU hours to this project
 
1
  ---
2
  license: mit
3
  tags:
4
  - automatic-speech-recognition
5
+ - asr
6
+ - whisper
7
+ - french
8
+ - speech-recognition
9
+ - stt
10
+ - multilingual
11
+ - research
12
+ - baseline
13
+ library_name: transformers
14
+ pipeline_tag: automatic-speech-recognition
15
+ base_model: openai/whisper-large-v3
16
  ---
17
 
18
+ # Gilbert-FR-Source — Research Baseline for French Automatic Speech Recognition
 
 
19
 
20
+ ## Overview
21
 
22
+ **Gilbert-FR-Source** is the foundational baseline model for the **Gilbert research project**, a comprehensive initiative focused on developing state-of-the-art automatic speech recognition (ASR) systems optimized for French language applications. This model serves as the **frozen reference point** for all subsequent research, fine-tuning, and development work within the Gilbert ecosystem.
23
 
24
+ **Important Notice on Intellectual Property:**
25
+ - This baseline model (`MEscriva/gilbert-fr-source`) is distributed under the MIT License, allowing research and commercial use.
26
+ - **All derivative models, fine-tuned variants, and specialized models developed from this baseline as part of the Gilbert project are the exclusive intellectual property of Lexia France.**
27
+ - While this baseline can be used freely under MIT terms, any models built upon it for the Gilbert project are proprietary and subject to separate licensing terms.
28
 
29
+ ---
30
 
31
+ ## Research Context
32
 
33
+ The Gilbert project is a systematic research and development effort aimed at creating highly specialized ASR systems for:
34
 
35
+ - **Professional meeting transcription** (hybrid and remote meetings)
36
+ - **Long-form multi-speaker discourse** (30-120 minute sessions)
37
+ - **Institutional environments** (education, public sector, healthcare)
38
+ - **Constrained audio conditions** (telephony, VoIP, low signal-to-noise ratio)
39
+ - **Sociolinguistic diversity** (African, Canadian, Belgian, and other French accents)
40
 
41
+ This baseline model provides the **controlled starting point** for all experimental work, ensuring reproducibility and enabling fair comparison across different research directions.
42
 
43
+ ---
44
 
45
+ ## Model Details
46
 
47
+ ### Architecture
48
 
49
+ - **Base Model:** OpenAI Whisper Large V3
50
+ - **Fine-tuning:** Optimized for French language performance
51
+ - **Framework:** Compatible with Hugging Face Transformers, OpenAI Whisper, CTranslate2, ONNX Runtime, and MLX
52
+ - **Model Size:** ~1.54 B parameters (~3.1 GB in float16, ~6.2 GB in float32)
53
 
54
+ ### Key Characteristics
55
 
56
+ - **Language:** French (primary), with multilingual capabilities
57
+ - **Context Length:** Long-form audio support (processed in chunked 30-second windows)
58
+ - **Output:** Text transcription with word-level timestamps
59
+ - **Performance:** Optimized for French speech recognition accuracy
60
 
61
+ ---
62
 
63
+ ## Intended Use
64
 
65
+ ### Research and Development
66
 
67
+ This model is intended for:
68
 
69
+ 1. **Research Baseline:** Use as a reference point for ASR research and experimentation
70
+ 2. **Comparative Studies:** Benchmark against this baseline when evaluating new architectures or training strategies
71
+ 3. **Fine-tuning Foundation:** Use as a starting point for domain-specific fine-tuning (subject to Gilbert project IP terms)
72
+ 4. **Educational Purposes:** Learning and understanding ASR model behavior
73
 
74
+ ### Production Use
75
 
76
+ While this baseline model can be used directly, **production deployments should use specialized Gilbert models** that are optimized for specific use cases and domains. Contact the Gilbert team for production-grade models.
77
 
78
+ ---
 
 
79
 
80
+ ## Performance Benchmarks
81
 
82
+ ### Reference Results
83
 
84
+ The following WER (Word Error Rate) scores serve as the **baseline reference** for future Gilbert model development:
85
 
86
+ | Dataset | WER | Notes |
87
+ |---------|-----|-------|
88
+ | MLS (FR) | 3.98% | Multilingual LibriSpeech French |
89
+ | Common Voice FR (v13.0) | 7.28% | Diverse French speech |
90
+ | VoxPopuli (FR) | 8.91% | European Parliament speeches |
91
+ | Fleurs (FR) | 4.84% | FLORES evaluation |
92
+ | African Accented French | 4.20% | Regional accent evaluation |
93
 
94
+ **Note:** These results are the **pre-fine-tuning reference point** (lower WER is better). Future Gilbert variants will be evaluated against these baselines to measure improvement.
 
95
 
96
+ ---
97
 
98
+ ## Usage
99
 
100
+ ### Installation
101
 
102
+ ```bash
103
+ pip install transformers torch torchaudio librosa soundfile
 
104
  ```
105
 
106
+ ### Basic Usage with Transformers
107
 
108
  ```python
109
+ from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
110
  import torch
 
111
 
112
+ model_id = "MEscriva/gilbert-fr-source"
113
+ device = "cuda" if torch.cuda.is_available() else "cpu"
114
+ torch_dtype = torch.float16 if device == "cuda" else torch.float32
115
 
116
+ processor = AutoProcessor.from_pretrained(model_id)
 
 
117
  model = AutoModelForSpeechSeq2Seq.from_pretrained(
118
+ model_id,
119
  torch_dtype=torch_dtype,
120
+ low_cpu_mem_usage=True
121
  )
122
  model.to(device)
123
 
124
+ # Load audio as a 16 kHz mono waveform -- the processor expects an audio
+ # array, not a file path
+ import librosa
+ audio, _ = librosa.load("your_audio.wav", sr=16000)
+
+ inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
+ input_features = inputs.input_features.to(device, dtype=torch_dtype)
+
+ with torch.no_grad():
+     generated_ids = model.generate(
+         input_features,
+         language="fr",
+         task="transcribe"
+     )
+
+ transcription = processor.batch_decode(
+     generated_ids,
+     skip_special_tokens=True
+ )[0]
+ print(transcription)
140
  ```
141
 
142
+ ### Usage with OpenAI Whisper
143
 
144
  ```python
145
  import whisper
 
146
 
147
+ # NOTE: whisper.load_model("large-v3") loads the stock OpenAI weights, not this
+ # fine-tune. To run this model with openai-whisper, download its converted
+ # checkpoint (if provided in the repository) and pass that local path instead.
+ model = whisper.load_model("large-v3")
149
 
150
+ # Transcribe French audio
151
+ result = model.transcribe(
152
+ "audio.wav",
153
+ language="fr",
154
+ task="transcribe"
155
+ )
156
 
 
 
157
  print(result["text"])
158
  ```
159
 
160
+ ---
161
 
162
+ ## Research Methodology
163
 
164
+ ### Baseline Purpose
165
 
166
+ This model serves as:
167
 
168
+ 1. **Frozen Reference:** Weights remain unchanged to ensure consistent baseline comparisons
169
+ 2. **Reproducibility Anchor:** All experiments reference this exact checkpoint
170
+ 3. **Version Control:** Future Gilbert models explicitly reference this baseline version for traceability
171
 
172
+ ### Evaluation Standards
173
 
174
+ - **WER Calculation:** Standard normalization (lowercasing, punctuation removal)
175
+ - **Metrics:** Word Error Rate (WER), Character Error Rate (CER), BLEU score
176
+ - **Advanced Metrics:** Speaker-attributed WER (SA-WER), long-context stability (internal research)
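The WER normalization convention above can be made concrete with a small stdlib-only sketch (an illustration, not the project's evaluation tooling; real evaluations typically use a library such as `jiwer` or 🤗 `evaluate`):

```python
import re

def normalize(text: str) -> str:
    """Lowercase and strip punctuation except apostrophes and hyphens."""
    text = text.lower()
    text = re.sub(r"[^\w\s'-]", " ", text)
    return " ".join(text.split())

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over normalized words,
    divided by the reference length."""
    ref, hyp = normalize(reference).split(), normalize(hypothesis).split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("Bonjour, le monde!", "bonjour le monde"))  # 0.0
print(wer("il fait beau aujourd'hui", "il fait très beau aujourd'hui"))  # 0.25
```

Note how the apostrophe in "aujourd'hui" survives normalization, so the word is not split in two.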
177
 
178
+ ### Versioning
179
 
180
+ - **Current Version:** 0.1 (Research Baseline)
181
+ - **Future Versions:** All Gilbert model variants will reference this baseline version
 
182
 
183
+ ---
 
184
 
185
+ ## Limitations
 
 
186
 
187
+ This baseline model inherits known limitations from Whisper and the underlying training data:
188
 
189
+ 1. **Overlapping Speech:** Sensitivity to simultaneous speakers
190
+ 2. **Long-form Decoding:** Occasional hallucinations in very long audio segments
191
+ 3. **Domain Shift:** Suboptimal performance on spontaneous dialogue without fine-tuning
192
+ 4. **Accent Distribution:** Potential biases related to accent representation in training data
193
+ 5. **Telephony Bandwidth:** Suboptimal performance on narrowband (8 kHz) audio without adaptation
194
 
195
+ **Understanding and quantifying these limitations is a core objective of the Gilbert research roadmap.**
196
 
197
+ ---
198
 
199
+ ## Future Research Directions
200
 
201
+ The following specialized models will be developed as independent checkpoints from this baseline:
202
 
203
+ ### Planned Gilbert Models
 
 
204
 
205
+ 1. **Gilbert-FR-Longform-v1**
206
+ - Optimized for long meetings (30-120 minutes)
207
+ - Multi-speaker interaction handling
208
+ - Discourse-level context stability
209
 
210
+ 2. **Gilbert-FR-Accents-v1**
211
+ - Robustness to regional and international French accents
212
+ - African, Canadian, Belgian accent optimization
213
 
214
+ 3. **Gilbert-FR-Telephone-v1**
215
+ - Optimized for 8 kHz VoIP/call-center speech
216
+ - Narrowband audio adaptation
 
217
 
218
+ 4. **Gilbert-Multilingual-v1**
219
+ - Extended cross-lingual performance
220
+ - Optimized French anchors with multilingual support
221
 
222
+ **All future Gilbert models are the exclusive intellectual property of Lexia France** and will include detailed evaluation reports adhering to research reproducibility standards.
 
 
223
 
224
+ ---
225
 
226
+ ## Intellectual Property and Licensing
227
 
228
+ ### License for This Baseline
229
 
230
+ This baseline model (`MEscriva/gilbert-fr-source`) is distributed under the **MIT License**, allowing:
231
 
232
+ - ✅ Commercial use
233
+ - ✅ Modification
234
+ - ✅ Distribution
235
+ - ✅ Private use
236
+ - ✅ Patent use
237
 
238
+ See the `LICENSE` file for full terms.
 
 
239
 
240
+ ### Intellectual Property Notice
241
 
242
+ **Important:** While this baseline model is available under MIT License:
 
 
243
 
244
+ - **All derivative models, fine-tuned variants, and specialized models developed as part of the Gilbert project are the exclusive intellectual property of Lexia France.**
245
+ - Use of this baseline for Gilbert project development implies acceptance of these IP terms.
246
+ - Commercial use of Gilbert project derivatives requires separate licensing agreements.
247
 
248
+ For licensing inquiries regarding Gilbert project models, contact: **mathis@lexiapro.fr**
249
 
250
+ ---
251
 
252
+ ## Citation
253
 
254
+ If you use this baseline model in your research, please cite:
255
 
256
+ ```bibtex
257
+ @software{gilbert_fr_source_2024,
+   title = {Gilbert-FR-Source: Research Baseline for French Automatic Speech Recognition},
+   author = {MEscriva and {Lexia France}},
+   year = {2024},
+   url = {https://huggingface.co/MEscriva/gilbert-fr-source},
+   version = {0.1},
+   note = {Research baseline for the Gilbert project}
+ }
265
  ```
266
 
267
+ ---
268
 
269
+ ## Acknowledgments
270
 
271
+ This baseline model is based on:
272
+ - **OpenAI Whisper Large V3** (MIT License)
273
+ - **bofenghuang/whisper-large-v3-french** (French fine-tuning)
274
 
275
+ We acknowledge the contributions of the open-source community and the original Whisper research team.
 
276
 
277
+ ---
278
+
279
+ ## Contact
280
 
281
+ For research collaboration, evaluation access, or technical inquiries:
282
 
283
+ - **Website:** [https://gilbert-assistant.fr](https://gilbert-assistant.fr)
284
+ - **Email:** mathis@lexiapro.fr
285
+ - **Repository:** [https://huggingface.co/MEscriva/gilbert-fr-source](https://huggingface.co/MEscriva/gilbert-fr-source)
286
 
287
+ ---
288
 
289
+ ## Changelog
290
 
291
+ ### Version 0.1 (2024-12-19)
292
+ - Initial research baseline release
293
+ - Based on Whisper Large V3 with French optimization
294
+ - Established as frozen reference point for Gilbert project
295
+ - Documentation of baseline performance metrics
296
+
297
+ ---
298
 
299
+ **© 2024 Lexia France. All rights reserved for Gilbert project derivatives.**
300