Gilbert-FR-Math v1

Whisper Large V3 fine-tune for transcribing French-language mathematics lectures.

Training

  • Base model: MEscriva/gilbert-fr-source
  • Method: LoRA (r=32, alpha=64) with 4-bit NF4 quantization
  • Dataset: Lexia-Labs/french-math-asr-benchmark
  • Epochs: 3, batch 4x4
  • GPU: NVIDIA T4 16GB
Model size: 2B params, F16 (safetensors)
