---
library_name: transformers
license: apache-2.0
language:
- hi
- en
tags:
- audio
- speech
- audio-language-model
- whisper
- sarvam-m
- lora
- projector
- indic
- hindi
pipeline_tag: audio-text-to-text
---

# Vocal LLM

**Cost-Efficient Joint Audio-Language Modeling via Lightweight Projector Training over Frozen Foundations**

Vocal LLM is a joint audio-language model that bridges a frozen [Whisper](https://huggingface.co/openai/whisper-medium) speech encoder with the [Sarvam-M](https://huggingface.co/sarvamai/sarvam-m) 24B Indic LLM through a lightweight trainable projector. The full training run cost **~$10** on a **single NVIDIA A100 GPU** and took approximately **6 hours**.

## Architecture

<img src="Joint_embedding_model_Sarvam_with_Whisper.svg" alt="Vocal LLM Architecture" width="100%">

Vocal LLM consists of three components:

| Component | Model | Parameters | Status |
|---|---|---|---|
| Speech Encoder | `openai/whisper-medium` | ~300M | Frozen |
| Multimodal Projector | Two-layer MLP (GELU + LayerNorm) | ~60M | Trained |
| Language Model | `sarvamai/sarvam-m` (Mistral-based, 24B) | ~24B | LoRA-adapted (~103M trainable) |

**Total trainable parameters: ~163M (projector + LoRA adapters), less than 3% of the full model.**

### How it works

1. **Audio encoding**: Raw audio is resampled to 16 kHz, converted to a log-mel spectrogram, and processed by the frozen Whisper encoder to produce 1024-dim embeddings at 50 frames/sec.
2. **Projection**: The MLP projector stacks 8 consecutive frames (8x temporal downsampling) and maps them into the LLM's 2048-dim input space, so a 30-second clip becomes ~188 pseudo-tokens (see the sketch below).
3. **Text generation**: Projected audio tokens are concatenated with text instruction tokens and processed by the LoRA-adapted Sarvam-M LLM to generate the response.

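The projector's shape can be made concrete with a short sketch. This is a hedged reconstruction from the numbers above (8-frame stacking, 1024-dim Whisper features, 2048-dim LLM input); the hidden width and exact layer order are assumptions, not the released weights.

```python
import torch
import torch.nn as nn

class AudioProjector(nn.Module):
    """Two-layer MLP projector (hedged sketch; hidden width and layer order assumed).

    Stacks 8 consecutive Whisper frames (8x temporal downsampling), then maps
    the 8 * 1024 = 8192-dim stacked vector into the LLM's 2048-dim input space.
    With hidden=6144 this comes to roughly the card's ~60M projector parameters.
    """

    def __init__(self, enc_dim=1024, llm_dim=2048, stack=8, hidden=6144):
        super().__init__()
        self.stack = stack
        self.net = nn.Sequential(
            nn.Linear(enc_dim * stack, hidden),
            nn.GELU(),
            nn.Linear(hidden, llm_dim),
            nn.LayerNorm(llm_dim),
        )

    def forward(self, x):  # x: (batch, frames, enc_dim) from the Whisper encoder
        b, t, d = x.shape
        t = (t // self.stack) * self.stack          # drop frames that don't fill a stack
        x = x[:, :t].reshape(b, t // self.stack, d * self.stack)
        return self.net(x)                          # (batch, frames/8, llm_dim)

# A 30 s clip at 50 frames/s yields 1500 encoder frames -> ~188 pseudo-tokens
# (187 here because this sketch truncates rather than pads the last stack).
proj = AudioProjector()
print(proj(torch.randn(1, 1500, 1024)).shape)  # torch.Size([1, 187, 2048])
```
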
## Training

Training follows a two-stage pipeline:

**Stage 1: Projector Pre-training.** Aligns Whisper's speech representations with Sarvam-M's text embedding space using 10K audio continuation pairs from Mozilla Common Voice (Hindi). Only the projector MLP is trained: 1 epoch, AdamW, lr=1e-4, bfloat16.

**Stage 2: Instruction Fine-tuning.** Uses 3,000 synthetic Hindi audio question-answer pairs. Both the projector and the LoRA adapters (rank 16, alpha=32, applied to all attention projections) are trained: 3 epochs, lr=5e-5.

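For reference, the Stage 2 adapter setup maps directly onto a standard `peft` configuration. A minimal sketch, assuming Mistral-style attention projection names (`q_proj`, `k_proj`, `v_proj`, `o_proj`) and an illustrative dropout value; the team's actual training code may differ.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Rank-16, alpha-32 adapters on all attention projections, as described above.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed Mistral-style names
    lora_dropout=0.05,  # illustrative; not stated in the card
    task_type="CAUSAL_LM",
)

llm = AutoModelForCausalLM.from_pretrained("sarvamai/sarvam-m", torch_dtype=torch.bfloat16)
llm = get_peft_model(llm, lora_cfg)
llm.print_trainable_parameters()  # on the order of the ~103M quoted in the table above
```
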
The synthetic dataset was generated by prompting a text-only LLM with ASR transcripts to create instruction-answer pairs, which is **10-50x cheaper** than processing raw audio through multimodal APIs.

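One plausible shape for that generation step, sketched below; the prompt wording and pairing logic are illustrative, not the team's actual pipeline.

```python
# Hedged sketch of the synthetic-data step: a text-only LLM turns each ASR
# transcript into a question-answer pair, and the pair is then attached back
# to the original audio clip. No audio ever passes through a multimodal API.
PROMPT_TEMPLATE = (
    "Read the following Hindi transcript. Write one question about its "
    "content and a faithful answer, both in Hindi.\n\nTranscript:\n{transcript}"
)

def make_generation_prompt(transcript: str) -> str:
    """Build the text-only prompt for one transcript."""
    return PROMPT_TEMPLATE.format(transcript=transcript)

def build_example(audio_path: str, transcript: str, qa_text: str) -> dict:
    """Pair the generated QA text with its source audio for Stage 2 training."""
    return {"audio": audio_path, "transcript": transcript, "qa": qa_text}
```
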
## Capabilities

- **Hindi audio question answering**: Given audio and a question, generates contextually relevant Hindi responses
- **Cross-lingual understanding**: Translates Hindi speech to English text
- **Audio transcription**: Transcribes Hindi speech, leveraging Whisper's multilingual capabilities
- **Content summarization**: Summarizes audio content in Hindi or English

## Usage

```python
# Inference format:
# User: [INST] Based on the provided audio, answer the following question: {Q} <|audio|> [/INST]
# Assistant: {Answer}

# During the forward pass, the <|audio|> placeholder is replaced
# with the projected audio pseudo-tokens from the Whisper encoder + MLP projector.
```

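A minimal end-to-end sketch of how those pieces compose at inference time. The checkpoint name `projector.pt`, the `AudioProjector` class from the architecture sketch above, and the splicing details are assumptions; the released code may load and batch things differently.

```python
import torch
import torchaudio
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          WhisperFeatureExtractor, WhisperModel)

device = "cuda"
fe = WhisperFeatureExtractor.from_pretrained("openai/whisper-medium")
encoder = WhisperModel.from_pretrained("openai/whisper-medium").encoder.to(device).eval()
tok = AutoTokenizer.from_pretrained("sarvamai/sarvam-m")
llm = AutoModelForCausalLM.from_pretrained(
    "sarvamai/sarvam-m", torch_dtype=torch.bfloat16
).to(device)
projector = torch.load("projector.pt").to(device).eval()  # hypothetical checkpoint name

# 1) Audio -> frozen Whisper encoder -> projector -> pseudo-tokens.
wav, sr = torchaudio.load("clip.wav")
wav = torchaudio.functional.resample(wav, sr, 16_000).mean(0)
feats = fe(wav.numpy(), sampling_rate=16_000, return_tensors="pt").input_features.to(device)
with torch.no_grad():
    audio_emb = projector(encoder(feats).last_hidden_state).to(llm.dtype)

# 2) Splice the pseudo-tokens in at the <|audio|> position of the prompt.
question = "Audio mein kya kaha gaya hai?"  # "What is said in the audio?"
before = tok(f"[INST] Based on the provided audio, answer the following question: {question} ",
             return_tensors="pt").input_ids.to(device)
after = tok(" [/INST]", add_special_tokens=False, return_tensors="pt").input_ids.to(device)
embed = llm.get_input_embeddings()
inputs_embeds = torch.cat([embed(before), audio_emb, embed(after)], dim=1)

# 3) Generate the answer from the concatenated sequence.
out = llm.generate(inputs_embeds=inputs_embeds, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```
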
## Limitations

- **Hallucination**: May occasionally generate fluent but factually incorrect responses
- **Limited vocabulary**: Trained on only 3,000 samples, so Hindi vocabulary coverage is restricted
- **Length sensitivity**: Audio clips significantly longer or shorter than the training distribution may produce degraded outputs
- **Noise sensitivity**: Background noise or atypical speaking patterns can cause incoherent output

## Citation

```bibtex
@article{vocalllm2026,
  title={Vocal LLM: Cost-Efficient Joint Audio-Language Modeling via Lightweight Projector Training over Frozen Foundations},
  author={Team Vizuara},
  year={2026}
}
```

## Links

- [Project Page](https://huggingface.co/teamvizuara/Vocal-LLM)
- [GitHub](https://github.com/VizuaraAI/audio-llm)