Commit 574a799 · verified · Parent: 36087fa
mazesmazes committed

Update custom model files, README, and requirements

Files changed (6):
  1. README.md +36 -181
  2. asr_modeling.py +3 -0
  3. asr_pipeline.py +1 -0
  4. asr_processing.py +6 -0
  5. handler.py +152 -0
  6. requirements.txt +5 -0
README.md CHANGED
@@ -1,199 +1,54 @@
  ---
- library_name: transformers
- tags: []
  ---

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->



- ## Model Details

- ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
  ---
+ license: mit
+ language:
+ - en
+ datasets:
+ - speechbrain/LoquaciousSet
+ base_model:
+ - facebook/hubert-xlarge-ls960-ft
+ - HuggingFaceTB/SmolLM3-3B
+ pipeline_tag: automatic-speech-recognition
+ tags:
+ - asr
+ - speech-recognition
+ - audio
+ - smollm
+ - hubert
  ---

+ # Tiny Audio Model Card

+ This model was born from a simple idea: what if anyone could train a powerful, modern speech recognition model for the price of a few coffees? It is the result of the [Tiny Audio course](https://github.com/alexkroman/tiny-audio/blob/main/docs/course/0-course-overview.md), a free, hands-on guide to building your own ASR system from scratch.

+ ## The Story of this Model

+ This model isn't the product of a massive research lab with an unlimited budget. It's the result of a 24-hour training run on a single GPU, made possible by an efficient projector-only training approach: take a massive pretrained audio encoder (`facebook/hubert-xlarge-ls960-ft`) and a powerful language model (`HuggingFaceTB/SmolLM3-3B`), and train only a small projector between them to get a high-quality ASR model with minimal resources.
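The training code itself lives in the Tiny Audio repository rather than in this card, but the core idea is easy to sketch: freeze both pretrained models and learn only a small projection from encoder features into the language model's embedding space. The snippet below is a minimal illustration of that setup, not the project's actual implementation; the projector shape, layer layout, and function names are assumptions.

```python
import torch.nn as nn
from transformers import AutoModel, AutoModelForCausalLM

# Illustrative only: freeze the pretrained encoder and LM, train just the projector.
encoder = AutoModel.from_pretrained("facebook/hubert-xlarge-ls960-ft")
lm = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM3-3B")
for p in list(encoder.parameters()) + list(lm.parameters()):
    p.requires_grad = False

# Hypothetical projector: maps HuBERT frame features into SmolLM3's embedding space.
projector = nn.Sequential(
    nn.Linear(encoder.config.hidden_size, lm.config.hidden_size),
    nn.GELU(),
    nn.Linear(lm.config.hidden_size, lm.config.hidden_size),
)

def encode_audio(input_values):
    """Turn raw audio (batch, samples) into LM-ready embeddings (batch, frames, lm_dim)."""
    frames = encoder(input_values).last_hidden_state
    return projector(frames)
```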
+ This model is a testament to the power of open source and the incredible tools and models that are now available to everyone.

+ ## Intended Use

+ This model is for you. It's for the curious, the builders, the learners. It's for anyone who wants to understand how modern AI works by getting their hands dirty. Use it to transcribe your podcasts, your meetings, your voice memos. But more importantly, use it as a starting point: fork it, fine-tune it, break it, and make it your own.

+ ## Performance

+ This model achieves a Word Error Rate (WER) of **12.14%** on the LoquaciousSet test set. It's not perfect, but it's a solid baseline you can build on. See how it compares to other models on the [community leaderboard](https://github.com/alexkroman/tiny-audio#leaderboard).
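The 12.14% figure comes from the project's own evaluation harness. If you want a rough, unofficial spot-check, something along these lines works; note that the split name and column names below are assumptions (check the speechbrain/LoquaciousSet dataset card), and WER is sensitive to text normalization, so expect your number to differ from the official one.

```python
import jiwer
from datasets import load_dataset
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="mazesmazes/tiny-audio", trust_remote_code=True)

# Assumed schema: a "test" split whose rows carry an "audio" column (array + sampling rate)
# and a "text" column with the reference transcript.
ds = load_dataset("speechbrain/LoquaciousSet", split="test").select(range(100))

references, hypotheses = [], []
for example in ds:
    audio = example["audio"]
    out = pipe({"raw": audio["array"], "sampling_rate": audio["sampling_rate"]})
    references.append(example["text"])
    hypotheses.append(out["text"])

print(f"WER over 100 samples: {jiwer.wer(references, hypotheses):.2%}")
```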
+ ## How to Use

+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("automatic-speech-recognition", model="mazesmazes/tiny-audio", trust_remote_code=True)
+
+ result = pipe("path/to/audio.wav")
+ print(result["text"])
+ ```
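Beyond file paths, the pipeline also accepts pre-loaded audio in the standard transformers dict form, which is the same calling convention the handler's warmup code further down uses. A small sketch that reuses `pipe` from the snippet above and assumes 16 kHz mono float samples:

```python
import numpy as np

# Any float waveform works here; a random one-second signal just exercises the pipeline.
waveform = np.random.randn(16000).astype(np.float32)
result = pipe({"raw": waveform, "sampling_rate": 16000})
print(result["text"])
```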
+ ## How to Get Involved

+ This project is more than just a model; it's a community. Here's how you can get involved:

+ - **Take the course**: The best way to start is to go through the [free 6-hour course](https://github.com/alexkroman/tiny-audio/blob/main/docs/course/0-course-overview.md) and train your own model.
+ - **Share your results**: Add your model to the [leaderboard](https://github.com/alexkroman/tiny-audio#leaderboard) and share what you've learned.
+ - **Join the conversation**: Ask questions, share your ideas, and connect with other builders in the [GitHub Discussions](https://github.com/alexkroman/tiny-audio/discussions).
asr_modeling.py CHANGED
@@ -573,6 +573,9 @@ class ASRModel(PreTrainedModel):
         if audio_fill_value is None:
             audio_fill_value = fill_value

+        # At this point tensor_to_expand is guaranteed to be a Tensor
+        assert tensor_to_expand is not None
+
         # Create output tensor
         expanded = torch.full(
             (batch_size, new_seq_len),
asr_pipeline.py CHANGED
@@ -27,6 +27,7 @@ class ASRPipeline(transformers.AutomaticSpeechRecognitionPipeline):
         else:
             # Fallback to whisper-tiny tokenizer for its normalize() method only
             from transformers import WhisperTokenizer
+
             self.text_normalizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

     def __call__(self, inputs, **kwargs):
asr_processing.py CHANGED
@@ -57,6 +57,12 @@ class ASRProcessor(ProcessorMixin):
         else:
             processor_config = {}

+        # Filter out any non-serializable objects that might have been added
+        processor_config = {
+            k: v for k, v in processor_config.items()
+            if isinstance(v, (str, int, float, bool, list, dict, type(None)))
+        }
+
         # Add/update processor metadata while preserving feature extractor settings
         feature_extractor_type = self.feature_extractor.__class__.__name__
         processor_config.update(
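For context (not part of the commit): the added comprehension simply drops any config values that would break JSON serialization when the processor config is written out. A standalone illustration with made-up values:

```python
import json

config = {
    "feature_extractor_type": "Wav2Vec2FeatureExtractor",  # JSON-friendly: kept
    "sampling_rate": 16000,                                 # kept
    "post_process_fn": print,                               # a callable: dropped
}

config = {
    k: v for k, v in config.items()
    if isinstance(v, (str, int, float, bool, list, dict, type(None)))
}

print(json.dumps(config, indent=2))  # now serializes without a TypeError
```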
handler.py ADDED
@@ -0,0 +1,152 @@
+ """Custom inference handler for HuggingFace Inference Endpoints."""
+
+ from typing import Any, Dict, List, Union
+
+ import torch
+
+ try:
+     # For remote execution, imports are relative
+     from .asr_modeling import ASRModel
+     from .asr_pipeline import ASRPipeline
+ except ImportError:
+     # For local execution, imports are not relative
+     from asr_modeling import ASRModel  # type: ignore[no-redef]
+     from asr_pipeline import ASRPipeline  # type: ignore[no-redef]
+
+
+ class EndpointHandler:
+     def __init__(self, path: str = ""):
+         # Set environment variables for PyTorch/CUDA (must be before imports/operations)
+         import os
+
+         # Enable expandable segments to reduce fragmentation
+         os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")
+
+         # Enable TF32 for faster matmul on A40/A100
+         torch.backends.cuda.matmul.allow_tf32 = True
+         torch.backends.cudnn.allow_tf32 = True
+
+         # Set device and dtype
+         self.device = "cuda" if torch.cuda.is_available() else "cpu"
+         self.dtype = torch.bfloat16 if self.device == "cuda" else torch.float32
+
+         # Enable CUDA optimizations
+         if torch.cuda.is_available():
+             torch.backends.cudnn.benchmark = True
+
+         # Prepare model kwargs for pipeline
+         model_kwargs = {
+             "dtype": self.dtype,
+             "low_cpu_mem_usage": True,
+         }
+         if torch.cuda.is_available():
+             model_kwargs["attn_implementation"] = (
+                 "flash_attention_2" if self._is_flash_attn_available() else "sdpa"
+             )
+
+         # Load model (this loads the model, tokenizer, and feature extractor)
+         self.model = ASRModel.from_pretrained(path, **model_kwargs)
+
+         # Instantiate custom pipeline - it will get feature_extractor and tokenizer from model
+         self.pipe = ASRPipeline(
+             model=self.model,
+             feature_extractor=self.model.feature_extractor,
+             tokenizer=self.model.tokenizer,
+             device=self.device,
+         )
+
+         # Apply torch.compile if enabled (after model is loaded by pipeline)
+         # Enabled by default for a significant speedup (20-40%)
+         if torch.cuda.is_available() and os.getenv("ENABLE_TORCH_COMPILE", "1") == "1":
+             compile_mode = os.getenv("TORCH_COMPILE_MODE", "reduce-overhead")
+             self.model = torch.compile(self.model, mode=compile_mode)
+             # Update the pipeline with the compiled model
+             self.pipe.model = self.model
+
+         # Warmup the model
+         if torch.cuda.is_available():
+             self._warmup()
+
+     def _is_flash_attn_available(self):
+         """Check if flash attention is available."""
+         import importlib.util
+
+         return importlib.util.find_spec("flash_attn") is not None
+
+     def _warmup(self):
+         """Warmup to trigger model compilation and allocate GPU memory."""
+         try:
+             # Create dummy audio (1 second at config sample rate)
+             sample_rate = self.pipe.model.config.audio_sample_rate
+             dummy_audio = torch.randn(sample_rate, dtype=torch.float32)
+
+             # The pipeline now handles GPU optimization internally
+             with torch.inference_mode():
+                 warmup_tokens = self.pipe.model.config.inference_warmup_tokens
+                 _ = self.pipe(
+                     {"raw": dummy_audio, "sampling_rate": sample_rate}, max_new_tokens=warmup_tokens
+                 )
+
+             # Force CUDA synchronization to ensure kernels are compiled
+             if torch.cuda.is_available():
+                 torch.cuda.synchronize()
+                 # Clear cache after warmup to free memory
+                 torch.cuda.empty_cache()
+
+         except Exception as e:
+             print(f"Warmup skipped due to: {e}")
+
+     def __call__(self, data: Dict[str, Any]) -> Union[Dict[str, Any], List[Dict[str, Any]]]:
+         """Process audio transcription request.
+
+         Supports both single and batch inputs for efficient concurrent processing.
+         The endpoint infrastructure can batch multiple concurrent requests automatically.
+         """
+         inputs = data.get("inputs")
+         if inputs is None:
+             raise ValueError("Missing 'inputs' in request data")
+
+         # Get generation parameters (matching SLAM-ASR paper defaults)
+         params = data.get("parameters", {})
+         max_new_tokens = params.get("max_new_tokens", 200)  # Longer transcripts
+
+         # Beam search for better quality (5 beams for higher quality)
+         # Use num_beams=1 for faster inference at the cost of ~2-3% WER increase
+         num_beams = params.get("num_beams", 5)
+
+         do_sample = params.get("do_sample", False)
+
+         # Length penalty encourages appropriate transcript length
+         # >1.0 = prefer longer outputs, <1.0 = prefer shorter
+         # A slight positive bias helps avoid truncated transcripts
+         length_penalty = params.get("length_penalty", 1.1)
+
+         # Repetition penalty to prevent loops (1.1-1.2 is good for ASR)
+         repetition_penalty = params.get("repetition_penalty", 1.15)
+
+         # Alternative: use no_repeat_ngram_size to prevent exact n-gram repetition
+         no_repeat_ngram_size = params.get("no_repeat_ngram_size", 3)
+
+         # Early stopping for beam search: stop when all beams end
+         # "never" = generate full max_new_tokens (more accurate but slower)
+         # True = stop when all beams reach EOS (faster)
+         early_stopping = params.get("early_stopping", True)
+
+         # Diversity penalty encourages different beams (helps with rare words)
+         # 0.0 = no diversity, 0.5-1.0 = good diversity
+         default_diversity = self.pipe.model.config.inference_diversity_penalty
+         diversity_penalty = params.get("diversity_penalty", default_diversity)
+
+         # The pipeline's __call__ method handles both single and batch inputs
+         # as well as automatic chunking for long audio files
+         return self.pipe(
+             inputs,
+             max_new_tokens=max_new_tokens,
+             num_beams=num_beams,
+             do_sample=do_sample,
+             length_penalty=length_penalty,
+             repetition_penalty=repetition_penalty,
+             no_repeat_ngram_size=no_repeat_ngram_size,
+             early_stopping=early_stopping,
+             diversity_penalty=diversity_penalty,
+         )
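Not part of the commit, but for orientation: this handler is what HuggingFace Inference Endpoints instantiate and call per request. A hedged local smoke test, assuming the repository files are importable and the model weights are reachable at the given path:

```python
# Hypothetical local invocation of the handler defined above. The response
# format is whatever the custom ASRPipeline returns; like the stock
# transformers ASR pipeline it is expected to include a "text" field.
from handler import EndpointHandler

handler = EndpointHandler(path="mazesmazes/tiny-audio")
response = handler({
    "inputs": "path/to/audio.wav",
    "parameters": {"num_beams": 1, "max_new_tokens": 200},
})
print(response)
```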
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ # Core dependencies for tiny-audio model inference
+ # This file is pushed to HuggingFace for model repository
+
+ # Transformers - main library for model loading and inference
+ transformers>=4.57.0