Improve model card: Add abstract, sample usage, update paper and project page links

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +46 -2
README.md CHANGED
@@ -10,9 +10,53 @@ tags:
  - hf-asr-leaderboard
  ---
 
- <!-- Provide a quick summary of what the model is/does. -->
 
- Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details.
+ # LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation
+ 
+ [📄 Paper](https://huggingface.co/papers/2502.20583) | [💻 Code](https://github.com/efeslab/LiteASR) | [🌐 Project Page](https://efeslab.github.io/LiteASR/)
+ 
+ LiteASR is a compression scheme for automatic speech recognition (ASR) models that leverages the _low-rank_ properties of activation values. Our method can compress the encoder of OpenAI's Whisper by up to **~50%**.
+ 
+ ## Abstract
+ Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper, rely on deep encoder-decoder architectures, and their encoders are a critical bottleneck for efficient deployment due to high computational intensity. We introduce LiteASR, a low-rank compression scheme for ASR encoders that significantly reduces inference costs while maintaining transcription accuracy. Our approach leverages the strong low-rank properties observed in intermediate activations: by applying principal component analysis (PCA) with a small calibration dataset, we approximate linear transformations with a chain of low-rank matrix multiplications, and further optimize self-attention to work in the reduced dimensionality. Evaluation results show that our method can compress Whisper large-v3's encoder size by over 50%, matching Whisper medium's size with better transcription accuracy, thereby establishing a new Pareto frontier of accuracy and efficiency. The code of LiteASR is available at https://github.com/efeslab/LiteASR.
+ 
+ ## Quick Start (Sample Usage)
+ 
+ The easiest way to run our model is through its integration with the Hugging Face Transformers library.
+ We provide weights for the compressed versions of the OpenAI Whisper series [here](https://huggingface.co/efficient-speech).
+ 
+ ```python
+ import librosa
+ import torch
+ from transformers import AutoProcessor, AutoModel
+ 
+ device = "cuda:0"
+ dtype = torch.float16
+ 
+ # load the compressed Whisper model
+ model = AutoModel.from_pretrained(
+     "efficient-speech/lite-whisper-large-v3-turbo",
+     trust_remote_code=True,
+ )
+ model.to(dtype).to(device)
+ 
+ # we use the same processor as the original model
+ processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")
+ 
+ # set the path to your audio file
+ path = "path/to/audio.wav"
+ audio, _ = librosa.load(path, sr=16000)
+ 
+ input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
+ input_features = input_features.to(dtype).to(device)
+ 
+ predicted_ids = model.generate(input_features)
+ transcription = processor.batch_decode(
+     predicted_ids,
+     skip_special_tokens=True,
+ )[0]
+ 
+ print(transcription)
+ ```
 
  ## Benchmark Results
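The PCA-based compression the abstract describes can be illustrated on a single linear layer. The sketch below is not the repository's implementation: it builds synthetic calibration activations with low-rank structure, extracts a rank-`k` basis `Vk` with PCA, and replaces the full matmul `W @ x` with the chain `(W @ Vk) @ (Vk.T @ (x - mu))`. All names, shapes, and the rank choice are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
d_in, d_out, n, k = 256, 128, 1000, 16

# synthetic calibration activations with rank-k structure
X = torch.randn(n, k) @ torch.randn(k, d_in)

# weight of the linear layer we want to compress
W = torch.randn(d_out, d_in)

# PCA: top-k right singular vectors of the centered calibration data
mu = X.mean(dim=0)
_, _, Vh = torch.linalg.svd(X - mu, full_matrices=False)
Vk = Vh[:k].T                       # (d_in, k) principal directions

# precompute the low-rank chain: W x ≈ A (B (x - mu)) + W mu
A = W @ Vk                          # (d_out, k)
B = Vk.T                            # (k, d_in)
bias = W @ mu                       # constant term absorbed into a bias

x = X[0]                            # one calibration activation vector
y_full = W @ x                      # original matmul, d_out * d_in weights
y_low = A @ (B @ (x - mu)) + bias   # two small matmuls, k * (d_in + d_out) weights

rel_err = (y_full - y_low).norm() / y_full.norm()
print(f"relative error: {rel_err:.1e}")
```

With these toy shapes the factorized layer stores `k * (d_in + d_out) = 6144` weights instead of `d_out * d_in = 32768`; on real encoder activations the rank is chosen per layer to trade transcription accuracy against size, which is the tuning the paper's calibration step performs.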