Instructions for using Talha/URDU-ASR with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use Talha/URDU-ASR with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="Talha/URDU-ASR")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForCTC

processor = AutoProcessor.from_pretrained("Talha/URDU-ASR")
model = AutoModelForCTC.from_pretrained("Talha/URDU-ASR")
```

- Notebooks
- Google Colab
- Kaggle
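Wav2Vec2-style CTC models such as this one expect mono floating-point audio at a fixed sampling rate (typically 16 kHz). A minimal sketch of preparing an input array in that format; the waveform here is synthetic silence standing in for real speech, and the commented pipeline call assumes the `pipe` object loaded above:

```python
import numpy as np

SAMPLE_RATE = 16_000  # Wav2Vec2 checkpoints are typically trained on 16 kHz audio

# One second of silence as a stand-in for a real recording;
# real audio should be resampled to SAMPLE_RATE and converted to float32.
waveform = np.zeros(SAMPLE_RATE, dtype=np.float32)

# With the pipeline loaded above, transcription would look like:
# result = pipe({"raw": waveform, "sampling_rate": SAMPLE_RATE})
# print(result["text"])
```

The ASR pipeline also accepts a path to an audio file directly, in which case it handles decoding and resampling itself.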
Update tokenizer_config.json
- tokenizer_config.json +2 -2

```diff
@@ -1,7 +1,7 @@
 {
-  "bos_token":
+  "bos_token": null,
   "do_lower_case": false,
-  "eos_token":
+  "eos_token": null,
   "name_or_path": "Talha/URDU-ASR",
   "pad_token": "[PAD]",
   "processor_class": "Wav2Vec2ProcessorWithLM",
```
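After this change, `bos_token` and `eos_token` are JSON `null`, which deserializes to `None` in Python; a minimal sketch of what the updated fragment parses to (the JSON string below is an illustrative reconstruction of the file, not the full config, with the trailing comma dropped so it is valid standalone JSON):

```python
import json

# Reconstructed fragment of the updated tokenizer_config.json
config_text = """{
  "bos_token": null,
  "do_lower_case": false,
  "eos_token": null,
  "name_or_path": "Talha/URDU-ASR",
  "pad_token": "[PAD]",
  "processor_class": "Wav2Vec2ProcessorWithLM"
}"""

cfg = json.loads(config_text)
print(cfg["bos_token"])  # None: the tokenizer has no beginning-of-sequence token
print(cfg["pad_token"])  # [PAD]
```

Setting these tokens to `null` is common for CTC tokenizers, which do not use sequence-boundary tokens; only the padding token (used as the CTC blank) is required.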