pepoo20 committed
Commit e9ffc73 · 1 Parent(s): cf6562c

pepoo20/bengali_1B-Lora-LORA-colab

Files changed (4)
  1. README.md +57 -0
  2. preprocessor_config.json +10 -0
  3. pytorch_model.bin +3 -0
  4. training_args.bin +3 -0
README.md ADDED
@@ -0,0 +1,57 @@
---
license: cc-by-nc-4.0
base_model: Umong/wav2vec2-large-mms-1b-bengali
tags:
- generated_from_trainer
model-index:
- name: Umong/wav2vec2-large-mms-1b-bengali
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Umong/wav2vec2-large-mms-1b-bengali

This model is a fine-tuned version of [Umong/wav2vec2-large-mms-1b-bengali](https://huggingface.co/Umong/wav2vec2-large-mms-1b-bengali) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3414
- eval_wer: 0.6114
- eval_runtime: 393.3932
- eval_samples_per_second: 2.542
- eval_steps_per_second: 1.271
- epoch: 0.01
- step: 40

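The eval_wer figure above is a word error rate. As a rough sketch of how such a number is computed (the standard word-level Levenshtein definition; the exact evaluation code used for this run is not shown here):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = edit distance between the first i-1 reference words
    # and the first j hypothesis words (rolling-row dynamic programming)
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,             # deletion
                         cur[j - 1] + 1,          # insertion
                         prev[j - 1] + (r != h))  # substitution (0 if equal)
        prev = cur
    return prev[-1] / len(ref)

print(wer("the cat sat", "the bat sat"))  # 0.3333333333333333
```

A WER of 0.6114 therefore means roughly six word-level errors for every ten reference words.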
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 600
- training_steps: 100

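One detail worth noting in these hyperparameters: lr_scheduler_warmup_steps (600) is larger than training_steps (100), so the whole run ends inside the linear warmup ramp and the learning rate never reaches 2e-05. A sketch of the usual cosine-with-warmup shape (mirroring the common `get_cosine_schedule_with_warmup` behaviour; the exact trainer code may differ):

```python
import math

def lr_at(step: int, base_lr: float = 2e-05,
          warmup_steps: int = 600, total_steps: int = 100) -> float:
    """Cosine learning-rate schedule with linear warmup (a sketch)."""
    if step < warmup_steps:
        # linear ramp from 0 up to base_lr over warmup_steps
        return base_lr * step / warmup_steps
    # cosine decay from base_lr down to 0 over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# With these settings the final step (100) is still on the warmup ramp:
print(lr_at(100))  # 2e-05 * 100/600 ≈ 3.33e-06
```

So the peak learning rate actually seen during this run was about 3.3e-06, a sixth of the configured value.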
### Framework versions

- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
preprocessor_config.json ADDED
@@ -0,0 +1,10 @@
{
  "do_normalize": true,
  "feature_extractor_type": "Wav2Vec2FeatureExtractor",
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "processor_class": "Wav2Vec2Processor",
  "return_attention_mask": true,
  "sampling_rate": 16000
}
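With `"do_normalize": true`, Wav2Vec2-style feature extractors apply zero-mean, unit-variance normalization to each raw 16 kHz waveform before the model sees it. A minimal sketch of that step (not the exact library code, which also handles batching and padding masks):

```python
def normalize(samples: list[float]) -> list[float]:
    """Zero-mean, unit-variance normalization of one utterance's samples.

    The small epsilon in the denominator guards against division by
    zero on silent (constant) input, as in typical extractor code.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return [(s - mean) / (var + 1e-7) ** 0.5 for s in samples]
```

After this step the input to the model has mean ~0 and variance ~1 regardless of the recording's original loudness.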
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ecc3473cd26e01bcad211dabe3fa39bdfdb0ce6769059e6b38d2e92445f8e114
size 3891024525
training_args.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6085bd3712ccf8338e4fae8050e72f8fd2e7e7b815127735a4d6c6092116294b
size 3963
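The two `.bin` entries in this commit are Git LFS pointer files, not the binary payloads themselves: each pointer records only the spec version, a SHA-256 object id, and the blob size in bytes. A minimal sketch of reading one (using the training_args.bin pointer above as input):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file: one 'key value' pair per line."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")
    return {"version": fields["version"], "oid_algo": algo,
            "oid": digest, "size": int(fields["size"])}

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6085bd3712ccf8338e4fae8050e72f8fd2e7e7b815127735a4d6c6092116294b
size 3963
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 3963
```

The pytorch_model.bin pointer declares a size of 3891024525 bytes (about 3.6 GiB), so the actual weights live in LFS storage and are fetched on checkout.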