pepoo20 committed
Commit 3e2c3be · 1 Parent(s): e9ffc73

Upload model
Files changed (3)
  1. README.md +3 -51
  2. adapter_config.json +24 -0
  3. adapter_model.bin +3 -0
README.md CHANGED
@@ -1,57 +1,9 @@
  ---
- license: cc-by-nc-4.0
- base_model: Umong/wav2vec2-large-mms-1b-bengali
- tags:
- - generated_from_trainer
- model-index:
- - name: Umong/wav2vec2-large-mms-1b-bengali
-   results: []
+ library_name: peft
  ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # Umong/wav2vec2-large-mms-1b-bengali
-
- This model is a fine-tuned version of [Umong/wav2vec2-large-mms-1b-bengali](https://huggingface.co/Umong/wav2vec2-large-mms-1b-bengali) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - eval_loss: 1.3414
- - eval_wer: 0.6114
- - eval_runtime: 393.3932
- - eval_samples_per_second: 2.542
- - eval_steps_per_second: 1.271
- - epoch: 0.01
- - step: 40
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
  ## Training procedure

- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 2
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 600
- - training_steps: 100
-
  ### Framework versions

- - Transformers 4.33.0
- - Pytorch 2.0.0
- - Datasets 2.1.0
- - Tokenizers 0.13.3
+
+ - PEFT 0.5.0
 
adapter_config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "auto_mapping": {
+     "base_model_class": "Wav2Vec2ForCTC",
+     "parent_library": "transformers.models.wav2vec2.modeling_wav2vec2"
+   },
+   "base_model_name_or_path": "Umong/wav2vec2-large-mms-1b-bengali",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "lora_alpha": 64,
+   "lora_dropout": 0.05,
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 32,
+   "revision": null,
+   "target_modules": [
+     "q_proj",
+     "v_proj"
+   ],
+   "task_type": null
+ }
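The adapter's on-disk size can be sanity-checked against this config. A minimal sketch, assuming the MMS-1B base model uses a model dimension of 1280 and 48 encoder layers with square `q_proj`/`v_proj` projections (these base-model figures are assumptions, not stated in the diff): each targeted projection gains a pair of low-rank matrices with `r × (d_in + d_out)` parameters.

```python
# Sanity-check the parameter count implied by adapter_config.json.
# ASSUMPTIONS (not in the diff): wav2vec2 MMS-1B has hidden_size=1280,
# 48 encoder layers, and q_proj/v_proj are square 1280x1280 projections.

r = 32         # LoRA rank ("r" in adapter_config.json)
hidden = 1280  # assumed model dimension of the MMS-1B base model
layers = 48    # assumed number of encoder layers
targets = 2    # "target_modules": q_proj and v_proj

# Each adapted projection gains an A (r x hidden) and a B (hidden x r) matrix.
params_per_module = r * hidden + hidden * r
total_params = params_per_module * targets * layers

print(total_params)      # LoRA parameters added by the adapter
print(total_params * 4)  # bytes at float32
```

Under these assumptions the total comes to 7,864,320 parameters, about 31.4 MB at float32, which is consistent with the 31,528,333-byte `adapter_model.bin` below (the small difference is serialization overhead).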
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:927a83bb20374c5a09705fc3445f9ac6a684926cdb530ce0adee4de4fdfaa186
+ size 31528333
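Taken together, these files make the repository a loadable PEFT adapter rather than a full checkpoint. A hypothetical loading sketch (the function name is illustrative, and the adapter repo id must be supplied by the reader; uses the `transformers` and `peft` APIs as of PEFT 0.5.0):

```python
# Hypothetical sketch of applying this LoRA adapter to its base model.
# The adapter repo id is a placeholder argument; pass this repository's id.

def load_adapter(adapter_repo_id: str):
    """Return the base Wav2Vec2ForCTC model with the LoRA adapter applied."""
    # Imports live inside the function so the sketch can be defined even
    # where peft/transformers are not installed.
    from transformers import Wav2Vec2ForCTC
    from peft import PeftModel

    # "base_model_name_or_path" from adapter_config.json above
    base = Wav2Vec2ForCTC.from_pretrained("Umong/wav2vec2-large-mms-1b-bengali")
    # Fetches adapter_config.json and adapter_model.bin from the adapter repo
    return PeftModel.from_pretrained(base, adapter_repo_id)
```

Because `inference_mode` is `true` in the config, the loaded adapter weights are frozen by default and the model is ready for evaluation rather than further training.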