jealk committed
Commit f8c82bb · verified · 1 Parent(s): 7176ee5

Model save

Files changed (4):
  1. README.md +50 -0
  2. config.json +1 -1
  3. generation_config.json +7 -0
  4. model.safetensors +2 -2
README.md ADDED
@@ -0,0 +1,50 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: princeton-nlp/Sheared-LLaMA-1.3B
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: llm2vec-da-mntp-sheared
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jealk/LLM2Vec/runs/2fgmtevb)
+ # llm2vec-da-mntp-sheared
+
+ This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on an unknown dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 3.0
+
+ ### Framework versions
+
+ - Transformers 4.47.1
+ - Pytorch 2.2.1+cu121
+ - Datasets 2.19.2
+ - Tokenizers 0.21.1
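
The hyperparameter list in the README above maps directly onto `transformers.TrainingArguments`. A minimal sketch of what the run configuration would look like, with assumptions flagged: `output_dir` and `report_to` are not stated in the card, and `train_batch_size` is read here as a per-device batch size.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run from the model card above;
# output_dir and report_to are assumptions, not taken from the card.
training_args = TrainingArguments(
    output_dir="llm2vec-da-mntp-sheared",  # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    report_to="wandb",  # assumed from the W&B badge in the card
)
```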
config.json CHANGED
@@ -1,7 +1,7 @@
  {
    "_name_or_path": "princeton-nlp/Sheared-LLaMA-1.3B",
    "architectures": [
-     "LlamaBiModel"
+     "LlamaBiForMNTP"
    ],
    "attention_bias": false,
    "attention_dropout": 0.0,
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "pad_token_id": 0,
+   "transformers_version": "4.47.1"
+ }
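
The new `generation_config.json` records only the special-token ids (bos=1, eos=2, pad=0) and, via `_from_model_config: true`, the fact that it was derived from `config.json` rather than hand-tuned. If you need to read it programmatically, a sketch using the standard transformers API (same assumed repo id as above):

```python
from transformers import GenerationConfig

# Load only the generation settings; the repo id is the same assumption as above.
gen_cfg = GenerationConfig.from_pretrained("jealk/llm2vec-da-mntp-sheared")
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id, gen_cfg.pad_token_id)  # 1 2 0
```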
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6566b365f4530b0f03e8f52c33238182c216fb0f40141a530560fd94bdab98ae
- size 2619805384
+ oid sha256:c1559a4d2219949d874d394647a639c8a1f9978df3eee332b5d35e4a1b15222f
+ size 2750880824
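
`model.safetensors` is stored via Git LFS, so the diff touches only the pointer: the blob's sha256 oid and its size in bytes. The roughly 131 MB growth (2,750,880,824 − 2,619,805,384 bytes) is plausibly the newly saved MNTP head, since an fp16 LM head for this model (32000 vocab × 2048 hidden × 2 bytes ≈ 131 MB) matches the delta and is consistent with the `LlamaBiForMNTP` change above. A short sketch to verify a downloaded copy against the new pointer:

```python
import hashlib
import os

# Check a locally downloaded model.safetensors against the LFS pointer above.
path = "model.safetensors"  # adjust to wherever the file was downloaded

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert sha.hexdigest() == (
    "c1559a4d2219949d874d394647a639c8a1f9978df3eee332b5d35e4a1b15222f"
)
assert os.path.getsize(path) == 2750880824
print("pointer matches")
```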