pharrow committed
Commit b4847f0 · verified · 1 Parent(s): 6e2901a

Update README.md

Files changed (1)
  1. README.md +46 -40
README.md CHANGED
@@ -1,40 +1,46 @@
- ---
- license: apache-2.0
- tags:
- - tinyllama
- - causal-lm
- - merged-lora
- base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- merged_from:
- - snaplora-adapted
- ---
-
- # TinyLlama (Merged LoRA)
-
- This repository contains a TinyLlama model with LoRA weights merged into the base.
-
- - **Base model:** `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- - **Adapter:** `snaplora-adapted`
- - **Merge date:** 2025-09-14 23:12:26Z UTC
-
- ## Usage
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
- import torch
-
- model_id = "<this-repo-id>"
- tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
- model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
-
- prompt = "Write a haiku about tiny models."
- inputs = tok(prompt, return_tensors="pt").to(model.device)
- with torch.no_grad():
-     out = model.generate(**inputs, max_new_tokens=64)
- print(tok.decode(out[0], skip_special_tokens=True))
- ```
-
- ## Notes
-
- - The adapter was merged into the base weights using `peft.PeftModel.merge_and_unload()`.
- - Files are saved with `safetensors` when possible.
+ ---
+ datasets:
+ - Tesslate/UIGEN-T2
+ base_model:
+ - TinyLlama/TinyLlama-1.1B-Chat-v1.0
+ ---
+ ---
+ license: apache-2.0
+ tags:
+ - tinyllama
+ - causal-lm
+ - merged-lora
+ base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
+ merged_from:
+ - snaplora-adapted
+ ---
+
+ # TinyLlama (Merged LoRA)
+
+ This repository contains a TinyLlama model with LoRA weights merged into the base.
+
+ - **Base model:** `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
+ - **Adapter:** `snaplora-adapted`
+ - **Merge date:** 2025-09-14 23:12:26Z UTC
+
+ ## Usage
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ model_id = "<this-repo-id>"
+ tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
+
+ prompt = "Write a haiku about tiny models."
+ inputs = tok(prompt, return_tensors="pt").to(model.device)
+ with torch.no_grad():
+     out = model.generate(**inputs, max_new_tokens=64)
+ print(tok.decode(out[0], skip_special_tokens=True))
+ ```
+
+ ## Notes
+
+ - The adapter was merged into the base weights using `peft.PeftModel.merge_and_unload()`.
+ - Files are saved with `safetensors` when possible.
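
For context on the Notes above, here is a minimal sketch of how such a merge-and-export is typically performed with PEFT. The base model ID and adapter name (`snaplora-adapted`) come from the card; the output directory `tinyllama-merged` is a hypothetical placeholder, not a path from this repository.

```python
# Minimal merge sketch. Assumptions: the "snaplora-adapted" adapter is
# available locally, and "tinyllama-merged" is a hypothetical output directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_path = "snaplora-adapted"

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_path)

# Fold the LoRA deltas into the base weights and drop the PEFT wrappers.
merged = model.merge_and_unload()

# safe_serialization=True writes safetensors files, matching the card's note.
merged.save_pretrained("tinyllama-merged", safe_serialization=True)
AutoTokenizer.from_pretrained(base_id).save_pretrained("tinyllama-merged")
```

Because `merge_and_unload()` returns a plain `transformers` model with the adapter folded in, the merged repository loads with `AutoModelForCausalLM` alone, which is why the Usage snippet above needs no `peft` import.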