modeltrainer1 committed · Commit c95e5fe · verified · 1 Parent(s): dd67d25

Update README.md

Files changed (1): README.md +20 -3
@@ -1,3 +1,20 @@
- ---
- license: mit
- ---
+ # Saba-Ethiopia
+
+ A fine-tuned LLaMA-3 4-bit model trained for [specific purpose].
+
+ ## Model Details
+ - **Base Model**: LLaMA-3 3B
+ - **Quantization**: 4-bit
+ - **Use Case**: [Describe what the model is fine-tuned for]
+
+ ## Usage
+ To use this model in your code:
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the model and tokenizer from the Hub
+ model = AutoModelForCausalLM.from_pretrained("modeltrainer1/Saba-Ethiopia", torch_dtype="auto")
+ tokenizer = AutoTokenizer.from_pretrained("modeltrainer1/Saba-Ethiopia")
+
+ # Tokenize a prompt and generate a completion
+ inputs = tokenizer("Your input text here", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
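Since the Model Details list the weights as 4-bit, the quantized load can also be made explicit rather than relying on `torch_dtype="auto"`. A minimal sketch, assuming the quantization uses the bitsandbytes backend; `BitsAndBytesConfig`, the `nf4` quant type, and `device_map="auto"` are assumptions on my part, not confirmed by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed 4-bit configuration (bitsandbytes backend); adjust if the repo
# already ships pre-quantized weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weights, per Model Details
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
    bnb_4bit_quant_type="nf4",             # common default for 4-bit fine-tunes
)

model = AutoModelForCausalLM.from_pretrained(
    "modeltrainer1/Saba-Ethiopia",
    quantization_config=bnb_config,
    device_map="auto",  # place layers on available GPU(s)/CPU
)
tokenizer = AutoTokenizer.from_pretrained("modeltrainer1/Saba-Ethiopia")
```

This keeps memory use close to the quantized checkpoint size instead of dequantizing to full precision on load.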