# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model was tested using 1% of the dataset.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

Directly quantized 4-bit model with bitsandbytes.

Unsloth can finetune LLMs with QLoRA 2.2x faster and with 62% less memory!

We have a Google Colab Tesla T4 notebook for TinyLlama, with a 4096 max sequence length via RoPE scaling, here: https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing
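The "directly quantized 4-bit" note refers to bitsandbytes-style 4-bit weight storage. As a toy, pure-Python illustration of the underlying idea — absmax scaling of weights into a signed 4-bit integer range — note that real bitsandbytes NF4 uses a non-uniform codebook and block-wise scales, so this is only a sketch:

```python
# Toy 4-bit absmax quantization: scale floats into [-7, 7] integers.
# (Real bitsandbytes NF4 is non-uniform and block-wise; this is a sketch.)
def quantize_4bit(weights):
    """Map floats to signed 4-bit integers plus one scale factor."""
    absmax = max(abs(w) for w in weights) or 1.0
    scale = absmax / 7
    return [round(w / scale) for w in weights], scale

def dequantize_4bit(q, scale):
    """Recover approximate floats from the 4-bit integers."""
    return [v * scale for v in q]

w = [0.12, -0.9, 0.44, 0.02]
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
print(q)  # [1, -7, 3, 0]
```

Each weight is stored in 4 bits instead of 16 or 32, which is where the memory savings come from; the reconstruction error per weight is bounded by half the scale.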

**Source:** [unsloth/tinyllama-bnb-4bit]

**Dataset:** [yahma/alpaca-cleaned]
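Since the card mentions testing on 1% of the dataset, here is a minimal sketch of drawing a reproducible 1% subsample of alpaca-style records. The field names mirror the yahma/alpaca-cleaned schema, and the prompt template is the standard Alpaca one — an assumption, as the card does not include it:

```python
import random

# Standard Alpaca prompt template (assumed; not part of this model card).
ALPACA_PROMPT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def one_percent_sample(records, seed=0):
    """Draw a reproducible 1% subsample (at least one record)."""
    k = max(1, len(records) // 100)
    return random.Random(seed).sample(records, k)

def format_record(rec):
    """Render one record (yahma/alpaca-cleaned schema) into a prompt."""
    return ALPACA_PROMPT.format(
        instruction=rec["instruction"],
        input=rec.get("input", ""),
        output=rec["output"],
    )

# Hypothetical records standing in for the real dataset.
records = [
    {"instruction": f"Task {i}", "input": "", "output": f"Answer {i}"}
    for i in range(300)
]
subset = [format_record(r) for r in one_percent_sample(records)]
print(len(subset))  # 3 = 1% of 300
```

A fixed seed keeps the subsample reproducible across runs, so a quick 1% evaluation can be repeated on the same records.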