Update README.md

README.md CHANGED

@@ -22,7 +22,7 @@ tags:
 - unsloth
 - gemma
 - trl
-base_model: google/gemma-
+base_model: google/gemma-2b
 pipeline_tag: text-generation
 ---

@@ -32,7 +32,7 @@ pipeline_tag: text-generation
 
 - **Developed by:** pmking27
 - **License:** apache-2.0
-- **Finetuned from model :** google/gemma-
+- **Finetuned from model :** google/gemma-2b
 
 This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 

@@ -46,10 +46,10 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 device = 'cuda'
 
 # Loading the tokenizer for the model
-tokenizer = AutoTokenizer.from_pretrained("pmking27/PrathameshLLM-
+tokenizer = AutoTokenizer.from_pretrained("pmking27/PrathameshLLM-2B")
 
 # Loading the pre-trained model
-model = AutoModelForCausalLM.from_pretrained("pmking27/PrathameshLLM-
+model = AutoModelForCausalLM.from_pretrained("pmking27/PrathameshLLM-2B")
 
 # Defining the Alpaca prompt template
 alpaca_prompt = """
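The last hunk stops just as the README defines its Alpaca prompt template, so the diff does not show the template's text. As a minimal sketch of how such a template is typically filled in before being passed to the tokenizer, the block below uses the standard Alpaca format — this exact string is an assumption, not the repository's verbatim `alpaca_prompt`:

```python
# Assumption: a standard Alpaca-style template; the README's actual
# alpaca_prompt string is truncated in the diff and may differ.
alpaca_prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{}\n\n"
    "### Input:\n{}\n\n"
    "### Response:\n{}"
)

# Fill the slots; the response slot is left empty so the model completes it.
prompt = alpaca_prompt.format(
    "Answer the question briefly.",   # instruction
    "What is the capital of India?",  # input
    "",                               # response (generated by the model)
)
print(prompt)
```

The formatted `prompt` string would then be tokenized (e.g. `tokenizer(prompt, return_tensors="pt").to(device)`) and passed to `model.generate`, per the loading code shown in the diff above.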