# Novus-7B-Instruct-SEO

Novus-7B-Instruct-SEO is fine-tuned from Mistral-7B-v0.2 using supervised fine-tuning (SFT).
Our SEO model was trained on a custom hand-curated dataset that contains keywords as inputs and SEO-related blog posts as outputs. Using foundational models, our dataset was further enhanced by augmenting the data.
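To make the keyword-to-blog-post pairing concrete, here is a hypothetical record in that shape. The field names and values are invented for illustration; the actual dataset is not public:

```python
# Hypothetical example of the dataset schema described above:
# keywords as the input, an SEO-oriented blog post as the output.
example_record = {
    "input": "trail running shoes, beginner hiking gear",
    "output": "## Choosing Trail Running Shoes for Beginner Hikers\n...",
}

# Each record maps one keyword string to one long-form blog post.
print(sorted(example_record.keys()))
```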
To use the model, pass `trust_remote_code=True` when loading it, for example:
## Training
Novus-7B-Instruct-SEO was trained for 3 epochs using QLoRA with a LoRA rank of 16 on a single H100 GPU.
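As a rough sketch, the stated setup could correspond to a PEFT adapter configuration like the one below. Only the rank (16) comes from this card; every other value (alpha, dropout, target modules) is an assumption, since the training script is not published:

```python
from peft import LoraConfig

# Only r=16 is stated in this model card; all other values are assumptions.
lora_config = LoraConfig(
    r=16,                  # LoRA rank reported for Novus-7B-Instruct-SEO
    lora_alpha=32,         # assumed scaling factor
    lora_dropout=0.05,     # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
```

With QLoRA, this adapter config would be applied on top of a 4-bit-quantized base model, which is what makes single-GPU training of a 7B model practical.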
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "NovusResearch/Novus-7B-Instruct-SEO", trust_remote_code=True
)
```