Adventure LoRAs
Collection
LoRAs trained on text adventure data, quality variable • 9 items
How to use ToastyPigeon/SpringDragon-NeMo-Instruct-QLoRA-ep1 with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")
model = PeftModel.from_pretrained(base_model, "ToastyPigeon/SpringDragon-NeMo-Instruct-QLoRA-ep1")

This model is a fine-tuned version of mistralai/Mistral-Nemo-Instruct-2407 on the SpringDragon dataset.
The model uses a completion (plain-text) format; user instructions are prefixed with >.
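As a rough illustration of that format, a prompt can be built by appending each user instruction to the running transcript with a ">" prefix. The helper below is a hypothetical sketch (the exact spacing and turn separators the dataset uses are assumptions, not documented here):

```python
def format_turn(history: str, instruction: str) -> str:
    """Append a user instruction (prefixed with '>') to the transcript.

    Assumes a blank line between the story text and the next instruction;
    the actual SpringDragon formatting may differ.
    """
    return history.rstrip("\n") + "\n\n> " + instruction.strip() + "\n"

prompt = format_turn("You stand at the mouth of a dark cave.", "enter the cave")
print(prompt)
```

The resulting string can then be tokenized and passed to the model for a plain completion (no chat template), since this is a completion-format LoRA.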
The following hyperparameters were used during training:
Base model: mistralai/Mistral-Nemo-Base-2407