Mistral-7B finetuned on a dataset of BTS fanfiction.

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jspr/bts_mistral_7b")
model = AutoModelForCausalLM.from_pretrained("jspr/bts_mistral_7b")
```
This model uses the Alpaca instruction format:

```json
{"instruction": "An interaction between a user providing instructions, and an imaginative assistant providing responses.", "input": "...", "output": "..."}
```
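A prompt for the model can be assembled from those fields. Below is a minimal sketch using the common Alpaca prompt template; the exact section headers (`### Instruction:`, `### Input:`, `### Response:`) and the helper function are assumptions, so verify them against the finetuning code if available:

```python
# Assemble an Alpaca-style prompt from the dataset's instruction/input fields.
# The template below follows the widely used Alpaca convention; it is an
# assumption about this model's training format, not taken from the model card.
ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, user_input: str) -> str:
    """Fill the Alpaca template; the model completes text after '### Response:'."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=user_input)

prompt = build_prompt(
    "An interaction between a user providing instructions, and an "
    "imaginative assistant providing responses.",
    "Write a short scene set backstage before a concert.",
)
print(prompt)
```

The resulting string is what you would pass to the tokenizer or pipeline; the model's output is the text generated after the final `### Response:` marker.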
Note: this model uses linear RoPE scaling with a scaling factor of 4.0.
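In transformers terms, that corresponds to a `rope_scaling` entry in the model's `config.json`. A sketch of the relevant fragment, assuming the standard `rope_scaling` schema used by Llama-family configs:

```json
{
  "rope_scaling": {
    "type": "linear",
    "factor": 4.0
  }
}
```

Linear scaling divides the rotary position indices by the factor, which is the usual way to extend a model's effective context window beyond its pretraining length.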
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="jspr/bts_mistral_7b")
```