# Ministral-8B STEM Energy LoRA
A LoRA adapter for Ministral-8B, fine-tuned on STEM energy tasks.
## Model Details
- Base model: mistralai/Ministral-8B-Instruct-2410
- Dataset: EnergyAI/stem_energy
- Training method: LoRA (Low-Rank Adaptation)
## LoRA Configuration
- r (rank): 64
- alpha: 128
- dropout: 0.05
- Checkpoint: training/sft/logs/stem-min8b
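With this configuration, each adapted weight matrix `W` receives a low-rank update `ΔW = (alpha / r) · B · A`, where `A` is `r × d_in` and `B` is `d_out × r`, so only `r · (d_in + d_out)` extra parameters are trained per layer. A back-of-the-envelope sketch (the `4096 × 4096` projection size is an illustrative assumption, not read from the model config):

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one d_out x d_in linear layer:
    A is (r x d_in) and B is (d_out x r)."""
    return r * d_in + d_out * r

r, alpha = 64, 128       # values from the configuration above
scaling = alpha / r      # LoRA scales the low-rank update by alpha / r
per_layer = lora_params(4096, 4096, r)  # assumed projection size

print(scaling)    # 2.0
print(per_layer)  # 524288 extra trainable params for one projection
```

At rank 64 this is roughly half a million parameters per adapted projection, a small fraction of the 8B base model.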
## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Ministral-8B-Instruct-2410")
model = PeftModel.from_pretrained(base_model, "EnergyAI/stem-energy-ministral8")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Ministral-8B-Instruct-2410")
```