How to use with Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Puxis97/Mixtral-8x7B-Python-Coder-CodeAlpaca to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Puxis97/Mixtral-8x7B-Python-Coder-CodeAlpaca to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Puxis97/Mixtral-8x7B-Python-Coder-CodeAlpaca to start chatting
Load model with FastModel
pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
    model_name="Puxis97/Mixtral-8x7B-Python-Coder-CodeAlpaca",
    max_seq_length=2048,
)
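Once the model and tokenizer are loaded, generation follows the standard `transformers` `generate` API. A minimal sketch, assuming the usual Mixtral-Instruct `[INST] ... [/INST]` prompt format; the `build_prompt` helper is an illustration, not part of the unsloth API:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Mixtral-Instruct chat format."""
    return f"[INST] {instruction} [/INST]"

# Example instruction in the CodeAlpaca style this model was tuned on
prompt = build_prompt("Write a Python function that reverses a string.")

# Generation sketch (requires the model/tokenizer loaded as shown above):
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# outputs = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Alternatively, `tokenizer.apply_chat_template` can build the prompt from a message list if the tokenizer ships a chat template.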
Puxis97/Mixtral-8x7B-Python-Coder-CodeAlpaca 🐍

This model is a Mixtral 8x7B Instruct model fine-tuned using QLoRA on the CodeAlpaca 20K dataset to specialize in Python code instruction following and generation.

  • Developed by: Puxis97
  • License: apache-2.0
  • Finetuned from model: mistralai/Mixtral-8x7B-Instruct-v0.1

Training Details

This model was fine-tuned for high efficiency using Unsloth's QLoRA optimizations and the Hugging Face TRL library, yielding an instruction-following code-generation model that can be trained and run on consumer GPUs.

| Setting | Value |
| --- | --- |
| Base Model | mistralai/Mixtral-8x7B-Instruct-v0.1 |
| Dataset | HuggingFaceH4/CodeAlpaca_20K |
| Method | QLoRA (4-bit quantization) |
| Task | Code Instruction Following / Python Coding |
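To make the training setup above concrete, here is a sketch of the data-preparation step such a QLoRA run typically needs: turning one CodeAlpaca-style record into a Mixtral-Instruct training string. The field names (`prompt`, `completion`) and the helper are assumptions for illustration, not the author's exact script; check the dataset's actual schema before use.

```python
def to_training_text(example: dict) -> str:
    """Format one CodeAlpaca-style record into a Mixtral-Instruct
    training string. Field names are assumed; adjust to the dataset."""
    return f"[INST] {example['prompt']} [/INST] {example['completion']}"

# Hypothetical record in the assumed schema
record = {
    "prompt": "Write a Python one-liner that squares every number in a list.",
    "completion": "squares = [n * n for n in nums]",
}
text = to_training_text(record)

# With unsloth + TRL, a function like this is typically passed to the
# trainer (e.g. trl.SFTTrainer's formatting_func). The QLoRA run itself
# (4-bit base weights plus LoRA adapters) needs a GPU and is not shown.
```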

