Instructions to use devshaheen/unsloth-finetuning-4bit-mistral_imdb with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use devshaheen/unsloth-finetuning-4bit-mistral_imdb with Transformers:
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("devshaheen/unsloth-finetuning-4bit-mistral_imdb", dtype="auto")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Unsloth Studio
How to use devshaheen/unsloth-finetuning-4bit-mistral_imdb with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
# Install Unsloth Studio
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for devshaheen/unsloth-finetuning-4bit-mistral_imdb to start chatting
Install Unsloth Studio (Windows)
# Install Unsloth Studio
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for devshaheen/unsloth-finetuning-4bit-mistral_imdb to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for devshaheen/unsloth-finetuning-4bit-mistral_imdb to start chatting
Load model with FastModel
# First: pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="devshaheen/unsloth-finetuning-4bit-mistral_imdb",
    max_seq_length=2048,
)
Uploaded Model
- Developed by: Shaheen Nabi
- License: Apache-2.0
- Finetuned from model: unsloth/mistral-7b-bnb-4bit
- Model Type: Large Language Model (LLM)
- Training Framework: Hugging Face Transformers and the TRL (Transformer Reinforcement Learning) library
- Fine-Tuning Dataset: Stanford IMDb (text classification task)
Overview
This model is a fine-tuned version of unsloth/mistral-7b-bnb-4bit, a 7-billion-parameter model based on the Mistral architecture. It was fine-tuned to improve performance on natural language understanding tasks, specifically for text classification using the Stanford IMDb dataset.
The fine-tuning process used the Unsloth framework, which roughly halves training time (about 2x faster than a standard setup), together with Hugging Face's TRL (Transformer Reinforcement Learning) library to adapt the model efficiently.
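The actual training script is not included with this card. For orientation, a typical Unsloth + TRL supervised fine-tuning setup on IMDb looks roughly like the sketch below (this follows the classic SFTTrainer API; newer TRL releases move these arguments into SFTConfig). All hyperparameter values here are assumptions, not the ones used for this model.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model named in this card
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
# LoRA adapters must be attached before training a 4-bit model;
# see the configuration sketch under Training Details below.

# IMDb movie reviews; the raw review lives in the "text" column
dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column to train on
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # assumed
        gradient_accumulation_steps=4,  # assumed
        learning_rate=2e-4,             # assumed
        max_steps=60,                   # assumed
        output_dir="outputs",
    ),
)
trainer.train()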
Training Details
- Base Model: unsloth/mistral-7b-bnb-4bit (7B parameters, 4-bit quantized weights for memory efficiency)
- Training Speed: Trained about 2x faster with Unsloth, reducing training time and resource usage.
- Optimization Techniques: Low-rank adaptation (LoRA), gradient checkpointing, and 4-bit quantization were employed to reduce memory and computational cost while maintaining model performance (a configuration sketch follows below).
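The adapter configuration used for this model is not documented. As a minimal sketch of how Unsloth typically combines these techniques: the rank, alpha, and target modules below are assumed values, not the ones used here.
from unsloth import FastLanguageModel

# Attach trainable LoRA adapters to the frozen 4-bit base model;
# only the small adapter matrices are updated during training.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,              # LoRA rank (assumed)
    lora_alpha=16,     # LoRA scaling factor (assumed)
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",  # recompute activations to save memory
    random_state=3407,
)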
Intended Use
This model is designed for tasks such as:
- Sentiment analysis
- Text classification
- Fine-grained NLP tasks
It is optimized for deployment in resource-constrained environments due to the quantization of the base model and fine-tuning techniques used.
Model Performance
- Primary Metric: Accuracy on text classification tasks (Stanford IMDb dataset)
- Fine-Tuning Results: The fine-tuned model shows improved accuracy on IMDb sentiment classification relative to the base model; exact figures are not reported here (a measurement sketch follows below).
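Since no numbers are published, accuracy would need to be measured directly. A minimal evaluation sketch is below; it assumes the model was trained to emit a textual "positive"/"negative" label, which is an assumption about the fine-tuning format, and the prompt template is hypothetical.
from datasets import load_dataset
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="devshaheen/unsloth-finetuning-4bit-mistral_imdb",
    max_seq_length=512,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

# Small IMDb test sample; labels are 0 = negative, 1 = positive
test_set = load_dataset("imdb", split="test").shuffle(seed=0).select(range(100))

correct = 0
for example in test_set:
    prompt = f"Review: {example['text']}\nSentiment:"  # hypothetical prompt format
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True,
                       max_length=480).to(model.device)
    output = model.generate(**inputs, max_new_tokens=4)
    completion = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ).lower()
    prediction = 1 if "positive" in completion else 0
    correct += int(prediction == example["label"])

print(f"Accuracy on the sample: {correct / len(test_set):.2%}")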
Usage
To use the model, you can load it using the FastLanguageModel class as follows:
from unsloth import FastLanguageModel
# Load the fine-tuned model and tokenizer
model_name = "shaheennabi/your-finetuned-mistral-7b-imdb"
max_seq_length = 512 # Set according to your requirements
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=model_name,
max_seq_length=max_seq_length,
dtype=None,
load_in_4bit=True
)
# Example of using the model for inference
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
input_text = "This movie was fantastic!"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
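For classification-style use, an alternative to free-form generation is to compare the model's next-token scores for the two label words. The label strings below are assumptions about the fine-tuning format, not documented values.
import torch

# Reuse `inputs` from above; ideally the prompt should end where the
# label is expected (e.g. "...\nSentiment:").
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

# Compare the logits of each label's first token
pos_id = tokenizer(" positive", add_special_tokens=False).input_ids[0]
neg_id = tokenizer(" negative", add_special_tokens=False).input_ids[0]
print("positive" if next_token_logits[pos_id] > next_token_logits[neg_id] else "negative")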