Instructions for using nosadaniel/mistral-7b-instruct-tuned with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- PEFT
How to use nosadaniel/mistral-7b-instruct-tuned with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b-instruct-v0.3-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "nosadaniel/mistral-7b-instruct-tuned")
```
- Transformers
How to use nosadaniel/mistral-7b-instruct-tuned with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nosadaniel/mistral-7b-instruct-tuned")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("nosadaniel/mistral-7b-instruct-tuned", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use nosadaniel/mistral-7b-instruct-tuned with vLLM:
Install from pip and serve the model

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nosadaniel/mistral-7b-instruct-tuned"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nosadaniel/mistral-7b-instruct-tuned",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/nosadaniel/mistral-7b-instruct-tuned
```
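The vLLM and SGLang servers in this document both expose an OpenAI-compatible API, so the curl calls shown here can also be issued from Python. A minimal stdlib-only sketch; the helper name `build_chat_request` is illustrative, and actually sending the request assumes a server is already running on localhost:8000 as above:

```python
import json
import urllib.request

def build_chat_request(base_url, model, user_content):
    """Build (but do not send) an OpenAI-compatible /v1/chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "http://localhost:8000",
    "nosadaniel/mistral-7b-instruct-tuned",
    "What is the capital of France?",
)

# To actually send it (requires the server from above to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```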
- SGLang
How to use nosadaniel/mistral-7b-instruct-tuned with SGLang:
Install from pip and serve the model

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "nosadaniel/mistral-7b-instruct-tuned" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nosadaniel/mistral-7b-instruct-tuned",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "nosadaniel/mistral-7b-instruct-tuned" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nosadaniel/mistral-7b-instruct-tuned",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- Unsloth Studio
How to use nosadaniel/mistral-7b-instruct-tuned with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nosadaniel/mistral-7b-instruct-tuned to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nosadaniel/mistral-7b-instruct-tuned to start chatting
```
Use Hugging Face Spaces for Unsloth

```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for nosadaniel/mistral-7b-instruct-tuned to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="nosadaniel/mistral-7b-instruct-tuned",
    max_seq_length=2048,
)
```
- Docker Model Runner
How to use nosadaniel/mistral-7b-instruct-tuned with Docker Model Runner:
```shell
docker model run hf.co/nosadaniel/mistral-7b-instruct-tuned
```
Model Card for mistral-7b-instruct-sft
Abstract
This repository hosts a fine‑tuned Mistral 7B‑Instruct model, adapted with parameter‑efficient LoRA via the Unsloth framework. The model is fine‑tuned for email‑security tasks on a curated phishing‑email dataset and reaches 94.9 % accuracy, 93.9 % precision, and 96.1 % recall on the evaluation set described below.
Model Details
Model Description
- Developed by: Montimage
- Model type: Large Language Model (LLM)
- Language(s): English
- License: Apache‑2.0
- Finetuned from model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
How to Get Started with the Model
You can download the model from its Hugging Face repository.
Locally using Ollama
- Install Ollama – https://ollama.com
- Pull the base model:
```shell
ollama pull unsloth/mistral-7b-instruct-v0.3-bnb-4bit
```
- Create a merged‑model manifest (a plain text file, no extension) with the following content and place it in a folder of your choice:

```
FROM mistral-7b-instruct-v0.3-bnb-4bit
ADAPTER /path/to/your/downloaded/adapter
```
- Create the merged model from the manifest:

```shell
ollama create merged-model -f ./merged-model
```
- Run the merged model:

```shell
ollama run merged-model
```
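Once `ollama run merged-model` works interactively, the same model can be queried programmatically through Ollama's local REST API (by default at http://localhost:11434, endpoint `POST /api/chat`). A sketch that only constructs the request body; the example email text is a placeholder:

```python
import json

def build_ollama_chat_payload(model, user_content):
    """Build the JSON body for Ollama's POST /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
        "stream": False,  # return a single JSON response instead of a stream
    }

payload = build_ollama_chat_payload(
    "merged-model",
    "Is the following email a phishing attempt? <email text here>",
)
body = json.dumps(payload)

# Send it with, e.g.:
# curl http://localhost:11434/api/chat -d "$body"
```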
Training Details
Training Data
- Dataset: Phishing Email Training Dataset
- Link: https://huggingface.co/datasets/nosadelian/phishing-email-training-dataset
Training Procedure
- Fine‑tuning method: LoRA (Low‑Rank Adaptation) via Unsloth
- Training regime: fp16 mixed precision
- Epochs: 3 (full dataset)
- Learning rate: 2e‑4
- Batch size: 32
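The hyperparameters above are not tied to a published training script. As an illustration only, an effective batch size of 32 is commonly reached by combining a smaller per-device batch with gradient accumulation; the split below is an assumption, not the authors' actual setting:

```python
# Hypothetical decomposition of the reported effective batch size of 32.
per_device_train_batch_size = 8   # assumption: fits a 4-bit 7B model on one GPU
gradient_accumulation_steps = 4   # assumption
num_gpus = 1                      # assumption

effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
assert effective_batch_size == 32  # matches the reported batch size

# These values would map onto transformers.TrainingArguments as, e.g.:
# TrainingArguments(num_train_epochs=3, learning_rate=2e-4, fp16=True,
#                   per_device_train_batch_size=8, gradient_accumulation_steps=4)
```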
Model Comparison Table (this model and its base model)
| Model | Samples | Accuracy | Precision | Recall | F1‑Score | Specificity | FPR | FNR | MCC | Validity | Avg Response Time (s) | Total Input Tokens | Total Output Tokens | Avg Input Tokens | Avg Output Tokens | Quality Mean | Quality Std | Excellent (%) | Good (%) | Fair (%) | Poor (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mistral-7b-instruct-sft | 256 | 0.949 | 0.939 | 0.961 | 0.950 | 0.938 | 0.062 | 0.039 | 0.899 | 100.0 % | 18.70 | 144,247 | 72,299 | 563.5 | 282.4 | 0.951 | 0.098 | 94.9 | 0.0 | 5.1 | 0.0 |
| mistral:7b | 256 | 0.840 | 0.939 | 0.727 | 0.819 | 0.953 | 0.047 | 0.273 | 0.698 | 100.0 % | 12.57 | 144,247 | 76,797 | 563.5 | 300.0 | 0.850 | 0.160 | 84.0 | 0.0 | 6.6 | 9.4 |
Comparison with Base Model (mistral:7b): The fine‑tuned model achieves substantially higher accuracy (94.9 % vs 84.0 %), recall (96.1 % vs 72.7 %), and overall quality metrics, while maintaining identical precision (93.9 %), demonstrating the effectiveness of LoRA fine‑tuning for email‑phishing detection.
Model Performance Analysis – mistral‑7b‑instruct‑sft
- Total Responses: 256
- Accuracy: 94.9 % (243/256)
- Valid Responses: 100 % (256/256)
- Average Confidence: 0.921
Classification Metrics
| Metric | Value |
|---|---|
| Accuracy | 94.9 % |
| Precision | 93.9 % |
| Recall | 96.1 % |
| F1‑Score | 95.0 % |
| Specificity | 93.8 % |
Confusion Matrix
| | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | 123 (TP) | 5 (FN) |
| Actual Negative | 8 (FP) | 120 (TN) |
Additional Metrics
- False Positive Rate: 6.2 %
- False Negative Rate: 3.9 %
- Negative Predictive Value: 96.0 %
- Matthews Correlation Coefficient: 0.899
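All of the reported metrics follow directly from the confusion matrix above (TP=123, FN=5, FP=8, TN=120); a quick sketch to reproduce them:

```python
import math

TP, FN, FP, TN = 123, 5, 8, 120
total = TP + FN + FP + TN                 # 256 responses

accuracy    = (TP + TN) / total           # 243 correct out of 256
precision   = TP / (TP + FP)
recall      = TP / (TP + FN)              # a.k.a. sensitivity
f1          = 2 * precision * recall / (precision + recall)
specificity = TN / (TN + FP)
fpr         = FP / (FP + TN)              # false positive rate
fnr         = FN / (FN + TP)              # false negative rate
npv         = TN / (TN + FN)              # negative predictive value
mcc         = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)
)

print(round(accuracy, 3), round(precision, 3), round(recall, 3),
      round(f1, 3), round(specificity, 3), round(mcc, 3))
```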
Performance Insights
- ✅ High Precision – Low false‑positive rate, fostering user trust.
- ✅ High Recall – Catches the vast majority of phishing attempts, enhancing security.
- ✅ Excellent F1‑Score – Well‑balanced precision and recall.
- ✅ Strong MCC – Strong overall correlation between predictions and ground truth.
Citation
BibTeX:
```bibtex
@misc{mistral-7b-instruct-sft,
  title={Fine-tuned Mistral 7B-Instruct model for Email Phishing Detection},
  author={Montimage and Ahanor, Nosakhare Daniel and Luong89},
  year={2025},
  publisher={Montimage}
}
```
APA:
Montimage, Nosakhare Daniel Ahanor, & Luong89. (2025). Fine‑tuned Mistral 7B‑Instruct model for Email Phishing Detection. Montimage.
Model Card Authors
- Montimage Email Security Research Division
- AI/ML Engineering Team
- Cybersecurity Domain Experts
Framework versions
- PEFT 0.17.1
Model tree for nosadaniel/mistral-7b-instruct-tuned
- Base model: mistralai/Mistral-7B-v0.3