Mistral-7B Fine-Tuned on AI Regulations
This repository contains a fine-tuned version of the Mistral-7B-Instruct model trained on AI regulations data. Fine-tuning used QLoRA (Quantized Low-Rank Adaptation) with 4-bit BitsAndBytes quantization to reduce memory usage while preserving performance.
Model Details
- Base Model: Mistral-7B-Instruct (mistralai/Mistral-7B-Instruct-v0.1)
- Fine-tuned Dataset: Custom AI regulations dataset
- Quantization: 4-bit using BitsAndBytes
- Training Method: QLoRA with LoRA adapters
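The QLoRA setup listed above can be sketched as follows. Note that the specific hyperparameters (LoRA rank, alpha, dropout, and target modules) are illustrative assumptions, not the exact training configuration:

```python
# Sketch of a QLoRA setup: a 4-bit quantized base model plus LoRA adapters.
# Hyperparameter values here are illustrative assumptions.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization via BitsAndBytes, with fp16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# Low-rank adapters attached to the attention projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

The quantization config is passed to `from_pretrained` when loading the base model for training, and the LoRA config is applied with `peft.get_peft_model`.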
Usage
You can use the fine-tuned model for text generation using the Hugging Face Transformers library.
Install Required Packages
```shell
pip install transformers torch bitsandbytes accelerate
```
Load the Model and Tokenizer
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "sssdddwd/AI_Governance_Fine_Tuned_mistral_7B_LLM"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Load model
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Resize embeddings to match tokenizer
model.resize_token_embeddings(len(tokenizer))

# Set model to evaluation mode
model.eval()
```
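Since the model was trained with 4-bit quantization, it can optionally be loaded in 4-bit at inference time as well, roughly halving GPU memory compared with fp16. This is a sketch of that variant, not a required step:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Optional: load the model 4-bit quantized to reduce GPU memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "sssdddwd/AI_Governance_Fine_Tuned_mistral_7B_LLM",
    quantization_config=bnb_config,
    device_map="auto",
)
```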
Generate Text
```python
def generate_response(prompt, max_length=256):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_length=max_length)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example usage
prompt = "What are the key AI regulations in the EU?"
response = generate_response(prompt)
print(response)
```
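Mistral-7B-Instruct models are trained on an `[INST] … [/INST]` chat format, so wrapping prompts in those tags typically improves instruction-following. Whether this fine-tune kept that template is an assumption worth verifying; when the tokenizer defines a chat template, `tokenizer.apply_chat_template` applies it automatically. A minimal manual version:

```python
def build_instruct_prompt(user_message: str) -> str:
    """Wrap a user message in Mistral's [INST] instruction tags.

    The BOS token (<s>) is normally added by the tokenizer, so only
    the instruction tags are included here.
    """
    return f"[INST] {user_message.strip()} [/INST]"

prompt = build_instruct_prompt("What are the key AI regulations in the EU?")
# The result can be passed to a text-generation call such as
# generate_response() above.
```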
Model Training Details
- Epochs: 3
- Batch Size: 1 (with gradient accumulation)
- Optimizer: PagedAdamW
- FP16 Training: Enabled
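The training settings listed above map onto a Transformers `TrainingArguments` object roughly as follows. The epochs, batch size, optimizer, and fp16 flag come from the list; the learning rate, gradient-accumulation steps, and output directory are illustrative assumptions:

```python
# Sketch of the training configuration; values marked "assumed" are
# illustrative, not the exact values used for this model.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./mistral-7b-ai-regs",  # hypothetical path
    num_train_epochs=3,                 # from the list above
    per_device_train_batch_size=1,      # from the list above
    gradient_accumulation_steps=8,      # assumed value
    optim="paged_adamw_8bit",           # paged AdamW via bitsandbytes
    fp16=True,                          # from the list above
    learning_rate=2e-4,                 # assumed value
)
```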
License
This model follows the same licensing as Mistral-7B. Please refer to the official Mistral AI terms for usage restrictions.
Acknowledgments
- Mistral AI for the base model.
- Hugging Face for providing the ecosystem for model training and hosting.
For any questions or issues, feel free to reach out via Hugging Face discussions!
Model Lineage
- Base model: mistralai/Mistral-7B-v0.1
- Fine-tuned from: mistralai/Mistral-7B-Instruct-v0.1