# Mistral 7B Project Guide

## Overview

This repository, remiai3/mistral7B, provides code and resources for students to run the Mistral 7B model locally on their laptops for AI experiments and research. It is a free resource with no hidden fees; the original model is attributed to Mistral AI. The repository includes scripts to run both the pre-trained Mistral 7B model and a fine-tuned version using LoRA weights.

## Features

- Run Mistral 7B locally with a simple web UI.
- Includes pre-trained and fine-tuned (LoRA) model support.
- Educational focus for students to explore modern AI models.
- Quantized model weights for consumer hardware (8GB or 16GB RAM); see the loading sketch below.
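To give a sense of how the quantized weights are used, here is a minimal sketch of loading a GGUF file with llama-cpp-python, one common backend for GGUF models; check `requirements.txt` for the backend this project actually uses. The file path and generation settings below are assumptions.

```python
# Minimal sketch: load a quantized Mistral 7B GGUF file and generate text.
# Assumes llama-cpp-python is installed; the model path is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window; smaller values need less RAM
    n_threads=4,   # set to your CPU core count
)

# Mistral Instruct models expect the [INST] ... [/INST] prompt format.
output = llm(
    "[INST] Summarize what weight quantization does. [/INST]",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```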

## Getting Started

Follow the steps in `document.txt` for detailed instructions on:

- System requirements (Python 3.10+, 8GB/16GB RAM).
- Setting up the environment and installing dependencies.
- Downloading model weights from TheBloke/Mistral-7B-Instruct-v0.1-GGUF (see the snippet below).
- Running the pre-trained and fine-tuned models.
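As one example of the download step, the snippet below fetches a single quantized file with the huggingface_hub library. The exact filename is an assumption; pick the quantization variant that fits your RAM from the TheBloke/Mistral-7B-Instruct-v0.1-GGUF file listing.

```python
# Minimal sketch: download one GGUF file from the Hugging Face Hub.
# The filename is an assumption; choose the variant that fits your hardware.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # 4-bit variant, ~4.4 GB
)
print(f"Weights saved to {model_path}")
```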

## Repository Structure

- `app.py`: Script to run the pre-trained model with a model selector UI.
- `fine_tune/app.py`: Script to run the fine-tuned LoRA model.
- `fine_tune/lora_finetuned.gguf`: LoRA weights for the fine-tuned model.
- `fine_tune/dataset.json`: Dataset used for fine-tuning.
- `fine_tune/finetune.py`: Fine-tuning script (see the sketch below).
- `requirements.txt`: Dependencies for the project.
- `document.txt`: Detailed setup and usage guide.
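For orientation, here is a rough sketch of the kind of LoRA setup a script like `fine_tune/finetune.py` might use, based on the Hugging Face peft library; the base model name, target modules, and hyperparameters are assumptions, not the repository's actual configuration.

```python
# Hypothetical sketch of a LoRA fine-tuning setup with transformers + peft.
# None of these values are taken from fine_tune/finetune.py.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,                         # adapter scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters train
```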

## Attribution

- Model: Mistral 7B, created by Mistral AI.
- Quantized Weights: Provided by TheBloke.

This project is for educational purposes to support student learning and research.

## License

Apache 2.0 (same as Mistral 7B).

## Support

For issues or questions, visit the Issues section or contact remiai3 on Hugging Face.
