# Mistral-7B-Instruct v0.3 Advanced Chatbot (Gradio Version)

This repository contains a Gradio application that serves as an advanced chatbot powered by the Mistral-7B-Instruct v0.3 model from Hugging Face. It is optimized for deployment on Hugging Face Spaces.

## Features

- Interactive chat interface with Mistral-7B-Instruct v0.3
- Management of multiple chat sessions
- Customizable system prompts
- Adjustable generation parameters (temperature, max tokens, etc.)
- File analysis for:
  - CSV files
  - Excel files
  - Text files
  - JSON files
- Context-aware responses that can incorporate file data
- JSON structure generation optimized for n8n workflows
## Requirements

- Python 3.8+
- Required packages are listed in `requirements.txt`
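For reference, a `requirements.txt` for an app with these features would typically include at least the packages below. This is an assumption based on the feature list (Gradio UI, a Hugging Face model, CSV/Excel analysis); defer to the file shipped in this repository for the authoritative, pinned versions:

```text
gradio
transformers
torch
accelerate
pandas
openpyxl   # Excel support for the file-analysis feature
```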
## Quick Start

1. Clone this repository
2. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
3. Run the application:
   ```bash
   python app.py
   ```
## Deployment to Hugging Face Spaces

This application is designed to work well with Hugging Face Spaces:

1. Create a new Space on Hugging Face (https://huggingface.co/spaces)
2. Choose the "Gradio" SDK
3. Upload the files from this repository
4. Select a GPU runtime; inference on CPU is impractically slow for a 7B model
## Usage Guide

1. Click "Load Mistral-7B Model" in the interface
2. Wait for the model to finish loading (this may take several minutes on first run, while the weights are downloaded)
3. Type your message in the chat input field and press Send
4. Create new chat sessions as needed
5. Adjust the system prompt and generation parameters to tune results
6. Upload files for analysis and reference their data in your prompts
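Behind the Send button, each turn has to be serialized into the message format Mistral's chat template expects. The sketch below shows one common way to do this (`build_messages` is a hypothetical name; the real app may instead call the tokenizer's `apply_chat_template`, which is the safer choice). Because Mistral's instruct template historically has no dedicated system role, one widespread workaround, assumed here, is folding the system prompt into the first user turn:

```python
def build_messages(system_prompt, history, user_message):
    """Assemble the chat message list for the model from the Gradio
    chat history (a list of (user, assistant) pairs). Illustrative only."""
    messages = []
    first = True
    for user, assistant in history:
        # Fold the system prompt into the very first user turn.
        content = f"{system_prompt}\n\n{user}" if first and system_prompt else user
        first = False
        messages.append({"role": "user", "content": content})
        messages.append({"role": "assistant", "content": assistant})
    content = (f"{system_prompt}\n\n{user_message}"
               if first and system_prompt else user_message)
    messages.append({"role": "user", "content": content})
    return messages
```

The resulting list alternates strictly between `user` and `assistant` roles, which is what chat templates require.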
## Specialized for n8n JSON Generation

The default system prompt is optimized for generating well-structured JSON for n8n workflows. You can:

1. Ask the model to create complex JSON structures
2. Request specific n8n node configurations
3. Generate sample data in the correct format
4. Validate and fix existing JSON for n8n compatibility
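A lightweight validation step like the one sketched below can catch the most common failure modes before pasting model output into n8n. It checks only that the text parses as JSON and has the top-level `"nodes"` list and `"connections"` object that n8n workflow exports use; it does not validate individual node parameters, and `check_n8n_workflow` is a hypothetical name, not a function from this app:

```python
import json

def check_n8n_workflow(text):
    """Return (ok, message) for a candidate n8n workflow JSON string.
    Minimal structural check; illustrative only."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        return False, f"invalid JSON: {exc}"
    if not isinstance(data.get("nodes"), list):
        return False, 'missing top-level "nodes" list'
    if not isinstance(data.get("connections"), dict):
        return False, 'missing top-level "connections" object'
    return True, "ok"
```

When a check fails, the error message can be fed back to the model ("fix this JSON: …") as step 4 above suggests.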
## Model Configuration

The application provides various configuration options:

- **System Prompt**: Define how the AI should behave
- **Temperature**: Controls randomness (higher values give more varied, creative output)
- **Max Tokens**: Limits the length of responses
- **Top P**: Nucleus sampling parameter
- **Repetition Penalty**: Discourages repeated phrases in responses
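These settings map directly onto keyword arguments of `model.generate()` in the `transformers` library. The sketch below shows that mapping; the parameter names follow the `transformers` API, while the defaults shown are assumptions rather than the app's actual slider values:

```python
def generation_kwargs(temperature=0.7, max_tokens=512, top_p=0.9,
                      repetition_penalty=1.1):
    """Translate the UI sliders into model.generate() keyword arguments.
    Defaults here are illustrative assumptions."""
    return {
        "max_new_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
        "repetition_penalty": repetition_penalty,
        # Sampling must be enabled for temperature/top_p to take effect;
        # with temperature 0 we fall back to greedy decoding instead.
        "do_sample": temperature > 0,
    }
```

Note the `do_sample` flag: without it, `transformers` ignores `temperature` and `top_p` and decodes greedily.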
## Notes

- The first model load may take several minutes depending on your hardware and network connection
- For Hugging Face Spaces, a GPU runtime is strongly recommended
- The model needs roughly 14 GB of VRAM to run in half precision

## License

This project is released under the MIT License.