# Mistral-7B-Instruct v0.3 Advanced Chatbot (Gradio Version)

This repository contains a comprehensive Gradio application that serves as an advanced chatbot powered by the Mistral-7B-Instruct v0.3 model from Hugging Face. It is optimized for deployment on Hugging Face Spaces.

## Features

- Interactive chat interface with Mistral-7B-Instruct v0.3
- Management of multiple chat sessions
- Customizable system prompts
- Adjustable generation parameters (temperature, max tokens, etc.)
- File analysis for:
  - CSV files
  - Excel files
  - Text files
  - JSON files
- Context-aware responses that can incorporate file data
- JSON structure generation optimized for n8n workflows

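The file-analysis feature boils down to turning an uploaded file into a short text summary that can be spliced into the model's prompt. A stdlib-only sketch of that idea (`summarize_file` is illustrative, not the app's actual helper; Excel support would additionally need a reader such as pandas with openpyxl):

```python
import csv
import io
import json
from pathlib import Path

def summarize_file(name, raw):
    """Turn an uploaded file (name + bytes) into a short text summary
    the chatbot can include in its prompt context."""
    suffix = Path(name).suffix.lower()
    text = raw.decode("utf-8", errors="replace")
    if suffix == ".csv":
        rows = list(csv.reader(io.StringIO(text)))
        header = rows[0] if rows else []
        return f"CSV with {max(len(rows) - 1, 0)} data rows; columns: {', '.join(header)}"
    if suffix == ".json":
        data = json.loads(text)
        kind = "object" if isinstance(data, dict) else type(data).__name__
        return f"JSON {kind} with {len(data)} top-level entries"
    # Anything else readable as text: report size plus a short preview
    return f"Text file, {len(text)} characters: {text[:100]}"

print(summarize_file("sales.csv", b"region,amount\neu,10\nus,20\n"))
```

Summaries like these keep the prompt small even for large uploads, which matters given the model's finite context window.
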
## Requirements

- Python 3.8+
- Required packages are listed in `requirements.txt`

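The authoritative list is the repo's `requirements.txt`; for orientation, a Space with these features typically depends on something along these lines (illustrative only, not the repo's actual pins):

```text
gradio
transformers
torch
accelerate
sentencepiece
pandas
openpyxl
```
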
## Quick Start

1. Clone this repository
2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Run the application:

   ```bash
   python app.py
   ```

## Deployment to Hugging Face Spaces

This application is designed to work well with Hugging Face Spaces:

1. Create a new Space on Hugging Face (https://huggingface.co/spaces)
2. Choose the "Gradio" framework
3. Upload the files from this repository
4. Make sure to select a GPU runtime for better performance

## Usage Guide

1. Start by clicking "Load Mistral-7B Model" in the interface
2. Wait for the model to load completely (this may take a few minutes on first run)
3. Type your message in the chat input field and press Send
4. Create new chat sessions as needed
5. Adjust the system prompt and generation parameters for better results
6. Upload files for analysis and incorporate their data in your prompts

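Under the hood, Mistral instruct checkpoints expect the conversation wrapped in `[INST] ... [/INST]` markers; in a real app, `tokenizer.apply_chat_template` is the safer route, since it applies the model's own template. A hand-rolled sketch of the formatting (hypothetical helper; folding the system prompt into the first user turn is a common convention for Mistral models):

```python
def build_prompt(system_prompt, history, user_msg):
    """Format a chat as Mistral-style [INST] blocks.
    history: list of (user, assistant) string pairs."""
    parts = ["<s>"]
    turns = history + [(user_msg, None)]
    for i, (user, assistant) in enumerate(turns):
        # Mistral has no dedicated system role; prepend it to the first turn
        if i == 0 and system_prompt:
            user = f"{system_prompt}\n\n{user}"
        parts.append(f"[INST] {user} [/INST]")
        if assistant is not None:
            parts.append(f" {assistant}</s>")
    return "".join(parts)

print(build_prompt("You output JSON.", [("Hi", "Hello!")], "Make a node"))
```

Getting this template wrong (missing `</s>` separators, unwrapped turns) is a frequent cause of degraded instruct-model output, which is why the library template is preferable in production.
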
## Specialized for n8n JSON Generation

The default system prompt is optimized for generating well-structured JSON for n8n workflows. You can:

1. Ask the model to create complex JSON structures
2. Request specific n8n node configurations
3. Generate sample data in the correct format
4. Validate and fix existing JSON for n8n compatibility

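Model output often wraps JSON in prose or code fences, so a validation pass before handing anything to n8n is worthwhile. A minimal sketch (the `nodes`/`connections` check reflects the usual shape of an n8n workflow export; the helper itself is illustrative, not part of the app):

```python
import json
import re

def extract_workflow_json(model_output):
    """Pull the first JSON object out of model output (which may wrap it
    in ```json fences or prose) and check it looks like an n8n workflow."""
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    workflow = json.loads(match.group(0))
    # n8n workflow exports normally carry 'nodes' and 'connections' keys
    missing = {"nodes", "connections"} - workflow.keys()
    if missing:
        raise ValueError(f"not an n8n workflow, missing: {sorted(missing)}")
    return workflow

reply = 'Here you go:\n```json\n{"nodes": [], "connections": {}}\n```'
print(extract_workflow_json(reply))
```

When validation fails, the error message can be fed back to the model as a follow-up turn, matching point 4 above (validate and fix existing JSON).
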
## Model Configuration

The application provides various configuration options:

- **System Prompt**: Define how the AI should behave
- **Temperature**: Control creativity (higher = more creative)
- **Max Tokens**: Limit the length of responses
- **Top P**: Nucleus sampling; restricts choices to the smallest set of tokens whose cumulative probability exceeds p
- **Repetition Penalty**: Reduce repetition in responses

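Temperature and top-p both act on the model's next-token distribution. A self-contained sketch of the math (illustrative only; in the app the equivalent work happens inside the model's `generate` call):

```python
import math

def apply_temperature_and_top_p(logits, temperature=1.0, top_p=1.0):
    """Softmax with temperature, then keep the smallest set of tokens whose
    cumulative probability reaches top_p (nucleus sampling), renormalized."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sort descending and keep tokens until cumulative mass reaches top_p
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += probs[i]
        if cum >= top_p:
            break
    masked = [p if i in kept else 0.0 for i, p in enumerate(probs)]
    z = sum(masked)
    return [p / z for p in masked]

# Low temperature sharpens the distribution; top_p < 1 zeroes out the tail
print(apply_temperature_and_top_p([2.0, 1.0, 0.1], temperature=0.5, top_p=0.9))
```

In practice: lower the temperature and top-p for deterministic tasks like JSON generation, raise them for open-ended brainstorming.
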
## Notes

- The first load of the model may take several minutes depending on your hardware
- For Hugging Face Spaces, a GPU runtime is strongly recommended
- The model requires approximately 14 GB of VRAM for fp16 inference

## License

This project is released under the MIT License.