---
title: Mistral Fine-tuning Interface
emoji: 🚀
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: interface_app.py
pinned: false
license: apache-2.0
---
# 🚀 Mistral Fine-tuning & Hosting Interface

A comprehensive Gradio interface for fine-tuning and deploying Mistral models with LoRA.

## Features

- 🔥 **Fine-tuning**: Train Mistral models with LoRA on custom datasets
- 🌐 **API Hosting**: Deploy fine-tuned models as REST APIs
- 🧪 **Test Inference**: Test your models with an intuitive interface
- 📊 **GPU Recommendations**: Automatic parameter suggestions based on available GPU
- 💾 **Model Management**: Easy model selection and management
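
The GPU-based recommendations above are computed by the app itself; as a rough illustration of the idea, a VRAM-to-parameters heuristic could look like the sketch below. The thresholds, function name, and suggested values here are hypothetical, not the interface's actual logic.

```python
# Hypothetical sketch: map available GPU memory to conservative LoRA settings.
# Thresholds and values are illustrative only, not taken from interface_app.py.

def suggest_params(vram_gb: float) -> dict:
    """Suggest batch size, LoRA rank, and quantization from VRAM (GiB)."""
    if vram_gb >= 40:   # e.g. A100 40GB
        return {"batch_size": 8, "lora_r": 32, "load_in_4bit": False}
    if vram_gb >= 24:   # e.g. RTX 3090 / 4090
        return {"batch_size": 4, "lora_r": 16, "load_in_4bit": True}
    if vram_gb >= 12:   # e.g. RTX 3060 12GB
        return {"batch_size": 2, "lora_r": 8, "load_in_4bit": True}
    return {"batch_size": 1, "lora_r": 4, "load_in_4bit": True}

print(suggest_params(24.0))
```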
## Quick Start

1. Upload your training dataset (JSON/JSONL format)
2. Configure training parameters
3. Start fine-tuning
4. Test your model via the inference tab
5. Deploy it as an API for production use
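
For step 1, a common instruction-tuning layout is one JSON object per line (JSONL); the exact field names the app expects may differ, so treat the `instruction`/`output` schema below as an example, not a specification.

```python
# Write and validate a small JSONL training file.
# The instruction/output schema shown here is illustrative.
import json
import os
import tempfile

records = [
    {"instruction": "Write a Verilog 4-bit counter module.",
     "output": "module counter(input clk, output reg [3:0] q); ..."},
    {"instruction": "Explain LoRA in one sentence.",
     "output": "LoRA fine-tunes a model by training small low-rank adapter matrices."},
]

path = os.path.join(tempfile.mkdtemp(), "train.jsonl")
with open(path, "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")   # one object per line

# Validate: every line must parse and carry both fields.
with open(path) as f:
    loaded = [json.loads(line) for line in f]
assert all("instruction" in r and "output" in r for r in loaded)
print(f"{len(loaded)} valid records")
```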
## Requirements

See `requirements_interface.txt` for dependencies.

## Documentation

- Full setup guide in workspace
- Detailed API documentation included
- Example datasets and prompts provided
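
Once a model is deployed, clients talk to it over the REST API documented above. As a sketch only: the route, host, and request fields below are assumptions for illustration, and the real schema is defined by the app's API documentation.

```python
# Hypothetical sketch of building a generation request for a deployed model.
# The /generate route and field names are assumptions, not the app's actual API.
import json

def build_generate_request(prompt: str,
                           max_new_tokens: int = 256,
                           temperature: float = 0.7) -> bytes:
    """Serialize an illustrative text-generation request body."""
    return json.dumps({
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
    }).encode("utf-8")

body = build_generate_request("Write an RTL FIFO in Verilog.")
print(json.loads(body)["prompt"])
```

The body would typically be POSTed with a JSON content type (e.g. via `urllib.request` or `requests`) to wherever the API is hosted.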
## Author

Created for RTL code generation and hardware design tasks.