---
title: Architech - AI Model Architect
emoji: 🏗️
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 6.4.0
app_file: app.py
pinned: false
license: mit
---

# 🏗️ Architech - Your Personal AI Model Architect

**Create custom AI models without the headache!** Just describe what you want, and Architech handles the rest.

## ✨ Features

### 📊 Synthetic Data Generation

- Generate high-quality training data from simple descriptions
- Support for multiple domains: Technology, Healthcare, Finance, Education
- Multiple format types: Conversational, Instruction-following
- 50-500 examples per dataset
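
The two format types map onto common dataset conventions for chat and instruction tuning. As a hypothetical illustration (the field names `messages`, `instruction`, and `response` are widely used conventions, not necessarily Architech's exact schema), one generated record of each type might look like:

```python
import json

# Hypothetical examples of the two dataset formats described above.
# Field names are common conventions, not necessarily the app's exact output.
conversational_example = {
    "messages": [
        {"role": "user", "content": "My router keeps dropping the connection."},
        {"role": "assistant", "content": "Let's try a firmware update first."},
    ]
}

instruction_example = {
    "instruction": "Explain how to reset the router to factory settings.",
    "response": "Hold the reset button for 10 seconds while the router is on.",
}

# A dataset is then just a list of such records, e.g. stored as JSON Lines.
dataset = [conversational_example, instruction_example]
jsonl = "\n".join(json.dumps(record) for record in dataset)
print(jsonl.count("\n") + 1)  # 2 records
```

Either shape can be flattened into plain text before tokenization, which is all a causal language model ultimately trains on.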

### 🚀 Model Training

- Fine-tune state-of-the-art models (GPT-2, DialoGPT)
- Automatic optimization and parameter tuning
- Direct deployment to the HuggingFace Hub
- GPU-accelerated training with efficient memory usage

### 🧪 Model Testing

- Load and test your trained models instantly
- Interactive inference with adjustable parameters
- Real-time generation with temperature and length controls

### 🔒 Security & Limits

- **Rate Limiting**: Fair usage for all users
  - Dataset Generation: 10/hour
  - Model Training: 3/hour
  - Model Inference: 50/hour
- **Token Authentication**: Secure HuggingFace integration
- **Error Handling**: Comprehensive error messages and recovery
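
Per-user hourly limits like these are typically enforced with a sliding-window counter. A minimal sketch of the idea (not Architech's actual implementation):

```python
import time
from collections import defaultdict, deque
from typing import Optional

class RateLimiter:
    """Allow at most `limit` calls per `window` seconds per user.
    A sketch of the idea behind the per-feature limits above."""

    def __init__(self, limit: int, window: float = 3600.0):
        self.limit = limit
        self.window = window
        self.calls = defaultdict(deque)  # user id -> timestamps of recent calls

    def allow(self, user: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.calls[user]
        # Drop calls that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: caller should wait for cooldown
        q.append(now)
        return True

# e.g. model training is limited to 3 requests per hour
train_limit = RateLimiter(limit=3)
results = [train_limit.allow("alice", now=t) for t in (0, 10, 20, 30)]
print(results)  # [True, True, True, False]
```

The deque makes both the expiry sweep and the admission check cheap, and counters for different features (dataset generation, training, inference) are just separate `RateLimiter` instances with their own limits.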

## 🚀 Quick Start

### 1. Generate Training Data

1. Go to the **"Generate Dataset"** tab
2. Describe your task (e.g., "Customer support chatbot for tech products")
3. Select a domain and dataset size
4. Click **"Generate Dataset"**

### 2. Train Your Model

1. Go to the **"Train Model"** tab
2. Enter your model name and HuggingFace token
3. Choose whether to use the synthetic data or provide your own
4. Click **"Train Model"**
5. Wait for training to complete (5-15 minutes)
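
Under the hood, a run like this can be sketched with the `transformers` `Trainer` API. This is a hedged illustration of the general approach, not Architech's actual training code; the prompt template in `format_example` is an assumption:

```python
from typing import Dict

def format_example(example: Dict[str, str]) -> str:
    """Flatten an instruction/response pair into one training string.
    The template here is an assumption, not Architech's exact format."""
    return (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['response']}")

if __name__ == "__main__":
    # Heavy dependencies kept behind the guard; requires `transformers`,
    # `datasets`, and `torch`, plus a one-off base-model download.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained("distilgpt2")

    raw = [{"instruction": "Say hi.", "response": "Hello!"}]
    ds = Dataset.from_list([{"text": format_example(e)} for e in raw])
    ds = ds.map(lambda e: tokenizer(e["text"], truncation=True, max_length=128),
                remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=3,
                               per_device_train_batch_size=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.push_to_hub()  # needs a HuggingFace write token configured
```

Architech automates these choices for you; the sketch is only meant to show why a write token and a few minutes of patience are needed.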

### 3. Test Your Model

1. Go to the **"Test Model"** tab
2. Enter your model name and token
3. Click **"Load Model"**
4. Enter a test prompt and generate!
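
The same test can be run programmatically with the `transformers` `pipeline` helper. A minimal sketch (`your-username/your-model` is a placeholder for whatever you pushed in step 2, and the clamping ranges in `generation_params` are assumptions for illustration):

```python
def generation_params(temperature: float = 0.7, max_new_tokens: int = 60) -> dict:
    """Sampling settings mirroring the UI's temperature and length controls.
    The clamping ranges are illustrative assumptions."""
    return {
        "do_sample": True,
        "temperature": min(max(temperature, 0.1), 2.0),  # keep in a sane range
        "max_new_tokens": max(1, max_new_tokens),
    }

if __name__ == "__main__":
    # Requires `transformers` and `torch`, plus the model you trained.
    from transformers import pipeline

    generator = pipeline("text-generation", model="your-username/your-model")
    prompt = "Customer: My order never arrived.\nAgent:"
    result = generator(prompt, **generation_params(temperature=0.8))
    print(result[0]["generated_text"])
```

Lower temperatures give more deterministic replies; higher ones give more varied (and riskier) text, which is why the Test tab exposes the knob.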

## 📋 Requirements

- A HuggingFace account with a **write** token
- For training: GPU recommended (CPU works, but slower)
- Patience during training (coffee break recommended ☕)

## 🎯 Use Cases

- **Customer Support Bots**: Train chatbots for specific products/services
- **Content Generation**: Create domain-specific text generators
- **Educational Tools**: Build tutoring and explanation systems
- **Creative Writing**: Fine-tune for specific writing styles
- **Technical Documentation**: Generate code explanations and docs

## ⚙️ Technical Details

### Supported Base Models

- `distilgpt2` (fastest, smallest)
- `gpt2` (balanced)
- `microsoft/DialoGPT-small` (conversational)

### Training Features

- Gradient accumulation for memory efficiency
- Mixed precision training (FP16)
- Automatic learning rate optimization
- Smart tokenization and padding
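
Gradient accumulation trades time for memory: with a per-device batch of 1 and N accumulation steps, gradients from N small batches are summed before each optimizer step, so weights update as if the batch size were N. A small illustration of that arithmetic (the specific numbers are examples, not the app's fixed settings):

```python
def effective_batch_size(per_device_batch: int, accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """Examples contributing to each optimizer step under accumulation."""
    return per_device_batch * accumulation_steps * num_devices

# e.g. batch size 1 with 8 accumulation steps behaves like batch size 8,
# while only holding one example's activations in GPU memory at a time.
print(effective_batch_size(per_device_batch=1, accumulation_steps=8))  # 8

# The corresponding knobs, using `transformers` TrainingArguments names
# (the values are illustrative):
train_config = {
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "fp16": True,  # mixed precision roughly halves activation memory
}
```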

## 🛠️ Troubleshooting

### "GPU Memory Overflow"

- Reduce the batch size to 1
- Use a smaller base model (`distilgpt2`)
- Reduce the dataset size

### "Permission Denied"

- Check that your HuggingFace token has **write** access
- Generate a new token at: https://huggingface.co/settings/tokens

### "Rate Limit Exceeded"

- Wait for the cooldown period
- Check the remaining requests shown in the error message

## 📈 Best Practices

1. **Start Small**: Begin with 100 examples and 3 epochs
2. **Be Specific**: Detailed task descriptions yield better results
3. **Test First**: Use the Test tab before deploying
4. **Iterate**: Train multiple versions with different parameters
5. **Monitor**: Watch the training logs for issues

## 🤝 Contributing

Found a bug? Have a feature request? Open an issue!

## 📄 License

MIT License - feel free to use and modify!

## 🙏 Acknowledgments

Built with:

- [Gradio](https://gradio.app/) - Interface
- [Transformers](https://huggingface.co/transformers/) - Models
- [HuggingFace](https://huggingface.co/) - Infrastructure

---

*No PhD required. Just ideas.* ✨