---
title: Mistral Fine-tuned Model
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
---

# 🤖 Mistral Fine-tuned Model

Flask API with a separate HTML/CSS/JS frontend for the `KASHH-4/mistral_fine-tuned` model.

## 🚀 What This Is

A **Flask API server** with **separate frontend files**:

- Backend: Python Flask with CORS
- Frontend: HTML + CSS + JavaScript
- Clean separation of concerns
- API-first design

## 📁 Project Structure

```
e:\EDI\hf-node-app\
├── app.py             # Main Flask application
├── requirements.txt   # Python dependencies
├── README.md          # This file
└── .gitignore         # Git ignore rules
```

## 🔧 Deploy to Hugging Face Spaces

### Step 1: Create a Space

1. Go to https://huggingface.co/spaces
2. Click **"Create new Space"**
3. Configure:
   - **Owner:** KASHH-4 (or your account)
   - **Space name:** `mistral-api` (or any name)
   - **SDK:** Docker (this app is a Flask server, not a Gradio app)
   - **Hardware:** CPU basic (Free)
   - **Visibility:** Public
4. Click **"Create Space"**

### Step 2: Upload Files

Upload these 3 files to your Space:

- `app.py`
- `requirements.txt`
- `README.md` (optional)

**Via Web UI:**

1. Click the "Files" tab
2. Click "Add file" → "Upload files"
3. Drag and drop the files
4. Commit changes

**Via Git:**

```bash
git init
git branch -M main
git remote add origin https://huggingface.co/spaces/KASHH-4/mistral-api
git add app.py requirements.txt README.md .gitignore
git commit -m "Initial deployment"
git push origin main
```

### Step 3: Wait for Deployment

- The first build takes 5-10 minutes
- Watch the logs for "Running on..."
- Your Space will be live at: `https://kashh-4-mistral-api.hf.space`

## 🧪 Test Your Space

### Web Interface

Visit: `https://huggingface.co/spaces/KASHH-4/mistral-api`

### API Endpoint

```bash
curl -X POST "https://kashh-4-mistral-api.hf.space/api/predict" \
  -H "Content-Type: application/json" \
  -d '{"data":["Hello, how are you?"]}'
```

### From JavaScript/Node.js

```javascript
const response = await fetch('https://kashh-4-mistral-api.hf.space/api/predict', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ data: ["Your prompt here"] })
});
const result = await response.json();
console.log(result.data[0]); // Generated text
```

### From Python

```python
import requests

response = requests.post(
    'https://kashh-4-mistral-api.hf.space/api/predict',
    json={'data': ['Your prompt here']}
)
print(response.json()['data'][0])
```

## 💰 Cost

**100% FREE** on HF Spaces:

- Free CPU tier (slower, ~10-30 sec per request)
- Sleeps after 48h of inactivity (~30 sec wake-up)
- Perfect for demos, personal projects, and testing

**Optional Upgrades:**

- GPU T4 Small: $0.60/hour (much faster, 2-5 sec)
- GPU A10G: $3.15/hour (very fast, 1-2 sec)

Upgrade in: Space Settings → Hardware

## 🔧 Local Testing (Optional)

If you have Python installed and want to test locally before deploying:

```bash
# Install dependencies
pip install -r requirements.txt

# Run locally
python app.py

# Visit: http://localhost:7860
```

**Requirements:**

- Python 3.9+
- 16GB+ RAM (for model loading)
- GPU recommended but not required

## 📋 Model Configuration

The app is configured for `KASHH-4/mistral_fine-tuned`.
To use a different model, edit `app.py`:

```python
MODEL_NAME = "your-org/your-model"
```

## 🆘 Troubleshooting

**Space stuck on "Building":**

- Check the logs for errors
- The model might be too large for the free CPU tier
- Try restarting the Space from Settings

**Space shows "Runtime Error":**

- Check that the model exists and is public
- Verify the model format is compatible with `transformers`
- Try a smaller model first to test

**Slow responses:**

- Normal on the free CPU tier
- Upgrade to a GPU for faster inference
- Or use a smaller model

## 📞 Support

Issues? Check the deployment guide in `huggingface-space/DEPLOYMENT-GUIDE.md`.

---

## 🗑️ Cleanup Old Files

If you followed earlier Node.js instructions, delete the unnecessary files. See `CLEANUP.md` for the full list of files to remove.

## License

MIT
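One last tip, related to the "Runtime Error" item in Troubleshooting above: you can verify that a model repo exists, is public, and is readable by `transformers` without downloading any weights by loading just its config. A quick sketch:

```python
# Sanity check: fetch only the model's config.json (no weights) to confirm
# the repo is reachable, public, and transformers-compatible.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("KASHH-4/mistral_fine-tuned")
print(config.model_type)  # e.g. "mistral" for a Mistral-based model
```

If this raises an error, the Space will hit the same problem at startup, so it is worth running before redeploying.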