HusainHG committed
Commit 32bd536 · verified · 1 Parent(s): c9f3035

Upload 8 files

Files changed (8)
  1. .gitignore +24 -0
  2. Dockerfile +13 -0
  3. README.md +180 -11
  4. app.py +97 -0
  5. requirements.txt +10 -0
  6. static/app.js +152 -0
  7. static/index.html +60 -0
  8. static/style.css +211 -0
.gitignore ADDED
@@ -0,0 +1,24 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ venv/
+ env/
+ ENV/
+ .venv
+
+ # Hugging Face
+ .cache/
+ flagged/
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ .DS_Store
+
+ # Misc
+ *.log
Dockerfile ADDED
@@ -0,0 +1,13 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ COPY app.py .
+ COPY static ./static
+
+ EXPOSE 7860
+
+ CMD ["python", "app.py"]
README.md CHANGED
@@ -1,11 +1,180 @@
- ---
- title: Copy
- emoji: 📈
- colorFrom: yellow
- colorTo: blue
- sdk: docker
- pinned: false
- short_description: Trial
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ---
+ title: Mistral Fine-tuned Model
+ emoji: 🤖
+ colorFrom: blue
+ colorTo: purple
+ sdk: docker
+ app_port: 7860
+ ---
+
+ # 🤖 Mistral Fine-tuned Model
+
+ Flask API with a separate HTML/CSS/JS frontend for the `KASHH-4/mistral_fine-tuned` model.
+
+ ## 🚀 What This Is
+
+ A **Flask API server** with **separate frontend files**:
+ - Backend: Python Flask with CORS
+ - Frontend: HTML + CSS + JavaScript
+ - Clean separation of concerns
+ - API-first design
+
+ ## 📁 Project Structure
+
+ ```
+ e:\EDI\hf-node-app\
+ ├── app.py              # Main Flask application
+ ├── requirements.txt    # Python dependencies
+ ├── Dockerfile          # Container definition for the Space
+ ├── README.md           # This file
+ ├── .gitignore          # Git ignore rules
+ └── static/             # Frontend: index.html, app.js, style.css
+ ```
+
+ ## 🔧 Deploy to Hugging Face Spaces
+
+ ### Step 1: Create a Space
+
+ 1. Go to https://huggingface.co/spaces
+ 2. Click **"Create new Space"**
+ 3. Configure:
+    - **Owner:** KASHH-4 (or your account)
+    - **Space name:** `mistral-api` (or any name)
+    - **SDK:** Docker
+    - **Hardware:** CPU basic (Free)
+    - **Visibility:** Public
+ 4. Click **"Create Space"**
+ ### Step 2: Upload Files
+
+ Upload these files to your Space:
+ - `app.py`
+ - `requirements.txt`
+ - `Dockerfile`
+ - `static/` (index.html, app.js, style.css)
+ - `README.md` (optional)
+
+ **Via Web UI:**
+ 1. Click "Files" tab
+ 2. Click "Add file" → "Upload files"
+ 3. Drag and drop the files
+ 4. Commit changes
+
+ **Via Git:**
+ ```bash
+ git init
+ git remote add origin https://huggingface.co/spaces/KASHH-4/mistral-api
+ git add app.py requirements.txt Dockerfile static README.md .gitignore
+ git commit -m "Initial deployment"
+ git push origin main
+ ```
+
+ ### Step 3: Wait for Deployment
+
+ - First build takes 5-10 minutes
+ - Watch the logs for "Running on..."
+ - Your Space will be live at: `https://kashh-4-mistral-api.hf.space`
+
+ ## 🧪 Test Your Space
+
+ ### Web Interface
+ Visit: `https://huggingface.co/spaces/KASHH-4/mistral-api`
+
+ ### API Endpoint
+ ```bash
+ curl -X POST "https://kashh-4-mistral-api.hf.space/api/generate" \
+   -H "Content-Type: application/json" \
+   -d '{"prompt": "Hello, how are you?"}'
+ ```
+
+ ### From JavaScript/Node.js
+ ```javascript
+ const response = await fetch('https://kashh-4-mistral-api.hf.space/api/generate', {
+   method: 'POST',
+   headers: { 'Content-Type': 'application/json' },
+   body: JSON.stringify({ prompt: "Your prompt here" })
+ });
+
+ const result = await response.json();
+ console.log(result.generated_text); // Generated text
+ ```
+
+ ### From Python
+ ```python
+ import requests
+
+ response = requests.post(
+     'https://kashh-4-mistral-api.hf.space/api/generate',
+     json={'prompt': 'Your prompt here'}
+ )
+
+ print(response.json()['generated_text'])
+ ```
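
### Health Check

`app.py` also exposes `GET /api/health`, which returns `{'status': 'ok', 'model': ..., 'device': ...}` once the model has loaded. A small sketch for validating that response before sending prompts; the field names match `app.py`, but the readiness rule itself is an assumption, not part of the API:

```python
def is_healthy(payload: dict) -> bool:
    """Return True if an /api/health response looks ready.

    Expects the shape returned by app.py:
    {'status': 'ok', 'model': ..., 'device': ...}
    """
    return (
        isinstance(payload, dict)
        and payload.get("status") == "ok"
        and bool(payload.get("model"))
    )

# Typical usage (requires the Space to be up):
# r = requests.get('https://kashh-4-mistral-api.hf.space/api/health')
# if is_healthy(r.json()): ...
```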
+
+ ## 💰 Cost
+
+ **100% FREE** on HF Spaces:
+ - Free CPU tier (slower, ~10-30 sec per request)
+ - Sleeps after 48h inactivity (30 sec wake-up)
+ - Perfect for demos, personal projects, testing
+
+ **Optional Upgrades:**
+ - GPU T4 Small: $0.60/hour (much faster, 2-5 sec)
+ - GPU A10G: $3.15/hour (very fast, 1-2 sec)
+
+ Upgrade in: Space Settings → Hardware
+
+ ## 🔧 Local Testing (Optional)
+
+ If you have Python installed and want to test locally before deploying:
+
+ ```bash
+ # Install dependencies
+ pip install -r requirements.txt
+
+ # Run locally
+ python app.py
+
+ # Visit: http://localhost:7860
+ ```
+
+ **Requirements:**
+ - Python 3.9+
+ - 16GB+ RAM (for model loading)
+ - GPU recommended but not required
+
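For scripted testing, `/api/generate` falls back to the defaults in `app.py` (256 tokens, temperature 0.7, top-p 0.9) when parameters are omitted. A sketch of a payload builder; the clamping ranges mirror the frontend sliders in `static/index.html` and are illustrative only, since the server itself does not validate them:

```python
def build_generate_payload(prompt, max_new_tokens=256, temperature=0.7, top_p=0.9):
    """Build a JSON body for POST /api/generate.

    Defaults mirror app.py. The clamping ranges below come from the
    sliders in static/index.html and are an assumption here -- the
    server does not enforce them.
    """
    if not prompt or not prompt.strip():
        raise ValueError("prompt must be non-empty")
    return {
        "prompt": prompt,
        "max_new_tokens": max(50, min(512, int(max_new_tokens))),
        "temperature": max(0.1, min(2.0, float(temperature))),
        "top_p": max(0.1, min(1.0, float(top_p))),
    }
```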
+ ## 📋 Model Configuration
+
+ The app is configured for `KASHH-4/mistral_fine-tuned`. To use a different model, edit `app.py`:
+
+ ```python
+ MODEL_NAME = "your-org/your-model"
+ ```
+
+ ## 🆘 Troubleshooting
+
+ **Space stuck on "Building":**
+ - Check the logs for errors
+ - The model might be too large for the free CPU tier
+ - Try: Restart Space in Settings
+
+ **Space shows "Runtime Error":**
+ - Check that the model exists and is public
+ - Verify the model format is compatible with transformers
+ - Try a smaller model first to test
+
+ **Slow responses:**
+ - Normal on the free CPU tier
+ - Upgrade to GPU for faster inference
+ - Or use a smaller model
+
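Related to slow responses: because a free Space sleeps after inactivity (see Cost above), the first request after a pause can fail or hang while the Space wakes up. A hedged sketch of a client-side retry helper; the attempt count and delay are assumptions, not part of the API:

```python
import time

def call_with_retry(send, attempts=3, delay=10.0, sleep=time.sleep):
    """Retry send() a few times to ride out a Space cold start.

    send is any zero-argument callable that performs the request and
    raises on failure (e.g. a wrapper around requests.post). sleep is
    injectable so the backoff can be tested without waiting.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return send()
        except Exception as exc:  # in real code, catch requests.RequestException
            last_error = exc
            if attempt < attempts - 1:
                sleep(delay)
    raise last_error
```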
+ ## 📞 Support
+
+ Issues? Check the deployment guide in `huggingface-space/DEPLOYMENT-GUIDE.md`
+
+ ---
+
+ ## 🗑️ Cleanup Old Files
+
+ If you followed earlier Node.js instructions, delete the unnecessary files. See `CLEANUP.md` for the full list of files to remove.
+
+ ## License
+ MIT
app.py ADDED
@@ -0,0 +1,97 @@
+ from flask import Flask, request, jsonify, send_from_directory
+ from flask_cors import CORS
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ import torch
+ import os
+
+ app = Flask(__name__, static_folder='static')
+ CORS(app)
+
+ MODEL_NAME = "KASHH-4/mistral_fine-tuned"
+
+ print(f"Loading model: {MODEL_NAME}")
+
+ print("Loading tokenizer (slow tokenizer)...")
+ # The merged model ships its own tokenizer files; load them with use_fast=False
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=False)
+
+ if tokenizer.pad_token is None:
+     tokenizer.pad_token = tokenizer.eos_token
+
+ print("Tokenizer loaded successfully!")
+
+ print("Loading model weights...")
+ # 8-bit quantization so the model fits in ~16GB of RAM
+ quantization_config = BitsAndBytesConfig(
+     load_in_8bit=True,
+     llm_int8_threshold=6.0
+ )
+
+ model = AutoModelForCausalLM.from_pretrained(
+     MODEL_NAME,
+     quantization_config=quantization_config,
+     device_map="auto",
+     low_cpu_mem_usage=True,
+     trust_remote_code=True
+ )
+ print("Model loaded successfully!")
+
+
+ @app.route('/')
+ def index():
+     return send_from_directory('static', 'index.html')
+
+
+ @app.route('/api/generate', methods=['POST'])
+ def generate():
+     try:
+         data = request.json
+
+         if not data or 'prompt' not in data:
+             return jsonify({'error': 'Missing prompt in request body'}), 400
+
+         prompt = data['prompt']
+         max_new_tokens = data.get('max_new_tokens', 256)
+         temperature = data.get('temperature', 0.7)
+         top_p = data.get('top_p', 0.9)
+
+         inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+         with torch.no_grad():
+             outputs = model.generate(
+                 **inputs,
+                 max_new_tokens=max_new_tokens,
+                 temperature=temperature,
+                 top_p=top_p,
+                 do_sample=True,
+                 pad_token_id=tokenizer.eos_token_id
+             )
+
+         # Decode the full output
+         full_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+         # Remove the prompt from the output to return only the generated text
+         generated_text = full_output[len(prompt):].strip()
+
+         return jsonify({
+             'generated_text': generated_text,
+             'prompt': prompt
+         })
+
+     except Exception as e:
+         print(f"Error during generation: {e}")
+         return jsonify({'error': str(e)}), 500
+
+
+ @app.route('/api/health', methods=['GET'])
+ def health():
+     return jsonify({
+         'status': 'ok',
+         'model': MODEL_NAME,
+         'device': str(model.device)
+     })
+
+
+ if __name__ == '__main__':
+     port = int(os.environ.get('PORT', 7860))
+     app.run(host='0.0.0.0', port=port, debug=False)
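The prompt-removal step in `generate()` above slices by character count (`full_output[len(prompt):]`), which assumes `tokenizer.decode` reproduces the prompt verbatim at the start of the output; decoding can normalize whitespace or special tokens and break that assumption. A slightly more defensive sketch of the same idea (an illustration, not what app.py currently does):

```python
def strip_prompt(full_output: str, prompt: str) -> str:
    """Return only the newly generated text, dropping the echoed prompt."""
    if full_output.startswith(prompt):
        # Decoding reproduced the prompt exactly: safe to slice it off
        return full_output[len(prompt):].strip()
    # Decoding normalized the prompt (whitespace, special tokens), so a
    # blind character slice could cut generated text; return everything
    return full_output.strip()
```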
requirements.txt ADDED
@@ -0,0 +1,10 @@
+ flask
+ flask-cors
+ transformers
+ torch
+ accelerate
+ numpy
+ protobuf
+ sentencepiece
+ bitsandbytes
+ scipy
static/app.js ADDED
@@ -0,0 +1,152 @@
+ // API Configuration
+ const API_URL = window.location.origin; // Use same origin (works locally and on HF Spaces)
+
+ // DOM Elements
+ const promptEl = document.getElementById('prompt');
+ const generateBtn = document.getElementById('generateBtn');
+ const statusEl = document.getElementById('status');
+ const outputEl = document.getElementById('output');
+
+ const maxTokensEl = document.getElementById('maxTokens');
+ const temperatureEl = document.getElementById('temperature');
+ const topPEl = document.getElementById('topP');
+
+ const maxTokensValueEl = document.getElementById('maxTokensValue');
+ const temperatureValueEl = document.getElementById('temperatureValue');
+ const topPValueEl = document.getElementById('topPValue');
+
+ // Update slider value displays
+ maxTokensEl.addEventListener('input', (e) => {
+   maxTokensValueEl.textContent = e.target.value;
+ });
+
+ temperatureEl.addEventListener('input', (e) => {
+   temperatureValueEl.textContent = parseFloat(e.target.value).toFixed(1);
+ });
+
+ topPEl.addEventListener('input', (e) => {
+   topPValueEl.textContent = parseFloat(e.target.value).toFixed(2);
+ });
+
+ // Generate button click handler
+ generateBtn.addEventListener('click', async () => {
+   const prompt = promptEl.value.trim();
+
+   console.log('=== GENERATION STARTED ===');
+   console.log('📝 Step 1: User clicked Generate button');
+   console.log('📝 Timestamp:', new Date().toLocaleTimeString());
+
+   if (!prompt) {
+     console.log('❌ No prompt entered - aborting');
+     outputEl.textContent = 'Please enter a prompt';
+     outputEl.className = 'output error';
+     return;
+   }
+
+   console.log('📝 Step 2: Prompt validation passed');
+   console.log('📝 Prompt text:', prompt.substring(0, 100) + (prompt.length > 100 ? '...' : ''));
+   console.log('📝 Prompt length:', prompt.length, 'characters');
+   console.log('📝 Parameters:', {
+     max_new_tokens: parseInt(maxTokensEl.value),
+     temperature: parseFloat(temperatureEl.value),
+     top_p: parseFloat(topPEl.value)
+   });
+
+   // Disable button and show loading with animation
+   generateBtn.disabled = true;
+   generateBtn.textContent = '⏳ Generating...';
+   statusEl.textContent = '⏳';
+   outputEl.textContent = '🔄 Your model is thinking...\n\nThis may take 10-30 seconds on CPU.\nPlease wait...';
+   outputEl.className = 'output';
+
+   console.log('📝 Step 3: UI updated - button disabled, loading message shown');
+
+   try {
+     console.log('📝 Step 4: Preparing API request to /api/generate');
+     console.log('📝 API URL:', `${API_URL}/api/generate`);
+
+     const requestStartTime = Date.now();
+     console.log('📝 Step 5: Sending POST request...', new Date().toLocaleTimeString());
+
+     const response = await fetch(`${API_URL}/api/generate`, {
+       method: 'POST',
+       headers: {
+         'Content-Type': 'application/json',
+       },
+       body: JSON.stringify({
+         prompt: prompt,
+         max_new_tokens: parseInt(maxTokensEl.value),
+         temperature: parseFloat(temperatureEl.value),
+         top_p: parseFloat(topPEl.value)
+       })
+     });
+
+     const requestEndTime = Date.now();
+     const requestDuration = ((requestEndTime - requestStartTime) / 1000).toFixed(2);
+
+     console.log('📝 Step 6: Response received from backend!', new Date().toLocaleTimeString());
+     console.log('📝 Response status:', response.status, response.statusText);
+     console.log('📝 Response time:', requestDuration, 'seconds');
+     console.log('📝 Response OK?', response.ok);
+
+     console.log('📝 Step 7: Parsing JSON response...');
+     const data = await response.json();
+     console.log('📝 JSON parsed successfully');
+
+     if (!response.ok) {
+       console.error('❌ Backend returned error status');
+       console.error('❌ Error from backend:', data.error);
+       throw new Error(data.error || `HTTP ${response.status}`);
+     }
+
+     console.log('📝 Step 8: Generation successful!');
+     console.log('📝 Generated text length:', data.generated_text?.length || 0, 'characters');
+     console.log('📝 Generated text preview:', data.generated_text?.substring(0, 150) + '...');
+
+     // Display result - show only the generated text without the prompt
+     outputEl.textContent = data.generated_text || 'No output generated';
+     outputEl.className = 'output';
+     statusEl.textContent = '✅';
+
+     console.log('📝 Step 9: UI updated with generated text');
+     console.log('=== GENERATION COMPLETED SUCCESSFULLY ===');
+     console.log('⏱️ Total time:', requestDuration, 'seconds\n');
+
+   } catch (error) {
+     console.error('❌ ERROR OCCURRED:');
+     console.error('❌ Error type:', error.name);
+     console.error('❌ Error message:', error.message);
+     console.error('❌ Stack trace:', error.stack);
+
+     outputEl.textContent = `Error: ${error.message}`;
+     outputEl.className = 'output error';
+     statusEl.textContent = '❌';
+
+     console.log('=== GENERATION FAILED ===\n');
+   } finally {
+     generateBtn.disabled = false;
+     generateBtn.textContent = '✨ Generate';
+     console.log('📝 Step 10: Button re-enabled and reset');
+   }
+ });
+
+ // Allow Ctrl+Enter in the prompt box to trigger generation
+ promptEl.addEventListener('keydown', (e) => {
+   if (e.ctrlKey && e.key === 'Enter') {
+     generateBtn.click();
+   }
+ });
+
+ // Health check on load
+ async function checkHealth() {
+   try {
+     const response = await fetch(`${API_URL}/api/health`);
+     const data = await response.json();
+     console.log('API Health:', data);
+   } catch (error) {
+     console.warn('API health check failed:', error);
+   }
+ }
+
+ // Run health check when page loads
+ checkHealth();
static/index.html ADDED
@@ -0,0 +1,60 @@
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+   <meta charset="UTF-8">
+   <meta name="viewport" content="width=device-width, initial-scale=1.0">
+   <title>Mistral Fine-tuned Model</title>
+   <link rel="stylesheet" href="/static/style.css">
+ </head>
+ <body>
+   <div class="container">
+     <header>
+       <h1>🤖 Mistral Fine-tuned Model</h1>
+       <p>Model: <code>KASHH-4/mistral_fine-tuned</code></p>
+     </header>
+
+     <main>
+       <div class="prompt-section">
+         <label for="prompt">Enter your prompt:</label>
+         <textarea id="prompt" rows="6" placeholder="Write a short story about a robot learning to paint..."></textarea>
+       </div>
+
+       <div class="settings-section">
+         <details>
+           <summary>⚙️ Advanced Settings</summary>
+           <div class="settings-grid">
+             <div class="setting">
+               <label for="maxTokens">Max Tokens: <span id="maxTokensValue">256</span></label>
+               <input type="range" id="maxTokens" min="50" max="512" value="256">
+             </div>
+             <div class="setting">
+               <label for="temperature">Temperature: <span id="temperatureValue">0.7</span></label>
+               <input type="range" id="temperature" min="0.1" max="2.0" step="0.1" value="0.7">
+             </div>
+             <div class="setting">
+               <label for="topP">Top P: <span id="topPValue">0.9</span></label>
+               <input type="range" id="topP" min="0.1" max="1.0" step="0.05" value="0.9">
+             </div>
+           </div>
+         </details>
+       </div>
+
+       <div class="button-section">
+         <button id="generateBtn" class="generate-btn">✨ Generate</button>
+         <span id="status" class="status"></span>
+       </div>
+
+       <div class="output-section">
+         <h3>Generated Output:</h3>
+         <div id="output" class="output"></div>
+       </div>
+     </main>
+
+     <footer>
+       <p>API Endpoints: <code>POST /api/generate</code> | <code>GET /api/health</code></p>
+     </footer>
+   </div>
+
+   <script src="/static/app.js"></script>
+ </body>
+ </html>
static/style.css ADDED
@@ -0,0 +1,211 @@
+ * {
+   margin: 0;
+   padding: 0;
+   box-sizing: border-box;
+ }
+
+ body {
+   font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
+   background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+   min-height: 100vh;
+   padding: 20px;
+ }
+
+ .container {
+   max-width: 900px;
+   margin: 0 auto;
+   background: white;
+   border-radius: 16px;
+   box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
+   overflow: hidden;
+ }
+
+ header {
+   background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+   color: white;
+   padding: 30px;
+   text-align: center;
+ }
+
+ header h1 {
+   font-size: 2.5em;
+   margin-bottom: 10px;
+ }
+
+ header p {
+   opacity: 0.9;
+   font-size: 1.1em;
+ }
+
+ header code {
+   background: rgba(255, 255, 255, 0.2);
+   padding: 4px 8px;
+   border-radius: 4px;
+ }
+
+ main {
+   padding: 30px;
+ }
+
+ .prompt-section {
+   margin-bottom: 20px;
+ }
+
+ .prompt-section label {
+   display: block;
+   font-weight: 600;
+   margin-bottom: 8px;
+   color: #333;
+ }
+
+ #prompt {
+   width: 100%;
+   padding: 15px;
+   border: 2px solid #e0e0e0;
+   border-radius: 8px;
+   font-size: 1em;
+   font-family: inherit;
+   resize: vertical;
+   transition: border-color 0.3s;
+ }
+
+ #prompt:focus {
+   outline: none;
+   border-color: #667eea;
+ }
+
+ .settings-section {
+   margin-bottom: 20px;
+ }
+
+ details {
+   border: 1px solid #e0e0e0;
+   border-radius: 8px;
+   padding: 15px;
+ }
+
+ summary {
+   cursor: pointer;
+   font-weight: 600;
+   color: #667eea;
+   user-select: none;
+ }
+
+ summary:hover {
+   color: #764ba2;
+ }
+
+ .settings-grid {
+   display: grid;
+   grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+   gap: 20px;
+   margin-top: 15px;
+ }
+
+ .setting label {
+   display: block;
+   margin-bottom: 8px;
+   font-weight: 500;
+   color: #555;
+ }
+
+ .setting input[type="range"] {
+   width: 100%;
+   cursor: pointer;
+ }
+
+ .button-section {
+   display: flex;
+   align-items: center;
+   gap: 15px;
+   margin-bottom: 30px;
+ }
+
+ .generate-btn {
+   background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+   color: white;
+   border: none;
+   padding: 15px 40px;
+   font-size: 1.1em;
+   font-weight: 600;
+   border-radius: 8px;
+   cursor: pointer;
+   transition: transform 0.2s, box-shadow 0.2s;
+ }
+
+ .generate-btn:hover {
+   transform: translateY(-2px);
+   box-shadow: 0 5px 15px rgba(102, 126, 234, 0.4);
+ }
+
+ .generate-btn:active {
+   transform: translateY(0);
+ }
+
+ .generate-btn:disabled {
+   opacity: 0.6;
+   cursor: not-allowed;
+ }
+
+ .status {
+   font-size: 1.5em;
+ }
+
+ .output-section h3 {
+   color: #333;
+   margin-bottom: 15px;
+ }
+
+ .output {
+   background: #f8f9fa;
+   border: 2px solid #e0e0e0;
+   border-radius: 8px;
+   padding: 20px;
+   min-height: 150px;
+   font-family: 'Courier New', monospace;
+   white-space: pre-wrap;
+   word-wrap: break-word;
+   line-height: 1.6;
+   color: #333;
+ }
+
+ .output.empty {
+   color: #999;
+   font-style: italic;
+ }
+
+ .output.error {
+   color: #dc3545;
+   background: #fff5f5;
+   border-color: #dc3545;
+ }
+
+ footer {
+   background: #f8f9fa;
+   padding: 20px;
+   text-align: center;
+   color: #666;
+   font-size: 0.9em;
+ }
+
+ footer code {
+   background: white;
+   padding: 4px 8px;
+   border-radius: 4px;
+   border: 1px solid #e0e0e0;
+ }
+
+ @media (max-width: 768px) {
+   header h1 {
+     font-size: 2em;
+   }
+
+   .settings-grid {
+     grid-template-columns: 1fr;
+   }
+
+   .button-section {
+     flex-direction: column;
+     align-items: stretch;
+   }
+ }