DarkNeuron-AI committed
Commit a4c8c42 · verified · 1 Parent(s): 8432fe8

Upload 6 files

Files changed (6)
  1. Dockerfile (1) +31 -0
  2. README (1).md +398 -0
  3. app (2).py +110 -0
  4. gitattributes (1) +35 -0
  5. index (2).html +374 -0
  6. requirements (1).txt +7 -0
Dockerfile (1) ADDED
@@ -0,0 +1,31 @@
+ # Use the official Python 3.9 slim image
+ FROM python:3.9-slim
+
+ # Create a user with UID 1000 (required for Hugging Face Spaces security)
+ RUN useradd -m -u 1000 user
+ USER user
+
+ # Add local bin to PATH so pip-installed packages are found
+ ENV PATH="/home/user/.local/bin:$PATH"
+
+ # Set the working directory
+ WORKDIR /app
+
+ # Copy the requirements file first to leverage the Docker layer cache
+ COPY --chown=user ./requirements.txt requirements.txt
+
+ # Step 1: Install the CPU-only PyTorch build (smaller image, better compatibility).
+ # torch 2.4.0 is pinned to resolve conflicts with newer transformers releases.
+ RUN pip install --no-cache-dir torch==2.4.0 --index-url https://download.pytorch.org/whl/cpu
+
+ # Step 2: Install the rest of the dependencies from requirements.txt
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy the rest of the application code
+ COPY --chown=user . /app
+
+ # Expose the port used by Hugging Face Spaces
+ EXPOSE 7860
+
+ # Start the application with Uvicorn
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
README (1).md ADDED
@@ -0,0 +1,398 @@
+ ---
+ title: Humour 0.5B
+ emoji: 🐨
+ colorFrom: pink
+ colorTo: blue
+ sdk: docker
+ pinned: false
+ ---
+ # 🧠 DNAI Humour Chatbot - Interactive Web Interface
+
+ A polished, fully functional chat interface for the **dnai-humour-0.5B-instruct** model. Chat with a witty, lightweight AI assistant that's fast, friendly, and surprisingly capable.
+
+ ![DNAI Humour](https://via.placeholder.com/1200x300/667eea/ffffff?text=DNAI+Humour+Chatbot)
+
+ ---
+
+ ## ✨ Features
+
+ - 💬 **Real-time Chat** - Smooth, responsive conversation interface
+ - 🎨 **Beautiful UI** - Modern design with dark/light mode
+ - ⚡ **Fast Responses** - Optimized for a 0.5B-parameter model
+ - 🎭 **Personality** - Witty and helpful, not robotic
+ - 📱 **Responsive** - Works on all devices
+ - 🧹 **Clean UX** - Clear conversations, suggestions, timestamps
+
+ ---
+
+ ## 🚀 Quick Start
+
+ ### Prerequisites
+ - Python 3.8+
+ - CUDA-capable GPU (recommended) or CPU
+ - 2GB+ VRAM (GPU) or 4GB+ RAM (CPU)
+
+ ### Installation
+
+ 1. **Clone the Space**
+    ```bash
+    git clone https://huggingface.co/spaces/YOUR-USERNAME/dnai-humour-chatbot
+    cd dnai-humour-chatbot
+    ```
+
+ 2. **Install Dependencies**
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ 3. **Run the Application**
+    ```bash
+    uvicorn app:app --host 0.0.0.0 --port 7860
+    ```
+
+ 4. **Open Browser**
+    ```
+    http://localhost:7860
+    ```
+
+ ---
+
+ ## 🌐 Deploy to Hugging Face Spaces
+
+ ### Step 1: Create Space
+
+ 1. Go to [Hugging Face](https://huggingface.co/)
+ 2. Click **New Space**
+ 3. Settings:
+    - **Name**: `dnai-humour-chatbot`
+    - **SDK**: Docker
+    - **Hardware**: T4 Small (or CPU Basic for testing)
+    - **Visibility**: Public
+
+ ### Step 2: Upload Files
+
+ Upload these files to your Space:
+ ```
+ dnai-humour-chatbot/
+ ├── app.py
+ ├── requirements.txt
+ ├── index.html
+ ├── README.md
+ └── Dockerfile (optional)
+ ```
+
+ ### Step 3: Create Dockerfile (if needed)
+
+ ```dockerfile
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Install dependencies
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy application
+ COPY . .
+
+ # Expose port
+ EXPOSE 7860
+
+ # Run
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
+ ```
+
+ ### Step 4: Wait for Build
+
+ - Hugging Face builds your Space automatically
+ - Check the logs for any errors
+ - The model downloads on first startup (~500MB)
+ - Once the status reads "Running", your chatbot is live! 🎉
+
+ ---
+
+ ## 📁 File Structure
+
+ ```
+ dnai-humour-chatbot/
+ │
+ ├── app.py             # FastAPI backend with model inference
+ │   ├── /api/chat      # POST - Send messages
+ │   ├── /api/info      # GET  - Model information
+ │   ├── /health        # GET  - Health check
+ │   └── /api/reset     # POST - Reset conversation
+ │
+ ├── index.html         # React-based chat interface
+ │   ├── Message history
+ │   ├── Dark/light mode
+ │   ├── Typing indicators
+ │   └── Quick suggestions
+ │
+ ├── requirements.txt   # Python dependencies
+ │   ├── transformers
+ │   ├── torch
+ │   ├── fastapi
+ │   └── uvicorn
+ │
+ └── README.md          # This file
+ ```
+
+ ---
+
+ ## 🔌 API Documentation
+
+ ### POST `/api/chat`
+
+ **Request:**
+ ```json
+ {
+   "messages": [
+     {
+       "role": "user",
+       "content": "Tell me a joke"
+     }
+   ],
+   "temperature": 0.7,
+   "max_tokens": 256
+ }
+ ```
+
+ **Response:**
+ ```json
+ {
+   "response": "Why don't scientists trust atoms? Because they make up everything! 😄",
+   "model": "DarkNeuron-AI/dnai-humour-0.5B-instruct",
+   "tokens_used": 45
+ }
+ ```
+
+ ### GET `/api/info`
+
+ **Response:**
+ ```json
+ {
+   "model_name": "DNAI Humour 0.5B Instruct",
+   "version": "1.0",
+   "base_model": "Qwen2.5-0.5B-Instruct",
+   "parameters": "~0.5 Billion",
+   "capabilities": [
+     "Instruction following",
+     "Conversational AI",
+     "Light humor",
+     "Low-latency responses"
+   ]
+ }
+ ```
+
+ ---
+
+ ## 🎨 UI Features
+
+ ### Chat Interface
+ - Clean, modern design
+ - Message bubbles with timestamps
+ - User/assistant avatars
+ - Smooth animations
+
+ ### Dark/Light Mode
+ - Toggle between themes
+ - Smooth transitions
+ - Persistent preferences (client-side)
+
+ ### Smart Suggestions
+ - Quick-start prompts
+ - Contextual examples
+ - One-click input
+
+ ### Typing Indicators
+ - Real-time feedback
+ - Loading animations
+ - Response timing
+
+ ---
+
+ ## ⚙️ Configuration
+
+ ### Model Parameters (app.py)
+
+ ```python
+ MODEL_NAME = "DarkNeuron-AI/dnai-humour-0.5B-instruct"
+ MAX_LENGTH = 512   # Context window
+ TEMPERATURE = 0.7  # Creativity (0.0-1.0)
+ TOP_P = 0.9        # Nucleus sampling
+ TOP_K = 50         # Top-k sampling
+ ```
+
+ ### Generation Settings
+
+ Adjust per `/api/chat` request:
+ - `temperature`: 0.1 (focused) to 1.0 (creative)
+ - `max_tokens`: 50 (short) to 512 (long)
+ - `stream`: true/false (streaming support)
+
+ ---
+
+ ## 🐛 Troubleshooting
+
+ ### Issue: Model not loading
+
+ **Symptoms**: 503 errors, "Model not loaded"
+
+ **Solutions**:
+ ```bash
+ # Check CUDA availability
+ python -c "import torch; print(torch.cuda.is_available())"
+
+ # Verify the model downloads correctly
+ python -c "from transformers import AutoModelForCausalLM; AutoModelForCausalLM.from_pretrained('DarkNeuron-AI/dnai-humour-0.5B-instruct')"
+
+ # Check logs
+ uvicorn app:app --log-level debug
+ ```
+
+ ### Issue: Out of memory
+
+ **Solutions**:
+ 1. Reduce `max_tokens` in generation
+ 2. Use CPU instead of GPU (slower, but works)
+ 3. Enable model quantization (INT8)
+
+ ```python
+ # In app.py, modify model loading:
+ model = AutoModelForCausalLM.from_pretrained(
+     MODEL_NAME,
+     load_in_8bit=True,  # Enable INT8 quantization
+     device_map="auto"
+ )
+ ```
+
+ ### Issue: Slow responses
+
+ **Solutions**:
+ 1. Use a GPU instead of the CPU
+ 2. Reduce `max_tokens`
+ 3. Lower `temperature` for faster sampling
+ 4. Use a smaller batch size
+
+ ---
+
+ ## 📊 Performance Benchmarks
+
+ | Hardware | Response Time | Memory Usage |
+ |----------|---------------|--------------|
+ | T4 GPU   | ~1-2s         | ~2GB VRAM    |
+ | CPU      | ~5-10s        | ~4GB RAM     |
+ | A10G GPU | ~0.5-1s       | ~2GB VRAM    |
+
+ ---
+
+ ## 🎯 Use Cases
+
+ - **Educational Chatbots** - Learning companions
+ - **Personal Assistants** - Quick help & info
+ - **Code Helpers** - Programming Q&A
+ - **Creative Writing** - Brainstorming & ideas
+ - **General Chat** - Friendly conversation
+
+ ---
+
+ ## 🚫 Limitations
+
+ - **Not for production medical/legal advice**
+ - **Limited context window** (512 tokens)
+ - **0.5B parameters** - expect occasional mistakes
+ - **No long-term memory** - each conversation is independent
+ - **English-focused** - other languages may be limited
+
+ ---
+
+ ## 🔒 Privacy & Safety
+
+ - **No data logging** - conversations are not stored
+ - **Local processing** - your data stays with you
+ - **No tracking** - no analytics or monitoring
+ - **Open source** - fully transparent code
+
+ ---
+
+ ## 🛠️ Advanced Customization
+
+ ### Change Model Personality
+
+ Edit the system prompt in `app.py`:
+
+ ```python
+ def format_chat_prompt(messages: List[Message]) -> str:
+     system_prompt = "You are a helpful, witty AI assistant."
+     formatted_messages = [f"System: {system_prompt}"]
+     # ... rest of code
+ ```
+
+ ### Add Memory
+
+ Implement conversation history storage:
+
+ ```python
+ # Simple in-memory storage
+ conversations = {}
+
+ @app.post("/api/chat")
+ async def chat(request: ChatRequest, user_id: str = "default"):
+     if user_id not in conversations:
+         conversations[user_id] = []
+     # ... use stored history
+ ```
+
+ ### Enable Streaming
+
+ For real-time token-by-token responses:
+
+ ```python
+ from fastapi.responses import StreamingResponse
+
+ @app.post("/api/chat/stream")
+ async def chat_stream(request: ChatRequest):
+     def generate():
+         # Yield tokens as they're generated
+         for token in model.generate_stream(...):
+             yield f"data: {token}\n\n"
+
+     return StreamingResponse(generate(), media_type="text/event-stream")
+ ```
+
+ ---
+
+ ## 📞 Support
+
+ For issues, questions, or suggestions:
+ - Open an issue on GitHub
+ - Contact via Hugging Face
+ - Check the model card: [DarkNeuron-AI/dnai-humour-0.5B-instruct](https://huggingface.co/DarkNeuron-AI/dnai-humour-0.5B-instruct)
+
+ ---
+
+ ## 🙏 Acknowledgments
+
+ - **Base Model**: Qwen2.5-0.5B-Instruct by Alibaba
+ - **Dataset**: OpenAssistant v1
+ - **Framework**: Hugging Face Transformers
+ - **UI**: React + Tailwind CSS
+
+ ---
+
+ ## 📝 License
+
+ MIT License - free to use, modify, and distribute
+
+ ---
+
+ <div align="center">
+
+ **Crafted with ❤️ and passion by @MADARA369Uchiha**
+
+ *Small brain, well-trained. Fast responses, good vibes.* ✨
+
+ [![Hugging Face](https://img.shields.io/badge/🤗-Hugging%20Face-yellow)](https://huggingface.co/DarkNeuron-AI)
+ [![GitHub](https://img.shields.io/badge/GitHub-@Madara369Uchiha-black)](https://github.com/Madara369Uchiha)
+
+ </div>
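For quick smoke-testing of the `/api/chat` endpoint documented in the README, a minimal client-side sketch can build the request body. This is a hedged illustration, not part of the committed code: the helper name `build_chat_payload` is hypothetical, and the URL in the usage note assumes a local run on port 7860.

```python
import json

def build_chat_payload(history, user_text, temperature=0.7, max_tokens=256):
    # Shape mirrors the documented POST /api/chat request body:
    # prior turns plus the new user message, with generation settings.
    return {
        "messages": history + [{"role": "user", "content": user_text}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_chat_payload([], "Tell me a joke")
print(json.dumps(payload))
```

The payload can then be sent with e.g. `requests.post("http://localhost:7860/api/chat", json=payload)` and the reply read from the `response` field of the returned JSON.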
app (2).py ADDED
@@ -0,0 +1,110 @@
+ from fastapi import FastAPI, HTTPException
+ from fastapi.middleware.cors import CORSMiddleware
+ from fastapi.responses import HTMLResponse
+ from pydantic import BaseModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+ from pathlib import Path
+ from typing import List, Optional
+
+ app = FastAPI(title="DNAI Humour Chatbot API", version="1.1")
+
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ # Global variables
+ model = None
+ tokenizer = None
+ MODEL_NAME = "DarkNeuron-AI/dnai-humour-0.5B-instruct"
+
+ @app.on_event("startup")
+ async def load_model():
+     global model, tokenizer
+     try:
+         print(f"🔄 Loading {MODEL_NAME} on CPU...")
+         tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
+         # Load with low CPU memory usage
+         model = AutoModelForCausalLM.from_pretrained(
+             MODEL_NAME,
+             torch_dtype=torch.float32,
+             device_map="cpu",
+             low_cpu_mem_usage=True
+         )
+         model.eval()
+         print("✅ Model loaded on CPU successfully!")
+     except Exception as e:
+         print(f"❌ Error loading model: {str(e)}")
+         raise
+
+ class Message(BaseModel):
+     role: str
+     content: str
+
+ # Request model, updated to accept generation settings
+ class ChatRequest(BaseModel):
+     messages: List[Message]
+     temperature: Optional[float] = 0.7
+     top_p: Optional[float] = 0.9
+     max_tokens: Optional[int] = 256
+     system_prompt: Optional[str] = "You are DNAI, a helpful and humorous AI assistant."
+
+ def format_chat_prompt(messages: List[Message], system_prompt: str) -> str:
+     # Prepend the system prompt, then replay the conversation turns
+     formatted = f"System: {system_prompt}\n"
+     for msg in messages:
+         if msg.role == "user":
+             formatted += f"User: {msg.content}\n"
+         elif msg.role == "assistant":
+             formatted += f"Assistant: {msg.content}\n"
+     formatted += "Assistant:"
+     return formatted
+
+ @app.get("/", response_class=HTMLResponse)
+ async def root():
+     html_path = Path(__file__).parent / "index.html"
+     if html_path.exists():
+         with open(html_path, 'r', encoding='utf-8') as f:
+             return HTMLResponse(content=f.read(), status_code=200)
+     return "<h1>Error: index.html not found</h1>"
+
+ @app.post("/api/chat")
+ async def chat(request: ChatRequest):
+     if model is None:
+         raise HTTPException(status_code=503, detail="Model loading")
+
+     try:
+         # Pass the system prompt explicitly
+         prompt = format_chat_prompt(request.messages, request.system_prompt)
+
+         inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)
+
+         with torch.no_grad():
+             outputs = model.generate(
+                 **inputs,
+                 max_new_tokens=request.max_tokens,
+                 temperature=request.temperature,
+                 top_p=request.top_p,
+                 do_sample=True,
+                 pad_token_id=tokenizer.eos_token_id
+             )
+
+         generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+         # Robust extraction: drop the echoed prompt, cut any hallucinated next turn
+         response = generated_text[len(prompt):].strip()
+         if "User:" in response:
+             response = response.split("User:")[0].strip()
+
+         return {"response": response}
+
+     except Exception as e:
+         print(f"Error: {e}")
+         raise HTTPException(status_code=500, detail=str(e))
+
+ if __name__ == "__main__":
+     import uvicorn
+     uvicorn.run(app, host="0.0.0.0", port=7860)
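The prompt template and post-processing in `app (2).py` can be sanity-checked without loading the model. The sketch below mirrors that logic standalone, using plain dicts in place of the Pydantic `Message` objects; `extract_response` is a hypothetical name for the inline extraction step in the `/api/chat` handler.

```python
def format_chat_prompt(messages, system_prompt):
    # Mirrors app.py: system line first, then alternating turns,
    # ending with an open "Assistant:" cue for generation.
    formatted = f"System: {system_prompt}\n"
    for msg in messages:
        if msg["role"] == "user":
            formatted += f"User: {msg['content']}\n"
        elif msg["role"] == "assistant":
            formatted += f"Assistant: {msg['content']}\n"
    formatted += "Assistant:"
    return formatted

def extract_response(generated_text, prompt):
    # Mirrors app.py's post-processing: drop the echoed prompt and
    # cut off any hallucinated next "User:" turn.
    response = generated_text[len(prompt):].strip()
    if "User:" in response:
        response = response.split("User:")[0].strip()
    return response

prompt = format_chat_prompt([{"role": "user", "content": "Hi"}], "Be witty.")
print(prompt)
```

Running this prints `System: Be witty.` followed by `User: Hi` and the trailing `Assistant:` cue, which is exactly the text the backend feeds to `tokenizer(...)`.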
gitattributes (1) ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
index (2).html ADDED
@@ -0,0 +1,374 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta charset="UTF-8">
5
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
6
+ <title>DNAI Humour - Advanced</title>
7
+
8
+ <script crossorigin src="https://unpkg.com/react@18/umd/react.production.min.js"></script>
9
+ <script crossorigin src="https://unpkg.com/react-dom@18/umd/react-dom.production.min.js"></script>
10
+ <script src="https://unpkg.com/@babel/standalone/babel.min.js"></script>
11
+ <script src="https://cdn.tailwindcss.com"></script>
12
+
13
+ <style>
14
+ @import url('https://fonts.googleapis.com/css2?family=Space+Grotesk:wght@300;400;500;600;700&family=Inter:wght@300;400;500;600&display=swap');
15
+
16
+ * { font-family: 'Inter', sans-serif; -webkit-tap-highlight-color: transparent; }
17
+ h1, h2, h3, button { font-family: 'Space Grotesk', sans-serif; }
18
+
19
+ /* Animations */
20
+ @keyframes blob {
21
+ 0% { transform: translate(0px, 0px) scale(1); }
22
+ 33% { transform: translate(30px, -50px) scale(1.1); }
23
+ 66% { transform: translate(-20px, 20px) scale(0.9); }
24
+ 100% { transform: translate(0px, 0px) scale(1); }
25
+ }
26
+
27
+ @keyframes twinkle {
28
+ 0%, 100% { opacity: 0.2; transform: scale(1); }
29
+ 50% { opacity: 1; transform: scale(1.5); }
30
+ }
31
+
32
+ .animate-blob { animation: blob 7s infinite; }
33
+ .animation-delay-2000 { animation-delay: 2s; }
34
+ .animation-delay-4000 { animation-delay: 4s; }
35
+
36
+ /* Custom Scrollbar */
37
+ .custom-scrollbar::-webkit-scrollbar { width: 4px; }
38
+ .custom-scrollbar::-webkit-scrollbar-track { background: transparent; }
39
+ .custom-scrollbar::-webkit-scrollbar-thumb { background: rgba(156, 163, 175, 0.5); border-radius: 4px; }
40
+
41
+ /* Range Slider Styling */
42
+ input[type=range] {
43
+ -webkit-appearance: none;
44
+ width: 100%;
45
+ background: transparent;
46
+ }
47
+ input[type=range]::-webkit-slider-thumb {
48
+ -webkit-appearance: none;
49
+ height: 16px;
50
+ width: 16px;
51
+ border-radius: 50%;
52
+ background: #8b5cf6;
53
+ cursor: pointer;
54
+ margin-top: -6px;
55
+ box-shadow: 0 0 10px rgba(139, 92, 246, 0.5);
56
+ }
57
+ input[type=range]::-webkit-slider-runnable-track {
58
+ width: 100%;
59
+ height: 4px;
60
+ cursor: pointer;
61
+ background: rgba(255, 255, 255, 0.2);
62
+ border-radius: 2px;
63
+ }
64
+
65
+ .glass-panel {
66
+ background: rgba(255, 255, 255, 0.05);
67
+ backdrop-filter: blur(16px);
68
+ -webkit-backdrop-filter: blur(16px);
69
+ border: 1px solid rgba(255, 255, 255, 0.1);
70
+ }
71
+
72
+ .glass-panel-light {
73
+ background: rgba(255, 255, 255, 0.7);
74
+ backdrop-filter: blur(16px);
75
+ border: 1px solid rgba(0, 0, 0, 0.05);
76
+ }
77
+ </style>
78
+ </head>
79
+ <body>
80
+ <div id="root"></div>
81
+
82
+ <script type="text/babel">
83
+ const { useState, useEffect, useRef } = React;
84
+
85
+ // --- Icons ---
86
+ const Icon = ({ path, className = "w-5 h-5" }) => (
87
+ <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2" strokeLinecap="round" strokeLinejoin="round" className={className}>{path}</svg>
88
+ );
89
+ const Icons = {
90
+ Brain: (p) => <Icon {...p} path={<path d="M9.5 2A2.5 2.5 0 0 1 12 4.5v15a2.5 2.5 0 0 1-4.96.44 2.5 2.5 0 0 1-2.96-3.08 3 3 0 0 1-.34-5.58 2.5 2.5 0 0 1 1.32-4.24 2.5 2.5 0 0 1 1.98-3A2.5 2.5 0 0 1 9.5 2ZM14.5 2a2.5 2.5 0 0 0-2.5 2.5v15a2.5 2.5 0 0 0 4.96.44 2.5 2.5 0 0 0 2.96-3.08 3 3 0 0 0 .34-5.58 2.5 2.5 0 0 0-1.32-4.24 2.5 2.5 0 0 0-1.98-3A2.5 2.5 0 0 0 14.5 2Z"/>} />,
91
+ Send: (p) => <Icon {...p} path={<><line x1="22" y1="2" x2="11" y2="13"></line><polygon points="22 2 15 22 11 13 2 9 22 2"></polygon></>} />,
92
+ Settings: (p) => <Icon {...p} path={<><circle cx="12" cy="12" r="3"></circle><path d="M19.4 15a1.65 1.65 0 0 0 .33 1.82l.06.06a2 2 0 0 1 0 2.83 2 2 0 0 1-2.83 0l-.06-.06a1.65 1.65 0 0 0-1.82-.33 1.65 1.65 0 0 0-1 1.51V21a2 2 0 0 1-2 2 2 2 0 0 1-2-2v-.09A1.65 1.65 0 0 0 9 19.4a1.65 1.65 0 0 0-1.82.33l-.06.06a2 2 0 0 1-2.83 0 2 2 0 0 1 0-2.83l.06-.06a1.65 1.65 0 0 0 .33-1.82 1.65 1.65 0 0 0-1.51-1H3a2 2 0 0 1-2-2 2 2 0 0 1 2-2h.09A1.65 1.65 0 0 0 4.6 9a1.65 1.65 0 0 0-.33-1.82l-.06-.06a2 2 0 0 1 0-2.83 2 2 0 0 1 2.83 0l.06.06a1.65 1.65 0 0 0 1.82.33H9a1.65 1.65 0 0 0 1-1.51V3a2 2 0 0 1 2-2 2 2 0 0 1 2 2v.09a1.65 1.65 0 0 0 1 1.51 1.65 1.65 0 0 0 1.82-.33l.06-.06a2 2 0 0 1 2.83 0 2 2 0 0 1 0 2.83l-.06.06a1.65 1.65 0 0 0-.33 1.82V9a1.65 1.65 0 0 0 1.51 1H21a2 2 0 0 1 2 2 2 2 0 0 1-2 2h-.09a1.65 1.65 0 0 0-1.51 1z"></path></>} />,
93
+ X: (p) => <Icon {...p} path={<><line x1="18" y1="6" x2="6" y2="18"></line><line x1="6" y1="6" x2="18" y2="18"></line></>} />,
94
+ Sun: (p) => <Icon {...p} path={<><circle cx="12" cy="12" r="5"></circle><line x1="12" y1="1" x2="12" y2="3"></line><line x1="12" y1="21" x2="12" y2="23"></line><line x1="4.22" y1="4.22" x2="5.64" y2="5.64"></line><line x1="18.36" y1="18.36" x2="19.78" y2="19.78"></line><line x1="1" y1="12" x2="3" y2="12"></line><line x1="21" y1="12" x2="23" y2="12"></line><line x1="4.22" y1="19.78" x2="5.64" y2="18.36"></line><line x1="18.36" y1="5.64" x2="19.78" y2="4.22"></line></>} />,
95
+ Moon: (p) => <Icon {...p} path={<path d="M21 12.79A9 9 0 1 1 11.21 3 7 7 0 0 0 21 12.79z"></path>} />,
96
+ Refresh: (p) => <Icon {...p} path={<path d="M23 4v6h-6M1 20v-6h6"></path>} />,
97
+ User: (p) => <Icon {...p} path={<path d="M20 21v-2a4 4 0 0 0-4-4H8a4 4 0 0 0-4 4v2"></path>} />
98
+ };
99
+
100
+ // --- Components ---
101
+
102
+ const SettingsPanel = ({ isOpen, onClose, settings, setSettings, darkMode }) => {
103
+ const panelClass = darkMode ? 'bg-slate-900 border-l border-slate-700' : 'bg-white border-l border-gray-200';
104
+ const textClass = darkMode ? 'text-gray-200' : 'text-gray-800';
105
+ const inputBg = darkMode ? 'bg-slate-800 border-slate-700' : 'bg-gray-100 border-gray-200';
106
+
107
+ return (
108
+ <div className={`fixed inset-y-0 right-0 w-full sm:w-80 transform transition-transform duration-300 ease-in-out z-50 ${isOpen ? 'translate-x-0' : 'translate-x-full'} ${panelClass} shadow-2xl`}>
109
+ <div className="p-6 h-full flex flex-col">
110
+ <div className="flex justify-between items-center mb-8">
111
+ <h2 className={`text-xl font-bold ${textClass}`}>Configuration</h2>
112
+ <button onClick={onClose} className={`p-2 rounded-full hover:bg-opacity-10 hover:bg-gray-500 transition-colors ${textClass}`}>
113
+ <Icons.X className="w-6 h-6" />
114
+ </button>
115
+ </div>
116
+
117
+ <div className="space-y-6 flex-1 overflow-y-auto custom-scrollbar pr-2">
118
+ {/* System Prompt */}
119
+ <div>
120
+ <label className={`block text-sm font-medium mb-2 ${textClass}`}>System Persona</label>
121
+ <textarea
122
+ value={settings.systemPrompt}
123
+ onChange={(e) => setSettings({...settings, systemPrompt: e.target.value})}
124
+ className={`w-full p-3 rounded-xl text-sm h-32 resize-none focus:outline-none focus:ring-2 focus:ring-purple-500 ${inputBg} ${textClass}`}
125
+ placeholder="Define how the AI should behave..."
126
+ />
127
+ </div>
128
+
129
+ {/* Temperature */}
130
+ <div>
131
+ <div className="flex justify-between mb-2">
132
+ <label className={`text-sm font-medium ${textClass}`}>Creativity (Temp)</label>
133
+ <span className="text-xs font-mono text-purple-500">{settings.temperature}</span>
134
+ </div>
135
+ <input
136
+ type="range" min="0.1" max="1.5" step="0.1"
137
+ value={settings.temperature}
138
+ onChange={(e) => setSettings({...settings, temperature: parseFloat(e.target.value)})}
139
+ />
140
+ </div>
141
+
142
+ {/* Top P */}
143
+ <div>
144
+ <div className="flex justify-between mb-2">
145
+ <label className={`text-sm font-medium ${textClass}`}>Focus (Top P)</label>
146
+ <span className="text-xs font-mono text-purple-500">{settings.topP}</span>
147
+ </div>
148
+ <input
149
+ type="range" min="0.1" max="1.0" step="0.05"
150
+ value={settings.topP}
151
+ onChange={(e) => setSettings({...settings, topP: parseFloat(e.target.value)})}
152
+ />
153
+ </div>
154
+
155
+ {/* Max Tokens */}
156
+ <div>
157
+ <div className="flex justify-between mb-2">
158
+ <label className={`text-sm font-medium ${textClass}`}>Response Length</label>
159
+ <span className="text-xs font-mono text-purple-500">{settings.maxTokens}</span>
160
+ </div>
161
+ <input
162
+ type="range" min="64" max="1024" step="64"
163
+ value={settings.maxTokens}
164
+ onChange={(e) => setSettings({...settings, maxTokens: parseInt(e.target.value)})}
165
+ />
166
+ </div>
167
+ </div>
168
+
169
+ <div className="mt-6 pt-6 border-t border-gray-700/20">
170
+ <button
171
+ onClick={() => setSettings({
172
+ temperature: 0.7,
173
+ topP: 0.9,
174
+ maxTokens: 256,
175
+ systemPrompt: "You are DNAI, a helpful and humorous AI assistant."
176
+ })}
177
+ className="w-full py-3 rounded-xl text-sm font-medium bg-purple-500/10 text-purple-500 hover:bg-purple-500/20 transition-all"
178
+ >
179
+ Reset to Defaults
180
+ </button>
181
+ </div>
182
+ </div>
183
+ </div>
184
+ );
185
+ };
186
+
187
+ const MainApp = () => {
188
+ const [darkMode, setDarkMode] = useState(true);
189
+ const [isSettingsOpen, setIsSettingsOpen] = useState(false);
190
+ const [settings, setSettings] = useState({
191
+ temperature: 0.7,
192
+ topP: 0.9,
193
+ maxTokens: 256,
194
+ systemPrompt: "You are DNAI, a helpful and humorous AI assistant. You like to make coding puns."
195
+ });
196
+ const [messages, setMessages] = useState([
197
+ { role: 'assistant', content: "System online. DNAI core initialized. How can I help you today?", timestamp: new Date() }
198
+ ]);
199
+ const [input, setInput] = useState('');
200
+ const [loading, setLoading] = useState(false);
201
+ const messagesEndRef = useRef(null);
202
+
203
+ const scrollToBottom = () => messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
204
+ useEffect(scrollToBottom, [messages]);
205
+
206
+ const sendMessage = async () => {
207
+ if (!input.trim() || loading) return;
208
+
209
+ const userMessage = { role: 'user', content: input, timestamp: new Date() };
210
+ setMessages(prev => [...prev, userMessage]);
211
+ setInput('');
212
+ setLoading(true);
213
+
214
+ try {
215
+ const response = await fetch('/api/chat', {
216
+ method: 'POST',
217
+ headers: { 'Content-Type': 'application/json' },
218
+ body: JSON.stringify({
219
+ messages: [...messages, userMessage],
220
+ temperature: settings.temperature,
221
+ top_p: settings.topP,
222
+ max_tokens: settings.maxTokens,
223
+ system_prompt: settings.systemPrompt
224
+ })
225
+ });
226
+
227
+ if (!response.ok) throw new Error('API Error');
228
+ const data = await response.json();
229
+ setMessages(prev => [...prev, { role: 'assistant', content: data.response, timestamp: new Date() }]);
230
+ } catch (error) {
231
+ setMessages(prev => [...prev, { role: 'assistant', content: "โš ๏ธ Error connecting to neural pathways. Please try again.", timestamp: new Date() }]);
232
+ } finally {
233
+ setLoading(false);
234
+ }
235
+ };
236
+
+ // Dynamic Classes
+ const bgClass = darkMode ? 'bg-slate-950' : 'bg-gray-50';
+ const glassClass = darkMode ? 'glass-panel' : 'glass-panel-light';
+ const textClass = darkMode ? 'text-white' : 'text-gray-800';
+
+ return (
+ <div className={`min-h-screen relative overflow-hidden transition-colors duration-500 ${bgClass}`}>
+
+ {/* Animated Background (CSS Only for Performance) */}
+ <div className="absolute inset-0 overflow-hidden pointer-events-none">
+ <div className={`absolute -top-20 -left-20 w-72 h-72 bg-purple-500 rounded-full mix-blend-multiply filter blur-xl opacity-20 animate-blob`}></div>
+ <div className={`absolute top-0 -right-4 w-72 h-72 bg-blue-500 rounded-full mix-blend-multiply filter blur-xl opacity-20 animate-blob animation-delay-2000`}></div>
+ <div className={`absolute -bottom-8 left-20 w-72 h-72 bg-pink-500 rounded-full mix-blend-multiply filter blur-xl opacity-20 animate-blob animation-delay-4000`}></div>
+ {darkMode && Array.from({length: 20}).map((_, i) => (
+ <div key={i} className="absolute bg-white rounded-full animate-[twinkle_4s_ease-in-out_infinite]"
+ style={{
+ top: `${Math.random() * 100}%`, left: `${Math.random() * 100}%`,
+ width: Math.random() * 3 + 'px', height: Math.random() * 3 + 'px',
+ animationDelay: `${Math.random() * 5}s`
+ }}
+ />
+ ))}
+ </div>
+
+ {/* Overlay for Settings */}
+ {isSettingsOpen && <div onClick={() => setIsSettingsOpen(false)} className="fixed inset-0 bg-black/50 backdrop-blur-sm z-40 transition-opacity" />}
+
+ {/* Settings Panel */}
+ <SettingsPanel
+ isOpen={isSettingsOpen}
+ onClose={() => setIsSettingsOpen(false)}
+ settings={settings}
+ setSettings={setSettings}
+ darkMode={darkMode}
+ />
+
+ {/* Main Content */}
+ <div className="relative z-10 flex flex-col h-[100dvh] max-w-5xl mx-auto p-4 md:p-6">
+
+ {/* Header */}
+ <div className="flex justify-between items-center mb-4">
+ <div className="flex items-center gap-3">
+ <div className={`p-2.5 rounded-xl bg-gradient-to-br from-indigo-600 to-purple-600 shadow-lg shadow-purple-500/20`}>
+ <Icons.Brain className="w-6 h-6 text-white" />
+ </div>
+ <div>
+ <h1 className={`text-xl font-bold tracking-tight ${textClass}`}>DNAI <span className="text-purple-500">v1.1</span></h1>
+ </div>
+ </div>
+
+ <div className={`flex items-center gap-2 p-1.5 rounded-2xl ${darkMode ? 'bg-slate-900/50' : 'bg-white/50'} border border-gray-200/10`}>
+ <button onClick={() => setDarkMode(!darkMode)} className={`p-2 rounded-xl transition-all ${darkMode ? 'hover:bg-slate-800 text-yellow-400' : 'hover:bg-gray-200 text-slate-700'}`}>
+ {darkMode ? <Icons.Sun className="w-5 h-5"/> : <Icons.Moon className="w-5 h-5"/>}
+ </button>
+ <div className="w-px h-6 bg-gray-500/20"></div>
+ <button onClick={() => setIsSettingsOpen(true)} className={`p-2 rounded-xl transition-all ${darkMode ? 'hover:bg-slate-800 text-gray-300' : 'hover:bg-gray-200 text-gray-600'}`}>
+ <Icons.Settings className="w-5 h-5"/>
+ </button>
+ </div>
+ </div>
+
+ {/* Chat Area */}
+ <div className={`flex-1 rounded-3xl mb-4 overflow-hidden flex flex-col shadow-2xl border ${glassClass} ${darkMode ? 'border-slate-700/50' : 'border-white/50'}`}>
+ <div className="flex-1 overflow-y-auto p-4 space-y-6 custom-scrollbar">
+ {messages.map((msg, idx) => (
+ <div key={idx} className={`flex gap-4 ${msg.role === 'user' ? 'flex-row-reverse' : ''}`}>
+ <div className={`w-8 h-8 rounded-lg flex-shrink-0 flex items-center justify-center text-xs font-bold shadow-lg
+ ${msg.role === 'user'
+ ? 'bg-gradient-to-br from-pink-500 to-rose-500 text-white'
+ : 'bg-gradient-to-br from-indigo-500 to-blue-600 text-white'}`}>
+ {msg.role === 'user' ? 'YOU' : 'AI'}
+ </div>
+ <div className={`max-w-[85%] md:max-w-[75%]`}>
+ <div className={`p-3.5 rounded-2xl shadow-sm text-sm md:text-base leading-relaxed
+ ${msg.role === 'user'
+ ? 'bg-gradient-to-r from-purple-600 to-indigo-600 text-white rounded-tr-sm'
+ : (darkMode ? 'bg-slate-800 text-slate-200' : 'bg-white text-gray-800') + ' rounded-tl-sm'}`}>
+ {msg.content}
+ </div>
+ <div className={`text-[10px] mt-1 opacity-50 ${textClass} ${msg.role === 'user' ? 'text-right' : 'text-left'}`}>
+ {msg.timestamp.toLocaleTimeString([], {hour: '2-digit', minute:'2-digit'})}
+ </div>
+ </div>
+ </div>
+ ))}
+ {loading && (
+ <div className="flex gap-4">
+ <div className="w-8 h-8 rounded-lg bg-indigo-600 flex items-center justify-center"><Icons.Brain className="w-4 h-4 text-white animate-pulse"/></div>
+ <div className={`p-4 rounded-2xl ${darkMode ? 'bg-slate-800' : 'bg-white'}`}>
+ <div className="flex gap-1.5">
+ {[0, 1, 2].map(i => (
+ <div key={i} className="w-2 h-2 rounded-full bg-indigo-400 animate-bounce" style={{animationDelay: `${i * 0.15}s`}}/>
+ ))}
+ </div>
+ </div>
+ </div>
+ )}
+ <div ref={messagesEndRef} />
+ </div>
+
+ {/* Input Area */}
+ <div className={`p-3 md:p-4 border-t ${darkMode ? 'border-slate-700/50 bg-slate-900/30' : 'border-gray-200/50 bg-white/50'}`}>
+ <div className={`flex items-end gap-2 p-1.5 rounded-2xl border transition-all focus-within:ring-2 focus-within:ring-purple-500/50
+ ${darkMode ? 'bg-slate-800 border-slate-700' : 'bg-white border-gray-200'}`}>
+ <textarea
+ value={input}
+ onChange={(e) => setInput(e.target.value)}
+ onKeyDown={(e) => { if (e.key === 'Enter' && !e.shiftKey) { e.preventDefault(); sendMessage(); } }}
+ placeholder="Type a message..."
+ rows="1"
+ className={`flex-1 bg-transparent border-none focus:ring-0 p-3 max-h-32 resize-none custom-scrollbar ${textClass} placeholder-opacity-50`}
+ />
+ <button
+ onClick={sendMessage}
+ disabled={!input.trim() || loading}
+ className={`p-3 rounded-xl flex-shrink-0 transition-all active:scale-95
+ ${!input.trim() || loading
+ ? 'bg-gray-500/20 text-gray-500 cursor-not-allowed'
+ : 'bg-gradient-to-r from-indigo-500 to-purple-500 text-white shadow-lg shadow-purple-500/25 hover:shadow-purple-500/40'}`}
+ >
+ <Icons.Send className="w-5 h-5" />
+ </button>
+ </div>
+ <div className="text-center mt-2">
+ <p className={`text-[10px] uppercase tracking-widest opacity-40 ${textClass}`}>Powered by DarkNeuron AI</p>
+ </div>
+ </div>
+ </div>
+ </div>
+ </div>
+ );
+ };
+
+ const root = ReactDOM.createRoot(document.getElementById('root'));
+ root.render(<MainApp />);
+ </script>
+ </body>
+ </html>
requirements (1).txt ADDED
@@ -0,0 +1,7 @@
+ fastapi==0.110.0
+ uvicorn[standard]==0.29.0
+ transformers>=4.41.2
+ accelerate>=0.30.1
+ pydantic>=2.7.1
+ python-multipart
+ numpy