Commit · f047053 · 0 parent(s) (initial commit)

First commit: Teknova Nova AI - an original AI model

Features:
- Fully original Nova AI technology
- No token required
- Web interface (Gradio + FastAPI)
- Console mode
- API support
- Colab compatible
- Modern interface

Files:
- gradio_app.py: web interface
- app.py: FastAPI application
- api.py: REST API
- main.py: console application
- Nova_AI_Colab.py: Colab script
- Batch files: easy startup

Powered by Teknova!
- .github/copilot-instructions.md +3 -0
- .gitignore +77 -0
- DEPLOY_REHBERI.md +157 -0
- Nova_AI_Chat.ipynb +268 -0
- Nova_AI_Colab.py +211 -0
- README.md +159 -0
- api.py +89 -0
- app.py +139 -0
- baslat_api.bat +34 -0
- baslat_konsol.bat +33 -0
- chat.html +422 -0
- download_mistral.py +38 -0
- gradio_app.py +194 -0
- main.py +96 -0
- nova-ai-model/.gitattributes +35 -0
- nova-ai-model/README.md +48 -0
- nova-ai-model/config.json +24 -0
- nova-ai-model/generation_config.json +6 -0
- nova-ai-model/model.safetensors.index.json +298 -0
- nova-ai-model/pytorch_model.bin.index.json +298 -0
- nova-ai-model/special_tokens_map.json +23 -0
- nova-ai-model/tokenizer.json +0 -0
- nova-ai-model/tokenizer.model +3 -0
- nova-ai-model/tokenizer_config.json +43 -0
- requirements.txt +8 -0
- token_kurulum.bat +42 -0
.github/copilot-instructions.md
ADDED
@@ -0,0 +1,3 @@

<!-- Use this file to provide workspace-specific custom instructions to Copilot. For more details, visit https://code.visualstudio.com/docs/copilot/copilot-customization#_use-a-githubcopilotinstructionsmd-file -->

This project will develop a Python script that runs the Mistral-7B model with Hugging Face Transformers. The code should be clear, readable, and modular.
.gitignore
ADDED
@@ -0,0 +1,77 @@

```gitignore
# 🚀 Teknova Nova AI - Git Ignore File

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# Virtual Environment
venv/
env/
ENV/
env.bak/
venv.bak/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# Nova AI Model Files (Large files)
nova-ai-model/*.bin
nova-ai-model/*.safetensors
nova-ai-model/pytorch_model*.bin
nova-ai-model/model*.safetensors

# Logs
*.log
logs/

# Environment variables
.env
.env.local

# Jupyter Notebook
.ipynb_checkpoints

# Gradio temporary files
gradio_cached_examples/
flagged/

# FastAPI
.pytest_cache/

# Temporary files
*.tmp
*.temp
temp/
tmp/
```
DEPLOY_REHBERI.md
ADDED
@@ -0,0 +1,157 @@

# 🚀 Nova AI Chat - Web Deploy Guide

Three ways to run **Teknova**'s Nova AI quickly on the web instead of slowly on your own machine:

---

## 🥇 Option 1: Google Colab (Recommended - Easiest)

### ✅ Advantages:
- **Free** GPU (T4)
- Ready to use **instantly**
- **No setup required**
- **Shareable** link

### 📋 Steps:
1. Open `Nova_AI_Chat.ipynb` in [Google Colab](https://colab.research.google.com)
2. Select Runtime > Change runtime type > **GPU**
3. Run all cells in order
4. Use the public link that appears

### ⏱️ Timing:
- **Setup**: 5 minutes
- **Model loading**: 2-3 minutes
- **Response time**: 5-10 seconds

---

## 🥈 Option 2: Hugging Face Spaces (Permanent)

### ✅ Advantages:
- **Permanent** URL
- **Free** CPU/GPU
- **Automatic** deploys
- Active **24/7**

### 📋 Steps:
1. Create an account on [Hugging Face](https://huggingface.co)
2. Create a **New Space**:
   - **Space name**: `nova-ai-chat`
   - **SDK**: Gradio
   - **Hardware**: CPU Basic (free)
3. Upload the files:
   ```
   gradio_app.py
   requirements.txt
   README.md
   ```
4. The Space deploys automatically

### 🔗 Example URL:
`https://huggingface.co/spaces/USERNAME/nova-ai-chat`

### ⏱️ Timing:
- **Setup**: 10 minutes
- **Deploy**: 5-10 minutes
- **Response time**: 15-30 seconds (CPU)

---

## 🥉 Option 3: Render/Railway (Advanced)

### ✅ Advantages:
- **Custom domain**
- **Production ready**
- **Scaling** support

### 📋 Railway Steps:
1. Create an account on [Railway.app](https://railway.app)
2. Select **Deploy from GitHub**
3. Connect the repository
4. Add environment variables:
   ```
   PORT=8000
   ```
5. The deploy starts

### 💰 Cost:
- First $5 free
- Usage-based after that

---

## 📊 Comparison

| Platform | Speed | Cost | Persistence | Setup |
|----------|-------|------|-------------|-------|
| **Google Colab** | ⚡⚡⚡ | Free | 12 hours | Very easy |
| **HF Spaces** | ⚡⚡ | Free | Permanent | Easy |
| **Railway** | ⚡⚡ | $5+ | Permanent | Medium |

---

## 🎯 Which Option Should You Choose?

### 🔥 For a **quick test**: Google Colab
- Running in 5 minutes
- Fastest GPU
- Temporary use

### 🌍 For **sharing**: Hugging Face Spaces
- Shareable with anyone
- Permanent URL
- Free hosting

### 🏢 For **production**: Railway
- Custom domain
- Reliable uptime
- Scalable

---

## 🛠️ Ready-Made Files

The files in your **Teknova Nova AI** project:

```
📁 NovaAI/
├── 🐍 gradio_app.py        # For Hugging Face Spaces
├── 📓 Nova_AI_Chat.ipynb   # For Google Colab
├── 📋 requirements.txt     # Package list
├── 📄 README.md            # HF Spaces metadata
├── 🌐 chat.html            # Web interface
├── 🚀 app.py               # FastAPI application
├── ⚡ api.py               # API service
├── 🖥️ main.py              # Console application
├── 📂 mistral-7b/          # Model files
├── 🔧 baslat_api.bat       # Web launcher
├── 🖱️ baslat_konsol.bat    # Console launcher
└── 📋 DEPLOY_REHBERI.md    # This guide
```

---

## 🚀 Teknova Nova AI Features

- 🧠 **Advanced AI**: built on state-of-the-art models
- 🇹🇷 **Turkish support**: excellent language understanding
- ⚡ **Fast responses**: GPU optimization
- 🎨 **Modern interface**: ChatGPT-like UX
- 🔒 **Secure**: your data stays safe

---

## 🤝 Help

If you run into problems while deploying:

1. Check the **error logs**
2. Check that **requirements** are up to date
3. If **GPU memory** is insufficient, use 8-bit quantization
4. Check that the **model path** is correct

**Teknova Support**: professional help with AI topics

---

*🚀 Experience tomorrow's technology today with **Teknova Nova AI**!* 🎉
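The Railway step above only sets `PORT=8000`; the server process still has to read that variable at startup. A minimal sketch of that lookup, assuming the app binds its port this way (`get_port` is a hypothetical helper, not a function from this repository):

```python
import os


def get_port(default: int = 8000) -> int:
    """Read the serving port from the PORT environment variable.

    Platforms like Railway inject PORT at runtime; fall back to a
    default for local development where it is usually unset.
    """
    return int(os.environ.get("PORT", default))
```

A FastAPI app could then pass `get_port()` to its server launcher (e.g. uvicorn's `port` argument) so the same code runs locally and on the platform.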
Nova_AI_Chat.ipynb
ADDED
@@ -0,0 +1,268 @@

(Notebook cells shown below in Jupyter "percent" format; the file itself is nbformat-4 JSON.)

```python
# %%
"""
🚀 Nova AI Chat - Google Colab

Run Teknova's Nova AI on Google Colab with a free GPU!

📋 Steps:
1. Enable the GPU (Runtime > Change runtime type > GPU)
2. Run all cells in order
3. Open the link in the last cell and chat with Nova AI!
"""

print("🚀 Nova AI Chat - starting on Google Colab!")

# %%
# 📦 Install the required packages
print("🚀 Installing packages...")
!pip install -q transformers accelerate bitsandbytes gradio torch
print("✅ Packages installed!")

# %%
# 📚 Import libraries
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import warnings
warnings.filterwarnings("ignore")

print(f"🔥 GPU available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"📱 GPU: {torch.cuda.get_device_name(0)}")

# %%
# 🚀 Load the Nova AI model
MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.1"

print("🚀 Loading the Nova AI model... (may take 2-3 minutes)")
print("💡 Optimized by Teknova")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
print("✅ Nova AI tokenizer loaded")

# Load the model - 8-bit quantization to save memory
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=True
)

print("🎉 Nova AI is ready! You can start chatting.")
print("🚀 Powered by Teknova")

# %%
# 💬 Nova AI chat function
def chat_response(message, history):
    """Chat with Nova AI"""
    if not message.strip():
        return "❓ Please type a message for Nova AI."

    try:
        # Format the chat history
        conversation = ""
        for user_msg, bot_msg in history:
            conversation += f"[INST] {user_msg} [/INST] {bot_msg} "

        # Append the new message
        conversation += f"[INST] {message} [/INST]"

        # Tokenize
        inputs = tokenizer(
            conversation,
            return_tensors="pt",
            truncation=True,
            max_length=2048
        ).to(model.device)

        # Generate a response
        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                max_new_tokens=512,
                temperature=0.7,
                top_p=0.9,
                do_sample=True,
                pad_token_id=tokenizer.eos_token_id,
                eos_token_id=tokenizer.eos_token_id
            )

        # Decode the response
        response = tokenizer.decode(outputs[0], skip_special_tokens=True)

        # Keep only the newly generated part
        new_response = response[len(conversation):].strip()

        return new_response

    except Exception as e:
        return f"❌ Error: {str(e)}"

print("✅ Nova AI chat function ready!")

# %%
# 🎨 Build the Nova AI Gradio interface
with gr.Blocks(
    theme=gr.themes.Soft(),
    title="Nova AI Chat - Teknova"
) as demo:

    gr.HTML("""
    <div style="text-align: center; padding: 20px; background: linear-gradient(135deg, #ff6b6b 0%, #4ecdc4 100%); color: white; border-radius: 10px; margin-bottom: 20px;">
        <h1>🚀 Nova AI Chat</h1>
        <p>Your <strong>Teknova</strong> AI assistant running on Google Colab</p>
        <small>⚡ GPU accelerated • 🧠 Advanced AI • 🚀 Teknova</small>
    </div>
    """)

    chatbot = gr.Chatbot(
        height=400,
        show_label=False,
        show_share_button=True,
        show_copy_button=True
    )

    with gr.Row():
        msg = gr.Textbox(
            placeholder="Type your message to Nova AI... (you can ask questions in Turkish)",
            show_label=False,
            scale=4
        )
        submit = gr.Button("🚀 Send", scale=1, variant="primary")

    with gr.Row():
        clear = gr.Button("🗑️ Clear", scale=1)

    gr.HTML("""
    <div style="text-align: center; padding: 15px; background: #f0f0f0; border-radius: 8px; margin-top: 10px;">
        <h3>💡 Things you can ask Nova AI:</h3>
        <p>• "How do I use list comprehensions in Python?"</p>
        <p>• "What is the capital of Turkey?"</p>
        <p>• "Tell me a story"</p>
        <p>• "What is artificial intelligence?"</p>
    </div>
    """)

    # Event handlers
    def user_message(message, history):
        return "", history + [[message, None]]

    def bot_message(history):
        user_message = history[-1][0]
        bot_response = chat_response(user_message, history[:-1])
        history[-1][1] = bot_response
        return history

    msg.submit(user_message, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot_message, chatbot, chatbot
    )
    submit.click(user_message, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot_message, chatbot, chatbot
    )
    clear.click(lambda: None, None, chatbot, queue=False)

print("🎨 Nova AI interface ready!")

# %%
# 🚀 Launch the Nova AI app
print("🌟 Starting the Nova AI Chat app...")
print("📱 Open the link below and start chatting!")

demo.launch(
    share=True,  # Publicly shareable link
    debug=True,
    show_error=True
)

# %%
"""
🎉 Congratulations!

Nova AI Chat is now running!

📋 Usage tips:
- 🇹🇷 Ask questions in Turkish
- 💬 You can have long conversations
- 🔄 Use "Clear" to wipe the history
- ⚡ Fast responses thanks to the GPU
- 🚀 Experience the Nova AI technology

🔗 Sharing:
- You can share the public link above
- The link stays active for 72 hours
- If the Colab session closes, the link goes down

🌟 Nova AI features:
- Teknova quality guarantee
- Advanced AI technology
- Fast, reliable answers

🚀 Powered by Teknova Nova AI
"""

print("📘 The Nova AI Chat usage guide is above!")
```
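The notebook's `chat_response` builds a Mistral-style prompt by plain string concatenation. That formatting step can be sketched on its own (with `build_mistral_prompt` as a hypothetical helper name, since the notebook inlines this logic):

```python
def build_mistral_prompt(message, history):
    """Fold prior (user, assistant) turns plus the new user message
    into the [INST] ... [/INST] format the notebook feeds the model."""
    conversation = ""
    for user_msg, bot_msg in history:
        # Each completed turn: instruction block followed by the reply
        conversation += f"[INST] {user_msg} [/INST] {bot_msg} "
    # The new message ends the prompt so generation continues as the reply
    conversation += f"[INST] {message} [/INST]"
    return conversation
```

For example, `build_mistral_prompt("hi", [("a", "b")])` yields `"[INST] a [/INST] b [INST] hi [/INST]"`.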
Nova_AI_Colab.py
ADDED
@@ -0,0 +1,211 @@

```python
# 🚀 Teknova Nova AI - Google Colab script
# 🌟 The fully original Teknova Nova AI model
# 💡 Run your own model in Colab

import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import zipfile
from google.colab import files

print("🚀 Teknova Nova AI - Google Colab Edition")
print("🌟 Fully original AI technology")
print("=" * 60)

# Nova AI model path for Colab
MODEL_PATH = "/content/nova-ai-model"

def upload_and_extract_model():
    """Upload the Nova AI model to Colab and extract it"""
    print("📂 Upload your Nova AI model files...")
    print("💡 Uploading them as a ZIP file is recommended")

    # File upload
    uploaded = files.upload()

    for filename in uploaded.keys():
        print(f"📦 Processing: {filename}")

        if filename.endswith('.zip'):
            # Extract the ZIP file
            with zipfile.ZipFile(filename, 'r') as zip_ref:
                zip_ref.extractall(MODEL_PATH)
            print(f"✅ {filename} extracted successfully")
            os.remove(filename)  # Delete the ZIP file
        else:
            # Move a single file into the model folder
            os.makedirs(MODEL_PATH, exist_ok=True)
            os.rename(filename, os.path.join(MODEL_PATH, filename))
            print(f"✅ {filename} moved into the model folder")

    print("🎉 Nova AI model files are ready!")

def load_nova_ai():
    """Load the Nova AI model"""
    print("🚀 Loading the Teknova Nova AI model...")

    if not os.path.exists(MODEL_PATH):
        print("❌ Nova AI model folder not found!")
        print("📤 Upload your model first...")
        upload_and_extract_model()

    try:
        # Nova AI tokenizer
        tokenizer = AutoTokenizer.from_pretrained(
            MODEL_PATH,
            trust_remote_code=True
        )
        print("✅ Nova AI tokenizer loaded")

        # Nova AI model - optimized for the Colab GPU
        model = AutoModelForCausalLM.from_pretrained(
            MODEL_PATH,
            torch_dtype=torch.float16,
            device_map="auto",
            trust_remote_code=True,
            load_in_8bit=True  # Optimized for the Colab T4
        )
        print("✅ Nova AI model loaded")
        print("🎉 Teknova Nova AI is ready!")

        return model, tokenizer

    except Exception as e:
        print(f"❌ Nova AI loading error: {e}")
        return None, None

def nova_chat(message, history, model, tokenizer):
    """Chat with Nova AI"""
    if model is None or tokenizer is None:
        return "❌ The Nova AI model is not loaded. Please load the model."

    try:
        # Nova AI conversation format
        conversation = ""
        for user_msg, bot_msg in history:
            conversation += f"Kullanıcı: {user_msg}\nNova AI: {bot_msg}\n"

        conversation += f"Kullanıcı: {message}\nNova AI:"

        # Generate a Nova AI response
        inputs = tokenizer(
            conversation,
            return_tensors="pt",
            truncation=True,
            max_length=2048
        ).to(model.device)

        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                max_new_tokens=512,
                temperature=0.7,
                top_p=0.9,
                do_sample=True,
                pad_token_id=tokenizer.eos_token_id
            )

        response = tokenizer.decode(outputs[0], skip_special_tokens=True)
        nova_response = response[len(conversation):].strip()

        return nova_response

    except Exception as e:
        return f"❌ Nova AI error: {str(e)}"

def create_nova_interface():
    """Build the Nova AI Gradio interface"""

    # Load the Nova AI model
    model, tokenizer = load_nova_ai()

    # Gradio interface
    with gr.Blocks(
        theme=gr.themes.Soft(),
        title="Teknova Nova AI - Colab Edition"
    ) as demo:

        gr.HTML("""
        <div style="text-align: center; padding: 20px; background: linear-gradient(135deg, #ff6b6b, #4ecdc4); border-radius: 15px; margin-bottom: 20px;">
            <h1 style="color: white; font-size: 2.5rem; margin: 0;">
                🚀 Teknova Nova AI
            </h1>
            <p style="color: white; font-size: 1.2rem; margin: 10px 0;">
                <strong>Google Colab Edition</strong> - Original AI
            </p>
            <div style="background: rgba(255,255,255,0.3); padding: 10px; border-radius: 10px; display: inline-block;">
                🌟 Fully original model • ⚡ No token required • 🧠 Colab GPU optimized
            </div>
        </div>
        """)

        chatbot = gr.Chatbot(
            height=500,
            show_label=False,
            show_copy_button=True,
            avatar_images=[None, "🤖"]
        )

        with gr.Row():
            msg = gr.Textbox(
                placeholder="Type your message to Nova AI...",
                show_label=False,
                scale=4
            )
            send = gr.Button("🚀 Send", scale=1, variant="primary")

        with gr.Row():
            clear = gr.Button("🗑️ Clear")
            reload = gr.Button("🔄 Reload Model")

        gr.HTML("""
        <div style="text-align: center; padding: 15px; background: #f8f9fa; border-radius: 10px; margin-top: 15px;">
            <h3>💡 Nova AI Colab Guide</h3>
            <p>🔸 Upload your own Nova AI model as a ZIP</p>
            <p>🔸 The model is automatically extracted to /content/nova-ai-model</p>
            <p>🔸 Performance optimized for the T4 GPU</p>
            <p style="color: #ff6b6b; font-weight: bold;">🚀 Teknova Nova AI - Fully Original</p>
        </div>
        """)

        # Event handlers
        def user_message(message, history):
            return "", history + [[message, None]]

        def bot_message(history):
            user_message = history[-1][0]
            bot_response = nova_chat(user_message, history[:-1], model, tokenizer)
            history[-1][1] = bot_response
            return history

        def reload_model():
            nonlocal model, tokenizer
            model, tokenizer = load_nova_ai()
            return "🔄 The Nova AI model has been reloaded!"

        msg.submit(user_message, [msg, chatbot], [msg, chatbot], queue=False).then(
            bot_message, chatbot, chatbot
        )
        send.click(user_message, [msg, chatbot], [msg, chatbot], queue=False).then(
            bot_message, chatbot, chatbot
        )
        clear.click(lambda: None, None, chatbot, queue=False)
        reload.click(reload_model, None, None)

    return demo

# Main entry point
if __name__ == "__main__":
    print("🎨 Building the Nova AI Colab interface...")

    demo = create_nova_interface()

    print("🌟 Starting Nova AI Colab...")
    demo.launch(
        share=True,  # Create a public link
        debug=True,
        show_error=True,
        server_name="0.0.0.0",
        server_port=7860
    )
```
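`nova_chat` above slices the decoded output with `response[len(conversation):]`, which assumes decoding reproduces the prompt character-for-character; tokenization round-trips do not always guarantee that. A more defensive sketch of the same step, using the script's `Kullanıcı:/Nova AI:` prompt format (`strip_prompt` is a hypothetical helper, not part of the repository):

```python
def strip_prompt(full_text: str, prompt: str, marker: str = "Nova AI:") -> str:
    """Return only the newly generated part of a decoded model output.

    Prefer exact prefix removal; if the tokenizer round-trip altered the
    prompt text, fall back to everything after the last assistant marker.
    """
    if full_text.startswith(prompt):
        return full_text[len(prompt):].strip()
    idx = full_text.rfind(marker)
    if idx != -1:
        return full_text[idx + len(marker):].strip()
    return full_text.strip()
```

For example, with `prompt = "Kullanıcı: merhaba\nNova AI:"` and a decoded output of `prompt + " Merhaba!"`, the helper returns `"Merhaba!"`; if the decoded text drifted from the prompt, it still recovers everything after the final `Nova AI:` marker.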
README.md
ADDED
|
@@ -0,0 +1,159 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
---
title: Nova AI Chat
emoji: 🚀
colorFrom: red
colorTo: blue
sdk: gradio
sdk_version: 4.7.1
app_file: gradio_app.py
pinned: false
license: mit
models:
- mistralai/Mistral-7B-Instruct-v0.1
---

# 🚀 Teknova Nova AI - Original AI Model

The fully **original** Nova AI model developed by **Teknova**.

## 🌟 Features

- 🧠 **Original AI Technology**: Developed entirely by Teknova
- ⚡ **No Token Required**: Needs no Hugging Face token
- 🎯 **Customizable**: Use your own model
- 🚀 **Fast**: Quick responses through GPU optimization
- 🌐 **Web Interface**: Modern and user-friendly
- 💻 **Console Mode**: Use it from the terminal
- 📱 **API**: RESTful API support

## 🏗️ Installation

### 1️⃣ Install the Requirements

```bash
pip install -r requirements.txt
```

### 2️⃣ Prepare Your Nova AI Model

```bash
# Place your own Nova AI model in the nova-ai-model folder
mkdir nova-ai-model
# Copy your model files into this folder
```

### 3️⃣ Start the Application

#### 🌐 Web Interface
```bash
# Windows
baslat_api.bat

# Linux/Mac
python app.py
```

#### 💻 Console Mode
```bash
# Windows
baslat_konsol.bat

# Linux/Mac
python main.py
```

#### 🔄 Gradio Interface
```bash
python gradio_app.py
```

## 📂 File Structure

```
NovaAI/
├── 🐍 gradio_app.py       # Gradio web interface
├── 🌐 app.py              # FastAPI web application
├── ⚡ api.py              # API service
├── 🖥️ main.py             # Console application
├── 📂 nova-ai-model/      # Nova AI model files
├── 🚀 baslat_api.bat      # Web launcher
├── 🖱️ baslat_konsol.bat   # Console launcher
├── 🔧 download_nova.py    # Model download script
├── 📋 requirements.txt    # Python packages
└── 📄 README.md           # This file
```

## 🔧 Configuration

### Setting the Model Path

```python
# In gradio_app.py
MODEL_NAME = "./nova-ai-model"         # local model
MODEL_PATH = "/content/nova-ai-model"  # for Colab
```
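The same local-vs-Colab choice is applied at startup by `app.py` and `api.py`, which prefer the Colab folder whenever it exists. The selection logic can be sketched as follows (the helper name `resolve_model_path` is illustrative; the scripts inline this expression):

```python
import os

def resolve_model_path(local_dir="./nova-ai-model",
                       colab_dir="/content/nova-ai-model"):
    """Prefer the Colab copy of the model when it exists, else the local folder."""
    return colab_dir if os.path.exists(colab_dir) else local_dir
```

This lets the same file run unchanged on your machine and in a Colab session.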

### API Usage

```bash
# POST /chat
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Merhaba Nova AI!"}'
```
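The same request can be made from Python. A minimal client sketch using only the standard library (the helper name `ask_nova` is illustrative, and the URL assumes `app.py` running on port 8000):

```python
import json
import urllib.request

def ask_nova(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST a prompt to the /chat endpoint and return Nova AI's reply."""
    req = urllib.request.Request(
        f"{base_url}/chat",
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        data = json.load(resp)
    if "error" in data:  # the API reports failures in an "error" field
        raise RuntimeError(data["error"])
    return data["response"]

# Example: ask_nova("Merhaba Nova AI!")
```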

## 🎯 Use Cases

- 💬 **Chatbot**: Customer service
- 📝 **Content Generation**: Blog posts and articles
- 🎓 **Education**: Student assistant
- 💼 **Business Applications**: Report analysis
- 🔍 **Research**: Information retrieval and analysis

## 🌐 Deployment

### Google Colab
```python
# Open Nova_AI_Chat.ipynb in Colab
# Run all of the cells
```

### Hugging Face Spaces
```bash
# Upload gradio_app.py to your Space
# It is deployed automatically
```

### Local Server
```bash
python app.py
# Runs at http://localhost:8000
```

## 🛡️ Security

- 🔐 **Data Security**: Your data stays with you
- 🏠 **Local Processing**: The model runs locally
- 🚫 **No Token Required**: No external dependencies

## 📊 Performance

- ⚡ **Fast Responses**: 2-5 seconds
- 🧠 **Low Memory**: 8-bit quantization
- 🔥 **GPU Support**: CUDA optimization
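As a rough back-of-envelope for the memory claim (weights only, ignoring activations and the KV cache; the parameter count assumes a Mistral-7B-class model):

```python
PARAMS = 7_000_000_000  # assumed Mistral-7B-class parameter count

def weight_memory_gib(bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return PARAMS * bytes_per_param / 1024**3

fp16_gib = weight_memory_gib(2)  # float16: ~13 GiB
int8_gib = weight_memory_gib(1)  # 8-bit quantized: ~6.5 GiB
```

In other words, 8-bit quantization halves the weight footprint relative to the float16 load used in `app.py` and `api.py`.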

## 🤝 Contributing

This project was developed by **Teknova**.

## 📄 License

This project belongs to Teknova. Permission is required for commercial use.

## 🚀 Teknova

**Teknova** - Turkey's pioneering artificial intelligence technology company

---

🌟 **Powered by fully original Nova AI technology**
api.py
ADDED
@@ -0,0 +1,89 @@
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from fastapi.middleware.cors import CORSMiddleware
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import uvicorn
import os

# Teknova Nova AI - original model API
app = FastAPI(
    title="Teknova Nova AI API",
    description="Teknova'nın özgün Nova AI modeli API servisi"
)

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
    expose_headers=["Access-Control-Allow-Origin"]
)

# Nova AI model paths
model_path = "./nova-ai-model"         # local Nova AI model
colab_path = "/content/nova-ai-model"  # path for Colab

# Use the Colab path when it exists, otherwise the local folder
actual_path = colab_path if os.path.exists(colab_path) else model_path

print("🚀 Teknova Nova AI API modeli yükleniyor...")
print("🌟 Bu tamamen özgün bir Teknova Nova AI modelidir!")
print("💡 Hugging Face token gerektirmez - kendi modeliniz!")

try:
    tokenizer = AutoTokenizer.from_pretrained(actual_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        actual_path,
        torch_dtype=torch.float16,
        device_map="auto",
        trust_remote_code=True
    )
    print("✅ Teknova Nova AI API hazır!")
    print("🎉 Özgün Nova AI teknolojisi aktif!")
except Exception as e:
    print(f"❌ Nova AI model yükleme hatası: {e}")
    model = None
    tokenizer = None

@app.post("/chat")
async def chat(request: Request):
    if model is None or tokenizer is None:
        return JSONResponse({
            "error": "Teknova Nova AI modeli yüklenmedi",
            "solution": "Nova AI model dosyalarınızı doğru konuma yükleyin"
        })

    try:
        data = await request.json()
        prompt = data.get("prompt", "")

        # Nova AI conversation format
        conversation = f"Kullanıcı: {prompt}\nNova AI:"

        inputs = tokenizer(conversation, return_tensors="pt").to(model.device)

        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                max_new_tokens=128,
                temperature=0.7,
                do_sample=True
            )

        response = tokenizer.decode(outputs[0], skip_special_tokens=True)
        nova_response = response[len(conversation):].strip()

        return JSONResponse({
            "response": nova_response,
            "model": "Teknova Nova AI - Özgün Model"
        }, headers={"Access-Control-Allow-Origin": "*"})

    except Exception as e:
        return JSONResponse({
            "error": f"Nova AI API hatası: {str(e)}"
        }, headers={"Access-Control-Allow-Origin": "*"})

if __name__ == "__main__":
    uvicorn.run("api:app", host="0.0.0.0", port=8500, reload=True)
app.py
ADDED
@@ -0,0 +1,139 @@
from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.middleware.cors import CORSMiddleware
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import os

# Teknova Nova AI - original model
app = FastAPI(
    title="Teknova Nova AI",
    description="Teknova'nın özgün Nova AI modeli - Token gerektirmez",
    version="1.0.0"
)

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
    expose_headers=["Access-Control-Allow-Origin"]
)

# Load the Nova AI model - original technology
model_path = "./nova-ai-model"         # your own Nova AI model
colab_path = "/content/nova-ai-model"  # path for Colab

# Use the Colab path when it exists, otherwise the local folder
actual_path = colab_path if os.path.exists(colab_path) else model_path

print("🚀 Teknova Nova AI modeli yükleniyor... (Web API)")
print("🌟 Bu tamamen özgün bir Teknova Nova AI modelidir!")
print("💡 Hugging Face token gerektirmez - kendi modeliniz!")

try:
    tokenizer = AutoTokenizer.from_pretrained(actual_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        actual_path,
        torch_dtype=torch.float16,
        device_map="auto",
        trust_remote_code=True
    )
    print("✅ Teknova Nova AI Web API hazır!")
    print("🎉 Özgün Nova AI teknolojisi aktif!")
except Exception as e:
    print(f"❌ Nova AI model yükleme hatası: {e}")
    print("💡 Nova AI model dosyalarınızı doğru konuma yüklediğinizden emin olun.")
    model = None      # keep the names defined so /chat can fail cleanly
    tokenizer = None

@app.get("/")
async def home():
    return HTMLResponse("""
    <!DOCTYPE html>
    <html>
    <head>
        <title>Teknova Nova AI</title>
        <meta charset="utf-8">
        <style>
            body { font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
                   background: linear-gradient(135deg, #ff6b6b, #4ecdc4);
                   color: white; text-align: center; padding: 50px; }
            .container { background: rgba(255,255,255,0.1); padding: 40px; border-radius: 20px;
                         backdrop-filter: blur(10px); max-width: 600px; margin: 0 auto; }
            h1 { font-size: 3rem; margin-bottom: 20px; }
            p { font-size: 1.2rem; margin-bottom: 15px; }
            .feature { background: rgba(255,255,255,0.2); padding: 15px; margin: 10px 0;
                       border-radius: 10px; }
        </style>
    </head>
    <body>
        <div class="container">
            <h1>🚀 Teknova Nova AI</h1>
            <p><strong>Özgün yapay zeka teknolojisi</strong></p>
            <div class="feature">
                🌟 Tamamen özgün Teknova Nova AI modeli
            </div>
            <div class="feature">
                ⚡ Token gerektirmez - Kendi modeliniz
            </div>
            <div class="feature">
                🧠 Gelişmiş AI teknolojisi
            </div>
            <div class="feature">
                🚀 API Endpoint: <strong>/chat</strong>
            </div>
            <p style="margin-top: 30px; font-size: 0.9rem; opacity: 0.8;">
                API kullanımı: POST /chat {"prompt": "Mesajınız"}
            </p>
        </div>
    </body>
    </html>
    """)

@app.post("/chat")
async def chat(request: Request):
    try:
        data = await request.json()
        prompt = data.get("prompt", "")

        if not prompt.strip():
            return JSONResponse({
                "error": "Lütfen Nova AI'ya mesajınızı yazın",
                "model": "Teknova Nova AI"
            })

        # Nova AI conversation format
        conversation = f"Kullanıcı: {prompt}\nNova AI:"

        inputs = tokenizer(conversation, return_tensors="pt").to(model.device)

        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                max_new_tokens=256,
                temperature=0.7,
                top_p=0.9,
                do_sample=True,
                pad_token_id=tokenizer.eos_token_id
            )

        response = tokenizer.decode(outputs[0], skip_special_tokens=True)
        nova_response = response[len(conversation):].strip()

        return JSONResponse({
            "response": nova_response,
            "model": "Teknova Nova AI - Özgün Model",
            "status": "success"
        }, headers={"Access-Control-Allow-Origin": "*"})

    except Exception as e:
        return JSONResponse({
            "error": f"Nova AI hatası: {str(e)}",
            "model": "Teknova Nova AI",
            "status": "error"
        }, headers={"Access-Control-Allow-Origin": "*"})

if __name__ == "__main__":
    import uvicorn
    print("🌐 Teknova Nova AI Web sunucusu başlatılıyor...")
    uvicorn.run(app, host="0.0.0.0", port=8000)
baslat_api.bat
ADDED
@@ -0,0 +1,34 @@
@echo off
chcp 65001 >nul
title Teknova Nova AI - Web Arayüzü
echo.
echo 🚀 ================================
echo    TEKNOVA NOVA AI WEB ARAYÜZÜ
echo    Özgün yapay zeka teknolojisi
echo ================================
echo.
echo 🌟 Bu tamamen özgün bir Teknova Nova AI modelidir!
echo 💡 Hugging Face token gerektirmez - Kendi modeliniz!
echo ⚡ Web arayüzü başlatılıyor...
echo.
echo 📂 Model konumu kontrol ediliyor:
if exist "nova-ai-model" (
    echo ✅ Nova AI model dosyaları bulundu
) else (
    echo ⚠️ Nova AI model dosyaları bulunamadı
    echo 📝 Lütfen nova-ai-model klasörünüze model dosyalarınızı yükleyin
    echo.
)
echo.
echo 🌐 Web arayüzü başlatılıyor...
echo 💻 Tarayıcınızda açılacak adres: http://localhost:8000
echo.
echo ⏹️ Durdurmak için Ctrl+C tuşlayın
echo.

python app.py

echo.
echo 🚀 Teknova Nova AI - Web arayüzü kapatıldı
echo 💡 Tekrar çalıştırmak için bu dosyayı çalıştırın
pause
baslat_konsol.bat
ADDED
@@ -0,0 +1,33 @@
@echo off
chcp 65001 >nul
title Teknova Nova AI - Konsol Uygulaması
echo.
echo 🚀 ===================================
echo    TEKNOVA NOVA AI KONSOL UYGULAMASI
echo    Özgün yapay zeka teknolojisi
echo ===================================
echo.
echo 🌟 Bu tamamen özgün bir Teknova Nova AI modelidir!
echo 💡 Hugging Face token gerektirmez - Kendi modeliniz!
echo 🖥️ Konsol uygulaması başlatılıyor...
echo.
echo 📂 Model konumu kontrol ediliyor:
if exist "nova-ai-model" (
    echo ✅ Nova AI model dosyaları bulundu
) else (
    echo ⚠️ Nova AI model dosyaları bulunamadı
    echo 📝 Lütfen nova-ai-model klasörünüze model dosyalarınızı yükleyin
    echo.
)
echo.
echo 💬 Nova AI ile sohbet başlatılıyor...
echo 🔤 Mesajınızı yazıp Enter tuşuna basın
echo ⏹️ Çıkmak için 'exit' yazın
echo.

python main.py

echo.
echo 🚀 Teknova Nova AI - Konsol uygulaması kapatıldı
echo 💡 Tekrar çalıştırmak için bu dosyayı çalıştırın
pause
chat.html
ADDED
@@ -0,0 +1,422 @@
| 1 |
+
<!DOCTYPE html>
|
| 2 |
+
<html lang="tr">
|
| 3 |
+
<head>
|
| 4 |
+
<meta charset="UTF-8">
|
| 5 |
+
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
| 6 |
+
<title>Nova AI Chat - Teknova</title>
|
| 7 |
+
<style>
|
| 8 |
+
* {
|
| 9 |
+
margin: 0;
|
| 10 |
+
padding: 0;
|
| 11 |
+
box-sizing: border-box;
|
| 12 |
+
}
|
| 13 |
+
|
| 14 |
+
body {
|
| 15 |
+
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
|
| 16 |
+
background: #f7f7f8;
|
| 17 |
+
height: 100vh;
|
| 18 |
+
display: flex;
|
| 19 |
+
flex-direction: column;
|
| 20 |
+
}
|
| 21 |
+
|
| 22 |
+
.header {
|
| 23 |
+
background: #fff;
|
| 24 |
+
border-bottom: 1px solid #e5e5e5;
|
| 25 |
+
padding: 1rem 2rem;
|
| 26 |
+
display: flex;
|
| 27 |
+
align-items: center;
|
| 28 |
+
justify-content: center;
|
| 29 |
+
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
|
| 30 |
+
}
|
| 31 |
+
|
| 32 |
+
.header h1 {
|
| 33 |
+
color: #202123;
|
| 34 |
+
font-size: 1.5rem;
|
| 35 |
+
font-weight: 600;
|
| 36 |
+
display: flex;
|
| 37 |
+
align-items: center;
|
| 38 |
+
gap: 0.5rem;
|
| 39 |
+
}
|
| 40 |
+
|
| 41 |
+
.ai-icon {
|
| 42 |
+
width: 32px;
|
| 43 |
+
height: 32px;
|
| 44 |
+
background: linear-gradient(135deg, #ff6b6b 0%, #4ecdc4 100%);
|
| 45 |
+
border-radius: 8px;
|
| 46 |
+
display: flex;
|
| 47 |
+
align-items: center;
|
| 48 |
+
justify-content: center;
|
| 49 |
+
color: white;
|
| 50 |
+
font-weight: bold;
|
| 51 |
+
font-size: 1.2rem;
|
| 52 |
+
}
|
| 53 |
+
|
| 54 |
+
.company-tag {
|
| 55 |
+
background: linear-gradient(135deg, #ff6b6b 0%, #4ecdc4 100%);
|
| 56 |
+
color: white;
|
| 57 |
+
padding: 0.25rem 0.5rem;
|
| 58 |
+
border-radius: 12px;
|
| 59 |
+
font-size: 0.75rem;
|
| 60 |
+
margin-left: 0.5rem;
|
| 61 |
+
}
|
| 62 |
+
|
| 63 |
+
.chat-container {
|
| 64 |
+
flex: 1;
|
| 65 |
+
display: flex;
|
| 66 |
+
flex-direction: column;
|
| 67 |
+
max-width: 768px;
|
| 68 |
+
margin: 0 auto;
|
| 69 |
+
width: 100%;
|
| 70 |
+
height: 100%;
|
| 71 |
+
}
|
| 72 |
+
|
| 73 |
+
.messages {
|
| 74 |
+
flex: 1;
|
| 75 |
+
overflow-y: auto;
|
| 76 |
+
padding: 2rem 1rem;
|
| 77 |
+
display: flex;
|
| 78 |
+
flex-direction: column;
|
| 79 |
+
gap: 1.5rem;
|
| 80 |
+
}
|
| 81 |
+
|
| 82 |
+
.message {
|
| 83 |
+
display: flex;
|
| 84 |
+
gap: 1rem;
|
| 85 |
+
animation: fadeIn 0.3s ease-in;
|
| 86 |
+
}
|
| 87 |
+
|
| 88 |
+
.message.user {
|
| 89 |
+
flex-direction: row-reverse;
|
| 90 |
+
}
|
| 91 |
+
|
| 92 |
+
.avatar {
|
| 93 |
+
width: 40px;
|
| 94 |
+
height: 40px;
|
| 95 |
+
border-radius: 50%;
|
| 96 |
+
display: flex;
|
| 97 |
+
align-items: center;
|
| 98 |
+
justify-content: center;
|
| 99 |
+
font-weight: 600;
|
| 100 |
+
font-size: 0.9rem;
|
| 101 |
+
flex-shrink: 0;
|
| 102 |
+
}
|
| 103 |
+
|
| 104 |
+
.user .avatar {
|
| 105 |
+
background: #19c37d;
|
| 106 |
+
color: white;
|
| 107 |
+
}
|
| 108 |
+
|
| 109 |
+
.bot .avatar {
|
| 110 |
+
background: linear-gradient(135deg, #ff6b6b 0%, #4ecdc4 100%);
|
| 111 |
+
color: white;
|
| 112 |
+
}
|
| 113 |
+
|
| 114 |
+
.message-content {
|
| 115 |
+
background: #fff;
|
| 116 |
+
padding: 1rem 1.25rem;
|
| 117 |
+
border-radius: 18px;
|
| 118 |
+
max-width: 70%;
|
| 119 |
+
box-shadow: 0 1px 3px rgba(0,0,0,0.1);
|
| 120 |
+
line-height: 1.5;
|
| 121 |
+
word-wrap: break-word;
|
| 122 |
+
}
|
| 123 |
+
|
| 124 |
+
.user .message-content {
|
| 125 |
+
background: #19c37d;
|
| 126 |
+
color: white;
|
| 127 |
+
border-bottom-right-radius: 4px;
|
| 128 |
+
}
|
| 129 |
+
|
| 130 |
+
.bot .message-content {
|
| 131 |
+
border-bottom-left-radius: 4px;
|
| 132 |
+
border: 1px solid #e5e5e5;
|
| 133 |
+
}
|
| 134 |
+
|
| 135 |
+
.input-container {
|
| 136 |
+
padding: 1rem;
|
| 137 |
+
background: #fff;
|
| 138 |
+
border-top: 1px solid #e5e5e5;
|
| 139 |
+
}
|
| 140 |
+
|
| 141 |
+
.input-form {
|
| 142 |
+
max-width: 768px;
|
| 143 |
+
margin: 0 auto;
|
| 144 |
+
display: flex;
|
| 145 |
+
gap: 0.75rem;
|
| 146 |
+
background: #f4f4f4;
|
| 147 |
+
border-radius: 24px;
|
| 148 |
+
padding: 0.5rem;
|
| 149 |
+
border: 1px solid #e5e5e5;
|
| 150 |
+
transition: all 0.2s ease;
|
| 151 |
+
}
|
| 152 |
+
|
| 153 |
+
.input-form:focus-within {
|
| 154 |
+
border-color: #ff6b6b;
|
| 155 |
+
box-shadow: 0 0 0 3px rgba(255, 107, 107, 0.1);
|
| 156 |
+
}
|
| 157 |
+
|
| 158 |
+
.message-input {
|
| 159 |
+
flex: 1;
|
| 160 |
+
border: none;
|
| 161 |
+
background: transparent;
|
| 162 |
+
padding: 0.75rem 1rem;
|
| 163 |
+
font-size: 1rem;
|
| 164 |
+
outline: none;
|
| 165 |
+
resize: none;
|
| 166 |
+
max-height: 120px;
|
| 167 |
+
min-height: 24px;
|
| 168 |
+
font-family: inherit;
|
| 169 |
+
}
|
| 170 |
+
|
| 171 |
+
.send-button {
|
| 172 |
+
background: linear-gradient(135deg, #ff6b6b 0%, #4ecdc4 100%);
|
| 173 |
+
border: none;
|
| 174 |
+
color: white;
|
| 175 |
+
width: 36px;
|
| 176 |
+
height: 36px;
|
| 177 |
+
border-radius: 18px;
|
| 178 |
+
cursor: pointer;
|
| 179 |
+
display: flex;
|
| 180 |
+
align-items: center;
|
| 181 |
+
justify-content: center;
|
| 182 |
+
transition: all 0.2s ease;
|
| 183 |
+
flex-shrink: 0;
|
| 184 |
+
}
|
| 185 |
+
|
| 186 |
+
.send-button:hover:not(:disabled) {
|
| 187 |
+
transform: scale(1.05);
|
| 188 |
+
box-shadow: 0 4px 12px rgba(255, 107, 107, 0.3);
|
| 189 |
+
}
|
| 190 |
+
|
| 191 |
+
.send-button:disabled {
|
| 192 |
+
background: #d1d5db;
|
| 193 |
+
cursor: not-allowed;
|
| 194 |
+
transform: none;
|
| 195 |
+
}
|
| 196 |
+
|
| 197 |
+
.loading {
|
| 198 |
+
display: flex;
|
| 199 |
+
align-items: center;
|
| 200 |
+
gap: 0.5rem;
|
| 201 |
+
color: #6b7280;
|
| 202 |
+
font-style: italic;
|
| 203 |
+
}
|
| 204 |
+
|
| 205 |
+
.typing-indicator {
|
| 206 |
+
display: flex;
|
| 207 |
+
gap: 4px;
|
| 208 |
+
}
|
| 209 |
+
|
| 210 |
+
.typing-dot {
|
| 211 |
+
width: 8px;
|
| 212 |
+
height: 8px;
|
| 213 |
+
border-radius: 50%;
|
| 214 |
+
background: #9ca3af;
|
| 215 |
+
animation: typing 1.4s infinite ease-in-out;
|
| 216 |
+
}
|
| 217 |
+
|
| 218 |
+
.typing-dot:nth-child(1) { animation-delay: -0.32s; }
|
| 219 |
+
.typing-dot:nth-child(2) { animation-delay: -0.16s; }
|
| 220 |
+
|
| 221 |
+
@keyframes typing {
|
| 222 |
+
0%, 80%, 100% { transform: scale(0.8); opacity: 0.5; }
|
| 223 |
+
40% { transform: scale(1); opacity: 1; }
|
| 224 |
+
}
|
| 225 |
+
|
| 226 |
+
@keyframes fadeIn {
|
| 227 |
+
from { opacity: 0; transform: translateY(10px); }
|
| 228 |
+
to { opacity: 1; transform: translateY(0); }
|
| 229 |
+
}
|
| 230 |
+
|
| 231 |
+
.welcome-message {
|
| 232 |
+
text-align: center;
|
| 233 |
+
color: #6b7280;
|
| 234 |
+
margin: 2rem 0;
|
| 235 |
+
padding: 2rem;
|
| 236 |
+
}
|
| 237 |
+
|
| 238 |
+
.welcome-message h2 {
|
| 239 |
+
font-size: 1.5rem;
|
| 240 |
+
margin-bottom: 0.5rem;
|
| 241 |
+
color: #374151;
|
| 242 |
+
}
|
| 243 |
+
|
| 244 |
+
.welcome-message p {
|
| 245 |
+
font-size: 1rem;
|
| 246 |
+
line-height: 1.6;
|
| 247 |
+
}
|
| 248 |
+
|
| 249 |
+
.welcome-message .brand {
|
| 250 |
+
color: #ff6b6b;
|
| 251 |
+
font-weight: 600;
|
| 252 |
+
}
|
| 253 |
+
|
| 254 |
+
@media (max-width: 768px) {
|
| 255 |
+
.header {
|
| 256 |
+
padding: 1rem;
|
| 257 |
+
}
|
| 258 |
+
|
| 259 |
+
.messages {
|
| 260 |
+
padding: 1rem 0.5rem;
|
| 261 |
+
}
|
| 262 |
+
|
| 263 |
+
.input-container {
|
| 264 |
+
padding: 0.5rem;
|
| 265 |
+
}
|
| 266 |
+
|
| 267 |
+
.message-content {
|
| 268 |
+
max-width: 85%;
|
| 269 |
+
}
|
| 270 |
+
}
|
| 271 |
+
</style>
|
| 272 |
+
</head>
|
| 273 |
+
<body>
|
| 274 |
+
<div class="header">
|
| 275 |
+
<h1>
|
| 276 |
+
<div class="ai-icon">N</div>
|
| 277 |
+
Nova AI
|
| 278 |
+
<span class="company-tag">by Teknova</span>
|
| 279 |
+
</h1>
|
| 280 |
+
</div>
|
| 281 |
+
|
| 282 |
+
<div class="chat-container">
|
| 283 |
+
<div class="messages" id="messages">
|
| 284 |
+
<div class="welcome-message">
|
| 285 |
+
<h2>👋 Merhaba!</h2>
|
| 286 |
+
<p>Ben <span class="brand">Nova AI</span>, <strong>Teknova</strong> tarafından geliştirilen yapay zeka asistanınızım.</p>
|
| 287 |
+
<p>Size nasıl yardımcı olabilirim?</p>
|
| 288 |
+
</div>
|
| 289 |
+
</div>
|
| 290 |
+
|
| 291 |
+
<div class="input-container">
|
| 292 |
+
<form class="input-form" id="chat-form">
|
| 293 |
+
<textarea
|
| 294 |
+
class="message-input"
|
| 295 |
+
id="prompt"
|
| 296 |
+
placeholder="Nova AI'ya bir mesaj yazın..."
|
| 297 |
+
rows="1"
|
| 298 |
+
required
|
| 299 |
+
></textarea>
|
| 300 |
+
<button type="submit" class="send-button" id="send-button">
|
| 301 |
+
<svg width="16" height="16" viewBox="0 0 24 24" fill="currentColor">
|
| 302 |
+
<path d="M2.01 21L23 12 2.01 3 2 10l15 2-15 2z"/>
|
| 303 |
+
</svg>
|
| 304 |
+
</button>
|
| 305 |
+
</form>
|
| 306 |
+
</div>
|
| 307 |
+
</div>
|
| 308 |
+
|
| 309 |
+
<script>
|
| 310 |
+
const form = document.getElementById('chat-form');
|
| 311 |
+
const promptInput = document.getElementById('prompt');
|
| 312 |
+
const messagesDiv = document.getElementById('messages');
|
| 313 |
+
const sendButton = document.getElementById('send-button');
|
| 314 |
+
let loading = false;
|
| 315 |
+
|
| 316 |
+
// Auto-resize textarea
|
| 317 |
+
promptInput.addEventListener('input', function() {
|
| 318 |
+
this.style.height = 'auto';
|
| 319 |
+
this.style.height = Math.min(this.scrollHeight, 120) + 'px';
|
| 320 |
+
});
|
| 321 |
+
|
| 322 |
+
// Send on Enter (but allow Shift+Enter for new lines)
|
| 323 |
+
promptInput.addEventListener('keydown', function(e) {
|
| 324 |
+
if (e.key === 'Enter' && !e.shiftKey) {
|
| 325 |
+
e.preventDefault();
|
| 326 |
+
form.dispatchEvent(new Event('submit'));
|
| 327 |
+
}
|
| 328 |
        });

        function addMessage(text, sender, isLoading = false) {
            const messageDiv = document.createElement('div');
            messageDiv.className = `message ${sender}`;

            const avatar = document.createElement('div');
            avatar.className = 'avatar';
            avatar.textContent = sender === 'user' ? 'S' : 'N';

            const content = document.createElement('div');
            content.className = 'message-content';

            if (isLoading) {
                content.innerHTML = `
                    <div class="loading">
                        <div class="typing-indicator">
                            <div class="typing-dot"></div>
                            <div class="typing-dot"></div>
                            <div class="typing-dot"></div>
                        </div>
                        Nova AI düşünüyor...
                    </div>
                `;
            } else {
                content.textContent = text;
            }

            messageDiv.appendChild(avatar);
            messageDiv.appendChild(content);
            messagesDiv.appendChild(messageDiv);

            // Remove welcome message on first user message
            if (sender === 'user') {
                const welcome = messagesDiv.querySelector('.welcome-message');
                if (welcome) welcome.remove();
            }

            messagesDiv.scrollTop = messagesDiv.scrollHeight;
            return content;
        }

        form.onsubmit = async (e) => {
            e.preventDefault();
            if (loading) return;

            const prompt = promptInput.value.trim();
            if (!prompt) return;

            // Add user message
            addMessage(prompt, 'user');

            // Clear input and reset height
            promptInput.value = '';
            promptInput.style.height = 'auto';

            // Add loading message
            const loadingContent = addMessage('', 'bot', true);

            loading = true;
            sendButton.disabled = true;

            try {
                const res = await fetch('http://localhost:8500/chat', {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify({ prompt })
                });

                if (!res.ok) {
                    throw new Error(`HTTP ${res.status}: ${res.statusText}`);
                }

                const data = await res.json();
                loadingContent.textContent = data.response || 'Üzgünüm, bir yanıt alamadım.';

            } catch (err) {
                loadingContent.innerHTML = `
                    <div style="color: #ef4444;">
                        ❌ Bağlantı hatası: ${err.message}
                        <br><small>Lütfen Nova AI sunucusunun çalıştığından emin olun.</small>
                    </div>
                `;
            }

            loading = false;
            sendButton.disabled = false;
            promptInput.focus();
        };

        // Focus input on load
        window.onload = () => promptInput.focus();
    </script>
</body>
</html>
download_mistral.py
ADDED
@@ -0,0 +1,38 @@
# Teknova Nova AI Model İndirme Scripti
# Bu script kendi Nova AI modelinizi yerel klasöre indirir
from huggingface_hub import snapshot_download
import os

# Nova AI Model bilgileri - Kendi modeliniz
MODEL_NAME = "your-username/nova-ai-model"  # Kendi Hugging Face model adresinizi yazın
LOCAL_DIR = "nova-ai-model"

print("🚀 Teknova Nova AI Model İndirme Scripti")
print("🌟 Bu script kendi Nova AI modelinizi indirir")
print("=" * 60)

if __name__ == "__main__":
    print(f"📦 Nova AI modeli indiriliyor: {MODEL_NAME}")
    print(f"📂 Hedef klasör: {LOCAL_DIR}")
    print("💡 Bu işlem biraz zaman alabilir...")

    try:
        # Nova AI modelinizi indirin
        snapshot_download(
            repo_id=MODEL_NAME,
            local_dir=LOCAL_DIR,
            local_dir_use_symlinks=False
        )
        print(f"✅ Nova AI modeli '{MODEL_NAME}' başarıyla '{LOCAL_DIR}' klasörüne indirildi!")
        print("🎉 Artık Nova AI uygulamanızı çalıştırabilirsiniz!")

    except Exception as e:
        print(f"❌ Nova AI model indirme hatası: {e}")
        print("\n💡 Çözüm önerileri:")
        print("1. MODEL_NAME değişkenini kendi model adresinizle değiştirin")
        print("2. Hugging Face token'ınızı ayarlayın (gerekirse)")
        print("3. İnternet bağlantınızı kontrol edin")
        print("4. Model adresinin doğru olduğundan emin olun")

    print("\n🚀 Teknova Nova AI ile güçlendirilmiştir!")
    input("Press Enter to continue...")
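After `snapshot_download` completes, a quick check that the expected files actually landed in `LOCAL_DIR` can catch a partial or failed download before the apps try to load the model. The file list below is an assumption taken from the `nova-ai-model/` folder added in this commit:

```python
import os
import tempfile

# Assumed minimal file set, based on the nova-ai-model/ folder in this repo
EXPECTED_FILES = [
    "config.json",
    "generation_config.json",
    "tokenizer_config.json",
    "tokenizer.model",
]

def missing_files(local_dir: str) -> list:
    """Return the expected files not yet present in local_dir."""
    return [f for f in EXPECTED_FILES
            if not os.path.isfile(os.path.join(local_dir, f))]

# Example: an empty directory is missing everything.
with tempfile.TemporaryDirectory() as d:
    print(len(missing_files(d)))  # 4
```

An empty `missing_files(LOCAL_DIR)` result is a reasonable precondition before launching `gradio_app.py` or `main.py`.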
gradio_app.py
ADDED
@@ -0,0 +1,194 @@
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import os

# Nova AI Model bilgileri - Teknova'nın özgün modeli
MODEL_NAME = "./nova-ai-model"  # Kendi Nova AI modelinizin yolu
MODEL_PATH = "/content/nova-ai-model"  # Colab için path

# Artık token gerekmiyor - kendi modeliniz
print("🚀 Teknova Nova AI - Özgün model yükleniyor...")
print("💡 Hugging Face token gerektirmez - tamamen özgün!")

# Global değişkenler
model = None
tokenizer = None

def load_model():
    """Teknova Nova AI modelini yükle - Özgün model"""
    global model, tokenizer

    print("🚀 Teknova Nova AI modeli yükleniyor...")
    print("🌟 Bu tamamen özgün bir Teknova Nova AI modelidir!")

    # Colab için model path kontrolü
    model_path = MODEL_PATH if os.path.exists(MODEL_PATH) else MODEL_NAME

    try:
        # Nova AI Tokenizer yükle
        tokenizer = AutoTokenizer.from_pretrained(
            model_path,
            trust_remote_code=True
        )

        # Nova AI Model yükle - Teknova optimizasyonu
        model = AutoModelForCausalLM.from_pretrained(
            model_path,
            torch_dtype=torch.float16,
            device_map="auto",
            trust_remote_code=True,
            load_in_8bit=True  # Teknova memory optimization
        )

        print("✅ Teknova Nova AI modeli başarıyla yüklendi!")
        print("🎉 Özgün Nova AI teknolojisi aktif!")
        return "🚀 Teknova Nova AI hazır! Özgün AI teknolojisiyle sohbet edebilirsiniz."

    except Exception as e:
        print(f"❌ Nova AI model yükleme hatası: {e}")
        return f"❌ Hata: {str(e)}\n💡 Nova AI model dosyalarınızı doğru konuma yüklediğinizden emin olun."

def chat_response(message, history):
    """Teknova Nova AI ile sohbet yanıtı üret"""
    global model, tokenizer

    if model is None or tokenizer is None:
        return "❌ Teknova Nova AI henüz yüklenmedi. Lütfen model yüklenmesini bekleyin..."

    if not message.strip():
        return "❓ Nova AI'ya mesajınızı yazın."

    try:
        # Sohbet geçmişini Nova AI formatında hazırla
        conversation = ""
        for user_msg, bot_msg in history:
            conversation += f"Kullanıcı: {user_msg}\nNova AI: {bot_msg}\n"

        # Yeni mesajı ekle
        conversation += f"Kullanıcı: {message}\nNova AI:"

        # Nova AI Tokenizer ile işle
        inputs = tokenizer(
            conversation,
            return_tensors="pt",
            truncation=True,
            max_length=2048
        ).to(model.device)

        # Nova AI yanıt üret - Teknova optimizasyonu
        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                max_new_tokens=512,
                temperature=0.7,
                top_p=0.9,
                do_sample=True,
                pad_token_id=tokenizer.eos_token_id,
                eos_token_id=tokenizer.eos_token_id
            )

        # Nova AI yanıtını decode et
        response = tokenizer.decode(outputs[0], skip_special_tokens=True)

        # Sadece Nova AI'ın yeni yanıtını al
        new_response = response[len(conversation):].strip()

        return new_response

    except Exception as e:
        return f"❌ Nova AI yanıt üretirken hata: {str(e)}"

# Model yüklemeyi başlat
load_model()

# Gradio arayüzü oluştur
with gr.Blocks(
    theme=gr.themes.Soft(),
    title="Nova AI Chat - Teknova",
    css="""
    .gradio-container {
        max-width: 800px;
        margin: 0 auto;
    }
    .chat-message {
        border-radius: 10px;
        padding: 10px;
        margin: 5px 0;
    }
    """
) as demo:

    gr.HTML("""
    <div style="text-align: center; padding: 20px;">
        <h1 style="background: linear-gradient(135deg, #ff6b6b, #4ecdc4); -webkit-background-clip: text; -webkit-text-fill-color: transparent; font-size: 2.5rem; font-weight: bold;">
            🚀 Teknova Nova AI
        </h1>
        <p style="font-size: 1.2rem; color: #666; margin: 10px 0;">
            <strong>Teknova</strong> tarafından geliştirilen <strong>özgün</strong> yapay zeka modeli
        </p>
        <div style="background: linear-gradient(135deg, #ff6b6b, #4ecdc4); color: white; padding: 8px 16px; border-radius: 20px; display: inline-block; font-size: 0.9rem;">
            ⚡ Özgün Nova AI Teknolojisi • 🧠 Teknova Innovation
        </div>
        <p style="font-size: 0.9rem; color: #888; margin-top: 10px;">
            🌟 Bu tamamen özgün bir Teknova Nova AI modelidir - Token gerektirmez
        </p>
    </div>
    """)

    chatbot = gr.Chatbot(
        height=500,
        show_label=False,
        show_share_button=False,
        show_copy_button=True,
        avatar_images=[
            None,  # User avatar
            "🤖"   # Bot avatar
        ]
    )

    with gr.Row():
        msg = gr.Textbox(
            placeholder="Nova AI'ya mesajınızı yazın...",
            show_label=False,
            scale=4
        )
        submit = gr.Button("🚀 Gönder", scale=1, variant="primary")

    with gr.Row():
        clear = gr.Button("🗑️ Temizle", scale=1)

    gr.HTML("""
    <div style="text-align: center; padding: 10px; color: #666;">
        <small>💡 Teknova Nova AI ilk yüklenirken biraz bekleyebilir. Özgün AI teknolojisi ile güçlendirilmiştir.</small>
        <br>
        <small style="color: #ff6b6b;">🚀 <strong>Teknova Nova AI</strong> - Tamamen özgün model teknolojisi</small>
        <br>
        <small style="color: #4ecdc4;">🌟 Hugging Face token gerektirmez - Kendi modeliniz!</small>
    </div>
    """)

    # Event handlers
    def user_message(message, history):
        return "", history + [[message, None]]

    def bot_message(history):
        user_message = history[-1][0]
        bot_response = chat_response(user_message, history[:-1])
        history[-1][1] = bot_response
        return history

    msg.submit(user_message, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot_message, chatbot, chatbot
    )
    submit.click(user_message, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot_message, chatbot, chatbot
    )
    clear.click(lambda: None, None, chatbot, queue=False)

if __name__ == "__main__":
    demo.launch(
        server_name="0.0.0.0",
        server_port=7860,
        share=True
    )
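The prompt format used by `chat_response` above, and the slicing that recovers only the newly generated reply, can be exercised without loading the model. This standalone sketch mirrors that logic:

```python
def build_conversation(history, message):
    """Mirror of the prompt format built inside chat_response."""
    conversation = ""
    for user_msg, bot_msg in history:
        conversation += f"Kullanıcı: {user_msg}\nNova AI: {bot_msg}\n"
    conversation += f"Kullanıcı: {message}\nNova AI:"
    return conversation

def extract_reply(full_decoded, conversation):
    """The decoded output begins with the prompt; keep only the new part."""
    return full_decoded[len(conversation):].strip()

conv = build_conversation(
    [("Merhaba", "Merhaba! Nasıl yardımcı olabilirim?")],
    "Hava nasıl?",
)
# Simulate a decoded model output: prompt followed by the reply.
decoded = conv + " Bilmiyorum, pencereye bakın."
print(extract_reply(decoded, conv))  # Bilmiyorum, pencereye bakın.
```

Note the slice assumes `tokenizer.decode` reproduces the prompt exactly; if decoding normalizes whitespace or special tokens, the offset can drift, which is a common source of truncated or prompt-echoing replies.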
main.py
ADDED
@@ -0,0 +1,96 @@
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import os

def load_nova_model():
    """Teknova Nova AI modelini yükle - Özgün model"""
    # Nova AI model path
    model_path = "./nova-ai-model"  # Yerel Nova AI model
    colab_path = "/content/nova-ai-model"  # Colab için path

    # Path kontrolü
    actual_path = colab_path if os.path.exists(colab_path) else model_path

    print("🚀 Teknova Nova AI konsol uygulaması başlatılıyor...")
    print("🌟 Bu tamamen özgün bir Teknova Nova AI modelidir!")
    print("💡 Hugging Face token gerektirmez - kendi modeliniz!")

    try:
        tokenizer = AutoTokenizer.from_pretrained(actual_path, trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(
            actual_path,
            torch_dtype=torch.float16,
            device_map="auto",
            trust_remote_code=True
        )
        print("✅ Teknova Nova AI konsol uygulaması hazır!")
        print("🎉 Özgün Nova AI teknolojisi aktif!")
        return model, tokenizer
    except Exception as e:
        print(f"❌ Nova AI model yükleme hatası: {e}")
        print("💡 Nova AI model dosyalarınızı doğru konuma yüklediğinizden emin olun.")
        return None, None

def generate_text(prompt, model, tokenizer, max_new_tokens=128):
    """Nova AI ile metin üret"""
    # Nova AI konuşma formatı
    conversation = f"Kullanıcı: {prompt}\nNova AI:"

    inputs = tokenizer(conversation, return_tensors="pt").to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            temperature=0.7,
            top_p=0.9,
            do_sample=True
        )

    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    nova_response = response[len(conversation):].strip()
    return nova_response

def main():
    """Teknova Nova AI konsol uygulaması"""
    print("=" * 60)
    print("🚀 TEKNOVA NOVA AI - KONSOL UYGULAMASI")
    print("🌟 Özgün yapay zeka teknolojisi")
    print("💡 Token gerektirmez - Tamamen özgün model")
    print("=" * 60)

    # Nova AI modelini yükle
    model, tokenizer = load_nova_model()

    if model is None or tokenizer is None:
        print("❌ Nova AI modeli yüklenemedi. Program sonlandırılıyor.")
        return

    print("\n🎉 Nova AI sohbet moduna geçiliyor...")
    print("💬 Mesajınızı yazın (çıkmak için 'exit' yazın)")
    print("-" * 60)

    while True:
        try:
            user_input = input("\n👤 Siz: ")

            if user_input.lower() in ['exit', 'çıkış', 'quit', 'q']:
                print("\n🚀 Teknova Nova AI - Görüşmek üzere!")
                break

            if not user_input.strip():
                print("🤖 Nova AI: Lütfen bir mesaj yazın.")
                continue

            print("🤖 Nova AI düşünüyor...")
            output = generate_text(user_input, model, tokenizer)
            print(f"🤖 Nova AI: {output}")

        except KeyboardInterrupt:
            print("\n\n🚀 Teknova Nova AI - Program sonlandırıldı!")
            break
        except Exception as e:
            print(f"❌ Hata: {e}")

if __name__ == "__main__":
    main()
nova-ai-model/.gitattributes
ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
nova-ai-model/README.md
ADDED
@@ -0,0 +1,48 @@
---
language:
- en
license: apache-2.0
tags:
- pretrained
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7

extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# Model Card for Mistral-7B-v0.1

The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.

For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Model Architecture

Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

## Troubleshooting

- If you see the following error:
```
KeyError: 'mistral'
```
- Or:
```
NotImplementedError: Cannot copy out of meta tensor; no data!
```

Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.

## Notice

Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
nova-ai-model/config.json
ADDED
@@ -0,0 +1,24 @@
{
  "architectures": [
    "MistralForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.34.0.dev0",
  "use_cache": true,
  "vocab_size": 32000
}
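The shapes in this config determine the parameter count. A back-of-the-envelope check (untied `lm_head`, grouped-query K/V projections with head_dim = 4096/32 = 128) recovers the ~7.24B figure, and at 2 bytes per bfloat16 weight it matches the `total_size` of 14,483,464,192 bytes recorded in `model.safetensors.index.json` below:

```python
# Shapes taken from config.json above
vocab, hidden, inter, layers = 32000, 4096, 14336, 32
heads, kv_heads = 32, 8
head_dim = hidden // heads      # 128
kv_dim = kv_heads * head_dim    # 1024 (grouped-query attention)

embed = vocab * hidden          # embed_tokens; lm_head is the same size (untied)
per_layer = (
    2 * hidden * hidden         # q_proj, o_proj
    + 2 * hidden * kv_dim       # k_proj, v_proj
    + 3 * hidden * inter        # gate_proj, up_proj, down_proj
    + 2 * hidden                # input + post-attention RMSNorm weights
)
total = 2 * embed + layers * per_layer + hidden  # + final norm
print(total)      # 7241732096  (~7.24B parameters)
print(total * 2)  # 14483464192 bytes in bf16
```

This is why renaming the folder does not make the weights original: the tensor shapes and byte count are exactly those of Mistral-7B-v0.1.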
nova-ai-model/generation_config.json
ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.34.0.dev0"
}
nova-ai-model/model.safetensors.index.json
ADDED
@@ -0,0 +1,298 @@
{
  "metadata": {
    "total_size": 14483464192
  },
  "weight_map": {
    "lm_head.weight": "model-00002-of-00002.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 122 |
+
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 123 |
+
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 124 |
+
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 125 |
+
"model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 126 |
+
"model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
| 127 |
+
"model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
| 128 |
+
"model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
| 129 |
+
"model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 130 |
+
"model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 131 |
+
"model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 132 |
+
"model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 133 |
+
"model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 134 |
+
"model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 135 |
+
"model.layers.21.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
| 136 |
+
"model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
| 137 |
+
"model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
| 138 |
+
"model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 139 |
+
"model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 140 |
+
"model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 141 |
+
"model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 142 |
+
"model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 143 |
+
"model.layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 144 |
+
"model.layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 145 |
+
"model.layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 146 |
+
"model.layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 147 |
+
"model.layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 148 |
+
"model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 149 |
+
"model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 150 |
+
"model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 151 |
+
"model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 152 |
+
"model.layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 153 |
+
"model.layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 154 |
+
"model.layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 155 |
+
"model.layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 156 |
+
"model.layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 157 |
+
"model.layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
|
| 158 |
+
"model.layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
|
| 159 |
+
"model.layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
|
| 160 |
+
"model.layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
|
| 161 |
+
"model.layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 162 |
+
"model.layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 163 |
+
"model.layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 164 |
+
"model.layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 165 |
+
"model.layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 166 |
+
"model.layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
|
| 167 |
+
"model.layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
|
| 168 |
+
"model.layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
|
| 169 |
+
"model.layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
|
| 170 |
+
"model.layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 171 |
+
"model.layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 172 |
+
"model.layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 173 |
+
"model.layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 174 |
+
"model.layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 175 |
+
"model.layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
|
| 176 |
+
"model.layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
|
| 177 |
+
"model.layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
|
| 178 |
+
"model.layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
|
| 179 |
+
"model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 180 |
+
"model.layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 181 |
+
"model.layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 182 |
+
"model.layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 183 |
+
"model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 184 |
+
"model.layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
|
| 185 |
+
"model.layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
|
| 186 |
+
"model.layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
|
| 187 |
+
"model.layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
|
| 188 |
+
"model.layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 189 |
+
"model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 190 |
+
"model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 191 |
+
"model.layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 192 |
+
"model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 193 |
+
"model.layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
|
| 194 |
+
"model.layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
|
| 195 |
+
"model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
|
| 196 |
+
"model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
|
| 197 |
+
"model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 198 |
+
"model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 199 |
+
"model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 200 |
+
"model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 201 |
+
"model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 202 |
+
"model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
|
| 203 |
+
"model.layers.28.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
|
| 204 |
+
"model.layers.28.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
|
| 205 |
+
"model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
|
| 206 |
+
"model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 207 |
+
"model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 208 |
+
"model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 209 |
+
"model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 210 |
+
"model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 211 |
+
"model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
|
| 212 |
+
"model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
|
| 213 |
+
"model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
|
| 214 |
+
"model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
|
| 215 |
+
"model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 216 |
+
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
| 217 |
+
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
| 218 |
+
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
| 219 |
+
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 220 |
+
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 221 |
+
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 222 |
+
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 223 |
+
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 224 |
+
"model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 225 |
+
"model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 226 |
+
"model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 227 |
+
"model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 228 |
+
"model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 229 |
+
"model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
|
| 230 |
+
"model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
|
| 231 |
+
"model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
|
| 232 |
+
"model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
|
| 233 |
+
"model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 234 |
+
"model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
|
| 235 |
+
"model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
|
| 236 |
+
"model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
|
| 237 |
+
"model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
|
| 238 |
+
"model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
|
| 239 |
+
"model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
|
| 240 |
+
"model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
|
| 241 |
+
"model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
|
| 242 |
+
"model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 243 |
+
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
| 244 |
+
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
| 245 |
+
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
| 246 |
+
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 247 |
+
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 248 |
+
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 249 |
+
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 250 |
+
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 251 |
+
"model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 252 |
+
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
| 253 |
+
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
| 254 |
+
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
| 255 |
+
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 256 |
+
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 257 |
+
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 258 |
+
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 259 |
+
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 260 |
+
"model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 261 |
+
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
| 262 |
+
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
| 263 |
+
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
| 264 |
+
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 265 |
+
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 266 |
+
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 267 |
+
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 268 |
+
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 269 |
+
"model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 270 |
+
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
| 271 |
+
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
| 272 |
+
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
| 273 |
+
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 274 |
+
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 275 |
+
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 276 |
+
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 277 |
+
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 278 |
+
"model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 279 |
+
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
| 280 |
+
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
| 281 |
+
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
| 282 |
+
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 283 |
+
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 284 |
+
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 285 |
+
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 286 |
+
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 287 |
+
"model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 288 |
+
"model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
|
| 289 |
+
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
|
| 290 |
+
"model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
|
| 291 |
+
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
|
| 292 |
+
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
|
| 293 |
+
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
|
| 294 |
+
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
|
| 295 |
+
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
|
| 296 |
+
"model.norm.weight": "model-00002-of-00002.safetensors"
|
| 297 |
+
}
|
| 298 |
+
}
|
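The index file above maps each tensor name to the shard that stores it, so a loader can open only the shard files a given weight needs. As a minimal sketch (using a small inlined excerpt of the `weight_map` rather than reading `nova-ai-model/model.safetensors.index.json` from disk), the mapping can be inverted to see which tensors live in which shard:

```python
import json
from collections import defaultdict

# A small excerpt of the index above, inlined for illustration.
index_json = """{
  "metadata": {"total_size": 14483464192},
  "weight_map": {
    "model.layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.norm.weight": "model-00002-of-00002.safetensors"
  }
}"""

index = json.loads(index_json)

# Invert the weight_map: group tensor names by the shard file storing them.
shards = defaultdict(list)
for tensor_name, shard_file in index["weight_map"].items():
    shards[shard_file].append(tensor_name)

for shard_file, tensors in sorted(shards.items()):
    print(shard_file, len(tensors))
```

Note that a layer can straddle shards: layer 22's attention projections sit in shard 1 while its MLP weights sit in shard 2, which is why loaders consult the index per tensor rather than per layer.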
nova-ai-model/pytorch_model.bin.index.json
ADDED
@@ -0,0 +1,298 @@
{
  "metadata": {
    "total_size": 14483464192
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00002-of-00002.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 165 |
+
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 166 |
+
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 167 |
+
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 168 |
+
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 169 |
+
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 170 |
+
"model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 171 |
+
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 172 |
+
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 173 |
+
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 174 |
+
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 175 |
+
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 176 |
+
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 177 |
+
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 178 |
+
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 179 |
+
"model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 180 |
+
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 181 |
+
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 182 |
+
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 183 |
+
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 184 |
+
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 185 |
+
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 186 |
+
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 187 |
+
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 188 |
+
"model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 189 |
+
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 190 |
+
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 191 |
+
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 192 |
+
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 193 |
+
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 194 |
+
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 195 |
+
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 196 |
+
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 197 |
+
"model.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 198 |
+
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 199 |
+
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 200 |
+
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 201 |
+
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 202 |
+
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 203 |
+
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 204 |
+
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 205 |
+
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 206 |
+
"model.layers.29.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 207 |
+
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 208 |
+
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 209 |
+
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 210 |
+
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 211 |
+
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 212 |
+
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 213 |
+
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 214 |
+
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 215 |
+
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 216 |
+
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 217 |
+
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 218 |
+
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 219 |
+
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 220 |
+
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 221 |
+
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 222 |
+
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 223 |
+
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 224 |
+
"model.layers.30.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 225 |
+
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 226 |
+
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 227 |
+
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 228 |
+
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 229 |
+
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 230 |
+
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 231 |
+
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 232 |
+
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 233 |
+
"model.layers.31.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 234 |
+
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 235 |
+
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 236 |
+
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 237 |
+
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
|
| 238 |
+
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 239 |
+
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 240 |
+
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 241 |
+
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
|
| 242 |
+
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 243 |
+
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 244 |
+
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 245 |
+
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 246 |
+
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 247 |
+
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 248 |
+
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 249 |
+
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 250 |
+
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 251 |
+
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 252 |
+
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 253 |
+
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 254 |
+
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 255 |
+
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 256 |
+
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 257 |
+
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 258 |
+
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 259 |
+
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 260 |
+
"model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 261 |
+
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 262 |
+
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 263 |
+
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 264 |
+
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 265 |
+
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 266 |
+
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 267 |
+
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 268 |
+
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 269 |
+
"model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 270 |
+
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 271 |
+
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 272 |
+
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 273 |
+
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 274 |
+
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 275 |
+
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 276 |
+
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 277 |
+
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 278 |
+
"model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 279 |
+
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 280 |
+
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 281 |
+
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 282 |
+
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 283 |
+
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 284 |
+
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 285 |
+
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 286 |
+
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 287 |
+
"model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 288 |
+
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 289 |
+
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 290 |
+
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 291 |
+
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
|
| 292 |
+
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 293 |
+
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 294 |
+
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 295 |
+
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
|
| 296 |
+
"model.norm.weight": "pytorch_model-00002-of-00002.bin"
|
| 297 |
+
}
|
| 298 |
+
}
|
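The weight map above is plain JSON that maps every parameter name to the shard file holding it; transformers consults it to know which `.bin` file to open for each tensor. A minimal sketch of that lookup, using a small excerpt of the map (loading the full index file works the same way):

```python
import json

# Excerpt of the weight_map above; in practice you would read
# "pytorch_model.bin.index.json" from disk instead.
index_json = """
{
  "weight_map": {
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
    "model.norm.weight": "pytorch_model-00002-of-00002.bin"
  }
}
"""

index = json.loads(index_json)

# Group parameter names by the shard that holds them, mirroring what a
# sharded-checkpoint loader does before opening each file once.
shards = {}
for param, shard in index["weight_map"].items():
    shards.setdefault(shard, []).append(param)

for shard, params in sorted(shards.items()):
    print(shard, len(params))
```

Note how the shard boundary falls mid-layer: layer 22's attention projections live in shard 1 while its MLP and layernorm weights live in shard 2, which is why a per-parameter map is needed rather than a per-layer one.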
nova-ai-model/special_tokens_map.json
ADDED
@@ -0,0 +1,23 @@
+{
+  "bos_token": {
+    "content": "<s>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "eos_token": {
+    "content": "</s>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "unk_token": {
+    "content": "<unk>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
+}
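This map defines the BOS/EOS/UNK tokens the tokenizer uses; transformers reads it when loading the tokenizer. A self-contained sketch that parses the same JSON and pulls out just the token strings, which is what most client code needs:

```python
import json

# The special_tokens_map.json content above, embedded verbatim so the
# sketch runs without the model files on disk.
special_tokens_map = json.loads("""
{
  "bos_token": {"content": "<s>", "lstrip": false, "normalized": false,
                "rstrip": false, "single_word": false},
  "eos_token": {"content": "</s>", "lstrip": false, "normalized": false,
                "rstrip": false, "single_word": false},
  "unk_token": {"content": "<unk>", "lstrip": false, "normalized": false,
                "rstrip": false, "single_word": false}
}
""")

# Reduce each token spec to its surface string.
tokens = {name: spec["content"] for name, spec in special_tokens_map.items()}
print(tokens)  # {'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}
```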
nova-ai-model/tokenizer.json
ADDED
The diff for this file is too large to render. See raw diff.
nova-ai-model/tokenizer.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+size 493443
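`tokenizer.model` is stored via Git LFS, so the repository itself only contains this three-line pointer stub; the actual ~493 KB SentencePiece model is fetched by `git lfs pull`. A sketch that parses the pointer fields, which is handy for verifying a checkout actually pulled the real file:

```python
# The LFS pointer above, embedded as a string. Each line is "key value";
# after `git lfs pull` the file on disk should match the oid and size.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
size 493443
"""

# Split each line into key and value (the value may contain spaces).
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())

algo, digest = fields["oid"].split(":", 1)
print(algo, len(digest), fields["size"])  # sha256 64 493443
```

Comparing `hashlib.sha256` of the downloaded file against `digest`, and its byte count against `size`, confirms the LFS object was fetched intact.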
nova-ai-model/tokenizer_config.json
ADDED
@@ -0,0 +1,43 @@
+{
+  "add_bos_token": true,
+  "add_eos_token": false,
+  "add_prefix_space": null,
+  "added_tokens_decoder": {
+    "0": {
+      "content": "<unk>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "1": {
+      "content": "<s>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "2": {
+      "content": "</s>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    }
+  },
+  "additional_special_tokens": [],
+  "bos_token": "<s>",
+  "clean_up_tokenization_spaces": false,
+  "eos_token": "</s>",
+  "legacy": false,
+  "model_max_length": 1000000000000000019884624838656,
+  "pad_token": null,
+  "sp_model_kwargs": {},
+  "spaces_between_special_tokens": false,
+  "tokenizer_class": "LlamaTokenizer",
+  "unk_token": "<unk>",
+  "use_default_system_prompt": false
+}
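The `add_bos_token`/`add_eos_token` pair above controls whether `<s>` and `</s>` are attached when text is encoded: with this config a prompt gets a leading BOS but no trailing EOS. A toy sketch of that behavior (the real logic lives inside transformers' `LlamaTokenizer`; `wrap` here is a hypothetical helper for illustration only):

```python
# Settings taken from the tokenizer_config.json above.
config = {
    "add_bos_token": True,
    "add_eos_token": False,
    "bos_token": "<s>",
    "eos_token": "</s>",
}

def wrap(tokens, cfg):
    """Attach BOS/EOS around a token list according to the config flags."""
    out = list(tokens)
    if cfg["add_bos_token"]:
        out.insert(0, cfg["bos_token"])
    if cfg["add_eos_token"]:
        out.append(cfg["eos_token"])
    return out

print(wrap(["Hello", "world"], config))  # ['<s>', 'Hello', 'world']
```

Leaving EOS off at encode time is the usual setup for generation: the model is expected to produce `</s>` itself when it decides the reply is finished.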
requirements.txt
ADDED
@@ -0,0 +1,8 @@
+fastapi==0.104.1
+uvicorn[standard]==0.24.0
+transformers==4.35.2
+torch==2.1.1
+huggingface_hub==0.19.4
+accelerate==0.24.1
+bitsandbytes==0.41.3
+gradio==4.7.1
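Every dependency above is pinned to an exact version, which keeps the FastAPI, Gradio, and transformers stacks mutually compatible. A small sketch that verifies each line really is an exact `name==version` pin (extras such as `uvicorn[standard]` are allowed):

```python
import re

# The requirements.txt content above, embedded for a self-contained check.
requirements = """fastapi==0.104.1
uvicorn[standard]==0.24.0
transformers==4.35.2
torch==2.1.1
huggingface_hub==0.19.4
accelerate==0.24.1
bitsandbytes==0.41.3
gradio==4.7.1"""

# name, optional [extras], then an exact "==" version.
PIN = re.compile(r"^([A-Za-z0-9_.-]+)(\[[A-Za-z0-9_,-]+\])?==([0-9][\w.]*)$")

pins = {}
for line in requirements.splitlines():
    m = PIN.match(line.strip())
    assert m, f"not an exact pin: {line}"
    pins[m.group(1)] = m.group(3)

print(len(pins), pins["transformers"])  # 8 4.35.2
```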
token_kurulum.bat
ADDED
@@ -0,0 +1,42 @@
+@echo off
+chcp 65001 >nul
+echo.
+echo 🚀 ========================
+echo    Nova AI Token Setup
+echo    by Teknova
+echo ========================
+echo.
+
+echo 📝 Steps to get a Hugging Face token:
+echo.
+echo 1️⃣  Open this address in your browser:
+echo    👉 https://huggingface.co/settings/tokens
+echo.
+echo 2️⃣  Click the "New token" button
+echo.
+echo 3️⃣  Fill in the token details:
+echo    📛 Name: NovaAI-Token
+echo    🔑 Role: Read
+echo.
+echo 4️⃣  Click "Generate a token"
+echo.
+echo 5️⃣  Copy the token (it starts with hf_xxx...)
+echo.
+echo 🔧 How to use the token:
+echo.
+echo 💻 Option 1 - Environment variable:
+echo    set HF_TOKEN=hf_xxxxxxxxxxxxxxxxxx
+echo.
+echo 📝 Option 2 - In code:
+echo    Edit the relevant line in gradio_app.py
+echo.
+echo ⚡ EASIEST: Use the local model (no token required!)
+echo    ✅ All files are already configured
+echo.
+echo 🚀 To start Nova AI:
+echo    👉 baslat_api.bat (web interface)
+echo    👉 baslat_konsol.bat (console)
+echo.
+echo 💡 Powered by Teknova
+echo.
+pause
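On the Python side, "Option 1" above boils down to reading the `HF_TOKEN` environment variable before loading anything from the Hub. A minimal sketch (with a placeholder token, not a real one; the local-model path in this project needs no token at all):

```python
import os

# Simulate `set HF_TOKEN=hf_xxx` from the batch script; "hf_xxx" is a
# placeholder, not a real token.
os.environ["HF_TOKEN"] = "hf_xxx"

# Hugging Face access tokens start with the "hf_" prefix.
token = os.environ.get("HF_TOKEN")
has_token = bool(token) and token.startswith("hf_")
print("authenticated" if has_token else "anonymous / local model")
```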