Commit a2e2ce20
Parent(s): (none)
Initial commit: Jagirl UI application
- .github/instructions/このプロジェクトの目的.instructions.md +65 -0
- .github/prompts/基本.instructions.md +177 -0
- .gitignore +96 -0
- .python-version +1 -0
- README.md +0 -0
- app.py +484 -0
- gradio_ui/.gitignore +42 -0
- gradio_ui/.python-version +1 -0
- gradio_ui/README.md +130 -0
- gradio_ui/app.py +220 -0
- gradio_ui/imgs/J_channel.svg +202 -0
- gradio_ui/imgs/tvasahi.svg +1 -0
- gradio_ui/main.py +6 -0
- gradio_ui/pyproject.toml +11 -0
- gradio_ui/requirements.txt +4 -0
- prompt_base.txt +23 -0
- pyproject.toml +53 -0
- utils/download_hugginface_repo.py +34 -0
- utils/logger.py +329 -0
- utils/migrate_logs.py +173 -0
- utils/test_download_hugginface_repo.py +52 -0
- utils/test_high_quality_generation.py +556 -0
- uv.lock +0 -0
- 仮想環境への入り方.txt +1 -0
.github/instructions/このプロジェクトの目的.instructions.md
ADDED
@@ -0,0 +1,65 @@
---
applyTo: '**'
---
Provide project context and coding guidelines that AI should follow when generating code, answering questions, or reviewing changes.
Miragic AI Image Generator - System prompt for UI development

This system prompt is for developing the user interface of an image-generation AI with Gradio.
Use Gradio and implement according to the following design principles.

[Design Principles]
1. Simplicity: keep the UI minimal and intuitive; remove unnecessary features and complexity.
2. Consistency: maintain a unified design language throughout.
3. Performance: ensure fast loading and responsive interaction.
4. Accessibility: design so that all users can use the application.
5. Extensibility: a structure that can be extended later without breaking existing features.

[Implementation Policy]
- Use Gradio's standard components (keep CSS styling to a minimum)
- Avoid excessive JavaScript (preferably none at all)
- Integrate model inference via Hugging Face pipelines
- Responsive design for both mobile and desktop

[Required Features]
- Text input: prompt input for image generation
- Image output: display of the generated image
- Optional parameters: controls for image size, style, and so on
- Generate button: trigger for image generation
- Visual feedback while processing
- Example support for prompts and negative prompts

Optional parameters
[Example optional-parameter settings]
🔧 Sampling Method (scheduler):
- DDIM: high quality, good results in few steps
- DPMSolver: fast and high quality (recommended)
- Euler: stable results
- EulerA: more varied results
- LMS: classical method
- PNDM: default

📊 Sampling Steps (num_inference_steps): 10-150
- Low (10-20): fast but lower quality
- Medium (25-40): well balanced (recommended)
- High (50-150): high quality but slow

🎲 Seed (generator):
- Same seed = same image (reproducibility)
- Random seed = variation

⚙️ CFG Scale (guidance_scale): 1-20
- Low (3-5): follows the prompt loosely, looks natural
- Medium (7-10): well balanced (recommended)
- High (12-20): follows the prompt strictly

🔧 Other:
- eta: noise control (0.0-1.0)
- width/height: image size (multiples of 64 recommended)

[Notes]
- All implementations must use Hugging Face pipelines
- Declare dependencies properly in requirements.txt
- Implement appropriate error handling and user feedback
- Follow the Hugging Face Spaces deployment guidelines
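The parameter ranges listed above (steps 10-150, CFG 1-20, sizes in multiples of 64) can be enforced with a small helper before calling the pipeline. The sketch below is ours, not part of the commit; the function name and defaults are illustrative assumptions:

```python
def clamp_generation_params(steps: int = 25, guidance: float = 7.5,
                            width: int = 1024, height: int = 1024) -> dict:
    """Return generation kwargs clamped to the documented ranges (hypothetical helper)."""
    steps = max(10, min(150, int(steps)))            # Sampling Steps: 10-150
    guidance = max(1.0, min(20.0, float(guidance)))  # CFG Scale: 1-20
    # width/height: snap to the nearest multiple of 64, as recommended above
    width = max(64, round(width / 64) * 64)
    height = max(64, round(height / 64) * 64)
    return {
        "num_inference_steps": steps,
        "guidance_scale": guidance,
        "width": width,
        "height": height,
    }
```

The resulting dict can then be unpacked into a diffusers pipeline call (`pipe(prompt=..., **params)`), which is how app.py below passes its parameters.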
.github/prompts/基本.instructions.md
ADDED
@@ -0,0 +1,177 @@
---
applyTo: '**'
---
Provide project context and coding guidelines that AI should follow when generating code, answering questions, or reviewing changes.


---

## 🧠 System Prompt

---

### 🔧 Basic Stance

You are an excellent software engineer.
Write, design, and review code under the following design principles, philosophy, and constraints.

---

### 🎯 Project Philosophy and Purpose

- **Purpose**: Build systems that work, are fast, and are safe, and build them simply.
- **Philosophy**:
  - **Microservice-style approach**: design with single-function modules in mind.
  - **Agile-friendly**: get a minimal feature set working first.
  - **No feature creep**: apply YAGNI (You Aren't Gonna Need It) strictly.
  - **Naming consistency**: do not mix `creators`, `generators`, `builders`, and the like.
  - **Loose coupling, high cohesion**: keep inter-module dependencies minimal.
  - Always design with **DRY, KISS, SOLID** in mind.

---

### 🧱 Design Principles

1. **It works**: verify behavior with a minimal feature set.
2. **It is fast**: implement with performance in mind.
3. **It is safe**: consider type safety, error handling, and security.
4. **Readable**: code anyone can read and understand.
5. **Maintainable**: easy to change and extend.

---

### 🚫 Prohibitions and Constraints

- ❌ **Do not mix ambiguously named roles** such as `creators`, `generators`, `builders`.
  - Example: `UserCreator`, `UserGenerator`, `UserBuilder` have ambiguous roles and must not coexist.
  - Alternative: use **consistent naming** such as `UserService`, `UserFactory`, `UserHandler`.
- ❌ **No feature creep**.
  - Implement only what is needed; apply **YAGNI** strictly.
- ❌ **No hardcoding**: turn settings and magic numbers into constants or config files.
- ❌ **Do not omit comments**: explain the code clearly with Japanese comments.
- ❌ **No meaningless abstraction**: avoid over-applying design patterns.
- ❌ **Do not change a working environment**: never carelessly modify verified environment settings.

---

### 🧪 Coding Conventions

- ✅ **Type hints** are mandatory.
- ✅ Write **docstrings** that make the purpose of each function and class clear.
- ✅ Always write **unit tests** (unittest / pytest).
- ✅ Implement **error handling** in this order:
  1. Analyze the error
  2. Consider how to handle it
  3. Implement the handling (try-except, logging, fallback, etc.)
- ✅ Run and test inside a **virtual environment**.

---

### 🧭 Module Design Guidelines

- Each module has **exactly one responsibility** (Single Responsibility Principle).
- Module names are **clear and consistent** (e.g. `user_service.py`, `auth_handler.py`).
- Keep modules **loosely coupled**, with minimal dependencies between them.
- Factor shared logic into **utils or shared modules**.

---

### 🧠 Development Process

1. **Get it working with minimal features** (MVP)
2. **Verify behavior → add tests → refactor**
3. **Add features as needed** (mind YAGNI)
4. **Continuously check naming and structural consistency**

---

### 🧾 Error Reporting Style

When an error occurs, report it in the following format:

```
[Error]
TypeError: 'NoneType' object is not subscriptable

[Root cause]
user_data is accessed while it is None.

[Fix]
Confirm user_data is not None before accessing it.
Example:
if user_data:
    return user_data['id']
else:
    return None
```

---

### 🧩 Summary: Characteristics of Ideal Code

| Characteristic | Description |
|--------------|------|
| **Simple** | Unnecessary features and complexity removed |
| **Clear** | Consistent naming and structure; intent is obvious |
| **Safe** | Thorough type safety and error handling |
| **Maintainable** | Easy to change and extend |
| **Testable** | Unit tests are easy to write |

---

### 📋 Metadata JSON Schema Design Principles

This project uses JSON metadata to manage layout information for image parts.

#### Role of the Metadata

- ✅ **Layout information only**: bbox (position/size), z_order (stacking order), opacity
- ✅ **Strict DRY**: color information (RGB, fill, stroke) is embedded inside the image files and is not duplicated in the JSON
- ✅ **Backward compatibility**: read optional fields with `.get()` and a default value

#### Required Fields

```json
{
  "part_name": "background",
  "bbox": {"x": 0, "y": 0, "width": 1024, "height": 1024},
  "canvas_size": {"width": 1024, "height": 1024},
  "z_order": 0,
  "has_alpha": true,
  "is_vector": false,
  "data_type": "bitmap"
}
```

#### Optional Fields (recommended)

```json
{
  "opacity": 1.0,
  "blend_mode": "normal",
  "color_profile": "sRGB IEC61966-2.1",
  "is_global_lineart": false,
  "parent_part": null,
  "z_order_mode": "auto",
  "vector_type": "text",
  "text_content": "LOGO",
  "fill_color": {"r": 255, "g": 0, "b": 0, "a": 255},
  "stroke_color": {"r": 0, "g": 0, "b": 0, "a": 255},
  "background_color": {"r": 255, "g": 255, "b": 255, "a": 0}
}
```

#### Design Notes

- ✅ **Color info in metadata**: store fill_color, stroke_color, background_color in RGBA form
  - **Use**: PDF fallback colors, preview generation, layer identification
  - **⚠️ Hard requirement**: when color info exists in the metadata, it **must be used** (to prevent defect claims)
  - **Priority**: metadata color info > actual RGBA values in the image file (customer-specified colors come first)
  - **Fallback**: only when the metadata has no color info, auto-extract from the image file
- ✅ **Automatic z_order**: sort by area descending → bbox.y descending → bbox.x ascending (ensures uniqueness)
- ✅ **Vector-specific info**: store `vector_type`, `text_content` (future extensibility)

---

If you'd like, this prompt can also be output as a `.md` or `.txt` file.
A concise version for team sharing can also be prepared. Feel free to ask.
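The automatic z_order rule and the `.get()`-based backward compatibility described above can be sketched as follows. This is our illustrative helper, not code from the commit; only the field names (`bbox`, `z_order`, `opacity`, `blend_mode`) come from the schema:

```python
def assign_z_order(parts: list) -> list:
    """Assign a unique z_order per the documented rule (hypothetical helper):
    area descending, then bbox.y descending, then bbox.x ascending."""
    def sort_key(part: dict) -> tuple:
        bbox = part["bbox"]
        area = bbox["width"] * bbox["height"]
        # Negate area and y for descending order; x stays ascending.
        return (-area, -bbox["y"], bbox["x"])

    ordered = sorted(parts, key=sort_key)
    for z, part in enumerate(ordered):
        part["z_order"] = z
        # Optional fields get defaults, mirroring the .get() convention above.
        part.setdefault("opacity", 1.0)
        part.setdefault("blend_mode", "normal")
    return ordered
```

Because the three sort keys are applied in order, two parts of equal area still get distinct positions as long as their bboxes differ, which is what makes the ordering unique.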
.gitignore
ADDED
@@ -0,0 +1,96 @@
# Python-generated files
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
dist/
wheels/
*.egg-info
.eggs/
*.egg
MANIFEST
develop-eggs/
lib/
lib64/
parts/
sdist/
var/
downloads/
eggs/
.installed.cfg
pip-wheel-metadata/
share/python-wheels/

# Virtual environments
.venv/
venv/
ENV/
env/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~
.vs/

# OS
.DS_Store
Thumbs.db
desktop.ini

# Output files (generated images)
outputs/
*.png
*.jpg
*.jpeg
*.webp

# Log files
logs/
*.log

# Working files (README.md and .md files under .github are not excluded)
*.md
!README.md
!.github/**/*.md
*.txt
!requirements.txt
!prompt_base.txt
!仮想環境への入り方.txt

# Test directory
test/

# Unneeded folders (scheduled for deletion, or scratch)
Miragic-AI-Image-Generator/
gradio_UI_Asahi-main/

# Hugging Face cache
.cache/
huggingface/

# PyTorch model files
*.pth
*.pt
*.ckpt
*.safetensors

# Jupyter Notebook
.ipynb_checkpoints/
*.ipynb

# Environment settings
.env
.env.local
.env.*

# Gradio
flagged/
gradio_cached_examples/

# Git
*.orig
.python-version
ADDED
@@ -0,0 +1 @@
3.11
README.md
ADDED
File without changes
app.py
ADDED
@@ -0,0 +1,484 @@
"""
Miragic AI Image Generator - Main Application
TV Asahi J Channel brand-integrated edition

Integrated features:
- Gradio UI framework (TV Asahi J Channel design)
- High-quality image generation with the aipicasso/jagirl model
- Detailed parameter control and logging
- Text-to-Image support
"""

import gradio as gr
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    DDIMScheduler,
    DPMSolverMultistepScheduler,
    EulerDiscreteScheduler,
    EulerAncestralDiscreteScheduler,
    PNDMScheduler,
    LMSDiscreteScheduler
)
from huggingface_hub import login
import os
import base64
from datetime import datetime
import random
import json
import logging
from pathlib import Path
import traceback
from PIL import Image
import numpy as np
import time

# Import the unified logger
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), 'utils'))
from logger import get_logger, log_generation

# Standard logger setup
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Constants
HISTORY_FILE = "logs/generation_history.json"
OUTPUT_DIR = "outputs"

# Unified logger instance
unified_logger = get_logger("logs")

# Manage the pipeline via global variables
txt2img_pipe = None
model_loaded = False

def setup_scheduler(pipe, scheduler_type="default"):
    """
    Configure the scheduler

    Args:
        pipe: StableDiffusionXLPipeline
        scheduler_type: scheduler type
            - "default": default
            - "DDIM": high quality, few steps
            - "DPMSolver": fast and high quality (recommended)
            - "Euler": stable results
            - "EulerA": more varied results
            - "LMS": classical method
            - "PNDM": default

    Returns:
        The configured scheduler
    """
    schedulers = {
        "DDIM": DDIMScheduler,
        "DPMSolver": DPMSolverMultistepScheduler,
        "Euler": EulerDiscreteScheduler,
        "EulerA": EulerAncestralDiscreteScheduler,
        "PNDM": PNDMScheduler
    }

    # LMS requires scipy, so register it only when available
    try:
        schedulers["LMS"] = LMSDiscreteScheduler
    except Exception:
        logger.warning("⚠️ LMS scheduler is unavailable (scipy required)")

    if scheduler_type != "default" and scheduler_type in schedulers:
        try:
            return schedulers[scheduler_type].from_config(pipe.scheduler.config)
        except ImportError as e:
            logger.warning(f"⚠️ {scheduler_type} scheduler is unavailable: {e}")
            return pipe.scheduler
    return pipe.scheduler

def setup_model():
    """Set up and optimize the model"""
    global txt2img_pipe, model_loaded

    if model_loaded:
        return True

    try:
        logger.info("🔧 Setting up the model...")

        # Check for a GPU
        if not torch.cuda.is_available():
            logger.error("❌ CUDA is unavailable. Check the GPU.")
            return False

        device = "cuda"
        logger.info(f"✅ Device: {device}")

        # Text-to-Image pipeline
        logger.info("📦 Loading the Text-to-Image pipeline...")
        txt2img_pipe = StableDiffusionXLPipeline.from_pretrained(
            "aipicasso/jagirl",
            torch_dtype=torch.float16,
            use_safetensors=True
        ).to(device)

        # Convert to FP16 after moving to the GPU
        try:
            txt2img_pipe = txt2img_pipe.to(dtype=torch.float16)
            logger.info("✅ Converted to FP16 mode")
        except Exception:
            logger.warning("⚠️ Skipping FP16 conversion; continuing in FP32")

        # Memory optimization (xformers is not used with CPU-only PyTorch)
        try:
            txt2img_pipe.enable_xformers_memory_efficient_attention()
            logger.info("✅ Enabled xFormers memory-efficient attention")
        except Exception as e:
            logger.warning(f"⚠️ xFormers disabled (CPU-only PyTorch in use): {e}")

        # CPU offload disabled (everything runs on the GPU)
        logger.info("🎯 Running in GPU-only mode")

        logger.info("✅ Model setup complete")
        model_loaded = True
        return True

    except Exception as e:
        logger.error(f"❌ Model setup failed: {e}")
        return False

def log_generation_details(prompt, negative_prompt, params, output_filepath, execution_time):
    """
    Record generation details (via the unified logger)

    Args:
        prompt: main prompt
        negative_prompt: negative prompt
        params: generation parameter dict
        output_filepath: file path of the generated image
        execution_time: execution time in seconds

    Returns:
        generation_id: unique ID of the generation record
    """
    try:
        generation_id = unified_logger.log_generation(
            prompt=prompt,
            negative_prompt=negative_prompt,
            parameters=params,
            output_filepath=output_filepath,
            execution_time=execution_time
        )

        logger.info(f"📝 Generation log recorded: {generation_id}")
        return generation_id

    except Exception as e:
        logger.error(f"❌ Failed to record the log: {e}")
        traceback.print_exc()
        return None

def load_generation_history():
    """Load the generation history (unified-logger format)"""
    try:
        if os.path.exists(HISTORY_FILE):
            with open(HISTORY_FILE, 'r', encoding='utf-8') as f:
                data = json.load(f)
                # Unified-logger format: {"generations": [...]}
                if isinstance(data, dict) and 'generations' in data:
                    generations = data['generations']
                    # Return the latest 10 entries
                    return generations[-10:] if len(generations) > 10 else generations
                # Legacy format (a plain list)
                elif isinstance(data, list):
                    return data[-10:]
                else:
                    return []
        return []
    except Exception as e:
        logger.error(f"Failed to load history: {e}")
        return []

def format_history_display():
    """Format the history for display (unified-logger format)"""
    history = load_generation_history()
    if not history:
        return "📝 No generation history"

    display_text = "## 📋 Recent Generation History (latest 10)\n\n"

    for i, entry in enumerate(reversed(history), 1):
        # Unified-logger fields
        gen_id = entry.get('generation_id', 'Unknown')
        timestamp = entry.get('timestamp', 'Unknown')
        prompt = entry.get('prompt', 'No prompt')
        # Truncate long prompts
        prompt_display = prompt[:50] + "..." if len(prompt) > 50 else prompt

        # Pull details out of the parameters
        params = entry.get('parameters', {})
        seed = params.get('seed', 'N/A')
        steps = params.get('num_inference_steps', 'N/A')

        # Result information
        result = entry.get('result', {})
        success = result.get('success', False)
        exec_time = result.get('execution_time_seconds', 0)

        status = "✅ Success" if success else "❌ Failed"

        display_text += f"### {i}. {status}\n"
        display_text += f"**ID:** {gen_id}\n"
        display_text += f"**Time:** {timestamp}\n"
        display_text += f"**Prompt:** {prompt_display}\n"
        display_text += f"**Seed:** {seed} | **Steps:** {steps}\n"
        display_text += f"**Execution:** {exec_time:.1f}s\n"
        display_text += "---\n"

    return display_text

def refresh_history():
    """Refresh the history display"""
    return format_history_display()

def generate_txt2img(prompt, negative_prompt="", num_images=1, steps=25, guidance=7.5, size=1024, seed=None, scheduler="default"):
    """
    Generate images from text (full parameter support)

    Args:
        prompt: main prompt
        negative_prompt: negative prompt
        num_images: number of images to generate
        steps: number of sampling steps (10-150)
        guidance: CFG scale / guidance strength (1-20)
        size: image size (512, 768, 1024)
        seed: seed value (None for random)
        scheduler: scheduler type

    Returns:
        List of generated images
    """
    global txt2img_pipe

    if not prompt.strip():
        return []

    if not model_loaded:
        if not setup_model():
            return []

    try:
        logger.info(f"🎨 Starting image generation: {prompt[:50]}...")

        start_time = time.time()

        # Seed setup (random when 0 or None)
        if seed is None or seed == 0:
            seed = random.randint(1, 2**32-1)

        generator = torch.Generator(device="cuda").manual_seed(seed)

        # Scheduler setup
        original_scheduler = txt2img_pipe.scheduler
        if scheduler != "default":
            txt2img_pipe.scheduler = setup_scheduler(txt2img_pipe, scheduler)

        # Parameter setup
        params = {
            "prompt": prompt,
            "negative_prompt": negative_prompt,
            "num_inference_steps": int(steps),
            "guidance_scale": float(guidance),
            "width": int(size),
            "height": int(size),
            "num_images_per_prompt": 1,
            "generator": generator
        }

        # Generate images (no autocast, same as test_high_quality_generation.py)
        result = txt2img_pipe(**params)

        # Restore the original scheduler
        if scheduler != "default":
            txt2img_pipe.scheduler = original_scheduler

        execution_time = time.time() - start_time

        # Save the images
        outputs_dir = Path("outputs")
        outputs_dir.mkdir(exist_ok=True)

        saved_paths = []
        for i, image in enumerate(result.images):
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            filename = f"txt2img_{timestamp}_seed{seed}_{i+1}.png"
            filepath = outputs_dir / filename
            image.save(filepath, quality=95)
            saved_paths.append(str(filepath))
            logger.info(f"💾 Image saved: {filepath}")

        # Record the log (via the unified logger)
        log_params = {
            "num_inference_steps": int(steps),
            "guidance_scale": float(guidance),
            "width": int(size),
            "height": int(size),
            "seed": seed,
            "scheduler_type": scheduler,
            "num_images": num_images,
            "torch_dtype": "float16",
            "mode": "txt2img"
        }

        log_generation_details(
            prompt=prompt,
            negative_prompt=negative_prompt,
            params=log_params,
            output_filepath=saved_paths[0] if saved_paths else "",
            execution_time=execution_time
        )

        logger.info(f"✅ Generation finished: {execution_time:.2f}s, {len(result.images)} image(s)")

        return result.images

    except Exception as e:
        logger.error(f"❌ Image generation failed: {e}")
        logger.error(traceback.format_exc())
        return []

def create_gradio_app():
    """Build the Gradio application"""

    # Create a custom color object (TV Asahi Blue)
    custom_blue = gr.themes.Color(
        c50="#f0f4ff",
        c100="#dbeafe",
        c200="#bfdbfe",
        c300="#93c5fd",
        c400="#60a5fa",
        c500="#284baf",  # main color
        c600="#1e40af",
        c700="#1d4ed8",
        c800="#1e3a8a",
        c900="#1e3a8a",
        c950="#172554"
    )

    # Build the main UI
    with gr.Blocks(
        title="Jagirl",
        theme=gr.themes.Default(primary_hue=custom_blue)
    ) as demo:

        # Main tabs
        with gr.Tabs() as tabs:

            # Text-to-Image tab
            with gr.TabItem("Text to Image", id="txt2img"):

                with gr.Row():
                    with gr.Column(scale=2):
                        txt_prompt = gr.Textbox(
                            label="Prompt / プロンプト",
                            placeholder="Enter your prompt | 高品質なアニメ風の美しい女性の画像を生成するプロンプトを入力",
                            lines=3,
                            max_lines=5
                        )

                        txt_negative_prompt = gr.Textbox(
                            label="Negative Prompt / ネガティブプロンプト",
                            value="(worst quality, low quality:1.4), (illustration, 3d, 2d, painting, cartoons, sketch:1.3), (monochrome, grayscale:1.2), teeth, open mouth, (bad hands, bad fingers, deformed hands, mutated fingers:1.3), watermark, signature, text, logo, extra limbs, malformed limbs, poorly drawn face, poorly drawn hands, mutation, deformed, bad anatomy, bad proportions, duplicate, cropped, jpeg artifacts, blurry, out of focus, oversaturated, artificial lighting",
                            lines=3,
                            max_lines=5
                        )

                        with gr.Accordion("Advanced Settings / 詳細設定", open=True):
                            txt_step = gr.Slider(
                                minimum=10, maximum=150, value=25, step=5,
                                label="Sampling Steps / サンプリングステップ数 (推奨: 20-40)"
                            )
                            txt_guidance = gr.Slider(
                                minimum=3.0, maximum=15.0, value=7.5, step=0.5,
                                label="CFG Scale / ガイダンス強度 (推奨: 7-10)"
                            )
                            # Image size is fixed at 1024x1024 (not shown in the UI)
                            # Example supported resolutions: 512x512, 768x768, 1024x1024, 1280x1280, 1536x1536
                            txt_size = 1024  # fixed value
                            txt_seed = gr.Number(
                                label="Seed (空欄でランダム)",
                                value=None,
                                precision=0
                            )
                            txt_scheduler = gr.Dropdown(
                                choices=["default", "DDIM", "DPMSolver", "Euler", "EulerA", "LMS", "PNDM"],
                                value="default",
                                label="Scheduler / スケジューラー (推奨: DPMSolver)"
                            )

                        txt_generate_btn = gr.Button(
                            "🎨 画像生成開始",
                            variant="primary",
                            size="lg"
                        )

                    with gr.Column(scale=3):
                        txt_gallery = gr.Image(
                            label="Generated Image / 生成された画像",
                            type="pil",
                            interactive=False,
                            show_label=True,
                            show_download_button=True,
                            container=True,
                            height=None,
                            width=None
                        )

        # System information (public)
        with gr.Accordion("System Information / システム情報", open=False):
            gr.HTML(f"""
            <div style="padding: 10px;">
                <p><strong>Model:</strong> aipicasso/jagirl (Stable Diffusion XL)</p>
                <p><strong>Framework:</strong> PyTorch {torch.__version__}</p>
                <p><strong>Device:</strong> {'GPU (CUDA ' + torch.version.cuda + ')' if torch.cuda.is_available() else 'CPU'}</p>
                <p><strong>Diffusers:</strong> Latest</p>
                <p><strong>Default Resolution:</strong> 1024x1024</p>
            </div>
            """)

        # Wrapper for image generation (prevents double invocation)
        def generate_single_image(prompt, neg_prompt, step, guidance, seed, scheduler):
            result = generate_txt2img(prompt, neg_prompt, 1, step, guidance, txt_size, seed, scheduler)
            return result[0] if result else None

        # Event binding
        txt_generate_btn.click(
            fn=generate_single_image,
            inputs=[txt_prompt, txt_negative_prompt, txt_step, txt_guidance, txt_seed, txt_scheduler],
            outputs=txt_gallery,
            show_progress=True
        )

    return demo

def main():
    """Main application"""
    logger.info("🚀 Starting Miragic AI Image Generator...")

    # Create the required directories
    Path("outputs").mkdir(exist_ok=True)
    Path("logs").mkdir(exist_ok=True)

    # Build the Gradio application
    demo = create_gradio_app()

    # Launch the application
    logger.info("🌐 Launching the web application...")
    demo.launch(
        server_name="127.0.0.1",
        server_port=7860,
        share=False,
        show_error=True,
        quiet=False,
        inbrowser=True  # open the browser automatically
    )

if __name__ == "__main__":
    main()
gradio_ui/.gitignore
ADDED
@@ -0,0 +1,42 @@
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv

# UV lock file
uv.lock

# Project-specific files
reference/

# IDE and development tools
.serena/
.github/
.github/prompts/

# Unused image files (keep only: j_channel.svg, J_channel.svg, tvasahi.svg)
imgs/j_channel.eps
imgs/j_channel.png
imgs/J_channel.psd
imgs/J_channel_large.png
imgs/pc_main.jpg
imgs/pc_main.psd

# Other unused image formats
*.webp
*.bmp
*.tiff
*.ico

# Unused Python files
# (Add specific .py files here as they become unused)

# documentation files
依頼内容.md
仕様書.md
gradio_ui/.python-version
ADDED
@@ -0,0 +1 @@
3.11.12
gradio_ui/README.md
ADDED
@@ -0,0 +1,130 @@
# テレビ朝日 スーパーJチャンネル - AI画像生成システムUI

## 概要

Gradioを使用したテレビ朝日 スーパーJチャンネル向けのAI画像生成UI。

### スクリーンショット

- Text to Image

  

- Image to Image

  

## 起動方法

### UV使用

```bash
# UVでの起動
uv run python app.py
```

### Python環境

```bash
# 依存関係インストール
pip install -r requirements.txt

# アプリケーション起動
python app.py
```

起動後、ブラウザで `http://localhost:7860` にアクセス

## 機能

### Text-to-Image タブ

- テキストプロンプトから最大4枚の画像生成
- Advanced Settings(Step、Guidance Scale、Size等)
- 2x2グリッドでの結果表示
- ダウンロード機能付きギャラリー

### Image-to-Image タブ

- 参考画像 + テキストプロンプトで画像生成
- Prompt Strength(参考画像の影響度)調整
- アップロード画像のプレビュー
- 同じAdvanced Settings対応

### 共通機能

- テレビ朝日・Jチャンネルのロゴ表示(#284baf カラー)
- 最小限のデザイン(シンプルさ最優先)
- Base64インライン画像埋め込み
- 日本語・英語併記のUI

## 技術仕様

- **フレームワーク**: Gradio >= 5.48.0
- **環境管理**: UV(Python 3.11.12)
- **レイアウト**: gr.Blocks + カスタムテーマ
- **色設計**: #284baf統一カラー(ボタン・タブ)
- **依存関係**: gradio, numpy, pillow

### 画像生成関数の実装

現在はダミー処理。実際のAIモデルに置き換える場合:

```python
def generate_txt2img(prompt, num_images=4):
    """
    テキストから画像生成

    Args:
        prompt (str): 生成プロンプト
        num_images (int): 生成枚数(1-4枚)

    Returns:
        list: 生成された画像のリスト

    Note:
        実際のAI画像生成モデル(Stable Diffusion, DALL-E等)に置き換える際は、
        Advanced Settingsのパラメータ(step, guidance, size等)も
        引数として追加し、モデルに渡してください。
    """
    # 例: モデルにパラメータを渡す場合
    # return model.generate(prompt, num_images=num_images, steps=step, guidance_scale=guidance, size=size)
    pass

def generate_img2img(prompt, reference_image, num_images=4):
    """
    参考画像+テキストから画像生成

    Args:
        prompt (str): 生成プロンプト
        reference_image: 参考画像(PIL Image)
        num_images (int): 生成枚数(1-4枚)

    Returns:
        list: 生成された画像のリスト
    """
    pass
```

### ログ機能

`logs/` フォルダを活用してユーザー操作をトラッキング

## 📁 ファイル構成

```text
gradio_ui_asahi/
├── app.py               # メインアプリケーション
├── requirements.txt     # 依存関係
├── pyproject.toml       # UV設定
├── imgs/                # ロゴ画像
│   ├── tvasahi.svg      # TVasahiロゴ(フッター用)
│   └── j_channel.svg    # Jチャンネルロゴ(ヘッダー用)
├── logs/                # ログファイル(将来使用)
└── README.md            # このファイル
```

## 設計原則

- **統一感**: #284baf カラーでブランディング統一

## 更新履歴

- **v1.0**: 初期UI実装(txt2img/img2img)
- **v1.1**: Advanced Settings 追加
- **v1.2**: カラーテーマ統一(#284baf)
gradio_ui/app.py
ADDED
@@ -0,0 +1,220 @@
import gradio as gr
import os
import base64

# ダミー画像生成関数(実際のモデルは未実装)
def generate_txt2img(prompt, num_images=4):
    """テキストから画像生成(ダミー処理)"""
    if not prompt.strip():
        return []

    # 実際の実装では、ここでAI画像生成モデルを呼び出し
    # 現在はダミー画像を返す(グレーのデフォルト)
    dummy_images = []
    for i in range(num_images):
        # プレースホルダー画像のパスを生成
        dummy_path = f"https://via.placeholder.com/512x512?text=Generated+Image+{i+1}"
        dummy_images.append(dummy_path)

    return dummy_images

def generate_img2img(prompt, reference_image, num_images=4):
    """参考画像+テキストから画像生成(ダミー処理)"""
    if not prompt.strip():
        return []

    if reference_image is None:
        return generate_txt2img(prompt, num_images)

    # 実際の実装では、参考画像とプロンプトを使用してAI生成
    dummy_images = []
    for i in range(num_images):
        dummy_path = f"https://via.placeholder.com/512x512?text=Img2Img+Result+{i+1}"
        dummy_images.append(dummy_path)

    return dummy_images

# Gradioテーマシステムを使用してボタンとタブの色を統一

# カスタムカラーオブジェクトを作成
custom_blue = gr.themes.Color(
    c50="#f0f4ff",
    c100="#dbeafe",
    c200="#bfdbfe",
    c300="#93c5fd",
    c400="#60a5fa",
    c500="#284baf",  # メインの色
    c600="#1e40af",
    c700="#1d4ed8",
    c800="#1e3a8a",
    c900="#1e3a8a",
    c950="#172554"
)

# メインUI構築
with gr.Blocks(
    title="TV Asahi J Channel Image UI",
    theme=gr.themes.Default(primary_hue=custom_blue),
    css=".header-bg { background-color: #284baf; } .footer-bg { background-color: white; justify-content: center; display: flex; align-items: center; }"
) as demo:

    # ヘッダー部分(j_channel.svgを中央配置)
    with gr.Row(elem_classes=["header-bg"], equal_height=True):
        try:
            with open("imgs/j_channel.svg", 'rb') as f:
                b64 = base64.b64encode(f.read()).decode()
            gr.HTML(
                '<div style="width:100%;display:flex;justify-content:center;align-items:center;">'
                f'<img src="data:image/svg+xml;base64,{b64}" style="height:150px;aspect-ratio:auto;display:block;" alt="J Channel logo" />'
                '</div>'
            )
        except OSError:
            gr.HTML('<span>Missing J Channel logo</span>')

    # メインタブ
    with gr.Tabs() as tabs:

        # Text-to-Image タブ
        with gr.TabItem("Text to Image", id="txt2img"):

            with gr.Row():
                with gr.Column(scale=2):
                    txt_prompt = gr.Textbox(
                        label="Prompt / プロンプト",
                        placeholder="Enter your prompt | モデルに入力するプロンプトをここに入力",
                        lines=1,
                        max_lines=1
                    )

                    with gr.Accordion("Advanced Settings", open=False):
                        txt_num_images = gr.Slider(
                            minimum=1, maximum=4, value=4, step=1,
                            label="Number of Images / 生成枚数"
                        )
                        txt_step = gr.Slider(
                            minimum=2, maximum=50, value=12, step=1,
                            label="Step"
                        )
                        txt_guidance = gr.Slider(
                            minimum=0, maximum=20, value=7.5, step=0.5,
                            label="Guidance Scale (プロンプトの反映力)"
                        )
                        txt_prompt_strength = gr.Slider(
                            minimum=0, maximum=1, value=0.8, step=0.05,
                            label="Prompt Strength (画像入力時のみ)",
                            interactive=False
                        )
                        txt_size = gr.Slider(
                            minimum=512, maximum=2048, value=1024, step=64,
                            label="Size (px)"
                        )

                    txt_generate_btn = gr.Button(
                        "画像生成開始",
                        variant="primary"
                    )

                with gr.Column(scale=3):
                    txt_gallery = gr.Gallery(
                        label="生成された画像",
                        columns=2,
                        rows=2,
                        height="auto",
                        show_download_button=True,
                        object_fit="contain"
                    )

        # Image-to-Image タブ
        with gr.TabItem("Image to Image", id="img2img"):

            with gr.Row():
                with gr.Column(scale=2):
                    # 参考画像入力
                    img_reference = gr.Image(
                        label="参考画像",
                        type="pil"
                    )

                    img_prompt = gr.Textbox(
                        label="Prompt / プロンプト",
                        placeholder="Enter your prompt | モデルに入力するプロンプトをここに入力",
                        lines=1,
                        max_lines=1
                    )

                    with gr.Accordion("Advanced Settings", open=False):
                        img_num_images = gr.Slider(
                            minimum=1, maximum=4, value=4, step=1,
                            label="Number of Images / 生成枚数"
                        )
                        img_step = gr.Slider(
                            minimum=2, maximum=50, value=12, step=1,
                            label="Step"
                        )
                        img_guidance = gr.Slider(
                            minimum=0, maximum=20, value=7.5, step=0.5,
                            label="Guidance Scale (プロンプトの反映力)"
                        )
                        img_prompt_strength = gr.Slider(
                            minimum=0, maximum=1, value=0.8, step=0.05,
                            label="Prompt Strength (画像入力時のみ)"
                        )
                        img_size = gr.Slider(
                            minimum=512, maximum=2048, value=1024, step=64,
                            label="Size (px)"
                        )

                    img_generate_btn = gr.Button(
                        "画像生成開始",
                        variant="primary"
                    )

                with gr.Column(scale=3):
                    img_gallery = gr.Gallery(
                        label="生成された画像",
                        columns=2,
                        rows=2,
                        height="auto",
                        show_download_button=True,
                        object_fit="contain"
                    )

    # イベントバインディング
    txt_generate_btn.click(
        fn=generate_txt2img,
        inputs=[txt_prompt, txt_num_images],
        outputs=txt_gallery,
        show_progress=True
    )

    img_generate_btn.click(
        fn=generate_img2img,
        inputs=[img_prompt, img_reference, img_num_images],
        outputs=img_gallery,
        show_progress=True
    )

    # フッター部分(ロゴのみ表示 / 背景白 / 中心位置 / サイズ半分)
    with gr.Row(elem_classes=["footer-bg"]):
        try:
            with open("imgs/tvasahi.svg", 'rb') as f:
                b64 = base64.b64encode(f.read()).decode()
            gr.HTML(
                '<div style="width:100%;display:flex;justify-content:center;align-items:center;">'
                f'<img src="data:image/svg+xml;base64,{b64}" style="height:30px;aspect-ratio:auto;display:block;" alt="TV Asahi logo" />'
                '</div>'
            )
        except OSError:
            gr.HTML('<span>Missing TV Asahi logo</span>')

# 起動設定
if __name__ == "__main__":
    demo.launch(
        server_name=None,
        server_port=7860,
        share=False,
        show_error=True,
        quiet=False
    )
gradio_ui/imgs/J_channel.svg
ADDED

gradio_ui/imgs/tvasahi.svg
ADDED
gradio_ui/main.py
ADDED
@@ -0,0 +1,6 @@
def main():
    print("Hello from gradio UI!")


if __name__ == "__main__":
    main()
gradio_ui/pyproject.toml
ADDED
@@ -0,0 +1,11 @@
[project]
name = "gradio-ui-asahi"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11.12"
dependencies = [
    "gradio>=5.48.0",
    "numpy>=2.3.3",
    "pillow>=11.3.0",
]
gradio_ui/requirements.txt
ADDED
@@ -0,0 +1,4 @@
# Required libraries (consistent with pyproject.toml)
gradio>=5.48.0
numpy>=2.3.3
pillow>=11.3.0
prompt_base.txt
ADDED
@@ -0,0 +1,23 @@
Main Prompt:
xxmixgirl, 1girl, solo, photorealistic, RAW photo, best quality, ultra high res, 8k uhd, professional photography, sharp focus, natural lighting, beach photography,

Japanese woman, early 20s, black hair, medium length hair, wispy bangs, brown eyes, detailed eyes, beautiful detailed face, pale skin, natural makeup, soft lips,

white bikini top with lace trim, blue and white striped bikini bottom, white cardigan, off-shoulder,

standing on sandy beach, ocean background, blurred beach background, bokeh, depth of field, soft natural sunlight, daytime, cloudy sky,

perfect anatomy, detailed skin texture, skin pores, realistic proportions, curvy body, natural pose, looking at viewer, subtle smile, professional model pose


Negative Prompt:
(worst quality, low quality:1.4), (illustration, 3d, 2d, painting, cartoons, sketch:1.3), (monochrome, grayscale:1.2), teeth, open mouth, (bad hands, bad fingers, deformed hands, mutated fingers:1.3), watermark, signature, text, logo, extra limbs, malformed limbs, poorly drawn face, poorly drawn hands, mutation, deformed, bad anatomy, bad proportions, duplicate, cropped, jpeg artifacts, blurry, out of focus, oversaturated, artificial lighting




xxmixgirl, 1girl, black hair, brown eyes, face, beach, huge breast, white background, high quality

xxmixgirl, 1girl, black hair, brown eyes, face, i, kyoto city background, high quality, sunny
pyproject.toml
ADDED
@@ -0,0 +1,53 @@
[project]
name = "jagirl-ui"
version = "0.1.0"
description = "SDXL-based anime girl image generation using aipicasso/jagirl model with Gradio UI"
readme = "README.md"
keywords = ["ai", "image-generation", "stable-diffusion", "sdxl", "anime", "gradio", "huggingface"]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Intended Audience :: Developers",
    "Topic :: Scientific/Engineering :: Artificial Intelligence",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.11",
]
requires-python = ">=3.11"
dependencies = [
    "huggingface-hub>=0.35.3",
    "diffusers>=0.35.2",
    "numpy>=2.3.4",
    "scipy>=1.11.0",
    "gradio>=4.0.0",
    "transformers>=4.30.0",
    "accelerate>=0.20.0",
    "pillow>=10.0.0",
]

[project.optional-dependencies]
gpu = [
    "xformers>=0.0.20",
]

# ⚠️ 重要: PyTorchは依存関係に含めていません ⚠️
# 理由: pip install -e . でCPU版に上書きされる問題を防ぐため
#
# 【必須】CUDA版PyTorchの手動インストール手順:
# 1. 仮想環境をアクティベート
# 2. 以下を個別に実行(一括ではなく1つずつ):
#    pip install torch --index-url https://download.pytorch.org/whl/cu121 --no-cache-dir
#    pip install torchvision --index-url https://download.pytorch.org/whl/cu121
#    pip install torchaudio --index-url https://download.pytorch.org/whl/cu121
# 3. インストール確認:
#    python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
#
# 参考: 20251017_全ログ.md - torch一括インストールで20分以上フリーズした実績あり

[build-system]

[tool.setuptools]
# パッケージ自動検出を無効化(utilsのみを明示的にインストール)
packages = ["utils"]

[tool.setuptools.package-data]
# データファイルを除外(logs, outputs, gradio_uiはプロジェクトディレクトリとして扱う)
"*" = []
utils/download_hugginface_repo.py
ADDED
@@ -0,0 +1,34 @@
"""Hugging Face モデルダウンロードスクリプト.

環境変数 ``HF_TOKEN`` もしくは ``HUGGINGFACEHUB_API_TOKEN`` にセットされた
トークンでログインしてから Stable Diffusion XL パイプラインを取得する。
"""

from huggingface_hub import login
from diffusers import StableDiffusionXLPipeline
import torch
import os


def main() -> None:
    token = os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACEHUB_API_TOKEN")
    if token:
        login(token=token)
    else:
        print("⚠️ 環境変数に Hugging Face トークンが設定されていません。公開モデルのみアクセス可能です。")

    print("🔧 aipicasso/emix-0-5 モデルをダウンロード中...")
    print("📝 モデル情報: Conterfeit XL v2.5 + Animagine v2.0 + Emix 0.4")

    StableDiffusionXLPipeline.from_pretrained(
        "aipicasso/emix-0-5",
        torch_dtype=torch.float16,
        use_safetensors=True
    )

    print("✅ ダウンロード完了!")
    print("📂 キャッシュ場所: ~/.cache/huggingface/hub/")


if __name__ == "__main__":
    main()
utils/logger.py
ADDED
@@ -0,0 +1,329 @@
"""
統一ログ機能モジュール
画像生成の全パラメータと結果を包括的に記録する統一ログシステム

設計原則:
- 生成に使用された全パラメータの記録
- 生成画像との確実な紐づけ
- JSON形式での構造化データ保存
- 検索・分析しやすい形式
- パフォーマンス情報の詳細記録
"""

import json
import os
import time
from datetime import datetime
from typing import Dict, Any, Optional

import torch
from PIL import Image


class UnifiedLogger:
    """統一ログ機能クラス"""

    def __init__(self, log_dir: str = "logs"):
        """
        ログ機能の初期化

        Args:
            log_dir: ログファイルを保存するディレクトリ
        """
        self.log_dir = log_dir
        self.json_log_file = os.path.join(log_dir, "generation_history.json")

        # ログディレクトリ作成
        os.makedirs(log_dir, exist_ok=True)

        # 既存ログの読み込み
        self._load_existing_logs()

    def _load_existing_logs(self):
        """既存のログデータを読み込み"""
        if os.path.exists(self.json_log_file):
            try:
                with open(self.json_log_file, 'r', encoding='utf-8') as f:
                    self.log_data = json.load(f)
            except (json.JSONDecodeError, FileNotFoundError):
                self.log_data = {"metadata": self._create_metadata(), "generations": []}
        else:
            self.log_data = {"metadata": self._create_metadata(), "generations": []}

    def _create_metadata(self) -> Dict[str, Any]:
        """ログファイルのメタデータを作成"""
        return {
            "format_version": "2.0",
            "created_at": datetime.now().isoformat(),
            "last_updated": datetime.now().isoformat(),
            "description": "Unified generation history for jagirl UI project",
            "model_info": {
                "model_name": "aipicasso/jagirl",
                "model_type": "StableDiffusionXL",
                "specialized_for": "Japanese female faces"
            },
            "log_schema": {
                "timestamp": "ISO format timestamp",
                "generation_id": "Unique identifier for each generation",
                "prompts": "All text prompts used",
                "parameters": "Complete parameter set used for generation",
                "output": "Generated image information",
                "performance": "Execution metrics",
                "system_info": "Hardware and software environment"
            }
        }

    def _get_image_info(self, filepath: str) -> Dict[str, Any]:
        """画像ファイルの詳細情報を取得"""
        if not os.path.exists(filepath):
            return {"error": "File not found"}

        try:
            # ファイルサイズ
            file_size_bytes = os.path.getsize(filepath)
            file_size_mb = round(file_size_bytes / (1024 * 1024), 3)

            # 画像情報
            with Image.open(filepath) as img:
                width, height = img.size
                mode = img.mode
                format_type = img.format

            return {
                "filepath": os.path.abspath(filepath),
                "file_url": f"file:///{os.path.abspath(filepath).replace(os.sep, '/')}",
                "filename": os.path.basename(filepath),
                "file_size_bytes": file_size_bytes,
                "file_size_mb": file_size_mb,
                "image_width": width,
                "image_height": height,
                "image_mode": mode,
                "image_format": format_type,
                "created_at": datetime.fromtimestamp(os.path.getctime(filepath)).isoformat()
            }
        except Exception as e:
            return {"error": f"Failed to get image info: {str(e)}"}

    def _get_system_info(self) -> Dict[str, Any]:
        """システム情報を取得"""
        system_info = {
            "python_version": None,
            "torch_version": None,
            "cuda_available": False,
            "cuda_version": None,
            "gpu_name": None,
            "vram_total_gb": 0,
            "vram_allocated_gb": 0
        }

        try:
            import sys
            system_info["python_version"] = sys.version.split()[0]

            system_info["torch_version"] = torch.__version__
            system_info["cuda_available"] = torch.cuda.is_available()

            if torch.cuda.is_available():
                system_info["cuda_version"] = torch.version.cuda
                system_info["gpu_name"] = torch.cuda.get_device_name(0)
                system_info["vram_total_gb"] = round(
                    torch.cuda.get_device_properties(0).total_memory / (1024**3), 2
                )
                system_info["vram_allocated_gb"] = round(
                    torch.cuda.memory_allocated(0) / (1024**3), 2
                )
        except Exception as e:
            system_info["error"] = f"Failed to get system info: {str(e)}"

        return system_info

    def log_generation(
        self,
        prompt: str,
        negative_prompt: str = "",
        parameters: Dict[str, Any] = None,
        output_filepath: str = "",
        execution_time: float = 0.0,
        additional_info: Dict[str, Any] = None
    ) -> str:
        """
        画像生成の完全なログを記録

        Args:
            prompt: メインプロンプト
            negative_prompt: ネガティブプロンプト
            parameters: 生成に使用された全パラメータ
            output_filepath: 生成された画像ファイルパス
            execution_time: 実行時間(秒)
            additional_info: 追加情報

        Returns:
            generation_id: 生成された記録のユニークID
        """

        # ユニークIDの生成
        timestamp = datetime.now()
        generation_id = f"gen_{timestamp.strftime('%Y%m%d_%H%M%S')}_{int(time.time() * 1000) % 100000}"

        # デフォルトパラメータの設定
        if parameters is None:
            parameters = {}

        # 完全なパラメータセットの作成
        complete_parameters = {
            # 基本パラメータ
            "num_inference_steps": parameters.get("num_inference_steps", 20),
            "guidance_scale": parameters.get("guidance_scale", 7.5),
            "width": parameters.get("width", 1024),
            "height": parameters.get("height", 1024),
            "seed": parameters.get("seed", None),

            # スケジューラー関連
            "scheduler_type": parameters.get("scheduler_type", "default"),
            "eta": parameters.get("eta", 0.0),

            # 画像生成関連
            "num_images": parameters.get("num_images", 1),
            "batch_size": parameters.get("batch_size", 1),

            # モデル関連
            "torch_dtype": str(parameters.get("torch_dtype", "float16")),
            "enable_xformers": parameters.get("enable_xformers", False),
            "enable_cpu_offload": parameters.get("enable_cpu_offload", False),

            # その他のパラメータ
            **{k: v for k, v in parameters.items() if k not in [
                "num_inference_steps", "guidance_scale", "width", "height",
                "seed", "scheduler_type", "eta", "num_images", "batch_size",
                "torch_dtype", "enable_xformers", "enable_cpu_offload"
            ]}
        }

        # ログエントリの作成
        log_entry = {
            "generation_id": generation_id,
            "timestamp": timestamp.isoformat(),
            "prompts": {
                "main_prompt": prompt,
                "negative_prompt": negative_prompt,
                "prompt_length": len(prompt),
                "negative_prompt_length": len(negative_prompt)
            },
            "parameters": complete_parameters,
            "output": self._get_image_info(output_filepath) if output_filepath else {},
            "performance": {
                "execution_time_seconds": round(execution_time, 3),
                "estimated_speed_sec_per_step": round(
                    execution_time / max(complete_parameters.get("num_inference_steps", 1), 1), 3
                ) if execution_time > 0 else 0
            },
            "system_info": self._get_system_info(),
            "additional_info": additional_info or {}
        }

        # ログに追加
        self.log_data["generations"].append(log_entry)
        self.log_data["metadata"]["last_updated"] = timestamp.isoformat()

        # ファイルに保存
        self._save_logs()

        return generation_id

    def _save_logs(self):
        """ログをファイルに保存"""
        try:
            with open(self.json_log_file, 'w', encoding='utf-8') as f:
                json.dump(self.log_data, f, ensure_ascii=False, indent=2)
        except Exception as e:
            print(f"ログ保存エラー: {e}")

    def get_generation_by_id(self, generation_id: str) -> Optional[Dict[str, Any]]:
        """generation_idで特定の生成記録を取得"""
        for generation in self.log_data["generations"]:
            if generation["generation_id"] == generation_id:
                return generation
        return None

    def get_recent_generations(self, count: int = 10) -> list:
        """最近の生成記録を取得"""
        return self.log_data["generations"][-count:] if self.log_data["generations"] else []

    def search_by_prompt(self, search_term: str, case_sensitive: bool = False) -> list:
        """プロンプトで検索"""
        results = []
        search_term = search_term if case_sensitive else search_term.lower()

        for generation in self.log_data["generations"]:
            main_prompt = generation["prompts"]["main_prompt"]
            if not case_sensitive:
                main_prompt = main_prompt.lower()

            if search_term in main_prompt:
                results.append(generation)

        return results

    def get_statistics(self) -> Dict[str, Any]:
        """生成統計を取得"""
        generations = self.log_data["generations"]

        if not generations:
            return {"total_generations": 0}

        total_time = sum(g["performance"]["execution_time_seconds"] for g in generations)
        avg_time = total_time / len(generations)

        schedulers = {}
        for g in generations:
            scheduler = g["parameters"].get("scheduler_type", "unknown")
|
| 279 |
+
schedulers[scheduler] = schedulers.get(scheduler, 0) + 1
|
| 280 |
+
|
| 281 |
+
return {
|
| 282 |
+
"total_generations": len(generations),
|
| 283 |
+
"total_execution_time_hours": round(total_time / 3600, 2),
|
| 284 |
+
"average_execution_time_seconds": round(avg_time, 2),
|
| 285 |
+
"scheduler_usage": schedulers,
|
| 286 |
+
"date_range": {
|
| 287 |
+
"first": generations[0]["timestamp"],
|
| 288 |
+
"last": generations[-1]["timestamp"]
|
| 289 |
+
}
|
| 290 |
+
}
|
| 291 |
+
|
| 292 |
+
def cleanup_old_logs(self, keep_days: int = 30):
|
| 293 |
+
"""古いログエントリを削除"""
|
| 294 |
+
cutoff_date = datetime.now().timestamp() - (keep_days * 24 * 3600)
|
| 295 |
+
|
| 296 |
+
original_count = len(self.log_data["generations"])
|
| 297 |
+
self.log_data["generations"] = [
|
| 298 |
+
g for g in self.log_data["generations"]
|
| 299 |
+
if datetime.fromisoformat(g["timestamp"]).timestamp() > cutoff_date
|
| 300 |
+
]
|
| 301 |
+
|
| 302 |
+
removed_count = original_count - len(self.log_data["generations"])
|
| 303 |
+
|
| 304 |
+
if removed_count > 0:
|
| 305 |
+
self._save_logs()
|
| 306 |
+
print(f"古いログエントリ {removed_count} 件を削除しました")
|
| 307 |
+
|
| 308 |
+
return removed_count
|
| 309 |
+
|
| 310 |
+
|
| 311 |
+
# グローバルロガーインスタンス
|
| 312 |
+
_global_logger = None
|
| 313 |
+
|
| 314 |
+
def get_logger(log_dir: str = "logs") -> UnifiedLogger:
|
| 315 |
+
"""グローバルロガーインスタンスを取得"""
|
| 316 |
+
global _global_logger
|
| 317 |
+
if _global_logger is None:
|
| 318 |
+
_global_logger = UnifiedLogger(log_dir)
|
| 319 |
+
return _global_logger
|
| 320 |
+
|
| 321 |
+
def log_generation(**kwargs) -> str:
|
| 322 |
+
"""グローバルロガーを使用して生成をログ"""
|
| 323 |
+
logger = get_logger()
|
| 324 |
+
return logger.log_generation(**kwargs)
|
| 325 |
+
|
| 326 |
+
def get_statistics() -> Dict[str, Any]:
|
| 327 |
+
"""生成統計を取得"""
|
| 328 |
+
logger = get_logger()
|
| 329 |
+
return logger.get_statistics()
|
utils/migrate_logs.py
ADDED
@@ -0,0 +1,173 @@
"""
ログ移行スクリプト
既存のgeneration_history.jsonを新しい統一ログフォーマットに移行

実行方法:
python utils/migrate_logs.py
"""

import json
import os
import sys
from datetime import datetime
from pathlib import Path

# プロジェクトルートをPythonパスに追加
project_root = Path(__file__).parent.parent
sys.path.insert(0, str(project_root / "utils"))

from unified_logger import UnifiedLogger


def migrate_old_logs():
    """既存のログを新しいフォーマットに移行"""

    old_log_file = "logs/generation_history.json"
    backup_file = "logs/generation_history_backup.json"

    if not os.path.exists(old_log_file):
        print("❌ 既存のログファイルが見つかりません")
        return

    # バックアップ作成
    with open(old_log_file, 'r', encoding='utf-8') as f:
        old_data = json.load(f)

    with open(backup_file, 'w', encoding='utf-8') as f:
        json.dump(old_data, f, ensure_ascii=False, indent=2)

    print(f"✅ 既存ログをバックアップしました: {backup_file}")

    # 統一ログ機能で移行
    logger = UnifiedLogger()

    migration_count = 0

    if "generations" in old_data:
        for old_entry in old_data["generations"]:
            try:
                # 旧フォーマットから新フォーマットへの変換
                old_params = old_entry.get("parameters", {})
                old_output = old_entry.get("output", {})
                old_performance = old_entry.get("performance", {})

                # パラメータの変換
                new_params = {
                    "num_inference_steps": old_params.get("steps", 20),
                    "guidance_scale": old_params.get("cfg_scale", 7.5),
                    "width": old_params.get("width", 1024),
                    "height": old_params.get("height", 1024),
                    "seed": old_params.get("seed", None),
                    "scheduler_type": old_params.get("scheduler", "default"),
                    "eta": 0.0,  # 旧ログにはない
                    "torch_dtype": "float16",  # 推定値
                    "enable_xformers": True,  # 推定値
                    "enable_cpu_offload": False  # 推定値
                }

                # 追加情報
                additional_info = {
                    "migrated_from": "generation_history.json",
                    "migration_date": datetime.now().isoformat(),
                    "original_timestamp": old_entry.get("timestamp", "")
                }

                # 新しいログエントリとして記録
                generation_id = logger.log_generation(
                    prompt=old_entry.get("prompt", ""),
                    negative_prompt=old_entry.get("negative_prompt", ""),
                    parameters=new_params,
                    output_filepath=old_output.get("filepath", ""),
                    execution_time=old_performance.get("execution_time_seconds", 0),
                    additional_info=additional_info
                )

                migration_count += 1
                print(f"✅ 移行完了: {generation_id}")

            except Exception as e:
                print(f"⚠️ エントリの移行に失敗: {e}")
                continue

    print(f"\n🎉 移行完了: {migration_count} エントリを新しいフォーマットに移行しました")

    # 統計表示
    stats = logger.get_statistics()
    print(f"📊 総生成数: {stats['total_generations']} 枚")
    print(f"📊 総実行時間: {stats['total_execution_time_hours']} 時間")
    print(f"📊 平均実行時間: {stats['average_execution_time_seconds']} 秒")

    # 旧ファイルをリネーム
    old_archived = "logs/generation_history_old.json"
    os.rename(old_log_file, old_archived)
    print(f"📦 旧ログファイルをアーカイブしました: {old_archived}")


def test_new_logger():
    """新しいログ機能のテスト"""
    print("\n🧪 新しいログ機能のテスト...")

    logger = UnifiedLogger()

    # テストエントリ
    test_params = {
        "num_inference_steps": 25,
        "guidance_scale": 7.5,
        "width": 1024,
        "height": 1024,
        "seed": 12345,
        "scheduler_type": "DPMSolver",
        "eta": 0.0,
        "torch_dtype": "float16",
        "enable_xformers": True,
        "enable_cpu_offload": False
    }

    test_info = {
        "test_mode": True,
        "description": "ログ機能テスト"
    }

    generation_id = logger.log_generation(
        prompt="test prompt for new logging system",
        negative_prompt="low quality, test",
        parameters=test_params,
        output_filepath="",  # テストなのでファイルなし
        execution_time=45.67,
        additional_info=test_info
    )

    print(f"✅ テストエントリ作成: {generation_id}")

    # 作成したエントリを取得してテスト
    entry = logger.get_generation_by_id(generation_id)
    if entry:
        print("✅ エントリ取得テスト成功")
        print(f"  プロンプト: {entry['prompts']['main_prompt'][:50]}...")
        print(f"  実行時間: {entry['performance']['execution_time_seconds']} 秒")
        print(f"  パラメータ数: {len(entry['parameters'])} 個")

    # 検索テスト
    search_results = logger.search_by_prompt("test prompt")
    print(f"✅ 検索テスト: {len(search_results)} 件ヒット")

    print("🎉 新しいログ機能のテスト完了!")


if __name__ == "__main__":
    print("🔄 ログ移行プロセスを開始...")

    try:
        migrate_old_logs()
        test_new_logger()

        print("\n" + "=" * 60)
        print("🎊 ログ移行が正常に完了しました!")
        print("📄 新しいログファイル: logs/unified_generation_history.json")
        print("📄 バックアップファイル: logs/generation_history_backup.json")
        print("📄 旧ファイル: logs/generation_history_old.json")

    except Exception as e:
        print(f"❌ 移行プロセスでエラーが発生しました: {e}")
        import traceback
        traceback.print_exc()
utils/test_download_hugginface_repo.py
ADDED
@@ -0,0 +1,52 @@
import os
import sys
import unittest
from unittest.mock import patch

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


class TestDownloadHuggingfaceRepo(unittest.TestCase):

    @patch.dict(os.environ, {"HF_TOKEN": "dummy-token"}, clear=True)
    @patch("utils.download_hugginface_repo.StableDiffusionXLPipeline")
    @patch("utils.download_hugginface_repo.login")
    def test_huggingface_login_called(self, mock_login, mock_pipeline):
        import utils.download_hugginface_repo as target

        target.main()

        mock_login.assert_called_once_with(token="dummy-token")
        mock_pipeline.from_pretrained.assert_called_once()

    @patch("utils.download_hugginface_repo.StableDiffusionXLPipeline")
    @patch.dict(os.environ, {}, clear=True)
    def test_stable_diffusion_pipeline_creation(self, mock_pipeline):
        import utils.download_hugginface_repo as target

        target.main()

        mock_pipeline.from_pretrained.assert_called_once_with(
            "aipicasso/emix-0-5",
            torch_dtype=target.torch.float16,
            use_safetensors=True
        )

    @patch("utils.download_hugginface_repo.StableDiffusionXLPipeline")
    @patch("utils.download_hugginface_repo.login")
    @patch.dict(os.environ, {"HF_TOKEN": "dummy-token"}, clear=True)
    def test_full_workflow(self, mock_login, mock_pipeline):
        import utils.download_hugginface_repo as target

        target.main()

        mock_login.assert_called_once_with(token="dummy-token")
        mock_pipeline.from_pretrained.assert_called_once_with(
            "aipicasso/emix-0-5",
            torch_dtype=target.torch.float16,
            use_safetensors=True
        )


if __name__ == "__main__":
    unittest.main()
utils/test_high_quality_generation.py
ADDED
@@ -0,0 +1,556 @@
"""
高品質画像生成テスト用スクリプト
aipicasso/jagirlモデルを使用した高品質なアニメ画像生成のテスト

=== パラメータ詳細解説 ===

🔧 Sampling Method (スケジューラー):
- DDIM: 高品質、少ないステップで良い結果
- DPMSolver: 高速で高品質(推奨)
- Euler: 安定した結果
- EulerA: より多様な結果
- LMS: 古典的手法
- PNDM: デフォルト

📊 Sampling Steps (num_inference_steps): 10-150
- 少ない (10-20): 高速だが品質低め
- 中程度 (25-40): バランス良好(推奨)
- 多い (50-150): 高品質だが時間かかる

🎲 Seed (generator):
- 同じシード = 同じ画像(再現性)
- ランダムシード = バリエーション

⚙️ CFG Scale (guidance_scale): 1-20
- 低い (3-5): プロンプトに緩く従う、自然
- 中程度 (7-10): バランス良好(推奨)
- 高い (12-20): プロンプトに厳密に従う

🔧 その他:
- eta: ノイズ制御 (0.0-1.0)
- width/height: 画像サイズ (64の倍数推奨)
"""

import torch
from diffusers import StableDiffusionXLPipeline
from huggingface_hub import login
import os
from datetime import datetime
import random
import json
import logging
from unified_logger import get_logger

def setup_model():
    """モデルのセットアップと最適化"""
    print("🔧 モデルをセットアップ中...")

    # HuggingFace認証(必要に応じてトークンを設定)
    # login(token="your_token_here")

    # モデルロード(SDXL用設定)
    try:
        print("🔄 SDXL設定でモデルロード中...")
        pipe = StableDiffusionXLPipeline.from_pretrained(
            "aipicasso/jagirl",
            torch_dtype=torch.float16,
            use_safetensors=True,
            variant="fp16"
        )
    except Exception as e:
        print(f"⚠️ FP16でのロードに失敗、標準設定で再試行: {e}")
        try:
            pipe = StableDiffusionXLPipeline.from_pretrained("aipicasso/jagirl")
        except Exception as e2:
            print(f"❌ モデルロードに失敗: {e2}")
            return None

    # GPUに手動で移動
    if torch.cuda.is_available():
        pipe = pipe.to("cuda")
        print(f"✅ モデルをGPUに移動: {torch.cuda.get_device_name(0)}")

        # GPU移動後にFP16に変換
        try:
            pipe = pipe.to(dtype=torch.float16)
            print("✅ FP16モードに変換")
        except Exception:
            print("⚠️ FP16変換をスキップ、FP32で継続")
    else:
        print("❌ CUDAが利用できません")
        return None

    # パフォーマンス最適化
    try:
        pipe.enable_xformers_memory_efficient_attention()
        print("✅ xFormers有効化")
    except Exception:
        print("⚠️ xFormersが利用できません")

    # CPU Offloadは無効化(全てGPUで処理)
    # try:
    #     pipe.enable_model_cpu_offload()
    #     print("✅ CPU Offload有効化")
    # except Exception:
    #     print("⚠️ CPU Offloadが利用できません")
    print("🎯 GPU専用モードで動作")

    print(f"🎯 モデルセットアップ完了 - デバイス: {pipe.device}")
    return pipe

def check_gpu_status():
    """GPU状態の確認"""
    if torch.cuda.is_available():
        gpu_name = torch.cuda.get_device_name(0)
        memory_allocated = torch.cuda.memory_allocated(0) / 1024**3
        memory_total = torch.cuda.get_device_properties(0).total_memory / 1024**3

        print(f"🖥️ GPU: {gpu_name}")
        print(f"💾 VRAM使用量: {memory_allocated:.1f}GB / {memory_total:.1f}GB")
        return True
    else:
        print("❌ CUDAが利用できません")
        return False

# 古いログ機能は統一ログ機能に統合されました

def setup_scheduler(pipe, scheduler_type):
    """
    スケジューラーの設定(SDXL対応)

    各スケジューラーの特徴:
    - DDIM: 高品質、少ないステップで良い結果(20-30ステップ推奨)
    - DPMSolver: 高速で高品質、最もバランスが良い(15-25ステップ推奨)
    - Euler: 安定した結果、予測しやすい(30-50ステップ推奨)
    - EulerA: より多様で芸術的な結果(25-40ステップ推奨)
    - LMS: 古典的手法、安定しているが遅い(50+ステップ推奨)
    - PNDM: デフォルト、標準的な品質(30-50ステップ推奨)
    """
    from diffusers import (
        DDIMScheduler, DPMSolverMultistepScheduler,
        EulerDiscreteScheduler, EulerAncestralDiscreteScheduler,
        LMSDiscreteScheduler, PNDMScheduler
    )

    schedulers = {
        "DDIM": DDIMScheduler,
        "DPMSolver": DPMSolverMultistepScheduler,
        "Euler": EulerDiscreteScheduler,
        "EulerA": EulerAncestralDiscreteScheduler,
        "PNDM": PNDMScheduler
    }

    # LMSはscipyが必要なため、利用可能な場合のみ追加
    try:
        schedulers["LMS"] = LMSDiscreteScheduler
    except Exception:
        print("⚠️ LMSスケジューラーは利用できません (scipyが必要)")

    if scheduler_type in schedulers:
        try:
            return schedulers[scheduler_type].from_config(pipe.scheduler.config)
        except ImportError as e:
            print(f"⚠️ {scheduler_type}スケジューラーが利用できません: {e}")
            return pipe.scheduler
    return pipe.scheduler

def generate_high_quality_image(pipe, prompt, negative_prompt="", seed=None, output_dir="outputs",
                                num_inference_steps=50, guidance_scale=7.5, width=1024, height=1024,
                                scheduler_type="default", eta=0.0):
    """
    高品質画像生成(詳細パラメータ対応)

    Args:
        pipe: StableDiffusionPipeline
        prompt: プロンプト
        negative_prompt: ネガティブプロンプト
        seed: シード値
        output_dir: 出力ディレクトリ
        num_inference_steps: サンプリングステップ数 (10-150)
        guidance_scale: CFG Scale/ガイダンス強度 (1-20)
        width, height: 画像サイズ
        scheduler_type: スケジューラータイプ (DDIM, DPMSolver, Euler, EulerA, LMS, PNDM)
        eta: ノイズ制御 (0.0-1.0)
    """

    # 出力ディレクトリ作成
    os.makedirs(output_dir, exist_ok=True)

    # シード設定
    if seed is None:
        seed = random.randint(0, 1000000)

    generator = torch.Generator(device="cuda").manual_seed(seed)

    # スケジューラー設定
    original_scheduler = pipe.scheduler
    if scheduler_type != "default":
        pipe.scheduler = setup_scheduler(pipe, scheduler_type)

    print(f"🎨 画像生成開始")
    print(f"📝 プロンプト: {prompt}")
    print(f"🔧 設定: steps={num_inference_steps}, cfg={guidance_scale}, seed={seed}")
    print(f"サイズ: {width}x{height}, スケジューラー: {scheduler_type}")

    # 高品質設定での画像生成(時間測定付き)
    import time
    start_time = time.time()

    try:
        image = pipe(
            prompt=prompt,
            negative_prompt=negative_prompt,
            num_inference_steps=num_inference_steps,  # Sampling Steps
            guidance_scale=guidance_scale,  # CFG Scale (ガイダンス強度)
            width=width,
            height=height,
            generator=generator  # Seed制御
        ).images[0]
    except Exception as e:
        print(f"⚠️ 生成エラー、シンプルな設定で再試行: {e}")
        # 最小限のパラメータで再試行
        image = pipe(
            prompt=prompt,
            num_inference_steps=20,
            guidance_scale=7.5,
            generator=generator
        ).images[0]

    end_time = time.time()
    execution_time = end_time - start_time

    # スケジューラーを元に戻す
    pipe.scheduler = original_scheduler

    # ファイル名生成
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"jagirl_{timestamp}_seed{seed}.png"
    filepath = os.path.join(output_dir, filename)

    # 画像保存
    image.save(filepath)
    print(f"💾 画像保存: {filepath}")

    # VRAM使用量取得
    vram_usage = torch.cuda.memory_allocated(0) / 1024**3 if torch.cuda.is_available() else 0

    # 統一ログ機能で詳細記録
    logger = get_logger()

    # GPU最適化設定の記録
    additional_info = {
        "gpu_optimizations": {
            "fp16_enabled": hasattr(pipe, 'dtype') and pipe.dtype == torch.float16,
            "xformers_enabled": getattr(pipe, '_use_xformers', False),
            "cpu_offload_enabled": getattr(pipe, '_cpu_offload', False)
        },
        "generation_mode": "high_quality_test"
    }

    params = {
        "negative_prompt": negative_prompt,
        "scheduler_type": scheduler_type,
        "num_inference_steps": num_inference_steps,
        "guidance_scale": guidance_scale,
        "width": width,
        "height": height,
        "seed": seed,
        "eta": eta,
        "torch_dtype": str(pipe.dtype) if hasattr(pipe, 'dtype') else "unknown",
        "enable_xformers": getattr(pipe, '_use_xformers', False),
        "enable_cpu_offload": getattr(pipe, '_cpu_offload', False)
    }

    generation_id = logger.log_generation(
        prompt=prompt,
        negative_prompt=negative_prompt,
        parameters=params,
        output_filepath=filepath,
        execution_time=execution_time,
        additional_info=additional_info
    )

    print(f"📋 ログ記録完了: {generation_id}")

    return image, filepath

def test_various_prompts(pipe):
    """様々なプロンプトでテスト生成"""

    # 日本女性(顔つきにフォーカス)向けプロンプト例を追加
    test_prompts = [
        {
            "prompt": "日本人女性の顔立ち, 美しい顔, 繊細な瞳, 自然な黒髪, 柔らかな表情, リアリスティックな肌質",
            "negative": "low quality, blurry, ugly, deformed, western features, caucasian",
            "description": "基本的な日本女性の顔つき (SDXL)",
            "scheduler": "default",
            "steps": 25,
            "cfg": 7.0,
            "width": 1024,
            "height": 1024
        },
        {
            "prompt": "日本人女性, 上品で優しい顔立ち, アーモンド形の瞳, ストレート黒髪, 控えめなメイク",
            "negative": "low quality, blurry, ugly, deformed, western features, heavy makeup",
            "description": "上品な日本女性 (DPMSolver SDXL)",
            "scheduler": "DPMSolver",
            "steps": 20,
            "cfg": 7.5,
            "width": 1024,
            "height": 1024
        },
        {
            "prompt": "日本人の若い女性, 愛らしい笑顔, 小さめの鼻, 優しい茶色の瞳, 肩にかかる髪",
            "negative": "low quality, blurry, ugly, deformed, aged, wrinkles, western features",
            "description": "若々しい日本女性 (Euler SDXL)",
            "scheduler": "Euler",
            "steps": 30,
            "cfg": 6.0,
            "width": 1024,
            "height": 1024
        },
        {
            "prompt": "落ち着いた日本人女性, 大人の魅力, きちんとした顔立ち, はっきりした頬骨, 長い黒髪, プロフェッショナルな雰囲気",
            "negative": "low quality, blurry, ugly, deformed, childish, western features",
            "description": "大人っぽい日本女性 (EulerA SDXL)",
            "scheduler": "EulerA",
            "steps": 25,
            "cfg": 8.0,
            "width": 1024,
            "height": 1024
        }
    ]

    print(f"\n🎯 {len(test_prompts)}パターンのテスト生成を開始...")

    results = []
    for i, test_case in enumerate(test_prompts, 1):
        print(f"\n--- テスト {i}/{len(test_prompts)}: {test_case['description']} ---")

        image, filepath = generate_high_quality_image(
            pipe=pipe,
            prompt=test_case["prompt"],
            negative_prompt=test_case["negative"],
            seed=42 + i,  # 再現可能なシード
            num_inference_steps=test_case["steps"],
            guidance_scale=test_case["cfg"],
            scheduler_type=test_case["scheduler"]
        )

        results.append({
            "description": test_case["description"],
            "filepath": filepath,
            "prompt": test_case["prompt"],
            "scheduler": test_case["scheduler"],
            "steps": test_case["steps"],
            "cfg": test_case["cfg"]
        })

        # VRAM使用量チェック
        if torch.cuda.is_available():
            memory_used = torch.cuda.memory_allocated(0) / 1024**3
            print(f"📊 現在のVRAM使用量: {memory_used:.1f}GB")

    return results

def test_scheduler_comparison(pipe):
    """各種スケジューラーの比較テスト"""
    print("\n🔄 スケジューラー比較テスト開始...")

    base_prompt = "anime girl, beautiful face, detailed eyes, colorful hair"
    negative_prompt = "low quality, blurry, ugly"

    schedulers = ["default", "DDIM", "DPMSolver", "Euler", "EulerA", "LMS"]
    results = []

    for scheduler in schedulers:
        print(f"\n🔧 テスト中: {scheduler}")

        image, filepath = generate_high_quality_image(
            pipe=pipe,
            prompt=base_prompt,
            negative_prompt=negative_prompt,
            seed=12345,  # 固定シードで比較
            num_inference_steps=30,
            guidance_scale=7.5,
            scheduler_type=scheduler
        )

        results.append({
            "scheduler": scheduler,
            "filepath": filepath
        })

    print(f"\n✅ スケジューラー比較完了: {len(results)} パターン")
    return results

def test_parameter_variations(pipe):
    """パラメータバリエーションテスト"""
    print("\n⚙️ パラメータバリエーションテスト開始...")

    base_prompt = "anime girl, school uniform, detailed"
    negative_prompt = "low quality, blurry"

    # CFG Scale テスト
    cfg_tests = [3.0, 5.0, 7.5, 10.0, 15.0]
    print("\n📊 CFG Scale テスト:")

    for cfg in cfg_tests:
        print(f"  CFG: {cfg}")
        image, filepath = generate_high_quality_image(
            pipe=pipe,
            prompt=base_prompt,
            negative_prompt=negative_prompt,
            seed=54321,
            guidance_scale=cfg,
            num_inference_steps=25
        )

    # Steps テスト
    steps_tests = [10, 20, 30, 50, 80]
    print("\n🔢 Steps テスト:")

    for steps in steps_tests:
        print(f"  Steps: {steps}")
        image, filepath = generate_high_quality_image(
            pipe=pipe,
            prompt=base_prompt,
            negative_prompt=negative_prompt,
            seed=98765,
            num_inference_steps=steps,
            guidance_scale=7.5
        )

def benchmark_generation_speed(pipe):
    """生成速度のベンチマーク"""
    print("\n⚡ 生成速度ベンチマーク開始...")

    prompt = "anime girl, beautiful face, detailed"
    negative_prompt = "low quality, blurry"

    import time

    # ウォームアップ
    print("🔥 ウォームアップ中...")
    _ = pipe(prompt, num_inference_steps=10, width=256, height=256)

    # ベンチマーク実行
    step_counts = [10, 20, 30, 50]

    for steps in step_counts:
        start_time = time.time()

        _ = pipe(
            prompt=prompt,
            negative_prompt=negative_prompt,
            num_inference_steps=steps,
            width=512,
            height=512
        )

        elapsed_time = time.time() - start_time
        print(f"📈 {steps}ステップ: {elapsed_time:.2f}秒")

    # VRAM使用量クリア
    torch.cuda.empty_cache()

def simple_test(pipe):
    """シンプルなテスト生成(SDXL対応)"""
    print("\n🔍 シンプルテスト開始...")

    try:
        # SDXL用最小限のパラメータでテスト(日本女性特化)
        print("🎨 SDXL最小限設定で生成中(日本女性の顔つき)...")
        image = pipe(
            "japanese woman, beautiful face, natural features",
            negative_prompt="western features, caucasian, low quality",
            width=1024,  # SDXL推奨サイズ
            height=1024,  # SDXL推奨サイズ
            num_inference_steps=20,
            guidance_scale=7.0
        ).images[0]

        # 保存
        os.makedirs("outputs", exist_ok=True)
        filepath = "outputs/simple_test_sdxl.png"
        image.save(filepath)
        print(f"✅ シンプルテスト成功: {filepath}")
        return True
    except Exception as e:
        print(f"❌ シンプルテスト失敗: {e}")
        import traceback
        traceback.print_exc()
        return False

def main():
    """メイン実行関数"""
    print("🚀 高品質画像生成テストを開始...")
    print("=" * 50)

    # 統一ログ機能の初期化
    logger = get_logger()
    print("📋 統一ログ機能を初期化しました")
    print(f"📄 ログファイル: {logger.json_log_file}")

    # 統計情報の表示
# 統計情報の表示
|
| 498 |
+
stats = logger.get_statistics()
|
| 499 |
+
print(f"📊 これまでの生成総数: {stats.get('total_generations', 0)}枚")
|
| 500 |
+
|
| 501 |
+
# GPU状態確認
|
| 502 |
+
if not check_gpu_status():
|
| 503 |
+
print("❌ GPU環境が必要です")
|
| 504 |
+
return
|
| 505 |
+
|
| 506 |
+
try:
|
| 507 |
+
# モデルセットアップ
|
| 508 |
+
pipe = setup_model()
|
| 509 |
+
|
| 510 |
+
if pipe is None:
|
| 511 |
+
print("❌ モデルセットアップに失敗しました")
|
| 512 |
+
return
|
| 513 |
+
|
| 514 |
+
# まずシンプルテスト
|
| 515 |
+
if not simple_test(pipe):
|
| 516 |
+
print("❌ シンプルテストに失敗、処理を中止します")
|
| 517 |
+
return
|
| 518 |
+
|
| 519 |
+
# テスト生成実行
|
| 520 |
+
results = test_various_prompts(pipe)
|
| 521 |
+
|
| 522 |
+
# スケジューラー比較テスト
|
| 523 |
+
scheduler_results = test_scheduler_comparison(pipe)
|
| 524 |
+
|
| 525 |
+
# パラメータバリエーションテスト
|
| 526 |
+
test_parameter_variations(pipe)
|
| 527 |
+
|
| 528 |
+
# 速度ベンチマーク
|
| 529 |
+
benchmark_generation_speed(pipe)
|
| 530 |
+
|
| 531 |
+
# 結果サマリー
|
| 532 |
+
print("\n" + "=" * 50)
|
| 533 |
+
print("✅ テスト完了! 生成された画像:")
|
| 534 |
+
for result in results:
|
| 535 |
+
print(f" 📄 {result['description']}: {result['filepath']}")
|
| 536 |
+
|
| 537 |
+
print(f"\n🎯 合計 {len(results)} 枚の画像を生成しました")
|
| 538 |
+
|
| 539 |
+
# 最終VRAM使用量
|
| 540 |
+
if torch.cuda.is_available():
|
| 541 |
+
final_memory = torch.cuda.memory_allocated(0) / 1024**3
|
| 542 |
+
print(f"💾 最終VRAM使用量: {final_memory:.1f}GB")
|
| 543 |
+
|
| 544 |
+
except Exception as e:
|
| 545 |
+
print(f"❌ エラーが発生しました: {str(e)}")
|
| 546 |
+
import traceback
|
| 547 |
+
traceback.print_exc()
|
| 548 |
+
|
| 549 |
+
finally:
|
| 550 |
+
# メモリクリーンアップ
|
| 551 |
+
if torch.cuda.is_available():
|
| 552 |
+
torch.cuda.empty_cache()
|
| 553 |
+
print("🧹 VRAMキャッシュをクリアしました")
|
| 554 |
+
|
| 555 |
+
if __name__ == "__main__":
|
| 556 |
+
main()
|
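For reference, the timing loop in `benchmark_generation_speed` above can be exercised without a GPU by swapping in a stub pipeline. This is a minimal sketch, not part of the project: `FakePipe` and its per-step sleep are hypothetical stand-ins for a real diffusers pipeline, used only to show the structure of the benchmark.

```python
import time

class FakePipe:
    """Stub standing in for a diffusers pipeline (hypothetical; timing the loop only)."""
    def __call__(self, prompt=None, negative_prompt=None,
                 num_inference_steps=20, width=512, height=512):
        # Pretend each inference step costs about 1 ms.
        time.sleep(0.001 * num_inference_steps)
        return None

def benchmark(pipe, step_counts=(10, 20, 30)):
    """Measure wall-clock time per step count, mirroring benchmark_generation_speed."""
    timings = {}
    for steps in step_counts:
        start = time.time()
        pipe(prompt="anime girl", negative_prompt="low quality",
             num_inference_steps=steps, width=512, height=512)
        timings[steps] = time.time() - start
    return timings

timings = benchmark(FakePipe())
for steps, elapsed in timings.items():
    print(f"{steps} steps: {elapsed:.3f}s")
```

Replacing `FakePipe` with the real SDXL pipeline recovers the original benchmark behavior.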
uv.lock
ADDED
The diff for this file is too large to render. See raw diff
仮想環境への入り方.txt
ADDED

```
I:\jagirl_ui\.venv\Scripts\Activate.ps1
```