feat: Add dynamic model count display

Changes:
- Add TOTAL_MODEL_COUNT, PUBLIC_MODEL_COUNT, GATED_MODEL_COUNT variables
- Calculate model counts dynamically from MODEL_CONFIGS
- Replace the hardcoded "10가지" ("10 kinds") with the dynamic {TOTAL_MODEL_COUNT}
- Show breakdown: "13 models (10 Public + 3 Gated)"
Dynamic Calculation:
- TOTAL_MODEL_COUNT = len(MODEL_CONFIGS)
- PUBLIC_MODEL_COUNT = count of models without the 🔒 icon
- GATED_MODEL_COUNT = TOTAL_MODEL_COUNT - PUBLIC_MODEL_COUNT
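
The calculation described above can be sketched as follows. The `cfg["MODEL_CONFIG"]["name"]` key path and the 🔒 marker convention come from the diff below; the model entries here are illustrative, not the Space's actual list:

```python
# Illustrative MODEL_CONFIGS entries; gated models carry a 🔒 marker in their display name.
MODEL_CONFIGS = [
    {"MODEL_CONFIG": {"name": "EXAONE-3.5-7.8B-Instruct"}},
    {"MODEL_CONFIG": {"name": "🔒 Llama-Ko-8B"}},  # gated: requires HF access approval
    {"MODEL_CONFIG": {"name": "Qwen2.5-7B-Instruct"}},
]

# Derive all three counts from the list itself, so they never go stale.
TOTAL_MODEL_COUNT = len(MODEL_CONFIGS)
PUBLIC_MODEL_COUNT = sum(1 for cfg in MODEL_CONFIGS if "🔒" not in cfg["MODEL_CONFIG"]["name"])
GATED_MODEL_COUNT = TOTAL_MODEL_COUNT - PUBLIC_MODEL_COUNT

print(f"{TOTAL_MODEL_COUNT} models ({PUBLIC_MODEL_COUNT} Public + {GATED_MODEL_COUNT} Gated)")
# → 3 models (2 Public + 1 Gated)
```

Adding or removing an entry in MODEL_CONFIGS automatically updates every header and footer string that interpolates these variables.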
Updated Locations:
- Header (ZeroGPU mode): Shows total and breakdown
- Header (CPU Upgrade mode): Shows total and breakdown
- Footer (ZeroGPU mode): Shows total model count
- Footer (CPU Upgrade mode): Shows total model count
Benefits:
- Auto-updates when models are added/removed
- No manual text updates needed
- Prevents outdated information
- Shows clear Public vs Gated distinction
Model Analysis:
- Reviewed suggested models (DialoGPT, GPT-2, Llama-2-Ko, etc.)
- Current list already contains superior alternatives
- No additions needed; the current selection is already optimal
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
@@ -33,8 +33,8 @@ if HF_TOKEN:
 else:
     print("⚠️ HF_TOKEN not found in environment - some models may not be accessible")
 
-# Model configurations
-# Note: Gated models require HF access approval at https://huggingface.co/[model-name]
+# Model configurations
+# Note: Gated models (marked with 🔒) require HF access approval at https://huggingface.co/[model-name]
 MODEL_CONFIGS = [
     {
         "MODEL_NAME": "LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct",
@@ -137,6 +137,11 @@ loaded_model_name = None  # Track which model is currently loaded
 model = None
 tokenizer = None
 
+# Dynamic model count
+TOTAL_MODEL_COUNT = len(MODEL_CONFIGS)
+PUBLIC_MODEL_COUNT = sum(1 for cfg in MODEL_CONFIGS if "🔒" not in cfg["MODEL_CONFIG"]["name"])
+GATED_MODEL_COUNT = TOTAL_MODEL_COUNT - PUBLIC_MODEL_COUNT
+
 
 def check_model_cached(model_name):
     """Check if model is already downloaded in HF cache"""
@@ -348,7 +353,7 @@ with gr.Blocks(title="🤖 Multi-Model Chatbot") as demo:
 
 **Features**:
 - ⚡ Fast responses with GPU acceleration (3-5 s)
-- 🎯
+- 🎯 {TOTAL_MODEL_COUNT} Korean-optimized models to choose from ({PUBLIC_MODEL_COUNT} Public + {GATED_MODEL_COUNT} Gated)
 - 🔄 Automatic reloading when switching models
 - 💰 25 minutes of free use per day with a PRO subscription
 """
@@ -359,7 +364,7 @@ with gr.Blocks(title="🤖 Multi-Model Chatbot") as demo:
 **Hardware**: CPU Upgrade (8 vCPU / 32 GB RAM)
 
 **Features**:
-- 🎯
+- 🎯 {TOTAL_MODEL_COUNT} Korean-optimized models to choose from ({PUBLIC_MODEL_COUNT} Public + {GATED_MODEL_COUNT} Gated)
 - 🔄 Automatic reloading when switching models
 - ⏳ Responses are somewhat slow in a CPU environment (30 s to 1 min)
 - 💰 $0.03 per hour (about $22 per month)
@@ -435,10 +440,10 @@ with gr.Blocks(title="🤖 Multi-Model Chatbot") as demo:
 
     # Dynamic footer based on hardware
     if ZEROGPU_AVAILABLE:
-        footer = """
+        footer = f"""
 ---
 **Notes (ZeroGPU mode)**:
-- 🤖
+- 🤖 Choose from {TOTAL_MODEL_COUNT} models (select from the dropdown)
 - ⚡ ZeroGPU allocates a GPU automatically on request
 - 💰 PRO subscribers get 25 minutes of free use per day
 - 🔄 Chat history is reset when switching models
@@ -450,10 +455,10 @@ with gr.Blocks(title="🤖 Multi-Model Chatbot") as demo:
 - "What is the capital of Korea?"
 """
     else:
-        footer = """
+        footer = f"""
 ---
 **Notes (CPU Upgrade mode)**:
-- 🤖
+- 🤖 Choose from {TOTAL_MODEL_COUNT} models (select from the dropdown)
 - 🔄 Chat history is reset when switching models
 - ⏳ Responses are somewhat slow in a CPU environment (30 s to 1 min)
 - ⏱️ The first response includes model loading time (~1-2 min)