Upload 3 files

- README.md +75 -14
- app.py +243 -0
- requirements.txt +6 -0
README.md
CHANGED
@@ -1,14 +1,75 @@
# Scribe (MCP Docx Tool)

Project for the MCP 1st Birthday Hackathon that lets you chat with multiple AI providers and export the conversation to a `.docx` file.

## Features

- Gradio web UI.
- Providers supported: OpenAI, Gemini, SambaNova, Nebius, and Ollama.
- Export of the full conversation to DOCX.
- Runs as an MCP server (`mcp_server=True`) for integrations.
- Goal: help competitors use sponsor credits on their own accounts and run tests during the hackathon.

## Sponsors 🤗

Special thanks to the MCP 1st Birthday Hackathon sponsors:

- OpenAI ✨
- Google Gemini 🌟
- SambaNova 🚀
- Nebius ⚡

> Note: Ollama isn't a sponsor but is supported for local usage.

<div style="display:flex; gap:16px; flex-wrap:wrap; align-items:center;">
  <img src="assets/openai.png" alt="OpenAI" title="OpenAI" style="height:48px;">
  <img src="assets/gemini.png" alt="Google Gemini" title="Google Gemini" style="height:48px;">
  <img src="assets/sambanova.png" alt="SambaNova" title="SambaNova" style="height:48px;">
  <img src="assets/nebius.png" alt="Nebius" title="Nebius" style="height:48px;">
</div>
<p style="font-size:0.9em; color:#666;">Place images in <code>./assets</code> and adjust names if needed.</p>

## Requirements

- Python 3.10+
- Dependencies in `requirements.txt`.

## Installation

1. Create and activate a virtual environment:
   - Windows:
     - `python -m venv .venv`
     - `.venv\Scripts\activate`
   - Linux/Mac:
     - `python -m venv .venv`
     - `source .venv/bin/activate`
2. Install dependencies:
   - `pip install -r requirements.txt`
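A quick sanity check that the environment is ready (a hypothetical one-liner, assuming the pinned dependencies installed cleanly):

```python
# Fails fast if any dependency is missing; prints the installed Gradio version.
import gradio, docx, openai, google.generativeai, requests, sambanova
print(gradio.__version__)
```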
## Provider Configuration

- OpenAI: API Key (`sk-...`) and model (e.g., `gpt-4o`).
- Gemini: API Key and model (e.g., `gemini-1.5-flash`).
- SambaNova: install `sambanova`, then provide an API Key and an exact model name (e.g., `Meta-Llama-3.1-8B-Instruct`).
- Nebius: API Key and model (e.g., `openai/gpt-oss-120b`).
- Ollama: no API Key required; leave the model empty for automatic local selection.
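If you prefer not to paste keys into the UI from a plain-text note, keep them in environment variables and paste from there. A minimal sketch (the variable names are illustrative; the app itself only uses the key typed into the UI):

```python
import os

# Illustrative names; app.py does not read these automatically.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")
GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY", "")
print("OpenAI key set:", bool(OPENAI_API_KEY))
```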
## Usage

1. Run the app:
   - `python app.py`
2. In the UI:
   - Select a provider.
   - Enter an API Key and model when applicable.
   - Type your message and press "Send".
   - Press "Download Scribe .docx" to export the conversation.
3. The generated file will appear in the download component.
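Since the app launches with `mcp_server=True`, its functions are also exposed as MCP tools. A minimal client sketch using the `mcp` Python SDK (the `/gradio_api/mcp/sse` path follows Gradio's documented convention; the local URL and the derived tool names are assumptions):

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Assumes the app is running locally on Gradio's default port 7860.
    async with sse_client("http://127.0.0.1:7860/gradio_api/mcp/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # tool names Gradio derived from the app

asyncio.run(main())
```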
## Hackathon Prep

- Event rules: https://huggingface.co/MCP-1st-Birthday
- Demo:
  - End-to-end: chat with at least one provider and download the DOCX.
  - Explain the value: automated documentation and traceability.
  - Include simple metrics: time saved and number of documents generated.
- Thank the sponsors in your presentation and show how Scribe helps use their credits to evaluate models and flows.

## Structure

- `app.py`: UI and chat/export logic.
- `requirements.txt`: project dependencies.
- `README.md`: project documentation.

## License

MIT (adjust if needed).
app.py
ADDED
@@ -0,0 +1,243 @@
```python
import gradio as gr
from docx import Document
import openai
import google.generativeai as genai
import requests
from tempfile import NamedTemporaryFile
from typing import Any, Dict, List, Optional, Tuple
from sambanova import SambaNova  # hard dependency; no try/except

def _flatten_text(value: Any) -> str:
    """Convert nested structures (dict/list/tuple) into a single string."""
    if value is None:
        return ""
    if isinstance(value, str):
        return value
    if isinstance(value, dict):
        for key in ("text", "content", "parts"):
            if key in value:
                return _flatten_text(value[key])
        return " ".join(filter(None, (_flatten_text(v) for v in value.values())))
    if isinstance(value, (list, tuple)):
        return " ".join(filter(None, (_flatten_text(v) for v in value)))
    return str(value)

def _normalize_messages(history: Optional[List[Any]]) -> List[Dict[str, str]]:
    """Normalize chat history into a list of {'role', 'content'} dicts."""
    msgs: List[Dict[str, str]] = []
    for h in (history or []):
        if isinstance(h, dict) and "role" in h and "content" in h:
            if h["role"] in ("user", "assistant"):
                msgs.append({"role": h["role"], "content": _flatten_text(h["content"])})
        elif isinstance(h, (list, tuple)) and len(h) >= 2:
            u, a = h[0], h[1]
            if u is not None:
                msgs.append({"role": "user", "content": _flatten_text(u)})
            if a is not None:
                msgs.append({"role": "assistant", "content": _flatten_text(a)})
    return msgs

def _pairs_from_history(history: Optional[List[Any]]) -> List[Tuple[str, str]]:
    """Convert normalized messages into (user, assistant) pairs."""
    pairs: List[Tuple[str, str]] = []
    pending_user = None
    for h in _normalize_messages(history):
        if h["role"] == "user":
            if pending_user is not None:
                pairs.append((pending_user, ""))  # user without an assistant response
            pending_user = h["content"]
        elif h["role"] == "assistant":
            if pending_user is None:
                pairs.append(("", h["content"]))  # assistant without a prior user turn
            else:
                pairs.append((pending_user, h["content"]))
                pending_user = None
    if pending_user is not None:
        pairs.append((pending_user, ""))  # trailing user turn
    return pairs

def _msgs(history: Optional[List[Any]], user_msg: str) -> List[Dict[str, str]]:
    """Build a messages array with a system prompt."""
    m = [{"role": "system", "content": "You are a helpful assistant."}]
    m += _normalize_messages(history)
    m.append({"role": "user", "content": user_msg})
    return m

def guardar_conversacion(historia: Optional[List[Any]]) -> Optional[str]:
    """Generate a .docx file of the conversation and return its filepath."""
    try:
        doc = Document()
        doc.add_heading("Scribe Conversation", level=1)
        for idx, (u, a) in enumerate(_pairs_from_history(historia)):
            user_text = _flatten_text(u).strip()
            assistant_text = _flatten_text(a).strip()
            if idx:
                doc.add_paragraph("")
            p_user = doc.add_paragraph()
            p_user.add_run("User: ").bold = True
            p_user.add_run(user_text or "—")
            p_assistant = doc.add_paragraph()
            p_assistant.add_run("Assistant: ").bold = True
            p_assistant.add_run(assistant_text or "—")
        tmp = NamedTemporaryFile(delete=False, suffix=".docx", prefix="Scribe_")
        tmp.close()  # release the handle so doc.save can write to the path (needed on Windows)
        doc.save(tmp.name)
        return tmp.name
    except Exception:
        return None

def chat_response(message: str, history: Optional[List[Any]], provider: str, api_key: str, model: str) -> str:
    """Route the chat request to the selected provider and return the assistant text."""
    if provider != "Ollama" and not api_key:
        return "⚠️ Please enter an API Key to proceed."
    # Require an explicit model for all providers except Ollama
    if provider != "Ollama" and not (model or "").strip():
        return "⚠️ Please specify a model for the selected provider."
    try:
        if provider == "OpenAI":
            client = openai.OpenAI(api_key=api_key)
            r = client.chat.completions.create(model=model, messages=_msgs(history, message))
            return r.choices[0].message.content
        elif provider == "Gemini":
            genai.configure(api_key=api_key)
            mdl = genai.GenerativeModel(model)
            ctx = "System: You are a helpful assistant.\n"
            for u, a in _pairs_from_history(history):
                ctx += f"User: {u or ''}\nModel: {a or ''}\n"
            ctx += f"User: {message}\nModel:"
            out = mdl.generate_content(ctx)
            return getattr(out, "text", "") or "⚠️ Empty response from Gemini."
        elif provider == "Sambanova":
            client = SambaNova(api_key=api_key, base_url="https://api.sambanova.ai/v1")
            r = client.chat.completions.create(
                model=model,
                messages=_msgs(history, message),
                temperature=0.2,
                top_p=0.9,
            )
            return r.choices[0].message.content
        elif provider == "Nebius":
            client = openai.OpenAI(base_url="https://api.tokenfactory.nebius.com/v1/", api_key=api_key)
            r = client.chat.completions.create(model=model, messages=_msgs(history, message))
            return r.choices[0].message.content
        elif provider == "Ollama":
            base = "http://127.0.0.1:11434"
            mdl = (model or "").strip()
            if not mdl:
                # Auto-select the first locally installed model, falling back to llama3
                try:
                    r = requests.get(f"{base}/api/tags", timeout=5)
                    if r.status_code == 200:
                        data = r.json() if r.headers.get("Content-Type", "").startswith("application/json") else {}
                        tags = data.get("models", [])
                        mdl = tags[0]["name"] if tags else "llama3"
                    else:
                        mdl = "llama3"
                except Exception:
                    mdl = "llama3"
            resp = requests.post(
                f"{base}/v1/chat/completions",  # Ollama's OpenAI-compatible endpoint
                json={"model": mdl, "messages": _msgs(history, message), "stream": False},
                timeout=60,
            )
            if resp.status_code == 200:
                try:
                    data = resp.json()
                    return data["choices"][0]["message"]["content"]
                except Exception:
                    return "⚠️ Ollama returned invalid JSON."
            return f"⚠️ Ollama Error {resp.status_code}: {resp.text}"
        else:
            return "🚫 Provider not supported."
    except Exception as e:
        return f"⚠️ Error: {e}"

# --- Dynamic help in the UI ---
def _provider_help(p: str) -> str:
    """Return help text for the selected provider."""
    if p == "Sambanova":
        return (
            "Sambanova:\n"
            "- pip install sambanova\n"
            "- Get your API Key at sambanova.ai.\n"
            "- Specify the exact model name (e.g., Meta-Llama-3.1-8B-Instruct, Meta-Llama-3.1-70B-Instruct)."
        )
    if p == "Nebius":
        return (
            "Nebius:\n"
            "- Paste your Nebius API Key.\n"
            "- Base URL is preconfigured: https://api.tokenfactory.nebius.com/v1/\n"
            "- Specify a model (e.g., openai/gpt-oss-120b, openai/gpt-4o-mini)."
        )
    if p == "OpenAI":
        return (
            "OpenAI: enter your API Key and specify a model.\n"
            "Examples: gpt-4o-mini, gpt-4o, o4-mini, o3-mini."
        )
    if p == "Gemini":
        return (
            "Gemini: enter your API Key and specify a model.\n"
            "Examples: gemini-1.5-flash, gemini-1.5-pro, gemini-1.5-flash-8b."
        )
    if p == "Ollama":
        return "Ollama: no API Key required; leave the model empty to auto-select a local one (e.g., llama3, qwen2.5)."
    return ""

def _on_provider_change(p: str):
    """Update help text and the model placeholder based on the provider."""
    if p == "Sambanova":
        ph = "e.g. Meta-Llama-3.1-8B-Instruct"
    elif p == "Nebius":
        ph = "e.g. openai/gpt-oss-120b"
    elif p == "OpenAI":
        ph = "e.g. gpt-4o-mini"
    elif p == "Gemini":
        ph = "e.g. gemini-1.5-flash"
    else:
        ph = "(Ollama: leave empty for automatic)"
    return _provider_help(p), gr.update(placeholder=ph)

def handle_chat(message: str, history: Optional[List[Any]], provider: str, api_key: str, model: str):
    """Gradio handler to process a message and update the chat history."""
    reply = chat_response(message, history, provider, api_key, model)
    new_hist = _normalize_messages(history) + [
        {"role": "user", "content": message},
        {"role": "assistant", "content": str(reply)},
    ]
    return "", new_hist

with gr.Blocks(title="📝 Scribe") as demo:
    gr.Markdown("## 📝 Scribe\nChat and save your conversation to .docx")
    # Disclaimer about API key safety and best practices
    gr.Markdown(
        "Disclaimer: While this app takes reasonable steps to reduce risks related to API keys "
        "(e.g., not auto-filling secrets and using them only for requests you trigger), no application "
        "can fully prevent misuse. Follow these best practices:\n"
        "- Use environment variables or a secure secrets manager where possible.\n"
        "- Do not share or hard-code your API keys in source control.\n"
        "- Rotate keys periodically and revoke any key you suspect is compromised.\n"
        "- Restrict key permissions and scopes to the minimum needed.\n"
        "- Monitor usage and set rate limits/quotas where available.\n"
        "- Only run this app in trusted environments and networks."
    )
    with gr.Row():
        with gr.Column(scale=1):
            provider = gr.Dropdown(
                choices=["OpenAI", "Gemini", "Sambanova", "Nebius", "Ollama"],
                value="OpenAI",
                label="🌐 Service Provider",
            )
            api_key = gr.Textbox(label="🔑 API Key", type="password", placeholder="sk-...")  # do not auto-fill secrets
            model = gr.Textbox(label="🧠 Model", placeholder="e.g. gpt-4o-mini")  # matches the default provider (OpenAI)
            help_md = gr.Markdown(_provider_help("OpenAI"))
        with gr.Column(scale=3):
            chat = gr.Chatbot(label="💬 Scribe Chat", type="messages")  # history is kept in messages format
            msg = gr.Textbox(placeholder="✍️ Type your message and press Enter...")
            with gr.Row():
                send = gr.Button("🚀 Send", variant="primary")
                clear = gr.Button("🧹 Clear")
                download = gr.Button("⬇️ Download Scribe .docx")
            file_out = gr.File(label="📄 Scribe Generated file", interactive=False)
    send.click(handle_chat, inputs=[msg, chat, provider, api_key, model], outputs=[msg, chat])
    msg.submit(handle_chat, inputs=[msg, chat, provider, api_key, model], outputs=[msg, chat])
    clear.click(lambda: [], None, chat, queue=False)
    download.click(guardar_conversacion, inputs=[chat], outputs=[file_out])
    provider.change(_on_provider_change, inputs=[provider], outputs=[help_md, model])

demo.launch(mcp_server=True, allowed_paths=["."])
```
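As a quick check of the export path, the helpers can be exercised directly. A hypothetical smoke test (it assumes the `demo.launch(...)` call is moved behind an `if __name__ == "__main__":` guard, which app.py currently lacks, so that importing `app` doesn't start the UI):

```python
# Hypothetical smoke test; not part of app.py.
from app import _pairs_from_history, guardar_conversacion

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    ("What is MCP?", None),  # tuple-style entries are normalized too
]

print(_pairs_from_history(history))
# [('Hello', 'Hi! How can I help?'), ('What is MCP?', '')]

path = guardar_conversacion(history)
print(path)  # a temp path like .../Scribe_xxxx.docx, or None on failure
```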
requirements.txt
ADDED
@@ -0,0 +1,6 @@

```text
gradio[mcp]>=5.28.0  # assumption: mcp_server=True requires Gradio's MCP support, which 4.44.0 predates
python-docx==1.1.0
openai==1.55.3
google-generativeai==0.7.2
requests==2.32.3
sambanova==1.0.3
```